There are many differences between convolutional networks and the ventral visual streams of primates. For example, standard convolutional networks lack recurrent and lateral connections, cell dynamics, etc. However, their feedforward architectures are somewhat similar to the ventral stream, and warrant a more detailed comparison. A recent study found that the feedforward architecture of the visual cortex could be closely approximated as a convolutional network, but the resulting architecture differed from widely used deep networks in several ways. The same study also found, somewhat surprisingly, that training the ventral stream of this network for object recognition resulted in poor performance. This paper examines the performance of this network in more detail. In particular, I made a number of changes to the ventral-stream-based architecture, to make it more like a DenseNet, and tested performance at each step. I chose DenseNet because it has a high BrainScore, and because it has some cortex-like architectural features such as large in-degrees and long skip connections. Most of the changes (which made the cortex-like network more like DenseNet) improved performance. Further work is needed to better understand these results. One possibility is that details of the ventral-stream architecture may be ill-suited to feedforward computation, simple processing units, and/or backpropagation, which could suggest differences between the way high-performance deep networks and the brain approach core object recognition.

For most cortical areas, these are layers L2/3, L4, L5, and L6. For V1, these are L2/3(blob), L2/3(interblob), L4B, L4Cα, and L4Cβ. For the LGN, they are the parvo, magno, and koniocellular divisions.

An earlier study reported validation accuracy of 79% on CIFAR-10 for a similar ventral-stream sub-network that omitted connections with FLNe < 0.15 and was trained for 50 epochs with the Adam update algorithm. In the present study, networks were trained for 300 epochs, using SGD with momentum 0.9, starting with learning rate 0.1 and reducing it by 10x every 100 epochs. This resulted in validation accuracy of 84.59%. A standard DenseNet (github.com/kuangliu/pytorch-cifar) was trained using the same procedure, resulting in validation accuracy of 95.36%.

To understand the basis of this performance gap, I created hybrid networks with features of both the ventral-stream network (VSN) and DenseNet. The VSN has a wide range of kernel sizes, optimized to fill realistic receptive field sizes. In the first hybrid (H1), all kernel sizes of the VSN were set to 3x3. The VSN also has a wide range of sparsity, with some connections consisting mostly of zeros. In the second hybrid network (H2), in addition to using 3x3 kernels, I eliminated pixel-wise sparsity, and limited channel-wise sparsity so that at least half of the input channels were used in each connection. Thirdly (H3), I replaced each layer with a two-layer bottleneck module, specifically a 1x1-kernel layer followed by a 3x3 layer with four times fewer channels. The number of channels [...]

Most of these modifications improved performance, when applied cumulatively to make the ventral-stream network increasingly similar to a DenseNet (Table 2). In the monkey inferotemporal cortex data, mean selectivity is 3.5, and mean sparseness is 12.51. The ventral-stream model has much higher means, and DenseNet has much lower means. C. Representational dissimilarity, using a subset of images from [citation].
The plotted values are percentiles of one minus the Pearson correlations between responses to different stimuli. Monkey cell data shows relatively low values (high similarity) throughout the lower-right quarter of this matrix (spanning non-animal natural and artificial images), but neither of the deep networks does. [...] efficiently, or the connection pattern of the ventral stream may be better suited to extracting certain feature combinations in unsupervised learning than to communicating gradients through many layers.
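For concreteness, a representational dissimilarity matrix of the kind described above can be computed roughly as sketched below. This is a minimal illustration, not the author's analysis code; in particular, the percentile convention (ranking the off-diagonal entries of one minus the correlation matrix) is an assumption on my part, since the exact normalization is not spelled out here.

```python
import numpy as np
from scipy.stats import rankdata

def dissimilarity_percentiles(responses):
    """responses: array of shape (n_stimuli, n_units), one response vector per stimulus.

    Returns an (n_stimuli, n_stimuli) matrix whose off-diagonal entries are
    percentiles of one minus the Pearson correlation between responses to
    different stimuli, as in the plot discussed above.
    """
    n = responses.shape[0]
    dissim = 1.0 - np.corrcoef(responses)          # one minus Pearson correlation
    mask = ~np.eye(n, dtype=bool)                  # only pairs of different stimuli
    percentiles = 100.0 * rankdata(dissim[mask]) / mask.sum()
    out = np.zeros_like(dissim)
    out[mask] = percentiles
    return out
```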
SkegNmFUIS
An approximation of primate ventral stream as a convolutional network performs poorly on object recognition, and multiple architectural features contribute to this.
In reinforcement learning, it is common to let an agent interact with its environment for a fixed amount of time before resetting the environment and repeating the process in a series of episodes. The task that the agent has to learn can either be to maximize its performance over (i) that fixed amount of time, or (ii) an indefinite period where the time limit is only used during training. In this paper, we investigate theoretically how time limits could effectively be handled in each of the two cases. In the first one, we argue that the terminations due to time limits are in fact part of the environment, and propose to include a notion of the remaining time as part of the agent's input. In the second case, the time limits are not part of the environment and are only used to facilitate learning. We argue that such terminations should not be treated as environmental ones and propose a method, specific to value-based algorithms, that incorporates this insight by continuing to bootstrap at the end of each partial episode. To illustrate the significance of our proposals, we perform several experiments on a range of environments from simple few-state transition graphs to complex control tasks, including novel and standard benchmark domains. Our results show that the proposed methods improve the performance and stability of existing reinforcement learning algorithms.

The reinforcement learning framework BID19 BID2 BID21 BID10 involves a sequential interaction between an agent and its environment. At every time step t, the agent receives a representation S_t of the environment's state, selects an action A_t that is executed in the environment, which in turn provides a representation S_{t+1} of the successor state and a reward signal R_{t+1}. An individual reward received by the agent does not directly indicate the quality of its latest action, as some rewards may indeed be the consequence of a series of actions taken far in advance. Thus, the goal of the agent is to learn a good policy by maximizing the discounted sum of future rewards, also known as the return:

G^γ_t = R_{t+1} + γ R_{t+2} + γ^2 R_{t+3} + ... = Σ_{k=t}^{∞} γ^{k−t} R_{k+1}.   (1)

A discount factor 0 ≤ γ < 1 is necessary to exponentially decay the future rewards, ensuring bounded returns. While the series is infinite, it is common to use this expression even in the case of possible terminations. Indeed, episode terminations can be considered to be the entering of an absorbing state that transitions only to itself and generates zero rewards thereafter. However, when the maximum length of an episode is predefined, it is easier to rewrite the expression above by explicitly including the time limit T:

G^γ_{t:T} = R_{t+1} + γ R_{t+2} + ... + γ^{T−t−1} R_T = Σ_{k=t}^{T−1} γ^{k−t} R_{k+1}.   (2)

Optimizing for the expectation of the return specified in Equation 2 is suitable for naturally time-limited tasks where the agent has to maximize its expected return G^γ_{0:T} over a fixed episode length only. In this case, since the return is bounded, a discount factor of γ = 1 can be used. However, in practice it is still common to keep γ smaller than 1 in order to give more priority to short-term rewards. Under this optimality model, the objective of the agent does not go beyond the time limit. Therefore, an agent optimizing under this model should ideally learn to take more risky actions that reveal higher expected return than safer ones as it approaches the end of the time limit. In Section 2, we study this case and illustrate that, due to the presence of the time limit, the remaining time is present in the environment's state and is essential to its Markov property BID19.
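As a small worked illustration of the two optimality models, the sketch below computes the return of Equation 1 (truncated to whatever rewards were actually observed) and the time-limited return of Equation 2 from a list of rewards; the function names and default values are illustrative only.

```python
def discounted_return(rewards, gamma=0.99):
    """Return of Equation 1, G_t = sum_k gamma^k * R_{t+k+1},
    computed backwards over the available reward sequence."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def time_limited_return(rewards, gamma=0.99, T=300):
    """Return of Equation 2: the same sum, truncated at the time limit T."""
    return discounted_return(rewards[:T], gamma)
```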
Therefore, we propose to include a notion of the remaining time in the agent's input, an approach that we refer to as time-awareness. We describe various general scenarios where lacking a notion of the remaining time can lead to suboptimal policies and instability, and demonstrate significant performance improvements for time-aware agents. Optimizing for the expectation of the return specified by Equation 1 is relevant for time-unlimited tasks where the interaction is not limited in time by nature. In this case, the agent has to maximize its expected return over an indefinite (e.g. infinite) period. However, it is desirable to use time limits in order to diversify the agent's experience. For example, starting from highly diverse states can avoid converging to suboptimal policies that are limited to a fraction of the state space. In Section 3, we show that in order to learn good policies that continue beyond the time limit, it is important to differentiate between the terminations that are due to time limits and those from the environment. Specifically, for value-based algorithms, we propose to continue bootstrapping at states where termination is due to the time limit, or generally any other causes other than the environmental ones. We refer to this method as partial-episode bootstrapping. We describe various scenarios where having a time limit can facilitate learning, but where the aim is to learn optimal policies for indefinite periods, and demonstrate that our method can significantly improve performance.

We evaluate the impact of the proposed methods on a range of novel and popular benchmark domains using a deep reinforcement learning BID0 BID9 algorithm called Proximal Policy Optimization (PPO), one which has recently been used to achieve state-of-the-art performance in many domains BID17 BID8. We use the OpenAI Baselines implementation of the PPO algorithm with the hyperparameters reported by BID17, unless explicitly specified. All novel environments are implemented using the OpenAI Gym framework BID3 and the standard benchmark domains are from the MuJoCo BID22 Gym collection. We modified the TimeLimit wrapper to include remaining time in the observations for the proposed time-aware agent and a flag to separate timeout terminations from environmental ones for the proposed partial-episode bootstrapping agent. For every task involving PPO, to have perfect reproducibility, we used the same 40 seeds from 0 to 39 to initialize the pseudo-random number generators for the agents and environments. Every 5 training cycles (i.e. 10240 time steps), we perform an evaluation on a complete episode and store the sums of rewards, discounted returns, and estimated state-values. For generating the performance plots, we average the values across all runs and then apply smoothing with a sliding window of size 10. The performance graphs show these smoothed averages as well as their standard error.

We empirically show that time-awareness significantly improves the performance and stability of PPO for the time-limited tasks and can sometimes result in quite interesting behaviors. For example, in the Hopper-v1 domain with T = 300, our agent learns to efficiently jump forward and fall towards the end of its time in order to maximize its travelled distance and achieve a "photo finish". For the time-unlimited tasks, we show that bootstrapping at the end of partial episodes allows the agent to significantly outperform the standard PPO.
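Both modifications to the TimeLimit wrapper mentioned above can be sketched as a small Gym wrapper; the code below is an illustrative approximation (old-style Gym step API, with a `timeout` info key chosen here for clarity), not the authors' released implementation.

```python
import numpy as np
import gym

class TimeAwareWrapper(gym.Wrapper):
    """Appends the remaining time, normalized from 1 to 0, to the observation,
    and flags terminations that are caused only by the time limit."""

    def __init__(self, env, max_episode_steps):
        super().__init__(env)
        self.max_episode_steps = max_episode_steps
        self.t = 0

    def reset(self, **kwargs):
        self.t = 0
        return self._augment(self.env.reset(**kwargs))

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.t += 1
        if self.t >= self.max_episode_steps and not done:
            done = True
            # Mark that this termination comes from the time limit, so a
            # partial-episode-bootstrapping agent can keep bootstrapping here.
            info["timeout"] = True
        return self._augment(obs), reward, done, info

    def _augment(self, obs):
        remaining = 1.0 - self.t / self.max_episode_steps
        return np.append(obs, remaining)
```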
In particular, on Hopper-v1, even if trained with episodes of only 200 steps, our agent manages to learn to hop for at least 10^6 time steps, resulting in more than two hours of rendered video. Detailed results for all variants of the tasks using PPO with and without the proposed methods are available in the Appendix. The source code will be made publicly available shortly. A visual depiction of highlights of the learned behaviors can be viewed at the address sites.google.com/view/time-limits-in-rl.

In tasks that are time-limited by nature, the learning objective is to optimize the expectation of the return G^γ_{0:T} from Equation 2. Interactions are systematically terminated at a fixed predetermined time step T if no environmental termination occurs earlier. This time-wise termination can be seen as transitioning to a terminal state whenever the time limit is reached. The states of the agent's environment, formally a Markov decision process (MDP) BID14, thus must contain a notion of the remaining time that is used by its transition function. This time-dependent MDP can be thought of as a stack of T time-independent MDPs, one for each time step, followed by one that only transitions to a terminal state. Thus, a decision-making agent in such an environment, at every time step t ∈ {0, ..., T − 1}, takes a decision that results in transitioning to a new state from the next MDP in the stack and receiving a reward. A time-unaware agent therefore effectively has to act in a partially observable Markov decision process (POMDP) BID12 where states that only differ by their remaining time appear identical. This phenomenon is a form of state-aliasing BID24 that is known to lead to suboptimal policies and instability due to the infeasibility of correct credit assignment. In this case, the terminations due to time limits can only be interpreted as part of the environment's stochasticity, where the time-unaware agent perceives a chance of transitioning to a terminal state from any given state. In fact, this perceived stochasticity changes dynamically with the agent's behavioral policy. For example, an agent could choose to stay in a fixed initial state during the entire course of an episode and perceive the probability of termination from that state to be 1/T, whereas it could choose to always move away from it, in which case this probability would be perceived to be zero.

In view of the above, we propose time-awareness for reinforcement learning agents in time-limited domains by directly including the remaining time T − t in the agent's representation of the environment's state or by providing a means to infer it. The importance of the inclusion of a notion of time in time-limited problems was first demonstrated by BID7, but seems to have been overlooked in the design of the benchmark domains and the evaluation of reinforcement learning agents. A major difference between the approach of BID7 and ours, however, is that we consider a more general class of time-dependent MDPs where the reward distribution and the transitions can also be time-dependent, preventing the possibility of considering multiple time instances at once as is the case for the Q_T-learning algorithm BID7. Here, we illustrate the issues faced by time-unaware agents by exemplifying the case for value-based methods.
The state-value function for a time-aware agent in an environment with time limit T is:

v_π(s, T − t) = E_π[ G^γ_{t:T} | S_t = s ].   (3)

By denoting an estimate of the state-value function by v̂_π, the temporal-difference (TD) update rule BID18, after transitioning from a state s to a state s′ and receiving a reward r as a result of an action a, is given by either of the following expressions conditioned on the time step t:

v̂_π(s, T − t) ← v̂_π(s, T − t) + α [ r + γ v̂_π(s′, T − t − 1) − v̂_π(s, T − t) ]   if t + 1 < T,
v̂_π(s, T − t) ← v̂_π(s, T − t) + α [ r − v̂_π(s, T − t) ]   if t + 1 = T.   (4)

The proposed added notion of the remaining time is the remaining-time argument T − t (indicated in bold and blue in the original). A time-unaware agent would be deprived of this information and thus would update v̂_π(s) with or without bootstrapping from the estimated value of s′ depending on whether the time limit is reached. Confused by the conflicting updates for estimating the value of the same state, instead of learning an accurate value function, this time-unaware agent learns an approximate average of these inconsistent updates.

Figure 2: The color-coded learned action probabilities overlaid on our Queue of Cars problem (black and white indicate 0 and 1, respectively). For each block, the top row represents the dangerous action and the bottom row the safe one. The 9 non-terminal states are represented horizontally. Left: a time-aware PPO agent at various times; the agent learns to optimally select the dangerous action. Right: 5 different instances of the time-unaware PPO agent.

If the time limit T is never varied, inclusion of the time t as a notion of the remaining time would be sufficient. However, for more generality we choose to represent the remaining time T − t. In practice, we used the remaining time normalized from 1 to 0, concatenated to the observations provided by the Gym environments by modifying the TimeLimit wrapper.

To give a simple example of the learning of an optimal time-dependent policy, we consider an MDP containing two states A and B. The agent always starts in A and has the possibility to choose an action to "stay" in place with no rewards or a "jump" action that transitions it to state B with a reward of +1. However, state B is considered a trap where the only possible action leads to a penalty of −1. The episodes terminate after a fixed number of steps T. The goal of the game is thus to jump at the last moment. For a time-unaware agent, the task is impossible to master for T > 1 and the best feasible policy would be to stay in place, resulting in an overall return of 0. In contrast, a time-aware agent can learn to stay in place for T − 1 steps and then jump, scoring an undiscounted sum of rewards of +1.

To further illustrate the impact of state-aliasing for time-unaware agents, we consider a deterministic gridworld environment (see FIG0) consisting of two possible goals rewarding 50 for reaching the top-right and 20 for the bottom-left. The agent has 5 actions: to move in cardinal directions or to stay in place. Any movement incurs a penalty of −1 while staying in place generates a reward of 0. Episodes terminate via a timer at T = 3 or if the agent has reached a goal. The initial state is randomly selected for every episode, excluding goal states. For training, we used tabular Q-learning BID23 with completely random actions, trained until convergence with a decaying learning rate and a discount factor of γ = 0.99. The time-aware agent has a state-action value table for each time step and easily learns the optimal policy, which is to go for the closest goal when there is enough time, and to stay in place otherwise.
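A minimal sketch of time-aware tabular Q-learning of the kind just described: the Q-table gains an index for the elapsed time, and bootstrapping is skipped on the final step. The environment interface (`reset()` returning a state index, `step(a)` returning `(state, reward, done)`), the fixed learning rate, and the fully random behavior policy are simplifying assumptions, not the exact experimental setup.

```python
import numpy as np

def time_aware_q_learning(env, n_states, n_actions, T=3,
                          episodes=100_000, alpha=0.1, gamma=0.99):
    """Tabular Q-learning with a separate Q-table per time step: q[t, s, a]."""
    q = np.zeros((T, n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        for t in range(T):
            a = np.random.randint(n_actions)       # fully random behavior policy
            s_next, r, done = env.step(a)
            if done or t == T - 1:
                target = r                          # no bootstrap at a true end
            else:
                target = r + gamma * q[t + 1, s_next].max()
            q[t, s, a] += alpha * (target - q[t, s, a])
            s = s_next
            if done:
                break
    return q
```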
For the time-unaware agent, the greedy values of the cells adjacent to the goal with the terminal reward of 50 converge to 49 and those adjacent to the goal with 20 converge to 19, because of the −1 penalty on every move. Then, since the time limit is T = 3, from each remaining cell the agent may have between 1 and 3 steps. For 2/3 of the time, the time-unaware agent receives −1 and bootstraps from the successor cell, and for 1/3 of the time it receives −1 and experiences termination. Thus, for v(s) = max_a q(s, a) and N(s) denoting the neighbors of s, for states not adjacent to the goals we have: v(s) = (2/3)(−1 + γ max_{s′ ∈ N(s)} v(s′)) + (1/3)(−1). This learned value function leads to a policy that always tries to go for the closest goal even if there is not enough time. While the final optimal policy does not require time information, this example clearly shows that the confusion during training due to state-aliasing can create a leakage of the values to states that are out of reach. It is worth noting that Monte Carlo methods such as REINFORCE BID26 BID20 are not susceptible to this leakage as they use complete returns instead of bootstrapping. However, without awareness of the remaining time, Monte Carlo methods would still not be able to learn an optimal policy in many cases, such as the Last Moment problem or the Queue of Cars problem in the subsequent section.

An interesting property of time-aware agents is the ability to dynamically adapt to the remaining time, which can, for example, be correlated with the current progress of the agent. To illustrate this, we introduce an environment which we call Queue of Cars, where the agent controls a vehicle that is held up behind an intermittently moving queue of cars. The agent's goal is to reach an exit located 9 slots away from its starting position. At any time, the agent can choose the "safe" action to stay in the queue, which may result in advancing to the next slot with 50% probability. Alternatively, it has the possibility to attempt to overtake with a "dangerous" action that has an 80% probability of advancing but poses a 10% chance of collision with the oncoming traffic, terminating the episode. The agent receives no rewards unless it reaches its destination, where it receives a +1 reward. The episode terminates by reaching the destination, running out of time, or colliding during an overtake. In this task, an agent can have a lucky sequence of safe transitions and reach the destination within the time limit without ever needing to attempt an overtake. However, the opposite can also happen, in which case the agent would need to overtake the cars to reach its destination in time. Time-unaware agents cannot possibly gauge the necessity to rush and thus can only learn a statistically efficient combination of dangerous and safe actions based only on position. Figure 2 shows this situation for a time-unaware PPO over 5 different runs against a time-aware one that adapts to the remaining time based on its distance to the goal, taking more dangerous actions in the face of time insufficiency. A discount factor of γ = 1 was used for both agents.

In this section, we evaluate the performance of PPO with and without the remaining time as part of the agent's input on a set of deterministic, continuous control tasks from OpenAI's MuJoCo Gym benchmarks BID3 BID22 BID4. By default, these environments use predefined time limits and are each reset to a random initial state after an episode termination.
FIG1 shows the performance of a time-unaware PPO against a time-aware one, demonstrating that time-awareness significantly improves the performance of PPO. The learned state-values shown for the InvertedPendulum-v1 task (see FIG1) illustrate perfectly the difference between a time-aware agent and a time-unaware one in terms of their estimated expected return as the episode progresses. While time-awareness enables the agent to learn an accurate exponential or linear decay of the expected return with time, the time-unaware agent only learns a constant estimate due to state-aliasing. Figure 4 (left) shows the performance comparisons of PPO with and without time-awareness in the Hopper-v1 domain with time limit T = 300. With a discount rate of 0.99, the standard PPO is initially on par with the time-aware PPO and later starts to plateau. As the agents become better, they start to experience terminations due to the time limit more frequently, at which point the time-unaware agent begins to perceive inconsistent returns for seemingly similar states. The advantage of the time-aware PPO becomes even clearer in the case of a discount rate of 1, where the time-unaware PPO diverges quite drastically. A possible reason is that the time-unaware PPO agent experiences much more significant conflicts as the returns are now the sum of the undiscounted rewards. Meanwhile, the time-aware PPO continues to perform well, as it is able to assign credits appropriately based on the knowledge of the remaining time.

Time-awareness does not only help agents by avoiding the conflicting updates. In fact, in naturally time-limited tasks where the agents have to maximize their performance for a limited time, time-aware agents can demonstrate quite interesting ways of ensuring that they achieve this objective. FIG2 shows the average final pose of the time-aware (middle) and time-unaware (right) agents. We can see that the time-aware agent learns to jump towards the end of its time in order to maximize its expected return, resulting in a "photo finish", something that a time-unaware agent cannot accurately achieve. Finally, FIG2 (bottom-right) shows an interesting behavior robustly demonstrated by the time-unaware PPO in the case of γ = 1, which is to actively stay in place, accumulating at least the rewards coming from the bonus for staying alive.

In this section, we explored the scenario where the aim is to learn a policy that maximizes the expected return over a limited time. We proposed to include a notion of the remaining time as part of the agent's observation to avoid state-aliasing, which can cause suboptimal policies and instability. However, this scenario is not always ideal, as there are cases where, even though the agent experiences time limits in its interaction with the environment, the objective is to learn a policy for a time-unlimited task. For instance, as we saw in the Hopper environment, the learned policy that maximizes the return over the T = 300 time steps generally results in a photo finish, which would lead to a fall and subsequent termination if the simulation were extended. Such a policy is not viable if the goal is to learn to move forward for an indefinite period of time. One solution is to not have time limits during training. However, it is often more efficient to instead have short snippets of interactions to expose the agent to diverse experiences. In the next section, we explore this case and propose a method that enables effective learning in such domains from partial episodes.
In tasks that are not time-limited by nature, the learning objective is to optimize the expectation of the return G^γ_0 from Equation 1. While the agent has to maximize its expected return over an indefinite (possibly infinite) period, it is desirable to still use time limits in order to frequently reset the environment and increase the diversity of the agent's experiences. A common mistake, however, is to then consider the terminations due to such time limits as environmental ones. This is equivalent to optimizing for the time-limited returns G^γ_{t:T} of Equation 2 instead of the indefinite-horizon returns of Equation 1. In the case of value-based algorithms, we propose to continue bootstrapping at states where termination is due to the time limit. The state-value function of a policy (from Equation 3) can be rewritten in terms of the time-limited return G^γ_{t:T} and the bootstrapped value from the last state v_π(S_T):

v_π(s) = E_π[ G^γ_{t:T} + γ^{T−t} v_π(S_T) | S_t = s ].   (5)

By denoting an estimate of the state-value function by v̂_π, the temporal-difference update rule after transitioning from a state s to a state s′ and receiving a reward r as a result of an action a is given by either of the following expressions conditioned on the time step t:

v̂_π(s) ← v̂_π(s) + α [ r + γ v̂_π(s′) − v̂_π(s) ]   if the termination is due to the time limit (or the episode continues),
v̂_π(s) ← v̂_π(s) + α [ r − v̂_π(s) ]   if the termination is environmental.   (6)

The proposed partial-episode bootstrap is the bootstrap term γ v̂_π(s′) that is retained when the time limit is reached (indicated in bold and green in the original). An agent without this modification would update v̂_π(s) with or without bootstrapping from the estimated value of s′ depending on whether there is some remaining time or not. Similarly to Equation 4, the conflicting updates for estimating the value of the same state lead to an approximate average of these updates.

While this section is related to the previous one, it is somewhat orthogonal. In the previous section, one of the issues was bootstrapping values from states that were out of reach, letting the agent falsely believe that more rewards were available afterwards. Conversely, the problem presented here arises when systematic bootstrapping is not performed from states at the time limit, forgetting that more rewards would actually be available thereafter.

We revisit the gridworld environment from Section 2.2. While previously the agent's task was to learn an optimal policy for a given time limit, we now consider how an agent can learn a good policy for an indefinite period from partial-episode experiences. We use the same setup as in Section 2.2. Again, we use tabular Q-learning, but instead of considering terminations due to time limits as environmental ones, we continue bootstrapping from the non-terminal states that are reached at the time limits. This modification allows our agent to learn the time-unlimited optimal policy of always going for the most rewarding goal (see FIG0). On the other hand, while the standard agent that does not perform the final bootstrapping (see FIG0) had values from out-of-reach cells leaking into its learned value function, these bootstraps did not occur in sufficient proportion with respect to the terminations due to time limits to let the agent learn the time-unlimited optimal policy.

For the next experiments, we again use PPO but with two key modifications. We modify Gym's TimeLimit wrapper, not to include the remaining time (as needed for Section 2), but instead to include a flag to differentiate the terminations due to the time limit from the environmental ones. We also modify the PPO implementation to enable continuing to bootstrap when terminations are due to time limits only. This involves modifying the implementation of the generalized advantage estimation (GAE).
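A sketch of what this modification amounts to in a GAE computation is given below. It assumes that the value of the state actually reached at each step is available (`next_values`), including the state reached when the time limit hits, and it is an approximation of the idea rather than the actual OpenAI Baselines patch.

```python
def gae_with_partial_episode_bootstrap(rewards, values, next_values,
                                       env_dones, timeouts,
                                       gamma=0.99, lam=0.95):
    """Generalized advantage estimation with partial-episode bootstrapping.

    rewards[t]      - reward received at step t
    values[t]       - V(s_t)
    next_values[t]  - V(s_{t+1}) for the state actually reached at step t
    env_dones[t]    - True if the termination at step t came from the environment
    timeouts[t]     - True if the termination at step t came only from the time limit
    """
    T = len(rewards)
    advantages = [0.0] * T
    running = 0.0
    for t in reversed(range(T)):
        # Keep bootstrapping at timeouts; cut the bootstrap only for
        # environmental terminations.
        bootstrap = 0.0 if env_dones[t] else 1.0
        delta = rewards[t] + gamma * bootstrap * next_values[t] - values[t]
        # The lambda-recursion still stops at any episode boundary, because the
        # following transition belongs to a different episode.
        boundary = env_dones[t] or timeouts[t]
        running = delta + (0.0 if boundary else gamma * lam * running)
        advantages[t] = running
    returns = [a + v for a, v in zip(advantages, values)]
    return advantages, returns
```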
While GAEs use an exponentially-weighted average of n-step value estimations for bootstrapping that are more complex than the one-step lookahead bootstrapping explained in Equation 6, continuing to bootstrap from the last non-terminal states (i.e. at the end of the partial episodes) is the only modification required for the proposed approach. (Figure: during the evaluations, the standard PPO agent degrades drastically after some time.)

Here, we consider the Hopper-v1 environment from Section 2.4, but instead aim to learn a policy that maximizes the agent's expected return over a time-unlimited horizon. We do not revisit the Reacher-v1 and the InvertedPendulum-v1 environments as their extension to time-unlimited domains is not of particular value, that is, staying at the target position long after the target is reached (Reacher-v1) or maintaining the pendulum's vertical pose long after it is balanced (InvertedPendulum-v1). The aim here is to show that by continuing to bootstrap from episode terminations that are due to time limits only, we are able to learn good policies for time-unlimited domains. FIG4 demonstrates performance evaluations of the standard PPO against one with the proposed partial-episode bootstrapping modification. The agents are trained on time-limited episodes of maximum T = 200 time steps, and are evaluated in the same environment, but with T = 10^6 time steps. We show that the proposed bootstrapping method significantly outperforms the standard PPO. During the evaluations, the standard PPO agent managed to reach a maximum of 7238 time steps on only one of the 40 training seeds, while our agent managed to reach the evaluation time limit of T = 10^6 on 7 occasions. This is quite impressive as this time limit corresponds to more than 2 hours of rendered hopping.

The proposed bootstrapping at time-limit terminations was shown to enable our agent to effectively learn a good long-term policy on Hopper-v1. However, it could be argued that, since the Hopper-v1 environment always starts in quite similar configurations, the resultant policy is overfitted and almost completely cyclic. To demonstrate the ability of our proposed agent to learn non-cyclic policies for time-unlimited domains, we create a novel MuJoCo environment consisting of a torque-controlled ball that has to be used to push a cube to specified target positions. Once the cube has touched the target, the agent is rewarded and the target is moved away from the cube to a new random position. Because the task lacks terminal states, it can continue indefinitely. The objects are surrounded by fixed bounding walls. The inner edge of the walls stops the cube but not the ball, in order to let the agent move the cube even if it is in a corner. The movements of the ball are limited to the horizontal plane and to the area defined by the outer edge of the walls. The environment's state representation consists of the objects' coordinates and velocities, and the cube's rotation. The agent receives no rewards unless the cube reaches a target location, at which point the agent receives a reinforcement reward of 1. Due to the absence of reward shaping, reinforcement learning agents are prone to getting stuck, unable to learn to solve problems. Thus it is often useful to introduce a time limit during training in order to facilitate learning. We use a training time limit of 50, only sufficient to push the cube to one target location in most cases.
The evaluations, however, consisted of 10^3 steps, long enough to allow successfully reaching several targets. FIG4 shows the performance comparison of the standard PPO against one with the proposed modification. An entropy coefficient of 0.03 is used to encourage exploration and increase the chance of reaching a target and experiencing a reinforcement reward. We found this value to yield the best performance for both agents among those from the set {0, 0.01, 0.02, 0.03, 0.04, 0.05}. While the performance of the standard PPO degrades significantly after some time, it is clear that bootstrapping at the time limit helps our agent to perform significantly better. The maximum number of targets reached by our agent in a single episode of evaluation (T = 10^3) was 36, against 21 for the standard PPO.

We showed in Section 2 that time-awareness is required for correct credit assignment in domains where the agent has to optimize its performance over a time-limited horizon. However, even without the knowledge of the remaining time, reinforcement learning agents still often manage to perform relatively well. This could be due to several reasons, including the time limit being sufficiently long that terminations due to time limits are hardly ever experienced, as is the case, for instance, in the Arcade Learning Environment (ALE) BID1 BID13 BID12. In deep learning BID11 BID15, it is highly common to use a stack of previous observations or recurrent neural networks (RNNs) BID6 to address scenarios with partial observations BID25. These solutions may to an extent help when the remaining time is not included as part of the agent's input. However, they are much more complex architectures and are only next-best solutions, while including a notion of the remaining time is quite simple and allows better diagnosis of the learned policies. The proposed approach is quite generic and can potentially be applied to domains with varying time limits where the agent has to learn to generalize as the remaining time approaches zero. In real-world applications such as robotics, the proposed approach could easily be adapted by using the real time instead of simulation time steps.

In order for the proposed partial-episode bootstrapping in Section 3 to work, as is the case for value-based methods in general, the agent needs to bootstrap from reliable estimated predictions. This is in general resolved by enabling sufficient exploration. However, when the interactions are limited in time, exploration of the full state-space may not be feasible from some fixed starting states. Thus, a good way to allow appropriate exploration in such domains is to sufficiently randomize the initial states. It is worth noting that the proposed partial-episode bootstrapping is quite generic in that it is not restricted to partial episodes caused only by time limits. In fact, this approach is valid for any early termination cause. For instance, it is common in the curriculum learning literature to start from near the goal states (easier tasks), and gradually expand to further states (more difficult tasks) BID5. In this case, it can be helpful to stitch the learned values by terminating the episodes and bootstrapping as soon as the agent enters a state that is already well known. Since the proposed methods were shown to enable better optimization for the time-limited and time-unlimited domains, we believe that they have the potential to improve the performance and stability of a large number of existing reinforcement learning algorithms.
We also propose that, since reinforcement learning agents are in fact optimizing for the expected returns, and not the undiscounted sum of rewards, it is more appropriate to consider this measure for performance evaluation. We considered the problem of learning optimal policies in time-limited and time-unlimited domains using time-limited interactions. We showed that when learning policies for time-limited tasks, it is important to include a notion of the remaining time as part of the agent's input. Not doing so can cause state-aliasing, which in turn can result in suboptimal policies, instability, and slower convergence. We then showed that, when learning policies that are optimal for time-unlimited tasks, it is more appropriate to continue bootstrapping at the end of the partial episodes when termination is due to time limits, or any other early termination causes other than environmental ones. In both cases, we illustrated that our proposed methods can significantly improve the performance of PPO and allow optimizing more directly, and accurately, for either of the optimality models. (Appendix figure: Reacher-v1, γ = 1.0, training limit = 50, evaluation limit = 50.)
HyDAQl-AW
We consider the problem of learning optimal policies in time-limited and time-unlimited domains using time-limited interactions.
Although stochastic gradient descent (SGD) is a driving force behind the recent success of deep learning, our understanding of its dynamics in a high-dimensional parameter space is limited. In recent years, some researchers have used the stochasticity of minibatch gradients, or the signal-to-noise ratio, to better characterize the learning dynamics of SGD. Inspired by these works, we here analyze SGD from a geometrical perspective by inspecting the stochasticity of the norms and directions of minibatch gradients. We propose a model of the directional concentration of minibatch gradients through the von Mises-Fisher (vMF) distribution, and show that the directional uniformity of minibatch gradients increases over the course of SGD. We empirically verify our results using deep convolutional networks and observe a higher correlation between the gradient stochasticity and the proposed directional uniformity than against the gradient norm stochasticity, suggesting that the directional statistics of minibatch gradients is a major factor behind SGD.

Stochastic gradient descent (SGD) has been a driving force behind the recent success of deep learning. Despite a series of works on improving SGD by incorporating the second-order information of the objective function BID26 BID21 BID6 BID22 BID7, SGD is still the most widely used optimization algorithm for training a deep neural network. The learning dynamics of SGD, however, has not been well characterized beyond the fact that it converges to an extremal point BID1, due to the non-convexity and high-dimensionality of the usual objective functions used in deep learning. Gradient stochasticity, or the signal-to-noise ratio (SNR) of the stochastic gradient, has been proposed as a tool for analyzing the learning dynamics of SGD. BID28 identified two phases in SGD based on this. In the first phase, the "drift phase", the gradient mean is much higher than its standard deviation, during which optimization progresses rapidly. This drift phase is followed by the "diffusion phase", where SGD behaves similarly to Gaussian noise with very small means. Similar observations were made by BID18 and BID4, who have also divided the learning dynamics of SGD into two phases. BID28 have proposed that such a phase transition is related to information compression. Unlike them, we notice that there are two aspects to the gradient stochasticity. One is the L2 norm of the minibatch gradient (the norm stochasticity), and the other is the directional balance of minibatch gradients (the directional stochasticity). SGD converges or terminates when either the norm of the minibatch gradient vanishes to zero, or when the angles of the minibatch gradients are uniformly distributed and their non-zero norms are close to each other. That is, the gradient stochasticity, or the SNR of the stochastic gradient, is driven by both of these aspects, and it is necessary for us to investigate not only the holistic SNR but also the SNR of the minibatch gradient norm and that of the minibatch gradient angles. In this paper, we use the von Mises-Fisher (vMF hereafter) distribution, which is often used in directional statistics BID20, and its concentration parameter κ to characterize the directional balance of minibatch gradients and understand the learning dynamics of SGD from the perspective of directional statistics of minibatch gradients. We prove that SGD increases the directional balance of minibatch gradients.
We empirically verify this with deep convolutional networks with various techniques, including batch normalization BID12 and residual connections BID9, on MNIST and CIFAR-10 (BID15). Our empirical investigation further reveals that the proposed directional stochasticity is a major driver behind the gradient stochasticity compared to the norm stochasticity, suggesting the importance of understanding the directional statistics of the stochastic gradient.

Contribution: We analyze the directional stochasticity of the minibatch gradients via angles as well as the concentration parameter of the vMF distribution. In particular, we theoretically show that the directional uniformity of the minibatch gradients modeled by the vMF distribution increases as training progresses, and verify this by experiments. In doing so, we introduce the gradient norm stochasticity as the ratio of the standard deviation of the minibatch gradients to their expectation, and theoretically and empirically show that this gradient norm stochasticity decreases as the batch size increases.

Related work: Most studies about SGD dynamics have been based on two-phase behavior BID28 BID18 BID4. BID18 investigated this behavior by considering a shallow neural network with residual connections and assuming a standard normal input distribution. They showed that SGD-based learning under these setups has two phases: search and convergence. BID28, on the other hand, investigated a deep neural network with tanh activation functions, and showed that SGD-based learning has drift and diffusion phases. They have also proposed that such an SNR transition (drift + diffusion) is related to an information transition divided into empirical error minimization and representation compression phases. Subsequent work has reported that the information transition is not generally associated with the SNR transition with ReLU BID23 activation functions. BID4 instead looked at the inner product between successive minibatch gradients and presented transient and stationary phases. Unlike our work here, the experimental verification in the previous work was conducted under limited settings (a shallow network BID18, a specific activation function BID28, and only the MNIST dataset BID28 BID4) that conform well with their theoretical assumptions. Moreover, their work does not offer empirical results about the effect of the latest techniques, including batch normalization BID12 layers and residual connections BID9.

Norms and angles: Unless explicitly stated, a norm refers to the L2 norm. ∥·∥ and ⟨·, ·⟩ thus correspond to the L2 norm and the Euclidean inner product on R^d, respectively. We use x_n ⇒ x to indicate that a random variable x_n converges to x in distribution. Similarly, x_n →_P x means convergence in probability. An angle θ between d-dimensional vectors u and v is defined by θ(u, v) = arccos( ⟨u, v⟩ / (∥u∥ ∥v∥) ).

Loss functions: A loss function of a neural network is written as f(w) = (1/n) Σ_{i=1}^{n} f_i(w), where w ∈ R^d is a trainable parameter. f_i is "a per-example loss function" computed on the i-th data point. We use I and m to denote a minibatch index set and its batch size, respectively. Further, we call f_I(w) = (1/m) Σ_{i∈I} f_i(w) "a minibatch loss function given I". In Section 3.1, we use g_i(w) and ĝ(w) to denote −∇_w f_i(w) and −∇_w f_I(w), respectively. In Section 3.3, the index i is used for the corresponding minibatch index set I_i. For example, the negative gradient of f_{I_i}(w) is written as ĝ_i(w).
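To make this notation concrete, the sketch below computes two minibatch gradients of a small PyTorch model as flattened vectors and the angle θ between them; the model, data, and batch size are placeholders rather than the exact experimental setup described later.

```python
import torch
import torch.nn as nn

def flat_minibatch_grad(model, loss_fn, x, y):
    """Return the minibatch gradient ĝ(w) = -∇_w f_I(w), flattened into one vector."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return -torch.cat([p.grad.reshape(-1) for p in model.parameters()])

def angle_between(u, v):
    """θ(u, v) = arccos(<u, v> / (||u|| ||v||)), in radians."""
    cos = torch.dot(u, v) / (u.norm() * v.norm())
    return torch.arccos(cos.clamp(-1.0, 1.0))

# Toy usage with random data; the experiments in the text use MNIST / CIFAR-10 minibatches.
model = nn.Sequential(nn.Linear(784, 800), nn.ReLU(), nn.Linear(800, 10))
loss_fn = nn.CrossEntropyLoss()
x1, y1 = torch.randn(64, 784), torch.randint(0, 10, (64,))
x2, y2 = torch.randn(64, 784), torch.randint(0, 10, (64,))
g1 = flat_minibatch_grad(model, loss_fn, x1, y1)
g2 = flat_minibatch_grad(model, loss_fn, x2, y2)
print(angle_between(g1, g2))
```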
During optimization, we denote a parameter w at the i-th iteration in the t-th epoch as w Figure 1: Characteristics of the vMF distribution in a 2-dimensional space. 100 random samples are drawn from vMF(µ, κ) where µ = and κ = {0, 5, 50}.on the hypersphere DISPLAYFORM0 Here, the concentration parameter κ determines how the samples from this distribution are concentrated on the mean direction µ and C d (κ) is constant determined by d and κ. If κ is zero, then it is a uniform distribution on the unit hypersphere, and as κ → ∞, it becomes a point mass on the unit hypersphere (Figure 1). The maximum likelihood estimates for µ and κ arê DISPLAYFORM1 1−r 2 where x i's are random samples from the vMF distribution andr = n i=1 xi n. The formula forκ is approximate since the exact computation is intractable BID0. It is a usual practice for SGD to use a minibatch gradientĝ(w) = −∇ w f I (w) instead of a full batch gradient g(w) = −∇ w f (w). The minibatch index set I is drawn from {1,. . .,n} randomly.ĝ(w) satisfies E[ĝ(w)] = g(w) and Cov(ĝ(w),ĝ(w)) ≈ 1 mn n i=1 g i (w)g i (w) for n m where n is the number of full data points and g i (w) = −∇ w f i (w) BID11. As the batch size m increases, the randomness ofĝ(w) decreases. Hence E ĝ(w) tends to g(w), and Var(ĝ(w) ), which is the variance of the norm of the minibatch gradient, vanishes. The convergence rate analysis is as the following: Theorem 1. Letĝ(w) be a minibatch gradient induced from the minibatch index set I of batch size m from {1, . . ., n} and suppose γ = max i,j∈{1,...,n} | g i (w), g j (w) |. Then DISPLAYFORM0 and DISPLAYFORM1 Proof. See Supplemental A.According to Theorem 1, a large batch size m reduces the variance of ĝ(w) centered at E ĝ(w) with convergence rate O(1/m). We empirically verify this by estimating the gradient norm stochasticity at random points while varying the minibatch size, using a fully-connected neural network (FNN) with MNIST, as shown in FIG2 (a) (see Supplemental E for more details.) This theorem however only demonstrate that the gradient norm stochasticity is (l.h.s. of FORMULA3) is low at random initial points. It may blow up after SGD updates, since the upper bound (r.h.s. of FORMULA3) is inversely proportional to g(w). This implies that the learning dynamics and convergence of SGD, measured in terms of the vanishing gradient, i.e., n b i=1ĝ i (w) ≈ 0, is not necessarily explained by the vanishing norms of minibatch gradients, but rather by the balance of the directions ofĝ i (w)'s, which motivates our investigation of the directional statistics of minibatch gradients. See FIG2 (b) as an illustration. In order to investigate the directions of minibatch gradients and how they balance, we start from an angle between two vectors. First, we analyze an asymptotic behavior of angles between uniformly random unit vectors in a high-dimensional space. Theorem 2. Suppose that u and v are mutually independent d-dimensional uniformly random unit vectors. Then, DISPLAYFORM0 Proof. See Supplemental B.According to Theorem 2, the angle between two independent uniformly random unit vectors is normally distributed and becomes increasingly more concentrated as d grows FIG3 ). 
If SGD iterations indeed drive the directions of minibatch gradients to be uniform, then, at least, the distribution of angles between minibatch gradients and a given uniformly sampled unit vector follows asymptotically DISPLAYFORM1 Figures 3(b) and 3(c) show that the distribution of the angles between minibatch gradients and a given uniformly sampled unit vector converges to an asymptotic distribution after SGD iterations. Although we could measure the uniformity of minibatch gradients how the angle distribution between minibatch gradients is close to, it is not as trivial to compare the distributions as to compare numerical values. This necessitates another way to measure the uniformity of minibatch gradients. We draw a density plot θ(u,ĝ j (w)) ĝ j (w)) ) for 3, 000 minibatch gradients (black) at w = w 0 0 (b) and w = w 0 final, with training accuracy of > 99.9%, (c) when u is given. After SGD iterations, the density of θ(u,ĝ j (w)) converges to an asymptotic density (red). The dimension of FNN is 635,200. To model the uniformity of minibatch gradients, we propose to use the vMF distribution in Definition 1. The concentration parameter κ measures how uniformly the directions of unit vectors are distributed. By Theorem 1, with a large batch size, the norm of minibatch gradient is nearly deterministic, andμ is almost parallel to the direction of full batch gradient. In other words, κ measures the concentration of the minibatch gradients directions around the full batch gradient. The following Lemma 1 introduces the relationship between the norm of averaged unit vectors and κ, the approximate estimator of κ. Lemma 1. The approximated estimator of κ induced from the d-dimensional unit vectors DISPLAYFORM0 Moreover, h(·) and h (·) are strictly increasing and increasing on [0, n b), respectively. Proof. See Supplemental C.1. DISPLAYFORM1 which is measured from the directions from the current location w to the fixed points p i's, where h(·) is a function defined in Lemma 1. Since h(·) is an increasing function, we may focus only on DISPLAYFORM2 pi−w pi−w to see howκ behaves with respect to its argument. Lemma 2 implies that the estimated directional concentrationκ decreases if we move away from w FIG5 ). In other words,κ(w) <κ(w 0 0). DISPLAYFORM3 If all p i's are not on a single ray from the current location w, then there exists positive number η such that DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 Proof. See Supplemental C.2.We make the connection between the observation above and SGD by first viewing p i's as local minibatch solutions. Definition 2. For a minibatch index set I i, p i (w) = arg min w ∈N (w;ri) f Ii (w) is a local minibatch solution of I i at w, where N (w; r i) is a neighborhood of radius r i at w. Here, r i is determined by w and I i for p i (w) to exist uniquely. Under this definition, p i (w) is local minimum of a minibatch loss function f Ii near w. Then we reasonably expect that the direction ofĝ i (w) = −∇ w f Ii (w) is similar to that of p i (w) − w. Each epoch of SGD with a learning rate η computes a series of w DISPLAYFORM7.., n b } with a large batch size or at the early stage of SGD iterations. Combining these approximations, ĝ i (w DISPLAYFORM8 For example, suppose that t = 0, n b = 3 and τ = 1, and assume that p i (w DISPLAYFORM9 Hence, we haveκ(w DISPLAYFORM10 DISPLAYFORM11 for a sufficiently small ξ > 0, then there exists positive number η such that DISPLAYFORM12 Proof. 
See Supplemental C.3.This Theorem 3 asserts thatκ(·) decreases even with some perturbation along the averaged direction Without the corollary above, we need to solve p i (w 0 t) = arg min w∈N (w 0 t ;r) f Ii (w) for all i ∈ {1, . . ., n s}, where n s is the number of samples to estimate κ, in order to computeκ(w 0 t). Corollary 3.1 however implies that we can computeκ(w 0 t) by usingĝ DISPLAYFORM13 In Practice Although the number of all possible minibatches in each epoch is n b = n m, it is often the case to use n b ≈ n/m minibatches at each epoch in practice to go from w 0 t to w 0 t+1. Assuming that these n b minibatches were selected uniformly at random, the average of the n b normalized minibatch gradients is the maximum likelihood estimate of µ, just like the average of all n b normalized minibatch gradients. Thus, we expect with a large n b, DISPLAYFORM14 and that SGD in practice also satisfiesκ(w In order to empirically verify our theory on directional statistics of minibatch gradients, we train various types of deep neural networks using SGD and monitor the following metrics for analyzing the learning dynamics of SGD:• Training loss Figure 5: We show the averageκ (black curve) ± std. (shaded area), as the function of the number of training epochs (in log-log scale) across various batch sizes in MNIST classifications using FNN with fixed learning rate 0.01 and 5 random initializations. Althoughκ with the large batch size decreases more smoothly rather than the small batch size, we observe thatκ still decreases well with minibatches of size 64. We did not match the ranges of the y-axes across the plots to emphasize the trend of monotonic decrease.• Validation loss DISPLAYFORM0 The latter three quantities are statistically estimated using n s = 3, 000 minibatches. We useκ to denote the κ estimate. We train the following types of deep neural networks (Supplemental E):• FNN: a fully connected network with a single hidden layer • DFNN: a fully connected network with three hidden layers • CNN: a convolutional network with 14 layers BID16 In the case of the CNN, we also evaluate its variant with skip connections (+Res) BID9.As it was shown recently by BID27 that batch normalization BID12 improves the smoothness of a loss function in terms of its Hessian, we also test adding batch normalization to each layer right before the ReLU BID23 ) nonlinearity (+BN). We use MNIST for the FNN, DFNN and their variants, while CIFAR-10 (BID15 for the CNN and its variants. Our theory suggests a sufficiently large batch size for verification. We empirically analyze how large a batch size is needed in Figure 5 . From these plots,κ decreases monotonically regardless of the minibatch size, but the variance over multiple training runs is much smaller with a larger minibatch size. We thus decide to use a practical size of 64. With this fixed minibatch size, we use a fixed learning rate of 0.01, which allows us to achieve the training accuracy of > 99.9% for every training run in our experiments. We repeat each setup five times starting from different random initial parameters and report both the mean and standard deviation. FNN and DFNN We first observe thatκ decreases over training regardless of the network's depth in FIG12 (a,b). We however also notice thatκ decrease monotonically with the FNN, but less so with its deeper variant (DFNN). We conjecture this is due to the less-smooth loss landscape of a deep neural network. 
This difference between FNN and DFNN, however, almost entirely vanishes when batch normalization (+BN) is applied (FIG12 (e,f)). This was expected, as batch normalization is known to make the loss function behave better, and our theory assumes a smooth objective function.

CNN: The CNN is substantially deeper than either FNN or DFNN and is trained on the substantially more difficult problem of CIFAR-10. In other words, the assumptions underlying our theory may not hold as well. Nevertheless, as shown in FIG12, [...]

Effect of +BN and +Res: Based on our observations that the uniformity of minibatch gradients increases monotonically when a deep neural network is equipped with residual connections (+Res) and trained with batch normalization (+BN), we conjecture that the loss function induced by these two techniques better satisfies the assumptions underlying our theoretical analysis, such as its well-behavedness. This conjecture is supported by, for instance, BID27, who demonstrated that batch normalization guarantees the boundedness of the Hessian, and BID25, who showed that residual connections eliminate some singularities of the Hessian.

κ̂ near the end of training: The minimum average κ̂ of DFNN+BN, which has 1,920,000 parameters, is 71,009.20, that of FNN+BN, which has 636,800 parameters, is 23,059.16, and that of CNN+BN+Res, which has 207,152 parameters, is 20,320.43. These average κ̂ are within a constant multiple of the κ̂ estimated using 3,000 samples from the vMF distribution with true κ = 0 (35,075.99 with 1,920,000 dimensions, 11,621.63 with 636,800 dimensions, and 3,781.04 with 207,152 dimensions). This implies that we cannot say that the underlying directional distribution of minibatch gradients in all these cases at the end of training is not close to uniform BID5. For more detailed analysis, see Supplementary F.

The gradient stochasticity (GS) was used by BID28 as a main metric for identifying two phases of SGD learning in deep neural networks. This quantity includes both the gradient norm stochasticity (GNS) and the directional uniformity κ, implying that either or both of GNS and κ could drive the gradient stochasticity. We thus investigate the relationship among these three quantities as well as training and validation losses. We focus on CNN, CNN+BN and CNN+Res+BN trained on CIFAR-10. (FIG13 caption fragment: [...] and directional uniformity κ̂. We normalized each quantity by its maximum value over training for easier comparison on a single plot. In all the cases, SNR (orange) and κ̂ (red) are almost entirely correlated with each other, while normSNR is less correlated. Second row: we further verify this by illustrating SNR-κ̂ scatter plots (red) and SNR-normSNR scatter plots (blue) in log-log scales. These plots suggest that the SNR is largely driven by the directional uniformity.)

From FIG13 (first row), it is clear that the proposed metric of directional uniformity κ̂ correlates better with the gradient stochasticity than the gradient norm stochasticity does. This was especially prominent during the early stage of learning, suggesting that the directional statistics of minibatch gradients is a major explanatory factor behind the learning dynamics of SGD. This difference in correlations is much more apparent from the scatter plots in FIG13 (second row). We show these plots, created from the other four training runs per setup, in Supplemental G.

Stochasticity of gradients is key to understanding the learning dynamics of SGD BID28 and has been pointed out as a factor behind the success of SGD (see, e.g., BID17 BID14).
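The quantities compared here can be estimated from a sample of flattened minibatch gradients roughly as sketched below. The κ̂ formula is the standard approximation for the vMF concentration estimate, κ̂ = r̄(d − r̄²)/(1 − r̄²) with r̄ the norm of the average unit gradient; the particular SNR definitions used in the function are simplified stand-ins rather than the paper's exact metrics.

```python
import numpy as np

def directional_and_norm_stochasticity(grads):
    """grads: array of shape (n_samples, d), one flattened minibatch gradient per row.

    Returns (kappa_hat, norm_snr, grad_snr):
      kappa_hat - approximate vMF concentration of the gradient directions,
      norm_snr  - E[||g||] / std(||g||), a simple gradient-norm SNR,
      grad_snr  - ||E[g]|| / E[||g - E[g]||], a simple overall gradient SNR.
    """
    grads = np.asarray(grads, dtype=np.float64)
    n, d = grads.shape
    norms = np.linalg.norm(grads, axis=1)

    # Directional statistics: norm of the average of the unit-normalized gradients.
    units = grads / norms[:, None]
    r_bar = np.linalg.norm(units.mean(axis=0))
    kappa_hat = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)

    # Norm stochasticity and an overall signal-to-noise ratio.
    norm_snr = norms.mean() / norms.std()
    mean_grad = grads.mean(axis=0)
    grad_snr = np.linalg.norm(mean_grad) / np.linalg.norm(grads - mean_grad, axis=1).mean()
    return kappa_hat, norm_snr, grad_snr
```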
In this paper, we provide a theoretical framework using von Mises-Fisher distribution, under which the directional stochasticity of minibatch gradients can be estimated and analyzed, and show that the directional uniformity increases over the course of SGD. Through the extensive empirical evaluation, we have observed that the directional uniformity indeed improves over the course of training a deep neural network, and that its trend is monotonic when batch normalization and skip connections were used. Furthermore, we demonstrated that the stochasticity of minibatch gradients is largely determined by the directional stochasticity rather than the gradient norm stochasticity. Our work in this paper suggests two major research directions for the future. First, our analysis has focused on the aspect of optimization, and it is an open question how the directional uniformity relates to the generalization error although handling the stochasticity of gradients has improved SGD BID24 BID11 BID29 BID13 . Second, we have focused on passive analysis of SGD using the directional statistics of minibatch gradients, but it is not unreasonable to suspect that SGD could be improved by explicitly taking into account the directional statistics of minibatch gradients during optimization. In proving Theorem 1, we use Lemma A.1. Define selector random variables BID11 as below: DISPLAYFORM0 Then we haveĝ DISPLAYFORM1 Lemma A.1. Letĝ(w) be a minibatch gradient induced from the minibatch index set I with batch size m from {1, . . ., n}. Then DISPLAYFORM2 where γ = max i,j∈{1,...,n} | g i (w), g j (w) |. DISPLAYFORM3 Theorem 1. Letĝ(w) be a minibatch gradient induced from the minibatch index set I of batch size m from {1, . . ., n} and suppose γ = max i,j∈{1,...,n} | g i (w), g j (w) |. Then DISPLAYFORM4 and DISPLAYFORM5 Hence, Var(ĝ(w) ) DISPLAYFORM6 2 ≤ E ĝ(w) 2. From the second inequality and Lemma A.1, DISPLAYFORM0. DISPLAYFORM1 For proofs, Slutsky's theorem and delta method are key to describe limiting behaviors of random variables in distributional sense. Theorem B.1. (Slutsky's theorem, BID3) Let {x n}, {y n} be a sequence of random variables that satisfies x n ⇒ x and y n P → ρ when n goes to infinity and ρ is constant. Then x n y n ⇒ cx Theorem B.2. (Delta method, BID3) Let y n be a sequence of random variables that satisfies √ n(y n − µ) ⇒ N (0, σ 2). For a given smooth function f: R → R, suppose that f (µ) exists and is not 0 where f is a derivative. Then DISPLAYFORM0 Lemma B.1. Suppose that u and v are mutually independent d-dimensional uniformly random unit vectors. Then, DISPLAYFORM1 Proof. Note that d-dimensional uniformly random unit vectors u can be generated by normalization of d-dimensional multivariate standard normal random vectors x ∼ N (0, DISPLAYFORM2 Suppose that two independent uniformly random unit vector u and v are generated by two indepen- DISPLAYFORM3 By SLLN, we have DISPLAYFORM4 Since almost sure convergence implies convergence in probability, DISPLAYFORM5 Therefore, by Theorem B.1 (Slutsky's theorem), DISPLAYFORM6 Theorem 2. Suppose that u and v are mutually independent d-dimensional uniformly random unit vectors. Then, DISPLAYFORM7 Proof. Suppose that µ = 0, σ = 1, and f (·) = DISPLAYFORM8 Moreover, h(·) and h (·) are strict increasing and increasing on [0, n b), respectively. (1 −r 2) 2 and its numerator is always positive for d > 2. When d = 2, DISPLAYFORM0 (1 −r 2) 2 > 0. Soκ increases asr increases. 
The Lipschitz continuity of h(·) directly comes from the continuity of DISPLAYFORM1 Recall that any continuous function on the compact interval [0, n b (1 −)] is bounded. Hence the derivative ofκ with respect to u is bounded. This implies the Lipschitz continuity of h(·).h(·) is strictly increasing sincer = u n b. Further, DISPLAYFORM2 If all p i's are not on a single ray from the current location w, then there exists positive number η such that DISPLAYFORM3 Proof. Without loss of generality, we regard w as the origin. DISPLAYFORM4. Therefore, we only need to show DISPLAYFORM5 we have DISPLAYFORM6 Note that p j = p j and x j = 1. We have DISPLAYFORM7 Since the equality holds when u, x j 2 = u 2 x j 2 for all j, we have strict inequality when all p i's are not located on a single ray from the origin. The proof of Theorem 3 is very similar to that of Lemma 2. DISPLAYFORM0 for sufficiently small ξ > 0, then there exists positive number η such that DISPLAYFORM1 for all ∈ (0, η].Proof. We regard w 0 t as the origin 0. For simplicity, write DISPLAYFORM2 Now we differentiatef with respect to, that is, DISPLAYFORM3 Recall thatp j = p j. Rewrite pj pj = x j and use f in the proof of Lemma 2 DISPLAYFORM4 Since f < 0 by the proof of Lemma 2, DISPLAYFORM5 By using x j = 1 and applying the Cauchy inequality, DISPLAYFORM6 Define r = min j p j. If ) where η is a learning rate. To prove Corollary 3.1, we need to showκ(w 0 t+1) <κ(w 0 t) which is equivalent to DISPLAYFORM7 DISPLAYFORM8 Since DISPLAYFORM9 is Lipschitz continuous on R BID2. If the batch size is sufficiently large and the learning rate η is sufficiently small, ĝ i (w DISPLAYFORM10 If we denote τ η as, we can convert to. DISPLAYFORM11 Since both w DISPLAYFORM12 where σ max (A) and σ min (A) are maximal and minimal singular values of A, respectively. If A is positive-definite matrix, then DISPLAYFORM13 Here λ max (A) and λ min (A) are maximal and minimal eigenvalues of A, respectively. Lemma D.1. If the condition number of the positive definite Hessian matrix of f Ii at a local minibatch solution p i, denoted by H i = ∇ w 2 f Ii (p i), is close to 1 (well-conditioned), then the direction to p i from w is approximately parallel to its negative gradient at w. That is, for all w ∈ R, DISPLAYFORM14 Proof. By the second order Taylor expansion, DISPLAYFORM15 Then, we only need to show DISPLAYFORM16 Since H i is positive definite, we can diagonalize it as H i = P i Λ i P i where P i is an orthonormal transition matrix for H i. DISPLAYFORM17 for sufficiently small ξ. This implies since DISPLAYFORM18.Then we can apply Theorem 3 andκ(w DISPLAYFORM19 where h(·) is increasing and Lipschitz continuous(Lemma 1). By Lemma D.1, we have DISPLAYFORM20 t ) where rhs is bounded by ξ. Hence, Lipschitz continuity of h(·) implies that DISPLAYFORM21 Since t is arbitrary, we can apply this for all w ∈ R including w 0 t+1. For all cases, their weighted layers do not have biases, and dropout BID30 is not applied. We use Xavier initializations BID8 and cross entropy loss functions for all experiments. FNN The FNN is a fully connected network with a single hidden layer. It has 800 hidden units with ReLU BID23 activations and a softmax output layer. DFNN The DFNN is a fully connected network with three hidden layers. It has 800 hidden units with ReLU activations in each hidden layers and a softmax output layer. CNN The network architecture of CNN is similar to the network introduced in BID10 as a CIFAR-10 plain network. 
The first layer is 3 × 3 convolution layer and the number of output filters are 16. After that, we stack of {4, 4, 3, 1} layers with 3 × 3 convolutions on the feature maps of sizes {32, 16, 8, 4} and the numbers of filters {16, 32, 64, 128}, respectively. The subsampling is performed with a stride of 2. All convolution layers are activated by ReLU and the convolution part ends with a global average pooling BID19, a 10-way fully-conneted layers, and softmax. Note that there are 14 stacked weighted layers.+BN We apply batch normalization right before the ReLU activations on all hidden layers.+Res The identity skip connections are added after every two convolution layers before ReLU nonlinearity (After batch normalization, if it is applied on it.). We concatenate zero padding slices backwards when the number of filters increases. We use neither data augmentations nor preprocessings except scaling pixel values into both MNIST and CIFAR-10. In the case of CIFAR-10, for validation, we randomly choose 5000 images out of 50000 training images. Figure 8: We showκ estimated from {1, 000 (black), 2, 000 (blue), 3, 000 (red)} random samples of the vMF distribution with underlying true κ in 10, 000-dimensional space, as the function of κ (in log-log scale except 0). For large κ, it is well-estimated byκ regardless of sample sizes. When the true κ approaches 0, we need a larger sample size to more accurately estimate this. 635, 200 20, 111.90 ± 13.04 14, 196.89 ± 14.91 11, 607.39 ± 9.27 FNN+BN 636, 800 20, 157.57 ± 14.06 14, 259.83 ± 16.38 11, 621.63 ± 6.83 DFNN 1, 915, 200 60, 619.02 ± 13.49 42, 849.86 ± 18.90 34, 983.31 ± 15.62 DFNN+BN 1, 920, 000 60, 789.84 ± 17.93 42, 958.71 ± 25.61 35, 075.99 ± 12.39 F SOME NOTES ABOUT THE κ ESTIMATE We point out that, for a small κ, the absolute value ofκ is not a precise indicator of the uniformity due to its dependence on the dimensionality, as was investigated earlier by BID5. In order to verify this claim, we run some simulations. First, we vary the number of samples and the true underlying κ with the fixed dimensionality (Unfortunately, we could not easily go over 10, 000 dimensions due to the difficulty in sampling from the vMF distribution with positive κ.). We draw {1, 000, 2, 000, 3, 000} random samples from the vMF distribution with the designated κ. We computeκ from these samples. As can be seen from Figure 8, theκ approaches the true κ from above as the number of samples increases. When the true κ is large, the estimation error rapidly becomes zero as the number of samples approaches 3, 000. When the true κ is low, however, the gap does not narrow completely even with 3, 000 samples. While fixing the true κ to 0 and the number of samples to {1, 000, 2, 000, 3, 000}, we vary the dimensionality to empirically investigate theκ. We choose to use 3, 000 samples to be consistent with our experiments in this paper. We run five simulations each and report both mean and standard deviation TAB1.We clearly observe the trend of increasingκ's with respect to the dimensions. This suggests that we should not compare the absolute values ofκ's across different network architectures due to the differences in the number of parameters. This agrees well with BID5 which empirically showed that the threshold for rejecting the null hypothesis of κ = p by usingκ where p is a fixed value grows with respect to the dimensions. We show plots from other four training runs in FIG12. 
For all runs, the curves of GS (inverse of SNR) and κ̂ are strongly correlated, while GNS (inverse of normSNR) is less correlated with GS. Figure 9: (a,c,e) We plot the evolution of the training loss (Train loss), validation loss (Valid loss), inverse of gradient stochasticity (SNR), inverse of gradient norm stochasticity (normSNR) and directional uniformity κ̂. We normalized each quantity by its maximum value over training for easier comparison on a single plot. In all the cases, SNR (orange) and κ̂ (red) are almost entirely correlated with each other, while normSNR is less correlated. (b,d,f) We further verify this by illustrating SNR-κ̂ scatter plots (red) and SNR-normSNR scatter plots (blue) in log-log scales. These plots suggest that the SNR is largely driven by the directional uniformity. Figures 10, 11 and 12: same layout and caption as Figure 9, shown for the remaining training runs.
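To make the Supplemental E description of the CNN concrete, below is a minimal PyTorch sketch of the 14-weighted-layer plain network: a first 3 × 3 convolution with 16 filters, stacks of {4, 4, 3, 1} further 3 × 3 convolutions with {16, 32, 64, 128} filters on feature maps of sizes {32, 16, 8, 4}, stride-2 subsampling, global average pooling, and a 10-way fully-connected layer; biases are disabled as stated above. Placing the stride-2 convolution at the first layer of each widening stage is an assumption, and the +Res variant with zero-padded identity skips every two convolutions is not shown.

```python
import torch.nn as nn

def conv3x3(c_in, c_out, stride=1, batch_norm=False):
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride,
                        padding=1, bias=False)]
    if batch_norm:
        layers.append(nn.BatchNorm2d(c_out))   # the +BN variant
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

def plain_cnn(batch_norm=False, num_classes=10):
    """1 + (4+4+3+1) convolutions + 1 fully-connected layer = 14 weighted layers."""
    layers = [conv3x3(3, 16, batch_norm=batch_norm)]      # first layer, 32x32
    widths, depths = [16, 32, 64, 128], [4, 4, 3, 1]
    c_in = 16
    for c_out, depth in zip(widths, depths):
        for i in range(depth):
            stride = 2 if (i == 0 and c_out != 16) else 1  # subsample when widening
            layers.append(conv3x3(c_in, c_out, stride, batch_norm))
            c_in = c_out
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(128, num_classes)]
    return nn.Sequential(*layers)
```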
Design of reliable systems must guarantee stability against input perturbations. In machine learning, such guarantee entails preventing overfitting and ensuring robustness of models against corruption of input data. In order to maximize stability, we analyze and develop a computationally efficient implementation of Jacobian regularization that increases classification margins of neural networks. The stabilizing effect of the Jacobian regularizer leads to significant improvements in robustness, as measured against both random and adversarial input perturbations, without severely degrading generalization properties on clean data. Stability analysis lies at the heart of many scientific and engineering disciplines. In an unstable system, infinitesimal perturbations amplify and have substantial impacts on the performance of the system. It is especially critical to perform a thorough stability analysis on complex engineered systems deployed in practice, or else what may seem like innocuous perturbations can lead to catastrophic consequences such as the Tacoma Narrows Bridge collapse and the Space Shuttle Challenger disaster . As a rule of thumb, well-engineered systems should be robust against any input shifts -expected or unexpected. Most models in machine learning are complex nonlinear systems and thus no exception to this rule. For instance, a reliable model must withstand shifts from training data to unseen test data, bridging the so-called generalization gap. This problem is severe especially when training data are strongly biased with respect to test data, as in domain-adaptation tasks, or when only sparse sampling of a true underlying distribution is available, as in few-shot learning. Any instability in the system can further be exploited by adversaries to render trained models utterly useless (; ; ; a; ; ; ;). It is thus of utmost importance to ensure that models be stable against perturbations in the input space. Various regularization schemes have been proposed to improve the stability of models. For linear classifiers and support vector machines , this goal is attained via an L 2 regularization which maximizes classification margins and reduces overfitting to the training data. This regularization technique has been widely used for neural networks as well and shown to promote generalization (; ;). However, it remains unclear whether or not L 2 regularization increases classification margins and stability of a network, especially for deep architectures with intertwining nonlinearity. In this paper, we suggest ensuring robustness of nonlinear models via a Jacobian regularization scheme. We illustrate the intuition behind our regularization approach by visualizing the classification margins of a simple MNIST digit classifier in Figure 1 (see Appendix A for more). Decision cells of a neural network, trained without regularization, are very rugged and can be unpredictably unstable (Figure 1a). On average, L 2 regularization smooths out these rugged boundaries but does not necessarily increase the size of decision cells, i.e., does not increase classification margins (Figure 1b). In contrast, Jacobian regularization pushes decision boundaries farther away from each training data point, enlarging decision cells and reducing instability (Figure 1c). The goal of the paper is to promote Jacobian regularization as a generic scheme for increasing robustness while also being agnostic to the architecture, domain, or task to which it is applied. 
In support of this, after presenting the Jacobian regularizer, we evaluate its effect both in isolation as well as in combination with multiple existing approaches that are intended to promote robustness and generalization. Our intention is to showcase the ease of use and complimentary nature of our proposed regularization. Domain experts in each field should be able to quickly incorporate our regularizer into their learning pipeline as a simple way of improving the performance of their state-of-the-art system. The rest of the paper is structured as follows. In Section 2 we motivate the usage of Jacobian regularization and develop a computationally efficient algorithm for its implementation. Next, the effectiveness of this regularizer is empirically studied in Section 3. As regularlizers constrain the learning problem, we first verify that the introduction of our regularizer does not adversely affect learning in the case when input data remain unperturbed. Robustness against both random and adversarial perturbations is then evaluated and shown to receive significant improvements from the Jacobian regularizer. We contrast our work with the literature in Section 4 and conclude in Section 5. Here we introduce a scheme for minimizing the norm of an input-output Jacobian matrix as a technique for regularizing learning with stochastic gradient descent (SGD). We begin by formally defining the input-output Jacobian and then explain an efficient algorithm for computing the Jacobian regularizer using standard machine learning frameworks. Let us consider the set of classification functions, f, which take a vectorized sensory signal, x ∈ R I, as input and outputs a score vector, z = f (x) ∈ R C, where each element, z c, is associated with likelihood that the input is from category, c. 1 In this work, we focus on learning this classification function as a neural network with model parameters θ, though our findings should generalize to any parameterized function. Our goal is to learn the model parameters that minimize the classification objective on the available training data while also being stable against perturbations in the input space so as to increase classification margins. 1 Throughout the paper, the vector z denotes the logit before applying a softmax layer. The probabilistic output of the softmax pc relates to zc via pc ≡ The input-output Jacobian matrix naturally emerges in the stability analysis of the model predictions against input perturbations. Let us consider a small perturbation vector, ∈ R I, of the same dimension as the input. For a perturbed input x = x +, the corresponding output values shift to where in the second equality the function was Taylor-expanded with respect to the input perturbation and in the third equality the input-output Jacobian matrix, was introduced. As the function f is typically almost everywhere analytic, for sufficiently small perturbations the higher-order terms can be neglected and the stability of the prediction is governed by the input-output Jacobian. From Equation, it is straightforward to see that the larger the components of the Jacobian are, the more unstable the model prediction is with respect to input perturbations. A natural way to reduce this instability then is to decrease the magnitude for each component of the Jacobian matrix, which can be realized by minimizing the square of the Frobenius norm of the input-output Jacobian, For linear models, this reduces exactly to L 2 regularization that increases classification margins of these models. 
For nonlinear models, however, Jacobian regularization does not equate to L 2 regularization, and we expect these schemes to affect models differently. In particular, predictions made by models trained with the Jacobian regularization do not vary much as inputs get perturbed and hence decision cells enlarge on average. This increase in stability granted by the Jacobian regularization is visualized in Figure 1, which depicts a cross section of the decision cells for the MNIST digit classification problem using a nonlinear neural network . The Jacobian regularizer in Equation can be combined with any loss objective used for training parameterized models. Concretely, consider a supervised learning problem modeled by a neural network and optimized with SGD. At each iteration, a mini-batch B consists of a set of labeled examples, {x α, y α} α∈B, and a supervised loss function, L super, is optimized possibly together with some other regularizer R(θ) -such as L 2 regularizer λWD 2 θ 2 -over the function parameter space, by minimizing the following bare loss function To integrate our Jacobian regularizer into training, one instead optimizes the following joint loss where λ JR is a hyperparameter that determines the relative importance of the Jacobian regularizer. By minimizing this joint loss with sufficient training data and a properly chosen λ JR, we expect models to learn both correctly and robustly. 2 Minimizing the Frobenius norm will also reduce the L 1 -norm, since these norms satisfy the inequalities ||J(x)||F ≤ i,c Jc;i (x) ≤ √ IC||J(x)||F. We prefer to minimize the Frobenius norm over the L 1 -norm because the ability to express the former as a trace leads to an efficient algorithm [see Equations through]. In the previous section we have argued for minimizing the Frobenius norm of the input-output Jacobian to improve robustness during learning. The main question that follows is how to efficiently compute and implement this regularizer in such a way that its optimization can seamlessly be incorporated into any existing learning paradigm. Recently, Sokolić et al. also explored the idea of regularizing the Jacobian matrix during learning, but only provided an inefficient algorithm requiring an increase in computational cost that scales linearly with the number of output classes, C, compared to the bare optimization problem (see explanation below). In practice, such an overhead will be prohibitively expensive for many large-scale learning problems, e.g. ImageNet classification has C = 1000 target classes . (Our scheme, in contrast, can be used for ImageNet: see Appendix H.) Here, we offer a different solution that makes use of random projections to efficiently approximate the Frobenius norm of the Jacobian. 3 This only introduces a constant time overhead and can be made very small in practice. When considering such an approximate algorithm, one naively must trade off efficiency against accuracy for computing the Jacobian, which ultimately trades computation time for robustness. Prior work by briefly considers an approach based on random projection, but without providing any analysis on the quality of the Jacobian approximation. Here, we describe our algorithm, analyze theoretical convergence guarantees, and verify empirically that there is only a negligible difference in model solution quality between training with the exact computation of the Jacobian as compared to training with the approximate algorithm, even when using a single random projection (see Figure 2). 
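Before describing the efficient estimator, it is worth fixing what is being approximated. The snippet below is a brute-force sketch that computes ||J(x)||²_F for a single example with autograd's full Jacobian; it is only practical for small models and serves as the kind of "exact" reference used in Figure 2. The wrapper name and the assumption that the model maps a batch of inputs to a batch of logits are ours.

```python
import torch
from torch.autograd.functional import jacobian

def jacobian_frobenius_exact(model, x):
    """||J(x)||_F^2 for one unbatched input x, via the full C-by-I Jacobian.
    Feasible only for small models; useful as a sanity check for estimators."""
    J = jacobian(lambda inp: model(inp.unsqueeze(0)).squeeze(0), x)
    return (J ** 2).sum()

# Conceptually, the joint loss above adds lambda_JR / 2 times the mini-batch
# average of this quantity to the supervised loss; in practice the
# random-projection estimator described next is used instead.
```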
Given that optimization is commonly gradient based, it is essential to efficiently compute gradients of the joint loss in Equation and in particular of the squared Frobenius norm of the Jacobian. First, we note that automatic differentiation systems implement a function that computes the derivative of a vector such as z with respect to any variables on which it depends, if the vector is first contracted with another fixed vector. To take advantage of this functionality, we rewrite the squared Frobienus norm as where a constant orthonormal basis, {e}, of the C-dimensional output space was inserted in the second equality and the last equality follows from definition and moving the constant vector inside the derivative. For each basis vector e, the quantity in the last parenthesis can then be efficiently computed by differentiating the product, e · z, with respect to input parameters, x. Recycling that computational graph, the derivative of the squared Frobenius norm with respect to the model parameters, θ, can be computed through backpropagation with any use of automatic differentiation. Sokolić et al. essentially considers this exact computation, which requires backpropagating gradients through the model C times to iterate over the C orthonormal basis vectors {e}. Ultimately, this incurs computational overhead that scales linearly with the output dimension C. Instead, we further rewrite Equation in terms of the expectation of an unbiased estimator where the random vectorv is drawn from the (C − 1)-dimensional unit sphere S C−1. Using this relationship, we can use samples of n proj random vectorsv µ to estimate the square of the norm as which converges to the true value as O(n −1/2 proj). The derivation of Equation and the calculation of its convergence make use of random-matrix techniques and are provided in Appendix B. Finally, we expect that the fluctuations of our estimator can be suppressed by cancellations within a mini-batch. With nearly independent and identically distributed samples in a mini-batch of size The difference between the exact method (cyan) and the random projection method with n proj = 1 (blue) and n proj = 3 (red orange) is negligible both in terms of accuracy (a) and the norm of the input-output Jacobian (b) on the test set for LeNet' models trained on MNIST with λ JR = 0.01. Shading indicates the standard deviation estimated over 5 distinct runs and dashed vertical lines signify the learning rate quenches. Algorithm 1 Efficient computation of the approximate gradient of the Jacobian regularizer. Inputs: mini-batch of |B| examples x α, model outputs z α, and number of projections n proj. Outputs: Square of the Frobenius norm of the Jacobian J F and its gradient ∇ θ J F. Uniform sampling from the unit sphere for each α. 1, we expect the error in our estimate to be of order (n proj |B|) −1/2. In fact, as shown in Figure 2, with a mini-batch size of |B| = 100, single projection yields model performance that is nearly identical to the exact method, with computational cost being reduced by orders of magnitude. The complete algorithm is presented in Algorithm 1. With a straightforward implementation in PyTorch and n proj = 1, we observed the computational cost of the training with the Jacobian regularization to be only ≈ 1.3 times that of the standard SGD computation cost, while retaining all the practical benefits of the expensive exact method. In this section, we evaluate the effectiveness of Jacobian regularization on robustness. 
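Before turning to the experiments, here is a minimal PyTorch sketch of Algorithm 1. It samples one (or a few) random unit vectors per example in the output space, backpropagates v̂ · z to the input, and rescales by C to form the unbiased estimate of ||J||²_F described above; create_graph=True is needed so that the regularizer itself can be differentiated with respect to the model parameters. Variable names and the illustrative training-step comment are ours.

```python
import torch
import torch.nn.functional as F

def jacobian_reg(x, z, n_proj=1):
    """Random-projection estimate of the mini-batch average of ||J(x)||_F^2,
    following Algorithm 1. x: input batch with requires_grad=True; z: logits."""
    B, C = z.shape
    J2 = 0.0
    for _ in range(n_proj):
        v = torch.randn(B, C, device=z.device)
        v = v / v.norm(dim=1, keepdim=True)            # uniform on the unit sphere
        zv = (z * v).sum()                              # sum of v . z over the batch
        (grad_x,) = torch.autograd.grad(zv, x, create_graph=True)
        J2 = J2 + C * grad_x.pow(2).reshape(B, -1).sum(dim=1)  # per-example estimate
    return (J2 / n_proj).mean()

# training step (sketch; hyperparameter names are illustrative):
# x.requires_grad_(True)
# z = model(x)
# loss = F.cross_entropy(z, y) + 0.5 * lambda_jr * jacobian_reg(x, z, n_proj=1)
# loss.backward()
```

With n_proj = 1 this adds roughly one extra backward-like pass per step, consistent with the ≈1.3× overhead quoted above.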
As all regularizers constrain the learning problem, we begin by confirming that our regularizer effectively reduces the value of the Frobenius norm of the Jacobian while simultaneously maintaining or improving generalization to an unseen test set. We then present our core , that Jacobian regularization provides significant robustness against corruption of input data from both random and adversarial perturbations (Section 3.2). In the main text we present mostly with the MNIST dataset; the corresponding experiments for the CIFAR-10 and ImageNet datasets are relegated to Appendices E and H. The following specifications apply throughout our experiments: Datasets: The MNIST data consist of black-white images of hand-written digits with 28-by-28 pixels, partitioned into 60,000 training and 10,000 test samples . We preprocess the data by subtracting the mean (0.1307) and dividing by the variance (0.3081) of the training data. No regularization 49.2 ± 1.9 67.0 ± 1.7 83.3 ± 0.7 90.4 ± 0.5 98.9 ± 0.1 32.9 ± 3.3 L 2 49.9 ± 2.1 68.1 ± 1.9 84.3 ± 0.8 91.2 ± 0.5 99.2 ± 0.1 4.6 ± 0.2 Dropout 49.7 ± 1.7 67.4 ± 1.7 83.9 ± 1.8 91.6 ± 0.5 98.6 ± 0.1 21.5 ± 2.3 Jacobian 49.3 ± 2.1 68.2 ± 1.9 84.5 ± 0.9 91.3 ± 0.4 99.0 ± 0.0 1.1 ± 0.1 All Combined 51.7 ± 2.1 69.7 ± 1.9 86.3 ± 0.9 92.7 ± 0.4 99.1 ± 0.1 1.2 ± 0.0 Implementation Details: For the MNIST dataset, we use the modernized version of LeNet-5 , henceforth denoted LeNet' (see Appendix D for full details). We optimize using SGD with momentum, ρ = 0.9, and our supervised loss equals the standard cross-entropy with one-hot targets. The model parameters θ are initialized at iteration t = 0 by the Xavier method and the initial descent value is set to 0. The hyperparameters for all models are chosen to match reference implementations: the L 2 regularization coefficient (weight decay) is set to λ WD = 5 · 10 −4 and the dropout rate is set to p drop = 0.5. The Jacobian regularization coefficient λ JR = 0.01, is chosen by optimizing for clean performance and robustness on the white noise perturbation. (See Appendix G for performance dependence on the coefficient λ JR .) The main goal of supervised learning involves generalizing from a training set to unseen test set. In dealing with such a distributional shift, overfitting to the training set and concomitant degradation in test performance is the central concern. For neural networks one of the most standard antidotes to this overfitting instability is L 2 reguralization (; ;). More recently, dropout regularization has been proposed as another way to circumvent overfitting . Here we show how Jacobian regualarization can serve as yet another solution. This is also in line with the observed correlation between the input-output Jacobian and generalization performance . We first verify that in the clean case, where the test set is composed of unseen samples drawn from the same distribution as the training data, the Jacobian regularizer does not adversely affect classification accuracy. Table 1 reports performance on the MNIST test set for the LeNet' model trained on either a subsample or all of the MNIST train set, as indicated. When learning using all 60,000 training examples, the learning rate is initially set to η 0 = 0.1 with mini-batch size |B| = 100 and then decayed ten-fold after each 50,000 SGD iterations; each simulation is run for 150,000 SGD iterations in total. 
When learning using a small subsample of the full training set, training is carried out using SGD with full batch and a constant learning rate η = 0.01, and the model performance is evaluated after 10,000 iterations. The main observation is that optimizing with the proposed Jacobian regularizer or the commonly used L 2 and dropout regularizers does not change performance on clean data within domain test samples in any statistically significant way. Notably, when few samples are available during learning, performance improved with increased regularization in the form of jointly optimizing over all criteria. Finally, in the right most column of Table 1, we confirm that the model trained with all data and regularized with the Jacobian minimization objective has an order of magnitude smaller Jacobian norm than models trained without Jacobian regularization. This indicates that while the model continues to make the same predictions on clean data, the margins around each prediction has increased as desired. We test the limits of the generalization provided by Jacobian regularization by evaluating an MNIST learned model on data drawn from a new target domain distribution -the USPS test set. Here, models are trained on the MNIST data as above, and the USPS test dataset consists of 2007 black-white images of hand-written digits with Table 2: Generalization on clean test data from an unseen domain. LeNet' models learned with all MNIST training data are evaluated for accuracy on data from the novel input domain of USPS test set. Here, each regularizer, including Jacobian, increases accuracy over an unregularized model. In addition, the regularizers may be combined for the strongest generalization effects. Averages and 95% confidence intervals are estimated over 5 distinct runs. No regularization L 16-by-16 pixels; images are upsampled to 28-by-28 pixels using bilinear interpolation and then preprocessed following the MNIST protocol stipulated above. Table 2 offers preliminary evidence that regularization, of each of the three forms studied, can be used to learn a source model which better generalizes to an unseen target domain. We again find that the regularizers may be combined to increase the generalization property of the model. Such a regularization technique can be immediately combined with state-of-the-art domain adaptation techniques to achieve further gains. This section showcases the main robustness of the Jacobian regularizer, highlighted in the case of both random and adversarial input perturbations. The real world can differ from idealized experimental setups and input data can become corrupted by various natural causes such as random noise and occlusion. Robust models should minimize the impact of such corruption. As one evaluation of stability to natural corruption, we perturb each test input image x to x = x + crop where each component of the perturbation vector is drawn from the normal distribution with variance σ noise as and the perturbed image is then clipped to fit into the range before preprocessing. As in the domain-adaptation experiment above, models are trained on the clean MNIST training data and then tested on corrupted test data. Results in Figure 3a show that models trained with the Jacobian regularization is more robust against white noise than others. This is in line with -and indeed quantitatively validates -the embiggening of decision cells as shown in Figure 1. 
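A sketch of the white-noise evaluation just described: noise is added in the original pixel space, the image is clipped back to the valid range, and only then normalized as during training. Treating σ_noise as the standard deviation of the Gaussian, assuming the loader yields unnormalized images in [0, 1], and reusing the MNIST normalization constants are the only assumptions beyond the text.

```python
import torch

def accuracy_under_white_noise(model, loader, sigma, mean=0.1307, std=0.3081):
    """Accuracy on test images perturbed by Gaussian pixel noise of scale sigma."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:                        # x in [0, 1], unnormalized
            x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            x_noisy = (x_noisy - mean) / std       # same preprocessing as training
            pred = model(x_noisy).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```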
Adversarial Perturbations: The world is not only imperfect but also possibly filled with evil agents that can deliberately attack models. Such adversaries seek a small perturbation to each input example that changes the model predictions while also being imperceptible to humans. Obtaining the actual smallest perturbation is likely computationally intractable, but there exist many tractable approxima-tions. The simplest attack is the white-box untargeted fast gradient sign method (FGSM) , which distorts the image as x = x + crop with This attack aggregates nonzero components of the input-output Jacobian to a substantial effect by adding them up with a consistent sign. In Figure 3b we consider a stronger attack, projected gradient descent (PGD) method , which iterates the FGSM attack in Equation k times with fixed amplitude ε FGSM = 1/255 while also requiring each pixel value to be within 32/255 away from the original value. Even stronger is the Carlini-Wagner (CW) attack presented in Figure 3c, which yields more reliable estimates of distance to the closest decision boundary (see Appendix F). Results unequivocally show that models trained with the Jacobian regularization is again more resilient than others. As a baseline defense benchmark, we implemented adversarial training, where the training image is corrupted through the FGSM attack with uniformly drawn amplitude ε FGSM ∈ [0, 0.01]; the Jacobian regularization can be combined with this defense mechanism to further improve the robustness. 5 Appendix A additionally depicts decision cells in adversarial directions, further illustrating the stabilizing effect of the Jacobian regularizer. To our knowledge, double backpropagation (; is the earliest attempt to penalize large derivatives with respect to input data, in which (∂L super /∂x) 2 is added to the loss in order to reduce the generalization gap. 6 Different incarnations of a similar idea have appeared in the following decades (; ; ; ; ; ; ;). Among them, Jacobian regularization as formulated herein was proposed by to combat against adversarial attacks. However, the authors did not implement it due to a computational concern -resolved by us in Section 2 -and instead layer-wise Jacobians were penalized. Unfortunately, minimizing layer-wise Jacobians puts a stronger constraint on model capacity than minimizing the input-output Jacobian. In fact, several authors subsequently claimed that the layer-wise regularization degrades test performance on clean data (; b) and in marginal improvement of robustness . Very recently, full Jacobian regularization was implemented in Sokolić et al., but in an inefficient manner whose computational overhead for computing gradients scales linearly with the number of output classes C compared to unregularized optimization, and thus they had to resort back to the layer-wise approximation above for the task with a large number of output classes. This computational problem was resolved by in exactly the same way as our approach (referred to as spherical SpectReg in). As emphasized in Section 2, we performed more thorough theoretical and empirical convergence analysis and showed that there is practically no difference in model solution quality between the exact and random projection method in terms of test accuracy and stability. Further, both of these two references deal only with the generalization property and did not fully explore strong distributional shifts and noise/adversarial defense. 
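For concreteness, below is a minimal sketch of the FGSM and PGD attacks used above, written in the original pixel space with values in [0, 1]. The per-pixel L∞ budget of 32/255 and the step size ε_FGSM = 1/255 follow the text, while folding any normalization into model and the default number of PGD steps are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x' = clip(x + eps * sign(grad_x L))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def pgd(model, x, y, step=1/255, max_dist=32/255, n_steps=32):
    """Iterated FGSM with an L-infinity ball constraint around the clean x."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv = fgsm(model, x_adv, y, step)
        x_adv = torch.min(torch.max(x_adv, x - max_dist), x + max_dist)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```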
In particular, we have visualized (Figure 1) and quantitatively borne out (Section 3) the stabilizing effect of Jacobian regularization on classification margins of a nonlinear neural network. In this paper, we motivated Jacobian regularization as a task-agnostic method to improve stability of models against perturbations to input data. Our method is simply implementable in any open source automatic differentiation system, and additionally we have carefully shown that the approximate nature of the random projection is virtually negligible. Furthermore, we have shown that Jacobian regularization enlarges the size of decision cells and is practically effective in improving the generalization property and robustness of the models, which is especially useful for defense against input-data corruption. We hope practitioners will combine our Jacobian regularization scheme with the arsenal of other tricks in machine learning and prove it useful in pushing the (decision) boundary of the field and ensuring stable deployment of models in everyday life. We show in Figure S1 plots similar to the ones shown in Figure 1 in the main text, but with different seeds for training models and around different test data points. Additionally, shown in Figure S2 are similar plots but with different scheme for hyperplane slicing, based on adversarial directions. Interestingly, the adversarial examples constructed with unprotected model do not fool the model trained with Jacobian regularization. Figure S2: Cross sections of decision cells in the input space for LeNet' models trained on the MNIST dataset along adversarial hyperplanes. Namely, given a test sample (black dot), the hyperplane through it is spanned by two adversarial examples identified through FGSM, one for the model trained with L 2 regularization λ WD = 0.0005 and dropout rate 0.5 but no defense (dark-grey dot; left figure) and the other for the model with the same standard regularization methods plus Jacobian regularization λ JR = 0.01 and adversarial training (white-grey dot; right figure). Let us denote by Ev ∼S C−1 [F (v) ] the average of the arbitrary function F over C-dimensional vectorsv sampled uniformly from the unit sphere S C−1. As in Algorithm 1, such a unit vector can be sampled by first sampling each component v c from the standard normal distribution N and then normalizing it asv ≡ v/||v||. In our derivation, the following formula proves useful: where e is an arbitrary C-dimensional unit vector and dµ (O) First, let us derive Equation. Using Equation, the square of the Frobenius norm can then be written as where in the second line we insert the identity matrix in the form I = O T O and make use of the cyclicity of the trace; in the third line we rewrite the trace as a sum over an orthonormal basis {e} of the C-dimensional output space; in the forth line Equation was used; and in the last line we note that the expectation no longer depends on the basis vectors e and perform the trivial sum. This completes the derivation of Equation. Next, let us compute the variance of our estimator. Using tricks as before, but in reverse order, yields In this form, we use the following formula (Collins andŚniady, 2006;) to evaluate the first term After the dust settles with various cancellations, the expression for the variance simplifies to We can strengthen our claim by using the relation ||AB|| The right-hand side is independent of J and thus independent of the details of model architecture and particular data set considered. 
In the end, the relative error of the random-projection estimate for ||J(x)|| 2 F with n proj random vectors will diminish as some order-one number divided by n −1/2 proj. In addition, upon averaging ||J(x)|| 2 F over a mini-batch of samples of size |B|, we expect the relative error of the Jacobian regularization term to be additionally suppressed by ∼ 1/ |B|. Finally, we speculate that in the large-C limit -possibly relevant for large-class datasets such as the ImageNet -there might be additional structure in the Jacobian traces (e.g. the central-limit concentration) that leads to further suppression of the variance. It is also possible to derive a closed-form expression for the derivative of the Jacobian regularizer, thus bypassing any need for random projections while maintaining computational efficiency. The expression is here derived for multilayer perceptron, though we expect similar computations may be done for other models of interest. We provide full details in case one may find it practically useful to implement explicitly in any open-source packages or generalize it to other models. Let us denote the input x i and the output z c = z Defining the layer-wise Jacobian as the total input-output Jacobian is given by The Jacobian regularizer of interest is defined as (up to the magnitude coefficient λ JR) Its derivatives with respect to biases and weights are denoted as Some straightforward algebra then yields and where we have set B Algorithmically, we can iterate the following steps for = L, L − 1,..., 1: 2. Compute Note that the layer-wise Jacobians, J 's, are calculated within the standard backpropagation algorithm. The core of the algorithm is in the computation of Ω j −1,j in Equation. It is obtained by first backpropagating from − 1 to 1, then forwardpropagating from 1 to L, and finally backpropagating from L to + 1. It thus makes the cycle around, hence the name cyclopropagation. In order to describe architectures of our convolutional neural networks in detail, let us associate a tuple [F, C in → C out, S, P ; M] to a convolutional layer with filter width F, number of in-channels C in and out-channels C out, stride S, and padding P, followed by nonlinear activations and then a max-pooling layer of width M (note that M = 1 corresponds to no pooling). Let us also associate a pair [N in → N out] to a fully-connected layer passing N in inputs into N out units with activations and possibly dropout. With these notations, our LeNet' model used for the MNIST experiments consists of a input followed by a convolutional layer with [5, 1 → 6, 1, 2; 2], another one with [5, 6 → 16, 1, 0; 2], a fully-connected layer with [2100 → 120] and dropout rate p drop, another fully-connected layer with [120 → 84] and dropout rate p drop, and finally a fully-connected layer with [84 → 10], yielding 10-dimensional output logits. For our nonlinear activations, we use the hyperbolic tangent. For the CIFAR-10 dataset, we use the model architecture specified in the paper on defensive distillation (b), abbreviated as DDNet. Specifically, the model consists of a input followed by convolutional layers with In addition, we experiment with a version of ResNet-18 modified for the 32-by-32 input size of CIFAR-10 and shown to achieve strong performance on clean image recognition. 9 For this architecture, we use the standard PyTorch initialization of the parameters. Data preproceessing and optimization hyperparameters for both architectures are specified in the next section. 
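The bracket notation just introduced maps directly onto a small helper; the sketch below builds one [F, C_in → C_out, S, P; M] block and uses it for the two LeNet' convolutional layers. The tanh nonlinearity follows the text; the fully-connected head ([2100 → 120] and [120 → 84] with dropout, then [84 → 10]) is quoted above and omitted here, since the flattened feature size depends on the input resolution.

```python
import torch.nn as nn

def conv_block(f, c_in, c_out, s, p, m, activation=nn.Tanh):
    """The block denoted [F, C_in -> C_out, S, P; M]: an FxF convolution with
    stride S and padding P, a nonlinearity, and max-pooling of width M
    (M = 1 means no pooling)."""
    layers = [nn.Conv2d(c_in, c_out, kernel_size=f, stride=s, padding=p),
              activation()]
    if m > 1:
        layers.append(nn.MaxPool2d(m))
    return nn.Sequential(*layers)

# LeNet' convolutional part as specified above.
lenet_features = nn.Sequential(
    conv_block(5, 1, 6, 1, 2, 2),    # [5, 1 -> 6, 1, 2; 2]
    conv_block(5, 6, 16, 1, 0, 2),   # [5, 6 -> 16, 1, 0; 2]
    nn.Flatten(),
)
```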
For our ImageNet experiments, we use the standard ResNet-18 model available within PyTorch (torchvision.models.resnet) together with standard weight initialization. Note that there is typically no dropout regularization in the ResNet models but we still examine the effect of L 2 regularization in addition to Jacobian regularization. is vacuous. 9 Model available at: https://github.com/kuangliu/pytorch-cifar. No regularization 12.9 ± 0.7 15.5 ± 0.7 20.5 ± 1.3 26.6 ± 1.0 76.8 ± 0.4 115.1 ± 1.8 L • Datasets: the CIFAR-10 dataset consists of color images of objects -divided into ten categories -with 32-by-32 pixels in each of 3 color channels, each pixel ranging in, partitioned into 50,000 training and 10,000 test samples . The images are preprocessed by uniformly subtracting 0.5 and multiplying by 2 so that each pixel ranges in [−1, 1]. • Optimization: essentially same as for the LeNet' on MNIST, except the initial learning rate for full training. Namely, model parameters θ are initialized at iteration t = 0 by the Xavier method for DDNet and standard PyTorch initialization for ResNet-18, along with the zero initial velocity v(t = 0) = 0. They evolve under the SGD dynamics with momentum ρ = 0.9, and for the supervised loss we use cross-entropy with one-hot targets. For training with the full training set, mini-batch size is set as |B| = 100, and the learning rate η is initially set to η 0 = 0.01 for the DDNet and η 0 = 0.1 for the ResNet-18 and in both cases quenched ten-fold after each 50,000 SGD iterations; each simulation is run for 150,000 SGD iterations in total. For few-shot learning, training is carried out using full-batch SGD with a constant learning rate η = 0.01, and model performance is evaluated after 10,000 iterations. • Hyperparameters: the same values are inherited from the experiments for LeNet' on the MNIST and no tuning was performed. Namely, the weight decay coefficient λ WD = 5·10 −4; the dropout rate p drop = 0.5; the Jacobian regularization coefficient λ JR = 0.01; and adversarial training with uniformly drawn FGSM amplitude ε FGSM ∈ [0, 0.01]. The relevant for generalization properties are shown in Table S3. One difference from the MNIST counterparts in the main text is that dropout improves test accuracy more than L 2 regularization. Meanwhile, for both setups the order of stability measured by ||J|| F on the test set more or less stays the same. Most importantly, turning on the Jacobian regularizer improves the stability by orders of magnitude, and combining it with other regularizers do not compromise this effect. The relevant for robustness against input-data corruption are plotted in Figures S3 and S4. The success of the Jacobian regularizer is retained for the white-noise and CW adversarial attack. For the PGD attack are mixed at high degradation level when Jacobian regularization is combined with adversarial training. This might be an artifact stemming from the simplicity of the PGD search algorithm, which overestimates the shortest distance to adversarial examples in comparison to the CW attack (see Appendix F), combined with Jacobian regularization's effect on simplifying the loss landscape with respect to the input space that the attack methods explore. In Figure S5, we compare the effects of various input perturbations on changing model's decision. 
For each attack method, fooling L 2 distance in the original input space -before preprocessing -is measured between the original image and the fooling image as follows (for all attacks, cropping is performed to put pixels in the range in the orignal space): (i) for the white noise attack, a random direction in the input space is chosen and the magnitude of the noise is cranked up until the model yields wrong prediction; (ii) for the FGSM attack, the gradient is computed at a clean sample and then the magnitude ε FGSM is cranked up until the model is fooled; (iii) for the PGD attack, the attack step with ε FGSM = 1/255 is iterated until the model is fooled [as is customary for PGD and described in the main text, there is saturation constraint that demands each pixel value to be within 32/255 (MNIST) and 16/255 (CIFAR-10) away from the original clean value]; and (iv) the CW attack halts when fooling is deemed successful. Here, for the CW attack (see for details of the algorithm) the Adam optimizer on the logits loss (their f 6) is used with the learning rate 0.005, and the initial value of the conjugate variable, c, is set to be 0.01 and binary-searched for 10 iterations. For each model and attack method, the shortest distance is evaluated for 1,000 test samples, and the test error (= 100% − test accuracy) at a given distance indicates the amount of test examples misclassified with the fooling distance below that given distance. Below, we highlight various notable features. • The most important highlight is that, in terms of effectiveness of attacks, CW > PGD > FGSM > white noise, duly respecting the complexity of the search methods for finding adversarial examples. Compared to CW attack, the simple methods such as FGSM and PGD attacks could sometime yield erroneous picture for the geometry of the decision cells, especially regarding the closest decision boundary., we used 10,000 test examples (rather than 1,000 used for other figures) to compensate for the lack of multiple runs.
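As an illustration of item (i), the following sketch measures the white-noise fooling distance for one example: a single random direction is drawn and its magnitude is increased until the (clipped) image changes the prediction, at which point the L2 distance in the original input space is reported. The search grid, the [0, 1] pixel range, and folding normalization into model are assumptions.

```python
import torch

def white_noise_fooling_distance(model, x, y, max_scale=50.0, n_steps=200):
    """Smallest L2 fooling distance along one random direction for a single
    correctly classified example x (no batch dimension, values in [0, 1]).
    Returns None if the model is not fooled within max_scale."""
    model.eval()
    direction = torch.randn_like(x)
    direction = direction / direction.norm()
    with torch.no_grad():
        for scale in torch.linspace(0.0, max_scale, n_steps)[1:]:
            x_pert = (x + scale * direction).clamp(0.0, 1.0)  # cropping to valid range
            if model(x_pert.unsqueeze(0)).argmax(dim=1).item() != y:
                return (x_pert - x).norm().item()
    return None
```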
With the increasing demand to deploy convolutional neural networks (CNNs) on mobile platforms, the sparse kernel approach was proposed, which could save more parameters than the standard convolution while maintaining accuracy. However, despite this great potential, no prior research has pointed out how to craft a sparse kernel design with such potential (i.e., an effective design), and all prior works simply adopt combinations of existing sparse kernels such as group convolution. Meanwhile, due to the large design space it is also impossible to try all combinations of existing sparse kernels. In this paper, we are the first in the field to consider how to craft an effective sparse kernel design by eliminating the large design space. Specifically, we present a sparse kernel scheme that illustrates how to reduce the space from three aspects. First, in terms of composition we remove designs composed of repeated layers. Second, to remove designs with large accuracy degradation, we find a unified property named information field behind various sparse kernel designs, which can directly indicate the final accuracy. Last, we remove designs in two cases where a better parameter efficiency could be achieved. Additionally, we provide a detailed efficiency analysis of the final 4 designs in our scheme. Experimental results validate the idea of our scheme by showing that it is able to find designs which use parameters and computation more efficiently while achieving similar or higher accuracy. CNNs have achieved unprecedented success in visual recognition tasks. The development of mobile devices drives the increasing demand to deploy these deep networks on mobile platforms such as cell phones and self-driving cars. However, CNNs are usually resource-intensive, making them difficult to deploy on such memory-constrained and energy-limited platforms. To enable deployment, one intuitive idea is to reduce the model size. Model compression is the major research trend toward this goal. Several techniques have previously been proposed, including pruning BID18, quantization BID28 and low-rank approximation BID6. Though these approaches can offer a reasonable parameter reduction with minor accuracy degradation, they suffer from three drawbacks: 1) the irregular network structure after compression, which limits performance and throughput on GPU; 2) the increased training complexity due to the additional compression or re-training process; and 3) the heuristic compression ratios depending on networks, which cannot be precisely controlled. Recently the sparse kernel approach was proposed to mitigate these problems by directly training networks using structured (large-granularity) sparse convolutional kernels with fixed compression ratios. The idea of sparse kernels was originally proposed in the form of different types of convolutional approaches. Later, researchers explored their usage in the context of CNNs by combining some of these sparse kernels to save parameters/computation against the standard convolution. For example, MobileNets BID12 realize 7x parameter savings with only 1% accuracy loss by adopting the combination of two sparse kernels, depthwise convolution BID26 and pointwise convolution BID20, to replace the standard convolution in their networks. However, despite the great potential of the sparse kernel approach to save parameters/computation while maintaining accuracy, it remains unclear in the field how to craft a sparse kernel design with such potential (i.e., an effective sparse kernel design).
Prior works like MobileNet BID12 and Xception BID1 just adopt simple combinations of existing sparse kernels, and none of them explains why such a design was chosen. Meanwhile, it has been a long-standing question in the field whether there is any other sparse kernel design that is more efficient than all state-of-the-art ones while also maintaining a similar accuracy to the standard convolution. To answer this question, a naive idea is to try all possible combinations and obtain the final accuracy for each of them. Unfortunately, the number of combinations grows exponentially with the number of kernels in a design, and thus it is infeasible to train each of them. Specifically, even if we limit the design space to four common types of sparse kernels - group convolution BID16, depthwise convolution BID26, pointwise convolution BID20 and pointwise group convolution - the total number of possible combinations would be 4^k, given that k is the number of sparse kernels we allow in a design (note that each sparse kernel can appear more than once in a design). In this paper, we craft effective sparse kernel designs by efficiently eliminating poor candidates from the large design space. Specifically, we reduce the design space from three aspects: composition, performance and efficiency. First, observing that in normal CNNs it is quite common to have multiple blocks which contain repeated patterns such as layers or structures, we reduce the design space by ignoring combinations that include repeated patterns. Second, realizing that removing designs with large accuracy degradation would significantly reduce the design space, we identify an easily measurable quantity named information field behind various sparse kernel designs, which is closely related to the model accuracy. We get rid of designs that lead to a smaller information field compared to the standard convolution model. Last, in order to achieve better parameter efficiency, we remove redundant sparse kernels in a design if the same size of information field is already retained by other sparse kernels in the design. With all the aforementioned knowledge, we present a sparse kernel scheme that incorporates the final four different designs manually reduced from the original design space. Additionally, in practice researchers would also like to select the most parameter/computation-efficient sparse kernel designs based on their needs, which drives the demand to study the efficiency of different sparse kernel designs. Previously no research has investigated the efficiency of any sparse kernel design. In this paper, three aspects of efficiency are addressed for each of the sparse kernel designs in our scheme: 1) what are the factors that could affect the efficiency of each design? 2) how does each factor affect the efficiency alone? 3) when is the best efficiency achieved when combining all these factors in different real situations? Besides, we show that the accuracy of models composed of the new designs in our scheme is better than that of all state-of-the-art methods under the same parameter budget, which implies that more efficient designs are constructed by our scheme and again validates the effectiveness of our idea. The contributions of our paper can be summarized as follows:• We are the first in the field to point out that the information field is the key to sparse kernel designs.
Meanwhile we observe the model accuracy is positively correlated to the size of the information field.• We present a sparse kernel scheme to illustrate how to eliminate the original design space from three aspects and incorporate the final 4 types of designs along with rigorous mathematical foundation on the efficiency.• We provide some potential network designs which are in the scope of our scheme and have not been explored yet and show that they could have superior performances. We first give a brief introduction to the standard convolution and the four common styles of sparse kernels. Standard convolution is the basic component in most CNN models, kernels of which can be described as a 4-dimensional tensor: W ∈ R C×X×Y ×F, where C and F are the numbers of the input and the output channels and X and Y are the spatial dimensions of the kernels. Let I ∈ R C×U ×V be the input tensor, where U and V denote the spatial dimensions of the feature maps. Therefore, the output activation at the output feature map f and the spatial location (x, y) can be expressed as, DISPLAYFORM0 Group convolution is first used in AlexNet BID16 ) for distributing the model over two GPUs. The idea of it is to split both input and output channels into disjoint groups and each output group is connected to a single input group and vice versa. By doing so, each output channel will only depend on a fraction of input channels instead of the entire ones, thus a large amount of parameters and computation could be saved. Considering the number of group as M, the output activation (f, x, y) can be calculated as, DISPLAYFORM0 The idea of depthwise convolution is similar to the group convolution, both of which sparsifies kernels in the channel extent. In fact, depthwise convolution can be regarded as an extreme case of group convolution when the number of groups is exactly the same with the number of input channels. Also notice that in practice usually the number of channels does not change after the depthwise convolution is applied. Thus, the equation above can be further rewritten as, DISPLAYFORM1 Pointwise convolution is actually a 1 × 1 standard convolution. Different from the group convolution, pointwise convolution achieves the sparsity over the spatial extent by using kernels with 1 × 1 spatial size. Similarly, the equation below shows how to calculate one output activation from the pointwise convolution in detail, DISPLAYFORM0 To sparsify kernels in both the channel and the spatial extents, the group convolution can be combined together with the pointwise convolution, i.e., pointwise group convolution. Besides the use of 1 × 1 spatial kernel size, in pointwise group convolution each output channel will also depend on a portion of input channels. The specific calculations for one output activation can be found from the equation below, DISPLAYFORM0 3 SPARSE KERNEL SCHEMERecall that the total number of combinations will grow exponentially with the number of kernels in a design, which could in a large design space. In this paper, we craft the effective sparse kernel design (i.e., design that consumes less parameters but maintains accuracy with the standard convolution) by efficiently examining the design space. Specifically, first we determine the initial design space by setting the maximum number of sparse kernels (length). 
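As a concrete reference before reducing the design space, the five kernel types just defined map directly onto the groups and kernel_size arguments of an ordinary 2D convolution layer. The sketch below shows the correspondence and prints the parameter count of each variant; the channel and group numbers are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

C, F, M = 64, 128, 4           # input channels, output channels, number of groups (illustrative)
x = torch.randn(1, C, 32, 32)  # a dummy feature map

# Standard convolution: every output channel sees all C input channels in a 3x3 window.
standard = nn.Conv2d(C, F, kernel_size=3, padding=1)

# Group convolution: channels are split into M disjoint groups; each output channel
# only sees C / M input channels.
group = nn.Conv2d(C, F, kernel_size=3, padding=1, groups=M)

# Depthwise convolution: the extreme case of group convolution with groups == C,
# so each output channel depends on exactly one input channel.
depthwise = nn.Conv2d(C, C, kernel_size=3, padding=1, groups=C)

# Pointwise convolution: a 1x1 standard convolution, sparse in the spatial extent only.
pointwise = nn.Conv2d(C, F, kernel_size=1)

# Pointwise group convolution: 1x1 kernel combined with channel grouping.
pointwise_group = nn.Conv2d(C, F, kernel_size=1, groups=M)

for name, layer in [("standard", standard), ("group", group), ("depthwise", depthwise),
                    ("pointwise", pointwise), ("pointwise group", pointwise_group)]:
    n_params = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"{name:16s} output {tuple(layer(x).shape)}  params {n_params}")
```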
To decide this number, two aspects are considered: 1) in order to give the potential to find more efficient designs which have not been explored yet, the maximum length of sparse kernel design should be greater than the numbers of all state-of-the-art ones; 2) it is also obvious that the greater length is more likely to consume more parameters, which contradicts our goal to find more efficient designs. Therefore combining the two aspects together, we set the maximum length to 6, which is not only greater than the largest number (i.e., 3) in all current designs, but also makes designs with the maximum length could still be able to be more efficient than the standard convolution. We then start to reduce the design space from three aspects: composition, performance and efficiency. In the following paragraphs, we will introduce the three aspects in detail. Composition. The overall layout in CNNs provides a good insight for us to quickly reduce the design space. Specifically, in normal CNNs it is quite common to have multiple stages/blocks which contain repeated patterns such as layers or structures. For example, in both VGG BID27 and ResNet BID9 there are 4 stages and inside each stage are several same repeated layers. Inspired by the fact, when we replace the standard convolution using various sparse kernel designs intuitively there is no need to add these repeated patterns to the original place of each standard convolutional layer. For example, suppose there are three types of sparse kernels, A, B and C, then the following combinations should be removed as containing repeated patterns: AAAAAA, ABABAB and ABCABC. AAAAAA contains the repeated pattern of A, while ABABAB and ABCABC have the patterns of AB and ABC respectively. Repeated patterns are also easy to detect, which makes the entire process extremely fast. To find such patterns, we can use the regular expression matching. The corresponding expression for the matched combinations should be (.+?)1+, where (.+?) denotes the first capturing group which contains at least one character, but as few as possible, and 1+ means try to match the same character(s) as most recently matched by the first group as many times as possible. As a , we can efficiently eliminate the design space with the help of the regular expression. Performance. There are lots of sparse kernel designs that could in large accuracy degradation, which gives us another opportunity to greatly reduce the design space. To get rid of them, we need an easily measurable (i.e., no training) property behind various designs that could directly indicate the final accuracy. Fortunately, after analyzing many prior works and conducting many experimental studies, we do find such property. We name it information field. Definition 1. (Information Field) Information field is the area in input tensor which one or more convolutional layers use to generate one output activation. For one output tensor, sizes of information fields for all activations are usually the same. FIG0 shows the spatial and channel dependency for the standard convolution, from which we can also find out the size of information field. Assuming the spatial kernel size is 3 × 3, starting from any output node in the figure we can see that in terms of the channel dimension each output channel will connect to all input channels and for the spatial dimensions one output activation will depend on activations inside a 3 × 3 spatial area. 
Therefore the information field for the standard convolution will be (3, 3, C) where C is the number of input channels. We find that information field is the key behind all sparse kernel designs, and also observe the model accuracy is positively correlated to the size of information field, the idea of which is also validated by later experiments in Section 4.2.With the help of information field, sparse kernel designs that would in large accuracy degradation could be easily removed from the original design space without actually training the models. Specifically, first for each design we calculate the size of information field by adding up it sequentially from the leftmost kernel to the rightmost one. For example, we use a three-dimensional vector,, to represent the initial values of information field on three different dimensions (i.e., two spatial dimensions and one channel dimension), then corresponding values of the vector will be updated based on the known properties of the sparse kernel encountered. After the rightmost kernel, the final vector we get will be the size of information field for the design. Finally we compare it with that of the standard convolution. If the two sizes are the same, we will keep the design, otherwise we will simply discard it. For instance, the design composed of one depthwise convolution will be removed since the information field of it only contains one channel area instead of the full channel space from the standard convolution. Efficiency. Considering a better parameter efficiency more designs could be removed. In paragraphs above, we only eliminate designs in terms of the accuracy via checking the size of information field. Recall that our goal is also to find efficient designs. Thus, while ensuring the accuracy we also need to take the efficiency into consideration. In fact, there are two cases that could worsen the efficiency and should be regarded as redundant designs: 1) it can be easily verified that the size of information field will never decrease when passing through sparse kernels in a design, thus there could be one situation that after one kernel, the size of information field still remains the same, which means the kernel does not help with regards to the information field even if the final size is the same as the standard convolution; 2) it is also possible that the same size of information field with the standard convolution is already retained by a fraction of sparse kernels in a design, in which case, other kernels can also be considered as not contributing to the information field. In terms of parameter efficiency designs in both of the two cases contain non-contributed kernels, therefore we can remove them from the original design space. To detect designs within the two cases, we introduce a early-stop mechanism during the process to check the size of information field above. Specifically, as per the two cases we check two things when adding up information field from the leftmost kernel in a design: 1) we record the size of information field before entering each kernel and compare it with the new size calculated after that kernel. If the two sizes are the same, we stop adding up information field for the design and directly go to the next one; 2) we add another conditional check every time we get a new size of information field. If the size is still less than or equal to that of the standard convolution, we will continue to add up information field from the next kernel, otherwise we will stop and go to the next design. 
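Putting the three reduction steps together, the following is a rough, self-contained sketch of the design-space filter described above. The regular expression in the text is presumably (.+?)\1+ with a backreference (the backslash appears to have been lost in extraction). The information-field bookkeeping below assumes stride-1 layers, a constant channel width, and that the channel permutation between grouped layers spreads channel coverage as widely as possible; the group numbers attached to each kernel type are illustrative.

```python
import re
from itertools import product

C = 64                          # channel width, kept constant across layers for simplicity
STANDARD_FIELD = (3, 3, C)      # information field of one 3x3 standard convolution
KERNEL_SPEC = {                 # kernel code -> (spatial size, number of groups); illustrative
    "D": (3, C),                # depthwise 3x3
    "P": (1, 1),                # pointwise 1x1
    "G": (3, 4),                # group conv 3x3, M = 4 groups
    "Q": (1, 16),               # pointwise group conv, N = 16 groups
}

def repeated(design):
    """Composition check: the whole design is a repetition of a shorter pattern (e.g. 'DPDP')."""
    return re.fullmatch(r"(.+?)\1+", design) is not None

def information_field(design):
    """Performance/efficiency check: accumulate the field left to right with early stopping."""
    spatial, channels = 1, 1
    for kernel in design:
        k, g = KERNEL_SPEC[kernel]
        new = (spatial + k - 1, min(C, channels * (C // g)))
        if new == (spatial, channels):                        # kernel adds nothing -> redundant
            return None
        if (spatial, spatial, channels) == STANDARD_FIELD:    # field already complete -> redundant
            return None
        spatial, channels = new
    return (spatial, spatial, channels)

candidates = ["".join(c) for L in range(1, 7) for c in product(KERNEL_SPEC, repeat=L)]
kept = [d for d in candidates
        if not repeated(d) and information_field(d) == STANDARD_FIELD]
print(len(candidates), "raw combinations ->", len(kept), "kept, e.g.", kept[:6])
```

The list surviving this sketch is broader than the paper's final four designs, since the grouping and bottleneck choices and the manual reduction step are not modeled here; note also that PW+DW+PW is filtered out by the plain checks and, as stated above, only survives in the paper once the bottleneck structure is added.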
With all aforementioned knowledge, we manually reduce the original design space (4 1 +4 2 +· · ·+4 6) to 4 different types of sparse kernel designs 1. In the next section we will present the 4 final designs respectively. Also notice that other techniques to save parameters such as bottleneck structure BID9 appear to be complimentary to our approach, which can be combined together to further improve parameter efficiency while maintaining accuracy. To validate this idea, we also consider the bottleneck structure when reducing the design space. Depthwise Convolution + Pointwise Convolution. Unlike the standard convolution which combines spatial and channel information together to calculate the output, the combination of depthwise convolution (DW) and pointwise convolution (PW) split the two kinds of information and deal with them separately. The output activation at location (f, x, y) can be written as DISPLAYFORM0 where W 1 and W 2 correspond to the kernels of depthwise convolution and pointwise convolution respectively. The dependency of such design is depicted in FIG0, from which we can easily verify that the size of information field is the same with the standard convolution. Group Convolution + Pointwise Group Convolution. The combination of group convolution (GC) and pointwise group convolution (PWG) can be regarded as an extension for the design above, where group convolution is applied on the pointwise convolution. However, simply using pointwise group convolution would reduce the size of information field on the channel dimension since depthwise convolution will not deal with any channel information. To recover the information field depthwise convolution is replaced with the group convolution. Meanwhile channel permutation should be added between the two layers. Assuming the number of channels does not change after the first group convolution, the output activation can be calculated as DISPLAYFORM1 DISPLAYFORM2 and N denote numbers of groups for group convolution and pointwise group convolution and W 1 and W 2 correspond to the kernels of group convolution and pointwise group convolution respectively. FIG0 shows the information field of this design clearly. Pointwise Convolution + Depthwise Convolution + Pointwise Convolution. Although two pointwise convolutions do not ensure a better efficiency in our scheme, the combination with bottleneck structure can help ease the problem, which makes it survive as one of the last designs. Following the normal practice we set bottleneck ratio to 1: 4, which implies the ratio of bottleneck channels to output channels. Also notice that more parameters could be saved if we place the depthwise convolution between the two pointwise convolutions since now depthwise convolution would only apply on a reduced number of channels. As a , the output activation T (f, x, y) is calculated as DISPLAYFORM3 where K denote the number of bottleneck channels and W 1, W 2 and W 3 correspond to the kernels of first pointwise convolution, depthwise convolution and second pointwise convolution respectively. Along with the equation FIG0 shows that the information field of such design is same with the standard convolution. Pointwise Group Convolution + Depthwise Convolution + Pointwise Group Convolution. The combination of two pointwise group convolutions and one depthwise convolution can also ensure the same size of information field. Similarly, channel permutation is needed. The bottleneck structure is also adopted to achieve a better efficiency. 
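To make the surviving designs concrete before their equations and efficiency analysis, the sketch below implements the first two (DW+PW, and GC+PWG with channel permutation) as modules. The channel permutation is written as the standard channel shuffle, which is an assumption about the exact permutation used; the group numbers are illustrative. The bottleneck variants (PW+DW+PW and PWG+DW+PWG) follow the same pattern with a 1:4 bottleneck on the channel dimension.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Permute channels so that the next grouped layer mixes information across groups."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class DWPWBlock(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1 (the DW+PW design)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)

    def forward(self, x):
        return self.pw(self.dw(x))

class GCPWGBlock(nn.Module):
    """Group 3x3 conv followed by pointwise group conv, with channel permutation in between
    so the information field still covers all input channels (requires m * n <= c_in)."""
    def __init__(self, c_in, c_out, m, n):
        super().__init__()
        assert m * n <= c_in, "information field would shrink otherwise"
        self.m = m
        self.gc = nn.Conv2d(c_in, c_in, 3, padding=1, groups=m, bias=False)
        self.pwg = nn.Conv2d(c_in, c_out, 1, groups=n, bias=False)

    def forward(self, x):
        x = channel_shuffle(self.gc(x), self.m)
        return self.pwg(x)

x = torch.randn(2, 64, 16, 16)
print(DWPWBlock(64, 128)(x).shape)              # torch.Size([2, 128, 16, 16])
print(GCPWGBlock(64, 128, m=4, n=8)(x).shape)   # torch.Size([2, 128, 16, 16])
```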
The output activation is calculated as DISPLAYFORM4 DISPLAYFORM5 and N represent the number of bottleneck channels and numbers of groups for first pointwise group convolution and second pointwise group convolution and W 1, W 2 and W 3 correspond to the kernels of first pointwise group convolution, depthwise convolution and second pointwise group convolution respectively. Both the equation and FIG0 could verify the same size of information field with the standard convolution. In addition, we find that the efficiency for different designs in our scheme do not always overlap. Thus to save the pain for researchers to find the most parameter/computation efficient designs based on their needs, we study the efficiency for each of the designs. Specifically, we consider two real situations which are frequently encountered by researchers when applying sparse kernel designs (i.e., given the input and the output for a layer and given the total number of parameters for a layer) and give accurate conditions when the best efficiency could be achieved. Efficiency given the input and the output. Given the numbers of input and output channels C and F. The total number of parameters after applying this design is 9C + CF, and the number of parameters for standard convolution is 9CF. Therefore the parameter efficiency of such method is 1/F + 1/9 represented by the ratio of parameters after and before applying such design. Clearly, given C and F, the parameter efficiency is always the same. Efficiency given the total amount of parameters. It can be easily verified that given the total number of parameters the greatest width is reached when the best efficiency is achieved. Thus the condition for the best efficiency given the total amount of parameters should be the same with the one when the greatest width is reached. The total number of parameters P for the design can be expressed as DISPLAYFORM0 when studying the greatest width, we need to assume the ratio between C and F does not change, thus the number of output channels F could be written like F = α · C where usually α ∈ N +. As a , from the equation above when P is fixed, the greatest width G (i.e., DISPLAYFORM1) will also be fixed, which indicates that the parameter efficiency is always the same. Efficiency given the input and the output. Similarly, we use the ratio of parameters to show parameter efficiency of this design. Given C and F, the number of parameters after using such design can be written as 3 · 3 · DISPLAYFORM0 Since the number of parameters for standard convolution is 9CF, the ratio will become DISPLAYFORM1. Notice that to ensure the same size of information field with standard convolution, in any input group of the second layer there should be at least one output channel from each one of the output groups of the first layer, therefore M · N should be less than or equal to the number of output channels from the first layer, i.e., M · N ≤ C. To further illustrate the relationship between the best parameter efficiency and the choices of M and N, we have the following theorem (the proof is given in the Appendix): Theorem 1. With the same size of information field, the best parameter efficiency is achieved if and only if the product of the two group numbers equals the channel number of the intermediate layer. As per the theorem, the best parameter efficiency can be achieved only when M · N = C. Thus the ratio will become Efficiency given the total amount of parameters. 
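Before the parameter-budget case, Theorem 1 can be sanity-checked numerically for the GC+PWG design. The sketch below counts parameters as 9·C·(C/M) for the 3x3 group convolution (C -> C channels) and C·F/N for the pointwise group convolution (C -> F channels), ignores biases and BN, and uses illustrative channel numbers; it searches all valid (M, N) with M·N <= C and confirms that the minimum lands on the boundary M·N = C.

```python
from itertools import product

C, F = 64, 128   # illustrative input/output channel numbers for one block

def params_gc_pwg(m, n, c=C, f=F):
    """Parameters of: 3x3 group conv (c -> c, m groups) + 1x1 pointwise group conv (c -> f, n groups)."""
    return 9 * c * (c // m) + (c // n) * f

m_options = [m for m in range(1, C + 1) if C % m == 0]
n_options = [n for n in range(1, C + 1) if C % n == 0 and F % n == 0]
feasible = [(m, n) for m, n in product(m_options, n_options) if m * n <= C]

best = min(feasible, key=lambda mn: params_gc_pwg(*mn))
best_interior = min(((m, n) for m, n in feasible if m * n < C),
                    key=lambda mn: params_gc_pwg(*mn))
print("best overall      (M, N) =", best, " M*N =", best[0] * best[1],
      " params =", params_gc_pwg(*best))
print("best with M*N < C (M, N) =", best_interior,
      " params =", params_gc_pwg(*best_interior))
print("standard 3x3 convolution params:", 9 * C * F)
```

With these numbers the overall minimum sits on M·N = C and beats the best strictly interior choice, which is what Theorem 1 predicts.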
Given the total number of parameters P for one design, both M and N could affect the width of the network. As per Theorem 1 the greatest C can be reached only when C = M · N. When F = α · C, P could be written like DISPLAYFORM2 Given the number of parameters P, width C has a upper bound when 9N = αM, which is also the condition for the best efficiency. The greatest width G is (DISPLAYFORM3 Efficiency given the input and the output. Same as before, given the number of input channels C, bottleneck channels K and output channels F . After applying the design, the total amount of parameters is reduced to DISPLAYFORM0 The number of parameters for standard convolution is still 9CF . Notice that K = F/4, therefore the ratio can be further expressed as DISPLAYFORM1 36C . Clearly, given C, K and F, such design will also in a fixed efficiency. Efficiency given the total amount of parameters. When F = α · C and K = F/4, the total number of parameters P will be DISPLAYFORM2 when P is fixed, the greatest width G is also fixed, i.e., −9α+ √ 81α 2 +16α 2 P +16αP 2(α 2 +α). Efficiency given the input and the output. We use the same way to evaluate parameter efficiency for this design. First, the number of parameters after applying such method is 1 DISPLAYFORM0 The number for standard convolution is 9CF. Since K = F/4 and as per Theorem 1 the best parameter efficiency can be achieved only when K = M · N, the ratio of parameters can then be represented as DISPLAYFORM1. Thus given C, K and F, the best parameter efficiency can be reached by setting DISPLAYFORM2 Efficiency given the total amount of parameters. Similarly, according to the Theorem 1 the greatest C can be reached only when the number of bottleneck channels K = M · N. Since F = α · C and K = F/4, the total number of parameters of one design P can be expressed as DISPLAYFORM3 Given the number of parameters P, the greatest width G exists when αM = N. 4.1 IMPLEMENTATION DETAILS DISPLAYFORM0 The overall layout of the network is shown in TAB0. Identity mapping BID10 ) is used over each block. When building the models, we can simply replace every block in the layout with the standard convolution or the sparse kernel designs mentioned in Section 3. Batch normalization (BN) BID14 ) is adopted right after each layer in the block and as suggested by BID1 ) nonlinear activation ReLU is only performed after the summation of the identity shortcut and the output of each block. We evaluate our models on ImageNet 2012 dataset BID4 BID25, which contains 1.2 million training images and 50000 validation images from 1000 categories. We follow the same data augmentation scheme in BID10 a) which includes randomized cropping, color jittering and horizontal flipping. All models are trained for 100 epochs with batch size 256. SGD optimizer is used with the Nesterov momentum. The weight decay is 0.0001 and the momentum is 0.9. We adopt the similar weight initialization method from BID8 BID9 BID13. The learning rate starts with 0.1 and is divided by 10 every 30 epochs. All reported are single center crop top-1 performances. Relationship between the information field and the model accuracy. In Section 3, we have shown that all the sparse kernel designs generated by our scheme share the same size of the infor- mation field when the size of input is fixed. 
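As a side note on the implementation details above, the stated ImageNet training recipe translates directly into a PyTorch optimizer and schedule. This is only a sketch of the listed hyperparameters (SGD with Nesterov momentum 0.9, weight decay 1e-4, learning rate 0.1 divided by 10 every 30 epochs, 100 epochs, batch size 256); the model is a placeholder and the training-loop body is omitted.

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1000)   # placeholder for the actual network built from the overall layout

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=1e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    # train_one_epoch(model, loader, optimizer)   # hypothetical helper, batch size 256
    scheduler.step()
```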
Meanwhile different sparse kernel designs could save different amount of parameters/computation compared to the standard convolution and the saved computation/parameters can then be used to increase the number of channels, enlarge the information field, and increase the final accuracy. The fundamental idea behind this is that we believe the information field is an essential property of all sparse kernel designs and could directly affect the final accuracy. To verify this idea we choose a bottleneck-like design and conduct some comparisons by tuning different number of groups. We adopt the same overall network layout in TAB0. It can be easily verified that given the same size of the input tensor the change of the number of groups in the bottleneck-like design will not affect the size of the information field in the output. Results are shown in TAB1. Specifically, compare on row 2 and row 5, we can see that by increasing the number of group from 2 to 32, more than a half amount of parameters will be saved to generate the same width, however the model accuracy will only decrease slightly. Meanwhile a further comparison on row 5 and row 6 indicate that if we use the saved parameters to increase the network width, the accuracy could still be improved. Since both of the two networks contain the same amount of parameters, overall network layout and type of sparse kernel design, the performance gains should only come from the increase of network width (information field). Same phenomenon could also be found by comparing on row 1 and row 2.Besides we investigate on different usages of parameters, on row 3 and row 4 show that the increase of network width has better potential for the improvement of accuracy than that of the depth, which also indicates that the size of the information field could play a more important role on the model accuracy. Additionally in TAB1 can further explain the sparse kernel design (PW+DW+PW) in Section 3.2 where we directly apply the most parameter-efficient depthwise convolution in the middle since it has the same size of the information field with other group numbers. Comparisons of different sparse kernel designs. We also compare different sparse kernel designs mentioned in Section 3. Results are shown in TAB2. As mentioned in Section 3 all designs have the same-sized information field given the same input. Results from TAB2 show that given the close amount of parameters by choosing different sparse kernel designs or group numbers models with different widths can be constructed, and the final accuracy is positively correlated to the model width (the size of the information field), which also coincides with our analysis above. Also notice that here do not necessarily indicate one type of sparse kernel design is always better than the other one in terms of the parameter efficiency since as per the analysis in Section 3 the efficiency also depends on other factors like the number of groups. For example, considering the same number of parameters and overall network layout, there could be a combination of group numbers M and N such that the network with the design GConv(M)+PWGConv(N) is wider than that of DW+PW. Based on the sparse kernel scheme, we are also able to construct more efficient designs than the state-of-the-art ones. TAB3 shows comparisons between the sparse kernel designs generated by our scheme and the state-of-the-art ones. 
For fair comparisons, we use the same network layout as shown in TAB0 and replace blocks in it with corresponding designs, and the model size around 11.0M is selected as it is the size that different models (e.g., Xception, ResNeXt and ShuffleNet) can be (a) 11.3 192 29.9 ResNeXt BID31 11. 1 192 29.8 Xception BID1 11.2 280 28.5 ShuffleNet 11.3 560 25.6GConv+PWGConv 8.6 200 27.0 PWGConv+DW+PWGConv 10.4 700 24.9 easily configured to. Results in TAB3 indicate that sparse kernel designs in our scheme could even yield better accuracy with a smaller model size, which also validates the idea of our sparse kernel scheme. Also notice that the choices of group numbers used in our designs are chosen to help easily accommodate both the similar model size and the overall network layout, which may not be the most efficient ones that are supposed to in a wider network with better accuracy under the same limitation of parameters. Model Compression. Traditional model compression techniques include pruning, quantization and low-rank approximation. Pruning BID0 BID22 BID21 reduces redundant weights, network connections or channels in a pre-trained model. However, it could face difficulty for deploying on hardware like GPU since some pruning methods may be only effective when the weight matrix is sufficiently sparse. Quantization BID35 BID2 BID1 BID5 BID23 reduces the number of bits required to represent weights. Unfortunately, this technique will require specialized hardware support. Low rank approximation BID17 BID15 BID32 BID24 BID7 uses two or more matrices to approximate the original matrix values in a pre-trained model. Nevertheless, since the process is an approximation of original matrix values maintaining a similar accuracy will always need additional re-training. The focus of this paper, the sparse kernel approach, mitigates all these problems by directly training networks using structural sparse convolutional kernels. In this paper, we present a scheme to craft the effective sparse kernel design by eliminating the large design space from three aspects: composition, performance and efficiency. During the process to reduce the design space, we find an unified property named information field behind various designs, which could directly indicate the final accuracy. Meanwhile we show the final 4 designs in our scheme along with detailed efficiency analysis. Experimental also validate the idea of our scheme. Proof of Theorem 1Proof. Without loss of generality we use the example in Section 3.3.2 to prove the theorem. Recall that the total number of parameters for such design can be expressed as DISPLAYFORM0 then the problem could be interpreted as proving that the minimum value of P can be achieved if and only if M · N = C.We prove the theorem by contradiction. Assume the minimum value of P could be achieved when M · N < C. Then we can always find a N = C/M > N such that the combination of M and N could in a smaller value of P, which contradicts our assumption. The theorem is hence proved.
rJlg1n05YX
We are the first in the field to show how to craft an effective sparse kernel design from three aspects: composition, performance and efficiency.
In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from $O(2^T)$ to $O(T^2)$. Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization. Weakly-supervised temporal action localization has been of interest to the community recently. The setting is to train a model with solely video-level class labels, and to predict both the class and the temporal boundary of each action instance at the test time. The major challenge in the weakly-supervised localization problem is to find the right way to express and infer the underlying location information with only the video-level class labels. Traditionally, this is achieved by explicitly sampling several possible instances with different locations and durations BID2 BID11. The instance-level classifiers would then be trained through multiple instances learning BID4 BID40 or curriculum learning BID1 ). However, the length of actions and videos varies too much such that the number of instance proposals for each video varies a lot and it can also be huge. As a , traditional methods based on instance proposals become infeasible in many cases. Recent research, however, has pivoted to acquire the location information by generating the class activation sequence (CAS) directly BID17, which produces the classification score sequence of being each action for each snippet over time. The CAS along the 1D temporal dimension for a video is inspired by the class activation map (CAM) BID46 BID19 BID18 in weakly-supervised object detection. The CAM-based models have shown that despite being trained on image-level labels, convolutional neural networks (CNNs) have the remarkable ability to localize objects. Similar to object detection, the basic idea behind CAS-based methods for action localization in the training is to sample the non-overlapping snippets from a video, then to aggregate the snippet-level features into a video-level feature, and finally to yield a video-level class prediction. During testing, the model generates a CAS for each class that identifies the discriminative action regions, and then applies a threshold on the CAS to localize each action instance in terms of the start time and the end time. In CAS-based methods, the feature aggregator that aggregates multiple snippet-level features into a video-level feature is the critical building block of weakly-supervised neural networks. 
A model's ability to capture the location information of an action is primarily determined by the design of the aggregator. While global average pooling over a full image or across video snippets has shown great promise in identifying discriminative regions BID46 BID19 BID18, treating each pixel or snippet equally loses the opportunity to benefit from the more essential parts. Some recent works BID17 BID49 have tried to learn attentional weights for different snippets and compute a weighted sum as the aggregated feature. However, they suffer from the weights being easily dominated by only a few of the most salient snippets. In general, models trained with only video-level class labels tend to respond to small and sparse discriminative regions from the snippets of interest. This deviates from the objective of the localization task, which is to locate dense and integral regions for each entire action. To mitigate this gap and reduce the effect of the domination by the most salient regions, several heuristic tricks have been proposed on top of existing models. For example, BID35 BID44 attempt to heuristically erase the most salient regions currently mined by the model and force the network to attend to other salient regions among the remaining ones by forwarding the model several times. However, this heuristic multiple-run model is not end-to-end trainable; the entire action regions are covered by the ensemble of regions mined over multiple runs rather than by the ability of a single model. "Hide-and-seek" BID28 randomly masks out some regions of the input during training, forcing the model to localize other salient regions when the most salient regions happen to be masked out. However, all the input regions are masked out with the same probability due to the uniform prior, and it is very likely that most of the time it is the background that is being masked out. A detailed discussion of related works can be found in Appendix D. To this end, we propose the marginalized average attentional network (MAAN) to alleviate the issue raised by the domination of the most salient region in an end-to-end fashion for weakly-supervised action localization. Specifically, MAAN suppresses the action prediction response of the most salient regions by employing marginalized average aggregation (MAA) and learning the latent discriminative probability in a principled manner. Unlike the previous attentional pooling aggregator, which calculates a weighted sum with attention weights, MAA first samples a subset of features according to their latent discriminative probabilities, and then calculates the average of these sampled features. Finally, MAA takes the expectation (marginalization) of the average aggregated subset features over all the possible subsets to achieve the final aggregation. As a result, MAA not only alleviates the domination by the most salient regions, but also keeps the scale of the aggregated feature within a reasonable range. We theoretically prove that, with MAA, the learned latent discriminative probability indeed reduces the difference in response between the most salient regions and the others. Therefore, MAAN can identify dense and integral regions for each action. Moreover, since enumerating all possible subsets is exponentially expensive, we further propose a fast iterative algorithm to reduce the complexity of the expectation calculation procedure and provide a theoretical analysis.
Furthermore, MAAN is easy to train in an end-to-end fashion since all the components of the network are differentiable. Extensive experiments on two large-scale video datasets show that MAAN consistently outperforms the baseline models and achieves superior performance on weakly-supervised temporal action localization. In summary, our main contributions include: a novel end-to-end trainable marginalized average attentional network (MAAN) with a marginalized average aggregation (MAA) module in the weaklysupervised setting; theoretical analysis of the properties of MAA and an explanation of the reasons MAAN alleviates the issue raised by the domination of the most salient regions; a fast iterative algorithm that can effectively reduce the computational complexity of MAA; and a superior performance on two benchmark video datasets, THUMOS14 and ActivityNet1.3, on the weakly-supervised temporal action localization. incorporates MAA, and introduce the corresponding inference process on weakly-supervised temporal action localization in Sec. 2.4. Let {x 1, x 2, · · · x T} denote the set of snippet-level features to be aggregated, where x t ∈ R m is the m dimensional feature representation extracted from a video snippet centered at time t, and T is the total number of sampled video snippets. The conventional attentional weighted sum pooling aggregates the input snippet-level features into a video-level representation x. Denote the set of attentional weights corresponding to the snippet-level features as {λ 1, λ 2, · · · λ T}, where λ t is a scalar attentional weight for x t. Then the aggregated video-level representation is given by DISPLAYFORM0 as illustrated in FIG0 (a). Different from the conventional aggregation mechanism, the proposed MAA module aggregates the features by firstly generating a set of binary indicators to determine whether a snippet should be sampled or not. The model then computes the average aggregation of these sampled snippet-level representations. Lastly, the model computes the expectation (marginalization) of the aggregated average feature for all the possible subsets, and obtains the proposed marginalized average aggregated feature. Formally, in the proposed MAA module, we first define a set of probabilities {p 1, p 2, · · · p T}, where each p t ∈ is a scalar corresponding to x t, similar to the notation λ t mentioned previously. We then sample a set of random variables {z 1, z 2, · · · z T}, where z t ∼ Bernoulli(p t), i.e., z t ∈ {0, 1} with probability P (z t = 1) = p t. The sampled set is used to represent the subset selection of snippet-level features, in which z t = 1 indicates x t is selected, otherwise not. Therefore, the average aggregation of the sampled subset of snipped-level representations is given by s = DISPLAYFORM1 z i, and our proposed aggregated feature, defined as the expectation of all the possible subset-level average aggregated representations, is given by DISPLAYFORM2 which is illustrated in FIG0 (b). Direct learning and prediction with the attention weights λ in Eq. in weakly-supervised action localization leads to an over-response in the most salient regions. The MAA in Eq. has two properties that alleviate the domination effect of the most salient regions. First, the partial order preservation property, i.e., the latent discriminative probabilities preserve the partial order with respect to their attention weights. 
Second, the dominant response suppression property, i.e., the differences in the latent discriminative probabilities between the most salient items and others are smaller than the differences between their attention weights. The partial order preservation property guarantees that it does not mix up the action and non-action snippets by assigning a high latent discriminative probability to a snippet with low response. The dominant response suppression property reduces the dominant effect of the most salient regions and encourages the identification of dense and more integral action regions. Formally, we present the two properties in Proposition 1 and Proposition 2, respectively. Detailed proofs can be found in Appendix A and Appendix B respectively. Proposition 1. Let z i ∼ Bernoulli(p i) for i ∈ {1, ..., T}. Then for T ≥ 2, Eq. holds true, and DISPLAYFORM0 where DISPLAYFORM1 Proposition 1 shows that the latent discriminative probabilities {p i} preserve the partial order of the attention weights {λ i}. This means that a large attention weight corresponds to a large discriminative probability, which guarantees that the latent discriminative probabilities preserve the ranking of the action prediction response. Eq. can be seen as a factorization of the attention weight λ i into the multiplication of two components, p i and c i, for i ∈ {1, ..., T}. p i is the latent discriminative probability related to the feature of snippet i itself. The factor c i captures the contextual information of snippet i from the other snippets. This factorization can be considered to be introducing structural information into the aggregation. Factor c i can be considered as performing a structural regularization for learning the latent discriminative probabilities p i for i ∈ {1, ..., T}, as well as for learning a more informative aggregation. DISPLAYFORM2 as an index set. Then I = ∅ and for ∀i ∈ I, ∀j ∈ {1, ..., T} inequality holds true. DISPLAYFORM3 The index set I can be viewed as the most salient features set. Proposition 2 shows that the difference between the normalized latent discriminative probabilities of the most salient regions and others is smaller than the difference between their attention weights. It means that the prediction for each snippet using the latent discriminative probability can reduce the gap between the most salient featuress and the others compared to conventional methods that are based on attention weights. Thus, MAAN suppresses the dominant responses of the most salient featuress and encourages it to identify dense and more integral action regions. Directly learning the attention weights λ leans to an over response to the most salient region in weakly-supervised temporal localization. Namely, the attention weights for only a few snippets are too large and dominate the others, while attention weights for most of the other snippets that also belong to the true action are underestimated. Proposition 2 shows that latent discriminative probabilities are able to reduce the gap between the most salient features and the others compared to the attention weights. Thus, by employing the latent discriminative probabilities for prediction instead of the attention weights, our method can alleviate the dominant effect of the most salient region in weakly-supervised temporal localization. Given a video containing T snippet-level representations, there are 2 T possible configurations for the subset selection. Directly summing up all the 2 T configurations to calculate x has a complexity of O(2 T). 
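To make the definition concrete, here is a brute-force sketch of the marginalized average aggregation that literally enumerates all 2^T subsets. The only added convention, which is an assumption since the text does not state it, is that the empty subset contributes a zero vector.

```python
from itertools import product
import numpy as np

def maa_brute_force(x, p):
    """Marginalized average aggregation by enumerating all 2^T subsets.
    x: (T, m) snippet features, p: (T,) latent discriminative probabilities.
    The empty subset is assumed to contribute a zero vector (its average is undefined)."""
    T, m = x.shape
    out = np.zeros(m)
    for z in product([0, 1], repeat=T):
        prob = np.prod([p[t] if z[t] else 1.0 - p[t] for t in range(T)])
        if sum(z) > 0:
            out += prob * x[np.array(z, dtype=bool)].mean(axis=0)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # T = 8 snippets, 4-dimensional features
p = rng.uniform(0.1, 0.9, size=8)
print(maa_brute_force(x, p))
```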
In order to reduce the exponential complexity, we propose an iterative method to calculate x with O(T 2) complexity. Let us denote the aggregated feature of {x 1, x 2, · · · x t} with length t as h t, and denote DISPLAYFORM0 z i for simplicity, then we have a set of and the aggregated feature of {x 1, x 2, · · · x T} can be obtained as x = h T. In Eq. FORMULA8, Z t is the summation of all the z i, which indicates the number of elements selected in the subset. Although there are 2 t distinct configurations for {z 1, z 2, · · · z t}, it has only t + 1 distinct values for Z t, i.e. 0, 1, · · ·, t. Therefore, we can divide all the 2 t distinct configurations into t + 1 groups, where the configurations sharing with the same Z t fall into the same group. Then the expectation h t can be calculated as the summation of the t + 1 parts. That is, DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 where the m t i, indicating the i th part of h t for group Z t = i, is shown in Eq. FORMULA11. DISPLAYFORM4 In order to calculate DISPLAYFORM5 The key idea here is that m The latter case is also related to the probability P (Z t = i − 1). By denoting q t i−1 = P (Z t = i − 1) for simplicity, we can obtain m t+1 i as a function of several elements: DISPLAYFORM6 Similarly, the computation of q t+1 i = P (Z t+1 = i) comes from two cases: the probability of selecting i − 1 items from the first t items and selecting the (t + 1) th item, i.e., q t i−1 p t+1; and the probability of selecting i items all from the first t items and not selecting the (t + 1) th item, i.e., DISPLAYFORM7 We derive the function of m t+1 iand q t+1 i in Proposition 3. Detailed proofs can be found in Appendix C. DISPLAYFORM8 i ∈ {0, 1, · · ·, t + 1} can be obtained recurrently by Eq. FORMULA16 and Eq. FORMULA17. DISPLAYFORM9 DISPLAYFORM10 DISPLAYFORM11 Proposition 3 provides a recurrent formula to calculate m t i. With this recurrent formula, we calculate the aggregation h T by iteratively calculating m t i from i = 1 to t and t = 1 to T. Therefore, we can obtain the aggregated feature of {x 1, DISPLAYFORM12 The iterative computation procedure is summarized in Algorithm 1 in Appendix E. The time complexity is O(T 2).With the fast iterative algorithm in Algorithm 1, the MAA becomes practical for end-to-end training. A demonstration of the computation graph for q DISPLAYFORM13 Network Architecture: We now describe the network architecture that employs the MAA module described above for weakly-supervised temporal action localization. We start from a previous stateof-the-art base architecture, the sparse temporal pooling network (STPN) BID17. As shown in FIG4, it first divides the input video into several non-overlapped snippets and extracts the I3D feature for each snippet. Each snippet-level feature is then fed to an attention module to generate an attention weight between 0 and 1. STPN then uses a feature aggregator to calculate a weighted sum of the snippet-level features with these class-agnostic attention weights to create a video-level representation, as shown on the left in FIG5. The video-level representation is then passed through an FC layer followed by a sigmoid layer to obtain class scores. 
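Returning to the aggregation itself, the recurrences of Proposition 3 can be implemented directly. The sketch below runs in O(T^2), uses the same empty-subset-contributes-zero convention as the brute-force version above, and checks the result against exhaustive enumeration on a small example.

```python
import numpy as np
from itertools import product

def maa_iterative(x, p):
    """O(T^2) marginalized average aggregation via the recurrences of Proposition 3.
    q[i] tracks P(Z_t = i); m[i] tracks E[1(Z_t = i) * (sum of selected features) / i]."""
    T, dim = x.shape
    q = np.zeros(T + 1)
    m = np.zeros((T + 1, dim))
    q[0] = 1.0                                      # before any snippet, Z = 0 with probability 1
    for t in range(T):
        pt, xt = p[t], x[t]
        new_q, new_m = np.zeros_like(q), np.zeros_like(m)
        for i in range(t + 2):
            new_q[i] = q[i] * (1.0 - pt)            # snippet t not selected
            new_m[i] = m[i] * (1.0 - pt)
            if i >= 1:                              # snippet t selected: count goes i-1 -> i
                new_q[i] += q[i - 1] * pt
                new_m[i] += pt * ((i - 1) / i * m[i - 1] + q[i - 1] * xt / i)
        q, m = new_q, new_m
    return m.sum(axis=0)                            # h_T; the i = 0 term is zero by convention

def maa_enumerate(x, p):
    """Exhaustive 2^T check of the definition (empty subset -> zero vector)."""
    out = np.zeros(x.shape[1])
    for z in product([0, 1], repeat=len(x)):
        prob = np.prod([p[t] if z[t] else 1 - p[t] for t in range(len(x))])
        if sum(z) > 0:
            out += prob * x[np.array(z, dtype=bool)].mean(axis=0)
    return out

rng = np.random.default_rng(0)
x, p = rng.normal(size=(10, 4)), rng.uniform(0.1, 0.9, size=10)
print(np.allclose(maa_iterative(x, p), maa_enumerate(x, p)))   # True
```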
Our MAAN uses the attention module to generate the latent discriminative probability p t and replaces the feature aggregator from the weighted sum aggregation by the proposed marginalized average aggregation, which is demonstrated on the right in FIG5 Training with video-level class labels: Formally, the model first performs aggregation of the snippet-level features (i.e. DISPLAYFORM0 Then, it applies a logistic regression layer (FC layer + sigmoid) to output video-level classification prediction probability. Specifically, the prediction probability for class c ∈ {1, 2, · · · C} is parameterized as σ c j = σ(w c x j), where x j is the aggregated feature for video j ∈ {1, ..., N}. Suppose each video x j is i.i.d and each action class is independent from the other, the negative log-likelihood function (cross-entropy loss) is given as follows: DISPLAYFORM1 where y c j ∈ {0, 1} is the ground-truth video-level label for class c happening in video j and W = [w 1, ..., w C].Temporal Action Localization: Let s c = w c x be the video-level action prediction score, and σ(s c) = σ(w c x) be the video-level action prediction probability. In STPN, asx = T t=1 λ t x t, the s c can be rewritten as: DISPLAYFORM2 In STPN, the prediction score of snippet t for action class c in a video is defined as: DISPLAYFORM3 where σ(·) denotes the sigmoid function. In MAAN, asx = E[DISPLAYFORM4, according to Proposition 1, the s c can be rewritten as: DISPLAYFORM5 The latent discriminative probability p t corresponds to the class-agnostic attention weight for snippet t. According to Proposition 1 and Proposition 2, c t does not relate to snippet t, but captures the context of other snippets. w c corresponds to the class-specific weights for action class c for all the snippets, and w c x t indicates the relevance of snippet t to class c. To generate temporal proposals, we compute the prediction score of snippet t belonging to action class c in a video as: DISPLAYFORM6 We denote the s c = (s as the class activation sequence (CAS) for class c. Similar to STPN, the threshold is applied to the CAS for each class to extract the one-dimensional connected components to generate its temporal proposals. We then perform non-maximum suppression among temporal proposals of each class independently to remove highly overlapped detections. Compared to STPN (Eq. FORMULA0), MAAN (Eq. FORMULA0) employs the latent discriminative probability p t instead of directly using the attention weight λ t (equivalent to c t p t) for prediction. Proposition 2 suggests that MAAN can suppress the dominant response s c t compared to STPN. Thus, MAAN is more likely to achieve a better performance in weakly-supervised temporal action localization. This section discusses the experiments on the weakly-supervised temporal action localization problem, which is our main focus. We have also extended our algorithm on addressing the weakly-supervised image object detection problem and the relevant experiments are presented in Appendix F. Datasets. We evaluate MAAN on two popular action localization benchmark datasets, THU-MOS14 BID10 and ActivityNet1.3 BID8. THUMOS14 contains 20 action classes for the temporal action localization task, which consists of 200 untrimmed videos (3,027 action instances) in the validation set and 212 untrimmed videos (3,358 action instances) in the test set. Following standard practice, we train the models on the validation set without using the temporal annotations and evaluate them on the test set. 
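As an aside on the proposal-generation step described earlier: once a class activation sequence is available, extracting segments only requires thresholding and taking one-dimensional connected components, as in the sketch below. The CAS values and the relative threshold are made up for illustration; snippet indices would then be mapped back to seconds via the snippet length.

```python
import numpy as np

def extract_proposals(cas, threshold):
    """Threshold a 1-D class activation sequence and return the (start, end) snippet indices
    of each connected component above the threshold (end is inclusive)."""
    keep = cas > threshold
    proposals, start = [], None
    for t, k in enumerate(keep):
        if k and start is None:
            start = t
        elif not k and start is not None:
            proposals.append((start, t - 1))
            start = None
    if start is not None:
        proposals.append((start, len(cas) - 1))
    return proposals

cas = np.array([0.05, 0.3, 0.6, 0.7, 0.2, 0.1, 0.5, 0.55, 0.4, 0.05])
th = 0.2 * cas.max()                 # relative threshold, as in the experiments
print(extract_proposals(cas, th))    # [(1, 4), (6, 8)]
```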
ActivityNet1.3 is a large-scale video benchmark for action detection which covers a wide range of complex human activities. It provides samples from 200 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. This dataset contains 10,024 training videos, 4,926 validation videos and 5,044 test videos. In the experiments, we train the models on the training videos and test on the validation videos. Evaluation Metrics. We follow the standard evaluation metric by reporting mean average precision (mAP) values at several different levels of intersection over union (IoU) thresholds. We use the benchmarking code provided by ActivityNet 1 to evaluate the models. Implementation Details. We use two-stream I3D networks pre-trained on the Kinetics dataset BID12 ) to extract the snippet-level feature vectors for each video. All the videos are divided into sets of non-overlapping video snippets. Each snippet contains 16 consecutive frames or optical flow maps. We input each 16 stacked RGB frames or flow maps into the I3D RGB or flow models to extract the corresponding 1024 dimensional feature vectors. Due to the various lengths of the videos, in the training, we uniformly divide each video into T non-overlapped segments, and randomly sample one snippet from each segment. Therefore, we sample T snippets for each video as the input of the model for training. We set T to 20 in our MAAN model. The attention module in FIG4 consists of an FC layer of 1024 × 256, a LeakyReLU layer, an FC layer of 256 × 1, and a sigmoid non-linear activation, to generate the latent discriminative probability p t. We pass the aggregated video-level representation through an FC layer of 1024 × C followed by a sigmoid activation to obtain class scores. We use the ADAM optimizer BID14 with an initial learning rate of 5 × 10 −4 to optimize network parameters. At the test time, we first reject classes whose video-level probabilities are below 0.1. We then forward all the snippets of the video to generate the CAS for the remaining classes. We generate the temporal proposals by cutting the CAS with a threshold th. The combination ratio of two-stream modalities is set to 0.5 and 0.5. Our algorithm is implemented in PyTorch 2. We run all the experiments on a single NVIDIA Tesla M40 GPU with a 24 GB memory. We first compare our MAAN model on the THUMOS14 dataset with several baseline models that use different feature aggregators in FIG4 to gain some basic understanding of the behavior of our proposed MAA. The descriptions of the four baseline models are listed below. STPN. It employs the weighed sum aggregationx = T t=1 λ t x t to generate the video-level representation. Dropout. It explicitly performs dropout sampling with dropout probability p = 0.5 in STPN to obtain the video-level representation,x = T t=1 r t λ t x t, r t ∼ Bernoulli(0.5). We test all the models with the cutting threshold th as 0.2 of the max value of the CAS. We compare the detection average precision (%) at IoU = [0.1 : 0.1 : 0.9] and the video-level classification mean average precision (%) (denoted as Cls mAP) on the test set in TAB0. From TAB0, we can observe that although all the methods achieve a similar video-level classification mAP, their localization performances vary a lot. 
It shows that achieving a good video-level classification performance cannot guarantee obtaining a good snippet-level localization performance because the former only requires the correct prediction of the existence of an action, while the latter requires the correct prediction of both its existence and its duration and location. Moreover, TAB0 demonstrates that MAAN consistently outperforms all the baseline models at different levels of IoUs in the weakly-supervised temporal localization task. Both the "Norm" and "SoftmaxNorm" are the normalized weighted average aggregation. However, the "SoftmaxNorm" performs the worst, because the softmax function over-amplifies the weight of the most salient snippet. As a , it tends to identify very few discriminative snippets and obtains sparse and non-integral localization. The "Norm" also performs worse than our MAAN. It is the normalized weighted average over the snippet-level representation, while MAAN can be considered as the normalized weighted average (expectation) over the subsetlevel representation. Therefore, MAAN encourages the identification of dense and integral action segments as compared to "Norm" which encourages the identification of only several discriminative snippets. MAAN works better than "Dropout" because "Dropout" randomly drops out the snippets with different attention weights by uniform probabilities. At each iteration, the scale of the aggregated feature varies a lot, however, MAAN samples with the learnable latent discriminative probability and conducts the expectation of keeping the scale of the aggregated feature stable. Compared to STPN, MAAN also achieves superior . MAAN implicitly factorizes the attention weight into c t p t, where p t learns the latent discriminative probability of the current snippet, and c t captures the contextual information and regularizes the network to learn a more informative aggregation. The properties of MAA disallow the predicted class activation sequences to concentrate on the most salient regions. The quantitative show the effectiveness of the MAA feature aggregator. The temporal CAS generated by MAAN can cover large and dense regions to obtain more accurate action segments. In the example in FIG8, MAAN can discover almost all the actions that are annotated in the ground-truth; however, the STPN have missed several action segments, and also tends to only output the more salient regions in each action segment. Other methods are much sparser compared to MAAN. The first row of FIG8 shows several action segments in red and in green, corresponding to action segments that are relatively difficult and easy to be localized, respectively. We can see that all the easily-localized segments contain the whole person who is performing the "HammerThrow" action, while the difficultly-localized segments contain only a part of the person or the action. Our MAAN can successfully localize the easy segments as well as the difficult segments; however, all the other methods fail on the difficult ones. It shows that MAAN can identify several dense and integral action regions other than only the most discriminative region which is identified by the other methods. We also compare our model with the state-of-the-art action localization approaches on the THU-MOS14 dataset. The numerical are summarized in TAB1. We include both fully and weakly-supervised learning, as in BID17. As shown in TAB1, our implemented STPN performs slightly better than the reported in the original paper BID17. 
From TAB1, our proposed MAAN outperforms the STPN and most of the existing weakly-supervised action localization approaches. Furthermore, our model still presents competitive compared with several recent fully-supervised approaches even when trained with only video-level labels. We train the MAAN model on the ActivityNet1.3 training set and compare our performance with the recent state-of-the-art approaches on the validation set in TAB3. The action segment in ActivityNet is usually much longer than that of THUMOS14 and occupies a larger percentage of a video. We use a set of thresholds, which are [0.2, 0.15, 0.1, 0.05] of the max value of the CAS, to generate the proposals from the one-dimensional CAS. As shown in TAB3, with the set of thresholds, our implemented STPN performs slightly better than the reported in the original paper (Nguyen BID23 47.7 43.5 36.3 28.7 19.0 10.3 5.3 --Yeung et al. BID38 48.9 44.0 36.0 26.4 17.1 ----Yuan et al. BID39 51.4 42.6 33.6 26.1 18.8 ----Shou et al. BID24 --40.1 29.4 23.3 13.1 7.9 --Yuan et al. BID41 51.0 45.2 36.5 27.8 17.8 ----Xu et al. BID37 54.5 51.5 44.8 35.6 28.9 ----Zhao et al. 66.0 59.4 51.9 41.0 29.8 ---- Wang et al. 44.4 37.7 28.2 21.1 13.7 ----Singh & Lee BID28 36.4 27.8 19.5 12.7 6.8 ----STPN BID17 BID34 45.1 4.1 0.0 Shou et al. BID24 45.3 26.0 0.2 Xiong et al. 39.1 23.5 5.5Weakly-supervised STPN BID17 29.3 16.9 2.6 STPN BID17, 2018). With the same threshold and experimental setting, our proposed MAAN model outperforms the STPN approach on the large-scale ActivityNet1.3. Similar to THUMOS14, our model also achieves good that are close to some of the fully-supervised approaches. We have proposed the marginalized average attentional network (MAAN) for weakly-supervised temporal action localization. MAAN employs a novel marginalized average aggregation (MAA) operation to encourage the network to identify the dense and integral action segments and is trained in an end-to-end fashion. Theoretically, we have proved that MAA reduces the gap between the most discriminant regions in the video to the others, and thus MAAN generates better class activation sequences to infer the action locations. We have also proposed a fast algorithm to reduce the computation complexity of MAA. Our proposed MAAN achieves superior performance on both the THUMOS14 and the ActivityNet1.3 datasets on weakly-supervised temporal action localization tasks compared to current state-of-the-art methods. We thank our anonymous reviewers for their helpful feedback and suggestions. Prof. Ivor W. Tsang was supported by ARC FT130100746, ARC LP150100671, and DP180100106.A PROOF OF PROPOSITION 1 Proof. DISPLAYFORM0 In addition, DISPLAYFORM1 Thus, we achieve DISPLAYFORM2 A DISPLAYFORM3 where 1(·) denotes the indicator function. We achieve Eq. FORMULA2 by partitioning the summation into t + 1 groups. Terms belonging to group i have DISPLAYFORM4, and we achieve Eq.. We now give the proof of the recurrent formula of Eq. DISPLAYFORM0 Proof. DISPLAYFORM1 Then, we have DISPLAYFORM2 Since DISPLAYFORM3 C.3 PROOF OF RECURRENT FORMULA OF q t+1 iWe present the proof of Eq. DISPLAYFORM4 Proof. DISPLAYFORM5 = z1,z2,···zt,zt+1 DISPLAYFORM6 D RELATED WORK Video Action Analysis. Researchers have developed quite a few deep network models for video action analysis. Two-stream networks BID26 and 3D convolutional neural networks (C3D) BID29 are popular solutions to learn video representations and these techniques, including their variations, are extensively used for video action analysis. 
Recently, a combination of two-stream networks and 3D convolutions, referred to as I3D, was proposed as a generic video representation learning method, and served as an effective backbone network in various video analysis tasks such as recognition, localization BID23, and weakly-supervised learning.Weakly-Supervised Temporal Action Localization. There are only a few approaches based on weakly-supervised learning that rely solely on video-level class labels to localize actions in the temporal domain. Wang et al. proposed a UntrimmedNet framework, where two softmax functions are applied across class labels and proposals to perform action classification and detect important temporal segments, respectively. However, using the softmax function across proposals may not be effective for identifying multiple instances. Singh et al. BID28 designed a Hide-and-Seek model to randomly hide some regions in a video during training and force the network to seek other relevant regions. However, the randomly hiding operation, as a data augmentation, cannot guarantee whether it is the action region or the region that is hidden during training, especially when the dropout probabilities for all the regions are the same. Nguyen et al. BID17 proposed a sparse temporal pooling network (STPN) to identify a sparse set of key segments associated with the actions through attention-based temporal pooling of video segments. However, the sparse constraint may force the network to focus on very few segments and lead to incomplete detection. In order to prevent the model from focusing only on the most salient regions, we are inspired to propose the MAAN model to explicitly take the expectation with respect to the average aggregated features of all the sampled subsets from the video. Feature Aggregators. Learning discriminative localization representations with only video-level class labels requires the feature aggregation operation to turn multiple snippet-level representations into a video-level representation for classification. The feature aggregation mechanism is widely adopted in the deep learning literature and a variety of scenarios, for example, neural machine translation BID0, visual question answering BID9, and so on. However, most of these cases belong to fully-supervised learning where the goal is to learn a model that attends the most relevant features given the supervision information corresponding to the task directly. Many variant feature aggregators have been proposed, ranging from nonparametric max pooling and average pooling, to parametric hard attention BID6, soft attention BID30 BID22, second-order pooling BID5 BID15, structured attention BID13 BID16, graph aggregators BID43 BID7, and so on. Different from the fullysupervised setting where the feature aggregator is designed for the corresponding tasks, we develop a feature aggregator that is trained only with class labels, and then to be used to predict the dense action locations for test data. Different from the heuristic approaches BID35 BID44 which can be considered as a kind of hard-code attention by erasing some regions with a hand-crafted threshold, we introduce the end-to-end differentiable marginalized average aggregation which incorporates learnable latent discriminative probabilities into the learning process. E MARGINALIZED AVERAGE AGGREGATION We also evaluate the proposed model on the weakly-supervised object localization task. 
For weaklysupervised object localization, we are given a set of images in which each image is labeled only with its category label. The goal is to learn a model to predict both the category label as well as the bounding box for the objects in a new test image. Based on the model in BID46 (denoted as CAM model), we replace the global average pooling feature aggregator with other kinds of feature aggregator, such as the weighted sum pooling and the proposed MAA by extending the original 1D temporal version in temporal action localization into a 2D spatial version. We denote the model with weighted sum pooling as the weighted-CAM model. For the weighted-CAM model and the proposed MAAN model, we use an attention module to generate the attention weight λ in STPN or the latent discriminative probability p in MAAN. The attention module consists of a 2D convolutional layer of kernel size 1 × 1, stride 1 with 256 units, a LeakyReLU layer, a 2D convolutional layer of kernel size 1 × 1, stride 1 with 1 unit, and a sigmoid non-linear activation. We evaluate the weakly-supervised localization accuracy of the proposed model on the CUB-200-2011 dataset BID31. The CUB-200-2011 dataset has 11,788 images of 200 categories with 5,994 images for training and 5,794 for testing. We leverage the localization metric suggested by BID21 for comparison. This metric computes the percentage of images that is misclassified or with bounding boxes with less than 50% IoU with the groundtruth as the localization error. We compare our MAA aggregator (MAAN) with the weighted sum pooling (weighted-CAM) and global average pooling (CAM BID48). For MAAN and weighted-CAM, we pool the convolutional feature for aggregation into two different sizes, 4 × 4 and 7 × 7. We fix all other factors (e.g. network structure, hyper-parameters, optimizer), except for the feature aggregators to evaluate the models. The localization errors for different methods are presented in TAB5, where the GoogLeNet-GAP is the CAM model. Our method outperforms GoogLeNet-GAP by 5.06% in a Top-1 error. Meanwhile, MAAN achieves consistently lower localization error than weighted-CAM on the two learning schemes. It demonstrates that the proposed MAAN can improve the localization performance in the weakly-supervised setting. Moreover, both MAAN and weighted-CAM obtain smaller localization error when employing the 7 × 7 learning scheme than the 4 × 4 learning scheme. FIG11 visualizes the heat maps and localization bounding boxes obtained by all the compared methods. The object localization heat maps generated by the proposed MAAN can cover larger object regions and obtain more accurate bounding boxes.
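For concreteness, the attention module described above can be sketched as follows; the paper specifies only the two 1 x 1 convolutions, the LeakyReLU, and the sigmoid, so the input channel count here is an assumption rather than a detail taken from the original setup:

import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """2D attention head: 1x1 conv (256 units), LeakyReLU,
    1x1 conv (1 unit), sigmoid, applied per spatial location."""
    def __init__(self, in_channels=1024):   # in_channels is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1, stride=1),
            nn.LeakyReLU(),
            nn.Conv2d(256, 1, kernel_size=1, stride=1),
            nn.Sigmoid(),
        )

    def forward(self, feature_map):
        # feature_map: (batch, in_channels, H, W) -> weight in (0, 1) per location
        return self.net(feature_map)

# usage: the output acts as the latent discriminative probability p (MAAN)
# or the attention weight lambda (weighted-CAM) over the 7x7 or 4x4 grid
attn = AttentionModule(in_channels=1024)
p = attn(torch.randn(2, 1024, 7, 7))   # shape (2, 1, 7, 7)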
HkljioCcFQ
A novel marginalized average attentional network for weakly-supervised temporal action localization
Deep image prior (DIP), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attentions in computer vision community. It empirically shows the effectiveness of ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why convolution operation is essential for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. The proposed approach is dividing the convolution into ``delay-embedding'' and ``transformation (\ie encoder-decoder)'', and proposing a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method named as manifold modeling in embedded space (MMES) is implemented by using a novel denoising-auto-encoder in combination with multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion and super-resolution of MMES are quite similar even competitive to DIP in our extensive experiments, and these would help us for reinterpreting/characterizing the DIP from a perspective of ``low-dimensional patch-manifold prior''. The most important piece of information for image/tensor restoration would be the "prior" which usually converts the optimization problems from ill-posed to well-posed, and/or gives some robustness for specific noises and outliers. Many priors were studied in computer science problems such as low-rank representation (; ; ;), smoothness (; ;), sparseness , non-negativity , statistical independence , and so on. Particularly in today's computer vision problems, total variation (TV) , low-rank representation (; ; ;), and non-local similarity priors are often used for image modeling. These priors can be obtained by analyzing basic properties of natural images, and categorized as "unsupervised image modeling". By contrast, the deep image prior (DIP) has been come from a part of "supervised" or "data-driven" image modeling framework (i.e., deep learning) although the DIP itself is one of the state-of-the-art unsupervised image restoration methods. The method of DIP can be simply explained to only optimize an untrained (i.e., randomly initialized) fully convolutional generator network (ConvNet) for minimizing squares loss between its generated image and an observed image (e.g., noisy image), and stop the optimization before the overfitting. explained the reason why a high-capacity ConvNet can be used as a prior by the following statement: Network resists "bad" solutions and descends much more quickly towards naturally-looking images, and its phenomenon of "impedance of ConvNet" was confirmed by toy experiments. However, most researchers could not be fully convinced from only above explanation because it is just a part of whole. One of the essential questions is why is it ConvNet? or in more practical perspective, to explain what is "priors in DIP" with simple and clear words (like smoothness, sparseness, low-rank etc) is very important. In this study, we tackle the question why ConvNet is essential as an image prior, and try to translate the "deep image prior" with words. For this purpose, we divide the convolution operation into "embedding" and "transformation" (see Fig. 9 in Appendix). Here, the "embedding" stands for delay/shift-embedding (i.e., Hankelization) which is a copy/duplication operation of image-patches by sliding window of patch size (τ, τ). 
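As a minimal sketch of this embedding step (valid, unpadded patches only; the paper additionally applies reflection padding and a multiway generalization of the transform):

import numpy as np

def hankelize_2d(image, tau):
    """Sliding-window patch embedding: every (tau, tau) patch of the image
    is vectorized into one column of H, so H has shape (tau*tau, T) with
    T the number of patch positions."""
    I1, I2 = image.shape
    cols = []
    for i in range(I1 - tau + 1):
        for j in range(I2 - tau + 1):
            cols.append(image[i:i + tau, j:j + tau].reshape(-1))
    return np.stack(cols, axis=1)

img = np.arange(36, dtype=float).reshape(6, 6)
H = hankelize_2d(img, tau=3)
print(H.shape)   # (9, 16): 9-dimensional patch vectors at 16 positions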
The embedding/Hankelization is a preprocessing to capture the delay/shift-invariant feature (e.g., non-local similarity) of signals/images. This "transformation" is basically linear transformation in a simple convolution operation, and it also indicates some nonlinear transformation from the ConvNet perspective. To simplify the complicated "encoder-decoder" structure of ConvNet used in DIP, we consider the following network structure: Embedding H (linear), encoding φ r (non-linear), decoding ψ r (non-linear), and backward embedding H † (linear) (see Fig. 1). Note that its encoder-decoder part (φ r, ψ r) is just a simple multi-layer perceptron along the filter domain (i.e., manifold learning), and it is sandwitched between forward and backward embedding (H, H †). Hence, the proposed network can be characterized by Manifold Modeling in Embedded Space (MMES). The proposed MMES is designed as simple as possible while keeping a essential ConvNet structure. Some parameters τ and r in MMES are corresponded with a kernel size and a filter size in ConvNet. When we set the horizontal dimension of hidden tensor L with r, each τ 2 -dimensional fiber in H, which is a vectorization of each (τ, τ)-patch of an input image, is encoded into r-dimensional space. Note that the volume of hidden tensor L looks to be larger than that of input/output image, but representation ability of L is much lower than input/output image space since the first/last tensor (H,H) must have Hankel structure (i.e., its representation ability is equivalent to image) and the hidden tensor L is reduced to lower dimensions from H. Here, we assume r < τ 2, and its lowdimensionality indicates the existence of similar (τ, τ)-patches (i.e., self-similarity) in the image, and it would provide some "impedance" which passes self-similar patches and resist/ignore others. Each fiber of Hidden tensor L represents a coordinate on the patch-manifold of image. It should be noted that the MMES network is a special case of deep neural networks. In fact, the proposed MMES can be considered as a new kind of auto-encoder (AE) in which convolution operations have been replaced by Hankelization in pre-processing and post-processing. Compared with ConvNet, the forward and backward embedding operations can be implemented by convolution and transposed convolution with one-hot-filters (see Fig. 12 in Appendix for details). Note that the encoder-decoder part can be implemented by multiple convolution layers with kernel size and non-linear activations. In our model, we do not use convolution explicitly but just do linear transform and non-linear activation for "filter-domain" (i.e., horizontal axis of tensors in Fig. 1). The contributions in this study can be summarized as follow: A new and simple approach of image/tensor modeling is proposed which translates the ConvNet, effectiveness of the proposed method and similarity to the DIP are demonstrated in experiments, and most importantly, there is a prospect for interpreting/characterizing the DIP as "low-dimensional patch-manifold prior". Note that the idea of low-dimensional patch manifold itself has been proposed by and. Peyre had firstly formulated the patch manifold model of natural images and solve it by dictionary learning and manifold pursuit. Osher et al. formulated the regularization function to minimize dimension of patch manifold, and solved Laplace-Beltrami equation by point integral method. In comparison with these studies, we decrease the dimension of patch-manifold by utilizing AE shown in Fig. 
1. A related technique, low-rank tensor modeling in embedded space, has been studied recently by. However, the modeling approaches here are different: multi-linear vs nonlinear manifold. Thus, our study would be interpreted as manifold version of in a perspective of tensor completion methods. Note that applied their model for only tensor completion task. By contrast, we investigate here tensor completion, super-resolution, and deconvolution tasks. Another related work is devoted to group sparse representation (GSR) (a). The GSR is roughly characterized as a combination of similar patch-grouping and sparse modeling which is similar to the combination of embedding and manifold-modeling. However, the computational cost of similar patch-grouping is obviously higher than embedding, and this task is naturally included in manifold learning. The main difference between above studies and our is the motivation: Essential and simple image modeling which can translate the ConvNet/DIP. The proposed MMES has many connections with ConvNet/DIP such as embedding, non-linear mapping, and the training with noise. From a perspective of DIP, there are several related works. First, the deep geometric prior utilises a good properties of a multi-layer perceptron for shape reconstruction problem which efficiently learn a smooth function from 2D space to 3D space. It helps us to understand DIP from a perspective of manifold learning. For example, it can be used for gray scale image reconstruction if an image is regarded as point could in 3D space (i, j, X ij). However, this may not provide the good image reconstruction like DIP, because it just smoothly interpolates a point cloud by surface like a Volonoi interpolation. Especially it can not provide a property of self-similarity in natural image. Second, deep decoder reconstructs natural images from noises by nonconvolutional networks which consists of linear channel/color transform, ReLU, channel/color normalization, and upsampling layers. In contrast that DIP uses over-parameterized network, deep decoder uses under-parameterized network and shows its ability of image reconstruction. Although deep decoder is a non-convolutional network, Authors emphasize the closed relationship between convolutional layers in DIP and upsampling layers in deep decoder. In this literature, Authors described "If there is no upsampling layer, then there is no notion of locality in the ant image" in deep decoder. It implies the "locality" is the essence of image model, and the convolution/upsampling layer provides it. Furthermore, the deep decoder has a close relationship with our MMES. Note that the MMES is originally/essentially has only decoder and inverse MDT (see Eq.), and the encoder is just used for satisfying Hankel structure. The decoder and inverse MDT in our MMES are respectively corresponding linear operation and upsampling layer in deep decoder. Moreover, concept of under-parameterization is also similar to our MMES. From this, we can say the essence of image model is the "locality", and its locality can be provided by "convolution", "upsampling", or "delay-embedding". This is why the image restoration from single image with deep convolutional networks has highly attentions which are called by zero-shot learning, internal learning, or self-supervised learning (; ; ; ; ; ;). Recently, two generative models: SinGAN and InGAN learned from only a single image, have been proposed. Key concept of both papers is to impose the constraint for local patches of image to be natural. 
From a perspective of the constraint for local patches of image, our MMES has closed relationship with these works. However, we explicitly impose a low-dimensional manifold constraint for local patches rather than adversarial training with patch discriminators. Here, on the contrary to Section 1, we start to explain the proposed method from the concept of MMES, and we systematically derive the MMES structure from it. Conceptually, the proposed tensor reconstruction method can be formulated by minimize where Y ∈ R J1×J2×···×J N is an observed corrupted tensor, X ∈ R I1×I2×···×I N is an estimated tensor, F: R I1×I2×···×I N → R J1×J2×···×J N is a linear operator which represents the observation system, H: R I1×I2×···×I N → R D×T is padding and Hankelization operator with sliding window of size (τ 1, τ 2, ..., τ N), and we impose each column of matrix H can be sampled from an r-dimensional manifold M r in D-dimensional Euclid space (see Appendix B for details). We have r ≤ D. For simplicity, we putted D:= n τ n and T:= n (I n +τ n −1). For tensor completion task, F:= P Ω is a projection operator onto support set Ω so that the missing elements are set to be zero. For superresolution task, F is a down-sampling operator of images/tensors. For deconvolution task, F is a convolution operator with some blur kernels. Fig. 2 shows the concept of proposed manifold modeling in case of image inpainting (i.e., N = 2). We minimize the distance between observation Y and reconstruction X with its support Ω, and all patches in X should be included in some restricted manifold M r. In other words, X is represented by the patch-manifold, and the property of the patch-manifold can be image priors. For example, low dimensionality of patch-manifold restricts the non-local similarity of images/tensors, and it would be related with "impedance" in DIP. We model X indirectly by designing the properties of patch-manifold M r. We consider an AE to define the r-dimensional manifold M r in (n τ n)-dimensional Euclidean space as follows: where. Note that, in general, the use of AE models is a widely accepted approach for manifold learning . The properties of the manifold M r are determined by the properties of φ r and ψ r. By employing multi-layer perceptrons (neural networks) for φ r and ψ r, encoder-decoder may provide a smooth manifold. In this section, we combine the conceptual formulation and the AE guided manifold constraint to derive a equivalent and more practical optimization problem. First, we redefine a tensor X as an output of generator: Algorithm 1 Optimization algorithm for tensor reconstruction input: where l t ∈ R r, and H † is a pseudo inverse of H. At this moment, X is a function of {l t} T t=1, however Hankel structure of matrix H can not be always guaranteed under the unconstrained condition of l t. For guaranteeing the Hankel structure of matrix H, we further transform it as follow: where we put A r: R D×T → R D×T as an operator which auto-encodes each column of a input matrix with (ψ r,φ r), and [g 1, g 2, ..., g T] as a matrix, which has Hankel structure and is transformed by Hankelization of some input tensor Z ∈ R I1×I2×···×I N. Note that Z is the most compact representation for Hankel matrix [g 1, g 2, ..., g T]. Eq. describes the MMES network shown in Fig. 1: H,φ r,ψ r and H † are respectively corresponding to forward embedding, encoding, decoding, and backward embedding, where encoder and decoder can be defined e.g. 
by multi-layer perceptrons (i.e., repetition of linear transformation and non-linear activation). F, where A r is an AE which defines the manifold M r. In this study, the AE/manifold is learned from an observed tensor Y itself, thus the optimization problem is finally formulated as where we refer respectively the first and second terms by a reconstruction loss and an auto-encoding loss, and λ > 0 is a trade-off parameter for balancing both losses. Optimization problem consists of two terms: a reconstruction loss, and an auto-encoding loss. Hyperparameter λ is set to balance both losses. Basically, λ should be large because auto-encoding loss should be zero. However, very large λ prohibits minimizing the reconstruction loss, and may lead to local optima. Therefore, we adjust gradually the value of λ in the optimization process. Algorithm 1 shows an optimization algorithm for tensor reconstruction and/or enhancement. For AE learning, we employs a strategy of denoising-auto-encoder (see Appendix in detail). Adaptation of λ is just an example, and it can be modified appropriately with data. Here, the trade-off parameter λ is adjusted for keeping L rec > L AE, but for no large gap between both losses. By exploiting the convolutional structure of H and H † (see Appendix B.1), the calculation flow of L rec and L AE can be easily implemented by using neural network libraries such as TensorFlow. We employed Adam optimizer for updating (Z, A r). Here, we show the selective experimental to demonstrate the close similarity and some slight differences between DIP and MMES. First, toy examples with a time-series signal and a gray-scale image were recovered by the proposed method to show its basic behaviors. Thereafter, we show the main by comparison with DIP and other selective methods on color-image inpainting, superresolution, and deconvolution tasks. Optional of optimization behavior, hyper-parameter sensitivity, and volumetric/3D image completion are shown in Appendix. In this section, we apply the proposed method into a toy example of signal recovery. Fig. 3 shows a of this experiment. A one-dimensional time-series signal is generated from Lorentz system, and corrupted by additive Gaussian noise, random missing, and three block occlusions. The corrupted signal was recovered by the subspace modeling , and the proposed manifold modeling in embedded space. Window size of delay-embedding was τ = 64, the lowest dimension of auto-encoder was r = 3, and additive noise standard deviation was set to σ = 0.05. Manifold modeling catched the structure of Lorentz attractor much better than subspace modeling. Similar patches are located near each other, and the smooth change of patterns can be observed. It implies the relationship between non-local similarity based methods (; ; ; a), and the manifold modeling (i.e., DAE) plays a key role of "patch-grouping" in the proposed method. The difference from the non-local similarity based approach is that the manifold modeling is "global" rather than "non-local" which finds similar patches of the target patch from its neighborhood area. In this section, we compare performance of the proposed method with several selected unsupervised image inpainting methods: low-rank tensor completion (HaLRTC) , parallel lowrank matrix factorization (TMac) , tubal nuclear norm regularization (tSVD) (b), Tucker decomposition with rank increment (Tucker inc.) 
, lowrank and total-variation (LRTV) regularization (;, smooth PARAFAC tensor completion (SPC) , GSR (a), multi-way delay embedding based Tucker modeling (MDT-Tucker) , and DIP . Implementation and detailed hyper-parameter settings are explained in Appendix. Basically, we carefully tuned the hyper-parameters for all methods to perform the best scores of peak-signal-tonoise ratio (PSNR) and structural similarity (SSIM). Fig. 5(a) shows the eight test images and averages of PSNR and SSIM for various missing ratio {50%, 70%, 90%, 95%, 99%} and for selective competitive methods. The proposed method is quite competitive with DIP. Fig. 6 shows the illustration of . The 99% of randomly selected voxels are removed from 3D-tensors, and the tensors were recovered by various methods. Basically low-rank priors (HaLRTC, TMac, tSVD, Tucker) could not recover such highly incomplete image. In piecewise smoothness prior (LRTV), over-smoothed images were reconstructed since the essential image properties could not be captured. There was a somewhat jump from them by SPC (i.e., smooth prior of basis functions in low-rank tensor decomposition). MDT-Tucker further improves it by exploiting the shift-invariant multi-linear basis. GSR nicely recovered the global pattern of images but details were insufficient. Finally, the reconstructed images by DIP and MMES recovered both global and local patterns of images. In this section, we compare the proposed method with selected unsupervised image super-resolution methods: Bicubic interpolation, GSR (a), ZSSR and DIP . Implementation and detailed hyper-parameter settings are explained in Appendix. Basically, we carefully tuned the hyper-parameters for all methods to perform the best scores of PSNR and SSIM. Fig. 5(b) shows values of PSNR and SSIM of the computer simulation . We used three color images, and six color images. Super resolution methods scaling up them from four or eight times down-scaled images of them with Lanczos2 kernels. According to this quantitative evaluation, bicubic interpolation was clearly worse than others. ZSSR worked well for up-scaling from, however the performances were substantially decreased for upscaling from. Basically, GSR, DIP, and MMES were very competitive. In detail, DIP was slightly better than GSR, and the proposed MMES was slightly better than DIP. More detailed PSNR/SSIM values are given by Table 3 in Appendix. Fig. 7 shows selected high resolution images reconstructed by four super-resolution methods. In general, bicubic method reconstructed blurred images and these were visually worse than others. GSR had smooth outlines in all images, but these were slightly blurred. ZSSR was weak for very low-resolution images. DIP reconstructed visually sharp images but these images had jagged artifacts along the diagonal lines. The proposed MMES reconstructed sharp and smooth outlines. In this section, we compare the proposed method with DIP for image deconvolution/deblurring task. Three color images are prepared and blurred by using three different Gaussian filters. For DIP we choose the best early stopping timing from {1000, 2000, ..., 10000} iterations. For MMES, we employed the fixed AE structure as [32τ 2, r, 32τ 2], and parameters as τ = 4, r = 16, and σ = 0.01 for all nine cases. Fig. 8 shows the reconstructed deblurring images by DIP and MMES. Tab. 1 shows the PSNR and SSIM values of these . We can see that the similarity of the methods qualitatively and quantitatively. 
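For reference, the PSNR values reported throughout these comparisons follow the standard definition 10 log10(peak^2 / MSE); a small sketch (the peak value of 255 assumes 8-bit images, use 1.0 for normalized images):

import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)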
It is well known that there is no mathematical definition of interpretability in machine learning and there is no one unique definition of interpretation. We understand the interpretability as a degree to which a human can consistently predict the model's or performance. The higher the interpretability of a deep learning model, the easier it is for someone to comprehend why certain performance or predictions or expected output can be achieved. We think that a model is better interpretable than another model if its performance or behaviors are easier for a human to comprehend than performance of the other models. The manifold learning and associated auto-encoder (AE) can be viewed as the generalized non-linear version of principal component analysis (PCA). In fact, manifold learning solves the key problem of dimensionality reduction very efficiently. In other words, manifold learning (modeling) is an approach to non-linear dimensionality reduction. Manifold modeling for this task are based on the idea that the dimensionality of many data sets is only artificially high. Although the patches of images (data points) consist of hundreds/thousands pixels, they may be represented as a function of only a few or quite limited number underlying parameters. That is, the patches are actually samples from a low-dimensional manifold that is embedded in a high-dimensional space. Manifold learning algorithms attempt to uncover these parameters in order to find a low dimensional representation of the images. In our MMES approach to solve the problem we applied original embedding via multi-way delay embedding transform (MDT or Hankelization). Our algorithm is based on the optimization of cost function and it works towards extracting the low-dimensional manifold that is used to describe the high-dimensional data. The manifold is described mathematically by Eq. and cost function is formulated by Eq.. As mentioned at introduction, reported an important phenomenon of noise impedance of ConvNet structures. Here, we provide a prospect for explaining the noise impedance in DIP through the MMES. Let us consider the sparse-land model, i.e. noise-free images are distributed along low-dimensional manifolds in the high-dimensional Euclidean space and images perturbed by noises thicken the manifolds (make the dimension of the manifolds higher). Under this model, the distribution of images can be assumed to be higher along the low-dimensional noise-free image manifolds. When we assume that the image patches are sampled from low-dimensional manifold like sparse-land model, it is difficult to put noisy patches on the low-dimensional manifold. Let us consider to fit the network for noisy images. In such case the fastest way for decreasing squared error (loss function) is to learn "similar patches" which often appear in a large set of image-patches. Note that finding similar image-patches for denoising is well-known problem solved, e.g., by BM3D algorithm, which find similar image patches by template matching. In contrast, our auto-encoder automatically maps similar-patches into close points on the low-dimensional manifold. When similar-patches have some noise, the low-dimensional representation tries to keep the common components of similar patches, while reducing the noise components. This has been proved by so that a (denoising) auto-encoder maps input image patches toward higher density portions in the image space. 
In other words, a (denoising) auto-encoder has kind of a force to reconstruct the low-dimensional patch manifold, and this is our rough explanation of noise impedance phenomenon. Although the proposed MMES and DIP are not completely equivalent, we see many analogies and similarities and we believe that our MMES model and associated learning algorithm give some new insight for DIP. A beautiful manifold representation of complicated signals in embedded space has been originally discovered in a study of dynamical system analysis (i.e., chaos analysis) for time-series signals . After this, many signal processing and computer vision applications have been studied but most methods have considered only linear approximation because of the difficulty of non-linear modeling (; ; ; ;). However nowadays, the study of non-linear/manifold modeling has been well progressed with deep learning, and it was successfully applied in this study. Interestingly, we could apply this non-linear system analysis not only for time-series signals but also natural color images and tensors (this is an extension from delay-embedding to multi-way shiftembedding). The best of our knowledge, this is the first study to apply Hankelization with AE into general tensor data reconstruction. MMES is a novel and simple image reconstruction model based on the low-dimensional patchmanifold prior which has many connections to ConvNet. We believe it helps us to understand how work ConvNet/DIP through MMES, and support to use DIP for various applications like tensor/image reconstruction or enhancement (; ; ;). Finally, we established bridges between quite different research areas such as the dynamical system analysis, the deep learning, and the tensor modeling. The proposed method is just a prototype and can be further improved by incorporating other methods such as regularizations, multi-scale extensions, and adversarial training. We can see the anti-diagonal elements of above matrix are equivalent. Such matrix is called as "Hankel matrix". For a two-dimensional array we consider unfold of it and inverse folding by unfold, and The point here is that we scan matrix elements column-wise manner. Hankelization of this twodimensional array (matrix) with τ = is given by scanning a matrix with local-window column-wise manner, and unfold and stack each local patch left-to-right. Thus, it is given as We can see that it is not a Hankel matrix. However, it is a "block Hankel matrix" in perspective of block matrix, a matrix that its elements are also matrices. We can see the block matrix itself is a Hankel matrix and all elements are Hankel matrices, too. Thus, Hankel matrix is a special case of block Hankel matrix in case of that all elements are scalar. In this paper, we say simply "Hankel structure" for block Hankel structure. Figure 9 shows an illustrative explanation of valid convolution which is decomposed into delayembedding/Hankelization and linear transformation. 1D valid convolution of f with kernel h = [h 1, h 2, h 3] can be provided by matrix-vector product of the Hankel matrix and h. In similar way, 2D valid convolution can be provided by matrix-vector product of the block Hankel matrix and unfolded kernel. Multiway-delay embedding transform (MDT) is a multi-way generalization of Hankelization proposed by. In , MDT is defined by using the multi-linear tensor product with multiple duplication matrices and tensor reshaping. Basically, we use the same operation, but a padding operation is added. 
Thus, the multiway-delay embedding used in this study is defined by where D×T is an unfolding operator which outputs a matrix from an input N -th order tensor, and S n ∈ R is a duplication matrix. Fig. 10 shows the duplication matrix with τ. For example, our Hankelization with reflection padding of f = [f 1, f 2, ..., f 7] with τ = 3 is given by Fig. 11 shows an example of our multiway-delay embedding in case of second order tensors. The overlapped patch grid is constructed by multi-linear tensor product with S n. Finally, all patches are splitted, lined up, and vectorized. The Moore-Penrose pseudo inverse of H is given by where, and trim τ = pad † τ is a trimming operator for removing (τ n −1) elements at start and end of each mode. Note that H † •H is an identity map, but H • H † is not, that is kind of a projection. Delay embedding and its pseudo inverse can be implemented by using convolution with all onehot-tensor windows of size (τ 1, τ 2, ..., τ N). The one-hot-tensor windows can be given by folding a D-dimensional identity matrix I D ∈ R D×D into I D ∈ R τ1×···×τ N ×D. Fig. 12 shows a calculation flow of multi-way delay embedding using convolution in a case of N = 2. Multi-linear tensor product is replaced with convolution with one-hot-tensor windows. Pseudo inverse of the convolution with padding is given by its adjoint operation, which is called as the "transposed convolution" in some neural network library, with trimming and simple scaling with Figure 10: Duplication matrix. In case that we have I columns, it consists of (I − τ + 1) identity matrices of size (τ, τ). In this section, we discuss how to design the neural network architecture of auto-encoder for restricting the manifold M r. The simplest way is controlling the value of r, and it directly restricts the dimensionality of latent space. There are many other possibilities: Tikhonov regularization , drop-out , denoising auto-encoder , variational auto-encoder (Diederik P), adversarial auto-encoder , alpha-GAN (, and so on. All methods have some perspective and promise, however the cost is not low. In this study, we select an attractive and fundamental one: "denoising auto-encoder"(DAE) . The DAE is attractive because it has a strong relationship with Tikhonov regularization , and decreases the entropy of data . Furthermore, learning with noise is also employed in the deep image prior. Finally, we designed an auto-encoder with controlling the dimension r and the standard deviation σ of additive zero-mean Gaussian noise. In case of multi-channel or color image recovery case, we use a special setting of generator network because spacial pattern of individual channels are similar and the patch-manifold can be shared. Fig. 14 shows an illustration of the auto-encoder shared version of MMES in a case of color image recovery. In this case, we put three channels of input and each channel input is embedded, independently. Then, three block Hankel matrices are concatenated, and auto-encoded simultaneously. Inverted three images are stacked as a color-image (third-order tensor), and finally color-transformed. The last color-transform can be implemented by convolution layer with kernel size, and it is also optimized as parameters. It should be noted that the input three channels are not necessary to correspond to RGB, but it would be optimized as some compact color-representation. Here, we explain detailed experimental settings in Section 4.2. 
In this section, we compared performance of the proposed method with several selected unsupervised image inpainting methods: low-rank tensor completion (HaLRTC) , parallel low-rank matrix factorization (TMac) , tubal nuclear norm regularization (tSVD) (b), Tucker decomposition with rank increment (Tucker inc.) , low-rank and total-variation (LRTV) regularization 2 (;, smooth PARAFAC tensor completion (SPC) 3 , GSR 4 (a), multi-way . For this experiments, hyper-parameters of all methods were tuned manually to perform the best peaksignal-to-noise ratio (PSNR) and for structural similarity (SSIM), although it would not be perfect. For DIP, we did not try the all network structures with various kernel sizes, filter sizes, and depth. We just employed "default architecture", which the details are available in supplemental material 7 of , and employed the best at the appropriate intermediate iterations in optimizations based on the value of PSNR. For the proposed MMES method, we adaptively selected the patch-size τ, and dimension r. Table 2 shows parameter settings of τ = [τ, τ] and r for MMES. Noise level of denoising auto-encoder was set as σ = 0.05 for all images. For auto-encoder, same architecture shown in Fig. 13 was employed. Initial learning rate of Adam optimizer was 0.01 and we decayed the learning rate with 0.98 every 100 iterations. The optimization was stopped after 20,000 iterations for each image. Here, we explain detailed experimental settings in Section 4.3. In this section, we compare performance of the proposed method with several selected unsupervised image super-resolution methods: bicubic interpolation, GSR 8 (a), ZSSR 9 and DIP . In this experiments, DIP was conducted with the best number of iterations from {1000, 2000, 3000, ..., 9000}. For four times (x4) up-scaling in MMES, we set τ = 6, r = 32, and σ = 0.1. For eight times (x8) up-scaling in MMES, we set τ = 6, r = 16, and σ = 0.1. For all images in MMES, the architecture of auto-encoder consists of three hidden layers with sizes of [8τ 2, r, 8τ 2]. We assumed the same Lanczos2 kernel for down-sampling system for all super-resolution methods. Tab. 3 shows values of PSNR and SSIM of the . We used three color images, and six color images. Super resolution methods scaling up them from four or eight times down-scaled images of them. According to this quantitative evaluation, bicubic interpolation was clearly worse than others. ZSSR was good for color images, however the performance were substantially decreased for color image. Basically, GSR, DIP, and MMES were very competitive. In detail, DIP was slightly better than GSR, and the proposed MMES was slightly better than DIP. For this experiment, we recovered 50% missing gray-scale image of'Lena'. We stopped the optimization algorithm after 20,000 iterations. Learning rate was set as 0.01, and we decayed the learning rate with 0.98 every 100 iterations. λ was adapted by Algorithm 1 every 10 iterations. Fig. 15 shows optimization behaviors of reconstructed image, reconstruction loss L rec, auto-encoding loss L DAE, and trade-off coefficient λ. By using trade-off adjustment, the reconstruction loss and the auto-encoding loss were intersected around 1,500 iterations, and both losses were jointly decreased after the intersection point. We evaluate the sensitivity of MMES with three hyper-parameters: r, σ, and τ. First, we fixed the patch-size as, and dimension r and noise standard deviation σ were varied. 
the reconstruction of a 99% missing image of'Lena' by the proposed method with different settings of (r, σ). The proposed method with very low dimension (r = 1) provided blurred , and the proposed method with very high dimension (r = 64) provided which have many peaks. Furthermore, some appropriate noise level (σ = 0.05) provides sharp and clean . For reference, Fig. 16 shows the difference of DIP optimized with and without noise. From both , the effects of learning with noise can be confirmed. Next, we fixed the noise level as σ = 0.05, and the patch-size were varied with some values of r. Fig. 18 shows the with various patch-size settings for recovering a 99% missing image. The patch sizes τ of or were appropriate for this case. Patch size is very important because it depends on the variety of patch patterns. If patch size is too large, then patch variations might expand and the structure of patch-manifold is complicated. By contrast, if patch size is too small, then the information obtained from the embedded matrix H is limited and the reconstruction becomes difficult in highly missing cases. The same problem might be occurred in all patch-based image reconstruction methods (; ; ; a). However, good patch sizes would be different for different images and types/levels of corruption, and the estimation of good patch size is an open problem. Multi-scale approach may reduce a part of this issue but the patch-size is still fixed or tuned as a hyper-parameter. In this section, we show the of MR-image/3D-tensor completion problem. The size of MR image is. We randomly remove 50%, 70%, and 90% voxels of the original MR-image and recover the missing MR-images by the proposed method and DIP. For DIP, we implemented the 3D version of default architecture in TensorFlow, but the number of filters of shallow layers were slightly reduced because of the GPU memory constraint. For the proposed method, 3D patch-size was set as τ =, the lowest dimension was r = 6, and noise level was σ = 0.05. Same architecture shown in Fig. 13 was employed. Fig. 19 shows reconstruction behavior of PSNR with final value of PSNR/SSIM in this experiment. From the values of PSNR and SSIM, the proposed MMES outperformed DIP in low-rate missing cases, and it is quite competitive in highly missing cases. The some degradation of DIP might be occurred by the insufficiency of filter sizes since much more filter sizes would be required for 3D ConvNet than 2D ConvNet. Moreover, computational times required for our MMES were significantly shorter than that of DIP in this tensor completion problem., r=4, r=8, r=16, r=32, r=48
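Putting the pieces above together, a minimal sketch of the MMES forward model and its two-term objective for image completion might look as follows. This is an illustration under simplifying assumptions (a two-layer patch auto-encoder instead of the [8*tau^2, r, 8*tau^2] architecture, a fixed trade-off lambda instead of the adaptation in Algorithm 1, and unfold/fold in place of the one-hot-filter convolutions described in the appendix), not the authors' implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

tau, r, sigma, lam = 6, 8, 0.05, 1.0   # patch size, manifold dim, DAE noise, trade-off (assumed)

def embed(x):                            # H : image -> (1, tau*tau, T) patch matrix
    return F.unfold(x, kernel_size=tau)

def embed_pinv(patches, size):           # H^dagger : average overlapping patches back to an image
    ones = torch.ones(1, 1, *size)
    summed = F.fold(patches, output_size=size, kernel_size=tau)
    count = F.fold(F.unfold(ones, kernel_size=tau), output_size=size, kernel_size=tau)
    return summed / count

class PatchAE(nn.Module):                # A_r : auto-encoder applied to every patch vector (fiber)
    def __init__(self, d, r):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, r), nn.ReLU())
        self.dec = nn.Linear(r, d)
    def forward(self, patches):          # patches: (1, d, T)
        h = patches.transpose(1, 2)      # one d-dimensional fiber per patch position
        return self.dec(self.enc(h)).transpose(1, 2)

# toy completion problem: 50% of pixels observed
y = torch.rand(1, 1, 36, 36)
mask = (torch.rand_like(y) > 0.5).float()
z = torch.zeros_like(y, requires_grad=True)          # compact latent image Z
ae = PatchAE(tau * tau, r)
opt = torch.optim.Adam([z, *ae.parameters()], lr=0.01)

for it in range(200):
    opt.zero_grad()
    h = embed(z)
    h_ae = ae(h + sigma * torch.randn_like(h))        # denoising auto-encoder on patch fibers
    x = embed_pinv(h_ae, y.shape[-2:])                # reconstructed image X = H^dagger(A_r(H(Z)))
    loss_rec = ((mask * (x - y)) ** 2).sum()          # reconstruction loss on observed pixels
    loss_ae = ((h_ae - h) ** 2).sum()                 # auto-encoding loss
    (loss_rec + lam * loss_ae).backward()             # lambda adaptation is omitted here
    opt.step()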
SJgBra4YDS
We propose a new auto-encoder incorporating a multiway delay-embedding transform toward interpreting the deep image prior.
Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters that multiple clients optimize in decentralized fashion. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without the data ever having to be shared explicitly. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training, and hence information about its data set, is revealed through analysis of the distributed model. We tackle this problem and propose an algorithm for client-sided differential-privacy-preserving federated optimization. The aim is to hide clients' contributions during training while balancing the trade-off between privacy loss and model performance. Empirical studies suggest that, given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance. Lately, the topic of security in machine learning has enjoyed increased interest. This can be largely attributed to the success of big data in conjunction with deep learning and the urge to create and process ever larger data sets for data mining. However, with more and more machine learning services that make use of our data becoming part of our daily lives, special measures must be taken to protect privacy. Unfortunately, anonymization alone is often not sufficient BID12; BID1, and standard machine learning approaches largely disregard privacy aspects and are susceptible to a variety of adversarial attacks BID11. In this regard, a machine learning model can also be analyzed to recover private information about the participating users or the employed data BID16; BID3; BID6. BID2 propose a measure to assess the memorization of privacy-related data. All these aspects of privacy-preserving machine learning are aggravated when further restrictions apply, such as a limited number of participating clients or restricted communication bandwidth, as on mobile devices. In order to alleviate the need to explicitly share data for training machine learning models, decentralized approaches have been proposed, sometimes referred to as collaborative BID15 or federated learning BID9. In federated learning BID9, a model is learned by multiple clients in decentralized fashion. Learning is shifted to the clients, and only the learned parameters are centralized by a trusted curator, which then distributes an aggregated model back to the clients. However, this alone is not sufficient to preserve privacy. BID14 show that clients can be identified in a federated learning setting from the model updates alone, necessitating further steps. Clients not revealing their data is an advance in privacy protection. However, when a model is learned in the conventional way, its parameters still reveal information about the data used during training. To address this issue, the concept of differential privacy (dp) BID4 for learning algorithms was proposed by BID0. The aim is to ensure that a learned model does not reveal whether a certain data point was used during training. We propose an algorithm that incorporates a dp-preserving mechanism into federated learning. However, opposed to BID0, we do not aim at protecting a single data point only.
Rather, we want to ensure that a learned model does not reveal whether a client participated during decentralized training. This implies that a client's whole data set is protected against differential attacks from other clients. Our main contributions: First, we show that a client's participation can be hidden while model performance is kept high in federated learning. We demonstrate that our proposed algorithm can achieve client-level differential privacy at a minor loss in model performance. An independent study BID10, published at the same time, proposed a similar procedure for client-level dp. Experimental setups differ, however, and BID10 also includes element-level privacy measures. Second, we propose to dynamically adapt the dp-preserving mechanism during decentralized training. Empirical studies suggest that model performance is increased that way. This stands in contrast to latest advances in centralized training with differential privacy, where such adaptation was not beneficial. We can link this discrepancy to the fact that, compared to centralized learning, gradients in federated learning exhibit different sensitivities to noise and batch size throughout the course of training. In federated learning BID9, communication between curator and clients might be limited (e.g. mobile phones) and/or vulnerable to interception. The challenge of federated optimization is to learn a model with minimal information overhead between clients and curator. In addition, clients' data might be non-IID, unbalanced and massively distributed. The 'federated averaging' algorithm recently proposed by BID9 tackles these challenges. A lot of research has been conducted on protecting differential privacy at the data level when a model is learned in a centralized manner. This can be done by incorporating a dp-preserving randomized mechanism (e.g. the Gaussian mechanism) into the learning process. We use the same definition of differential privacy for randomized mechanisms as BID0: A randomized mechanism M: D → R, with domain D and range R, satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d′ ∈ D and for any subset of outputs S ⊆ R it holds that Pr[M(d) ∈ S] ≤ e^ε Pr[M(d′) ∈ S] + δ. In this definition, δ accounts for the probability that plain ε-differential privacy is broken. The Gaussian mechanism (GM) approximates a real-valued function f: D → R with a differentially private mechanism. Specifically, a GM adds Gaussian noise calibrated to the function's data set sensitivity S_f, defined as the maximum distance S_f = max_{d,d′} ||f(d) − f(d′)||_2 over adjacent inputs d, d′. In the following we assume that σ and ε are fixed and evaluate an inquiry to the GM about a single approximation of f(d). We can then bound the probability that ε-dp is broken according to δ ≤ (5/4) exp(−(σε)²/2) (Theorem 3.22 in BID5). It should be noted that δ is accumulative and grows with consecutive inquiries to the GM. Therefore, to protect privacy, an accountant keeps track of δ. Once a certain threshold for δ is reached, the GM shall not answer any new inquiries. Recently, BID0 proposed a differentially private stochastic gradient descent algorithm (dp-SGD). dp-SGD works similarly to mini-batch gradient descent, but the gradient averaging step is approximated by a GM. In addition, the mini-batches are allocated through random sampling of the data. For ε being fixed, a privacy accountant keeps track of δ and stops training once a threshold is reached.
Intuitively, this means training is stopped once the probability that the learned model reveals whether a certain data point is part of the training set exceeds a certain threshold. We propose to incorporate a randomized mechanism into federated learning. However, opposed to we do not aim at protecting a single data point's contribution in learning a model. Instead, we aim at protecting a whole client's data set. That is, we want to ensure that a learned model does not reveal whether a client participated during decentralized training while maintaining high model performance. In the framework of federated optimization BID9, the central curator averages client models (i.e. weight matrices) after each communication round. In our proposed algorithm, we will alter and approximate this averaging with a randomized mechanism. This is done to hide a single client's contribution within the aggregation and thus within the entire decentralized learning procedure. The randomized mechanism we use to approximate the average consists of:• Random sub-sampling (step 1 in FIG0): Let K be the total number of clients. In each communication round a random subset Z t of size m t ≤ K is sampled. The curator then distributes the central model w t to only these clients. The central model is optimized by the clients' on their data. The clients in Z t now hold distinct local models {w k} mt k=0. The difference between the optimized local model and the central model will be referred to as client k's update ∆w k = w k − w t. The updates are sent back to the central curator at the end of each communication round.• Distorting (step 3 and 4 in FIG0): A Gaussian mechanism is used to distort the sum of all updates. This requires knowledge about the set's sensitivity with respect to the summing operation. We can enforce a certain sensitivity by using scaled versions instead of the true updates: DISPLAYFORM0 ). Scaling ensures that the second norm is limited ∀k, w k 2 < S. The sensitivity of the scaled updates with respect to the summing operation is thus upper bounded by S. The GM now adds noise (scaled to sensitivity S) to the sum of all scaled updates. Dividing the GM's output by m t yields an approximation to the true average of all client's updates, while preventing leakage of crucial information about an individual. A new central model w t+1 is allocated by adding this approximation to the current central model w t. DISPLAYFORM1 Gaussian mechanism approximating sum of updatesWhen factorizing 1/m t into the Gaussian mechanism, we notice that the average's distortion is governed by the noise variance S 2 σ 2 /m. However, this distortion should not exceed a certain limit. Otherwise too much information from the sub-sampled average is destroyed by the added noise and there will not be any learning progress. GM and random sub-sampling are both randomized mechanisms. (Indeed, BID0 used exactly this kind of average approximation in dp-SGD. However, there it is used for gradient averaging, hiding a single data point's gradient at every iteration). Thus, σ and m also define the privacy loss incurred when the randomized mechanism provides an average approximation. In order to keep track of this privacy loss, we make use of the moments accountant as proposed by Abadi et al. BID0. This accounting method provides much tighter bounds on the incurred privacy loss than the standard composition theorem (3.14 in BID5). Each time the curator allocates a new model, the accountant evaluates δ given, σ and m. 
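As a minimal numpy sketch of the distortion step just described (not the authors' code; following the text, the clipping bound S defaults to the median update norm, which, as noted below, is computed non-privately):

import numpy as np

def dp_fedavg_update(w_t, client_updates, sigma, S=None, rng=None):
    """One differentially private averaging step: clip each client's update
    to L2 norm at most S, average the clipped updates, and distort the
    average with Gaussian noise of standard deviation S*sigma/m."""
    rng = rng or np.random.default_rng()
    m = len(client_updates)
    norms = [np.linalg.norm(u) for u in client_updates]
    if S is None:
        S = np.median(norms)                       # median clipping bound
    clipped = [u / max(1.0, n / S) for u, n in zip(client_updates, norms)]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, S * sigma, size=w_t.shape)
    return w_t + noisy_sum / m

# toy usage: 10 sampled clients, flattened parameter vector of length 100
w_t = np.zeros(100)
updates = [np.random.randn(100) * 0.1 for _ in range(10)]
w_next = dp_fedavg_update(w_t, updates, sigma=1.0)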
Training shall be Step 1: At each communication round t, m t out of total K clients are sampled uniformly at random. The central model w t is distributed to the sampled clients. Step 2: The selected clients optimize w t on their local data, leading to w k. Clients centralize their local updates: DISPLAYFORM2 Step 3: The updates are clipped such that their sensitivity can be upper bounded. The clipped updates are averaged. Step 4: The central model is updated adding the averaged, clipped updates and distorting them with Gaussian noise tuned to the sensitivity's upper bound. Having allocated the new central model, the procedure can be repeated. However, before starting step 1, a privacy accountant evaluates the privacy loss that would arise through performing another communication round. If that privacy loss is acceptable, a new round may start. stopped once δ reaches a certain threshold, i.e. the likelihood, that a clients contribution is revealed gets too high. The choice of a threshold for δ depends on the total amount of clients K. To ascertain that privacy for many is not preserved at the expense of revealing total information about a few, we have to ensure that δ 1 K, refer to chapter 2.3 for more details. Choosing S: When clipping the contributions, there is a trade-off. On the one hand, S should be chosen small such that the noise variance stays small. On the other hand, one wants to maintain as much of the original contributions as possible. Following a procedure proposed by BID0, in each communication round we calculate the median norm of all unclipped contributions and use this as the clipping bound S = median{w k} k∈Zt. We do not use a randomized mechanism for computing the median, which, strictly speaking, is a violation of privacy. However, the information leakage through the median is small (Future work will contain such a privacy measure).Choosing σ and m: for fixed S, the ratio r = σ 2 /m governs distortion and privacy loss. It follows that the higher σ and the lower m, the higher the privacy loss. The privacy accountant tells us that for fixed r = σ 2 /m, i.e. for the same level of distortion, privacy loss is smaller for σ and m both being small. An upper bound on the distortion rate r and a lower bound on the number of sub-sampled clientsm would thus lead to a choice of σ. A lower bound on m is, however, hard to estimate. That is, because data in federated settings is non-IID and contributions from clients might be very distinct. We therefore define the between clients variance V c as a measure of similarity between clients' updates. Definition. Let w i,j define the (i, j)-th parameter in an update of the form w ∈ R q×p, at some communication round t. For the sake of clarity, we will drop specific indexing of communication rounds for now. The variance of parameter (i, j) throughout all K clients is defined as, DISPLAYFORM3 where DISPLAYFORM4 We then define V c as the sum over all parameter variances in the update matrix as, DISPLAYFORM5 Further, the Update scale U s is defined as, DISPLAYFORM6 Algorithm 1 Client-side differentially private federated optimization. K is the number of participating clients; B is the local mini-batch size, E the number of local epochs, η is the learning rate, {σ} T t=0is the set of variances for the GM. {m t} T t=0 determines the number of participating clients at each communication round. defines the dp we aim for. Q is the threshold for δ, the probability that -dp is broken. T is the number of communication rounds after which δ surpasses Q. 
B is a set holding client's data sliced into batches of size B 1: procedure SERVER EXECUTION 2:Initialize: w 0, Accountant(, K) initialize weights and the priv. accountant 3:for each round t = 1, 2,... do δ ← Accountant(m t, σ t) Accountant returns priv. loss for current round for each client k ∈ Z t in parallel do 8: DISPLAYFORM0 DISPLAYFORM1 w ← w t for each local Epoch i = 1, 2,... E do In order to test our proposed algorithm we simulate a federated setting. For the sake of comparability, we choose a similar experimental setup as BID9 did. We divide the sorted MNIST set into shards. Consequently, each client gets two shards. This way most clients will have samples from two digits only. A single client could thus never train a model on their data such that it reaches high classification accuracy for all ten digits. We are investigating differential privacy in the federated setting for scenarios of K ∈ {100, 1000, 10000}. In each setting the clients get exactly 600 data points. For K ∈ {1000, 10000}, data points are repeated. For all three scenarios K ∈ {100, 1000, 10000} we performed a cross-validation grid search on the following parameters:• Number of batches per client B• Epochs to run on each client E• Number of clients participating in each round m• The GM parameter σ In accordance to BID0 we fixed to the value of 8. During training we keep track of privacy loss using the privacy accountant. Training is stopped once δ reaches e − 3, e − 5, e − 6 for 100, 1000 and 10000 clients, respectively. In addition, we also analyze the between clients variance over the course of training. In the cross validation grid search we look for those models that reach the highest accuracy while staying below the respective bound on δ. In addition, when multiple models reach the same accuracy, the one with fewer needed communication rounds is preferred. TAB3 holds the best models found for K ∈ {100, 1000, 10000}. We list the accuracy (ACC), the number of communication rounds (CR) needed and the arising communication costs (CC). Communication costs are defined as the number of times a model gets send by a client over the course of training, i.e. T t=0 m t. In addition, as a benchmark, TAB3 also holds the ACC, CR and CC of the best performing non-differentially private model for K = 100. In Fig. 2, the accuracy of all four best performing models is depicted over the course of training. In FIG3, the accuracy of non-differentially private federated optimization for K = 100 is depicted again together with the between clients variance and the update scale over the course of training. 100 clients, non differentially private 10000 clients, differentially private 1000 clients, differentially private 100 clients, differentially private Figure 2: Accuracy of digit classification from non-IID MNIST-data held by clients over the course of decentralized training. For differentially private federated optimization, dots at the end of accuracy curves indicate that the δ-threshold was reached and training therefore stopped. As intuitively expected, the number of participating clients has a major impact on the achieved model performance. For 100 and 1000 clients, model accuracy does not converge and stays significantly below the non-differentially private performance. However, 78% and 92% accuracy for K ∈ {100, 1000} are still substantially better than anything clients would be able to achieve when only training on their own data. 
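To make Steps 1-4 and the server side of Algorithm 1 concrete, the following is a minimal numpy sketch of one communication round: clip each client update to the median norm S, average the clipped updates, and distort the average with Gaussian noise. The function name, the noise scaling sigma * S / m, and the numpy-only setting are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import numpy as np

def dp_federated_round(w_t, client_updates, sigma, rng=None):
    """One server round (Steps 1-4 above): clip to the median norm S, average,
    and add Gaussian noise before applying the update to the central model."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.array([np.linalg.norm(dw) for dw in client_updates])
    S = np.median(norms)                                   # adaptive clipping bound
    clipped = [dw / max(1.0, n / S) for dw, n in zip(client_updates, norms)]
    avg = np.mean(clipped, axis=0)                         # averaged, clipped updates
    m = len(client_updates)
    # The clipped average changes by at most S / m when one client is swapped,
    # so the Gaussian mechanism noise is scaled accordingly (one plausible scaling).
    noise = rng.normal(0.0, sigma * S / m, size=w_t.shape)
    return w_t + avg + noise
```

A full training run would wrap this step in the accountant-controlled loop of Algorithm 1, stopping once δ exceeds its threshold Q.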
In domains where K lays in this order of magnitude and differential privacy is of utmost importance, such models would still substantially benefit any client participating. An example for such a domain are hospitals. Several hundred could jointly learn a model, while information about a specific hospital stays hidden. In addition, the jointly learned model could be used as an initialization for further client-side training. For K = 10000, the differentially private model almost reaches accuracies of the non-differential private one. This suggests that for scenarios where many parties are involved, differential privacy comes at almost no cost in model performance. These scenarios include mobile phones and other consumer devices. In the cross-validation grid search we also found that raising m t over the course of training improves model performance. When looking at a single early communication round, lowering both m t and σ t in a fashion such that σ 2 t /m t stays constant, has almost no impact on the accuracy gain during that round. however, privacy loss is reduced when both parameters are lowered. This means more communication rounds can be performed later on in training, before the privacy budget is drained. In subsequent communication rounds, a large m t is unavoidable to gain accuracy, and a higher privacy cost has to be embraced in order to improve the model. This observation can be linked to recent advances of information theory in learning algorithms. As observable in FIG3, BID17 suggest, we can distinguish two different phases of training: label fitting and data fitting phase. During label fitting phase, updates by clients are similar and thus V c is low, as FIG3 shows. U c, however, is high during this initial phase, as big updates to the randomly initialized weights are performed. During data fitting phase V c rises. The individual updates w k look less alike, as each client optimizes on their data set. U c however drastically shrinks, as a local optima of the global model is approached, accuracy converges and the contributions cancel each other out to a certain extend. FIG3 shows these dependencies of V c and U c.We can conclude: i) At early communication rounds, small subsets of clients might still contribute an average update w t representative of the true data distribution ii) At later stages a balanced (and therefore bigger) fraction of clients is needed to reach a certain representativity for an update. iii) High U c makes early updates less vulnerable to noise. We were able to show through first empirical studies that differential privacy on a client level is feasible and high model accuracies can be reached when sufficiently many parties are involved. Furthermore, we showed that careful investigation of the data and update distribution can lead to optimized privacy budgeting. For future work, we plan to derive optimal bounds in terms of signal to noise ratio in dependence of communication round, data representativity and between-client variance as well as further investigate the connection to information theory. Additionally, we plan to further investigate the dataset dependency of the bounds. For assessing further applicability in bandwith-limited settings, we plan to investigate the applicability of proposed approach in context of compressed gradients such as proposed by BID8.
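For reference, the between-clients variance V_c used in the analysis above can be computed directly from the stacked client updates; the sketch below follows the Definition given earlier (per-parameter variance across the K clients, summed over the q × p update matrix).

```python
import numpy as np

def between_clients_variance(client_updates):
    """V_c: variance of each parameter (i, j) across the K client updates,
    summed over the whole update matrix (see the Definition above)."""
    W = np.stack(client_updates)     # shape (K, q, p)
    return W.var(axis=0).sum()
```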
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkVRTj0cYQ
Ensuring that models learned in federated fashion do not reveal a client's participation.
Employing deep neural networks as natural image priors to solve inverse problems either requires large amounts of data to sufficiently train expressive generative models or can succeed with no data via untrained neural networks. However, very few works have considered how to interpolate between these no- to high-data regimes. In particular, how can one use the availability of a small amount of data (even 5-25 examples) to one's advantage in solving these inverse problems and can a system's performance increase as the amount of data increases as well? In this work, we consider solving linear inverse problems when given a small number of examples of images that are drawn from the same distribution as the image of interest. Comparing to untrained neural networks that use no data, we show how one can pre-train a neural network with a few given examples to improve reconstruction in compressed sensing and semantic image recovery problems such as colorization. Our approach leads to improved reconstruction as the amount of available data increases and is on par with fully trained generative models, while requiring less than 1% of the data needed to train a generative model. We study the problem of recovering an image x x x 0 ∈ R n from m linear measurements of the form y y y 0 = A A Ax x x 0 + η η η ∈ R m where A A A ∈ R m×n is a known measurement operator and η η η ∈ R m denotes the noise in our system. Problems of this form are ubiquitous in various domains ranging from image processing, machine learning, and computer vision. Typically, the problem's difficulty is a of its ill-posedness due to the underdetermined nature of the system. To resolve this ambiguity, many approaches enforce that the image must obey a natural image model. While traditional approaches typically use hand-crafted priors such as sparsity in the wavelet basis, recent approaches inspired by deep learning to create such natural image model surrogates have shown to outperform these methods. Deep Generative Priors: Advancements in generative modelling have allowed for deep neural networks to create highly realistic samples from a number of complex natural image classes. Popular generative models to use as natural image priors are latent variable models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). This is in large part due to the fact that they provide a low-dimensional parameterization of the natural image manifold that can be directly exploited in inverse imaging tasks. When enforced as a natural image prior, these models have shown to outperform traditional methods and provide theoretical guarantees in problems such as compressed sensing, phase retrieval, and blind deconvolution/demodulation. However, there are two main drawbacks of using deep generative models as natural image priors. The first is that they require a large amount of data to train, e.g., hundreds of thousands of images to generate novel celebrity faces. Additionally, they suffer from a non-trivial representation error due to the fact that they model the natural image manifold through a low-dimensional parameterization. Untrained Neural Network Priors: On the opposite end of the data spectrum, recent works have shown that randomly initialized neural networks can act as natural image priors without any learning. first showed this to be the case by solving tasks such as denoising, inpainting, and super-resolution via optimizing over the parameters of a convolutional neural network to fit to a single image. 
The showed that the neural network exhibited a bias towards natural images, but due to the high overparameterization in the network, required early stopping to succeed. A simpler model was later introduced in which was, in fact, underparameterized and was able to both compress images while solving various linear inverse problems. Both methods require no training data and do not suffer from the same representation error as generative models do. Similar to generative models, they have shown to be successful image priors in a variety of inverse problems. Based on these two approaches, we would like to investigate how can one interpolate between these data regimes in a way that improves upon work with untrained neural network priors and ultimately reaches or exceeds the success of generative priors. More specifically, we would like to develop an algorithm that 1) performs just as well as untrained neural networks with no data and 2) improves performance as the amount of provided data increases. We introduce a framework to solve inverse problems given a few examples (e.g., 5 − 25) drawn from the same data distribution as the image of interest (e.g., if the true image is of a human face, the examples are also human faces). Our main contributions are the following: • We show how one can pre-train a neural network using a few examples drawn from the data distribution of the image of interest. Inspired by, we propose to jointly learn a latent space and parameters of the network to fit to the examples that are given and compare the use of an 2 reconstruction loss and a Maximum Mean Discrepancy (MMD) loss. • We then propose to solve the inverse problem via a two-step process. We first optimize over the pre-trained network's latent space. Once a solution is found, we then refine our estimate by optimizing over the latent space and parameters of the network jointly to improve our solution. found this method to work well in the case when the network is a fully trained generative model, and we show here that even a pre-trained neural network from a small number of examples can benefit from such an approach. • We show that our approach improves upon untrained neural networks in compressed sensing even with as few as 5 examples from the data distribution and exhibits improvements as the number of examples increases. We also show that semantics can be learned from these few examples in problems such as colorization where untrained neural networks fail. With only 100 examples, our model's performance is competitive with fully trained generative models. Related work: We mention that there has been previous work in investigating how to use a small amount of data to help solve the compressed sensing problem. The authors use an untrained neural network as a natural image prior and, when given a small amount of data, adopt a learned regularization term when solving the inverse problem. This term is derived by posing the recovery problem as a Maximum a Posteriori (MAP) estimation problem and by placing a Gaussian prior on the weights of the untrained network. While we have not compared our method to this learned regularization approach here, we aim to do so in a subsequent manuscript. We consider the problem of recovering an image x x x 0 ∈ R n from noisy linear measurements of the form y y y 0 = A A Ax x x 0 + η η η ∈ R m where A A A ∈ R m×n and m n. 
We also assume that x x x 0 is drawn from a particular data distribution D and that we are given a low number of examples drawn from the same distribution, i.e., x x x 0 ∼ D and we are given x x x i ∼ D where i ∈ [S]. Here and throughout this work, we refer to these examples drawn from D as low shots. We propose using the range of a deep neural network as a natural image model. In particular, we model the image x x x 0 as the output of a neural network G(z z z; θ θ θ), where z z z ∈ R k is a latent code and θ θ θ ∈ R P are the parameters of the network. Pre-training: Prior to solving the inverse problem, we propose to first pre-train the network using the low shots that are given. More specifically, we fit the weights and input of the neural network to the low shots to provide a crude approximation to the data distribution underlying the image of interest. Given low shots {x, we aim to find latent codes {z z z i} S i=1 and parameters θ θ θ that solve where L: R n × R n → R is a loss function. We investigate the use of different loss functions in a later section. The ing optimal parameters found are denoted byθ θ θ,ẑ z z 1,...,ẑ z z S. Solving the inverse problem: Using the weights found via pre-training, we begin solving the inverse problem by first optimizing over the latent code space to find an approximate solution: We investigated different ways to initialize the latent code and found that sampling from a multivariate Gaussian distribution fit using {ẑ z z i} S i=1 was sufficient. Note here that we keep the parameters of the network fixed after training. The intuition is that we want to use the semantics regarding the data distribution learned via pre-training the network's parameters and find the optimal latent code that corresponds to the image of interest. Once the optimal latent codeẑ z z is found, we then refine our solution by solving withθ θ θ andẑ z z as our initial iterates. The ing parameters θ θ θ 0 and z z z 0 give our final estimate: Losses to learn the data distribution: We discuss two loss functions that we considered in our experiments to learn semantics regarding the underlying data distribution. The first is a simple 2 reconstruction loss to promote data fidelity, i.e., we pre-train our network by solving While used a combination of the Laplacian-L1 loss and 2 loss, we found the 2 loss to work well. The second loss is an estimate of the kernel MMD for comparing two probability distributions using only finitely many samples. In our case, given a kernel k(·, ·) 2 and low shots x x x j ∼ D for j ∈ [S], we want to find parameters and S inputs that solve the following: We compare the success of these two loss functions in the following section. We now consider solving inverse problems with our approach and compare to three different baselines: an untrained neural network, optimizing the latent space of a trained Wasserstein GAN with gradient penalty, and the image-adaptivity approach of (IAGAN). Each method uses the same DCGAN architecture with a latent code dimension of 128. In each problem, the image of interest is from a hold-out test set from the CelebA dataset. The GAN was trained on a corpus of over 200, 000 64 × 64 celebrity images and our low-shot models were trained on small (5 − 100 images) subsets of this. Compressed Sensing: We first consider the compressed sensing problem where we want to recover an image x x x 0 ∈ R n from random Gaussian measurements of the form y y y 0 = A A Ax x x 0 ∈ R m where A A A ∈ R m×n has i.i.d. N entries with m n. 
We refer to the amount of undersampling m/n as the compression ratio. We trained our models using the two different loss functions proposed in the previous section for various numbers of shots S. We note that as the number of shots increases, our method continues to improve, and we see comparable performance between our method with only 100 shots and optimizing over the latent code space of a fully trained GAN. While the ℓ2-trained nets perform slightly better than the MMD-trained nets for low numbers of shots, the MMD-trained nets improve more steadily and consistently as the number of shots increases. While we expect IAGAN to be superior due to being trained with over 200,000 images, the MMD-trained model's performance with 100 images is not far behind. We note that for higher numbers of measurements, the untrained neural network's performance surpasses that of our 5-shot models. This is mainly because we optimized the untrained neural network's parameters for a longer period of time and with a higher learning rate than our low-shot models in this easier setting.

We now consider an inverse problem that requires an understanding of the underlying data's semantics: the colorization task. Here we want to recover an RGB image x_0 ∈ R^{64×64×3} from its grayscale version y_0 = A x_0 ∈ R^{64×64}. The operator A mixes the color channels of the image via the ITU-R 601-2 luma transform (the same transform used by the Python Imaging Library (PIL)). Untrained neural networks clearly fail in solving this type of problem since they have no prior information regarding the data distribution. We compare our model trained with 10 shots using the MMD loss to the various baselines in Figure 2. Note that even with only 10 previous examples, our model does not fall prey to the usual issues with using untrained neural networks for colorization. Our algorithm provides faithful image reconstructions that are even on par with a trained GAN.
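To make the procedure above concrete, two short sketches follow. The first is the finite-sample kernel MMD estimate used as a pre-training loss; the RBF kernel and its bandwidth are illustrative choices, since the text leaves the kernel k(·,·) unspecified.

```python
import torch

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between rows of X and Y (flattened images)."""
    d2 = torch.cdist(X, Y) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(generated, real, bandwidth=1.0):
    """Estimate of the squared kernel MMD between a generated batch and the S low shots."""
    kxx = rbf_kernel(generated, generated, bandwidth).mean()
    kyy = rbf_kernel(real, real, bandwidth).mean()
    kxy = rbf_kernel(generated, real, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy
```

During pre-training, mmd2(G(Z), low_shots), with Z the current latent codes, can be minimized jointly over Z and the network weights, mirroring the joint optimization over {z_i} and θ described earlier. The second sketch is the two-step recovery used to solve the inverse problem. It assumes a pre-trained generator G as a torch module (e.g., the DCGAN used in these experiments), a measurement matrix A, and a plain least-squares data-fidelity term standing in for the omitted objectives; step counts and learning rates are illustrative.

```python
import torch

def recover(G, A, y, z_init, steps_z=800, steps_joint=300, lr_z=1e-2, lr_joint=1e-4):
    """Two-stage recovery sketch for y = A x: latent-space search with frozen
    weights, then joint refinement of latent code and weights."""
    z = z_init.clone().requires_grad_(True)

    # Stage 1: optimize the latent code only; theta stays at its pre-trained value.
    opt = torch.optim.Adam([z], lr=lr_z)
    for _ in range(steps_z):
        opt.zero_grad()
        loss = ((A @ G(z).reshape(-1) - y) ** 2).sum()
        loss.backward()
        opt.step()

    # Stage 2: refine z and theta jointly, initialized at the stage-one iterates.
    opt = torch.optim.Adam([z] + list(G.parameters()), lr=lr_joint)
    for _ in range(steps_joint):
        opt.zero_grad()
        loss = ((A @ G(z).reshape(-1) - y) ** 2).sum()
        loss.backward()
        opt.step()

    return G(z).detach()
```

In stage one only the latent code receives gradients, which preserves the semantics learned during pre-training; stage two then trades some of that prior for a tighter fit to the measurements.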
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ryxOh7n9Ir
We show how pre-training an untrained neural network with as few as 5-25 examples can improve reconstruction results in compressed sensing and semantic recovery problems like colorization.
We propose Cooperative Training (CoT) for training generative models that measure a tractable density for discrete data. CoT coordinately trains a generator G and an auxiliary predictive mediator M. The training target of M is to estimate a mixture density of the learned distribution G and the target distribution P, and that of G is to minimize the Jensen-Shannon divergence estimated through M. CoT achieves independent success without the necessity of pre-training via Maximum Likelihood Estimation or involving high-variance algorithms like REINFORCE. This low-variance algorithm is theoretically proved to be superior for both sample generation and likelihood prediction. We also theoretically and empirically show the superiority of CoT over most previous algorithms in terms of generative quality and diversity, predictive generalization ability and computational cost. Generative modeling is essential in many scenarios, including continuous data modeling (e.g. image generation BID6, stylization BID17, semisupervised classification BID13) and sequential discrete data modeling (e.g. neural text generation BID2).For discrete data with tractable density like natural language, generative models are predominantly optimized through Maximum Likelihood Estimation (MLE), inevitably introducing exposure bias BID14, which in that given a finite set of observations, the optimal parameters of the model trained via MLE do not correspond to the ones maximizing the generative quality. Specifically, the model is trained on the data distribution of inputs and tested on a different distribution of inputs, namely, the learned distribution. This discrepancy implies that in the training stage, the model is never exposed to its own errors and thus in the test stage, the errors made along the way will quickly accumulate. On the other hand, for general generative modeling tasks, an effective framework, named Generative Adversarial Network (GAN) BID6, was proposed to train an implicit density model for continuous data. GAN introduces a discriminator D φ parametrized by φ to distinguish the generated samples from the real ones. As is proved in BID6, GAN essentially optimizes an approximately estimated Jensen-Shannon divergence (JSD) between the currently learned distribution and the target distribution. GAN shows promising in many unsupervised and semi-supervised learning tasks. The success of GAN in the naissance of a new paradigm of deep generative models, i.e. adversarial networks. However, since the gradient computation requires backpropagation through the generator's output, GAN can only model the distribution of continuous variables, making it non-applicable for generating discrete sequences like natural language. Researchers then proposed Sequence Generative Adversarial Network (SeqGAN), which uses model-free policy gradient algorithm to optimize the original GAN objective. With SeqGAN, the expected JSD between current and target discrete data distribution is minimized if the training is perfect. SeqGAN shows observable improvements in many tasks. Since then, many variants of SeqGAN have been proposed to improve its performance. Nonetheless, SeqGAN is not an ideal algorithm for this problem, and current algorithms based on it cannot show stable, reliable and observable improvements that covers all scenarios, according to a previous survey. 
The detailed reason will be discussed in detail in Section 2.In this paper, we propose Cooperative Training (CoT), a novel, low-variance, bias-free algorithm for training likelihood-based generative models on discrete data by directly optimizing a wellestimated Jensen-Shannon divergence. CoT coordinately trains a generative module G, and an auxiliary predictive module M, called mediator, for guiding G in a cooperative fashion. For theoretical soundness, we derive the proposed algorithm directly from the definition of JSD. We further empirically and theoretically demonstrate the superiority of our algorithm over many strong baselines in terms of generative performance, generalization ability and computational performance in both synthetic and real-world scenarios. Notations. P denotes the target data distribution. θ denotes the parameters of the generative module G. φ denotes the parameters of the auxiliary predictive mediator module M. Any symbol with subscript g and m stands for that of the generator and mediator, respectively. s stands for a complete sample from the training dataset or a generated complete sequence, depending on the specific context. s t means the t-length prefix of the original sequence, i.e. an incomplete sequence of length t. x denotes a token, and x t stands for a token that appears in the t-th place of a sequence. Thus s t = [x 0, x 1, x 2, . . ., x t−1] while the initial case s 0 is ∅. Maximum likelihood estimation is equivalent to minimizing the KL divergence using the samples from the real distribution:min DISPLAYFORM0 where G θ (s) is the estimated probability of s by G θ and p data is the underlying real distribution. Limitations of MLE. MLE is essentially equivalent to optimizing a directed Kullback-Leibler (KL) divergence between the target distribution P and the currently learned distribution G, denoted as KL(P G). However, since KL divergence is asymmetric, given finite observations this target is actually not ideal. As stated in BID0, MLE tries to minimize DISPLAYFORM1 • When P (s) > 0 and G(s) → 0, the KL divergence grows to infinity, which means MLE assigns an extremely high cost to the "mode dropping" scenarios, where the generator fails to cover some parts of the data.• When G(s) > 0 and P (s) → 0, the KL divergence shrinks to 0, which means MLE assigns an extremely low cost to the scenarios, where the model generates some samples that do not locate on the data distribution. Likewise, optimizing KL(G P) will lead to exactly the reversed problems of the two situations. An ideal solution is to optimize a symmetrized and smoothed version of KL divergence, i.e. the Jensen-Shannon divergence (JSD), which is defined as DISPLAYFORM2 where M = 1 2 (P + G). However, directly optimizing JSD is conventionally considered as an intractable problem. JSD cannot be directly evaluated and optimized since the equally interpolated distribution M is usually considered to be unconstructable, as we only have access to the learned model G instead of P. SeqGAN incorporates two modules, i.e. the generator and discriminator, parametrized by θ and φ respectively, as in the settings of GAN. By alternatively training these two modules, SeqGAN optimizes such an adversarial target: min Collect two equal-sized mini-batch of samples {sg} and {sp} from G θ and P, respectively 5: DISPLAYFORM0 Mix {sg} and {sp} as {s} 6:Update mediator M φ with {s} via Eq. 7: end for 8:Generate a mini-batch of sequences {s} ∼ G θ 9:Update generator G θ with {s} via Eq. 
10: until CoT convergesThe objectives of generator G θ and discriminator D φ in SeqGAN can be formulated as DISPLAYFORM1 Discriminator: DISPLAYFORM2 where s ∼ G θ = [x 1, ..., x n] denotes a complete sequence sampled from the generator and the action value BID16, the fact that SeqGAN is essentially based on model-free reinforcement learning makes it a non-trivial problem for SeqGAN to converge well. As a , SeqGAN usually gets stuck in some fake local optimals. Specifically, although the discriminator can distinguish the samples from the generator easily, it is not able to effectively guide the generator because of the vanishing gradient, as is discussed in a recent survey. Although this problem can be alleviated by reshaping the reward signals based on the relative rankings of the outputs in a mini-batch BID11 BID8, they are more technical workarounds than essential solutions. DISPLAYFORM3 Second, SeqGAN trained via REINFORCE suffers from the "mode collapse" problem, which is similar to the original GAN. That is to say, the learned distribution "collapses" to the other side of KL divergence, i.e. KL(G P), which leads to the loss of diversity of generated samples. In other words, SeqGAN trains the model for better generative quality at the cost of diversity.3 COOPERATIVE TRAINING To be consistent with the goal that the target distribution should be well-estimated in both quality and diversity senses, an ideal algorithm for such models should be able to optimize a symmetric divergence or distance. For sequential discrete data modeling, since the data distribution is decomposed into a sequential product of finite-dimension multinomial distributions (always based on the softmax form), the failures of effectively optimizing JSD when the generated and real data distributions are distant, as discussed in, will not appear. As such, to optimize JSD is feasible. However, to our knowledge, no previous algorithms provide a direct, low-variance optimization of JSD. In this paper, we propose Cooperative Training (CoT), as shown in Algorithm 1, to directly optimize a well-estimated unbiased JSD for training such models. Each iteration of Cooperative Training mainly consists of two parts. The first part is to train a mediator M φ, which is a density function that estimates a mixture distribution of the learned generative distribution G θ and target latent distribution P = p data as DISPLAYFORM0 Since the mediator is only used as a density prediction module during training, the directed KL divergence is now free from so-called exposure bias for optimization of M φ. Denote DISPLAYFORM1 Lemma 1 (Mixture Density Decomposition) DISPLAYFORM2 By Lemma 1, for each step, we can simply mix balanced samples from training data and the generator, then train the mediator via Maximum Likelihood Estimation with the mixed samples. The objective J m (φ) for the mediator M parametrized by φ therefore becomes DISPLAYFORM3 Since the objective of MLE is bias-free for predictive purposes, the estimated M φ is also bias-free when adopted for estimating JSD. The training techniques and details will be discussed in Section 4.After each iteration, the mediator is exploited to optimize an estimated Jensen-Shannon divergence for G θ: DISPLAYFORM4 Note that the gradient Eq. should be performed for only one step because once G θ is updated the current mediator's estimation M φ becomes inaccurate. 
For any sequence or prefix of length t, we have: DISPLAYFORM5 DISPLAYFORM6 The detailed derivations can be found in the supplementary material. Note that Lemma 2 can be applied recursively. That is to say, given any sequence s t of arbitrary length t, optimizing s t's contribution to the expected JSD can be decomposed into optimizing the first term of Eq. FORMULA0 and solving an isomorphic problem for s t−1, which is the longest proper prefix of s t. When t = 1, since in Markov decision process the probability for initial state s 0 is always 1.0, it is trivial to prove that the final second term becomes 0.Therefore, Eq. can be reduced through recursively applying Lemma 2. After removing the constant multipliers and denoting the predicted probability distribution over the action space, i.e. G θ (·|s t) and M φ (·|s t), as π g (s t) and π m (s t) respectively, the gradient ∇ θ J g (θ) for training generator via Cooperative Training can be formulated as DISPLAYFORM7 For tractable density models with finite discrete action space in each step, the practical effectiveness of this gradient is well guaranteed for the following reasons. First, with a random initialization of the model, the supports of distributions G θ and P are hardly disjoint. Second, the first term of Eq. FORMULA0 is to minimize the cross entropy between G and M *, which tries to enlarge the overlap of two distributions. Third, since the second term of Eq. FORMULA0 is equivalent to maximizing the entropy of G, it encourages the support of G to cover the whole action space, which avoids the case of disjoint supports between G and P.The overall objective of CoT can be formulated as finding the maximal entropy solution of max DISPLAYFORM8 Note the strong connections and differences between the optimization objective of CoT FORMULA0 and that of GAN. FIG0 illustrates the whole Cooperative Training process. CoT has theoretical guarantee on its convergence. Theorem 3 (Jensen-Shannon Consistency) If in each step, the mediator M φ of CoT is trained to be optimal, i.e. M φ = M * = 1 2 (G θ + P), then optimization via Eq. leads to minimization of JSD(G P).Proof. Let p denote the intermediate states. It would be used in the detailed proof. All we need to show is DISPLAYFORM0 By inversely applying Lemma 2, the left part in Eq. can be recovered as DISPLAYFORM1 which is equivalent to DISPLAYFORM2 Since now mediator is trained to be optimal, i.e. M φ = M *, we have DISPLAYFORM3 This means training through CoT leads to minimization ofĴSD(P G θ). When the mediator is trained to be optimal,ĴSD(P G θ) = JSD(P G θ). This verifies the theorem. CoT has several practical advantages over previous methods, including MLE, Scheduled Sampling (SS) BID3 and adversarial methods like SeqGAN.First, although CoT and GAN both aim to optimize an estimated JSD, CoT is exceedingly more stable than GAN. This is because the two modules, namely generator and mediator, have similar tasks, i.e. to approach the same data distribution generatively and predictively. The superiority of CoT over inconsistent methods like Scheduled Sampling is obvious, since CoT theoretically guarantees the training effectiveness. Compared with methods that require pre-training in order to reduce variance like SeqGAN, CoT is computationally cheaper. More specifically, under recommended settings, CoT has the same order of computational complexity as MLE.Besides, CoT works independently. In practice, it does not require model pre-training via conventional methods like MLE. 
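Putting the two parts of each iteration together, a schematic sketch of one CoT step is given below. The generator and mediator are assumed to be autoregressive sequence models exposing sampling, sequence log-likelihood, and per-step log-probabilities; these interface names are illustrative and are not taken from the paper's code.

```python
import torch

def cot_step(generator, mediator, real_batch, g_opt, m_opt):
    """One Cooperative Training iteration, schematically.
    Assumed interface for both models:
      sample(n)            -> n sampled token sequences
      log_prob(seqs)       -> per-sequence log-likelihood
      step_log_probs(seqs) -> (batch, T, vocab) log pi(.|s_t) at every prefix"""
    # Mediator update: maximum likelihood on a balanced mixture of real and
    # generated sequences (the objective J_m above).
    fake_batch = generator.sample(real_batch.size(0))
    mixed = torch.cat([real_batch, fake_batch], dim=0)
    m_loss = -mediator.log_prob(mixed).mean()
    m_opt.zero_grad()
    m_loss.backward()
    m_opt.step()

    # Generator update: per-step cross entropy to the mediator minus the
    # generator's own entropy, i.e. a per-step KL(pi_g || pi_m), summed over time.
    seqs = generator.sample(real_batch.size(0))
    log_pi_g = generator.step_log_probs(seqs)
    log_pi_m = mediator.step_log_probs(seqs).detach()
    kl_per_step = (log_pi_g.exp() * (log_pi_g - log_pi_m)).sum(dim=-1)
    g_loss = kl_per_step.sum(dim=-1).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return m_loss.item(), g_loss.item()
```

Note that, as required above, the generator takes only a single gradient step per iteration, since the mediator's estimate becomes stale once G_θ moves.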
This is the first time that unbiased unsupervised learning is achieved on sequential discrete data without using supervised approximation for variance reduction or sophisticated smoothing as in Wasserstein GAN with gradient penalty (WGAN-GP) BID7. An interesting problem is to ask why we need to train a mediator by mixing the samples from both sources G and P, instead of directly training a predictive modelP on the training set via MLE. There are basically two points to interpret this. To apply the efficient training objective 13, one needs to obtain not only the mixture density model M = 1 2 (P + G) but also its decomposed form in each timestep i.e. M φ (s) = n t=1 M φ (s t |s t−1), without which the term π m (s t) in Eq 13 cannot be computed efficiently. This indicates that if we directly estimate P and compute M = 1 2 (G + P), the obtained M will be actually useless since its decomposed form is not available. Besides, as a derivative problem of "exposure bias", there is no guarantee for the modelP to work well on the generated samples i.e. s ∼ G θ to guide the generator towards the target distribution. Given finite observations, the learned distributionP is trained to provide correct predictions for samples from the target distribution P. There is no guarantee thatP can stably provide correct predictions for guiding the generator. Ablation study is provided in the appendix. Following the synthetic data experiment setting in, we design a synthetic Turing test, in which the negative log-likelihood NLL oracle from an oracle LSTM is calculated for evaluating the quality of samples from the generator. Particularly, to support our claim that our method causes little mode collapse, we calculated NLL test, which is to sample an extra batch of samples from the oracle, and to calculate the negative log-likelihood measured by the generator. We show that under this more reasonable setting, our proposed algorithm reaches the state-of-the-art performance with exactly the same network architecture. Note that models like LeakGAN BID8 contain architecture-level modification, which is orthogonal to our approach, thus will not be included in this part. The are shown in TAB2. Computational Efficiency Although in terms of time cost per epoch, CoT does not achieve the state-of-the-art, we do observe that CoT is remarkably faster than previous RL-GAN approaches. Besides, consider the fact that CoT is a sample-based optimization algorithm, which involves time BID3 8.89 8.71/-(MLE) (The same as MLE) 32.54 ± 1.14s Professor Forcing BID10 9 To show the hyperparameter robustness of CoT, we compared it with the similar as were evaluated in SeqGAN. DISPLAYFORM0 cost in sampling from the generator, this is acceptable. The also verifies our claim that CoT has the same order (i.e. the time cost only differs in a constant multiplier or extra lower order term) of computational complexity as MLE.Hyper-parameter Robustness. We perform a hyper-parameter robustness experiment on synthetic data experiment. When compared with the of similar experiments as in SeqGAN, our approach shows less sensitivity to hyper-parameter choices, as shown in FIG1. Note that since in all our attempts, the evaluated JSD of SeqGAN fails to converge, we evaluated NLL oracle for it as a replacement. Self-estimated Training Progress Indicator. Like the critic loss, i.e. estimated Earth Mover Distance, in WGANs, we find that the training loss of the mediator, namely balanced NLL, can be a real-time training progress indicator as shown in FIG2. 
Specifically, in a wide range, balanced NLL is a good estimation of real JSD(G P) with a steady translation, namely, balanced N LL = JSD(G P) + H(G) + H(P). 2.900 (σ = 0.025) 3.118 (σ = 0.018) 3.122 RankGAN BID11 As an important sequential data modeling task, zero-prior text generation, especially long and diversified text generation, is a good testbed for evaluating the performance of a generative model. Following the experiment proposed in LeakGAN BID8, we choose EMNLP 2017 WMT News Section as our dataset, with maximal sentence length limited to 51. We pay major attention to both quality and diversity. To keep the comparison fair, we present two implementations of CoT, namely CoT-basic and CoT-strong. As for CoT-basic, the generator follows the settings of that in MLE, SeqGAN, RankGAN and MaliGAN. As for CoT-strong, the generator is implemented with the similar architecture in LeakGAN.For quality evaluation, we evaluated BLEU on a small batch of test data separated from the original dataset. For diversity evaluation, we evaluated the estimated Word Mover Distance BID9, which is calculated through training a discriminative model between generated samples and real samples with 1-Lipschitz constriant via gradient penalty as in WGAN-GP BID7. To keep it fair, for all evaluated models, the architecture and other training settings of the discriminative models are kept the same. The are shown in TAB4 and TAB5. In terms of generative quality, CoT-basic achieves state-of-the-art performance over all the baselines with the same architecture-level capacity, especially the long-term robustness at n-gram level. CoT-strong using a conservative generation strategy, i.e. setting the inverse temperature parameter α higher than 1, as in BID8 achieves the best performance over all compared models. In terms of generative diversity, the show that our model achieves the state-of-the-art performance on all metrics including NLL test, which is the optimization target of MLE. We proposed Cooperative Training, a novel training algorithm for generative modeling of discrete data. CoT optimizes Jensen-Shannon Divergence, which does not have the exposure bias problem as the forward KLD. Models trained via CoT shows promising in sequential discrete data modeling tasks, including sample quality and the generalization ability in likelihood prediction tasks. B SAMPLE COMPARISON AND DISCUSSION TAB6 shows samples from some of the most powerful baseline models and our model. • CoT produces remarkably more diverse and meaningful samples when compared to Leak-GAN.• The consistency of CoT is significantly improved when compared to MLE. The Optimal Balance for Cooperative Training We find that the same learning rate and iteration numbers for the generator and mediator seems to be the most competitive choice. As for the architecture choice, we find that the mediator needs to be slightly stronger than the generator. For the best in the synthetic experiment, we adopt exactly the same generator as other compared models and a mediator whose hidden state size is twice larger (with 64 hidden units) than the generator. Theoretically speaking, we can and we should sample more batches from G θ and P respectively for training the mediator in each iteration. However, if no regularizations are used when training the mediator, it can easily over-fit, leading the generator's quick convergence in terms of KL(G θ P) or NLL oracle, but divergence in terms of JSD(G θ P). 
Empirically, this could be alleviated by applying dropout techniques BID15 with 50% keeping ratio before the output layer of RNN. After applying dropout, the empirical show good consistency with our theory that, more training batches for the mediator in each iteration is always helpful. However, applying regularizations is not an ultimate solution and we look forward to further theoretical investigation on better solutions for this problem in the future. " I think it was alone because I can do that, when you're a lot of reasons, " he said. It's the only thing we do, we spent 26 and $35(see how you do is we lose it," said both sides in the summer. CoT We focus the plans to put aside either now, and which doesn't mean it is to earn the impact to the government rejected. The argument would be very doing work on the 2014 campaign to pursue the firm and immigration officials, the new review that's taken up for parking. This method is true to available we make up drink with that all they were willing to pay down smoking. The number of people who are on the streaming boat would study if the children had a bottle -but meant to be much easier, having serious ties to the outside of the nation. However, they have to wait to get the plant in federal fees and the housing market's most valuable in tourism. MLE after the possible cost of military regulatory scientists, chancellor angela merkel's business share together a conflict of major operators and interest as they said it is unknown for those probably 100 percent as a missile for britain. but which have yet to involve the right climb that took in melbourne somewhere else with the rams even a second running mate and kansas. " la la la la 30 who appeared that themselves is in the room when they were shot her until the end " that jose mourinho could risen from the individual. when aaron you has died, it is thought if you took your room at the prison fines of radical controls by everybody, if it's a digital plan at an future of the next time. Possible Derivatives of CoT The form of equation 13 can be modified to optimize other objectives. One example is the backward KLD (a.k.a. Reverse KLD) i.e. KL(G P). In this case, the objective of the so-called "Mediator" and "Generator" thus becomes:"Mediator", now it becomes a direct estimatorP φ of the target distribution P: DISPLAYFORM0 Generator: DISPLAYFORM1 Such a model suffers from so-called mode-collapse problem, as is analyzed in Ian's GAN Tutorial BID5. Besides, as the distribution estimatorP φ inevitably introduces unpredictable behaviors when given unseen samples i.e. samples from the generator, the algorithm sometimes fails (numerical error) or diverges. In our successful attempts, the algorithm produces similar (not significantly better than) as CoT. The quantitive are shown as follows: Although under evaluation of weak metrics like BLEU, if successfully trained, the model trained via Reverse KL seems to be better than that trained via CoT, the disadvantage of Reverse KL under evaluation of more strict metric like eWMD indicates that Reverse KL does fail in learning some aspects of the data patterns e.g. completely covering the data mode.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkxxIs0qY7
We proposed Cooperative Training, a novel training algorithm for generative modeling of discrete data.
Intrinsic rewards in reinforcement learning provide a powerful algorithmic capability for agents to learn how to interact with their environment in a task-generic way. However, increased incentives for motivation can come at the cost of increased fragility to stochasticity. We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics. Ultimately, an estimate of the conditional probability of observed states is used as our intrinsic reward for curiosity. In our experiments, a video game agent uses our model to autonomously learn how to play Atari games using our curiosity reward in combination with extrinsic rewards from the game to achieve improved performance on games with sparse extrinsic rewards. When stochasticity is introduced in the environment, our method still demonstrates improved performance over the baseline. Methods encouraging agents to explore their environment by rewarding actions that yield unexpected are commonly referred to as curiosity (Schmidhuber (1991; 1990a; b) ). Using curiosity as an exploration policy in reinforcement learning has many benefits. In scenarios in which extrinsic rewards are sparse, combining extrinsic and intrinsic curiosity rewards gives a framework for agents to discover how to gain extrinsic rewards. In addition, when agents explore, they can build more robust policies for their environment even if extrinsic rewards are readily available . These policies learned through exploration can give an agent a more general understanding of the of their actions so that the agent will have a greater ability to adapt using their existing policy if their environment changes. Despite these benefits, novelty-driven exploration methods can be distracted by randomness. (b;) When stochastic elements are introduced in the environment, agents may try to overfit to noise instead of learning a deterministic model of the effect of their own actions on their world. In particular, Burda et al. (2018a) showed that when a TV with white noise is added to an environment in which an agent is using the intrinsic curiosity module (ICM) developed by , the agent stops exploring the environment and just moves back and forth in front of the TV. In this paper, we present a new method for agent curiosity which provides robust performance in sparse reward environments and under stochasticity. We use a conditional variational autoencoder to develop a model of our environment. We choose to develop a conditional variational autoencoder (CVAE) due to the success of this architecture in modeling dynamics shown in the video prediction literature . We incorporate additional modeling techniques to regularize for stochastic dynamics in our perception model. We compute our intrinsic reward for curiosity by sampling from the latent space of the CVAE and computing an associated conditional probability which is a more robust metric than the commonly used pixel-level reconstruction error. The primary contributions of our work are the following. 1. Perception-driven approach to curiosity. We develop a perception model which integrates model characteristics proven to work well for deep reinforcement learning with recent architectures for estimating dynamics from pixels. This combination retains robust-ness guarantees from existing deep reinforcement learning models while improving the ability to capture complex visual dynamics. 2. Bayesian metric for surprise. 
We use the entropy of the current state given the last state as a measurement for computing surprise. This Bayesian approach will down-weight stochastic elements of the environment when learning a model of dynamics. As a , this formulation is robust to noise. For our experiments, autonomous agents use our model to learn how to play Atari games. We measure the effectiveness of our surprise metric as a meaningful intrinsic reward by tracking the total achieved extrinsic reward by agents using a combination of our intrinsic reward with extrinsic rewards to learn. We show that the policy learned by a reinforcement learning algorithm using our surprise metric outperforms the policies learned by alternate reward schemes. Furthermore, we introduce stochasticity into the realization of actions in the environment, and we show that our method still demonstrates successful performance beyond that of the baseline method. Perception-Driven Curiosity. Several existing models incentivize curious agent behavior through estimating and seeking visual novelty. and generalize count-based exploration traditionally used in tabular settings for continuous states. Burda et al. (2018b) learns a predictive model on features given by a randomly initialized target network and uses reconstruction error of the random features as intrinsic reward. provides a full review of recent perception-driven methods to encourage curiosity. In this work, we combine a CVAE, an architecture recently used to successfully estimate dynamics from image frames, with methodological approaches from deep reinforcement learning to build a robust perception model in which visual novelty is computed via an estimation of the conditional log-likelihood of observed states. Prediction-Based Exploration Bonuses. proposed an approach to exploration by building prediction models and formulating intrinsic reward as the error of the next state prediction. Recently, this line of work has been shown to explore efficiently in a large number of simulated environments by and Burda et al. (2018a). formalizes the prediction error as Bayesian surprise given a heteroscedastic Gaussian predictive model. This approach is closest to our own. However, in contrast to these reward methods which are built upon simple predictive models, our formulation of Bayesian surprise is computed via importance sampling from our latent variable model. This construction of surprise is significant due to the ability of this variational inference approach to express complex multimodal distributions over images. Information-Theoretic Measures for Exploration. Several methods rely on maximizing information-theoretic measures of agent behavior. The method by maximizes the entropy of agent trajectories between successive states directly. propose to learn skills without supervision by maximizing the mutual information between a latent skill embedding and the behaviour that the associated skill produces. introduce a method for unsupervised goal-conditional reinforcement learning which maximizes the entropy of the distribution of possible goals. uses the KL divergence between the previous and current dynamics models as intrinsic reward as a proxy for information gain. We indirectly measure the entropy of agent trajectories by measuring surprise in terms of the entropy of next state given the previous state, providing an alternative to these existing approaches. Video Prediction with Latent Variable Models. 
introduced a stochastic model for sequential data based on variational inference approaches by and. This model was adopted for high-dimensional data such as video by;; and. A similar model based on was used for next frame prediction in. We leverage the success of variational inference techniques for high-dimensional data to construct a stochastic model of videos from which surprise can be efficiently estimated. We construct a perception model for our agents using a conditional variational autoencoder (CVAE) which generates an estimation of an embedded state, φ t+1, given the embedded state itself, φ t+1. This generative model is conditioned on the last embedded state, φ t, and action, a t. Intuitively, this construction gives an agent a visual model of the environment conditioned on the dynamics associated with their interactions with the environment. The state embeddings are encoded by a neural network submodule in our architecture which computes feature vectors φ t and φ t+1 from states s t and s t+1 respectively. In our experiments, the states we observe from our simulation environment are image frames. We derive our approach and the following properties of our model from the theoretical properties of conditional variational autoencoders presented by. We first define p(φ t+1 |z, φ t, a t) as the generative distribution from which we draw the output φ t+1. The prior distribution of the latent space is given by p(z|φ t, a t) which is relaxed in the CVAE formulation to make the latent variable, z, statistically independent of the input variables. Thus, our prior for the latent space distribution is given by p(z) ∼ N (0, I). Through training, our model learns a latent representation. The distribution of this representation, q(z|φ t+1, φ t, a t), approximates p(z|φ t, a t). Using the previously defined distributions and the analysis by , we define the empirical lower bound of the conditional log-likelihood and objective function, f, of our CVAE as We recall that the sum of log-probabilities is equal to the reconstruction loss of our model up to a constant and multiplicative offset. Thus, we denote We can write the KL-divergence term in Equation 1 as The final component of our perception model is a neural network predicting a t from φ t and φ t+1 built off the inverse model presented by. This component regularizes for dynamics in the environment which do not contribute to agent actions. We note that this component controls for environment stochasticity when learning the weights of our network. The error in action prediction can be formulated in terms of maximum likelihood estimation of the parameters of our network under a multinomial distribution. This error is used as the loss for our action-prediction network and denoted L A. We can now use the approximation of our CVAE objective function and the loss from our actionprediction network to formulate the total loss for our perception model as We tune the hyper-parameters λ 1, λ 2, and λ 3 to weight the contribution of each loss in our model. The tuning procedure and hyperparameters used in our experiments are given in Appendix B. The full architecture of our perception model is shown Figure 1.b. We define Bayesian surprise as the amount an agent should be curious about an observation derived from a conditional likelihood of the observation occurring given the current world model of the agent. From our definition of curiosity, we want to reward actions more strongly which in less likely outcomes. 
Therefore, we use the negative of this conditional probability estimate as a reward for agents. In our approach, this probabilistic reward takes the form r_t = −log p(φ_{t+1} | φ_t, a_t). Similar objectives were used in prior work which considered simple homoscedastic (Burda et al. (2018a)) or heteroscedastic Gaussian forward models. Due to our use of a base CVAE architecture, our perception model can capture multimodal distributions over images. To retain this improved expressiveness in our derived intrinsic reward, we use importance sampling from the latent space of our CVAE to estimate conditional likelihoods for our formulation of surprise as follows:

log p(φ_{t+1} | φ_t, a_t) = log E_{z∼q(z|φ_{t+1}, φ_t, a_t)} [ p(φ_{t+1} | z, φ_t, a_t) p(z) / q(z | φ_{t+1}, φ_t, a_t) ]
≥ E_{z∼q(z|φ_{t+1}, φ_t, a_t)} [ log ( p(φ_{t+1} | z, φ_t, a_t) p(z) / q(z | φ_{t+1}, φ_t, a_t) ) ].

We use the reconstruction loss of our model to compute the conditional probability log p(φ_{t+1} | z, φ_t, a_t). We recall that the negative logarithm of our conditional probability is equal to Bayesian surprise, so we explicitly define our reward as

r_t = −E_{z∼q(z|φ_{t+1}, φ_t, a_t)} [ log ( p(φ_{t+1} | z, φ_t, a_t) p(z) / q(z | φ_{t+1}, φ_t, a_t) ) ].

We use the Bayesian surprise computed by our perception model as intrinsic reward input to a reinforcement learning algorithm. The interaction of this reward and our perception model with the reinforcement learning procedure is visualized in Figure 1.a.

We evaluate the ability of our model to enable effective and robust exploration. We use Atari video games as simulation environments since they provide reasonably complex visual environments with large variations in both sparsity of extrinsic reward and stochasticity in scenes between different games. As a result, Atari games have been frequently used as a testbed for curiosity approaches. We use our intrinsic reward measurement with the proximal policy optimization (PPO) reinforcement learning algorithm due to the ability of PPO to perform well with relatively little hyperparameter tuning. In training, we combine our intrinsic reward with extrinsic rewards provided by the game environments for task-specific success such as knocking blocks out of a wall in Breakout. We compare the ability of agents using this reward combination to learn to play different Atari games against that of agents using a leading alternate prediction-based exploration bonus (ICM) in combination with extrinsic rewards. We also compare our approach to agent behavior derived from policies learned from purely extrinsic rewards. Note that, though combination rewards are used to train PPO, each method is evaluated by comparing extrinsic reward per episode alone, since extrinsic rewards measure the successful accomplishment of tasks in each game. The hyperparameters used in training as well as additional implementation details are given in Appendix B. Furthermore, Appendix A shows details and analysis of the perception model performance throughout training via this active learning procedure.

We first test the impact of our curiosity reward on learning to play Atari games with varying levels of extrinsic reward sparseness. Gravitar, Beam Rider, Breakout, Space Invaders, and River Raid all have reasonably dense extrinsic rewards. In contrast, Montezuma's Revenge, Pitfall, and Private Eye all have sparse extrinsic rewards and are thus known as traditionally challenging games for deep reinforcement learning algorithms to play successfully with only the information provided by game scene observations.
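As a concrete rendering of the reward computation above, the following sketch estimates the Bayesian surprise for a single transition by importance sampling from the CVAE's latent posterior. The encode/decode interface and the Gaussian-decoder treatment of the reconstruction loss are assumptions for illustration, not the paper's API.

```python
import torch
from torch.distributions import Normal

def bayesian_surprise(cvae, phi_t, a_t, phi_t1, n_samples=8):
    """Estimate r_t = -log p(phi_{t+1} | phi_t, a_t) via the importance-sampling
    lower bound above. Assumed interface: cvae.encode(phi_t1, phi_t, a_t) -> (mu, sigma)
    of q(z | .), and cvae.decode(z, phi_t, a_t) -> reconstruction of phi_{t+1}."""
    mu, sigma = cvae.encode(phi_t1, phi_t, a_t)
    q = Normal(mu, sigma)
    prior = Normal(torch.zeros_like(mu), torch.ones_like(sigma))
    log_terms = []
    for _ in range(n_samples):
        z = q.rsample()
        recon = cvae.decode(z, phi_t, a_t)
        # log p(phi_{t+1} | z, phi_t, a_t) up to a constant: negative reconstruction error
        log_px = -((recon - phi_t1) ** 2).sum()
        log_terms.append(log_px + prior.log_prob(z).sum() - q.log_prob(z).sum())
    # The mean of the log importance weights lower-bounds log p(phi_{t+1} | phi_t, a_t);
    # the surprise reward is its negative.
    return -torch.stack(log_terms).mean()
```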
Our results for training 3 seeds for each method over 10 million timesteps in each of these games are plotted in Figure 2. Table 1 summarizes the extrinsic rewards achieved at the end of training. The best performance for each game is shown in bold in the respective row. For games with dense extrinsic rewards, the best performance is split somewhat equally between the 3 reward strategies. Thus, we conclude that we perform comparably to ICM in the case where extrinsic rewards are readily available. However, we also conclude that the value of using curiosity to improve performance in these cases is substantially less significant than when rewards are sparse, considering that we also show comparable performance to using extrinsic reward only in these games. We recall that the games with sparse extrinsic rewards are Pitfall, Montezuma's Revenge, and Private Eye. Though PPO achieved the highest extrinsic reward on Pitfall, no successful game strategy was found: the mean extrinsic reward for each method is always negative in Pitfall. Exploration necessary to ultimately discover a successful strategy may require incurring temporary negative extrinsic rewards, so comparing the averages before any working strategy is learned by any method is premature. On both Montezuma's Revenge and Private Eye, our method outperforms the combination of ICM and extrinsic rewards as well as extrinsic rewards only. We further observe that the variance of our method at convergence in Private Eye is very low. These results demonstrate the improved ability of our approach to use information from better models for perceiving complex scene dynamics in order to learn in the absence of extrinsic rewards. There is a challenging balance in methods that reward novelty: they must encourage exploration of the unknown while avoiding confounding the model by focusing on randomness. We showed that our method explores more effectively than our baseline in cases where extrinsic rewards are sparse. However, we now need to demonstrate that the improved performance did not introduce brittleness to environmental noise. Following Burda et al. (2018b), we perform a sticky-action experiment to demonstrate the robustness of our model to stochasticity. With a 25% probability, the action an agent takes is repeated over 8 consecutive frames in the environment, even though the agent believes that its new action decisions are being executed across those frames. We observe that the performance of ICM and our model in the presence of sticky actions is comparable, with slightly better performance from our model. Our results for training 3 seeds for each method over 10 million timesteps in each of these games are plotted in Figure 2. Table 2 summarizes the extrinsic rewards achieved at the end of training. The best performance for each game is shown in bold in the respective row. In summary, we presented a novel method to compute curiosity through the use of a meaningfully constructed model for perception. We used a conditional variational autoencoder (CVAE) to learn scene dynamics from image and action sequences, and computed an intrinsic reward for curiosity via a conditional probability derived from importance sampling from the latent space of our CVAE. In our experiments, we demonstrated that our approach allows agents to learn to accomplish tasks more effectively in environments with sparse extrinsic rewards without compromising robustness to stochasticity.
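The sticky-action protocol used in the robustness experiment above can be expressed as a small environment wrapper. This is only an illustrative sketch of the behavior described (a 25% chance that the chosen action is held for 8 frames while the agent's subsequent choices are silently ignored); it is not the exact evaluation code, and the class and parameter names are made up.

```python
import random
import gym

class StickyActionWrapper(gym.Wrapper):
    """With probability p, the current action gets 'stuck' and is repeated for
    the next `repeat` steps, regardless of what the agent chooses."""

    def __init__(self, env, p=0.25, repeat=8):
        super().__init__(env)
        self.p = p
        self.repeat = repeat
        self._stuck_action = None
        self._stuck_left = 0

    def step(self, action):
        if self._stuck_left > 0:
            action = self._stuck_action          # agent's new choice is ignored
            self._stuck_left -= 1
        elif random.random() < self.p:
            self._stuck_action = action
            self._stuck_left = self.repeat - 1
        return self.env.step(action)

    def reset(self, **kwargs):
        self._stuck_left = 0
        return self.env.reset(**kwargs)
```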
We show robustness to stochasticity in our action space which we support through the actionprediction network used in our perception model. However, robustness to stochasticity in scenes is a separate challenge which the method we use as our baseline, ICM, cannot handle well. (a) Stochasticity in scenes occurs when there are significant changes between sequential image frames which are random with respect to agent actions. We hypothesize that this stochasticity requires a different approach to handle. A consideration in comparing models for curiosity and exploration in deep reinforcement learning is that typically both the dynamics model and intrinsic reward metric are constructed and compared as unit as we did in this paper. However, a conditional probability estimation could be derived the dynamics model given by ICM just as reconstruction error could be used as intrinsic reward from our CVAE. Alternately, other metrics measuring novelty and learning such as the KL divergence between sequential latent distributions in our model have been proposed in a general manner by. An interesting direction for future work would be to explore the impact of intrinsic reward metrics for curiosity on robustness to stochasticity in scenes independent across different choices of dynamics model. We analyze the ability of an agent using our model to perceive the environment. We confirm that our perception model generates image embeddings which can successfully construct realistic image frames from our video games. We also show that increasing ability to perceive the environment is associated with decreasing intrinsic rewards. To set up this experiment, we built a visual decoder to reconstruct images from the embeddings learned by the visual encoder shown in Figure 1.b. We trained our decoder on the game images (s t) and the image embeddings (φ t). Then, we used our decoder to reconstruct images from our predicted image embeddings (φ t). We chose to execute this experiment on Kung Fu Master since it is one of the more visually complex Atari games. Figure 4: Relationship between reconstruction error and intrinsic reward. Figure 5 shows the reconstructed images from our perception model next to the image frames which are our states. We note that the observations provided by the OpenAI Gym Atari game simulations are grayscale and rescaled to size 84x84. These images are what we input into our visual encoder. The reconstructions of our predicted embeddings are shown next to these images along with measurements of reconstruction error between the embedded images and intrinsic reward associated with that time step. We observe that the high intrinsic rewards are associated with poor reconstructions of the image frames. Analogously, low intrinsic rewards are associated with good reconstructions of the image frames. We note that all of the scenes presented in this visualization are intentionally taken from relatively early in training. At this time, the agent has learned to only partially perceive the environment. Thus, we can visualize reasonable reconstructed scenes, but we have enough reconstruction error such that the perception component of our network will dominate the likelihood measurement we use for surprise. When the perception model improves after additional training, the relationship between intrinsic reward and image reconstruction error becomes less strong since intrinsic reward is also conditioned on the likelihood of the next state prediction which is determined by transition dynamics as well. 
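For reference, the decoder used in the visualization experiment above could be fit roughly as follows, with the visual encoder frozen and the decoder trained to invert φ_t back to the observed frame s_t. The training loop, loss, and interfaces are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def fit_decoder(decoder, encoder, frame_batches, optimizer, epochs=10):
    """Train a decoder that inverts the (frozen) visual encoder: phi_t -> s_t."""
    for _ in range(epochs):
        for s_t in frame_batches:                 # batches of 84x84 grayscale frames
            with torch.no_grad():
                phi_t = encoder(s_t)              # embeddings from the frozen encoder
            recon = decoder(phi_t)
            loss = F.mse_loss(recon, s_t)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return decoder

# At analysis time the same decoder is applied to the *predicted* embeddings,
# e.g. decoder(cvae.decode(z, phi_t, a_t)), to visualize what the perception
# model expects the next frame to look like.
```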
We more clearly observe the correlation between reconstruction error and intrinsic reward in Figure 4 for a subset of the training samples in our model. The linear trend becomes weaker and rewards are not as tightly clustered for samples later in training which demonstrates that our model recognizes distinct transition dynamics based on likelihood. Through our analysis in Figure 5, we observe the success of our CVAE model in perceiving the game environment in Kung Fu Master. In addition, we validate the success of our designed relationship between intrinsic reward and ability to perceive the environment with this analysis in Figure 4. To execute our experiments, we leveraged the implementation of the PPO algorithm provided by. We used the Atari simulation environment for our video game simulations available in OpenAI Gym and developed by. We also incorporated infrastructure code from along with their inverse model implementation which we use as an action-prediction network in our approach. We trained each method for 10 million time steps on each environment. Each time step is associated with processing one new frame from the environment. On a 2080 Ti NVIDIA GPU, training for 10 million time steps took approximately 7 hours. This training time is associated with processing about 425 frames per second. To tune hyperparameters for loss weights in our model, we used intrinsic reward only and swept a range of values between zero and one in a grid search for each weight. Thus, we analyzed how different weighting of the loss values induced agents to explore differently, and we choose the combination which yielded the maximum ing extrinsic reward realization through intrinsic reward training only. Once we had optimal loss weights, we then tuned for the intrinsic and extrinsic reward weight combination. We swept a range of values between zero and one this weight combination as well, and we again chose the values which yielded the highest extrinsic reward. We tried 3 different scales of latent space dimension before choosing the one which caused our perception model to perform the best. The hyperparameters we used in our experiments are listed in Table 3. Note that the intrinsic reward weight and extrinsic reward weight were the same for ICM and our method though we tuned for the weights separately for each model. Also, note we took all the hyperparameters required in ICM other than reward weighting from. Intrinsic and extrinsic reward weights were not provided for using ICM with PPO. The first row shows the game scene images fed into our perception model as states. The second row shows the reconstructed images produced by our perception model. The remaining rows list intrinsic reward and embedded image reconstruction error respectively for each image pair.
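The loss-weight tuning procedure described above amounts to a simple grid search driven by the extrinsic reward that intrinsic-only training eventually realizes. A sketch follows, with an arbitrary illustrative grid rather than the one actually used.

```python
import itertools

def grid_search_loss_weights(train_intrinsic_only, grid=(0.25, 0.5, 0.75, 1.0)):
    """`train_intrinsic_only(l1, l2, l3)` trains an agent with intrinsic reward only
    and returns the extrinsic reward it nevertheless achieves; the best weight
    combination is kept.  The same pattern can then be reused for the intrinsic /
    extrinsic reward weight combination."""
    best, best_score = None, float("-inf")
    for l1, l2, l3 in itertools.product(grid, repeat=3):
        score = train_intrinsic_only(l1, l2, l3)
        if score > best_score:
            best, best_score = (l1, l2, l3), score
    return best
```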
rJlBQkrFvr
We introduce a method for computing an intrinsic reward for curiosity using metrics derived from sampling a latent variable model used to estimate dynamics.
Word embedding is a powerful tool in natural language processing. In this paper we consider the problem of word embedding composition \--- given vector representations of two words, compute a vector for the entire phrase. We give a generative model that can capture specific syntactic relations between words. Under our model, we prove that the correlations between three words (measured by their PMI) form a tensor that has an approximate low rank Tucker decomposition. The of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings. We also complement our theoretical with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method. Word embeddings have become one of the most popular techniques in natural language processing. A word embedding maps each word in the vocabulary to a low dimensional vector. Several algorithms (e.g., ;) can produce word embedding vectors whose distances or inner-products capture semantic relationships between words. The vector representations are useful for solving many NLP tasks, such as analogy tasks or serving as features for supervised learning problems .While word embeddings are good at capturing the semantic information of a single word, a key challenge is the problem of composition: how to combine the embeddings of two co-occurring, syntactically related words to an embedding of the entire phrase. In practice composition is often done by simply adding the embeddings of the two words, but this may not be appropriate when the combined meaning of the two words differ significantly from the meaning of individual words (e.g., "complex number" should not just be "complex"+"number").In this paper, we try to learn a model for word embeddings that incorporates syntactic information and naturally leads to better compositions for syntactically related word pairs. Our model is motivated by the principled approach for understanding word embeddings initiated by , and models for composition similar to. gave a generative model (RAND-WALK) for word embeddings, and showed several previous algorithms can be interpreted as finding the hidden parameters of this model. However, the RAND-WALK model does not treat syntactically related word-pairs differently from other word pairs. We give a generative model called syntactic RAND-WALK (see Section 3) that is capable of capturing specific syntactic relations (e.g., adjective-noun or verb-object pairs). Taking adjective-noun pairs as an example, previous works (; ;) have tried to model the adjective as a linear operator (a matrix) that can act on the embedding of the noun. However, this would require learning a d × d matrix for each adjective while the normal embedding only has dimension d. In our model, we use a core tensor T ∈ R d×d×d to capture the relations between a pair of words and its context. In particular, using the tensor T and the word embedding for the adjective, it is possible to define a matrix for the adjective that can be used as an operator on the embedding of the noun. Therefore our model allows the same interpretations as many previous models while having much fewer parameters to train. One salient feature of our model is that it makes good use of high order statistics. Standard word embeddings are based on the observation that the semantic information of a word can be captured by words that appear close to it. 
Hence most algorithms use pairwise co-occurrence between words to learn the embeddings. However, for the composition problem, the phrase of interest already has two words, so it would be natural to consider co-occurrences between at least three words (the two words in the phrase and their neighbors).Based on the model, we can prove an elegant relationship between high order co-occurrences of words and the model parameters. In particular, we show that if we measure the Pointwise Mutual Information (PMI) between three words, and form an n × n × n tensor that is indexed by three words a, b, w, then the tensor has a Tucker decomposition that exactly matches our core tensor T and the word embeddings (see Section 2, Theorem 1, and Corollary 1). This suggests a natural way of learning our model using a tensor decomposition algorithm. Our model also allows us to approach the composition problem with more theoretical insights. Based on our model, if words a, b have the particular syntactic relationships we are modeling, their composition will be a vector v a + v b + T (v a, v b, ·). Here v a, v b are the embeddings for word a and b, and the tensor gives an additional correction term. By choosing different core tensors it is possible to recover many previous composition methods. We discuss this further in Section 3.Finally, we train our new model on a large corpus and give experimental evaluations. In the experiments, we show that the model learned satisfies the new assumptions that we need. We also give both qualitative and quantitative for the new embeddings. Our embeddings and the novel composition method can capture the specific meaning of adjective-noun phrases in a way that is impossible by simply "adding" the meaning of the individual words. Quantitative experiment also shows that our composition vector are better correlated with humans on a phrase similarity task. Syntax and word embeddings Many well-known word embedding methods (e.g., ;) don't explicitly utilize or model syntactic structure within text. find that such syntax-blind word embeddings fail to capture syntactic information above and beyond what a statistical parser can obtain, suggesting that more work is required to build syntax into word embeddings. Several syntax-aware embedding algorithms have been proposed to address this. Levy & Goldberg (2014a) propose a syntax-oriented variant of the well-known skip-gram algorithm of , using contexts generated from syntactic dependency-based contexts obtained with a parser. build syntax-awareness into a neural network model for word embeddings by indroducing a negative set of samples in which the order of the context words is shuffled, in hopes that the syntactic elements which are sensitive to word order will be captured. Word embedding composition Several works have addressed the problem of composition for word embeddings. On the theoretical side, give a theoretical justification for additive embedding composition in word models that satisfy certain assumptions, such as the skipgram model, but these assumptions don't address syntax explicitly. present a mathematical framework for reasoning about syntax-aware word embedding composition that motivated our syntactic RAND-WALK model. Our new contribution is a concrete and practical learning algorithm with theoretical guarantees. 
Mitchell & Lapata (2008; 2010) explore various composition methods that involve both additive and multiplicative interactions between the component embeddings, but some of these are limited by the need to learn additional parameters post-hoc in a supervised fashion. get around this drawback by first training word embeddings for each word and also for tokenized adjective-noun pairs. Then, the composition model is trained by using the constituent adjective and noun embeddings as input and the adjective-noun token embedding as the predictive target. treat adjectives as matrices and nouns as vectors, so that the composition of an adjective and noun is just matrix-vector multiplication. The matrices and vectors are learned through an extension of the skip-gram model with negative sampling. In contrast to these approaches, our model gives rise to a syntax-aware composition function, which can be learned along with the word embeddings in an unsupervised fashion, and which generalizes many previous composition methods (see Section 3.3 for more discussion).Tensor factorization for word embeddings As Levy & Goldberg (2014b) and point out, some popular word embedding methods are closely connected matrix factorization problems involving pointwise mutual information (PMI) and word-word co-occurrences. It is natural to consider generalizing this basic approach to tensor decomposition. demonstrate this technique by performing a CP decomposition on triple word co-occurrence counts. explore this idea further by defining a third-order generalization of PMI, and then performing a symmetric CP decomposition on the ing tensor. In contrast to these recent works, our approach arives naturally at the more general Tucker decomposition due to the syntactic structure in our model. Our model also suggests a different (yet still common) definition of third-order PMI. Notation For a vector v, we use v to denote its Euclidean norm. For vectors u, v we use u, v to denote their inner-product. For a matrix M, we use M to denote its spectral norm, DISPLAYFORM0 to denote its Frobenius norm, and M i,: to denote it's i-th row. In this paper, we will also often deal with 3rd order tensors, which are just three-way indexed arrays. We use ⊗ to denote the tensor product: DISPLAYFORM1 Tensor basics Just as matrices are often viewed as bilinear functions, third order tensors can be interpreted as trilinear functions over three vectors. Concretely, let T be a d × d × d tensor, and let x, y, z ∈ R d. We define the scalar T (x, y, z) ∈ R as follows DISPLAYFORM2 This operation is linear in x, y and z. Analogous to applying a matrix M to a vector v (with the vector M v), we can also apply a tensor T to one or two vectors, ing in a matrix and a vector, respectively: DISPLAYFORM3 We will make use of the simple facts that z, T (x, y, ·) = T (x, y, z) and [T (x, ·, ·)] y = T (x, y, ·).Tensor decompositions Unlike matrices, there are several different definitions for the rank of a tensor. In this paper we mostly use the notion of Tucker rank. A tensor T ∈ R n×n×n has Tucker rank d, if there exists a core tensor S ∈ R d×d×d and matrices A, B, C ∈ R n×d such that DISPLAYFORM4 The equation above is also called a Tucker decomposition of the tensor T. The Tucker decomposition for a tensor can be computed efficiently. When the core tensor S is restricted to a diagonal tensor (only nonzero at entries S i,i,i), the decomposition is called a CP decomposition; which can also be written as DISPLAYFORM5. 
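The tensor operations just defined are easy to make concrete with `einsum`. The snippet below is only a numerical illustration of the notation; the CP form mentioned at the end of this passage is the special case in which the core S is diagonal.

```python
import numpy as np

d, r = 4, 2
T = np.random.randn(d, d, d)                       # a third-order tensor
x, y, z = (np.random.randn(d) for _ in range(3))

scalar = np.einsum('ijk,i,j,k->', T, x, y, z)      # T(x, y, z), a scalar
vec = np.einsum('ijk,i,j->k', T, x, y)             # T(x, y, .), a vector
mat = np.einsum('ijk,i->jk', T, x)                 # T(x, ., .), a d x d matrix

assert np.allclose(scalar, vec @ z)                # <z, T(x, y, .)> = T(x, y, z)
assert np.allclose(vec, y @ mat)                   # contracting T(x, ., .) with y gives T(x, y, .)

# Tucker decomposition: T2[i, j, k] = sum_{a,b,c} S[a, b, c] A[i, a] B[j, b] C[k, c]
S = np.random.randn(r, r, r)
A, B, C = (np.random.randn(d, r) for _ in range(3))
T2 = np.einsum('abc,ia,jb,kc->ijk', S, A, B, C)    # a tensor of Tucker rank r
```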
In this case, the tensor T is the sum of d rank-1 tensors (A i,: ⊗ B i,: ⊗ C i,:). However, unlike matrix factorizations and the Tucker decomposition, the CP decomposition of a tensor is hard to compute in the general case (Håstad, 1990;). Later in Section 4 we will also see why our model for syntactic word embeddings naturally leads to a Tucker decomposition. In this section, we introduce our syntactic RAND-WALK model and present formulas for inference in the model. We also derive a novel composition technique that emerges from the model. We first briefly review the RAND-WALK model . In this model, a corpus of text is considered as a sequence of random variables w 1, w 2, w 3,..., where w t takes values in a vocabulary V of n words. Each word w ∈ V has a word embedding v w ∈ R d. The prior for the word embeddings is v w = s ·v, where s is a positive bounded scalar random variable with constant expectation τ and upper bound κ, andv ∼ N (0, I).The distribution of each w t is determined in part by a random walk {c t ∈ R d | t = 1, 2, 3 . . .}, where c t -called a discourse vector -represents the topic of the text at position t. This random walk is slow-moving in the sense that c t+1 − c t is small, but mixes quickly to a stationary distribution that is uniform on the unit sphere, which we denote by C.Let C denote the sequence of discourse vectors, and let V denote the set of word embeddings. Given these latent variables, the model specifies the following conditional probability distribution: DISPLAYFORM0 The graphical model depiction of RAND-WALK is shown in FIG0. One limitation of RAND-WALK is that it can't deal with syntactic relationships between words. Observe that conditioned on c t and V, w t is independent of the other words in the text. However, in natural language, words can exhibit more complex dependencies, e.g. adjective-noun pairs, subject-verb-object triples, and other syntactic or grammatical structures. In our syntactic RAND-WALK model, we start to address this issue by introducing direct pairwise word dependencies in the model. When there is a direct dependence between two words, we call the two words a syntactic word pair. In RAND-WALK, the interaction between a word embedding v and a discourse vector c is mediated by their inner product v, c. When modeling a syntactic word pair, we need to mediate the interaction between three quantities, namely a discourse vector c and the word embeddings v and v of the two relevant words. A natural generalization is to use a trilinear form defined by a tensor T, i.e. DISPLAYFORM0 Here, T ∈ R d×d×d is also a latent random variable, which we call the composition tensor. We model a syntactic word pair as a single semantic unit within the text (e.g. in the case of adjectivenoun phrases). We realize this choice by allowing each discourse vector c t to generate a pair of words w t, w t with some small probability p syn. To generate a syntactic word pair w t, w t, we first generate a root word w t conditioned on c t with probability proportional to exp(c t, w t), and then we draw w t from a conditional distribution defined as follows: DISPLAYFORM1 Here exp(c t, v b) would be proportional to the probability of generating word b in the original RAND-WALK model, without considering the syntactic relationship. The additional term T (v a, v b, c t) can be viewed as an adjustment based on the syntactic relationship. We call this extended model Syntactic RAND-WALK. 
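Before the formal definition below, the generative step for a syntactic pair can be illustrated numerically: the root word is drawn with probability proportional to exp(⟨c, v_a⟩) and the dependent word with probability proportional to exp(⟨c, v_b⟩ + T(v_a, v_b, c)). This toy sketch ignores the random walk over discourse vectors and the partition-function subtleties discussed later.

```python
import numpy as np

def sample_syntactic_pair(c, V, T, rng):
    """c: discourse vector (d,); V: (n, d) word embeddings; T: (d, d, d) tensor."""
    # root word a:  Pr[a] proportional to exp(<c, v_a>)
    logits_a = V @ c
    p_a = np.exp(logits_a - logits_a.max())
    a = rng.choice(len(V), p=p_a / p_a.sum())

    # dependent word b:  Pr[b | a] proportional to exp(<c, v_b> + T(v_a, v_b, c))
    adjust = np.einsum('ijk,i,k->j', T, V[a], c)   # the vector T(v_a, ., c)
    logits_b = V @ (c + adjust)
    p_b = np.exp(logits_b - logits_b.max())
    b = rng.choice(len(V), p=p_b / p_b.sum())
    return a, b

# Example: rng = np.random.default_rng(0); a, b = sample_syntactic_pair(c, V, T, rng)
```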
FIG0 gives the graphical model depiction for a syntactic word pair, and we summarize the model below. Definition 1 (Syntactic RAND-WALK model). The model consists of the following:1. Each word w in vocabulary has a corresponding embedding v w ∼ s ·v w, where s ∈ R ≥0 is bounded by κ and DISPLAYFORM2 2. The sequence of discourse vectors c 1,..., c t are generated by a random walk on the unit sphere, c t − c t+1 ≤ w / √ d and the stationary distribution is uniform.3. For each c t, with probability 1−p syn, it generates one word w t with probability proportional to exp(c t, v wt).4. For each c t, with probability p syn, it generates a syntactic pair w t, w t with probability proportional to exp(c t, v wt) and exp(c t, DISPLAYFORM3 We now calculate the marginal probabilities of observing pairs and triples of words under the syntactic RAND-WALK model. We will show that these marginal probabilities are closely related to the model parameters (word embeddings and the composition tensor). All proofs in this section are deferred to supplementary material. Throughout this section, we consider two adjacent context vectors c t and c t+1, and condition on the event that c t generated a single word and c t+1 generated a syntactic pair 1. The main bottleneck in computing the marginal probabilities is that the conditional probailities specified in equations FORMULA6 and are not normalized. Indeed, for these equations to be exact, we would need to divide by the appropriate partition functions, namely Z ct:= w∈V exp(v w, c t) for the former and Z ct,a:= w∈V exp(c t, v w + T (v a, v w, c t)) for the latter. Fortunately, we show that under mild assumptions these quantities are highly concentrated. To do that we need to control the norm of the composition tensor. Definition 2. The composition tensor T is (K,)-bounded, if for any word embedding v a, v b, we have DISPLAYFORM0 To make sure exp(c t, v w +T (v a, v w, c t)) are within reasonable ranges, the value K in this definition should be interpreted as an absolute constant (like 5, similar to previous constants κ and τ). Intuitively these conditions make sure that the effect of the tensor cannot be too large, while still making sure the tensor component T (v a, v b, c) can be comparable (or even larger than) v b, c. We have not tried to optimize the log factors in the constraint for T (v a, ·, ·) + I 2.Note that if the tensor component T (v a, ·, ·) has constant singular values (hence comparable to I), we know these conditions will be satisfied with K = O and = O(DISPLAYFORM1). Later in Section 5 we verify that the tensors we learned indeed satisfy this condition. Now we are ready to state the concentration of partition functions:Lemma 1 (Concentration of partition functions). For the syntactic RAND-WALK model, there exists a constant Z such that Pr DISPLAYFORM2 Furthermore, if the tensor T is (K,)-bounded, then for any fixed word a ∈ V, there exists a constant Z a such that Pr DISPLAYFORM3 Using this lemma, we can obtain simple expressions for co-occurrence probabilities. In particular, for any fixed w, a, b ∈ V, we adopt the following notation: DISPLAYFORM4 Here in particular we use [a, b] to highlight the fact that a and b form a syntactic pair. Note p(w, a) is the same as the co-occurrence probability of words w and a if both of them are the only word generated by the discourse vector. 
Later we will also use p(w, b) to denote Pr[DISPLAYFORM5 We also require two additional properties of the word embeddings, namely that they are norm-bounded above by some constant times DISPLAYFORM6 and that all partition functions are bounded below by a positive constant. Both of these properties hold with high probability over the word embeddings provided n d log d and d log n, as shown in the following lemma: Lemma 2. Assume that the composition tensor T is (K,)-bounded, where K is a constant. With probability at least 1 − δ 1 − δ 2 over the word vectors, where DISPLAYFORM7, there exist positive absolute constants γ and β such that v i ≤ κγ for each i ∈ V and Z c ≥ β and Z c,a ≥ β for any unit vector c ∈ R d and any word a ∈ V.We can now state the main . Theorem 1. Suppose that the events referred to in Lemma 1 hold. Then DISPLAYFORM8 DISPLAYFORM9 where is from the (K,)-boundedness of T and w is from Definition 1. Our model suggests that the latent discourse vectors contain the meaning of the text at each location. It is therefore reasonable to view the discourse vector c corresponding to a syntactic word pair (a, b) as a suitable representation for the phrase as a whole. The posterior distribution of c given (a, b) satisfies DISPLAYFORM0 Since Pr[c t = c] is constant, and since Z c and Z c,a concentrate on values that don't depend on c, the MAP estimate of c given [a, b], which we denote byĉ, satisfieŝ DISPLAYFORM1 Hence, we arrive at our basic tensor composition: for a syntactic word pair (a, b), the composite embedding for the phrase is DISPLAYFORM2 Note that our composition involves the traditional additive composition DISPLAYFORM3 e. the composition tensor allows us to compactly associate a matrix with each word in the same vein as. Depending on the actual value of T, the term T (v a, v b, ·) can also recover any manner of linear or multiplicative interactions between v a and v b, such as those proposed in. In this section we discuss how to learn the parameters of the syntactic RAND-WALK model. Theorem 1 provides key insights into the learning problem, since it relates joint probabilities between words (which can be estimated via co-occurrence counts) to the word embeddings and composition tensor. By examining these equations, we can derive a particularly simple formula that captures these relationships. To state this equation, we define the PMI for 3 words as DISPLAYFORM0 We note that this is just one possible generalization of pointwise mutual information (PMI) to several random variables, but in the context of our model, it is a very natural definition as all the partition numbers will be canceled out. Indeed, as an immediate corollary of Theorem 1, we have Corollary 1. Suppose that the events referred to in Lemma 1 hold. Then for p same as Theorem 1 DISPLAYFORM1 That is, if we consider P M I3(a, b, w) as a n × n × n tensor, Equation equation 8 is exactly a Tucker decomposition of this tensor of Tucker rank d. Therefore, all the parameters of the syntactic RAND-WALK model can be obtained by finding the Tucker decomposition of the PMI3 tensor. This equation also provides a theoretical motivation for using third-order pointwise mutual information in learning word embeddings. We now discuss concrete details about our implementation of the learning algorithm. Corpus. We train our model using a February 2018 dump of the English Wikipedia. 
The text is pre-processed to remove non-textual elements, stopwords, and rare words (words that appear less than 1000 within the corpus), ing in a vocabulary of size 68,279. We generate a matrix of word-word co-occurrence counts using a window size of 5. To generate the tensors of adjective-noun-word and verb-object-word co-occurrence counts, we first run the Stanford Dependency Parser on the corpus in order to identify all adjective-noun and verb-object word pairs, and then use context windows that don't cross sentence boundaries to populate the triple co-occurrence counts. Training. We first train the word embeddings according to the RAND-WALK model, following. Using the learned word embeddings, we next train the composition tensor T via the following optimization problem DISPLAYFORM0 where X (a,b),w denotes the number of co-occurrences of word w with the syntactic word pair (a, b) (a denotes the noun/object) and f (x) = min(x, 100). This objective function isn't precisely targeting the Tucker decomposition of the PMI3 tensor, but it is analogous to the training criterion used in , and can be viewed as a negative log-likelihood for the model. To reduce the number of parameters, we constrain T to have CP rank 1000. We also trained the embeddings and tensor jointly, but found that this approach yields very similar . In all cases, we utilize the Tensorflow framework BID0 with the Adam optimizer (using default parameters), and train for 1-5 epochs. In this section, we verify and evaluate our model empirically on select qualitative and quantitative tasks. In all of our experiments, we focus solely on syntactic word pairs formed by adjective-noun phrases, where the noun is considered the root word. empirically verify the model assumptions of RAND-WALK, and since we trained our embeddings in the same way, we don't repeat their verifications here. Instead, we verify two key properties of syntactic RAND-WALK. We check the assumptions that the tensor T is (K,)-bounded. Ranging over all adjective-noun pairs in the corpus, we find that DISPLAYFORM0 2 has mean 0.052 and maximum 0.248, DISPLAYFORM1 F has mean 1.61 and maximum 3.23, and DISPLAYFORM2 has mean 0.016 and maximum 0.25. Each of these three quantities has a well-bounded mean, but T (v a, ·, ·) + I 2 has some larger outliers. If we ignore the log factors (which are likely due to artifacts in the proof) in Definition 2, the tensor is (K,) bounded for K = 4 and = 0.25. In addition to Definition 2, we also directly check its implications: our model predicts that the partition functions Z c,a concentrate around their means. To check this, given a noun a, we draw 1000 random vectors c from the unit sphere, and plot the histogram of Z c,a.Results for a few randomly selected words a are given in FIG2. All partition functions that we inspected exhibited good concentration. We test the performance of our new composition for adjective-noun and verb-object pairs by looking for the words with closest embedding to the composed vector. For a phrase (a, b), we compute c = v a + v b + T (v a, v b, ·), and then retrieve the words w whose embeddings v w have the largest cosine similarity to c. We compare our to the additive composition method. TAB0 show for three adjective-noun and verb-object phrases. In each case, the tensor composition is able to retrieve some words that are more specifically related to the phrase. However, the tensor composition also sometimes retrieves words that seem unrelated to either word in the phrase. 
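To make the training step above concrete (the rendered objective is garbled in this copy), the sketch below shows the general GloVe-style form: a weighted least-squares fit of the model score ⟨v_a + v_b + T(v_a, v_b, ·), v_w⟩ to the log co-occurrence count, with f(x) = min(x, 100) and the tensor kept in CP-rank-r form. The exact regression target in the paper may include additional offset terms, and the word embeddings are frozen; treat this only as a sketch of the general shape of the objective.

```python
import torch

def composition_tensor_loss(V, W1, W2, W3, triples, counts, bias):
    """V: frozen (n, d) word embeddings.  The composition tensor is parameterized
    in CP form, T[i, j, k] = sum_r W1[i, r] * W2[j, r] * W3[k, r], with W* of
    shape (d, r).  `triples` is a (batch, 3) LongTensor of (root, modifier,
    context-word) indices and `counts` the co-occurrence counts X_{(a,b),w}."""
    a, b, w = triples[:, 0], triples[:, 1], triples[:, 2]
    va, vb, vw = V[a], V[b], V[w]
    # T(v_a, v_b, .) under the CP parameterization
    t_ab = ((va @ W1) * (vb @ W2)) @ W3.T                     # (batch, d)
    pred = ((va + vb + t_ab) * vw).sum(dim=-1) + bias
    weight = torch.clamp(counts, max=100.0)                   # f(x) = min(x, 100)
    return (weight * (torch.log(counts) - pred) ** 2).mean()
```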
We conjecture that this might be due to the sparseness of co-occurrence of three words. We also observed cases where the tensor composition method was about on par with or inferior to the additive composition method for retrieving relevant words, particularly in the case of low-frequency phrases. More can be found in supplementary material. Published as a conference paper at ICLR 2019 We also test our tensor composition method on a adjective-noun phrase similarity task using the dataset introduced by. The data consists of 108 pairs each of adjective-noun and verb-object phrases that have been given similarity ratings by a group of 54 humans. The task is to use the word embeddings to produce similarity scores that correlate well with the human scores; we use both the Spearman rank correlation and the Pearson correlation as evaluation metrics for this task. We note that the human similarity judgments are somewhat noisy; intersubject agreement for the task is 0.52 as reported in.Given a phrase (a, b) with embeddings v a, v b, respectively, we found that the tensor composition FORMULA6, we split the data into a development set of 18 humans and a test set of the remaining 36 humans. We use the development set to select the optimal scalar weight for the weighted tensor composition, and using this fixed parameter, we report the using the test set. We repeat this three times, rotating over folds of 18 subjects, and report the average . DISPLAYFORM0 DISPLAYFORM1 yields worse performance than the simple additive composition v a + v b. For this reason, we consider a weighted tensor composition vAs a baseline, we also report the average using just the additive composition, as well as a weighted additive composition βv a + v b, where β ≥ 0. We select β using the development set ("weighted1") and the test set ("weighted2"). We allow weighted2 to cheat in this way because it provides an upper bound on the best possible weighted additive composition. Additionally, we compare our method to the smoothed inverse frequency ("sif") weighting method that has been demonstrated to be near state-of-the-art for sentence embedding tasks . We also test embeddings of the form p + γω a ω b T (v a, v b, ·) ("sif+tensor"), where p is the sif embedding for (a, b), ω a and ω b are the smoothed inverse frequency weights used in the sif embeddings, and γ is a positive weight selected using the development set. The motivation for this hybrid embedding is to evaluate the extent to which the sif embedding and tensor component can independently improve performance on this task. We perform these same experiments using two other standard sets of pre-computed word embeddings, namely GloVe 3 and carefully optimized cbow vectors 4 . We re-trained the composition tensor using the same corpus and technique as before, but substituting these pre-computed embeddings in place of the RAND-WALK (rw) embeddings. However, a bit of care must be taken here, since our syntactic RAND-WALK model constrains the norm of the word embeddings to be related to the frequency of the words, whereas this is not the case with the pre-computed embeddings. To deal with this, we rescaled the pre-computed embeddings sets to have the same norms as their counterparts in the rw embeddings, and then trained the composition tensor using these rescaled embeddings. At test time, we use the original embeddings to compute the additive components of our compositions, but use the rescaled versions when computing the tensor components. 
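The evaluation just described reduces to scoring each phrase pair with a composed embedding and correlating the scores with the human ratings. Below is a sketch under the weighted tensor composition v_a + v_b + αT(v_a, v_b, ·); dataset loading and the selection of α on the development folds are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def phrase_vec(V, T, a, b, alpha):
    # weighted tensor composition: v_a + v_b + alpha * T(v_a, v_b, .)
    return V[a] + V[b] + alpha * np.einsum('ijk,i,j->k', T, V[a], V[b])

def evaluate_similarity(V, T, pairs, human_scores, alpha):
    """pairs: list of ((a1, b1), (a2, b2)) word-index tuples for the two phrases
    in each comparison; human_scores: averaged human similarity ratings."""
    model_scores = []
    for (a1, b1), (a2, b2) in pairs:
        u = phrase_vec(V, T, a1, b1, alpha)
        v = phrase_vec(V, T, a2, b2, alpha)
        model_scores.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    return (spearmanr(model_scores, human_scores).correlation,
            pearsonr(model_scores, human_scores)[0])
```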
The for adjective-noun phrases are given in Tables 3. We observe that the tensor composition outperforms the additive compositions on all embedding sets apart from the Spearman correlation on the cbow vectors, where the weighted additive 2 method has a slight edge. The sif embeddings outperform the additive and tensor methods, but combining the sif embeddings and the tensor components yields the best performance across the board, suggesting that the composition tensor captures additional information beyond the individual word embeddings that is useful for this task. There was high consistency across the folds for the optimal weight parameter α, with α = 0.4 for the rw embeddings, α =.2,.3 for the glove embeddings, and α =.3 for the cbow embeddings. For the sif+tensor embeddings, γ was typically in the range [.1, .2].The for verb-object phrases are given in Table 4. Predicting phrase similarity appears to be harder in this case. Notably, the sif embeddings perform worse than unweighted vector addition. As before, we can improve the sif embeddings by adding in the tensor component. The tensor composition method achieves the best for the glove and cbow vectors, but weighted addition works best for the randwalk vectors. Overall, these demonstrate that the composition tensor can improve the quality of the phrase embeddings in many cases, and the improvements are at least somewhat orthogonal to improvements In this section we present additional qualitiative demonstrating the use of the composition tensor for the retrieval of words related to adjective-noun and verb-object phrases. In TAB3, we show for the phrases "giving birth", "solve problem", and "changing name". These phrases are all among the top 500 most frequent verb-object phrases appearing in the training corpus. In these examples, the tensor-based phrase embeddings retrieve words that are generally markedly more related to the phrase at hand, and there are no strange false positives. These examples demonstrate how a verb-object phrase can encompass an action that isn't implied simply by the object or verb alone. The additive composition doesn't capture this action as well as the tensor composition. Moving on to adjective-noun phrases, in TAB4, we show for the phrases "United States", "Soviet Union", and "European Union". These phrases, which all occur with comparatively high frequency in the corpus, were identified as adjective-noun phrases by the tagger, but they function more as compound proper nouns. In each case, the additive composition retrieves reasonably relevant words, while the tensor composition is more of a mixed bag. In the case of "European Union", the tensor composition does retrieve the highly relevant words eec (European Economic Community) and eea (European Economic Area), which the additive composition misses, but the tensor composition also produces several false positives. It seems that for these types of phrases, the additive composition is sufficient to capture the meaning. In TAB5, we fix the noun "taste" and vary the modifying adjective to highlight different senses of the noun. In the case of "expensive taste", both compositions retrieve words that seem to be either related to "expensive" or "taste", but there don't seem to be words that are intrinsically related to the phrase as a whole (with the exception, perhaps, of "luxurious", which the tensor composition retrieves). 
In the case of "awful taste", both compositions retrieve fairly similar words, which mostly relate to the physical sense of taste (rather than the more abstract sense of the word). For the phrase "refined taste", the additive composition fails to capture the sense of the phrase and retrieves many words related to food taste (which are irrelevant in this context), whereas the tensor composition retrieves more relevant words. In TAB6, we fix the noun "friend" and vary the modifying adjective, but in all three cases, the adjective-noun phrase has basically the same meaning. In the case of "close friend" and "dear friend", both compositions retrieve fairly relevant and similar words. In the case of "best friend", both compositions retrieve false positives: the additive composition seems to find words related to movie awards, while the tensor composition finds unintuitive false positives. We note that in all three phrases, the tensor composition consistently retrieves the words "confidante", "confided" or "confides", "coworker", and "protoge", all of which are fairly relevant. We test the effect of using the composition tensor for a sentiment analysis task. We use the movie review dataset of Pang and Lee as well as the Large Movie Review dataset , which consist of 2,000 movie reviews and 50,000 movie reviews, respectively. For a fixed review, we identify each adjective-noun pair (a, b) and compute T (v a, v b, ·). We add these compositions together with the word embeddings for all of the words in the review, and then normalize the ing sum. This vector is used as the input to a regularized logistic regression classifier, which we train using scikit-learn with the default parameters. We also consider a baseline method where we simply add together all of the word embeddings in the movie review, and then normalize the sum. We evaluate the test accuracy of each method using TAB7. Although the tensor method seems to have a slight edge over the baseline, the differences are not significant. In this section we will prove the main Theorem 1, which establishes the connection between the model parameters and the correlations of pairs/triples of words. As we explained in Section 3, a crucial step is to analyze the partition function of the model and show that the partition functions are concentrated. We will do that in Section B.1. We then prove the main theorem in Section B.2. More details and some technical lemmas are deferred to Section B.3 In this section we will prove concentrations of partition functions (Lemma 1). Recall that we need the tensor to be K-bounded (where K is a constant) for this to work. DISPLAYFORM0 Note that K here should be considered as an absolute constant (like 5, in fact in Section 5 we show K is less than 4). We first restate Lemma 1 here: Lemma 3 (Lemma 1 restated). For the syntactic RAND-WALK model, there exists a constant Z such that Pr DISPLAYFORM1 Furthermore, if the tensor T is (K,)-bounded, then for any fixed word a ∈ V, there exists a constant Z a such that Pr DISPLAYFORM2 In fact, the first part of this Lemma is exactly Lemma 2.1 in. Therefore we will focus on the proof of the second part. For the second part, we know the probability of choosing a word b is proportional to DISPLAYFORM3 If the probability of choosing word w is proportional to exp(r, v w) for some vector r (think of r = T (v a, ·, c)+c), then in expectation the partition function should be equal to nE v∼D V [exp( r, v)] (here D V is the distribution of word embedding). 
When the number of words is large enough, we hope that with high probability the partition function is close to its expectation. Since the Gaussian distribution is spherical, we also know that the expected partition function nE v∼D V [exp( r, v)] should only depend on the norm of r. Therefore as long as we can prove the norm of r = T (v a, ·, c)+c remain similar for most c, we will be able to prove the desired in the lemma. We will first show the norm of r = T (v a, ·, c) + c is concentrated if the tensor T is (K,)-bounded. Throughout all subsequent proofs, we assume that < 1 and d ≥ log 2 n/ 2.Lemma 4. Let v a be a fixed word vector, and let c be a random discourse vector. If T is (K,)-bounded with d ≥ log 2 n/ 2, we have DISPLAYFORM4 where 0 ≤ L ≤ K is a constant that depends on v a, and δ = exp(−Ω(log 2 n)).Proof. Since c is a uniform random vector on the unit sphere, we can represent c as c = z/ z, where z ∼ N (0, I) is a standard spherical Gaussian vector. For ease of notation, let M = T (v a, ·, ·)+I, and write the singular value decomposition of M as M = U ΣV T. Note that Σ = diag(λ 1, . . ., λ d) and U and V are orthogonal matrices, so that in particular, the random variable y = V T z has the same distribution as z, i.e. its entries are i.i.d. standard normal random variables. Further, U x 2 = x 2 for any vector x, since U is orthogonal. Hence, we have DISPLAYFORM5 Since both the numerator and denominator of this quantity are generalized χ 2 random variables, we can apply Lemma 7 to get tail bounds on both. Observe that by assumption, we have λ 2 i ≤ Kd 2 / log 2 n for all i, and DISPLAYFORM6 We will apply Lemma 7 to prove concentration bounds for A, in this case we have DISPLAYFORM7 Under our assumptions, we know λ 2 max ≤ Kd 2 / log 2 n and DISPLAYFORM8 Similarly, we can apply Lemma 7 to B (in fact we can apply simpler concentration bounds for standard χ 2 distribution), and we get DISPLAYFORM9 If we take x = 1 16 log 2 n, we know 2 DISPLAYFORM10 When both events happen we know considered as a constant). This finishes the proof. DISPLAYFORM11 Using this lemma, we will show that the expected condition number nE v∼D V [exp( r, v)] (where r = T (v a, ·, c) + c) is concentrated Lemma 5. Let v a be a fixed word vector, and let c be a random discourse vector. If T is (K,)-bounded, there exists Z a such that we have DISPLAYFORM12 where Z a = Θ(n) depends on v a, and δ = exp(−Ω(log 2 n)).Proof. We know v = s ·v wherev ∼ N (0, I) and s is a (random) scaling. Let r = T (v a, ·, c) + c. Conditioned on s we know r, v is equivalent to a Gaussian random variable with standard deviation σ = r s. For this random variable we know DISPLAYFORM13. In particular, this implies g(x + γ) ≤ exp(κ 2 γ/2)g(x) (for small γ).By Lemma 4, we know with probability at least 1 − Ω(log 2 n), r 2 ∈ L ± O. Therefore, when this holds, we have DISPLAYFORM14 The multiplicative factor on the RHS is bounded by 1 + O when is small enough (and κ is a constant). This finishes the proof. Now we know the expected partition function is concentrated (for almost all discourse vectors c), it remains to show when we have finitely many words the partition function is concentrated around its expectation. This was already proved in Arora et al. FORMULA6, we use their lemma below: Lemma 6. For any fixed vector r (whose norm is bounded by a constant), with probability at least 1 − exp(−Ω(log 2 n)) over the choices of the words, we have DISPLAYFORM15 DISPLAYFORM16 This is essentially Lemma 2.1 in (see Equation A.32). 
The version we stated is a bit different because we allow r to have an arbitrary constant norm (while in their proof vector r is the discourse vector c and has norm 1). This is a trivial corollary as we can move the norm of r into the distribution of the scaling factor s for the word embedding. Finally we are ready to prove Lemma 1.Proof of Lemma 1. The first part is exactly Lemma 2.1 in.For the second part, note that the partition function DISPLAYFORM17 We will use E[Z c,a] to denote its expectation over the randomness of the word embedding {v i}. By Lemma 5, we know for at least 1 − exp(−Ω(log 2 n)) fraction of discourse vectors c, the expected partition function is concentrated (E[Z c,a] ∈ (1 ± O)Z a ). Let S denote the set of c such that Lemma 5 holds. Now by Lemma 6 we know for any x ∈ S, with probability at least 1 DISPLAYFORM18 Therefore we know if we consider both c and the embedding as random variables, Pr[Z c,a ∈ (1 ± O( + z))Z a ] ≥ 1 − δ where δ = exp(−Ω(log 2 n)). Let S be the set of word embedding such that there is at least DISPLAYFORM19 That is, with probability at least 1 − √ δ (over the word embeddings), there is at least 1 − √ δ fraction of c such that Z c,a ∈ (1 ± O( + z))Z a. In this section we prove Theorem 1 and Corollary 1. The proof is very similar to the proof of Theorem 2.2 in. We use several lemmas in that proof, and these lemmas are deferred to Section B.3.Proof of Theorem 1. Throughout this proof we consider two adjacent discourse vectors c, c, where c generated a single word w and c generated a syntactic pair (a, b).The first two in Theorem 1 are exactly the same as Theorem 2.2 in. Therefore we only need to prove the for p ([a, b] ) and p(w, [a, b] ). Here the last step used Lemma 10. Since both Z and Z a can be bounded by O(n), and DISPLAYFORM0 is bounded by (4κ + √ 2K) 2, we know the first term is of order Ω(1/n 2), and the second term is negligible. We end with the proof of Lemma 2.Proof of Lemma 2. Just for this proof, we use the following notation. Let I d×d be the d-dimensional identity matrix, and let x 1, x 2,..., x n be i.i.d. draws from N (0, I d×d). Let y i = x i 2, and note that y, v i, c) ). We first cover the unit sphere by a finite number of metric balls of small radius. Then we show that with high probability, the partition function at the center of these balls is indeed bounded below by a constant. Finally, we show that the partition function evaluated at an arbitrary point on the unit sphere can't be too far from the partition function at one of the ball centers provided the norms of the v i are not too large. We finish by appropriately controlling the norms of the v i. ≥.
H1eqjiCctX
We present a generative model for compositional word embeddings that captures syntactic relations, and provide empirical verification and evaluation.
Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI. This paper describes progresses on this challenge in the context of man-made environments, which are visually diverse but contain intrinsic semantic regularities. We propose a hybrid model-based and model-free approach, LEArning and Planning with Semantics (LEAPS), consisting of a multi-target sub-policy that acts on visual inputs, and a Bayesian model over semantic structures. When placed in an unseen environment, the agent plans with the semantic model to make high-level decisions, proposes the next sub-target for the sub-policy to execute, and updates the semantic model based on new observations. We perform experiments in visual navigation tasks using House3D, a 3D environment that contains diverse human-designed indoor scenes with real-world objects. LEAPS outperforms strong baselines that do not explicitly plan using the semantic content. Deep reinforcement learning (DRL) has undoubtedly witnessed strong achievements in recent years BID7; BID9. However, training an agent to solve tasks in a new unseen scenario, usually referred to as its generalization ability, remains a challenging problem . In model-free RL, the agent is trained to reactively make decisions from the observations, e.g., first-person view, via a black-box policy approximator. However the generalization ability of agents trained by model-free RL is limited, and is even more evident on tasks that require extensive planning BID9 ). On the other hand, model-based RL learns a dynamics model, predicting the next observation when taking an action. With the model, sequential decisions can be made via planning. However, learning a model for complex tasks and with high dimensional observations, such as images, is challenging. Current approaches for learning action-conditional models from video are only accurate for very short horizons BID3 ). Moreover, it is not clear how to efficiently adapt such models to changes in the domain. In this work, we aim to improve the generalization of RL agents in domains that involve highdimensional observations. Our insight is that in many realistic settings, building a pixel-accurate model of the dynamics is not necessary for planning high-level decisions. There are semantic structures and properties that are shared in real-world man-made environments. For example, rooms in indoor scenes are often arranged by their mutual functionality (e.g., bathroom next to bedroom, dining room next to kitchen). Similarly, objects in rooms are placed at locations of practical significance (e.g., nightstand next to bed, chair next to table). Humans often make use of such structural priors when exploring a new scene, or when making a high-level plan of actions in the domain. However, pixel-level details are still necessary for carrying out the high-level plan. For example, we need high-fidelity observations to locate and interact with objects, open doors, etc. Based on this observation, we propose a hybrid framework, LEArning and Planning with Semantics (LEAPS), which consists of a model-based component that works on the semantic level to pursue a high-level target, and a model-free component that executes the target by acting on pixel-level inputs. 
Concretely, we train model-free multi-target subpolicies in the form of neural networks that take the first-person views as input and sequentially execute sub-targets towards the final goal; build a semantic model in the form of a latent variable model that only takes semantic signals, i.e., low-dimensional binary vectors, as input and is dynamically updated to plan the next sub-target. LEAPS has following advantages: via model-based planning, generalization ability is improved; by learning the prior distribution of the latent variable model, we capture the semantic consistency among the environments; the semantic model can be efficiently updated by posterior inference when the agent is exploring the unseen environment, which is effective even with very few exploration experiences thanks to the Bayes rule; and the semantic model is lightweight and fully interpretable. Our approach requires observations that are composed of both pixel-level data and a list of semantic properties of the scene. In general, automatically extracting high-level semantic structure from data is difficult. As a first step, in this work we focus on domains where obtaining semantics is easy. In particular, we consider environments which resemble the real-world and have strong object detectors available . An example of such environments is House3D which contains 45k human-designed 3D scenes BID12. House3D provides a diverse set of scene layouts, object types, sizes and connectivity, which all conform to a consistent "natural" semantics. Within these complex scenes, we tackle navigation tasks within novel indoor scenes. Note that this problem is extremely challenging as the agent needs to reach far-away targets which can only be completed effectively if it can successfully reason about the overall structure of the new scenario. Lastly, we emphasize that although we consider navigation as a concrete example in this work, our approach is general and can be applied to other tasks for which semantic structures and signals are availableOur extensive experiments show that our LEAPS framework outperforms strong model-free RL approaches, even when the semantic signals are given as input to the policy. Furthermore, the relative improvements of LEAPS over baselines become more significant when the targets are further away from the agent's birthplace, indicating the effectiveness of planning on the learned semantic model. Most deep RL agents are tested in the same training environments , disregarding generalization. While limited, robust training approaches have been proposed to enforce an agent's generalization ability, such as domain randomization BID10 and data augmentation by generating random mazes for training . In our work, we use a test set of novel unseen environments, where an agent cannot resort to memorization or simple pattern matching to solve the task. Meta-learning has shown promising for fast adaptation to novel environments. Methods include learning a good initialization for gradient descent or learning a neural network that can adapt its policy during exploration BID2 ). We propose to learn a Bayesian model over the semantic level and infer the posterior structure via the Bayes rule. Our approach can work even without any exploration steps in a new environment and is interpretable and can be potentially combined with any graph-based planning algorithm. Our work can be viewed as a special case of hierarchical reinforcement learning (HRL). 
Unlike other approaches BID11 BID1, in our work high-level planning is performed based on the semantic signals. With orders of magnitudes fewer parameters, our approach is easier to learn compared to recurrent controllers. LEAPS assumes a discrete semantic signal in addition to the continuous state. A similar assumption is also adopted in BID13, where the discrete signals are called "attributes" and used for planning to solve compositional tasks within the same fully observable environment. BID5 use additional discrete signals to tackle the sparse reward problem. The schema network further assumes that even the continuous visual signal can be completely represented in a binary form and therefore directly runs logical reasoning on the binary states. For evaluating our approach, we focus on the problem of visual navigation, which has been studied extensively . Classical approaches build a 3D map of the scene using SLAM, which is subsequently used for planning . More recently, end-to-end approaches have been applied to tackle various domains, such as maze , indoor scenes BID14 and Google street view . Evidently, navigation performance deteriorates as the agent's distance from the target increases BID14 BID12. To aid navigation and boost performance, auxiliary tasks are often introduced during training. Another direction for visual navigation is to use a recurrent neural network and represent the memory in the form of a 2D spatial map (; ; BID9 BID14 such that a differentiable planning computation can be performed on the spatial memory. Our approach considers more general graph structures beyond dense 2D grids and captures relationships between semantic signals, which we utilize as an informative latent structure in semantically rich environments like House3D. Similar to our work, Savinov et al. BID6 constructs a graph of nodes corresponding to different locations of the environment. However, they rely on a pre-exploration step within the test scene and build the graph completely from the pixel space. In LEAPS, we use semantic knowledge and learn a prior over the semantic structures that are shared across real-world scenes. This allows us to directly start solving for the task at hand without any exploratory steps. We assume familiarity with standard DRL notations. Complete definitions are in Appendix A. We consider a contextual Markov decision process E(c) defined by E(c) = (S, A, P (s |s, a; c), r(s, a; c)). Here c represents the objects, layouts and any other semantic information describing the environment, and is sampled from C, the distribution of possible semantic scenarios. For example, c can be intuitively understood as encoding the complete map for navigation, or the complete object and obstacle layouts in robotics manipulations, not known to the agent in advance, and we refer to them as the context. Semantic Signal: At each time step, the agent observes from s a tuple (s o, s s), which consists of: a high-dimensional observation s o, e.g., the first person view image, and a low-dimensional discrete semantic signal s s, which encodes semantic information. Such signals are common in AI, e.g., in robotic manipulation tasks s s indicates whether the robot is holding an object; for games it is the game status of a player; in visual navigation it indicates whether the agent reached a landmark; while in the AI planning literature, s s is typically a list of predicates that describe binary properties of objects. 
We assume s s is provided by an oracle function, which can either be directly provided by the environment or extracted by some semantic extractor. Generalization: Let µ(a|{s (t) } t; θ) denote the agent's policy parametrized by θ conditioned on the previous states {s (t) } t. The objective of generalization is to train a policy on training environments E train such that the accumulative reward R(µ(θ); c) on test set E test is maximized. The key motivation of LEAPS is the fact that while each environment can be different in visual appearances, there are structural similarities between environments that can be captured as a probabilistic graphical model over the semantic information. On a high level, we aim to learn a Bayesian model M (D, c) that captures the semantic properties of the context c, from the agent's exploration experiences D. Given a new environment E(c), the agent computes the posterior P (c |D, M) for the unknown context c via the learned model M and its current experiences D. This allows the agent to plan according to its belief of c to reach the goal more effectively. Thanks to the Bayes rule, this formulation allows probabilistic inference even with limited (or even no) exploration experiences. Learning an accurate and complete Bayesian model M (D, c) can be challenging. We learn an approximate latent variable model M(y, z; ψ) parameterized by ψ with observation variable y and latent variable z that only depend on the semantic signal s s. Suppose we have K different semantic signals T 1,..., T K and s s ∈ {0, 1} K where s s (T k) denotes whether the kth signal T k (e.g., landmarks in navigation) is reached or not. Assuming T 1 is the final goal of the task, from any state s, we want to reach some final state s with s s (T 1) = 1. In this work, we consider navigation as a concrete example, which can be represented as reaching a state where a desired semantic signal becomes'true'. We exploit the fact that navigation to a target can be decomposed into reaching several way points on way to the target, and therefore can be guided by planning on the semantic signals, i.e., arrival at particular way points. Note that there can be 2K different values for s s. For efficient computation, we assume independence between different semantic signals T k: we use a binary variable z i,j to denote whether some state s with s s (T j) = 1 can be "directly reached", i.e., by a few exploration steps, from some state s with s s (T i) = 1, regardless of other signals T k ∈ {T i, T j}. In addition, we also assume reversibility, i.e., z i,j = z j,i, so only K(K − 1)/2 latent variables are needed. Before entering the unknown environment, the agent does not know the true value of z i,j, but holds some prior belief DISPLAYFORM0 is some parameter to be learned. After some exploration steps, the agent receives a noisy observation y i,j of z i,j, i.e., whether a state s with s s (T j) = 1 is reached. We define the observation model P (y i,j |z i,j) as follows: DISPLAYFORM1 At any time step, the agent hold an overall belief P (z|Y) of the semantic structure of the unknown environment, based on its experiences Y, namely the samples of y. Multi-target sub-policies: With our semantic model, we correspondingly learn multi-target subpolicies µ(a|{s DISPLAYFORM0 is particularly trained for sub-target T i, i.e., reaching a state s with s s (T i) = 1. 
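For concreteness, the posterior update over a single reachability variable z i,j can be carried out in closed form because each z i,j is binary and independent of the others. The Python sketch below keeps the belief in log-odds form; the symmetric-noise parameterization of P(y i,j | z i,j) by a single flip probability ψ obs is an illustrative assumption, not necessarily the exact observation model defined above.

    import numpy as np

    class SemanticModel:
        """Minimal sketch of the latent-variable model M(y, z; psi).

        z[i, j] is the belief that signals T_i and T_j are directly reachable.
        The symmetric-noise observation model used here (a single flip
        probability psi_obs) is an illustrative assumption.
        """

        def __init__(self, K, psi_prior, psi_obs=0.2):
            self.K = K
            self.psi_prior = psi_prior          # K x K matrix of prior P(z_ij = 1)
            self.psi_obs = psi_obs              # assumed probability that an observation flips z_ij
            self.log_odds = np.log(psi_prior / (1.0 - psi_prior))  # running posterior in log-odds form

        def update(self, i, j, y):
            """Bayes-rule update of P(z_ij | Y) after observing a sample y in {0, 1}."""
            if y == 1:
                ratio = (1.0 - self.psi_obs) / self.psi_obs      # P(y=1|z=1) / P(y=1|z=0)
            else:
                ratio = self.psi_obs / (1.0 - self.psi_obs)      # P(y=0|z=1) / P(y=0|z=0)
            self.log_odds[i, j] += np.log(ratio)
            self.log_odds[j, i] = self.log_odds[i, j]            # reversibility: z_ij = z_ji

        def belief(self):
            """Current posterior marginals z_hat[i, j] = P(z_ij = 1 | Y)."""
            return 1.0 / (1.0 + np.exp(-self.log_odds))

Maintained this way, each new exploration batch only touches the pairs it actually observed, which is one reason the semantic model stays cheap to update online.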
Hence the semantic model can be treated as a modelbased planning module that picks an intermediate sub-target for the sub-policies to execute so that the final target T 1 can be reached with the highest probability. Inference and planning on M: We assume the agent explores the current environment for a short horizon of N steps and receives semantic signals s s,..., s s (N). Then we compute the bit-OR operation over these binary vectors DISPLAYFORM1. By the reversibility assumption, for T i and T j with B(T i) = B(T j) = 1, we know that T i and T j are "directly reachable" for each other, namely a sample of y i,j = 1, and otherwise y i,j = 0. Combining all the history samples of y and the current batch from B as Y, we can perform posterior inference P (z|Y) by the Bayes rule. By the independence assumption, we can individually compute the belief of each latent variable z i,j, denoted byẑ i,j = P (z i,j |Y i,j). Given the current beliefsẑ i,j, the current semantic signals s s and the goal T 1, we search for an optimal plan τ * = {τ 0, τ 1, . . ., τ m−1, τ m}, where τ i ∈ {1 . . . K} denotes an index of concepts and particularly τ m = 1, so that the joint belief along the path from some current signal to the goal is maximized: DISPLAYFORM2 After obtaining τ, we execute the sub-policy for the next sub-target T τ 1, and then repeatedly update the model and replan every N steps. The model parameters ψ have two parts: ψ prior for the prior of z and ψ obs for the noisy observation y. Note that ψ obs is related to the performance of the sub-policies µ(θ): if µ(θ) has a high success rate for reaching sub-targets, ψ obs should be low; when µ(θ) is poor, ψ obs should be higher (cf. Eq. ).Learning ψ prior: We learn ψ prior from E train. During training, for each pair of semantic signals T i and T j, we run random explorations from some state s with s(T i) = 1. If eventually we reach some state s with s (T j) = 1, we consider T i and T j are reachable and therefore a positive sample z i,j = 1; otherwise a negative sample z i,j = 0. Suppose Z denotes the samples we obtained for z from E train. We run maximum likelihood estimate for ψ prior by maximizing L MLE (ψ prior) = P (Z|ψ prior).Learning ψ obs: There is no direct supervision for ψ obs. However, we can evaluate a particular value of ψ obs by policy evaluation on the validation environments E valid. We optimize the accumulative reward DISPLAYFORM0, with the semantic model M (ψ). Analytically optimizing L valid is hard. Instead, we apply local search in practice to find the optimal ψ obs. The LEAPS agent consists of two parts, the multi-target sub-policy µ(T i, θ) and the semantic model M (ψ). Learning the multi-target sub-policies can be accomplished by any standard deep RL method on E train. For the semantic model, learning ψ prior does not depend on the sub-policies and can be reused even with different sub-policies; ψ obs depends on the sub-policies so it should be learned after µ(T i, θ) is obtained. Figure 1: Visualization of learned semantic prior of M(ψ): the most and least likely nearby rooms for dining room (L), bedroom (M) and outdoor (R), with numbers denoting ψ z, i.e., the probability of two rooms connecting to each other. Figure 2: Example of a successful trajectory. The agent is spawned inside the house, targeting "outdoor". 
Left: the 2D top-down map with sub-target trajectories ("outdoor" - orange; "garage" - blue; "living room" - green); Right, 1st row: RGB visual image; Right, 2nd row: the posterior of the semantic graph and the proposed sub-targets (red arrow). Initially, the agent starts by executing the sub-policy "outdoor" and then "garage" according to the prior knowledge (1st graph), but both fail (top orange and blue trajectories in the map). After updating its belief that garage and outdoor are not nearby (grey edges in the 2nd graph), it then executes the "living room" sub-policy with success (red arrow in the 2nd graph, green trajectory). Finally, it executes the "outdoor" sub-policy again, explores the living room and reaches the goal (3rd graph, bottom orange trajectory). RoomNav is a concept-driven navigation task based on the House3D environment BID12. In RoomNav, the agent is given a concept target, i.e., a room type, and needs to navigate to find the target room. RoomNav pre-selected a fixed set of target room types and provides a training set of 200 houses, a testing set of 50 houses and a small validation set of 20 houses. Semantic signals: We choose the K = 8 most common room types as our semantic signals, such that s s (T i) denotes whether the agent is currently in a room of type T i. When given a target T i, reaching a state s with s s (T i) = 1 becomes our final goal. House3D provides bounding boxes for rooms, which can be directly used as the oracle for semantic signals. But in practice, we only use these oracle signals to train a room type detector and use this detector to extract semantic information during evaluation. Details can be found at the beginning of Sec. 6. The semantic model and sub-policies: In navigation, the reachability variable z i,j can naturally represent the connectivity between room type T i and room type T j. We run random explorations in training houses between rooms to collect samples for learning ψ prior. For learning ψ obs, we perform a grid search and evaluate on the validation set. For sub-policies, we learn target-driven LSTM policies by A3C with shaped reward on E train. More details are in Appendix G. Figure 3: Comparison in success rate with model-free baselines (Sec. 6.2). We evaluate the performance of a random policy (blue), the model-free RL baseline (pure µ(θ), green) and our LEAPS agent (red), with increasing horizon H from left to right (left: H = 300; middle: H = 500; right: H = 1000). Each row shows a particular metric. Top row: success rate (y-axis) w.r.t. the distance in meters from the birthplace to the target room (x-axis); middle row: success rate with confidence interval (y-axis) w.r.t. the shortest planning distance in the ground truth semantic model (x-axis); bottom row: relative improvement of LEAPS over the baseline (y-axis) w.r.t. the optimal plan distance (x-axis). As the number of planning computations, i.e., H/N, increases (from left to right), the LEAPS agent outperforms the baselines by more. LEAPS also has higher relative improvements, i.e., 40% to 180%, for targets requiring more semantic planning computations (i.e., plan-steps > 2). In this section, we experiment on RoomNav and try to answer the following questions: Does the learned prior distribution capture meaningful semantic consistencies? Does our LEAPS agent generalize better than the model-free RL agent that only takes image input? Our LEAPS agent takes additional semantic signals as input.
How does LEAPS compare to other model-free RL approaches that also take the semantic signals as part of the inputs but in a different way from our semantic model? For example, what about replacing our semantic model with a complicated RNN controller? What do our LEAPS agents perform under some other metric considering the episode length? We consider a recently proposed metric, SPL BID0 in Sec. 6.4.Semantic Signals: All the semantic signals fed to our semantic model at test time are extracted by a CNN room type detector, which is trained on the (noisy) oracle semantic information provided by the House3D on E train and validated on E valid. During training, all the approaches directly use the oracle semantic signals. More details are in Appendix J and Appendix D. We visualize the learned prior P (z|ψ prior) in Fig. 1 with 3 room types and their most and least likely connected rooms. The learned prior indeed captures reasonable relationships: bathroom is likely to connect to a bedroom; kitchen is often near a dining room while garage is typically outdoor. We follow measure the testing success rate of different agents under various horizon lengths on E test. More details are in Appendix C and F. We compare our LEAPS agent with two baselines random policy (denoted by "random") and model-free RL agent that only takes in image input s o and executes µ(T i, θ) throughout the episode (denoted by "pure µ(θ)"). For LEAPS agent, we set N = 30, i.e., update the semantic model every 30 steps. We experiment on horizons H = 300, 500, 1000 and evaluate the success rate and relative improvements of our LEAPS agent over the baselines in Fig. 3. As the number of planning computations, H/N, increases, our LEAPS agent outperforms the baselines more significantly in success rate. Note that since targets of plan-steps 1 do not require any planning computations, hence it is as expected that LEAPS does not improve much over the pure policy. The best relative improvements are achieved for targets neither too faraway nor too close, i.e., plan steps equal to 3 or 4. Interestingly, we observe that there is a small success rate increase for targets that are 5 plan steps away. We suspect that this is because it is rare to see houses that has a diameter of 5 in the semantic model (imagine a house where you need to go through 5 rooms to reach a place). Such houses may have structural properties that makes navigation easier. Fig. 2 shows an example of a success trajectory of our LEAPS agent. We visualize the progression of the episode, describe the plans and show the updated graph after exploration. Here we consider two semantic-aware agents that also takes the semantic signals as input. We train new sub-policies µ s (θ s) taking both s o and s s as input. Note that updating and planning on M (Eq. 2) only depend on the current semantic signal s s, the target T i, and the accumulative bit-OR feature B. Hence, we fixed the same set of sub-policies µ(θ) used by our LEAPS agent, and train an LSTM controller with 50 hidden units on E train that takes all the necessary semantic information, and produce a sub-target every N steps. Training details are in Appendix I. Note that the only difference between our LEAPS agent and this HRL agent is the representation of the planning module. The LSTM controller has access to exactly the same semantic information as our model M and uses a much more complicated neural model. Thus we expect it to perform competitively to our LEAPS agent. 
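For reference, the planning computation that M performs every N steps — and that the LSTM controller replaces with a learned recurrent mapping — can be written as a best-path search over the current edge beliefs ẑ. A minimal sketch follows; casting the max-product search as a shortest-path problem under −log ẑ weights and solving it with Dijkstra's algorithm is our own illustrative choice, not necessarily how the search is implemented in LEAPS.

    import heapq
    import math

    def plan_next_subtarget(z_hat, s_s, goal):
        """Find the path from any currently satisfied signal to `goal` that
        maximizes the product of edge beliefs z_hat[i][j], and return its
        first sub-target.  Maximizing a product of probabilities equals a
        shortest path under weights -log z_hat, so we run Dijkstra from the
        goal and read off the best entry point.
        """
        K = len(z_hat)
        dist = [math.inf] * K          # -log of best belief from each node to the goal
        prev = [None] * K              # next hop toward the goal
        dist[goal] = 0.0
        heap = [(0.0, goal)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v in range(K):
                if v == u or z_hat[u][v] <= 0.0:
                    continue
                nd = d - math.log(z_hat[u][v])
                if nd < dist[v]:
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        starts = [i for i in range(K) if s_s[i] == 1]   # signals that currently hold
        if not starts:
            return goal
        start = min(starts, key=lambda i: dist[i])
        return goal if prev[start] is None else prev[start]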
The results are shown in FIG1, where our LEAPS agent outperforms both baselines. The semantic augmented policy µ s (θ s) does not improve much on the original µ(θ). For the HRL agent with an LSTM controller, the LEAPS agent achieves higher relative improvements for targets requiring more planning computations (i.e., plan-steps > 1), and also has the following advantages: M can be learned more efficiently with far fewer parameters: an LSTM with 50 hidden units has over 10^4 parameters while M(ψ) only has 38 parameters (we assign the same value to all ψ); M can adapt to new sub-policies µ(θ) with little fine-tuning (ψ prior remains unchanged) while the LSTM controller needs to be re-trained; the model M and the planning procedure are fully interpretable. FIG1: We evaluate the performance of the semantic augmented model-free agent ("aug. µ s (θ)", blue), the HRL agent with the same sub-policies as LEAPS but with an LSTM controller ("RNN control.", green) and our LEAPS agent (red), with increasing horizon H from left to right (left: H = 300; middle: H = 500; right: H = 1000). Top row: success rate (y-axis) w.r.t. the distance in meters from birthplace to target (x-axis); middle row: success rate with confidence interval (y-axis) w.r.t. the shortest planning distance in the ground truth semantic model (x-axis); bottom row: relative improvements of LEAPS over the baselines (y-axis) w.r.t. the optimal plan distance (x-axis). Our LEAPS agent outperforms both of the baselines for targets requiring planning computations (i.e., plan-steps > 1). For faraway targets with plan-steps > 2 in longer horizons (H ≥ 500), LEAPS improves over the augmented policy by 80% and over the RNN controller by 10% in success rate. Note that even though the LSTM controller has two orders of magnitude more parameters than our semantic model M, our LEAPS agent still performs better. In the previous evaluation, we only considered the metric of success rate under different horizons. Another informative metric would be the episode length. There are two important factors: we would expect a better navigation agent to finish the semantic task in fewer actions, and we should assign more credit to the agent when it finishes a hard episode and less credit when it finishes an easy one. Recently, a new evaluation metric capturing both of these factors was proposed for embodied navigation agents by BID0, i.e., the Success weighted by Path Length (SPL) metric. SPL considers both the success rate and the path length needed to reach the goal from the starting point, and is defined by SPL = (1/N) Σ_{i=1}^{N} S_i · L_i / max(P_i, L_i), where N is the total number of episodes evaluated, S_i indicates whether episode i is a success or not, L_i is the ground truth shortest path distance in the episode, and P_i is the number of steps the agent actually took. We evaluate the performance of LEAPS agents against all baseline agents in terms of both success rate and SPL in FIG2. Our LEAPS agent has the highest average SPL (rightmost column) with a big margin over all baseline agents in all the cases. Notably, the margin in SPL is much more significant than the margin in pure success rate. More importantly, as the horizon increases, namely, as more planning computations are allowed, the SPL margin of LEAPS over the best remaining baselines strictly increases. This again indicates the effectiveness of our semantic model and shows that it indeed helps solve harder tasks, i.e., finding those faraway targets requiring more planning computations.
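As a small self-contained example, the SPL metric defined above can be computed as follows; the helper name and array layout are ours.

    import numpy as np

    def spl(successes, shortest_lengths, agent_lengths):
        """Success weighted by Path Length:
        SPL = (1/N) * sum_i  S_i * L_i / max(P_i, L_i).
        """
        S = np.asarray(successes, dtype=float)         # S_i in {0, 1}
        L = np.asarray(shortest_lengths, dtype=float)  # ground-truth shortest path length L_i
        P = np.asarray(agent_lengths, dtype=float)     # steps the agent actually took, P_i
        return np.mean(S * L / np.maximum(P, L))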
We also notice that in shorter horizons (H ≤ 500), for plan-steps equal to 4, LEAPS agents have the highest success rate but a relatively lower SPL. This is because our LEAPS agent updates the semantic model every fixed N = 30 steps. This relatively low update frequency may potentially increase the episode length needed to reach a goal that requires more planning computations. However, when the horizon is long, i.e., allowing enough planning computations, our LEAPS agents significantly outperform all the baselines in the SPL metric. It would be helpful if the LEAPS agent could learn when to update the semantic model instead of updating it at a fixed frequency. We leave this to future work.

Success Rate (%) / SPL (‰):

opt plan-steps   1             2             3             4             5             overall
Horizon H = 300
random           20.5 / 15.9   6.9 / 16.7    3.8 / 10.7    1.6 / 4.2     3.0 / 8.8     7.2 / 13.6
pure µ(θ)        49.4 / 47.6   11.8 / 27.6   2.0 / 4.8     2.6 / 10.8    4.2 / 13.2    13.1 / 22.9
aug. µ_S(θ)      47.8 / 45.3   11.4 / 23.1   3.0 / 7.8     3.4 / 8.1     4.4 / 11.2    13.0 / 20.5
RNN control.     52.7 / 45.2   13.6 / 23.6   3.4 / 9.6     3.4 / 10.2    6.0 / 17.6    14.9 / 21.9
LEAPS            53.4 / 58.4   15.6 / 31.5   4.5 / 12.5    3.6 / 6.6     7.0 / 18.0    16.4 / 27.9
Horizon H = 500
random           21.9 / 16.9   9.3 / 18.3    5.2 / 12.1    3.6 / 6.1     4.2 / 9.9     9.1 / 15.1
pure µ(θ)        54.0 / 57.5   15.9 / 25.6   3.8 / 7.7     2.8 / 6.4     4.8 / 8.6     16.2 / 22.9
aug. µ_S(θ)      54.1 / 51.8   15.5 / 26.5   4.6 / 8.

Our LEAPS agents have the highest success rates for all the cases requiring planning computations, i.e., plan-steps larger than 1. For the SPL metric, LEAPS agents have the highest overall SPL value over all baseline methods (rightmost column). More importantly, as the horizon increases, LEAPS agents outperform the best baselines by a larger margin. LEAPS requires a relatively longer horizon for its best practical performance since the semantic model is updated every fixed N = 30 steps, which may potentially increase the episode length for short horizons. More discussions are in Sec. 6.4.

In this work, we proposed LEAPS to improve the generalization of RL agents in unseen environments with diverse room layouts and object arrangements, while the underlying semantic information is shared with the environments on which the agent is trained. We adopt a graphical model over semantic signals, which are low-dimensional binary vectors. During evaluation, starting from a prior obtained from the training set, the agent plans on the model, explores the unknown environment, and keeps updating the semantic model as new information arrives. For exploration, sub-policies that focus on multiple targets are pre-trained to execute primitive actions from visual input. The semantic model in LEAPS is lightweight, interpretable and can be updated dynamically with little exploration. As illustrated in the House3D environment, LEAPS works well for environments with semantic consistencies - typical of realistic domains. On random environments, e.g., random mazes, LEAPS degenerates to exhaustive search. Our approach is general and can be applied to other tasks, such as robotics manipulation, where semantic signals can be the status of robot arms and object locations, or video games, where we can plan on semantic signals such as the game status or current resources. In future work we will investigate models for more complex semantic structures. Environment: We consider a contextual Markov decision process E(c) defined by E(c) = (S, A, P(s'|s, a; c), r(s, a; c)), where S is the state space and A is the action space.
c represents the objects, layouts and any other semantic information describing the environment, and is sampled from C, the distribution of possible semantic scenarios. r(s, a; c) denotes the reward function while P (s |s, a; c) describes transition probability conditioned on c. For example, c can be intuitively understood as encoding the complete map for navigation, or the complete object and obstacle layouts in robotics manipulations, not known to the agent in advance, and we refer to them as the context. Semantic Signal: At each time step, the agent's observation is a tuple (s o, s s), which consists of: (a) a high-dimensional observation s o, e.g. the first person view image, and (b) a low-dimensional semantic signal s s, which encodes semantic information. Such low-dimensional discrete signals are commonly used in AI, e.g. in robotic manipulation tasks s s indicates whether the robot is holding an object; for games it is the game status of a player; in visual navigation it indicates whether the agent reached a landmark; while in the AI planning literature, s s is typically a list of predicates that describe binary properties of objects. We assume s s is provided by an oracle function, which can either be directly provided by the environment or extracted by some semantic extractor. Generalization: Let µ(a|{s (t) } t; θ) denote the agent's policy parametrized by θ conditioned on the previous states {s (t) } t and R(µ(θ); c) denote the accumulative reward of µ(θ) in E(c). The objective is to find the best policy that maximizes the expected accumulative reward DISPLAYFORM0 In practice, we sample a disjoint partition of a training set E train = {E(c i)} i and a testing set E test = {E(c j)} j, where {c i} and {c j} are samples from C. We train µ(θ) with a shaped reward r train only on E train, and measure the empirical generalization performance of the learned policy on E test with the original unshaped reward (e.g., binary reward of success or not). In RoomNav the 8 targets are: kitchen, living room, dining room, bedroom, bathroom, office, garage and outdoor. We inherit the success measure of "see" from BID12: the agent needs to see some corresponding object for at least 450 pixels in the input frame and stay in the target area for at least 3 time steps. For the binary signal s s, we obtain from the bounding box information for each room provided from SUNCG dataset BID8, which is very noisy. Originally the House3D environment supports 13 discrete actions. Here we reduce it to 9 actions: large forward, forward, left-forward, right-forward, large left rotate, large right rotate, left rotate, right rotate and stay still. We following the evaluation setup from BID12 and measure the success rate on E test over 5750 test episodes, which consists of 5000 random generated configurations and 750 specialized for faraway targets to increase the confidence of measured success rate. These 750 episodes are generated such that for each plan-distance, there are at least 500 evaluation episodes. Each test episode has a fixed configuration for a fair comparison between different approaches, i.e., the agent will always start from the same location with the same target in that episode. Note that we always ensure that the target is connected to the birthplace of the agent, and the the birthplace of the agent is never within the target room. In the experiment sections, we use a CNN detector to extract the semantic signals at test time. 
Here we also evaluate the performance of all the approaches when using the ground truth signal from the oracle provided by the House3D environment. The results are in Figure 6, where we also include the LEAPS agent using the CNN detector as a reference. Generally, using the ground truth signal and using the CNN detector yield comparable overall performances in both metrics of success rate and SPL. Figure 6: Metrics of Success Rate (%) / SPL (‰) evaluating the performances of LEAPS and baseline agents using the ground truth oracle semantic signals provided by the environments. We also include the performance of the LEAPS agent using the CNN detector as a reference. Note that even when using a CNN detector, LEAPS agents outperform all baselines in both metrics of success rate and SPL. Notably, the performance of LEAPS-CNN agents is comparable to LEAPS-true agents and sometimes even better. This indicates that our semantic model can indeed tolerate practical errors in CNN detectors. More discussions are in Sec. D. We illustrate the ground truth shortest distance information as well as the average episode length of successful episodes for all the approaches. The results are shown in FIG3. Note that the average ground truth shortest path is around 46.86 steps. Considering the fact that the agent has 9 actions per step as well as the strong partial observability, this indicates that our benchmark semantic navigation task is indeed challenging. We compute the confidence interval of the measured success rate by fitting a binomial distribution. For optimal plan steps, we first extract all the room locations, and then construct a graph where a vertex is a room while an edge between two vertices is the shortest distance between these two rooms. After obtaining the graph and a birthplace of the agent, we compute the shortest path from the birthplace to the target on this graph to derive the optimal plan steps. Hyperparameters: We utilize the same policy architecture as BID12. It was mentioned in BID12 that using segmentation mask + depth signals as input leads to relatively better performance for policy learning, so we inherit this setting here. In the original House3D paper, a gated attention module is used to incorporate the target instruction. Here, since we only have K = 8 different sub-policies, we simply train an individual policy for each target, and we empirically observe that this leads to better performance. We run A3C with γ = 0.97, batch size 64, learning rate 0.001 with Adam, weight decay 10^-5, and entropy bonus 0.1. We backprop through at most 30 time steps. We also compute the squared ℓ2 norm of the logits and add it to the loss with a coefficient of 0.01. We also normalize the advantage to mean 0 and standard deviation 1. Reward shaping: We use a shaped reward function similar to BID12: the reward at each time step is computed from the difference of the shortest paths in meters from the agent's location to the goal before and after taking an action. We also add a time penalty of 0.1 and a collision penalty of 0.3. When the agent reaches the goal, the success reward is 10. Curriculum learning: We run curriculum learning by increasing the maximum distance between the agent's birthplace and the target by 3 meters every 10000 iterations. We run 60000 training iterations in total and use the final model as our learned policy µ(θ). After evaluation on the validation set, we choose to run random exploration for 300 steps to collect a sample of z.
For a particular environment, we collect totally 50 samples for each z i,j.For all i = j, we set ψ For the LSTM controller, we ran A2C with batch size 32, learning rate 0.001 with adam, weight decay 0.00001, gamma 0.99, entropy bonus 0.01 and advantage normalization. The reward function is designed as follows: for every subtask it propose, it gets a time penalty of 0.1; when the agent reach the target, it gets a success bonus of 2.The input of the LSTM controller consists of s s (t) (K bits), B (K bits), last subtask T k, and the final target T i. We convert T i and T k to a one-hot vector and combine the other two features to feed into the LSTM. Hence the input dimension of LSTM controller is 4K, namely 32 in RoomNav. For the semantic augmented LSTM policy, µ s (θ s), we firstly use the CNN extract visual features from s o and combine the input semantic features and the visual features as the combined input to the LSTM in the policy. We noticed that in order to have a room type classifier, only using the single first person view image is not enough. For example, the agent may face towards a wall, which is not informative, but is indeed inside the bedroom (and the bed is just behind).So we take the panoramic view as input, which consists of 4 images, s 1 o,..., s 4 o with different first person view angles. The only exception is that for target "outdoor", we notice that instead of using a panoramic view, simply keeping the recent 4 frames in the trajectory leads to the best prediction accuracy. We use an CNN feature extractor to extract features f (s i o) by applying CNN layers with kernel size 3, strides and channels. We also use relu activation and batch norm. Then we compute the attention weights over these 4 visual features by l i = f (s) and a i = softmax(l i). Then we compute the weighted average of these four frames g = i a i f (s i o) and feed it to a single layer perceptron with 32 hidden units. For each semantic signal, we generate 15k positive and 15k negative training data from E train and use Adam optimizer with learning rate 5e-4, weight decay 1e-5, batch size 256 and gradient clip of 5. We keep the model that has the best prediction accuracy on E valid.For a smooth prediction during testing, we also have a hard threshold and filtering process on the CNN outputs: s s (T i) will be 1 only if the output of CNN has confidence over 0.85 for consecutively 3 steps.
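A minimal PyTorch sketch of this panoramic detector is given below. Only the 3 × 3 kernels, the ReLU/batch-norm blocks, the softmax attention over the four views, the 32-unit hidden layer, and the 0.85/3-step filtering are taken from the description above; the channel widths, the number of convolutional blocks, and the way the scalar attention logit is produced are illustrative assumptions.

    import torch
    import torch.nn as nn

    class RoomTypeDetector(nn.Module):
        """Sketch of the panoramic room-type detector described above."""

        def __init__(self, in_channels=3, feat_dim=64):
            super().__init__()
            self.backbone = nn.Sequential(                      # per-view feature extractor f(.)
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
            )
            self.attn = nn.Linear(feat_dim, 1)                  # scalar logit l_i per view (assumption)
            self.head = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, views):                               # views: (B, 4, C, H, W)
            B, V = views.shape[:2]
            feats = self.backbone(views.flatten(0, 1)).view(B, V, -1)   # f(s_o^i) for each view
            a = torch.softmax(self.attn(feats), dim=1)                  # attention a_i over the 4 views
            g = (a * feats).sum(dim=1)                                  # weighted average of view features
            return torch.sigmoid(self.head(g)).squeeze(-1)              # P(agent is in this room type)

    def smoothed_signal(probs, thresh=0.85, k=3):
        """s_s(T_i) fires only if the detector is confident for k consecutive steps."""
        return [int(i >= k - 1 and all(p > thresh for p in probs[i - k + 1:i + 1]))
                for i in range(len(probs))]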
We propose a hybrid model-based & model-free approach using semantic information to improve DRL generalization in man-made environments.
In this paper, we focus on two challenges which offset the promise of sparse signal representation, sensing, and recovery. First, real-world signals can seldom be described as perfectly sparse vectors in a known basis, and traditionally used random measurement schemes are seldom optimal for sensing them. Second, existing signal recovery algorithms are usually not fast enough to make them applicable to real-time problems. In this paper, we address these two challenges by presenting a novel framework based on deep learning. For the first challenge, we cast the problem of finding informative measurements by using a maximum likelihood (ML) formulation and show how we can build a data-driven dimensionality reduction protocol for sensing signals using convolutional architectures. For the second challenge, we discuss and analyze a novel parallelization scheme and show it significantly speeds-up the signal recovery process. We demonstrate the significant improvement our method obtains over competing methods through a series of experiments. High-dimensional inverse problems and low-dimensional embeddings play a key role in a wide range of applications in machine learning and signal processing. In inverse problems, the goal is to recover a signal X ∈ R N from a set of measurements Y = Φ(X) ∈ R M, where Φ is a linear or non-linear sensing operator. A special case of this problem is compressive sensing (CS) which is a technique for efficiently acquiring and reconstructing a sparse signal BID12 BID6 BID1. In CS Φ ∈ R M ×N (M N) is typically chosen to be a random matrix ing in a random low-dimensional embedding of signals. In addition, X is assumed to be sparse in some basis Γ, i.e., X = ΓS, where S 0 = K N.While sparse signal representation and recovery have made significant real-world impact in various fields over the past decade , arguably their promise has not been fully realized. The reasons for this can be boiled down to two major challenges: First, real-world signals are only approximately sparse and hence, random/universal sensing matrices are sub-optimal measurement operators. Second, many existing recovery algorithms, while provably statistically optimal, are slow to converge. In this paper, we propose a new framework that simultaneously takes on both these challenges. To tackle the first challenge, we formulate the learning of the dimensionality reduction (i.e., signal sensing operator) as a likelihood maximization problem; this problem is related to the Infomax principle BID24 asymptotically. We then show that the simultaneous learning of dimensionality reduction and reconstruction function using this formulation gives a lower-bound of the objective functions that needs to be optimized in learning the dimensionality reduction. This is similar in spirit to what Vincent et al. show for denoising autoencoders in the non-asymptotic setting BID38. Furthermore, we show that our framework can learn dimensionality reductions that preserve specific geometric properties. As an example, we demonstrate how we can construct a data-driven near-isometric low-dimensional embedding that outperforms competing embedding algorithms like NuMax BID18. Towards tackling the second challenge, we introduce a parallelization (i.e., rearrangement) scheme that significantly speeds up the signal sensing and recovery process. We show that our framework can outperform state-of-the-art signal recovery methods such as DAMP BID26 and LDAMP BID25 both in terms of inference performance and computational efficiency. 
We now present a brief overview of prior work on embedding and signal recovery. Beyond random matrices, there are other frameworks developed for deterministic construction of linear (or nonlinear) near-isometric embeddings BID18 BID16 BID0 BID35 BID39 BID5 BID37 BID32. However, these approaches are either computationally expensive, not generalizable to outof-sample data points, or perform poorly in terms of isometry. Our framework for low-dimensional embedding shows outstanding performance on all these aspects with real datasets. Algorithms for recovering signals from undersampled measurements can be categorized based on how they exploit prior knowledge of a signal distribution. They could use hand-designed priors BID7 BID13 BID9 BID29, combine hand-designed algorithms with data-driven priors BID25 BID3 BID20 BID8 BID17, or take a purely data-driven approach BID28 BID22 BID41. As one moves from hand-designed approaches to data-driven approaches, models lose simplicity and generalizability while becoming more complex and more specifically tailored for a particular class of signals of interest. Our framework for sensing and recovering sparse signals can be considered as a variant of a convolutional autoencoder where the encoder is linear and the decoder is nonlinear and specifically designed for CS application. In addition, both encoder and decoder contain rearrangement layers which significantly speed up the signal sensing and recovery process, as we discuss later. Convolutional autoencoder has been previously used for image compression; however, our work is mainly focused on the CS application rather than image compression. In CS, measurements are abstract and linear whereas in the image compression application measurements are a compressed version of the original image and are nonlinear. Authors in have used bicubic interpolation for upscaling images; however, our framework uses a data-driven approach for upscaling measurements. Finally, unlike the image compression application, when we deploy our framework for CS and during the test phase, we do not have high-resolution images beforehand. In addition to image compression, there have been previous works BID34 BID22 to jointly learn the signal sensing and reconstruction algorithm in CS using convolutional networks. However, the problem with these works is that they divide images into small blocks and recover each block separately. This blocky reconstruction approach is unrealistic in applications such as medical imaging (e.g. MRI) where the measurement operator is a Fourier matrix and hence we cannot have blocky reconstruction. Since both papers are designed for block-based recovery whereas our method senses/recovers images without subdivision, we have not compared against them. Note that our method could be easily modified to learn near-optimal frequency bands for medical imaging applications. In addition, BID34 and BID22 use an extra denoiser (e.g. BM3D, DCN) for denoising the final reconstruction while our framework does not use any extra denoiser and yet outperforms state-of-the-art as we show later. Beside using convolutional autoencoders, authors in BID40 have introduced the sparse recovery autoencoder (SRA). In SRA, the encoder is a fully-connected layer while in this work, the encoder has a convolutional structure and is basically a circulant matrix. For large-scale problems, learning a fully-connected layer (as in the SRA encoder) is significantly more challenging than learning convolutional layers (as in our encoder). 
In SRA, the decoder is a T -step projected subgradient. However, in this work, the decoder is several convolutional layers plus a rearranging layer. It should also be noted that the optimization in SRA is solely over the measurement matrix and T (which is the number of layers in the decoder) scalar values. However, here, the optimization is performed over convolution weights and biases that we have across different layers of our network. In this section, we describe our framework for sparse signal representation and recovery and demonstrate how we can learn (near-)optimal projections and speed up signal recovery using parallelization along with convolutional layers. We call our framework by DeepSSRR, which stands for Deep Sparse Signal Representation and Recovery. DeepSSRR consists of two parts: A linear dimensionality reduction Φ: R N → R M for taking undersampled measurements and a nonlinear inverse mapping f Λ : R M → R N for recovering signals from their undersampled measurements. We learn both Φ and f Λ from training data. DeepSSRR FIG0 ) is based primarily on deep convolutional networks (DCN) as this gives us two advantages: (a) sparse connectivity of neurons, and (b) having shared weights which increases learning speed compared to fully-connected networks. Therefore, we impose a convolutional network architecture on both Φ and f Λ while learning them. Please note that we assume that measurements are linear; however, it is easy to extend DeepSSRR to adopt nonlinear measurements, i.e., allowing for Φ to be nonlinear by adding nonlinear units to convolutional layers. Given that the intervening layers are linear, one might argue that one convolutional layer (i.e., a single circulant matrix) is enough since we can merge kernel matrices into a single matrix. However, we consider a multi-layer architecture for learning Φ for two reasons. First, computationally it is cheaper to have separate and smaller kernels and second, it makes the implementation suitable for adding the aforementioned nonlinearities. We previously mentioned that in order to speed up the sensing and recovery process, we add a parallelization scheme in learning both Φ and f Λ that we describe in the following. Our original sensing model was Y = ΦX where X ∈ R N and Y ∈ R M. Assume that the undersampling ratio, i.e., M N is equal to 1 r. The left vector-matrix multiplication in FIG1 (a) denotes a convolution of zero-padded input signal with size N = rM = r(M + q − 1), filter size rq, stride (i.e., filter shift at every step) of size r, and output size of M. If we denote the input signal by X (in) and output by X (out) and filter by W we can write DISPLAYFORM0 If we concatenate the sub-filters and sub-signals denoted in orange in the left vector-matrix multiplication of FIG1 (a), we derive a new vector-matrix multiplication shown on the right side of FIG1 (a). There the input size is M = (M + q − 1), filter size is q, stride size is 1, and output size is M. Equation FORMULA0 states that the left convolution in FIG1 (a) can be written as the summation of r separate and parallel convolutions shown on the right side. Much like in the sensing part (i.e., learning Φ), as shown in FIG1 (b), a large strided deconvolution can be chopped into several parallel smaller deconvolutions for the recovery part (i.e., learning f Λ ). 
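The decomposition stated above — a single stride-r convolution with a filter of length rq giving exactly the same output as the sum of r parallel stride-1 convolutions applied to the interleaved sub-signals and sub-filters — is easy to verify numerically. The NumPy check below uses the cross-correlation convention that deep learning frameworks call "convolution"; the toy sizes are arbitrary.

    import numpy as np

    def strided_corr(x, w, r):
        """Stride-r 'valid' cross-correlation: out[m] = sum_t w[t] * x[m*r + t]."""
        M = (len(x) - len(w)) // r + 1
        return np.array([np.dot(w, x[m * r:m * r + len(w)]) for m in range(M)])

    # toy sizes: undersampling ratio 1/r, filter length r*q, output length M
    r, q, M = 4, 3, 10
    x = np.random.randn(r * (M + q - 1))     # zero-padded input of length r*(M + q - 1)
    w = np.random.randn(r * q)               # one filter of length r*q

    full = strided_corr(x, w, r)             # the single large strided convolution

    # r parallel stride-1 convolutions on interleaved sub-signals / sub-filters
    parallel = sum(
        np.correlate(x[p::r], w[p::r], mode="valid")   # x_p has length M+q-1, w_p has length q
        for p in range(r)
    )

    assert np.allclose(full, parallel)       # the two computations agree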
Because of these parallelizations, the computational complexity of calculating the outputs of layers in DeepSSRR is O(M), which is much less than the O(MN) of typical iterative and unrolled algorithms (e.g. DAMP and LDAMP BID26) or the O(N) of previous recovery algorithms based on deep learning (e.g. DeepInverse).

Algorithm 1: Learning a near-isometric embedding
  Input: training dataset D, number of epochs n_epochs, network parameters Ω_e
  Output: a near-isometric embedding Φ: R^N → R^M
  for i = 1 to n_epochs do
    - generate a randomly permuted training set P(D)
    for every batch B_j ∈ P(D) do
      - compute the embedding Φ(x) for every x ∈ B_j
      - compute the loss function corresponding to B_j as the maximum deviation from isometry over the pairs in B_j

As shown in FIG0, for learning Φ we first divide the input signal (of size N) into r (r = N/M) sub-signals (of size M) such that all the congruent entries (modulo r) are in the same sub-signal. Then we run parallel convolutions on the r sub-signals and stack the outputs (of size M), deriving a tensor of length M and depth r. Through several convolutional layers, we turn this tensor into a vector of size M, which is the measurement vector Y; this completes the construction of Φ. Similarly, for learning f Λ, through several convolutional layers we turn the vector Y into a tensor of length M and depth r. We then unstack channels similar to the sub-pixel layer architecture BID33 and derive the final reconstruction X̂ = f Λ(Y) = f Λ(ΦX). We use MSE as the loss function and ADAM BID21 to learn the convolution kernels and biases.

Recall that in CS we take undersampled measurements Y = ΦX, where M ≪ N. Therefore, an important question is how one should design Φ. Conventional CS is based on random projections of a signal, which means that Φ is a random matrix. However, since signals are usually structured, random projections are not optimal for successfully recovering the corresponding signals. In many applications (e.g. medical imaging), we know a lot about the signals we are acquiring. Hence, given a large-scale dataset of the same type of signals of interest, we can learn (near-)optimal measurement matrices. As in the usual CS paradigm, we assume that the measurement matrix Φ is fixed, so that the training set {(X_i, Y_i = ΦX_i)}_i consists of pairs of signals and their corresponding measurements. Accordingly, we define the optimal measurement operator Φ as the one which maximizes the probability of the training data given the undersampled projections,

  Φ* = arg max_Φ Σ_i log P(X_i | Y_i = ΦX_i).

According to the law of large numbers, as the number of training pairs grows we can write

  (1/|D|) Σ_i log P(X_i | Y_i = ΦX_i)  ≈  E_X[log P(X | Y = ΦX)]  =  −H(X | ΦX)  =  I(X; ΦX) − H(X),

where I denotes the mutual information; since the Shannon entropy H(X) is constant for every Φ, the Φ that maximizes the probability of the training data given its measurements also, in the asymptotic setting, maximizes the mutual information between the input signal and the undersampled measurements. This is the same as the infomax principle first introduced in BID24. Now, suppose that we have a function f: R^M → R^N that takes the measurements Y = ΦX and reconstructs the input signals as X̂ = f(Y) = f(ΦX). We define the best reconstruction as the one which generates the training data with the highest probability. In other words, we define

  f* = arg max_f Σ_i log P(X_i | X̂_i = f(ΦX_i)).

Therefore, in the asymptotic setting, and similarly to the above, we can write

  f* = arg max_f E_X[log P(X | X̂ = f(ΦX))].

In practice, since we do not know the true underlying probability distribution P(X | X̂), we maximize a parametric distribution q(X | X̂) instead.
In this case, in the asymptotic setting we can write

  Λ* = arg max_Λ E_X[log q(X | X̂ = f_Λ(ΦX); Λ)].

Therefore, since the Kullback-Leibler divergence is bounded below by zero, we have

  E_X[log q(X | X̂; Λ)] ≤ E_X[log P(X | Y = ΦX; Λ)],

meaning that learning a parametric distribution for reconstructing X from Y amounts to maximizing a lower bound of E_X[log P(X | Y = ΦX; Λ)] and, accordingly, of the mutual information between the input signal X and the undersampled measurements Y. Hence, although we are not directly maximizing the mutual information between X and Y, we are maximizing a lower bound of it through learning Φ and Λ. If we assume X = X̂ + e, where e has an isotropic Gaussian distribution, then, since q(X | X̂ = x) = N(x, λI), the above maximization may be performed by minimizing the mean squared error (MSE). DeepSSRR is mainly designed for jointly sensing and recovering sparse signals in CS applications. However, we can specifically train the sensing part of DeepSSRR (without using the recovery part) for several important dimensionality reduction tasks. The sensing part of DeepSSRR (i.e., the encoder or matrix Φ) is a linear low-dimensional embedding that we can apply to learn a mapping from a subset of R^N to R^M (M < N) that is a near-isometry, i.e., a mapping that nearly preserves all inter-point distances. This problem has a range of applications, from approximate nearest neighbor search to the design of sensing matrices for CS. Recall that, for a set Q ⊂ R^N and ε > 0, the (linear or nonlinear) mapping Φ: Q → R^M is an ε-isometry w.r.t. the ℓ2-norm if for every x and x' in Q we have

  (1 − ε) ‖x − x'‖_2 ≤ ‖Φ(x) − Φ(x')‖_2 ≤ (1 + ε) ‖x − x'‖_2.

Algorithm 1 demonstrates the use of the low-dimensional embedding matrix Φ of DeepSSRR to construct a near-isometric embedding. We achieve this by penalizing the maximum deviation from isometry in several batches of data that are created by permuting the original training data in every training epoch. In Section 3 we will show how our proposed algorithm works compared to competing methods. We now illustrate the performance of DeepSSRR against competing methods in several problems. We first study the quality of the linear embeddings produced by DeepSSRR and compare it with two other linear algorithms - NuMax BID18 and random Gaussian projections. To show the price of linearity, we also pit these against the nonlinear version of DeepSSRR and a DCN (eight nonlinear convolutional layers + a max-pooling layer). We use the grayscale version of the CIFAR-10 dataset (50,000 training + 10,000 test 32 × 32 images). We train DeepSSRR and the DCN according to Algorithm 1 using filters of size 5 × 5. For DeepSSRR, depending on the size of the embedding, we use five to seven layers to learn Φ in Algorithm 1. Figure 3(a) shows the size of the embedding M as a function of the isometry constant ε for different methods. For the random Gaussian projections we have considered 100 trials, and the horizontal error bars represent the deviation from the average value. As we can see, the nonlinear version of the DeepSSRR low-dimensional embedding outperforms almost all the other methods by achieving a given isometry constant with fewer measurements. The only exception is when ε > 0.6 (i.e., a regime where we are not demanding a good isometry), where the DCN outperforms the nonlinear version of DeepSSRR, though with more parameters. A convolutional layer is equivalent to the product of a circulant matrix and the vectorized input. The number of nonzero elements in a circulant matrix depends on the size of the convolution filter.
As the number of such layers grows, so does the number of nonzero elements in the final embedding matrix. There are lower bounds BID30 on the number of nonzero elements in a matrix to ensure it is near-isometric. TAB0 shows the isometry constant value of DeepSSRR's low-dimensional embedding with different number of layers and different filter sizes. As we can see, gets smaller as the final embedding matrix has more nonzero elements (more layers, larger filters).Approximate Nearest Neighbors. Finding the closest k points to a given query datapoint is challenging for high-dimensional datasets. One solution is to create a near-isometric embedding that maps datapoints from R N to R M (M < N) and solving the approximate nearest neighbors (ANN) problem in the embedded space. FIG2 compares the performance of different methods in the ANN problem. It shows the fraction of k-nearest neighbors that are retained when embedding datapoints in a low-dimensional space. We have considered two separate embedding problems: First M = 65 for random embedding and NuMax and M = 64 for DCN and DeepSSRR's low-dimensional embedding. Second, M = 289 for random embedding and NuMax and M = 256 for DCN and DeepSSRR's lowdimensional embedding. Since the size of the embedding for DCN and DeepSSRR's low-dimensional embedding is smaller in both settings, they have a more challenging task to find the nearest neighbors. As shown in FIG2 (b) DeepSSRR's low-dimensional embedding outperforms other approaches. We divide the discussion of this section into two parts. In the first part, we study the performance of DeepSSRR in the sparse signal recovery problem. The discussion of this part along with experimental showing the effect of learning a sparse representation and parallelization on different criteria (e.g. phase transition, recovery accuracy and speed) are provided in Appendix A. In the second part that we provide in the following, we study the performance of DeepSSRR for the compressive image recovery problem. Compressive Image Recovery. In this part, we study the compressive image recovery problem by comparing DeepSSRR with two state-of-the-art algorithms DAMP BID26 and LDAMP BID25. Both DAMP and LDAMP use random Gaussian Φ while DeepSSRR learns a Φ. Here we run DAMP for 10 iterations and use a BM3D denoiser at every iteration. We also run LDAMP for 10 layers and use a 20-layer DCN in every layer as a denoiser. For DeepSSRR, we use 7 layers to learn the Φ and 7 layers to learn the f Λ . DeepSSRR is trained with an initial learning rate of 0.001 that is changed to 0.0001 when the validation error stops decreasing. For training, we have used batches of 128 images of size 64 × 64 from ImageNet BID31. Our training and validation sets include 10,000 and 500 images, respectively. On the other hand, DeepSSRR uses only 7 convolutional layers to recover the Man image which is significantly smaller compared to LDAMP's number of layers. Iterative recovery algorithms and their unrolled versions such as DAMP and LDAMP typically involve a matrix vector multiplication in every iteration or layer, and hence their computational complexity is O(M N). In DeepSSRR, the length of feature maps in every convolutional layer is equal to the size of embedding M. Therefore, computing the output of typical middle layers will cost O(M) that is significantly cheaper than the one for iterative or unrolled methods such as DAMP and LDAMP.Effect of the Number of Layers. 
Our experiments indicate that having more number of layers does not necessarily in a better signal recovery performance. This phenomenon is also observed in BID11 for the image super-resolution problem. The reason for this problem is the increased non-convexity and non-smoothness of loss function as we add more layers. One way to mitigate this problem is to add skip connections between layers. As shown in, skip connections smooth the loss surface of deep networks and make the optimization problem simpler. In this paper we introduced DeepSSRR, a framework that can learn both near-optimal sensing schemes, and fast signal recovery procedures. Our findings set the stage for several directions for future exploration including the incorporation of adversarial training and its comparison with other methods BID2 BID14 BID10 ). Furthermore, a major question arising from our work is quantifying the generalizability of a DeepSSRR-learned model based on the richness of training data. We leave the exploration of this for future research. In this section we study the problem of sparse signal recovery by comparing DeepSSRR to another DCN called DeepInverse and to the LASSO BID36 1 -solver implemented using the coordinate descent algorithm of BID15. We assume that the optimal regularization parameter of the LASSO is given by an oracle in order to obtain its best possible performance. Also, both training and test sets are wavelet-sparsified versions of 1D signals of size N = 512 extracted from rows of CIFAR-10 images and contain 100,000 and 20,000 signals, respectively. While DeepSSRR learns how to take undersampled measurements of data through its low-dimensional embedding Φ, DeepInverse uses random undersampling (i.e., a random Φ). DeepSSRR in this section has 3 layers for learning Φ and 3 layers for learning f Λ with filter size 25 × 1 while DeepInverse has five layers for learning the inverse mapping with filter size 125 × 1.Figure 5(a) shows the 1 phase transition plot BID13 ). This plot associates each grid point to an ordered pair (δ, ρ) ∈ 2, where δ = M N denotes the undersampling ratio and ρ = K M denotes the normalized sparsity level. Each grid point (δ, ρ) represents the probability of an algorithm's success in signal recovery for that particular problem configuration. As the name suggests, there is a sharp phase transition between values of (δ, ρ) where recovery fails with high probability to when it succeeds with high probability. In FIG4 (a), the blue curve is the 1 phase transition curve. The circular points denote the problem instances on which we study the performance of DeepInverse and the LASSO. The square points denote the problem instances on which we have trained and tested DeepSSRR. By design, all these problem instances are on the "failure" side of the 1 phase transition. For DeepSSRR (square points), we have made recovery problems harder by reducing δ and increasing ρ. The arrows between the square points and circular points in FIG4 (a) denote correspondence between problem instances in DeepSSRR and DeepInverse. TAB1 shows the average normalized MSE (NMSE) for the test signals. While DeepSSRR recovers the same signals from fewer measurements, it outperforms DeepInverse and the LASSO. DeepSSRR outperforms DeepInverse while having significantly fewer number of parameters (less than 70,000 vs. approximately 200,000 parameters). This is mainly due to the fact that DeepSSRR learns Φ instead of using a random Φ as is the case in DeepInverse and conventional CS. 
While training and test sets are the same, the configuration for DeepInverse (and LASSO) is (δ, ρ) = (0.7, 0.72) and for DeepSSRR is (δ, ρ) = (0.5, 1.003) which means we have given DeepSSRR a more challenging problem. As shown in FIG4, due to the extra parallelization scheme (i.e., rearrangement layer) convergence is significantly faster for DeepSSRR compared to DeepInverse. DeepSSRR outperforms the LASSO after only 4 training epochs while DeepInverse takes 138 epochs. This fast convergence has two major reasons: First, DeepSSRR has fewer number of parameters to learn. Second, DeepSSRR learns adaptive measurements (i.e., low-dimensional embedding) instead of using random measurements (i.e., random embedding). ≤ 0.01, where X (j) is the j-th sample,X (j) is the recovered signal from measurements of j-th sample, and I is the indicator function. We denote empirical successful recovery probability by P δ = 1 q q j=1 ϕ δ,j. In FIG4, our test samples are k-sparse where k = 34, and we have considered three different configurations: M = 64, 128, 256 that correspond to above, on, and below the 1 phase transition, respectively. As we can see in FIG4, DeepSSRR significantly outperforms LASSO when the problem configuration lies above (failure phase) or on the 1 phase transition and LASSO slightly outperforms when the problem configuration lies below the 1 phase transition (success phase). For a setting below the 1 phase transition, we expect 1 minimization to behave the same as 0 minimization. However, DeepSSRR should learn a transformation for transforming measurements back to the original signals. Furthermore, FIG4 shows the price we pay for using a linear low-dimensional embedding ΦX instead of a nonlinear one Φ(X). The main message of FIG4 (c) is that by using DeepSSRR we can have a significantly better phase transition compared to 1 -minimization. In this section we study another example of the compressive image recovery problem. The settings we have used in here is exactly the same as Section 3.2. FIG8 shows the reconstruction of the mandrill image (M N = 0.25). FIG8 shows the reconstruction of whole face and FIG8 shows the reconstruction of nose and cheeks. As we can see, although LDAMP slightly outperforms our method in FIG8, our method does a significantly better job in recovering the texture of nose and cheeks in FIG8. Not only our method outperforms LDAMP by 0.9 dB, but also it has a better visual quality and fewer artifacts (e.g. less over-smoothing). In this section we compare the running time of different algorithms. We consider the reconstruction of a 512 × 512 image with an undersampling ratio of M N = 0.25. Table 3 shows the comparison between different algorithms. We should note that authors in BID25 have used coded diffraction pattern in DAMP and LDAMP which simplifies the computational complexity of vector-matrix multiplications in DAMP and LDAMP to O(N log(N)) instead of O(M N). In addition, we should note that LDAMP uses filters of size 3 × 3 in its convolutional layers while we use filters of size 5 × 5 in the convolutional layers of our architecture. Table 3 shows that our method is almost 4 times faster than the LDAMP method.
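As a concrete reference for the success criterion used in FIG4(c), the sketch below computes the empirical recovery probability P_δ from per-sample NMSE values; the `recover` callable and the test signals are placeholders for whichever solver is being evaluated.

```python
# Sketch of the empirical recovery probability defined above: a test sample counts as
# "recovered" when its normalized MSE is at most 0.01, and P_delta is the fraction of
# recovered samples. `recover` is a placeholder for any solver (LASSO, DeepSSRR, ...).
import numpy as np

def nmse(x_hat, x_true):
    return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)

def empirical_recovery_probability(recover, Phi, X_test, tol=0.01):
    """X_test: (q, N) ground-truth signals; Phi: (M, N) measurement matrix."""
    successes = 0
    for x in X_test:
        x_hat = recover(Phi @ x)            # reconstruct from M measurements
        successes += nmse(x_hat, x) <= tol  # indicator phi_{delta, j}
    return successes / len(X_test)          # P_delta
```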
B1xVTjCqKQ
We use deep learning techniques to solve the sparse signal representation and recovery problem.
To select effective actions in complex environments, intelligent agents need to generalize from past experience. World models can represent knowledge about the environment to facilitate such generalization. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways of deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination. We efficiently learn behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.

Intelligent agents can achieve goals in complex environments even though they never encounter the exact same situation twice. This ability requires building representations of the world from past experience that enable generalization to novel situations. World models offer an explicit way to represent an agent's knowledge about the world in a parametric model, learned from experience, that can make predictions about the future. When the sensory inputs are high-dimensional images, latent dynamics models can abstract observations to predict forward in compact state spaces. Compared to predictions in image space, latent states have a small memory footprint and enable imagining thousands of trajectories in parallel. Learning effective latent dynamics models is becoming feasible through advances in deep learning and latent variable models. Behaviors can be derived from learned dynamics models in many ways. Often, imagined rewards are maximized by learning a parametric policy or by online planning. However, considering only rewards within a fixed imagination horizon results in shortsighted behaviors. Moreover, prior work commonly resorts to derivative-free optimization for robustness to model errors, rather than leveraging the analytic gradients offered by neural network dynamics models.

We present Dreamer, an agent that learns long-horizon behaviors from images purely by latent imagination. A novel actor critic algorithm accounts for rewards beyond the planning horizon while making efficient use of the neural network dynamics. For this, we predict state values and actions in the learned latent space, as summarized in Figure 1. The values optimize Bellman consistency for imagined rewards, and the policy maximizes the values by propagating their analytic gradients back through the dynamics. In comparison to actor critic algorithms that learn online or by experience replay, world models enable interpolating between past experience and offer analytic gradients of multi-step returns for efficient policy optimization.

Figure 2: Agent observations for 5 of the 20 control tasks used in our experiments. These pose a variety of challenges including contact dynamics, sparse rewards, many degrees of freedom, and 3D environments that exceed the difficulty of tasks previously solved through world models. The agent observes the images as 64 × 64 × 3 pixel arrays.

The key contributions of this paper are summarized as follows: • Learning long-horizon behaviors in imagination: Purely model-based agents can be shortsighted due to finite imagination horizons. We approach this limitation in latent imagination by predicting both actions and state values.
Training purely by latent imagination lets us efficiently learn the policy by propagating analytic gradients of the value function back through latent state transitions. • Empirical performance for visual control We pair Dreamer with three representation learning objectives to evaluate it on the DeepMind Control Suite with image inputs, shown in Figure 2. Using the same hyper parameters for all tasks, Dreamer exceeds existing model-based and model-free agents in terms of data-efficiency, computation time, and final performance. Reinforcement learning We formulate visual control as a partially observable Markov decision process (POMDP) with discrete time step t ∈ [1; T], continuous vector-valued actions a t ∼ p(a t | o ≤t, a <t) generated by the agent, and high-dimensional observations and scalar rewards o t, r t ∼ p(o t, r t | o <t, a <t) generated by the unknown environment. The goal is to develop an agent that maximizes the expected sum of rewards E p T t=1 r t. Figure 2 shows a selection of our tasks. Agent components The classical components of agents that learn in imagination are dynamics learning, behavior learning, and environment interaction . In the case of Dreamer, the behavior is learned by predicting hypothetical trajectories in the compact latent space of the world model. As outlined in Figure 3 and detailed in Algorithm 1, Dreamer performs the following operations throughout the agent's life time, either sequentially interleaved or in parallel: • Learn the latent dynamics model from the dataset of past experience to predict future rewards from actions and past observations. Any learning objective for the world model can be incorporated with Dreamer. We review existing methods for learning latent dynamics in Section 4. • Learn action and value models from predicted latent trajectories, as described in Section 3. The value model optimizes Bellman consistency for imagined rewards and the action model is updated by propagating gradients of value estimates back through the neural network dynamics. • Execute the learned action model in the world to collect new experience for growing the dataset. Latent dynamics Dreamer uses a latent dynamics model that consists of three components. The representation model encodes observations and actions to create continuous vector-valued model states s t with Markovian transitions (; ;). The transition model predicts future model states without seeing the corresponding observations that will cause them. The reward model predicts the rewards given the model states, The model mimics a non-linear Kalman filter , latent state space model, or HMM with real-valued states. However, it is conditioned on actions and predicts rewards, allowing the agent to imagine the outcomes of potential action sequences without executing them in the environment. Dreamer learns long-horizon behaviors in the compact latent space of a learned world model. For this, we propagate stochastic gradients of multi-step returns through neural network predictions of actions, states, rewards, and values using reparameterization. This section describes our core contribution. The latent dynamics define a Markov decision process (MDP;) that is fully observed since the compact model states s t are Markovian. We denote imagined quantities with τ as the time index. Imagined trajectories start at the true model states s t of observation sequences drawn from the agent's past experience. They follow predictions of the transition model, and a policy a τ ∼ q(a τ | s τ). 
The objective is to maximize expected imagined rewards E q ∞ τ =t γ τ −t r τ with respect to the policy. Action and value models Consider imagined trajectories with a finite horizon H. Dreamer uses an actor critic approach to learn behaviors that consider rewards beyond the horizon. We learn an action model and a value model in the latent space of the world model. The action model implements the policy and aims to predict actions that solve the imagination environment. The value model estimates the state values V(s τ) E q(·|sτ) t+H τ =t γ τ −t r τ for the action model, the expected sum of imagined rewards that it achieves in each state s τ, Action model: The action and value models are trained cooperatively as typical in policy iteration: the action model aims to maximize an estimate of the value, while the value model aims to match an estimate of the value that changes as the action model changes. We use dense neural networks for the action and the value model with parameters φ and ξ, respectively. The action model outputs a tanh-transformed Gaussian with sufficient statistics predicted by the neural network. This allows for reparameterized sampling that lets sampled actions depend deterministically on the neural network output, allowing to backpropagate analytic gradients through the sampling operation, Value estimation To learn the action and value models, we need to estimate the state values of imagined trajectories {s τ, a τ, r τ} t+H τ =t. These trajectories branch off of the model states s t of sequence batches drawn from the agent's dataset of experience and predict forward for the imagination horizon H using actions sampled from the action model. State values can be estimated in multiple Figure 4: Imagination horizons. We compare the final performance of Dreamer to learning an action model without value prediction and to online planning using PlaNet. Learning a state value model to estimate rewards beyond the imagination horizon makes Dreamer more robust to the horizon length. The agents use reconstruction for representation learning and an action repeat of R = 2. ways that trade off bias and variance , where the expectations are estimated with the imagined trajectories. V R simply sums the rewards from τ until the horizon and ignores rewards beyond it. This allows learning the action model without value model, an ablation we compare to in our experiments. V k N estimates rewards beyond k steps with the learned value model. Dreamer uses V λ, which computes an exponentially-weighted average of the estimates for different k to balance bias and variance. Figure 4 shows that learning a value function in imagination enables Dreamer to solve long-horizon tasks while being robust to the imagination horizon. The experimental details and on all tasks are described in Section 5. Learning objective To update the action and value models, we first compute the value estimates V λ (s τ) for states s τ along the imagined trajectories. The objective for the action model q φ (a τ | s τ) is to output actions that in state trajectories with high value estimates. The objective for the value model v ξ (s τ), in turn, is to regress the value estimates, The value model is simply updated to regress the targets, around which we stop the gradient as typical in temporal difference learning . The action model uses analytic gradients through the learned dynamics to maximize the value estimates. 
To understand this, we note that the value estimates depend on the reward and value predictions, which depend on the imagined states, which in turn depend on the imagined actions. Since these steps are all implemented as neural networks with reparameterized sampling, we analytically compute . The world model is fixed while learning the action and value models. Comparison to actor critic methods Agents using Reinforce gradients employ a value baseline to reduce gradient variance, such as A3C and PPO , while Dreamer backpropagates through the value model. This is similar to analytic actor critics , such as DDPG and SAC . However, these do not leverage gradients through the state transitions and only maximize immediate Q-values. MVE and STEVE extend these to multi-step Q-learning using learned dynamics to help rewards propagate faster into the value estimates. We simply predict state values, which is sufficient for policy optimization since we backpropagate through the dynamics. We empirically compare learning action and value models from V λ, learning the action model from V R which does not require a value model, and online planning in our experiments in Learning behaviors in imagination requires a world model that generalizes well. We focus on latent dynamics models that predict forward in a compact latent space, facilitating long-term predictions and allowing to imagine thousands of trajectories in parallel. Several objectives for learning representations for control have been proposed (; ; ; . We review three approaches for learning representations to use with Dreamer: image reconstruction, contrastive estimation, and reward prediction. Reward prediction Latent imagination requires a representation model p(s t | s t−1, a t−1, o t), transition model q(s t | s t−1, a t−1,), and reward model q(r t | s t), as described in Section 2. In principle, this could be achieved by simply learning to predict future rewards given actions and past observations . Given a large and diverse dataset, such representations should be sufficient for solving a given control problem. However, while the agent is still exploring and especially when the reward signal is limited, additionally learning about observations is likely to improve the world model . Representation learning The world model is learned from sequences {(o t, a t, r t)} T t=1 drawn from the agent's dataset of experience. To learn representations that generalize, the model states s 1:T should be predictive of observations o 1:T and rewards r 1:T while not overfitting to individual examples in the dataset. At a high level, this is formalized by an information bottleneck , max The first terms encourages mutual information between the model states and the observations and rewards. The second term penalizes information between model states and dataset indices i 1:T by an amount 0 ≤ β ≤ 1. The dataset indices relate to the images by a Dirac delta p(o t | i t) as in. The information bottleneck poses the representation learning problem in a generic way and provides a common view on pixel reconstruction and contrastive estimation. While the two information terms are difficult to estimate, they are easy to bound an optimize . Reconstruction We first describe the world model used by PlaNet , shown in Figure 3a. It bounds the objective by predicting observations and rewards from the model states, where the expectation samples sequences from the dataset and states from the representation model. 
The bound includes a reconstruction term, a reward prediction term, and a KL regularizer. We refer to Appendix C for the derivation. The bound uses four distributions that we implement as neural networks and optimize jointly to increase the bound, Representation model: Episode Return n/a n/a n/a n/a Dreamer (2e6) PlaNet (2e6) D4PG (1e9 steps) A3C (1e9 steps, proprio) Figure 6: Performance comparison to existing methods. Dreamer exhibits the data-efficiency of PlaNet while exceeding the asymptotic performance of the best model-free agents. After 2 * 10 6 environment steps, Dreamer reaches an average performance of 802 across tasks, compared to PlaNet at 312 and the top model-free D4PG agent at 786 after 10 9 steps. Results are averages over 3 seeds. We implement the transition model as recurrent state space model (RSSM;), the representation model by combining the RSSM with a convolutional neural network (CNN;) applied to the image observation, the observation model as a transposed CNN, and the reward model as dense network. The combined parameter vector θ is updated by reparameterization gradients . Contrastive estimation Accurately predicting pixels in visually complex environments can be a challenging task. We can avoid reconstruction by instead predicting model states . While the observation marginal above was a constant, we now face the state marginal. Using the InfoNCE bound (Gutmann and Hyvärinen, 2010;) as described in Appendix C, where o q(s t | o) estimates the marginal by summing over observations o of the current sequence batch. Intuitively, q(s t | o t) makes the state predictable from the current image and ln o q(s t | o) keeps it diverse to prevent collapse. Instead of the observation model, the bound uses a state model, State model: We implement the state model as a CNN and again optimize the bound with respect to the combined parameter vector θ using reparameterization gradients. While avoiding pixel prediction, the amount of information this bound can extract efficiently is limited . We empirically compare reconstruction, contrastive, and reward objectives in our experiments in Figure 8. Visual control tasks We evaluate Dreamer on 20 continuous control tasks with image observations of the DeepMind Control Suite , illustrated in Figure 2. These tasks pose a variety of challenges, including partial observability, sparse rewards, contact dynamics, and 3D environments. We selected the tasks on which report non-zero performance from image inputs. Agent observations are images of shape 64 × 64 × 3, actions range from 1 to 12 dimensions, rewards are between 0 and 1, episodes contain 1000 steps, and initial states are randomized. Visualizations of our agent are available at https://dreamrl.github.io. Implementation All experiments used a single Nvidia V100 GPU and 10 CPU cores per training run. Our implementation uses TensorFlow Probability and will be open sourced. The training time for our implementation of Dreamer is 10 hours per 10 6 environment steps without parallelization, compared to 17 hours for online planning using PlaNet, and 24 hours for D4PG. We use the same hyper parameters across all tasks including a fixed action repeat of R = 2, as detailed in Appendix B. The world models are learned by reconstruction unless noted otherwise. 
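For intuition, here is a schematic PyTorch-style sketch of the per-step reconstruction objective described in Section 4 (reconstruction term, reward term, and KL regularizer); the module names, Gaussian heads, and free-nats clipping value are simplified placeholders rather than the exact RSSM implementation.

```python
# Schematic sketch of the reconstruction world-model loss for a single time step,
# assuming Gaussian observation/reward heads and diagonal-Gaussian latent states.
# `posterior` plays the role of the representation model p(s_t | s_{t-1}, a_{t-1}, o_t)
# and `prior` the transition model q(s_t | s_{t-1}, a_{t-1}); both are placeholders.
import torch
import torch.distributions as td

def world_model_loss(prior, posterior, obs_dist, rew_dist, obs, rew, free_nats=3.0):
    """prior/posterior: td.Normal over the latent state; obs_dist/rew_dist: predictive
    distributions computed from a posterior sample; obs, rew: ground-truth targets."""
    recon = obs_dist.log_prob(obs).sum()           # reconstruction term ln q(o_t | s_t)
    reward = rew_dist.log_prob(rew).sum()          # reward term ln q(r_t | s_t)
    kl = td.kl_divergence(posterior, prior).sum()  # KL regularizer
    kl = torch.clamp(kl, min=free_nats)            # clip below "free nats" as in PlaNet
    return -(recon + reward - kl)                  # negative bound, to be minimized
```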
Baseline methods We compare Dreamer to several baselines. The current best reported performance on the considered tasks is by D4PG, an improved variant of DDPG that uses distributed experience collection, distributional Q-learning, multi-step returns, and prioritized experience replay. PlaNet learns the same world model as Dreamer and selects actions using online planning instead of learning an action model. We include the reported numbers for D4PG and re-run PlaNet with R = 2 for a fair comparison.

Performance comparison To evaluate the performance of Dreamer, we compare with state-of-the-art reinforcement learning agents. The results are summarized in Figure 6. With an average score of 802 across tasks after 2 × 10^6 environment steps, Dreamer exceeds the performance of the strong model-free D4PG agent, which achieves an average of 786 within 10^9 environment steps. At the same time, Dreamer inherits the data-efficiency of PlaNet, confirming that the learned world model can help to generalize from small amounts of experience. The empirical success of Dreamer shows that learning behaviors by latent imagination can outperform top methods based on experience replay.

Long-horizon behavior To investigate its ability to learn long-horizon behaviors, we compare Dreamer to alternatives for deriving behaviors from the world model at various horizon lengths. For this, we learn an action model to maximize imagined rewards without a value model and compare to online planning using PlaNet. Figure 4 shows the final performance for different imagination horizons, confirming that the value model makes Dreamer more robust to the horizon and results in high performance even for short horizons. Performance curves for all 20 tasks with a horizon of 20 are shown in Appendix D, where Dreamer outperforms the alternatives on 15 of 20 tasks, with 3 ties.

Representation learning Dreamer can be used with any dynamics model that predicts future rewards given actions and past observations. Since the representation learning objective is orthogonal to our algorithm, we compare three natural choices described in Section 4: pixel reconstruction, contrastive estimation, and pure reward prediction. Figure 8 shows clear differences in task performance for the different representation learning approaches, with pixel reconstruction outperforming contrastive estimation on most tasks. This suggests that future improvements in representation learning are likely to transfer over to task performance with Dreamer. Reward prediction alone was not sufficient to solve any of the tasks in our experiments. Further ablations are included in the appendix of the paper.

Prior work learns latent dynamics for visual control by derivative-free policy learning or online planning, augments model-free agents with multi-step predictions, or uses analytic gradients of Q-values or multi-step rewards, often for low-dimensional tasks. In comparison, Dreamer uses analytic gradients to efficiently learn long-horizon behaviors for visual control purely by latent imagination.

Control with latent dynamics E2C and RCE embed images to predict forward in a compact space to solve simple tasks. World Models learn latent dynamics in a two-stage process to evolve linear controllers in imagination. PlaNet learns them jointly and solves visual locomotion tasks by latent online planning. Similarly, SOLAR solves robotic tasks by guided policy search in latent space. I2A hands imagined trajectories to a model-free policy, while other approaches learn belief representations to accelerate model-free agents.
Imagined multi-step returns VPN , MVE , and STEVE learn dynamics for multi-step Q-learning from a replay buffer. AlphaGo combines predictions of actions and state values with planning, assuming access to the true dynamics. Also assuming access to the dynamics, POLO plans to explore by learning a value ensemble. PETS , VisualMPC , and PlaNet plan online using derivative-free optimization, and POPLIN improves online planning by self-imitation. Planning with neural network gradients was shown on small problems but has been challenging to scale . Analytic value gradients DPG , DDPG, and SAC leverage gradients of learned immediate action values to learn a policy by experience replay. SVG reduces the variance of model-free on-policy algorithms by analytic value gradients of one-step model predictions. ME-TRPO accelerates learning of a model-free agent via gradients of predicted rewards for proprioceptive inputs. DistGBP directly uses model gradients for online planning in simple tasks. We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination. For this, we propose a novel actor critic method that optimizes a parametric policy by propagating analytic gradients of multi-step values back through latent neural network dynamics. Dreamer outperforms previous approaches in data-efficiency, computation time, and final performance on a variety of challenging continuous control tasks from image inputs. While our approach compares favourably on these tasks, future research on learning representations is likely needed to scale latent imagination to visually more complex environments. A DETAILED ALGORITHM Update θ to predict rewards using representation learning. Imagine trajectories {(s τ, a τ)} t+H τ =t from each s t using action model. Predict rewards E q θ (r τ | s τ) and values v ξ (s τ). Compute value estimates V λ (s τ) via Equation 6. Update φ according to maximize t+H τ =t V λ (s τ) by gradient ascent. Update ξ to minimize Compute action a t ∼ q φ (a t | s t) with the action model. Add exploration noise to action. Model components We use the convolutional encoder and decoder networks from , the RSSM of , and implement all other functions as three dense layers of size 300 with ELU activations . Distributions in latent space are 30-dimensional diagonal Gaussians. The action model outputs an unconstrained mean and softplus standard deviation for the Normal distribution that is then transformed using tanh. Learning updates We draw batches of 50 sequences of length 50 to train the world model, value model, and action model models using Adam with learning rates 10 −3, 3 * 10 −4, 3 * 10 −4, respectively. We do not scale the KL regularizers (β = 1) but clip them below 3 free nats as in PlaNet. The imagination horizon is H = 20 and the same trajectories are used to update both action and value models. We use a slow moving value network that is updated every 100 gradient steps to compute the V λ value estimates with γ = 0.99 and λ = 0.95. Environment interaction The dataset is initialized with C = 5 episodes collected using random actions. We iterate between 100 training steps and collecting 1 episode by executing the predicted mode action with Normal(0, 0.3) exploration noise. Instead of manually selecting the action repeat for each environment as in and, we fix it to 2 for all environments. See Figure 11 for an assessment of the robustness to different action repeat values. 
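As a concrete reference for the value targets used in the updates above, here is a small sketch of computing the V_λ estimates from imagined rewards and values by backward recursion; it assumes simple tensors for a single imagined trajectory and is only meant to illustrate Equation 6, not the exact batched implementation.

```python
# Sketch of the V_lambda targets for one imagined trajectory of horizon H.
# rewards[k] and values[k] are the predicted reward and state value at imagined step k;
# values[-1] bootstraps beyond the imagination horizon with the learned value model.
import torch

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """rewards: (H,) imagined rewards; values: (H + 1,) predicted state values."""
    H = rewards.shape[0]
    targets = torch.zeros_like(rewards)
    last = values[-1]                      # bootstrap at the horizon
    for k in reversed(range(H)):
        last = rewards[k] + gamma * ((1 - lam) * values[k + 1] + lam * last)
        targets[k] = last
    return targets
```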
The information bottleneck objective defined in Equation 9 for latent dynamics models is, For the generative objective, we lower bound the first term using the non-negativity of the KL divergence and drop the marginal data probability as it does not depend on the representation model, For the contrastive objective, we subtract the constant marginal probability of the data under the variational encoder, apply Bayes rule, and use the InfoNCE mini-batch bound , E ln q(o t | s t) + ln q(r t | s t) + = E ln q(o t | s t) − ln q(o t) + ln q(r t | s t) = E ln q(s t | o t) − ln q(s t) + ln q(r t | s t) ≥ E ln q(s t | o t) − ln o q(s t | o) + ln q(r t | s t). For the second term, we use the non-negativity of the KL divergence to obtain an upper bound, I(s 1:T ; i 1:T | a 1:T) = E p(o 1:T,r 1:T,s 1:T,a 1:T,i 1:T) t ln p(s t | s t−1, a t−1, i t) − ln p(s t | s t−1, a t−1) = E t ln p(s t | s t−1, a t−1, o t) − ln p(s t | s t−1, a t−1) ≤ E t ln p(s t | s t−1, a t−1, o t) − ln q(s t | s t−1, a t−1) = E t KL p(s t | s t−1, a t−1, o t) q(s t | s t−1, a t−1). This lower bounds the objective. Comparison of action selection schemes on the continuous control tasks of the DeepMind Control Suite from pixel inputs. The lines show mean scores over environment steps and the shaded areas show the standard deviation across 3 seeds. We compare Dreamer that learns both actions and values in imagination, to only learning actions in imagination, and to online planning using CEM without policy learning. The baselines include the top model-free algorithm D4PG, the common A3C agent, and the hybrid SLAC agent.
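To make the contrastive bound derived above concrete, here is a schematic sketch of the per-step NCE term ln q(s_t | o_t) − ln Σ_o q(s_t | o) evaluated over a batch of observations; treating q(s | o) as a unit-variance diagonal Gaussian produced by the state model is a simplifying assumption, not the paper's exact implementation.

```python
# Schematic sketch of the contrastive (InfoNCE-style) term for one time step and a
# batch of B observations. `obs_embeddings` are the means of q(s | o) produced by the
# state model; `states` are latent states s_t from the representation model.
import torch
import torch.distributions as td

def contrastive_term(states, obs_embeddings):
    """states: (B, D) latent states; obs_embeddings: (B, D) means of q(s | o)."""
    dist = td.Normal(obs_embeddings.unsqueeze(0), 1.0)   # (1, B, D) means, unit variance
    logp = dist.log_prob(states.unsqueeze(1)).sum(-1)    # (B, B): log q(s_i | o_j)
    positive = logp.diagonal()                           # ln q(s_t | o_t)
    negative = torch.logsumexp(logp, dim=1)              # ln sum_o q(s_t | o)
    return (positive - negative).mean()                  # maximize this term
```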
S1lOTC4tDS
We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination using analytic value gradients.
Transfer reinforcement learning (RL) aims at improving the learning efficiency of an agent by exploiting knowledge from other source agents trained on relevant tasks. However, it remains challenging to transfer knowledge between different environmental dynamics without having access to the source environments. In this work, we explore a new challenge in transfer RL, where only a set of source policies collected under unknown, diverse dynamics is available for learning a target task efficiently. To address this problem, the proposed approach, MULTI-source POLicy AggRegation (MULTIPOLAR), comprises two key techniques. We learn to aggregate the actions provided by the source policies adaptively to maximize the target task performance. Meanwhile, we learn an auxiliary network that predicts residuals around the aggregated actions, which ensures the target policy's expressiveness even when some of the source policies perform poorly. We demonstrate the effectiveness of MULTIPOLAR through an extensive experimental evaluation across six simulated environments ranging from classic control problems to challenging robotics simulations, under both continuous and discrete action spaces.

We envision a future scenario where a variety of robotic systems, each trained or manually engineered to solve a similar task, provide their policies for a new robot to learn a relevant task quickly. For example, imagine various pick-and-place robots working in factories all over the world. Depending on the manufacturer, these robots will differ in their kinematics (e.g., link length, joint orientations) and dynamics (e.g., link mass, joint damping, friction, inertia). They could provide their policies to a new robot, even though their dynamics factors, on which the policies are implicitly conditioned, are not typically available. Moreover, we cannot rely on a history of their individual experiences, as it may be unavailable due to a lack of communication between factories or prohibitively large dataset sizes. In such scenarios, we argue that a key technique to develop is the ability to transfer knowledge from a collection of robots to a new robot quickly, only by exploiting their policies while being agnostic to their different kinematics and dynamics, rather than collecting a vast amount of samples to train the new robot from scratch.

The scenario illustrated above poses a new challenge in transfer learning for reinforcement learning (RL). Formally, consider multiple instances of a single environment that differ in their state transition dynamics, e.g., independent ant robots with different leg designs in Figure 1, which reach different locations by executing the same walking actions. These source agents, each interacting with one of the environment instances, provide their deterministic policy to a new target agent in another environment instance. Then, our problem is: can we efficiently learn the policy of a target agent given only the collection of source policies?

Figure 2: Overview of MULTIPOLAR. We formulate a target policy π target as the sum of 1) the adaptive aggregation F agg of deterministic actions from the source policies L and 2) the auxiliary network F aux for predicting residuals around F agg.

Note that information about source environmental dynamics, such as the exact state transition distributions and the history of environmental states, will not be visible to the target agent, as mentioned above.
Also, the source policies are neither trained nor hand-engineered for the target environment instance, and therefore not guaranteed to work optimally and may even fail . These conditions prevent us from adopting existing work on transfer RL between different environmental dynamics, as they require access to source environment instances or their dynamics for training a target policy (e.g., ; ; ;). Similarly, meta-learning approaches (; ;) cannot be used here because they typically train an agent on a diverse set of tasks (i.e., environment instances). Also, existing techniques that utilize a collection of source policies, e.g., policy reuse frameworks (Fernández & ; ;) and option frameworks (; ;), are not a promising solution because, to our knowledge, they assume source policies have the same environmental dynamics but have different goals. As a solution to the problem, we propose a new transfer RL approach named MULTI-source POLicy AggRegation (MULTIPOLAR). As shown in Figure 2, our key idea is twofold; 1) In a target policy, we adaptively aggregate the deterministic actions produced by a collection of source policies. By learning aggregation parameters to maximize the expected return at a target environment instance, we can better adapt the aggregated actions to unseen environmental dynamics of the target instance without knowing source environmental dynamics nor source policy performances. 2) We also train an auxiliary network that predicts a residual around the aggregated actions, which is crucial for ensuring the expressiveness of the target policy even when some source policies are not useful. As another notable advantage, the proposed MULTIPOLAR can be used for both continuous and discrete action spaces with few modifications while allowing a target policy to be trained in a principled fashion. Similar to;;;; , our method assumes that the environment structure (state/action space) is identical between the source and target environments, while dynamics/kinematics parameters are different. This assumption holds in many real-world applications such as in sim-to-real tasks , industrial insertion tasks (different dynamics comes from the differences in parts), and wearable robots (with users as dynamics). We evaluate MULTIPOLAR in a variety of environments ranging from classic control problems to challenging robotics simulations. Our experimental demonstrate the significant improvement of sample efficiency with the proposed approach, compared to baselines that trained a target policy from scratch or from a single source policy. We also conducted a detailed analysis of our approach and found it works well even when some of the source policies performed poorly in their original environment instance. Main contributions: a new transfer RL problem that leverages multiple source policies collected under diverse environmental dynamics to train a target policy in another dynamics, and MULTIPOLAR, a simple yet principled and effective solution verified in our extensive experiments. Reinforcement Learning We formulate our problem under the standard RL framework , where an agent interacts with its environment modeled by a Markov decision process (MDP). An MDP is represented by the tuple M = (ρ 0, γ, S, A, R, T) where ρ 0 is the initial state distribution and γ is a discount factor. At each timestep t, given the current state s t ∈ S, the agent executes an action a t ∈ A based on its policy π(a t | s t ; θ) that is parameterized by θ. 
The environment returns a reward R(s t, a t) ∈ R and transitions to the next state s t+1 based on the state transition distribution T (s t+1 | s t, a t). In this framework, RL aims to maximize the expected return with respect to the policy parameters θ. In this work, we consider K instances of the same environment that differ only in their state transition dynamics. We model each environment instance by an indexed MDP: where no two state transition distributions T i, T j; i = j are identical. We also assume that each T i is unknown when training a target policy, i.e., agents cannot access the exact form of T i nor a collection of states sampled from T i. Source Policies For each of the K environment instances, we are given a deterministic source policy µ i: S → A that only maps states to actions. Each source policy µ i can be either parameterized (e.g., learned from interacting with the environment modeled by M i) or non-parameterized (e.g., heuristically designed by humans). Either way, we assume no prior knowledge about µ i is available for a target agent, such as their representations or original performances, except that they were acquired in M i with an unknown T i. Problem Statement Given the set of source policies L = {µ 1, . . ., µ K}, our goal is to train a new target agent's policy π target (a t | s t ; L, θ) in a sample efficient fashion, where the target agent interacts with another environment instance M target = (ρ 0, S, A, R, T target) and T target is not necessarily identical to T i (i = 1 . . ., K). As shown in Figure 2, with the Multi-Source Policy Aggregation (MULTIPOLAR), we formulate a target policy π target using a) the adaptive aggregation of deterministic actions from the set of source policies L, and b) the auxiliary network predicting residuals around the aggregated actions. We first present our method for the continuous action space, and then extend it to the discrete space. Adaptive Aggregation of Source Policies Let us denote by a (i) t = µ i (s t) the action predicted deterministically by source policy µ i given the current state s t. For the continuous action space, a D is a D-dimensional real-valued vector representing D actions performed jointly in each timestep. For the collection of source policies L, we derive the matrix of their deterministic actions: The key idea of this work is to aggregate A t adaptively in an RL loop, i.e., to maximize the expected return. This adaptive aggregation gives us a "baseline" action that could introduce a strong inductive bias in the training of a target policy, without knowing source environmental dynamics T i. More specifically, we define the adaptive aggregation function F agg: S → A that produces the baseline action based on the current state s t as follows: where θ agg ∈ R K×D is a matrix of trainable parameters, is the element-wise multiplication, and 1 K is the all-ones vector of length K. θ agg is neither normalized nor regularized, and can scale each action of each policy independently. This means that we do not merely adaptively interpolate action spaces, but more flexibly emphasize informative source actions while suppressing irrelevant ones. Predicting Residuals around Aggregated Actions Moreover, we learn auxiliary network F aux: S → A jointly with F agg, to predict residuals around the aggregated actions. F aux is used to improve the target policy training in two ways. 
1) If the aggregated actions from F agg are already useful in the target environment instance, F aux will correct them for a higher expected return. 2) Otherwise, F aux learns the target task while leveraging F agg as a prior to have a guided exploration process. Any network could be used for F aux as long as it is parameterized and fully differentiable. Finally, the MULTIPOLAR function is formulated as: where θ aux denotes a set of trainable parameters for F aux. Note that the idea of predicting residuals for a source policy has also been presented by;;. The main difference here is that, while these works just add raw action outputs provided from a single hand-engineered source policy, we adaptively aggregate actions from multiple source policies in order to obtain a more flexible and canonical representation. indicates that the j-th action is to be executed. Following Eqs. and, the output of F (s t ; L, θ agg, θ aux) can be viewed as D-dimensional un-normalized action scores, from which we can sample a discrete action after normalizing it by the softmax function. We aim to empirically demonstrate the sample efficiency of a target policy trained with MULTIPO-LAR (denoted by "MULTIPOLAR policy"). To complete the experiments in a reasonable amount of time, we set the number of source policies to be K = 4 unless mentioned otherwise. Moreover, we investigate the factors that affect the performance of MULTIPOLAR. To ensure fair comparisons and reproducibility of experiments, we followed the guidelines introduced by and François- for conducting and evaluating all of our experiments. Baseline Methods To show the benefits of leveraging source policies, we compared our MULTI-POLAR policy to the standard multi-layer perceptron (MLP) trained from scratch, which is typically used in RL literature (; François-). As another baseline, we also used MULTIPOLAR with K = 1, which is an extension of residual policy learning (; ;) (denoted by "RPL") with adaptive residuals as well as the ability to deal with both continuous and discrete action spaces. We stress here that the existing transfer RL or meta RL approaches that train a universal policy network agnostic to the environmental dynamics, such as; , cannot be used as a baseline since they require a policy to be trained on a distribution of environment instances, which is not possible in our problem setting. Also, other techniques using multiple source policies, such as policy reuse frameworks, are not applicable because their source policies should be collected under the target environmental dynamics. Environments To show the general effectiveness of the MULTIPOLAR policy, we conducted comparative evaluations of MULTIPOLAR on the following six OpenAI Gym environments: Roboschool Hopper, Roboschool Ant, Roboschool InvertedPendulumSwingUp, Acrobot, CartPole, and LunarLander. We chose these six environments because 1) the parameterization of their dynamics and kinematics is flexible enough, 2) they cover discrete action space (Acrobot and CartPole) as well as continuous action space, and 3) they are samples of three distinct categories of OpenAI Gym environments, namely Box2d, Classic Control, and Roboschool. Experimental Procedure For each of the six environments, we first created 100 environment instances by randomly sampling the dynamics and kinematics parameters from a specific range. 
For example, these parameters in the Hopper environment were link lengths, damping, friction, armature, and link mass 1 Then, for each environment instance, we trained an MLP policy. The trained MLP policies were used in two ways: a) the baseline MLP policy for each environment instance, and b) a pool of 100 source policy candidates from which we sample K of them to train MULTIPOLAR policies and one of them to train RPL policies 2. Specifically, for each environment instance, we trained three MULTIPOLAR and three RPL policies with distinct sets of source policies selected randomly from the candidate pool. The learning procedure explained above was done three times with fixed different random seeds to reduce variance in due to stochasticity. As a , for each of the six environments, we had 100 environment instances × 3 random seeds = 300 experiments for MLP and 100 environment instances × 3 choices of source policies × 3 random seeds = 900 experiments for RPL and MULTIPOLAR. The aim of this large number of experiments is to obtain correct insights into the distribution of performances . Due to the large number of experiments for all the environments, our detailed analysis and ablation study of MULTIPOLAR components were conducted with only Hopper, as its sophisticated second-order dynamics plays a crucial role in agent performance . Implementation Details All the experiments were done using the Stable Baselines implementation of learning algorithms as well as its default hyperparameters and MLP network architecture for each environment (see Appendix A.1 for more details). Based on the performance of learning algorithms reported in the , all the policies were trained with Soft Actor-Critic in the LunarLander environment and with Proximal Policy Optimization in the rest of the environments. For fair comparisons, in all experiments, auxiliary network F aux had an identical architecture to that of the MLP. Therefore, the only difference between MLP and MULTIPOLAR was the aggregation part F agg, which made it possible to evaluate the contribution of transfer learning based on adaptive aggregation of source policies. Also, we avoided any random seed optimization since it has been shown to alter the policies' performance . Evaluation Metric Following the guidelines of , to measure sampling efficiency of training policies, i.e., how quick the training progresses, we used the average episodic reward over a various number of training samples. Also, to ensure that higher average episodic reward is representative of better performance and to estimate the variation of it, we used the sample bootstrap method to estimate statistically relevant 95% confidence bounds of the of our experiments. Across all the experiments, we used 10K bootstrap iterations and the pivotal method. Further details on evaluation method can be found in Appendix A.3. Sample Efficiency of MULTIPOLAR Figure 3 and Table 1 clearly show that on average, in all the environments, MULTIPOLAR outperformed baseline policies in terms of sample efficiency and sometimes the final episodic reward 3. For example, in Hopper over 2M training samples, MULTI-POLAR with K = 4 achieved a mean of average episodic reward about three times higher than MLP (i.e., training from scratch) and about twice higher than RPL (i.e., using only a single source policy). It is also noteworthy that MULTIPOLAR with K = 4 had on par or better performance than RPL, which indicates the effectiveness of leveraging multiple source policies 4. 
Figure 7 in Appendix, shows the individual average learning curve for each of the instances of Roboschool environments. Ablation Study To demonstrate the importance of each component of MULTIPOLAR, we evaluated the following degraded versions: θ agg fixed to 1, which just averages the deterministic actions from the source policies without adaptive weights (similar to the residual policy learning methods that used raw action outputs of a source policy), and F aux learned independent of s t, which replaces the state-dependent MLP with an adaptive "placeholder" parameter vector making actions just a linear combination of source policy outputs. As shown in Table 2, the full version of MULTIPOLAR significantly outperformed both of the degraded versions, suggesting that the adaptive aggregation and predicting residuals are both critical. Figure 4 illustrates an example of the histogram of final episodic reward (average rewards of the last 100 training episodes) for the source policy candidates obtained in the Hopper environment. As shown in the figure, the source policies were diverse in terms of the performance on their original environment instances 5. In this setup, we investigate the effect of source policies performances on MULTIPOLAR sample efficiency. We created two separate pools of source policies, where one contained only high-performing and the other only lowperforming source policies 6. Table 3 summarizes the of sampling source policies from these pools (4 high, 2 high & 2 low, and 4 low performances) and compares them to the original MULTIPOLAR (shown as 'Random') also reported in Table 1. Not surprisingly, MULTIPOLAR performed the best when all the source policies were sampled from the highperformance pool. However, we emphasize that such highquality policies are not always available in practice, due to the variability of how they are learned or hand-crafted under their own environment instance. Figure 6 in Appendix B.1 illustrates that MULTIPOLAR can successfully learn to suppress the useless low-performing source policies. Effect of Number of Source Policies Finally, we show how the number of source policies contributes to MULTIPOLAR's sample efficiency in Table 4. Specifically, we trained MULTIPOLAR policies up to K = 16 to study how the mean of average episodic rewards changes. The monotonic performance improvement over K (for K ≤ 16), is achieved at the cost of increased training and inference time. In practice, we suggest balancing this speed-performance trade-off by using as many source policies as possible before reaching the inference time limit required by the application. Our work is broadly categorized as an instance of transfer RL , in which a policy for a target task is trained using information collected from source tasks. In this section, we highlight how our work is different from the existing approaches and also discuss the current limitations as well as future directions. Transfer between Different Dynamics There has been very limited work on transferring knowledge between agents in different environmental dynamics. As introduced briefly in Section 1, some methods require training samples collected from source tasks. These sampled experiences are then used for measuring the similarity between environment instances (; ;) or for conditioning a target policy to predict actions . Alternative means to quantify the similarity is to use a full specification of MDPs or environmental dynamics. 
In contrast, the proposed MULTI-POLAR allows the knowledge transfer only through the policies acquired from source environment instances, which is beneficial when source and target environments are not always connected to exchange information about their environmental dynamics and training samples. Leveraging Multiple Policies The idea of utilizing multiple source policies can be found in the literature of policy reuse frameworks (Fernández & ; ; ; ;). The basic motivation behind these works is to provide "nearly-optimal solutions" for short-duration tasks by reusing one of the source policies, where each source would perform well on environment instances with different rewards (e.g., different goals in maze tasks). In our problem setting, where environmental dynamics behind each source policy are different, reusing a single policy without an adaptation is not the right approach, as described in and also demonstrated in our experiment. Another relevant idea is hierarchical RL (; ;) that involves a hierarchy of policies (or action-value functions) to enable temporal abstraction. In particular, option frameworks (; ;) make use of a collection of policies as a part of "options". However, they assumed all the policies in the hierarchy to be learned in a single environment instance. Another relevant work along this line of research is , which meta-learns a hierarchy of multiple sub-policies by training a master policy over the distribution of tasks. Nevertheless, hierarchical RL approaches are not useful for leveraging multiple source policies each acquired under diverse environmental dynamics. Learning Residuals in RL Finally, some recent works adopt residual learning to mitigate the limited performance of hand-engineered policies (; ;). We are interested in a more extended scenario where various source policies with unknown performances are provided instead of a single sub-optimal policy. Also, these approaches focus only on RL problems for robotic tasks in the continuous action space, while our approach could work on both of continuous and discrete action spaces in a broad range of environments. Limitations and Future Directions Currently, our work has several limitations. First, MULTI-POLAR may not be scalable to a large number of source policies, as its training and testing times will increase almost linearly with the number of source policies. One possible solution for this issue would be pre-screening source policies before starting to train a target agent, for example, by testing each source on the target task and taking them into account in the training phase only when they are found useful. Moreover, our work assumes source and target environment instances to be different only in their state transition distribution. An interesting direction for future work is to involve other types of environmental differences, such as dissimilar rewards and state/action spaces. We presented a new problem setting of transfer RL that aimed to train a policy efficiently using a collection of source policies acquired under diverse environmental dynamics. We demonstrated that the proposed MULTIPOLAR is, despite its simplicity, a principled approach with high training sample efficiency on a variety of environments. Our transfer RL approach is advantageous when one does not have access to a distribution of diverse environmental dynamics. Future work will seek to adapt our approach to more challenging domains such as a real-world robotics task. 
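To make the target-policy construction of Section 3 concrete, below is a minimal PyTorch-style sketch of the MULTIPOLAR policy body: the adaptive aggregation of K source actions plus a residual network. The module sizes, the auxiliary MLP, and the single-state (non-batched) handling are illustrative placeholders; the output is treated as a raw action (continuous case) or as unnormalized scores to be softmaxed (discrete case).

```python
# Minimal sketch of the MULTIPOLAR policy body F(s) = F_agg(s) + F_aux(s).
# `source_policies` is a list of K frozen callables mapping a state to a deterministic
# action vector of dimension D; theta_agg is the trainable K x D aggregation matrix,
# initialized to all ones as in Appendix A.1. The MLP for F_aux is a placeholder.
import torch
import torch.nn as nn

class Multipolar(nn.Module):
    def __init__(self, source_policies, state_dim, action_dim, hidden=64):
        super().__init__()
        self.sources = source_policies                 # frozen callables, not trained
        self.theta_agg = nn.Parameter(torch.ones(len(source_policies), action_dim))
        self.f_aux = nn.Sequential(                    # residual network F_aux
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        with torch.no_grad():                          # source actions are inputs only
            A = torch.stack([torch.as_tensor(mu(state), dtype=torch.float32)
                             for mu in self.sources])  # (K, D) matrix of source actions
        f_agg = (self.theta_agg * A).sum(dim=0)        # adaptive aggregation, Eq. (2)
        return f_agg + self.f_aux(state)               # action (or discrete action scores)
```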
In this section, we present all the experimental details for the six environments we used. Note that we did not do any hyperparameter-tuning but followed the default parameters of. We used the Roboschool implementation of Hopper, Ant, and InvertedPendulumSwingup since they are based on an open-source engine, which makes it possible for every researcher to reproduce our experiments. To run our experiments in parallel, we used GNU Parallel tool . A.1 HYPERPARAMETERS Tables 5 and 6 summarize all the hyperparameters used for experiments on each environment. As done by , to have a successful training, rewards and input observations are normalized using their running average and standard deviation for all the environments except CartPole and LunarLander. Also, in all of the experiments, θ agg is initialized to be the all-ones matrix. Tables 7, 8, 9, 10, 11 and 12. We defined these sampling ranges such that the ing environments are stable enough for successfully training an MLP policy. To do so, we trained MLP policies across wide ranges of environmental parameters and chose the ranges in which the policy converged. In this section, we explain how we calculated the mean of average episodic rewards in Tables 1, 2, 3, and 4, over a specific number of training samples (the numbers at the header of the tables e.g., 25K, 50K, 75K, and 100K for the CartPole) which we denote by T in what follows. For each experiment in an environment instance, we computed the average episodic reward by taking the average of the rewards over all the episodes the agent played from the beginning of the training until collecting T number of training samples. Then we collected the computed average episodic rewards of all the experiments, i.e., all the combinations of three random seeds, three random sets of source policies (for RPL and MULTIPOLAR), and 100 target environment instances. Finally, we used the sample bootstrap method to estimate the mean and the 95% confidence bounds of the collected average episodic rewards. We used the Facebook Boostrapped implementation: https://github.com/facebookincubator/bootstrapped. To generate environment instances, we uniformly sampled the dynamics and kinematics parameters from the ranges defined in Section A.2. Figure 5 illustrates the histograms of the final episodic rewards of source policies on the original environment instances in which they were acquired. Figure 6 visualizes an example of how the aggregation parameters θ agg for the four policies and their three actions were learned during the 2M timestep training of MULTIPOLAR (K = 4) policy in the Hopper environment. In this example, the source policies in the first and second rows were sampled from low-performance pools whereas those in the third and fourth rows were sampled from high-performance pools (see Section 4.2 for more details). It illustrates that MULTIPOLAR can successfully suppress the two useless low-performing policies as the training progresses. To further study how having low-performing source policies affects MULTIPOLAR sample efficiency, we evaluated MULTIPOLAR (K=4) in the Hopper environment, where the sources are randomly initialized policies, i.e., policies that predict actions randomly. Following our experimental procedure explained in Section 4.1, Table 13 reports the bootstrap mean and 95% confidence bounds of average episodic rewards over various training samples for this experiment and compares it with MULTIPOLAR with four low-performing sources. 
This suggests that the sample efficiency of MULTIPOLAR (K = 4) with low-performing source policies (i.e., source policies which had low final episodic rewards in their own environments) is almost the same as with randomly initialized source policies.
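The bootstrap estimate described above is straightforward to reproduce; the following is a minimal NumPy sketch rather than the authors' exact script, and the resampling count, seed handling, and placeholder data are assumptions.

```python
import numpy as np

def bootstrap_mean_ci(values, num_resamples=10000, alpha=0.05, seed=0):
    """Estimate the mean and a (1 - alpha) confidence interval of `values`
    (here: average episodic rewards collected over all seeds, source-policy
    sets, and target environment instances) via the sample bootstrap."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    resampled_means = np.array([
        rng.choice(values, size=len(values), replace=True).mean()
        for _ in range(num_resamples)
    ])
    lower, upper = np.quantile(resampled_means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), (lower, upper)

# Example: rewards from 3 seeds x 3 source-policy sets x 100 environment instances
rewards = np.random.normal(loc=180.0, scale=25.0, size=900)  # placeholder data
mean, (lo, hi) = bootstrap_mean_ci(rewards)
print(f"mean = {mean:.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```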
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byx9p2EtDH
We propose MULTIPOLAR, a transfer RL method that leverages a set of source policies collected under unknown, diverse environmental dynamics to efficiently learn a target policy under yet another dynamics.
Reinforcement learning algorithms rely on carefully engineered rewards from the environment that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is difficult and not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is such intrinsic reward function which uses prediction error as a reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. {\em without any extrinsic rewards}, across $54$ standard benchmark environments, including the Atari game suite. Our show surprisingly good performance as well as a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many games. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://doubleblindsupplementary.github.io/large-curiosity/. Reinforcement learning (RL) has emerged as a popular method for training agents to perform complex tasks. In RL, the agent's policy is trained by maximizing a reward function that is designed to align with the task. The rewards are extrinsic to the agent and specific to the environment they are defined for. Most of the success in RL has been achieved when this reward function is dense and well-shaped, e.g., a running "score" in a video game BID19. However, designing a wellshaped reward function is a notoriously challenging engineering problem. An alternative to "shaping" an extrinsic reward is to supplement it with dense intrinsic rewards BID24, that is, rewards that are generated by the agent itself. Examples of intrinsic reward include "curiosity" BID20 BID33 BID37 BID9 BID25 which uses prediction error as reward signal, and "visitation counts" BID2 BID22 BID28 BID18 which discourages the agent from revisiting the same states. The idea is that these intrinsic rewards will bridge the gaps between sparse extrinsic rewards by guiding the agent to efficiently explore the environment to find the next extrinsic reward. But what about scenarios with no extrinsic reward at all? This is not as strange as it sounds. Developmental psychologists talk about intrinsic motivation (i.e., curiosity) as the primary driver in the early stages of development BID38 BID30: babies appear to employ goal-less exploration to learn skills that will be useful later on in life. There are plenty of other examples, from playing Minecraft to visiting your local zoo, where no extrinsic rewards are required. Indeed, there is evidence that pre-training an agent on a given environment using only intrinsic rewards allows it to learn much faster when fine-tuned to a novel task in a novel environment BID25 BID23. Yet, so far, there has been no systematic study of learning with only intrinsic rewards. In this paper, we perform a large-scale empirical study of agents driven purely by intrinsic rewards across a range of diverse simulated environments. In particular, we choose the dynamics-based curiosity model of intrinsic reward presented in BID25 because it is scalable and trivially parallelizable, making it ideal for large-scale experimentation. 
The central idea is to represent intrinsic reward as the error in predicting the consequence of the agent's action given its current state, Figure 1: A snapshot of the 54 environments investigated in the paper. We show that agents are able to make progress using no extrinsic reward, or end-of-episode signal, and only using curiosity. Video , code and models at https://doubleblindsupplementary.github.io/large-curiosity/.i.e., the prediction error of learned forward-dynamics of the agent. We thoroughly investigate the dynamics-based curiosity across 54 environments: video games, physics engine simulations, and virtual 3D navigation tasks, shown in Figure 1.To develop a better understanding of curiosity-driven learning, we further study the crucial factors that determine its performance. In particular, predicting the future state in the high dimensional raw observation space (e.g., images) is a challenging problem and, as shown by recent works BID25 BID39, learning dynamics in an auxiliary feature space leads to improved . However, how one chooses such an embedding space is a critical, yet open research problem. To ensure stable online training of dynamics, we argue that the desired embedding space should: 1) be compact in terms of dimensionality, 2) preserve sufficient information about the observation, and 3) be a stationary function of the observations. Through systematic ablation, we examine the role of different ways to encode agent's observation such that an agent can perform well, driven purely by its own curiosity. Here "performing well" means acting purposefully and skillfully in the environment. This can be assessed quantitatively, in some cases, by measuring extrinsic rewards or environment-specific measures of exploration, or qualitatively, by observing videos of the agent interacting. We show that encoding observations via a random network turn out to be a simple, yet surprisingly effective technique for modeling curiosity across many popular RL benchmarks. This might suggest that many popular RL video game testbeds are not as visually sophisticated as commonly thought. Interestingly, we discover that although random features are sufficient for good performance in environments that were used for training, the learned features appear to generalize better (e.g., to novel game levels in Super Mario Bros.).The main contributions of this paper are: (a) Large-scale study of curiosity-driven exploration across a variety of environments including: the set of Atari games BID1, Super Mario Bros., virtual 3D navigation in Unity BID13, multi-player Pong, and Roboschool environments. (b) Extensive investigation of different feature spaces for learning the dynamics-based curiosity: random features, pixels, inverse-dynamics BID25 and variational auto-encoders BID14 and evaluate generalization to unseen environments. (c) Analysis of some limitations of a direct prediction-error based curiosity formulation. We observe that if the agent itself is the source of stochasticity in the environment, it can reward itself without making any actual progress. We empirically demonstrate this limitation in a 3D navigation task where the agent controls different parts of the environment. Consider an agent that sees an observation x t, takes an action a t and transitions to the next state with observation x t+1. We want to incentivize this agent with a reward r t relating to how informative the transition was. 
To provide this reward, we use an exploration bonus involving the following elements: (a) a network to embed observations into representations φ(x), (b) a forward dynamics network to predict the representation of the next state conditioned on the previous observation and action, p(φ(x_{t+1}) | x_t, a_t). Given a transition tuple {x_t, x_{t+1}, a_t}, the exploration reward is then defined as r_t = − log p(φ(x_{t+1}) | x_t, a_t), also called the surprisal (BID0). An agent trained to maximize this reward will favor transitions with high prediction error, which will be higher in areas where the agent has spent less time, or in areas with complex dynamics. Such a dynamics-based curiosity has been shown to perform quite well in some cases BID25, especially when the dynamics are learned in an embedding space rather than in raw observations. In this paper, we explore dynamics-based curiosity further. We use the mean-squared error corresponding to a fixed-variance Gaussian density as the surprisal, i.e., ‖f(x_t, a_t) − φ(x_{t+1})‖₂², where f is the learned dynamics model. However, any other density model could be used.

Consider the representation φ in the curiosity formulation above. If φ(x) = x, the forward dynamics model makes predictions in the observation space. A good choice of feature space can make the prediction task more tractable and filter out irrelevant aspects of the observation space. But what makes a good feature space for dynamics-driven curiosity? We propose the qualities that a good feature space must have:
• Compactness: The features should be easy to model by being low(er)-dimensional and filtering out irrelevant parts of the observation space.
• Sufficiency: The features should contain all the important information. Otherwise, the agent may fail to be rewarded for exploring some relevant aspect of the environment.
• Stability: Non-stationary rewards make it difficult for reinforcement agents to learn. Exploration bonuses by necessity introduce non-stationarity, since what is new and novel becomes old and boring with time. In a dynamics-based curiosity formulation, there are two sources of non-stationarity: the forward dynamics model is evolving over time as it is trained, and the features are changing as they learn. The former is intrinsic to the method, and the latter should be minimized where possible.

In this work, we systematically investigate the efficacy of a number of feature-learning methods, summarized briefly as follows:
Pixels The simplest case is where φ(x) = x and we fit our forward dynamics model in the observation space. Pixels are sufficient, since no information has been thrown away, and stable, since there is no feature-learning component. However, learning from pixels is tricky because the observation space may be high-dimensional and complex.
Random Features (RF) The next simplest case is where we take our embedding network, a convolutional network, and fix it after random initialization. Because the network is fixed, the features are stable. The features can be made compact in dimensionality, but they are not constrained to be. However, random features may fail to be sufficient.
Variational Autoencoders (VAE) VAEs were introduced in BID14 BID29 to fit latent variable generative models p(x, z) for observed data x and latent variable z with prior p(z) using variational inference. The method calls for an inference network q(z|x) that approximates the posterior p(z|x).
This is a feedforward network that takes an observation as input and outputs a mean and variance vector describing a Gaussian distribution with diagonal covariance. We can then use the mapping to the mean as our embedding network φ. These features will be a low-dimensional approximately sufficient summary of the observation, but they may still contain some irrelevant details such as noise, and the features will change over time as the VAE trains. Inverse Dynamics Features (IDF) Given a transition (s t, s t+1, a t) the inverse dynamics task is to predict the action a t given the previous and next states s t and s t+1. Features are learned using a common neural network φ to first embed s t and s t+1. The intuition is that the features learned should correspond to aspects of the environment that are under the agent's immediate control. This feature learning method is easy to implement and in principle should be invariant to certain kinds of noise (see BID25) for a discussion). A potential downside could be that the features learned may not be sufficient, that is they do not represent important aspects of the environment that the agent cannot immediately affect. A summary of these characteristics is provided in TAB1. Note that the learned features are not stable because their distribution changes as learning progresses. One way to achieve stability could be to pre-train VAE or IDF networks. However, unless one has access to the internal state of the game, it is not possible to get a representative data of the game scenes to train the features. One way is to act randomly to collect data, but then it will be biased to where the agent started, and won't generalize further. Since all the features involve some trade-off of desirable properties, it becomes an empirical question as to how effective each of them is across environments. Deciding upon a feature space is only first part of the puzzle in implementing a practical system. Here, we detail the critical choices we made in the learning algorithm. Our goal was to reduce nonstationarity in order to make learning more stable and consistent across environments. Through the following considerations outlined below, we are able to get exploration to work reliably for different feature learning methods and environments with minimal changes to the hyper-parameters.• PPO. In general, we have found the PPO algorithm to be a robust learning algorithm that requires little hyper-parameter tuning and so we stick to it for our experiments.• Reward normalization. Since the reward function is non-stationary, it is useful to normalize the scale of the rewards so that the value function can learn quickly. We did this by dividing the rewards by a running estimate of the standard deviation of the sum of discounted rewards.• Advantage normalization. While training with PPO, we normalize the advantages in a batch to have a mean of 0 and a standard deviation of 1.• Observation normalization. We run a random agent on our target environment for 10000 steps, then calculate the mean and standard deviation of the observation and use these to normalize the observations when training. This is useful to ensure that the features do not have very small variance at initialization, and also ensure features have less variation across different environments.• More actors. The stability of the method is greatly increased by increasing the number of parallel actors (which affects the batch-size) used. 
We typically use 128 parallel runs of the same environment for data collection while training an agent. • Normalizing the features. In combining intrinsic and extrinsic rewards, we found it useful to ensure that the scale of the intrinsic reward was consistent across the state space. We achieved this by using batch-normalization BID11 in the feature embedding network.

One important point is that the use of an end-of-episode signal, sometimes called 'done', can often leak information about the true reward function (assuming, as is common, that we have access to an extrinsic reward signal that we hide from the agent to measure pure exploration). If we don't remove the 'done' signal, many of the Atari games become too simple. For example, a simple strategy of giving +1 artificial reward at every time-step when the agent is alive and 0 upon death is sufficient to obtain a high score in some games, e.g., the Atari game 'Breakout', where the agent will seek to maximize the episode length and hence its score. In the case of negative rewards, the agent will try to end the episode as quickly as possible. [FIG0 caption: the evaluation curves show the mean reward (with standard error) of agents trained purely by curiosity, without reward or an end-of-episode signal; a curiosity model trained on pixels does not work well in any environment, VAE features perform the same as or worse than random and inverse-dynamics features, and inverse-dynamics features beat random features in 55% of the Atari games, making random features a simple yet surprisingly strong baseline; results on all Atari games are in FIG5 in the appendix.] In light of this, if we want to study the behavior of a pure exploration agent, we should not bias it. In the infinite horizon setting (i.e., the discounted returns are not truncated at the end of the episode and always bootstrapped using the value function), death is just another transition to the agent, to be avoided only if it is "boring". Therefore, we removed 'done' to separate the gains of an agent's exploration from merely that of the death signal. In practice, we do find that the agent avoids dying in the games, since that brings it back to the beginning of the game, an area it has already seen many times and where it can predict the dynamics well. This subtlety has been neglected by previous works showing experiments without extrinsic rewards.

In all of our experiments, both the policy and the embedding network work directly from pixels. For our implementation details, including hyper-parameters and architectures, please refer to Appendix A. Unless stated otherwise, all curves are the average of three runs with different seeds, and the shaded areas are standard errors of the mean. We have released the code and videos of a purely curious agent playing across all environments on our website.

We begin by scaling up pure curiosity-driven learning to a large number of environments without using any extrinsic rewards. We pick a total of 54 diverse simulated environments, as shown in Figure 1, including 48 Atari games, Super Mario Bros., 2 Roboschool scenarios (learning Ant controller and Juggling), Two-player Pong, and 2 Unity mazes (with and without a TV controlled by the agent).
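Before turning to the results, the core computation described above (a frozen random-feature embedding, a learned forward model, and a mean-squared-error surprisal) can be summarized in a short sketch. This is a minimal illustration rather than the released implementation; the network sizes, the frame-stack depth of 4, and the module and variable names are assumptions.

```python
import torch
import torch.nn as nn

class RandomFeatureCuriosity(nn.Module):
    """Sketch of the dynamics-based bonus: embed observations with a frozen,
    randomly initialized CNN (the 'RF' variant) and reward the agent with the
    squared error of a learned forward model in that feature space."""

    def __init__(self, in_channels=4, feat_dim=512, act_dim=4):
        super().__init__()
        self.embed = nn.Sequential(                        # phi(x): fixed after random init
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        for p in self.embed.parameters():
            p.requires_grad_(False)                        # keep the features stable
        self.forward_model = nn.Sequential(                # f(phi(x_t), a_t) -> phi(x_{t+1})
            nn.Linear(feat_dim + act_dim, 512), nn.ReLU(), nn.Linear(512, feat_dim)
        )

    def intrinsic_reward(self, obs, next_obs, action_onehot):
        with torch.no_grad():
            phi_t, phi_tp1 = self.embed(obs), self.embed(next_obs)
        pred = self.forward_model(torch.cat([phi_t, action_onehot], dim=1))
        # r_t = ||f(phi(x_t), a_t) - phi(x_{t+1})||^2: fixed-variance Gaussian surprisal
        return (pred - phi_tp1).pow(2).sum(dim=1)
```

In training, these bonuses would additionally be divided by a running estimate of the standard deviation of the sum of discounted rewards, as described in the "Reward normalization" item above, and the forward model itself is updated by minimizing the same squared error.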
The goal of this large-scale analysis is to investigate the following questions: (a) What happens when you run a pure curiosity-driven agent on a variety of games without any extrinsic rewards? (b) What kinds of behaviors can you expect from these agents? (c) What is the effect of the different feature-learning variants in dynamics-based curiosity on these behaviors?Atari Games To answer these questions, we began with a collection of well-known Atari games and ran a suite of experiments with different feature-learning methods. One way to measure how well a purely curious agent performs is to measure the extrinsic reward it is able to achieve, i.e. how good is the agent at playing the game. We show the evaluation curves of mean extrinsic reward in on 8 common Atari games in FIG0 and all 48 Atari suite in FIG5 in the appendix. It is important to note that the extrinsic reward is only used for evaluation, not for training. However, this is just a proxy for pure exploration because the game rewards could be arbitrary and might not align at all with how the agent explores out of curiosity. The first thing to notice from the curves is: most of them are going up. This shows that a pure curiosity-driven agent can often learn to obtain external rewards without seeing any extrinsic rewards during training! To understand why this is happening, consider the game'Breakout'. The main control action of the game is to keep hitting the bouncing ball with the paddle, but this does not earn any points. The game score increases only when the ball hits a brick (which then disappears). But the more bricks are struck by the ball, the more complicated the pattern of remaining bricks becomes, making the agent more curious to explore further, hence, collecting points as a bi-product. Furthermore, when the agent runs out of lives, the bricks are reset to the initial configuration, which has been seen by the agent many times before and is hence very predictable, so the agent tries to increase curiosity by staying alive and avoiding the death reset. The fact that the curiosity reward is often sufficient is an unexpected and might suggest that many popular RL test-beds do not need an external reward at all. It is likely that game designers (similar to architects, urban planners, gardeners, etc.) are purposefully setting up curricula to guide agents through the task by curiosity alone. This could explain why curiosity-like objective aligns reasonably well with the extrinsic reward in many human-designed environments BID15 BID4 BID10 BID45. However, this is not always the case, and sometimes a curious agent can even do worse than a random agent. This happens when the extrinsic reward has little correlation with the agent's exploration, or when the agent fails to explore efficiently (e.g. see games 'Atlantis' and 'IceHockey' in FIG5). We encourage the reader to refer to the game-play videos of the agent available on the website for a better understanding of the learned skills. Comparison of feature learning methods: We compare four feature learning methods in FIG0: raw pixels, random features, inverse dynamics features and VAE features. Training dynamics on raw pixels performs poorly across all the environments, while encoding pixels into features does better. This is likely because it is hard to learn a good dynamics model in pixel space, and prediction errors may be dominated by small irrelevant details. Surprisingly, random features (RF) perform quite well across tasks and sometimes better than using learned features. 
One reason for the good performance is that the random features are kept frozen (stable), so the dynamics model learned on top of them has an easier time because of the stationarity of the target. In general, random features should work well in domains where visual observations are simple enough that random features can preserve enough information about the raw signal, for instance, Atari games. One scenario where IDF features consistently outperform random features is generalization, e.g., training on one level of Mario Bros and testing on another (see Section 3.2 for details). The VAE method also performed well but was somewhat unstable, so we decided to use RF and IDF for further experiments. The detailed results in appendix FIG5 compare IDF vs. RF across the full Atari suite. To quantify the learned behaviors, we compared our curious agents to a randomly acting agent. We found that an IDF-curious agent collects more game reward than a random agent in 75% of the Atari games, and an RF-curious agent does better in 70%. Further, IDF does better than RF in 55% of the games. Overall, random features and inverse dynamics features worked well in general. Further details are in the appendix.

Super Mario Bros. We compare different feature-learning methods in Mario Bros in FIG0. Super Mario Bros has already been studied in the context of extrinsic-reward-free learning BID25 in small-scale experiments, and so we were keen to see how far curiosity alone can push the agent. We used a more efficient version of the Mario simulator, allowing for longer training, while keeping the observation space, actions, and dynamics of the game the same. Due to 100x longer training and using PPO for optimization, our agent was able to pass several levels of the game, significantly improving over prior exploration results on Mario Bros. Could we further push the performance of a purely curious agent by making the underlying optimization more stable? One way is to scale up the batch-size. We do so by increasing the number of parallel threads for running environments from 128 to 1024. We show the comparison between training using 128 and 1024 parallel environment threads in FIG1(a). [FIG1 caption note: the discontinuous jump on the graph corresponds to the agent reaching a limit of the environment; after a certain number of steps, the Atari Pong emulator starts randomly cycling through colors and becomes unresponsive to the agent's actions.] As is apparent from the graph, training with a large batch-size using 1024 parallel environment threads performs much better. In fact, the agent is able to explore much more of the game: discovering 11 different levels of the game, finding secret rooms, and defeating bosses. Note that the x-axis in the figure is the number of gradient steps, not the number of frames, since the point of this large-scale experiment is not a claim about sample-efficiency, but performance with respect to training the agent. This suggests that the performance of a purely curiosity-driven agent would improve as the training of the base RL algorithm (PPO in our case) gets better. The video is on the website.

We modified the Pong environment from the Roboschool framework to only have one paddle and to have two balls. The action space is continuous with two dimensions, and we discretized the action space into 5 bins per dimension, giving a total of 25 actions. Both the policy and embedding network are trained on the pixel observation space (note: not state space).
This environment is more difficult to control than the toy physics used in games, but the agent learns to intercept and strike the balls when it comes into its area. We monitored the number of bounces of the balls as a proxy for interaction with the environment, as shown in FIG1. See the video on the project website. Roboschool Ant Robot We also explored using the Ant environment which consists of an Ant with 8 controllable joints on a track. We again discretized the action space and trained policy and embedding network on raw pixels (not state space). However, in this case, it was less easy to measure exploration because the extrinsic distance reward measures progress along the racetrack, but a purely curious agent is free to move in any direction. We find that a walking like behavior emerges purely out of a curiosity-driven training. We refer the reader to the video showing that the agent is meaningfully interacting with the environment. Multi-agent curiosity in Two-player Pong We have already seen that a purely curiosity-driven agent learns to play several Atari games without reward, but we wonder how much of that behavior is caused by the fact that the opposing player is a computer agent with a hard-coded strategy. What would happen if we were to make both the players curious-driven? To find out, we set up a twoplayer Pong game where both the sides (paddles) of the game are controlled by two curiosity-driven agents. We shared the initial layers of both the agents but have different action heads, i.e., total action space is now the cross product of the actions of player 1 by the actions of player 2.Note that the extrinsic reward is meaningless in this context since the agent is playing both sides, so instead, we show the length of the episode. The are shown in FIG1 (c). We see from the episode length that the agent learns to have longer rallies over time, learning to play pong without any teacher -purely by curiosity on both sides. In fact, the game rallies eventually get so long that they break our Atari emulator causing the colors to change radically, which crashes the policy as shown in the plot. In the previous section, we showed that our purely curious agent can learn to explore efficiently and learn useful skills, e.g., game playing behaviour in games, walking behaviour in Ant etc. So far, these skills were shown in the environment where the agent was trained on. However, one advantage of developing reward-free learning is that one should then be able to utilize abundant "unlabeled" environments without reward functions by showing generalization to novel environments. To test this, we first pre-train our agent using curiosity only in the Level 1-1 of Mario Bros. We investigate how well RF and IDF-based curiosity agents generalize to novel levels of Mario. In FIG2, we show two examples of training on one level of Mario and fine-tuning on another testing level, and compare to learning on the testing level from scratch. The training signal in all the cases is curiosity-only reward. In the first case, from Level 1-1 to Level 1-2, the global statistics of the environments match (both are 'day-time' environments, i.e., blue sky) but levels have different enemies, different geometry, and different difficulty. We see that there is strong transfer from for both methods in this scenario. However, the transfer performance is weaker in the second scenario from Level 1-1 to Level 1-3. 
This is so because the problem is considerably harder for the latter level pairing as there is a color pallette shift from day to night, as shown in FIG2.We further note that IDF-learned features transfer in both the cases and random features transfer in the first case, but do not transfer in the second scenario from day to night. These might suggest that while random features perform well on training environments, learned features appear to generalize better to novel levels. However, this needs more analysis in the future across a large variety of environments. Overall, we find some promising evidence showing that skills learned by curiosity help our agent explore efficiently in novel environments. In all our experiments so far, we have shown that our agents can learn useful skills without any extrinsic rewards, driven purely by curiosity. However, in many scenarios, we might want the agent to perform some particular task of interest. This is usually conveyed to the agent by defining extrinsic rewards. When rewards are dense (e.g. game score at every frame), classic RL works well and intrinsic rewards generally should not help performance. However, designing dense rewards is a challenging engineering problem (see introduction for details). In this section, we evaluate how well curiosity can help an agent perform a task in presence of sparse, or just terminal, rewards. Terminal reward setting: For many real problems, only terminal reward is available, e.g. in navigation, you only get rewards once you find what you were looking for. This is a setting where classic RL typically performs poorly. Hence, we consider the 3D navigation in a maze designed in the Unity ML-agent framework with 9 rooms and a sparse terminal reward. The action space is discrete, consisting of: move forward, look left 15 degrees, look right 15 degrees and no-op. The agent starts in room-1, which is furthest away from room-9 which contains the goal. We compare an agent trained with extrinsic reward (+1 when the goal is reached, 0 otherwise) to an agent trained with extrinsic + intrinsic reward. Extrinsic only (classic RL) never finds the goal in all our trials, which means it is impossible to get any meaningful gradients. Whereas extrinsic+intrinsic typically converges to getting the reward every time. Results in FIG3 show for vanilla PPO, PPO + IDF-curiosity and PPO + RF-curiosity. In preliminary experiments, we picked 5 Atari games which have sparse rewards (as categorized by BID2), and compared extrinsic (classic RL) vs. extrinsic+intrinsic (ours) reward performance. In 4 games out of 5, curiosity bonus improves performance (see Table 2 in the appendix, the higher score is better). We would like to emphasize that this is not the focus of the paper, and these experiments are provided just for completeness. We just combined extrinsic (coefficient 1.0) and intrinsic reward (coefficient 0.01) directly without any tuning. We leave the question on how to optimally combine extrinsic and intrinsic rewards as a future direction. Intrinsic Motivation: A family of approaches to intrinsic motivation reward an agent based on prediction error BID34 BID39 BID25 BID0, prediction uncertainty BID41 BID9 ), or improvement (a BID18 of a forward dynamics model of the environment that gets trained along with the agent's policy. As a the agent is driven to reach regions of the environment that are difficult to predict for the forward dynamics model, while the model improves its predictions in these regions. 
This adversarial and non-stationary dynamics can give rise to complex behaviors. Relatively little work has been done in this area on the pure exploration setting where there is no external reward. Of these mostly closely related are those that use a forward dynamics model of a feature space such as BID39 where they use autoencoder features, and BID25 where they use features trained with an inverse dynamics task. These correspond roughly to the VAE and IDF methods detailed in Section 2.1.Smoothed versions of state visitation counts can be used for intrinsic rewards BID2 BID7 BID22 BID44 . Count-based methods have already shown very strong when combining with extrinsic rewards such as setting the state of the art in the Atari game Montezuma's Revenge BID2, and also showing significant exploration of the game without using the extrinsic reward. It is not yet clear in which situations count-based approaches should be preferred over dynamics-based approaches; we chose to focus on dynamics-based bonuses in this paper since we found them straightforward to scale and parallelize. In our preliminary experiments, we did not have sufficient success with already existing count-based implementations in scaling up for a large-scale study. Learning without extrinsic rewards or fitness functions has also been studied extensively in the evolutionary computing where it is referred to as 'novelty search' BID16 BID40 . There the novelty of an event is often defined as the distance of the event to the nearest neighbor amongst previous events, using some statistics of the event to compute distances. One interesting finding from this literature is that often much more interesting solutions can be found by not solely optimizing for fitness. Other methods of exploration are designed to work in combination with maximizing a reward function, such as those utilizing uncertainty about value function estimates BID21, or those using perturbations of the policy for exploration BID6 BID27 . BID35 and BID24 BID23 provide a great review of some of the earlier work on approaches to intrinsic motivation. Alternative methods of exploration include BID42 where they utilize an adversarial game between two agents for exploration. In BID8, they optimize a quantity called empowerment which is a measurement of the control an agent has over the state. In a concurrent work, diversity is used as a measure to learn skills without reward functions BID5 .Random Features: One of the findings in this paper is the surprising effectiveness of random features, and there is a substantial literature on random projections and more generally randomly initialized neural networks. Much of the literature has focused on using random features for classification BID31 BID12 BID46 where the typical finding is that whilst random features can work well for simpler problems, feature learning performs much better once the problem becomes sufficiently complex. Whilst we expect this pattern to also hold true for dynamics-based exploration, we have some preliminary evidence showing that learned features appear to generalize better to novel levels in Mario Bros. We have shown that our agents trained purely with a curiosity reward are able to learn useful behaviours: (a) Agent being able to play many Atari games without using any rewards. (b) Mario being able to cross over 11 levels without any extrinsic reward. (c) Walking-like behavior emerged in the Ant environment. 
(d) Juggling-like behavior in Robo-school environment (e) Rally-making behavior in Two-player Pong with curiosity-driven agent on both sides. But this is not always true as there are some Atari games where exploring the environment does not correspond to extrinsic reward. More generally, our suggest that, in many game environments designed by humans, the extrinsic reward is often aligned with the objective of seeking novelty. Limitation of prediction error based curiosity: A more serious potential limitation is the handling of stochastic dynamics. If the transitions in the environment are random, then even with a perfect dynamics model, the expected reward will be the entropy of the transition, and the agent will seek out transitions with the highest entropy. Even if the environment is not truly random, unpredictability caused by a poor learning algorithm, an impoverished model class or partial observability can lead to exactly the same problem. We did not observe this effect in our experiments on games so we designed an environment to illustrate the point. Figure 6: We add a noisy TV to the unity environment in Section 3.3. We compare IDF and RF with and without the TV.We return to the maze of Section 3.3 to empirically validate a common thought experiment called the noisy-TV problem. The idea is that local sources of entropy in an environment like a TV that randomly changes channels when an action is taken should prove to be an irresistible attraction to our agent. We take this thought experiment literally and add a TV to the maze along with an action to change the channel. In Figure 6 we show how adding the noisy-TV affects the performance of IDF and RF. As expected the presence of the TV drastically slows down learning, but we note that if you run the experiment for long enough the agents do sometimes converge to getting the extrinsic reward consistently. We have shown empirically that stochasticity can be a problem, and so it is important for future work to address this issue in an efficient manner. Future Work: We have presented a simple and scalable approach that can learn nontrivial behaviors across a diverse range of environments without any reward function or end-of-episode signal. One surprising finding of this paper is that random features perform quite well, but learned features appear to generalize better. While we believe that learning features will become more important once the environment is complex enough, we leave that for future work to explore. Our wider goal, however, is to show that we can take advantage of many unlabeled (i.e., not having an engineered reward function) environments to improve performance on a task of interest. Given this goal, showing performance in environments with a generic reward function is just the first step, and future work will hopefully investigate transfer from unlabeled to labeled environments. We have released the training code and environments on our website 1. For full details, we refer the reader to our code and video in the website. To better measure the amount of exploration, we provide the best return of curiosity-driven agents in figure 7(a) and the episode lengths in FIG4. Notably on Pong the increasing episode length combined with a plateau in returns shows that the agent maximizes the number of ball bounces, rather than the reward. FIG5 shows the performance of curiosity-driven agents based on Inverse Dynamics and Random features on 48 Atari games. 
Although not the focus of this paper, for completeness we include some results on combining intrinsic and extrinsic reward on several sparse-reward Atari games. When combining with extrinsic rewards, we use the end-of-episode signal. The reward used is the extrinsic reward plus 0.01 times the intrinsic reward. The results are shown in Table 2. We don't observe a large difference between the settings, likely because the combination of intrinsic and extrinsic reward needs to be tuned. We did observe that one of the intrinsic+extrinsic runs on Montezuma's Revenge explored 10 rooms.

We observe that the extrinsic returns of curiosity-driven agents often increase despite the agents having no access to the extrinsic return or end-of-episode signal. In multiple environments, the performance of the curiosity-driven agents is significantly better than that of a random agent, although there are environments where the behavior of the agent is close to random, or in fact seems to minimize the return rather than maximize it. For the majority of the training process, RF performs better than a random agent in about 67% of the environments, while IDF performs better than a random agent in about 71% of the environments.

Table 2: Mean reward (± std-error) after 100 million frames across 3 seeds for an agent trained with intrinsic plus extrinsic reward versus extrinsic reward only. The extrinsic (coefficient 1.0) and intrinsic reward (coefficient 0.01) were directly combined without any hyper-parameter tuning. We leave the question of how to optimally combine extrinsic and intrinsic rewards to future work. This is to emphasize that combining extrinsic with intrinsic rewards is not the focus of the paper, and these experiments are provided just for completeness.

Reward     | Gravitar       | Freeway     | Venture    | PrivateEye      | MontezumaRevenge
Ext Only   | 999.3 ± 220.7  | 33.3 ± 0.6  | 0 ± 0      | 5020.3 ± 395    | 1783 ± 691.7
Ext + Int  | 1165.1 ± 53.6  | 32.8 ± 0.3  | 416 ± 416  | 3036.5 ± 952.1  | 2504.6 ± 4.6

We show the analogue of the plot shown in FIG1 in Figure 9 (best extrinsic returns on the Mario scaling experiments). We observe that larger batches allow the agent to explore more effectively, reaching the same performance in fewer parameter updates and also achieving better ultimate scores.
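As a small illustration of the combination used in Table 2, the sketch below scales the intrinsic term by a running estimate of the standard deviation of the discounted return (the reward normalization described earlier) and mixes the two streams with the stated coefficients. The bookkeeping class and function names are assumptions rather than the released code.

```python
import numpy as np

class RunningReturnStd:
    """Tracks the standard deviation of the sum of discounted rewards,
    used to rescale the non-stationary intrinsic reward."""
    def __init__(self, gamma=0.99):
        self.gamma, self.ret = gamma, 0.0
        self.count, self.mean, self.m2 = 1e-4, 0.0, 0.0

    def update(self, intrinsic_reward):
        self.ret = self.gamma * self.ret + intrinsic_reward
        self.count += 1
        delta = self.ret - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (self.ret - self.mean)          # Welford's online update
        return max(np.sqrt(self.m2 / self.count), 1e-8)

def combined_reward(extrinsic, intrinsic, intrinsic_std, ext_coef=1.0, int_coef=0.01):
    """Extrinsic (coefficient 1.0) plus intrinsic (coefficient 0.01) reward,
    with the intrinsic term divided by its running std estimate."""
    return ext_coef * extrinsic + int_coef * intrinsic / intrinsic_std
```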
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJNwDjAqYX
An agent trained only with curiosity, and no extrinsic reward, does surprisingly well on 54 popular environments, including the suite of Atari games, Mario etc.
This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations (spatial robustness). Evaluated on these adversarially transformed examples, we demonstrate that adding regularization on top of standard or adversarial training reduces the relative error by 20% for CIFAR10 without increasing the computational cost. This outperforms handcrafted networks that were explicitly designed to be spatial-equivariant. Furthermore, we observe for SVHN, known to have inherent variance in orientation, that robust training also improves standard accuracy on the test set.
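The abstract above does not spell out the regularizer, so the following is only a generic sketch of what an invariance-inducing spatial-robustness regularizer could look like: grid-search a small set of rotations for the worst case and penalize the divergence between predictions on the clean and worst-case transformed images. The transformation grid, the KL penalty, and the weighting lam are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def worst_case_rotation(model, images, labels, angles=(-30, -15, 0, 15, 30)):
    """Grid-search a small set of rotation angles and keep, per image, the angle
    that maximizes the classification loss (a worst-case spatial transformation)."""
    with torch.no_grad():
        worst = images.clone()
        worst_loss = torch.full((images.size(0),), -float("inf"), device=images.device)
        for angle in angles:
            rotated = TF.rotate(images, angle)
            loss = F.cross_entropy(model(rotated), labels, reduction="none")
            better = loss > worst_loss
            worst[better], worst_loss[better] = rotated[better], loss[better]
    return worst

def invariance_regularized_loss(model, images, labels, lam=1.0):
    """Standard loss plus a term pushing predictions on worst-case rotated images
    toward the predictions on the clean images (an invariance-inducing regularizer)."""
    clean_logits = model(images)
    adv = worst_case_rotation(model, images, labels)
    adv_logits = model(adv)
    kl = F.kl_div(F.log_softmax(adv_logits, dim=1),
                  F.softmax(clean_logits.detach(), dim=1), reduction="batchmean")
    return F.cross_entropy(clean_logits, labels) + lam * kl
```

The same pattern extends to translations or combined rotation-translation grids, and the regularizer can be added on top of either standard or adversarial training, as the abstract describes.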
[ 0, 0, 0, 1 ]
B1e6oy39aE
for spatial transformations robust minimizer also minimizes standard accuracy; invariance-inducing regularization leads to better robustness than specialized architectures
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is `greater than,' `similar to,' or `smaller than' the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison , the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner. To measure the quality of something, we often compare it with other things of a similar kind. Before assigning 4 stars to a film, a critic would have thought, "It is better than 3-star films but worse than 5-stars." This ranking through pairwise comparisons is done in various decision processes . It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly (; a). Also, it is easy to tell a higher pitch between two notes, but absolute pitch is a rare ability . Ranking through comparisons has been investigated for machine learning. In learning to rank (LTR), the pairwise approach learns, between two documents, which one is more relevant to a query . Also, in ordinal regression , to predict the rank of an object, binary classifications are performed to tell whether the rank is higher than a series of thresholds or not. In this paper, we propose order learning to learn ordering relationship between objects. Thus, order learning is related to LTR and ordinal regression. However, whereas LTR and ordinal regression assume that ranks form a total order , order learning can be used for a partial order as well. Order learning is also related to metric learning . While metric learning is about whether an object is'similar to or dissimilar from' another object, order learning is about'greater than or smaller than.' Section 2 reviews this related work. In order learning, a set of classes, Θ = {θ 1, θ 2, · · ·, θ n}, is ordered, where each class θ i represents one or more object instances. Between two classes θ i and θ j, there are three possibilities: θ i > θ j or θ i < θ j or neither (i.e. incomparable). These relationships are represented by the order graph. The goal of order learning is to determine the order graph and then classify an instance into one of the classes in Θ. To achieve this, we develop a pairwise comparator that determines ordering relationship between two instances x and y into one of three categories: x is'greater than,''similar to,' or'smaller than' y. Then, we use the comparator to measure an input instance against multiple reference instances in known classes. Finally, we estimate the class of the input to maximize the consistency among the comparison . It is noted that the parameter optimization of the pairwise comparator, the selection of the references, and the discovery of the order graph are jointly performed to minimize a common loss function. Section 3 proposes this order learning. We apply order learning to facial age estimation. Order learning matches age estimation well, since it is easier to tell a younger one between two people than to estimate each person's age directly (; a). 
Even when we assume that age classes are linearly ordered, the proposed age estimator performs well. The performance is further improved, when classes are divided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner. Section 4 describes this age estimator and discusses its . Finally, Section 5 concludes this work. Pairwise comparison: It is a fundamental problem to estimate the priorities (or ranks) of objects through pairwise comparison. In the classic paper, noted that, even when direct estimates of certain quantities are unavailable, rough ratios between them are easily obtained in many cases. Thus, he proposed the scaling method to reconstruct absolute priorities using only relative priorities. The scaling method was applied to monocular depth estimation (a) and aesthetic assessment (b). Ranking from a pairwise comparison matrix has been studied to handle cases, in which the matrix is huge or some elements are noisy (; ; ;). On the other hand, the pairwise approach to LTR learns, between two documents, which one is more relevant to a query (; ; ;). The proposed order learning is related to LTR, since it also predicts the order between objects. But, while LTR sorts multiple objects with unknown ranks and focuses on the sorting quality, order learning compares a single object x with optimally selected references with known ranks to estimate the rank of x. Ordinal regression: Ordinal regression predicts an ordinal variable (or rank) of an instance. Suppose that a 20-year-old is misclassified as a 50-year old and a 25-year old, respectively. The former error should be more penalized than the latter. Ordinal regression exploits this characteristic in the design of a classifier or a regressor. and , a conversion scheme was proposed to transform an ordinal regression problem into multiple binary classification problems. Ordinal regression based on this conversion scheme has been used in various applications, including age estimation (; ;) and monocular depth estimation . Note that order learning is different from ordinal regression. Order learning performs pairwise comparison between objects, instead of directly estimating the rank of each object. In age estimation, ordinal regression based on the conversion scheme is concerned with the problem, "Is a person's age bigger than a threshold θ?" for each θ. In contrast, order learning concerns "Between two people, who is older?" Conceptually, order learning is easier. Technically, if there are N ranks, the conversion scheme requires N − 1 binary classifiers, but order learning needs only a single ternary classifier. Moreover, whereas ordinal regression assumes that ranks form a total order, order learning can be used even in the case of a partial order . Metric learning: A distance metric can be learned from examples of similar pairs of points and those of dissimilar pairs . The similarity depends on an application and is implicitly defined by user-provided examples. If a learned metric generalizes well to unseen data, it can be used to enforce the desired similarity criterion in clustering , classification , or information retrieval . Both metric learning and order learning learn important binary relations in mathematics: metric and order . However, a metric decides whether an object x is similar to or dissimilar from another object y, whereas an order tells whether x is greater than or smaller than y. 
Thus, a learned metric is useful for grouping similar data, whereas a learned order is suitable for processing ordered data.

Age estimation: Human ages can be estimated from facial appearance (Kwon & da). The aging pattern subspace was proposed for this purpose, and biologically inspired features were introduced to age estimation. Recently, deep learning has been adopted for age estimation. OR-CNN was proposed for age estimation; it is an ordinal regressor using the conversion scheme. Ranking-CNN is another ordinal regressor. While OR-CNN uses a common feature for multiple binary classifiers, Ranking-CNN employs a separate CNN to extract a feature for each binary classifier. Another approach grouped adjacent ages via the group-n encoding, determined whether a face belongs to each group, and combined the results to predict the age. The mean-variance loss was proposed to train a CNN classifier for age estimation, deep regression forests were proposed for age estimation, a compact age estimator was developed using the two-points representation, and a continuity-aware probabilistic network was also proposed for age estimation.

Figure 1: Examples of order graphs, in which node n precedes node m (n → m) if n divides m. For clarity, self-loops for reflexivity and edges deducible from transitivity are omitted from the graphs.

3 ORDER LEARNING

Let us first review mathematical definitions and concepts related to order. An order, often denoted by ≤, is a binary relation on a set Θ = {θ_1, θ_2, · · ·, θ_n} that satisfies the three properties of reflexivity, antisymmetry, and transitivity. In real-world problems, an order describes ranks or priorities of objects. For example, in age estimation, θ_i ≤ θ_j means that people in age class θ_i look younger than those in θ_j. We may use the symbol →, instead of ≤, to denote an order on a finite set Θ. Then, the order can be represented by a directed graph using elements in Θ as nodes. If θ_i → θ_j, there is a directed edge from node θ_i to node θ_j. The order graph is acyclic because of antisymmetry and transitivity. For example, for n, m ∈ N, let n → m denote that m is a multiple of n. Note that it is an order on any subset of N. Figure 1(a) is the graph representing this order on {1, . . ., 9}. Elements θ_i and θ_j are comparable if θ_i → θ_j or θ_j → θ_i, or incomparable otherwise. In Figure 1(a), 6 and 8 are incomparable. In age estimation, it is difficult to compare apparent ages of people in different ethnic groups or of different genders. An order on a set Θ is total (or linear) if all elements in Θ are comparable to one another. In such a case, Θ is called a linearly ordered set. In some real-world problems, orders are not linear. In this work, a subset Θ_c of Θ is referred to as a chain if Θ_c is linearly ordered and also maximal, i.e., there is no proper superset of Θ_c that is linearly ordered. In Figure 1(a), nodes 1, 2, 4, and 8 form a chain. In Figure 1(b), the entire set is composed of three disjoint chains.

Let Θ = {θ_1, θ_2, · · ·, θ_n} be an ordered set of classes, where each class θ_i represents one or more object instances. For example, in age estimation, age class 11 is the set of 11-year-olds. The objective of order learning is to determine the order graph, such as Figure 1(a) or (b), and categorize an object instance into one of the classes. However, in many cases, order graphs are given explicitly or are obvious from the context. For example, in quality assessment, there are typically five classes (poor → satisfactory → good → very good → excellent), forming a single chain.
Also, in age estimation, suppose that an algorithm first classifies a person's gender into female or male and then estimates the age differently according to the gender. In this case, implicitly, there are separate age classes for each gender, and the age classes compose two disjoint chains, similarly to Figure 1(b). Thus, in this subsection, we assume that the order graph is already known. Also, given an object instance, we assume that the chain to which the instance belongs is known. Then, we attempt to categorize the instance into one of the classes in the chain. Section 3.4 will propose order learning in the case of an unknown order graph composed of disjoint chains.

Instead of directly estimating the class of each instance, we learn the pairwise ordering relationship between two instances. Let Θ_c = {0, 1, . . ., N − 1} be a chain, where N is the number of classes. Let x and y be two instances belonging to classes in Θ_c. Let θ(·) denote the class of an instance. Then, x and y are compared and their ordering relationship is defined according to their class difference as
x ≻ y if θ(x) − θ(y) > τ,
x ≈ y if |θ(x) − θ(y)| ≤ τ,
x ≺ y if θ(x) − θ(y) < −τ,
where τ is a threshold. To avoid confusion, we use '≻, ≈, ≺' for the instance ordering, while '>, =, <' for the class order. In practice, this categorization is performed by the pairwise comparator in Figure 2, which consists of a Siamese network and a ternary classifier. To train the comparator, only comparable instance pairs are employed.

We estimate the class θ(x) of a test instance x by comparing it with reference instances y_m, 0 ≤ m ≤ M − 1, where M is the number of references. The references are selected from the training data such that they are from the same chain as x. Given x and y_m, the comparator provides one of the three categories '≻, ≈, ≺' as a result. Let θ be an estimate of the true class θ(x). Then, the consistency between the comparator result and the estimate is defined as
φ_con(x, y_m, θ) = [x ≻ y_m][θ − θ(y_m) > τ] + [x ≈ y_m][|θ − θ(y_m)| ≤ τ] + [x ≺ y_m][θ − θ(y_m) < −τ],
where [·] is the indicator function. The function φ_con(x, y_m, θ) returns either 0 for an inconsistent case or 1 for a consistent case. For example, suppose that the pairwise comparator declares x ≺ y_m but θ − θ(y_m) > τ. Then, φ_con(x, y_m, θ) = 0 · 1 + 0 · 0 + 1 · 0 = 0. Due to a possible classification error of the comparator, this inconsistency may occur even when the estimate θ equals the true class θ(x). To maximize the consistency with all references, we estimate the class of x by
θ̂_MC(x) = arg max_{θ∈Θ_c} Σ_{m=0}^{M−1} φ_con(x, y_m, θ),
which is called the maximum consistency (MC) rule. Figure 3 illustrates this MC rule. (In the figure, φ_con(x, y_m, θ) is written within each box; for θ = 7 there are six inconsistent boxes, whereas for θ = 9 there are 24 such boxes, so θ = 7 minimizes the inconsistency, or equivalently maximizes the consistency, and therefore θ̂_MC(x) = 7.)

It is noted that '≻, ≈, ≺' is not a mathematical order. For example, if θ(x) + (3/4)τ = θ(y) = θ(z) − (3/4)τ, then x ≈ y and y ≈ z but x ≺ z. This is impossible in an order. More precisely, due to the quantization effect of the ternary classifier, '≻, ≈, ≺' is quasi-transitive, and '≈' is symmetric but intransitive. We use this quasi-transitive relation to categorize an instance into one of the classes, on which a mathematical order is well defined.

In the simplest case of 1CH, all classes form a single chain Θ_c = {0, 1, . . ., N − 1}. For example, in 1CH age estimation, people's ages are estimated regardless of their ethnic groups or genders. We implement the comparator in Figure 2 and train it to minimize the comparator loss
ℓ_co = − Σ_{x∈T} Σ_{y∈R} Σ_j q_j^{xy} log p_j^{xy},
where T is the set of all training instances, R ⊂ T is the set of reference instances, q^{xy} is the ground-truth ordering probability vector for the pair (x, y), and p^{xy} is the softmax output of the ternary classifier. First, we initialize R = T and minimize ℓ_co via stochastic gradient descent.
Then, we reduce the reference set R by sampling references from T. Specifically, for each class in Θ c, we choose M/N reference images to minimize the same loss co, where M is the number of all references and N is the number of classes. In other words, the reliability score of a reference candidate y is defined as and the M/N candidates with the highest reliability scores are selected. Next, after fixing the reference set R, the comparator is trained to minimize the loss co. Then, after fixing the comparator parameters, the reference set R is updated to minimize the same loss co, and so forth. In the test phase, an input instance is compared with the M references and its class is estimated using the MC rule in. In KCH, we assume that classes form K disjoint chains, as in Figure 1 (b). For example, in the supervised 6CH for age estimation, we predict a person's age according to the gender in {female, male} and the ethnic group in {African, Asian, European}. Thus, there are 6 chains in total. In this case, people in different chains are assumed to be incomparable for age estimation. It is supervised, since gender and ethnic group annotations are used to separate the chains. The supervised 2CH or 3CH also can be implemented by dividing chains by genders only or ethnic groups only. The comparator is trained similarly to 1CH. However, in computing the comparator loss in, a training instance x and a reference y are constrained to be from the same chain. Also, during the test, the type (or chain) of a test instance should be determined. Therefore, a K-way type classifier is trained, which shares the feature extractor with the comparator in Figure 2 and uses additional fully-connected (FC) layers. Thus, the overall loss is given by where co is the comparator loss and ty is the type classifier loss. The comparator and the type classifier are jointly trained to minimize this overall loss. During the test, given an input instance, we determine its chain using the type classifier, and compare it with the references from the same chain, and then estimate its class using the MC rule in. This subsection proposes an algorithm to separate classes into K disjoint chains when there are no supervision or annotation data available for the separation. First, we randomly partition the training set Input: T = training set of ordinal data, K = # of chains, N = # of classes in each chain, and M = # of references in each chain 1: Partition T randomly into T0,..., TK−1 and train a pairwise comparator 2: From T k, select M/N references y with the highest reliability scores α k (y) 4: end for 5: repeat 6: for each instance x do Membership Update (T k) 7: Assign it to T k *, where k * = arg max k β k (x) subject to the regularization constraint 8: end for 9: Fine-tune the comparator and train a type classifier using T0,..., TK−1 to minimize = co + ty 10: Assign it to T k where k is its type classification 12: end for 13: From T k, select M/N references y with the highest reliability scores α k (y) 15: end for 16: until convergence or predefined number of iterations Output: Pairwise comparator, type classifier, reference sets R0,..., RK−1 to, the comparator loss co can be written as where R k ⊂ T k is the set of references for the kth chain, α k (y) = x∈T k j q xy j log p xy j is the reliability of a reference y in the kth chain, and β k (x) = y∈R k j q xy j log p xy j is the affinity of an instance x to the references in the kth chain. Note that β k (x) = − y∈R k D(q xy p xy) where D is the Kullback-Leibler distance . 
Second, after fixing the chain membership T k for each chain k, we select references y to maximize the reliability scores α k (y). These references form R k. Third, after fixing R 0,..., R K−1, we update the chain membership T 0,..., T K−1, by assigning each training instance x to the kth chain that maximizes the affinity score β k (x). The second and third steps are iteratively repeated. Both steps decrease the same loss co in. The second and third steps are analogous to the centroid rule and the nearest neighbor rule in the Kmeans clustering , respectively. The second step determines representatives in each chain (or cluster), while the third step assigns each instance to an optimal chain according to the affinity. Furthermore, both steps decrease the same loss alternately. However, as described in Algorithm 1, we modify this iterative algorithm by including the membership refinement step in lines 10 ∼ 12. Specifically, we train a K-way type classifier using T 0,..., T K−1. Then, we accept the type classification to refine T 0,..., T K−1. This refinement is necessary because the type classifier should be used in the test phase to determine the chain of an unseen instance. Therefore, it is desirable to select the references also after refining the chain membership. Also, in line 7, if we assign an instance x to maximize β k (x) only, some classes may be assigned too few training instances, leading to data imbalance. To avoid this, we enforce the regularization constraint so that every class is assigned at least a predefined number of instances. This regularized membership update is described in Appendix A. We develop an age estimator based on the proposed order learning. Order learning is suitable for age estimation, since telling the older one between two people is easier than estimating each person's age directly (; a). It is less difficult to distinguish between a 5-year-old and a 10-year-old than between a 65-yearold and a 70-year-old. Therefore, in age estimation, we replace the categorization based on the arithmetic difference in∼ with that based on the geometric ratio as follows. which represent'older,''similar,' and'younger.' The consistency in is also modified accordingly. There are 5 reference images for each age class within range in this work (M = 330, N = 66). Thus, a test image should be compared with 330 references. However, we develop a twostep approach, which does at most 130 comparisons but performs as good as the method using 330 comparisons. The two-step estimation is employed in all experiments. It is described in Appendix B. We align all facial images using SeetaFaceEngine and resize them into 256 × 256 × 3. Then, we crop a resized image into 224 × 224 × 3. For the feature extractors in Figure 2, we use VGG16 without the FC layers . They yield 512-channel feature vectors. Then, the vectors are concatenated and input to the ternary classifier, which has three FC layers, yielding 512-, 512-, and 3-channel vectors sequentially. The 3-channel vector is normalized to the softmax probabilities of the three categories', ≈, ≺.' In∼, τ age is set to 0.1. In KCH with K ≥ 2, the type (or chain) of a test image should be determined. Thus, we design a type classifier, which shares the feature extractor with the comparator. Similarly to the ternary classifier, the type classifier uses three FC layers, yielding 512-, 512-, and K-channel vectors sequentially. The comparator and the type classifier are jointly trained. 
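Based on the architecture described above (a shared VGG16 feature extractor without FC layers producing 512-channel vectors, a three-layer ternary classifier, and an optional K-way type head), a rough PyTorch sketch of the comparator might look as follows; the pooling used to reduce the convolutional features to a 512-dimensional vector and other unstated details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class PairwiseComparator(nn.Module):
    """Siamese comparator sketch: shared VGG16 trunk, ternary classifier head,
    and an optional K-way type classifier sharing the feature extractor."""
    def __init__(self, num_types=None):
        super().__init__()
        self.features = models.vgg16(weights=None).features   # shared branch
        self.pool = nn.AdaptiveAvgPool2d(1)                    # -> 512-dim vector (assumed)
        self.ternary = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 3))                                 # '>', '=', '<'
        self.type_head = None
        if num_types is not None:
            self.type_head = nn.Sequential(
                nn.Linear(512, 512), nn.ReLU(inplace=True),
                nn.Linear(512, 512), nn.ReLU(inplace=True),
                nn.Linear(512, num_types))

    def embed(self, x):
        return self.pool(self.features(x)).flatten(1)

    def forward(self, x, y):
        fx, fy = self.embed(x), self.embed(y)
        order_logits = self.ternary(torch.cat([fx, fy], dim=1))
        type_logits = self.type_head(fx) if self.type_head is not None else None
        return order_logits, type_logits
```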
To initialize the feature extractors, we adopt the VGG16 parameters pre-trained on ImageNet . We randomly initialize all the other layers. We update the parameters using the Adam optimizer . We set the learning rate to 10 −4 for the first 70 epochs. Then, we select 5 references for each age class. Using the selected references, we fine-tune the network with a learning rate of 10 −5. We repeat the reference selection and the parameter fine-tuning up to 3 times. In the case of unsupervised chains, we enforce the regularization constraint (line 7 in Algorithm 1). By default, for each age, all chains are constrained to be assigned the same number of training images. If there are L training images of θ-year-olds, the age classes θ in the K chains are assigned L/K images, respectively, according to the affinity scores β k (x) by Algorithm 2 in Appendix A. MORPH II is the most popular age estimation benchmark, containing about 55,000 facial images in the age range. IMDB-WIKI is another dataset containing about 500,000 celebrity images obtained from IMDB and Wikipedia. It is sometimes used to pre-train age estimation networks. Optionally, we also select 150,000 clean data from IMDB-WIKI to pre-train the proposed pairwise comparator. Although several facial age datasets are available, most are biased to specific ethnic groups or genders. Data unbalance restricts the usability and degrades the generalization performance. Thus, we form a'balanced dataset' from MORPH II, AFAD , and UTK (b). Table 1 shows how the balanced dataset is organized. Before sampling images from MORPH II, AFAD, and UTK, we rectify inconsistent labels by following the strategy in. For each combination of gender in {female, male} and ethnic group in {African, Asian, European}, we sample about 6,000 images. Also, during the sampling, we attempt to make the age distribution as For performance assessment, we calculate the mean absolute error (MAE) and the cumulative score (CS) . MAE is the average absolute error between predicted and ground-truth ages. Given a tolerance level l, CS computes the percentage of test images whose absolute errors are less than or equal to l. In this work, l is fixed to 5, as done in Chang et al. Table 2 compares the proposed algorithm (1CH) with conventional algorithms on MORPH II. As evaluation protocols for MORPH II, we use four different settings, including the 5-fold subjectexclusive (SE) and the 5-fold random split (RS) . Appendix C.1 describes these four settings in detail and provides an extended version of Table 2. OHRank, OR-CNN, and Ranking-CNN are all based on ordinal regression. OHRank uses traditional features, yielding relatively poor performances, whereas OR-CNN and Ranking-CNN use CNN features. DEX, DRFs, MO-CNN, MV, and BridgeNet employ VGG16 as backbone networks. Among them, MV and BridgeNet achieve the state-of-the-art , by employing the mean-variance loss and the gating networks, respectively. The proposed algorithm outperforms these algorithms in setting C, which is the most challenging task. Furthermore, in terms of CS, the proposed algorithm yields the best performances in all four settings. These outstanding performances indicate that order learning is an effective approach to age estimation. In Table 3, we analyze the performances of the proposed algorithm on the balanced dataset according to the number of hypothesized chains. We also implement and train the state-of-the-art MV on the balanced dataset and provide its using supervised chains. 
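For completeness, the two evaluation metrics defined above can be computed as in the short sketch below, with the tolerance level l fixed to 5 as in the text.

```python
import numpy as np

def mae(pred_ages, true_ages):
    """Mean absolute error between predicted and ground-truth ages."""
    return float(np.mean(np.abs(np.asarray(pred_ages) - np.asarray(true_ages))))

def cumulative_score(pred_ages, true_ages, l=5):
    """CS(l): percentage of test images with absolute error at most l."""
    errors = np.abs(np.asarray(pred_ages) - np.asarray(true_ages))
    return float(100.0 * np.mean(errors <= l))
```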
Let us first analyze the performances of the proposed algorithm using'supervised' chains. The MAE and CS scores on the balanced dataset are worse than those on MORPH II, since the balanced dataset contains more diverse data and thus is more challenging. By processing facial images separately according to the genders (2CH), the proposed algorithm reduces MAE by 0.05 and improves CS by 0.2% in comparison with 1CH. Similar improvements are obtained by 3CH or 6CH, which consider the ethnic groups only or both gender and ethnic groups, respectively. In contrast, in the case of MV, multi-chain hypotheses sometimes degrade the performances; e.g., MV (6CH) yields a lower CS than MV (1CH). Regardless of the number of chains, the proposed algorithm trains a single comparator but uses a different set of references for each chain. The comparator is a ternary classifier. In contrast, MV (6CH) should train six different age estimators, each of which is a 66-way classifier, to handle different chains. Thus, their training is more challenging than that of the single ternary classifier. Note that, for the multi-chain hypotheses, the proposed algorithm first identifies the chain of a test image using the type classifiers, whose accuracies are about 98%. In Table 3, these Comparison are color-coded. Cyan, yellow, and magenta mean that the test subject is older than , similar to (≈), and younger than (≺) a reference. The age is estimated correctly as 22. type classifiers are used to obtain the of the proposed algorithm, whereas the ground-truth gender and ethnic group of each test image are used for MV. Figure 4 shows how to estimate an age in 6CH. In this test, the subject is a 22-year-old Asian male. He is compared with the references who are also Asian males. Using the comparison , the age is correctly estimated as 22 by the MC rule in. Table 4 lists the MAE for each test chain. Europeans yield poorer MAEs than Africans or Asians. However, this is not due to inherent differences between ethnic groups. It is rather caused by differences in image qualities. As listed in Table 1, more European faces are sampled from UTK. The UTK faces were crawled from the Internet and their qualities are relatively low. Also, from the cross-chain test using 6CH, some observations can be made: • Except for the As-F test chain, the lowest MAE is achieved by the references in the same chain. • Eu-M and Eu-F are mutually compatible. For Eu-M, the second best performance is obtained by the Eu-F references, and vice versa. On the other hand, some chains, such as Af-M and Eu-F, are less compatible for the purpose of the proposed age estimation. Table 3 also includes the performances of the proposed algorithm using'unsupervised' chains. The unsupervised algorithm outperforms the supervised one, which indicates that the gender or ethnic group is not the best information to divide data for age estimation. As in the supervised case, 2CH, 3CH, and 6CH yield similar performances, which means that two chains are enough for the balanced set. Compared with MV (1CH), the unsupervised algorithm (2CH) improves the performances significantly, by 0.33 in terms of MAE and 4.1% in terms of CS. Figure 5 shows how training images are divided into two chains in the unsupervised 2CH. During the membership update, for each age, each chain is regularized to include at least a certain percentage (κ) of the training images. In the default mode, the two chains are assigned the same number of images with κ = 50%. 
However, Appendix C.3 shows that the performance is not very sensitive to κ. At κ = 10%, MAE = 4.17 and CS = 73.7%. From Figure 5, we observe • The division of the chains is not clearly related to genders or ethnic groups. Regardless of genders or ethnic groups, about half of the images are assigned to chain 1 and the others to chain 2. • At κ = 10%, chain 1 mostly consists of middle ages, while chain 2 of 10s, 20s, 60s, and 70s. • At κ = 50%, there is no such strong age-dependent tendency. But, for some combinations of gender, ethnic group, and age band, it is not equal division. For example, for Asian females, a majority of 40s are assigned to chain 1 but a majority of 50s and 60s are assigned to chain 2. The unsupervised algorithm is designed to divide instances into multiple clusters when gender and ethnic group information is unavailable. As shown in Appendix C.3, different κ's yield various clustering . Surprisingly, these different clusters still outperform the supervised algorithm. For example, at κ = 10%, let us consider the age band of 20s and 30s. If the references in chain 2 are used to estimate the ages of people in chain 1, the average error is 4.6 years. On the contrary, if the references in chain 1 are used for chain 2, the average error is −5.4 years. These opposite biases mean that people in chain 1 tend to look older than those in chain 2. These'looking-older' people in 20s and 30s compose the blue cluster (chain 1) together with most people in 40s and 50s in Figure 5. In this case,'looking-older' people in 20s and 30s are separated from'looking-younger' ones by the unsupervised algorithm. This is more effective than the gender-based or ethnic-group-based division of the supervised algorithm. Appendix C presents more on age estimation. Order learning was proposed in this work. In order learning, classes form an ordered set, and each class represents object instances of the same rank. Its goal is to determine the order graph of classes and classify a test instance into one of the classes. To this end, we designed the pairwise comparator to learn ordering relationships between instances. We then decided the class of an instance by comparing it with reference instances in the same chain and maximizing the consistency among the comparison . For age estimation, it was shown that the proposed algorithm yields the stateof-the-art performance even in the case of the single-chain hypothesis. The performance is further improved when the order graph is divided into multiple disjoint chains. In this paper, we assumed that the order graph is composed of disjoint chains. However, there are more complicated graphs, e.g. Figure 1 (a), than disjoint chains. For example, it is hard to recognize an infant's sex from its facial image . But, after puberty, male and female take divergent paths. This can be reflected by an order graph, which consists of two chains sharing common nodes up to a certain age. It is an open problem to generalize order learning to find an optimal order graph, which is not restricted to disjoint chains. During the chain membership update in Algorithm 1, we assign an instance x to chain k to maximize β k (x) subject to the regularization constraint. As mentioned in Section 4.1, in age estimation, this regularization is enforced for each age. Let X denote the set of θ-year-olds for a certain θ. Also, let K = {0, 1, . . ., K − 1} be the set of chains. Suppose that we should assign at least a certain number (L) of instances in X to each chain. 
This is done by calling RegularAssign(K, X, L) in Algorithm 2, which is a recursive function. Algorithm 2 yields the membership function c(x) as output. For example, c(x) = 1 means that x belongs to chain 1. Input: K = set of chains, X = set of instances, and L = minimum number 1: for each k ∈ K do Initialize chains 2: X k = ∅ 3: end for 4: for each x ∈ X do Irregular partitioning 5: c(x) = arg max k∈K β k (x) 6: X c(x) = X c(x) ∪ {x} 7: end for 8: km = arg min k∈K |X k | Chain of the minimum size 9: if |X km | ≥ L then 10: return 11: else 12: while |X km | < L do Increase X km 14: x = maxx∈X β km (x) 15: X km = X km ∪ {x} 17: end while 18: B TWO-STEP ESTIMATION There are 5 reference images for each age within range in this work. Thus, for the age estimation of a test image using the MC rule in, the test image should be compared with M = 330 reference images. However, we reduce the number of comparisons using a two-step approach. First, the test image is compared with the 35 references of ages 15, 25,..., 75 only, and a rough age estimateθ 1 is obtained using the MC rule. Second, it is compared with the 105 references of all ages within [θ 1 − 10,θ 1 + 10], and the final estimateθ 2 is obtained. Since there are at least 10 common references in the first and second steps, the two-step estimation requires at most 130 comparisons. • Setting A: 5,492 images of Europeans are randomly selected and then divided into training and testing sets with ratio 8:2 . • Setting B: About 21,000 images are randomly selected, while restricting the ratio between Africans and Europeans to 1:1 and that between females and males to 1:3. They are divided into three subsets (S1, S2, S3). The training and testing are done under two subsettings . -(B1) training on S1, testing on S2 + S3 -(B2) training on S2, testing on S1 + S3 • Setting C (SE): The entire dataset is randomly split into five folds, subject to the constraint that the same person's images should belong to only one fold, and the 5-fold crossvalidation is performed. • Setting D (RS): The entire dataset is randomly split into five folds without any constraint, and the 5-fold cross-validation is performed. Table 5 is an extended version of Table 2. It includes the of more conventional algorithms. We assess the proposed age estimator (1CH) on the FG-NET database . FG-NET is a relatively small dataset, composed of 1,002 facial images of 82 subjects. Ages range from 0 to 69. For FG-NET, the leave one person out (LOPO) approach is often used for evaluation. In other words, to perform tests on each subject, an estimator is trained using the remaining 81 subjects. Then, the are averaged over all 82 subjects. In order to assess the generalization performance, we do not retrain the comparator on the FG-NET data. Instead, we fix the comparator trained on the balanced dataset and just select references from the remaining subjects' faces in each LOPO test. For the comparator, the arithmetic scheme in∼ is tested as well as the default geometric scheme in∼. For comparison, MV is tested, but it is trained for each LOPO test., the proposed algorithm outperforms MV, even though the comparator is not retrained. These indicate that the comparator generalizes well to unseen data, as long as the training images cover a desired age range. Also, note that the geometric scheme provides better performances than the arithmetic scheme. Figure 6 compares MAEs according to a test age. 
Again, within the covered range, the proposed algorithm significantly outperforms MV especially when test subjects are older than 45. The ordering relationship between two instances can be categorized via the arithmetic scheme in∼ using a threshold τ or the geometric scheme in∼ using a threshold τ age. Table 8 lists the performances of the proposed algorithm (1CH) according to these thresholds. We see that the geometric scheme outperforms the arithmetic scheme in general. The best performance is achieved with τ age = 0.1, which is used in all experiments in the main paper. Note that the scores are poorer than those in Table 3, since the comparator is trained for a smaller number of epochs to facilitate this test. At τ age = 0.1, two teenagers are declared to be not'similar to' each other if their age difference is larger than about 1. Also, two forties are not'similar' if the age difference is larger than about 5. C.5 PERFORMANCE ACCORDING TO NUMBER OF REFERENCES Table 9: The performances of the proposed algorithm (supervised) on the balanced dataset according to the number of references for each age class (M/N). In general, the performances get better with more references. However, the performances are not very sensitive to M/N. They saturate when M/N ≥ 5. Therefore, we set M/N = 5 in this work. Figure 8: All reference images in the supervised 6CH. For some ages in certain chains, the balanced dataset includes less than 5 faces. In such cases, there are less than 5 references.
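Because the equations for the two comparison schemes are not reproduced above, the sketch below gives one plausible reading consistent with the surrounding description: the arithmetic scheme thresholds the raw age difference with τ, while the geometric scheme thresholds a log-ratio with τ_age, which at τ_age = 0.1 yields roughly a one-year tolerance for teenagers and about five years for people in their forties, matching the examples in the text. The exact formulas in the paper may differ, so treat both functions as assumptions.

```python
import math

def arithmetic_order(age_x, age_y, tau=3.0):
    """Arithmetic scheme: threshold the age difference with tau."""
    diff = age_x - age_y
    return ">" if diff > tau else "<" if diff < -tau else "="

def geometric_order(age_x, age_y, tau_age=0.1):
    """Geometric scheme (assumed form): threshold the log-ratio of ages."""
    diff = math.log(age_x + 1) - math.log(age_y + 1)   # +1 guards against age 0
    return ">" if diff > tau_age else "<" if diff < -tau_age else "="

print(geometric_order(13, 15))   # '<' : not similar, the gap is too large for teens
print(geometric_order(43, 47))   # '=' : similar, forties tolerate about 5 years
```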
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HygsuaNFwr
The notion of order learning is proposed and it is applied to regression problems in computer vision
We study how the topology of a data set comprising two components, representing the two classes of objects in a binary classification problem, changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on the training set and a generalization error of less than 1%. The goal is to shed light on two well-known mysteries in deep neural networks: (i) a nonsmooth activation function like ReLU outperforms a smooth one like the hyperbolic tangent; (ii) successful neural network architectures rely on having many layers, despite the fact that a shallow network can approximate any function arbitrarily well. We performed extensive experiments on the persistent homology of a range of point cloud data sets. The results consistently demonstrate the following: Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers. No matter how complicated the topology of the data set we begin with, when passed through a well-trained neural network, the Betti numbers of both components invariably reduce to their lowest possible values: the zeroth Betti number is one and all higher Betti numbers are zero. Furthermore, the reduction in Betti numbers is significantly faster for the ReLU activation than for the hyperbolic tangent activation, consistent with the fact that the former defines non-homeomorphic maps (that change topology) whereas the latter defines homeomorphic maps (that preserve topology). Lastly, shallow and deep networks process the same data set differently: a shallow network operates mainly by changing geometry and changes topology only in its final layers, whereas a deep network spreads topological changes more evenly across all its layers.
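The abstract above tracks Betti numbers of a point cloud as it passes through network layers via persistent homology; the sketch below shows one generic way such numbers could be estimated with the ripser package. This is purely illustrative and is not the procedure used in the paper; the persistence threshold eps is an arbitrary assumption.

```python
import numpy as np
from ripser import ripser

def betti_numbers(points, eps=0.5, maxdim=1):
    """Count persistence intervals longer than eps in each homology dimension,
    a simple proxy for the Betti numbers of the point cloud `points`."""
    diagrams = ripser(np.asarray(points), maxdim=maxdim)['dgms']
    return [int(np.sum((d[:, 1] - d[:, 0]) > eps)) for d in diagrams]

# e.g. a noisy circle should give Betti numbers close to [1, 1]
angles = np.random.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * np.random.randn(200, 2)
print(betti_numbers(circle))
```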
[ 0, 0, 0, 1, 0, 0, 0 ]
SkgBfaNKPr
We show that neural networks operate by changing the topology of a data set and explore how architectural choices affect this change.
The convergence rate and final performance of common deep learning models have significantly benefited from recently proposed heuristics such as learning rate schedules, knowledge distillation, skip connections and normalization layers. In the absence of theoretical underpinnings, controlled experiments aimed at explaining the efficacy of these strategies can aid our understanding of deep learning landscapes and the training dynamics. Existing approaches for empirical analysis rely on tools of linear interpolation and visualizations with dimensionality reduction, each with their limitations. Instead, we revisit the empirical analysis of heuristics through the lens of recently proposed methods for loss surface and representation analysis, viz. mode connectivity and canonical correlation analysis (CCA), and hypothesize reasons why the heuristics succeed. In particular, we explore knowledge distillation and learning rate heuristics of (cosine) restarts and warmup using mode connectivity and CCA. Our empirical analysis suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers. The introduction of heuristics such as normalization layers BID19 BID0, residual connections BID11, and learning rate strategies BID26 BID9 ) have greatly accelerated progress in Deep Learning. Many of these ingredients are now commonplace in modern architectures, and some of them have also been buttressed with theoretical guarantees BID1 BID28 BID10. However, despite their simplicity and efficacy, why some of these heuristics work is still relatively unknown. Existing attempts at explaining these strategies empirically have been limited to intuitive explanations and the use of tools such as spectrum analysis , linear interpolation between two models and low-dimensional visualizations of the loss surface. In our work, we instead use recent tools built specifically for analyzing deep networks, viz., mode connectivity and singular value canonical correlation analysis (SVCCA) . We investigate three strategies in detail: (a) cosine learning rate decay, (b) learning rate warmup, and (c) knowledge distillation, and list the summary of our contributions at the end of this section. Cosine annealing BID26, also known as stochastic gradient descent with restarts (SGDR), and more generally cyclical learning rate strategies , have been recently proposed to accelerate training of deep networks BID3. The strategy involves reductions and restarts of learning rates over the course of training, and was motivated as means to escape spurious local minima. Experimental have shown that SGDR often improves convergence both from the standpoint of iterations needed for convergence and the final objective. Learning rate warmup BID9 also constitutes an important ingredient in training deep networks, especially in the presence of large or dynamic batch sizes. It involves increasing the learning rate to a large value over a certain number of training iterations followed by decreasing the learning rate, which can be performed using step-decay, exponential decay or other such schemes. The strategy was proposed out of the need to induce stability in the initial phase of training with large learning rates (due to large batch sizes). 
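Since warmup and step decay recur throughout the analysis below, a minimal sketch of such a schedule is given here; the specific constants (a peak learning rate of 2.5 reached linearly over 200 iterations, then division by 10 at epochs 60, 120 and 150) are taken from the large-batch experiments reported later in the paper and are otherwise arbitrary.

```python
def warmup_step_lr(iteration, epoch, base_lr=2.5, warmup_iters=200,
                   milestones=(60, 120, 150), gamma=0.1):
    """Linear warmup over the first `warmup_iters` iterations, then step decay
    by `gamma` at each epoch milestone."""
    if iteration < warmup_iters:                      # warmup phase
        return base_lr * (iteration + 1) / warmup_iters
    return base_lr * gamma ** sum(epoch >= m for m in milestones)
```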
It has been employed in training of several architectures at scale including ResNets and Transformer networks .Further, we investigate knowledge distillation (KD) BID13. This strategy involves first training a (teacher) model on a typical loss function on the available data. Next, a different (student) model (typically much smaller than the teacher model) is trained, but instead of optimizing the loss function defined using hard data labels, this student model is trained to mimic the teacher model. It has been empirically found that a student network trained in this fashion significantly outperforms an identical network trained with the hard data labels. We defer a detailed discussion of the three heuristics, and existing explanations for their efficacy to sections 3, 4 and 5 respectively. Finally, we briefly describe the tools we employ for analyzing the aforementioned heuristics. Mode connectivity (MC) is a recent observation that shows that, under circumstances, it is possible to connect any two local minima of deep networks via a piecewise-linear curve BID5. This shows that local optima obtained through different means, and exhibiting different local and generalization properties, are connected. The authors propose an algorithm that locates such a curve. While not proposed as such, we employ this framework to better understand loss surfaces but begin our analysis in Section 2 by first establishing its robustness as a framework. Deep network analyses focusing on the weights of a network are inherently limited since there are several invariances in this, such as permutation and scaling. propose using CCA along with some pre-processing steps to analyze the activations of networks, such that the ing comparison is not dependent on permutations and scaling of neurons. They also prove the computational gains of using CCA over alternatives (BID25) for representational analysis and employ it to better understand many phenomenon in deep learning. • We use mode connectivity and CCA to improve understanding of cosine annealing, learning rate warmup and knowledge distillation. For mode connectivity, we also establish the robustness of the approach across changes in training choices for obtaining the modes.• We demonstrate that the reasons often quoted for the success of cosine annealing are not substantiated by our experiments, and that the iterates move over barriers after restarts but the explanation of escaping local minima might be an oversimplification.• We show that learning rate warmup primarily limits weight changes in the deeper layers and that freezing them achieves similar outcomes as warmup.• We show that the latent knowledge shared by the teacher in knowledge distillation is primarily disbursed in the deeper layers.2 EMPIRICAL TOOLS 2.1 MODE CONNECTIVITY introduce a framework, called mode connectivity, to obtain a low loss (or high accuracy, in the case of classification) curve of simple form, such as a piecewise linear curve, that connects optima (modes of the loss function) found independently. This observation suggests that points at the same loss function depth are connected, somewhat contrary to several empirical claiming that minima are isolated or have barriers between them 1.Let w a ∈ R D and w b ∈ R D be two modes in the D-dimensional parameter space obtained by optimizing a given loss function L(w) (like the cross-entropy loss). 
Figure caption: Validation accuracy corresponding to models on the following 6 different curves. Curve GA connects mode G (found with default hyperparameters) and mode A (using a large batch size); similarly, curve GB connects mode G and mode B (using Adam), curve GC connects to mode C (using a linearly decaying learning rate), curve GD to mode D (with less L2 regularization), curve GE to mode E (using a poor initialization), and curve GF to mode F (without data augmentation). t = 0 corresponds to mode G in all plots.
We represent a curve connecting w_a and w_b by φ_θ(t): [0, 1] → R^D, such that φ_θ(0) = w_a and φ_θ(1) = w_b. To find a low-loss path, we find the set of parameters θ ∈ R^D that minimizes the loss ℓ(θ) = E_{t∼U(0,1)}[L(φ_θ(t))], where U(0, 1) is the uniform distribution on the interval [0, 1]. To optimize ℓ(θ) over θ, we first need to choose a parametric form for φ_θ(t). One proposed form is a polygonal chain with a single bend at θ: φ_θ(t) = 2(tθ + (0.5 − t)w_a) for 0 ≤ t ≤ 0.5, and φ_θ(t) = 2((t − 0.5)w_b + (1 − t)θ) for 0.5 < t ≤ 1. To minimize ℓ(θ), we sample t ∼ U(0, 1) at each iteration and use ∇_θ L(φ_θ(t)) as an unbiased estimate of the true gradient ∇_θ ℓ(θ) to perform updates on θ, where θ is initialized with (w_a + w_b)/2. To demonstrate that the curve-finding approach works in practice, the original work connects two optima found using different initializations but a common training scheme, which we detail below. We explore the limits of this procedure by connecting optima obtained from different training strategies. The goal of this investigation is to first establish the robustness of the framework in order to seamlessly use it as a tool for analysis. In particular, we experiment with different initializations, optimizers, data augmentation choices, and hyperparameter settings including regularization, training batch sizes, and learning rate schemes. We note in passing that while the framework was proposed to connect two points in the parameter space that are at equal depth in the loss landscape, it is well defined to also connect points at different depths; in this case, the path corresponds to one that minimizes the average loss along the curve. Conventional wisdom suggests that the different training schemes mentioned above will converge to regions in the parameter space that are vastly different from each other. Examples of this include the size of minibatches used during training BID22, the choice of optimizer BID12, the initialization BID8, and the choice of regularizer. Having a high-accuracy connection between these pairs would seem counterintuitive. For obtaining the reference model (named mode G), we train the VGG-16 model architecture on the CIFAR-10 training data BID23 for 200 epochs with SGD. We then build 6 variants of the reference mode G as follows: we obtain mode A using a training batch size of 4000, mode B by using the Adam optimizer instead of SGD, mode C with a linearly decaying learning rate instead of the step decay used in mode G, mode D using a smaller weight decay of 5 × 10^−6, mode E by increasing the variance of the initialization distribution to 3 × 2/n, and mode F using no data augmentation. Note that for the set of modes {A, B, C, D, E, F}, all other hyperparameters and settings except the ones mentioned above are kept the same as for mode G. We use the mode connectivity algorithm on each of the 6 pairs consisting of G and another mode, resulting in curves GA, GB, GC, GD, GE, and GF.
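A compact numpy sketch of the curve-finding procedure just described is given below: parameters are treated as flat vectors, the single-bend chain φ_θ(t) is evaluated at a random t each step, and θ is updated with the chain-rule-scaled gradient. The function loss_and_grad is a hypothetical callback returning the network loss and its gradient at a flat weight vector; this is an illustration rather than the reference implementation.

```python
import numpy as np

def phi(theta, w_a, w_b, t):
    """Polygonal chain with a single bend at theta."""
    if t <= 0.5:
        return 2 * (t * theta + (0.5 - t) * w_a)
    return 2 * ((t - 0.5) * w_b + (1.0 - t) * theta)

def find_connecting_bend(w_a, w_b, loss_and_grad, steps=10000, lr=0.01):
    theta = 0.5 * (w_a + w_b)                 # initialization from the text
    for _ in range(steps):
        t = np.random.uniform()
        _, grad_w = loss_and_grad(phi(theta, w_a, w_b, t))
        dphi_dtheta = 2 * t if t <= 0.5 else 2 * (1.0 - t)   # chain-rule factor
        theta -= lr * dphi_dtheta * grad_w
    return theta
```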
FIG0 shows the validation accuracy for models on each of the 6 connecting curves during the 20th, 40th, 60th and 80th epochs of the mode connectivity training procedure and also for models on the line segment joining the two endpoints (corresponding to the initialization for θ at epoch 0). As described in Section 2.1, for a polychain curve GX (connecting modes G and X using the curve described by θ), model parameters φ θ (t) on the curve are given by p φ θ (t) = 2(tp θ + (0.5 − t)p G ) if 0 ≤ t ≤ 0.5 and p φ θ (t) = 2((t − 0.5)p X + (1 − t)p θ ) if 0.5 < t ≤ 1 where p G, p θ and p X are parameters of the models G, θ, and X respectively. Thus φ θ = G and φ θ = X.In a few epochs of the curve training, for all 6 pairs, we can find a curve such that each point on it generalizes almost as well as models from the pair that is being connected. Note that by virtue of existence of these 6 curves, there exists a high accuracy connecting curve (albeit with multiple bends) for each of the 7 2 pairs of modes. We refer the reader to Appendix 7 for a t-SNE plot of the modes and their connections, and also for additional plots and details. Having established the high likelihood of the existence of these curves, we use this procedure along with interpolation of the loss surface between parameters at different epochs as tools to analyze the dynamics of SGD and SGDR. Canonical correlation analysis (CCA) is a classical tool from multivariate statistics BID16 that investigates the relationships between two sets of random variables. have proposed coupling CCA with pre-processing steps like Singular Value Decomposition (SVD) or Discrete Fourier Transform (DFT) to design a similarity metric for two neural net layers that we want to compare. These layers do not have to be of the same size or belong to the same network. Given a dataset with m examples X = {x 1, . . . x m}, we denote the scalar output of the neuron z l i (i-th neuron of layer l) for the input x i by f z L i (x i). These scalar outputs can be stacked (along n different neurons and m different datapoints) to create a matrix L ∈ R m×n representing the output of a layer corresponding to the entire dataset. This choice of comparing neural network layers using activations instead of weights and biases is crucial to the setup proposed. Indeed, invariances due to re-parameterizations and permutations limit the interpretability of the model weights BID4. However, under CCA of the layers, two activation sets are comparable by design. Given representations corresponding to two layers L a ∈ R ma×n and L b ∈ R m b ×n, SVCCA first performs dimensionality reduction using SVD to obtain L a ∈ R m a ×n and L b ∈ R m b ×n while preserving 99% of the variance. The subsequent CCA step involves transforming L a and L b to a 1 L a and b 1 L b respectively where {a 1, b 1} is found by maximizing the correlation between the transformed subspaces, and the corresponding correlation is denoted by ρ 1. This process continues, using orthogonality constraints, till c = min{m a, m b} leading to the set of correlation values {ρ 1, ρ 2 . . . ρ c} corresponding to c pairs of canonical variables {{a 1, b 1}, {a 2, b 2},... {a c, b c}} respectively. We refer the reader to for details on solving these optimization problems. The average of these c correlations 1 n i ρ i is then considered as a measure of the similarity between the two layers. 
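A rough numpy sketch of the SVCCA similarity just described: each layer's (neurons x datapoints) activation matrix is reduced with an SVD keeping 99% of the variance, and the mean canonical correlation between the two reduced subspaces is reported. This simplifies the reference procedure (for example, the DFT pre-processing for convolutional layers discussed next is omitted) and is meant only to make the computation concrete.

```python
import numpy as np

def svd_reduce(L, keep=0.99):
    """Keep the top SVD directions explaining `keep` of the variance."""
    L = L - L.mean(axis=1, keepdims=True)
    _, s, Vt = np.linalg.svd(L, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep)) + 1
    return np.diag(s[:k]) @ Vt[:k]

def svcca_similarity(L_a, L_b, eps=1e-10):
    """Mean canonical correlation between two (neurons x datapoints) layers."""
    def orthonormal_rows(X):
        _, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt[s > eps]
    A, B = orthonormal_rows(svd_reduce(L_a)), orthonormal_rows(svd_reduce(L_b))
    rho = np.linalg.svd(A @ B.T, compute_uv=False)   # canonical correlations
    return float(np.mean(rho))
```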
For convolutional layers, suggest using a DFT pre-processing step before CCA, since they typically have a large number of neurons (m a or m b), where performing raw SVD and CCA would be computationally too expensive. This procedure can then be employed to compare different neural network representations and to determine how representations evolve over training iterations. BID26 introduced SGDR as a modification to the common linear or step-wise decay of learning rates. The strategy decays learning rates along a cosine curve and then, at the end of the decay, restarts them to its initial value. The learning rate at the t-th epoch in SGDR is given by the following expression in where η min and η max are the lower and upper bounds respectively for the learning rate. T cur represents how many epochs have been performed since the last restart and a warm restart is simulated once T i epochs are performed. Also T i = T mult × T i−1, meaning the period T i for the learning rate variation is increased by a factor of T mult after each restart. While the strategy has been claimed to outperform other learning rate schedulers, little is known why this has been the case. One explanation that has been given in support of SGDR is that it can be useful to deal with multi-modal functions, where the iterates could get stuck in a local optimum and a restart will help them get out of it and explore another region; however, BID26 do not claim to observe any effect related to multi-modality. BID18 propose an ensembling strategy using the set of iterates before restarts and claim that, when using the learning rate annealing cycles, the optimization path converges to and escapes from several local minima. We empirically investigate if this is actually the case by interpolating the loss surface between parameters at different epochs and studying the training and validation loss for parameters on the hyperplane passing through 2 the two modes found by SGDR and their connectivity. Further, by employing the CCA framework as described in Section 2.2, we investigate the progression of training, and the effect of restarts on the model activations. We train a VGG-16 network on the CIFAR-10 dataset using SGDR. For our experiments, we choose T 0 = 10 epochs and T mult = 2 (warm restarts simulated every 10 epochs and the period T i doubled at every new warm restart), η max = 0.05 and η min = 10 −6. We also perform VGG training using SGD (with momentum of 0.9) and a step decay learning rate scheme (initial learning rate of η 0 = 0.05, scaled by 5 at epochs 60 and 150). In order to understand the loss landscape on the optimization path of SGDR, the pairs of iterates obtained just before the restarts {w 30, w 70}, {w 70, w 150} and {w 30, w 150} are given as inputs to the mode connectivity algorithm, where w n is the model corresponding to parameters at the n-th epoch of training. FIG1 (b) shows the training loss for models along the line segment joining these pairs and those on the curve found through mode connectivity. For the baseline case of SGD training, we connect the iterates around the epochs when we decrease our learning rate in the step decay learning rate scheme. Thus, we chose {w 55, w 65}, {w 145, w 165} and {w 55, w 165} as input pairs to the mode connectivity algorithm. FIG1 (c) shows the training loss for models along the line segments joining these pairs and the curves found through mode connectivity. 
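The schedule driving these SGDR runs is the standard warm-restart cosine formula, η_t = η_min + ½(η_max − η_min)(1 + cos(π T_cur / T_i)); with the settings quoted above (T_0 = 10, T_mult = 2, η_max = 0.05, η_min = 1e-6), the sketch below restarts at epochs 10, 30, 70 and 150, matching the iterates w_30, w_70 and w_150 analyzed here. The code is illustrative; PyTorch's CosineAnnealingWarmRestarts scheduler implements the same behaviour.

```python
import math

def sgdr_lr(epoch, eta_min=1e-6, eta_max=0.05, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR)."""
    T_i, T_cur = T_0, epoch
    while T_cur >= T_i:              # locate the current restart cycle
        T_cur -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

# torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10,
#                                                      T_mult=2, eta_min=1e-6)
```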
From FIG1 (b), it is clear that for the pairs {w 30, w 150} and {w 70, w 150} the training loss for points on segment is much higher than the endpoints suggesting that SGDR indeed finds paths that move over a barrier 3 in the training loss landscape. In contrast, for SGD (without restarts) in FIG1 (c) none of the three pairs show evidence of having a training loss barrier on the line segment joining them. Instead there seems to be an almost linear decrease of training loss along the direction of these line segments, suggesting that SGD's trajectory is quite different from SGDR's. We present additional experiments, including for other metrics, in Appendix 8.To further understand the SGDR trajectory, we evaluate the intermediate points on the hyperplane in the D-dimensional space defined by the three points: w 70, w 150 and w 70−150, where w 70−150 is the bend point that defines the high accuracy connection for the pair {w 70, w 150}. ) suggests that SGDR helps the iterates converge to a different region although neither of w 70 or w 150 are technically a local minimum, nor do they appear to be lying in different basins, hinting that BID18's claims about SGDR converging to and escaping from local minima might be an oversimplification. 4 Another insight we can draw from FIG3 (a) is that the path found by mode connectivity corresponds to lower training loss than the loss at the iterates that SGDR converges to (L(w 150) > L(w 70−150)). However, FIG3 (b) shows that models on this curve seem to overfit and not generalize as well as the iterates w 70 and w 150. Thus, although gathering models from this connecting curve might seem as a novel and computationally cheap way of creating ensembles, this generalization gap alludes to one limitation in doing so; point to other shortcomings of curve ensembling in their original work. In FIG3, the region of the plane between the iterates w 70 and w 150 corresponds to higher training loss but lower validation loss than the two iterates. This hints at a reason why averaging iterates to improve generalization using cyclic or constant learning rates has been found to work well. Finally, in FIG0 in Appendix 9, we present the CCA similarity plots for two pairs of models: epochs 10 and 150 (model at the beginning and end of training), and epochs 150 and 155 (model just before and just after a restart). For standard SGD training, observe that the activations of the shallower layers bear closer resemblance than the deeper layers between a partially and fully trained network from a given training run. For SGDR training, we witness similar (discussed in Appendix 9), meaning that the representational similarities between the network layers at the beginning and end of training are alike for SGDR and SGD, even though restarts lead to a trajectory that tends to cross over barriers. Learning rate warmup is a common heuristic used by many practitioners for training deep neural nets for computer vision BID9 and natural language processing BID2 ) tasks. Theoretically, it can be shown that the learning dynamics of SGD rely on the ratio of the batch size and learning rate (; BID21 BID14 . And hence, an increase in batch size over a baseline requires an accompanying increase in learning rate for comparable training. However, in cases when the batch size is increased significantly, the curvature of the loss function typically does not support a proportional increase in the learning rate. 
Warmup is hence motivated as a means to use large learning rates without causing training instability. We particularly focus on the importance of the learning rate schedule's warmup phase in the large batch (LB) training of deep convolutional neural networks as discussed in Goyal Using CCA as a tool to study the learning dynamics of neural networks through training iterations, we investigate the differences and similarities for the following 3 training configurations -(a) large batch training with warmup (LB + warmup), (b) large batch training without warmup (LB no warmup) and (c) small batch training without warmup (SB no warmup). We train a VGG-11 architecture on the CIFAR-10 dataset using SGD with momentum of 0.9. Learning rate for the small batch case (batch-size of 100) is set to 0.05, and for the large batch cases (batch-size of 5000) is set to 2.5 as per the scaling rule. For the warmup, we increase the learning rate from 0 to 2.5 over the first 200 iterations. Subsequently, we decrease the learning rate as per the step decay schedule for all runs, scaling it down by a factor of 10 at epochs 60, 120 and 150. We plot the learning rate and validation accuracy for these 3 cases in Figure 4 Figure 4 (c) plots the similarity for layer i of iter a with the same layer of iter b (this corresponds to diagonal elements of the matrices in FIG6) for these three setups. An evident pattern in FIG6, (b) and (c) is the increase in similarity for the last few layers (stack of fully-connected layers) for the LB + warmup and SB cases, which is absent in the LB without warmup case. This suggests that when used with the large batch size and learning rate, warmup tends to avoid unstably large changes in the fully-connected (FC) stack for this network configuration. To validate this proposition, we train using the LB without warmup setup, but freezing the fully-connected stack for the first 20 epochs 5 (LB no warmup + FC freeze). Figure 4(M denotes the i-th layer of network M, T denotes the teacher network (VGG16), S distilled is the student network trained using distillation and S indep. is the student network trained using hard training labels.suggesting the validity our proposition in this case. We refer the reader to Appendix 10 for analogous for ResNet-18 and ResNet-32 BID11; thus also demonstrating the generality of our claim. Finally, note from Figure 4 (d) that no qualitative difference exists in the trajectory beyond the warmup when compared to the standard training approach . We study knowledge distillation as proposed by BID13 using CCA to measure representational similarity between layers of the teacher and student model. Distillation involves training a "student" model using the output probability distribution of a "teacher" model. This has been widely known to help the student model perform better than it would, if it were trained using hard labels due to knowledge transfer from the teacher model. The reason often quoted for the success of distillation is the transfer of dark knowledge from the teacher to the student BID13, and more recently, as an interpretation of importance weighing BID6. We investigate if this knowledge transfer is limited to certain parts of the network, and if representational similarity between layers of the student and teacher model and a student can help answer this question. 
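The distillation objective just introduced trains the student on the teacher's softened output distribution; one common way to write that loss is sketched below. The temperature T = 5 follows the experiment described next, and whether a hard-label term is mixed in is not stated, so this pure soft-target form is an assumption rather than the authors' exact loss.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=5.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T ** 2)
```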
To construct an example of distillation that can be used for our analysis, we use a VGG-16 model as our teacher network and a shallow convolutional network ([conv, maxpool, relu] x2, fc, relu, fc, fc, softmax) as the student network. We train the shallow network for CIFAR-10 using the teacher's predicted probability distribution (softened using a temperature of 5), (S distilled), and for the baseline, train another instance of the same model in a standard way using hard labels, (S indep.). Over 5 runs for each of the two setups, we find the distillation training attains the best validation accuracy at 85.18% while standard training attains its best at 83.01%. We compare their layer-wise representations with those of the teacher network (T). FIG8 shows the CCA plots and the absolute value of their difference. The scores of these two pairs are quite similar for the shallow layers of the student network relative to the deeper layers, suggesting that the difference that knowledge distillation brings to the training of smaller networks is restricted to the deeper layers (fc stack). Similar are obtained through different configurations for the student and teacher when the student benefits from the teacher's knowledge. We hypothesize that the dark knowledge transferred by the teacher is localized majorly in the deeper (discriminative) layers, and less so in the feature extraction layers. We also note that this is not dissimilar to the hypothesis of BID6, and also relates ot the from the literature on fine-tuning or transfer learning BID8; BID17 which suggest training of only higher layers. Heuristics have played an important role in accelerating progress of deep learning. Founded in empirical experience, intuition and observations, many of these strategies are now commonplace in architectures. In the absence of strong theoretical guarantees, controlled experiments aimed at explaining the the efficacy of these strategies can aid our understanding of deep learning and the training dynamics. The primary goal of our work was the investigation of three such heuristics using sophisticated tools for landscape analysis. Specifically, we investigate cosine annealing, learning rate warmup, and knowledge distillation. For this purpose, we employ recently proposed tools of mode connectivity and CCA. Our empirical analysis sheds light on these heuristics and suggests that: (a) the reasons often quoted for the success of cosine annealing are not evidenced in practice; (b) that the effect of learning rate warmup is to prevent the deeper layers from creating training instability; and (c) that the latent knowledge shared by the teacher is primarily disbursed in the deeper layers. Inadvertently, our investigation also leads to the design of new heuristics for practically improving the training process. Through our on SGDR, we provide additional evidence for the success of averaging schemes in this context. Given the empirical suggesting the localization of the knowledge transfer between teacher and student in the process of distillation, a heuristic can be designed that only trains portions of the (pre-trained) student networks instead of the whole network. For instance, recent on self-distillation BID6 show improved performance via multiple generations of knowledge distillation for the same model. Given our , computational costs of subsequent generations can be reduced if only subsets of the model are trained, instead of training the entire model. 
Finally, the freezing of weights instead of employing learning rate warmup allows for comparable training performance but with reduced computation during the warmup phase. We note in passing that our also ties in with of Hoffer et al. FORMULA2 The learning rate is initialized to 0.05 and scaled down by a factor of 5 at epochs {60, 120, 160} (step decay). We use a training batch size of 100, momentum of 0.9, and a weight decay of 0.0005. Elements of the weight vector corresponding to a neuron are initialized randomly from the normal distribution N (0, 2/n) where n is the number of inputs to the neuron. We also use data augmentation by random cropping of input images. Figures 7, 8 and 9 show the Validation Loss, Training Accuracy and Training Loss respectively for the curves joining the 6 pairs discussed in Section 2.1.1. These too, confirm the overfitting or poor generalization tendency of models on the curve. We use t-SNE BID27 to visualize these 7 modes and the θ points that define the connectivity for the 6 pairs presented in Section 2.1.1, in a 2-dimensional plot in FIG0. Since t-SNE is known to map only local information correctly and not preserve global distances, we caution the reader about the limited interpretability of this visualization, it is presented simply to establish the notion of connected modes. The W n in FIG3 is equivalent to meaning it is the point on the plane (linear combination of w 70, w 150 and θ) with the least l-2 distance from the original point (iterate in this case). DISPLAYFORM0 8.3 CONNECTING MODES w 30 AND w 70 FROM SGDRIn Section 3, we present some experiments and make observations on the trajectory of SGDR by using the plane defined by the points w 70, w 150 and w 70−150. Here we plot the Training loss and Validation loss surface in FIG0 for another plane defined by SGDR's iterates w 30, w 70 and their connection w 30−70 to ensure the reader that the observations made are general enough. The VGG-16 architecture used in Section 3 does not include Batch Normalization, which has been known to alter properties of the loss surface . Therefore we train VGG-16 with Batch Normalization using SGDR to verify if our observations hold for this case too. As pointed out in Appendix A.2 of, at the test stage, we compute the Batch Normalization statistics for a network on the curve with an additional pass over the data, since these are not collected during training. Except Batch Normalization, other training parameters are kept the same as discussed for Section 3.Figure 13(a) shows the training loss for models along the line segment and MC curve joining the pair of iterates from SGDR. For the two pairs {w 30, w 150} and {w 70, w 150}, we again observe a higher training loss for models on the line segment, suggesting that for this setup too, SGDR finds paths that move over a barrier in the training loss landscape. We further evaluate the intermediate points FIG0, we present the CCA similarity plots comparing two pairs of models: epochs 10 and 150, and epochs 150 and 155. The (i, j) th block of the matrix denotes the correlation between the i th layer of the first model and the j th layer of the other. A high correlation implies that the layers learn similar representations and vice versa. We present the former to compare against the typical stepwise or linear decay of SGD, and the latter to demonstrate the immediate effect of restarting on the model. 
showed in their work that for typical SGD training, a CCA similarity plot between a partially and completed trained network reveals that the activations of the shallower layers bears closer resemblance in the two models than the deeper layers. We note that, despite the restart, a similar tendency is seen in SGDR training as well. This again suggests that the restart does not greatly impact the model, both in weights and representations, and especially so in the shallower layers. A comparison of epochs 150 and 155, i.e., before and after a restart also stands as evidence for this hypothesis. In Figure 4 (d), we show that the stability induced by warmup when training with large batches and learning rates can also be obtained by holding the FC stack frozen. This experiment was conducted on the VGG-11 network . To demonstrate the generality of our claim, we present additional experiments on two ResNet architectures: 18 and 32. The setup for this experiment is identical to the VGG-11 one with one change: instead of the learning rate being set to 2.5, which is the learning rate for SB (0.05) times the batch size increase (50×), we set it to 5.0 since SB training is better with 0.1. For the warmup case, we linearly increase the learning rate from 0 to 5 again for 20 epochs. Experiments on other configurations yielded similar . Whether these remain true also for training larger datasets, such as ImageNet, remains to be shown and is a topic of future research.
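The FC-freeze alternative to warmup examined above amounts to disabling gradients for the network's final stack during the first 20 epochs of large-batch training; a hedged PyTorch sketch follows. The attribute names (classifier for torchvision VGGs, fc for torchvision ResNets) are assumptions about the model definitions, and the epoch count follows the experiments described above.

```python
import torch.nn as nn

def head_of(model: nn.Module) -> nn.Module:
    # torchvision VGGs expose `classifier`, ResNets expose `fc`
    return model.classifier if hasattr(model, "classifier") else model.fc

def set_head_frozen(model: nn.Module, frozen: bool) -> None:
    """Freeze or unfreeze the fully-connected stack."""
    for p in head_of(model).parameters():
        p.requires_grad = not frozen

# inside the training loop:
# set_head_frozen(model, frozen=(epoch < 20))
```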
r14EOsCqKX
We use empirical tools of mode connectivity and SVCCA to investigate neural network training heuristics of learning rate restarts, warmup and knowledge distillation.
The increasing demand for neural networks (NNs) being employed on embedded devices has led to plenty of research investigating methods for training low precision NNs. While most methods involve a quantization step, we propose a principled Bayesian approach where we first infer a distribution over a discrete weight space from which we subsequently derive hardware-friendly low precision NNs. To this end, we introduce a probabilistic forward pass to approximate the intractable variational objective that allows us to optimize over discrete-valued weight distributions for NNs with sign activation functions. In our experiments, we show that our model achieves state of the art performance on several real world data sets. In addition, the ing models exhibit a substantial amount of sparsity that can be utilized to further reduce the computational costs for inference. With the advent of deep neural networks (NNs) impressive performances have been achieved in many applications such as computer vision BID13, speech recognition, and machine translation, among others. However, the performance improvements are largely attributed to increasing hardware capabilities that enabled the training of ever-increasing network architectures. On the other side, there is also a growing interest in making NNs available for embedded devices with drastic memory and power limitations -a field with plenty of interesting applications that barely profit from the tendency towards larger and deeper network structures. Thus, there is an emerging trend in developing NN architectures that allow fast and energy-efficient inference and require little storage for the parameters. In this paper, we focus on reduced precision methods that restrict the number of bits per weight while keeping the network structures at a decent size. While this reduces the memory footprint for the parameters accordingly, it can also in drastic improvements in computation speed if appropriate representations for the weight values are used. This direction of research has been pushed towards NNs that require in the extreme case only a single bit per weight. In this case, assuming weights w ∈ {−1, 1} and binary inputs x ∈ {−1, 1}, costly floating point multiplications can be replaced by cheap and hardware-friendly logical XNOR operations. However, training such NNs is inherently different as discrete valued NNs cannot be directly optimized using gradient based methods. Furthermore, NNs with binary weights exhibit their full computational benefits only in case the sign activation function is used whose derivative is zero almost everywhere, and, therefore, is not suitable for backpropagation. Most methods for training reduced precision NNs either quantize the weights of pre-trained full precision NNs BID3 or train reduced precision NNs by maintaining a set of full precision weights that are deterministically or stochastically quantized during forward or backward propagation. Gradient updates computed with the quantized weights are then applied to the full precision weights BID4 BID19 BID8. This approach alone fails if the sign activation function is used. A promising approach is based on the straight through gradient estimator (STE) BID1 which replaces the zero gradient of hard threshold functions by a non-zero surrogate derivative. This allows information in computation graphs to flow backwards such that parameters can be updated using gradient based optimization methods. 
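As an illustration of the straight-through estimator, the following sketch defines a sign activation whose backward pass uses a clipped-identity surrogate derivative. This is one common STE variant, shown only for concreteness; it is not necessarily the exact formulation used in the cited works.

```python
import torch

class SignSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # surrogate derivative: pass the gradient through unchanged,
        # but only where |x| <= 1 (the "hard tanh" clipping variant)
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

x = torch.randn(4, requires_grad=True)
y = SignSTE.apply(x).sum()
y.backward()
print(x.grad)  # nonzero wherever |x| <= 1, despite sign() having zero gradient
```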
Encouraging are presented in BID8 where the STE is applied to the weight binarization and to the sign activation function. These methods, although showing The aim is to obtain a single discrete-valued NN (top right) with a good performance. We achieve this by training a distribution over discrete-valued NNs (bottom right) and subsequently deriving a single discrete-valued NN from that distribution. (b) Probabilistic forward pass: The idea is to propagate distributions through the network by approximating a sum over random variables by a Gaussian and subsequently propagating that Gaussian through the sign activation function.convincing empirical performance, have in common that they appear rather heuristic and it is usually not clear whether they optimize any well defined objective. Therefore, it is desired to develop principled methods that support discrete weights in NNs. In this paper, we propose a Bayesian approach where we first infer a distribution q(W) over a discrete weight space from which we subsequently derive discrete-valued NNs. Thus, we can optimize over real-valued distribution parameters using gradient-based optimization instead of optimizing directly over the intractable combinatorial space of discrete weights. The distribution q(W) can be seen as an exponentially large ensemble of NNs where each NN is weighted by its probability q(W).Rather than having a single value for each connection of the NN, we now maintain a whole distribution for each connection (see bottom right of FIG0 (a)). To obtain q(W), we employ variational inference where we approximate the true posterior p(W |D) by minimizing the variational objective KL(q(W)||p(W |D)). Although the variational objective is intractable, this idea has recently received a lot of attention for real-valued NNs due to the reparameterization trick which expresses gradients of intractable expectations as expectations of tractable gradients BID20 BID12 BID2. This allows us to efficiently compute unbiased gradient samples of the intractable variational objective that can subsequently be used for stochastic optimization. Unfortunately, the reparameterization trick is only suitable for real-valued distributions which renders it unusable for our case. The recently proposed Gumbel softmax distribution BID10 BID16 overcomes this issue by relaxing one-hot encoded discrete distributions with probability vectors. Subsequently, the reparameterization trick can again be applied. However, for the sign activation function one still has to rely on the STE or similar heuristics. The log-derivative trick offers an alternative for discrete distributions to express gradients of expectations with expectations of gradients BID18. However, the ing gradient samples are known to suffer from high variance. Therefore, the log-derivative trick is typically impractical unless suitable variance reduction techniques are used. This lack of practical methods has led to a limited amount of literature investigating Bayesian NNs with discrete weights BID23.In this work, we approximate the intractable variational objective with a probabilistic forward pass (PFP) BID26 BID23 BID6 BID21. The idea is to propagate probabilities through the network by successively approximating the distributions of activations with a Gaussian and propagating this Gaussian through the sign activation function FIG0 ). This in a well-defined objective whose gradient with respect to the variational parameters can be computed analytically. 
This is true for discrete weight distributions as well as for the sign activation function with zero gradient almost everywhere. The method is very flexible in the sense that different weight distributions can be used in different layers. We utilize this flexibility to represent the weights in the first layer with 3 bits and we use ternary weights w ∈ {−1, 0, 1} in the remaining layers. In our experiments, we evaluate the performance of our model by reporting the error of (i) the most probable model of the approximate posterior q(W) and (ii) approximated expected predictions using the PFP. We show that averaging over small ensembles of NNs sampled from W ∼ q(W) can improve the performance while inference using the ensemble is still cheaper than inference using a single full precision NN. Furthermore, our method exhibits a substantial amount of sparsity that further reduces the computational overhead. Compared to BID8, our method requires less precision for the first layer, and we do not introduce a computational overhead by using batch normalization which appears to be a crucial component of their method. The paper is outlined as follows. In Section 2, we introduce the notation and formally define the PFP. Section 3 shows details of our model. Section 4 shows experiments. In Section 5 we discuss important issues concerning our model and Section 6 concludes the paper. The structure of a feed-forward NN with L layers is determined by the number of neurons DISPLAYFORM0 Here, d 0 is the dimensionality of the input, d L is the dimensionality of the output, and d l for 0 < l < L is the number of hidden neurons in layer l. A NN defines a function y = x L = f (x 0) by iteratively applying a linear transformation a l = W l x l−1 to the inputs from the previous layer followed by a non-linear function DISPLAYFORM1 which is applied element-wise to its inputs. For l = L, we use the softmax activation function smax i (a) = exp(a i)/ j exp(a j). Note that the expensive softmax function does not need to be computed at test time. where D l is a finite set.1. For a Bayesian treatment of NNs, we assume a prior distribution p(W) over the discrete weights and interpret the output of the NN after the softmax activation as likelihood p(D|W) for the data set DISPLAYFORM0 is intractable for NNs of any decent size, we employ variational inference to approximate it by a simpler distribution q(W |ν) by minimizing KL(q(W |ν)||p(W |D)) with respect to the variational parameters ν. We adopt the common mean-field assumption where the approximate posterior factorizes into a product of factors q(w|ν w) for each weight w ∈ W.2 The variational objective is usually transformed as DISPLAYFORM1 Minimizing this expression with respect to ν does not involve the intractable posterior p(W |D) and the evidence log p(D) is constant with respect to the variational parameters ν. The KL term can be seen as a regularizer that pulls the approximate posterior q(W) towards the prior distribution p(W) whereas the expected log-likelihood captures the data. While the KL term is tractable if both the prior and the approximate posterior distribution assume independence of the weights, the expected log-likelihood is typically intractable due to a sum over exponentially many terms. We propose a PFP as closed-form approximation to the expected log-likelihood. The approximation of the expected log-likelihood resembles a PFP. In particular, we have DISPLAYFORM0 where we defined DISPLAYFORM1. 
The overall aim is to successively get rid of the weights in each layer and consequently reduce the exponential number of terms to sum over. In the first approximation in, we approximate the activation distribution with Gaussians using a central limit argument. These Gaussian distributions are propagated through the sign activation function ing in Bernoulli distributions in. These two steps are iterated until a Gaussian approximation of the output activations in FORMULA5 is obtained. This integral is approximated using a second-order Taylor expansion of the log-softmax around µ a L. In the following subsections we provide more details of the individual approximations. The activations of the neurons are computed as weighted sums over the outputs from the previous layers. Since the inputs and the weights are random variables, the activations are random variables as well. Given a sufficiently large number of input neurons, we can apply a central limit argument and approximate the activation distributions with Gaussians N (a DISPLAYFORM0). For computational convenience, we further assume that the activations within a layer are independent. Assuming that the inputs x l−1 j and the weights w l ij are independent, we have DISPLAYFORM1 where DISPLAYFORM2 In case of l = 1, we assume no variance at the inputs and thus the second term of σ a l i in cancels. In the next step, the Gaussian distributions N (a DISPLAYFORM3) over the activations are transformed by the sign activation function. The expectation of the ing Bernoulli distribution with values x ∈ {−1, 1} of the sign activation function is given by µ DISPLAYFORM4 2 ) where erf denotes the error function. The raw second moment as needed for is µ (x l i) 2 = 1. After iterating this approximation up to the last layer, it remains to calculate the expectation of the log-softmax with respect to the Gaussian approximation of the output activations in. Since this integral does not allow for an analytic solution, we approximate the log-softmax by its second-order Taylor approximation around the mean µ a L with a diagonal covariance approximation, ing in DISPLAYFORM0 The maximization of the first term in enforces the softmax output of the true class to be maximized, whereas the second term becomes relevant if there is no output close to one. For softmax outputs substantially different from zero or one, the product inside the sum is substantially larger than zero and the corresponding variance is penalized. In short, the second term penalizes large output variances if their corresponding output activation means are large and close to each other. In this section we provide details of the finite weight sets D l and their corresponding prior and approximate posterior distributions, respectively. As reported in several papers BID8 BID0, it appears to be crucial to represent the weights in the first layer using a higher precision. Therefore, we use three bits for the weights in the first layer and ternary weights in the remaining layers. We use D 1 = {−0.75, −0.5, . . ., 0.75} for the first layer which can be represented as fixed point numbers using three bits. Note that |D 1 | = 7 and we could actually represent one additional value with three bits. However, for the sake of symmetry around zero, we refrain from doing so as we do not want to introduce a bias towards positive or negative values. We empirically observed this range of values to perform well for our problems with inputs x ∈ [−1, 1]. 
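The moment-propagation steps described earlier in this section (Gaussian approximation of the activations followed by the error-function expression for the sign activation) can be written compactly. The sketch below is a schematic NumPy reading of those equations under the stated independence assumptions; it omits the activation normalization, uses hypothetical ternary-weight moments, and is not the authors' implementation.

```python
import numpy as np
from scipy.special import erf

def pfp_layer(mu_x, var_x, mu_w, var_w):
    """One probabilistic forward pass layer with sign activation.

    mu_x, var_x : (d_in,)        moments of the (random) inputs
    mu_w, var_w : (d_out, d_in)  moments of the independent weight factors
    """
    # Gaussian approximation of the activations a_i = sum_j w_ij x_j
    mu_a = mu_w @ mu_x
    var_a = var_w @ var_x + var_w @ (mu_x ** 2) + (mu_w ** 2) @ var_x
    # sign activation: Bernoulli over {-1, +1}, mean given by the error function
    mu_out = erf(mu_a / np.sqrt(2.0 * var_a))
    var_out = 1.0 - mu_out ** 2          # raw second moment of sign(a) is 1
    return mu_out, var_out

# tiny example with hypothetical shifted-binomial (ternary) weight moments
rng = np.random.default_rng(0)
wp = rng.uniform(0.2, 0.8, size=(5, 8))          # binomial parameter per weight
mu_w, var_w = 2 * wp - 1, 2 * wp * (1 - wp)
mu_x = rng.uniform(-1, 1, size=8)                # first layer: deterministic inputs
mu1, var1 = pfp_layer(mu_x, np.zeros(8), mu_w, var_w)
print(mu1.shape, var1.shape)
```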
The values in D 1 can be scaled with an arbitrary factor at training time without affecting the output of the NN since only the sign of the activations is relevant. We investigated two different variational distributions q(W 1).(i) General distribution: We store for each weight the seven logits corresponding to the unnormalized log-probabilities for each of the seven values. The normalized probabilities can be recovered using the softmax function. This simple and most general distribution for finite discrete distributions has the advantage that the model can express a tendency towards individual discrete values. Consequently, we expect the maximum of the distribution to be a reasonable value that the model explicitly favors. This is fundamentally different from training full precision networks and quantizing the weights afterwards. The disadvantage of this approach is that the number of variational parameters and the computation time for the means µ w and variances σ w scales with the size of D 1.(ii) Discretized Gaussian: To get rid of the dependency of the number of parameters on |D 1 |, we also evaluated a discretized Gaussian. The distribution is parameterized by a mean m w and a variance v w and the logits of the ing discrete distribution are given by −(w − m w) 2 /(2 v w) for w ∈ D 1 FIG1 ). We denote this distribution as N D1 (m w, v w).3 This parameterization has the advantage that only two parameters are sufficient to represent the ing discrete distribution for an arbitrary size of D 1. Furthermore, the ing distribution is unimodal and neighboring values tend to have similar probabilities which appears natural. Nevertheless, there is no closedform solution for the mean µ w and variance σ w of this distribution and we have to compute a weighted sum involving the |D 1 | normalized probabilities. For the prior distribution p(W 1), we use for both aforementioned variational distributions the discretized Gaussian N D1 (0, γ) with γ being a tunable hyperparameter. Computing the KL-divergence KL(q(W 1)||p(W 1)) also requires computing a weighted sum over |D 1 | values. For the remaining layers we use ternary weights w ∈ D l = {−1, 0, 1}. We use a shifted binomial distribution, i.e. w ∼ Binomial(2, w p) − 1. This distribution requires only a single parameter w p per weight for the variational distribution. The mean µ w is given by 2w p − 1 and the variance σ w is given by 2w p (1 − w p). This makes the Bernoulli distribution an efficient choice for computing the required moments. It is convenient to select a binomial prior distribution p(w) = Binomial(2, 0.5) as it is centered at zero and we get KL(q(w)||p(w)) = |D l |(log(2w p)w p + log(2(1 − w p))(1 − w p)).These favorable properties of the binomial distribution might raise the question why it is not used in the first layer, especially since the required expectations, variances and KL-divergences are available in closed-form independent of the size of D l. We elaborate more on this in Section 5. We normalize the activations of layer l by d l−1. 4 This scales the activation means µ a towards zero and keeps the activation variances σ a independent of the number of incoming neurons. Consequently, the expectation of the Bernoulli distribution after applying the sign activation function µ x = erf(µ a /(2σ a) 1 2 ) is less prone to be in the saturated region of the error function and, thus, gradients can flow backwards in the computation graph. 
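The two first-layer parameterizations and the shifted binomial introduced above reduce to a few lines. The sketch below computes the means and variances directly from the formulas in the text; the parameter values are illustrative, and this is not the authors' code.

```python
import numpy as np

D1 = np.array([-0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75])   # 3-bit first-layer support

def discretized_gaussian_moments(m_w, v_w, support=D1):
    logits = -(support - m_w) ** 2 / (2.0 * v_w)
    p = np.exp(logits - logits.max())
    p /= p.sum()                                  # softmax over the support
    mu = (p * support).sum()
    var = (p * support ** 2).sum() - mu ** 2      # in general different from m_w, v_w
    return mu, var

def shifted_binomial_moments(w_p):
    # w ~ Binomial(2, w_p) - 1 over the ternary support {-1, 0, 1}
    probs = np.array([(1 - w_p) ** 2, 2 * w_p * (1 - w_p), w_p ** 2])
    mu = 2 * w_p - 1
    var = 2 * w_p * (1 - w_p)
    return probs, mu, var

print(discretized_gaussian_moments(m_w=0.3, v_w=0.1))
print(shifted_binomial_moments(w_p=0.5))          # centered at zero, variance 0.5
```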
Note that activation normalization influences only the PFP and does not change the classification of individual NNs W ∼ q(W) since only the sign of the activation is relevant. The variational inference objective does not allow to easily trade off between the importance of the expected log-likelihood E q(W) [log p(D|W)] and the KL term KL(q(W)||p(W)). This is problematic since there are usually many more NN weights than there are data samples and the 3 Note that the mean µw and the variance σw of N D 1 (mw, vw) are in general different from mw and vw. 4 The activation variance σa is normalized by d l−1. KL term tends to be orders of magnitudes larger than the expected log-likelihood. As a , the optimization procedure mainly focuses on keeping the approximate posterior q(W) close to the prior p(W) whereas the influence of the data is too small. We propose to counteract this issue by trading off between the expected log-likelihood and the KL term using a convex combination, i.e., DISPLAYFORM0 Here, λ ∈ is a tunable hyperparameter that can be interpreted as creating λ/(1 − λ) copies of the data set D. A similar technique is used in BID17 to avoid getting stuck in poor local optima. Another approach to counteract this issue is the KL-reweighting scheme proposed in BID2. Due to the exponential weight decay, only a few minibatches are influenced by the KL term whereas the vast majority is mostly influenced by the expected log-likelihood. We evaluated the performance of our model (NN VI) on MNIST , variants of the MNIST data set BID14, and a TIMIT data set BID27. Details about the individual data sets can be found in the supplementary material. We selected a three layer structure with d 1 = d 2 = 1200 hidden units for all experiments. We evaluated both the general parameterization and the Gaussian parameterization as variational distribution q(W >1) for the first layer (Section 3.1), and a binomial variational distribution q(W >1) for the following layers (Section 3.2). We optimized the variational distribution using ADAM BID11 without KL reweighting BID2, and using rmsprop with KL reweighting, and report the experiment ing in the better classification performance. For both optimization algorithms, we employ an exponential decay of the learning rate η where we multiply η after every epoch by a factor α ≤ 1. We use dropout BID24 with dropout rate p in for the input layer and a common dropout rate p hid for the remaining hidden layers. We normalize the activation by d l−1 p where p is either p in or p hid to consider dropout in the activation normalization. We tuned the hyperparameters λ DISPLAYFORM0 hid ∈ [0, 0.8] with 50 iterations of Bayesian optimization BID22. We report the for the single most probable model from the approximate posterior W = arg max W q(W). This model is indeed a low precision network that can efficiently be implemented in hardware. We also report by computing predictions using the PFP which can be seen as approximating the expected prediction arg max t E q(W) [p(t|x, W)] as it is desired in Bayesian inference. We compare our model with real-valued NNs (NN real) trained with batch normalization BID9, dropout, and ReLU activation function. We also evaluated the model from BID8 (NN STE) which uses batch normalization, dropout, real-valued weights in the first layer and binary weights w ∈ {−1, 1} in the subsequent layers. The binary weights and the sign activation function are handled using the STE. 
For NN (real) and NN STE, we tuned the hyperparameters η ∈ [10 −4, 10 DISPLAYFORM1, and p hid ∈ [0, 0.8] on a separate held-out validation set using 50 iterations of Bayesian optimization. The are shown in TAB0. Our model (single) performs on par with NN (real) and it outperforms NN STE on the TIMIT data set and the more challenging variants of MNIST with different kinds of artifacts. Interestingly, our model outperforms the other models on MNIST Background and MNIST Background Random by quite a large margin which could be due to the Bayesian nature of our model. The PFP outperforms the single most probable model. This is no surprise since the PFP uses all the available information from the approximate posterior q(W). Overall, the performance of the general variational distribution seems to be slightly better than the discretized Gaussian at the cost of more computational overhead at training time. On a Nvidia GTX 1080 graphics card, a training epoch on the MNIST data set took approximately 8.8 seconds for NN VI general and 7.5 seconds for NN VI Gauss compared to 1.3 seconds for NN (real). The computational bottleneck of our method is the first layer since here the moments require computing weighted sums over all discrete values w ∈ D 1. Next, we approximate the expected predictions arg max t E q(W) [p(t|x, W)] by sampling several models W ∼ q(W). We demonstrate this on the MNIST Rotated Background data set. NNs with batch normalization, dropout, sign activation function, real-valued weights in the first layer and binary weights in the remaining layers. NN VI (our method): 3 bits for the first layer and ternary weights for the remaining layers. We evaluated the single most probable model, the probabilistic forward pass (pfp), the general 3 bit distribution for the first layer, and the discretized Gaussian for the first layer. shows the classification error of Bayesian averaging over 1000 NNs sampled from the model with the best PFP performance using the discretized Gaussian. We see that the performance approaches the PFP which indicates that the PFP is a good approximation to the true expected predictions. However, the size of the ensemble needed to approach the PFP is quite large and the computation time of evaluating a large ensemble is much larger than a single PFP. Therefore, we investigated a greedy forward selection strategy, where we sample 100 NNs out of which we include only the NN in the ensemble which leads to the lowest error. This is shown in FIG1 (c). Using this strategy in a slightly better performance than Bayesian averaging. Most importantly, averaging only a few models in a decent performance increase while still allowing for faster inference than full precision NNs. Our NNs obtained by taking the most probable model from q(W) can be efficiently implemented in hardware. They require only multiplications with 3 bit fixed point values as opposed to multiplications with floating point values in NN (real) and NN STE. In the special case of image data, the inputs are also given as 8 bit fixed point numbers. By scaling the inputs and the weights from the first layer appropriately, this in an ordinary integer multiplication while leaving the output of the sign activation function unchanged. In the following layers we only have to compute multiplications as logical XNOR operations and accumulate -1 and +1 values for the activations. TAB2 shows the fraction of non-zero weights of the best performing single models from TAB0. 
Especially in the input layer where we have our most costly 3 bit weights, there are a lot of zero weights on most data sets. This can be utilized to further reduce the computational costs. For example, on the MNIST Background Random data set, evaluating a single NN requires only approximately 23000 integer multiplications and 1434000 XNOR operations instead of approximately 2393000 floating point multiplications. The presented model has many tunable parameters, especially the type of variational distributions for the individual layers, that heavily influence the behavior in terms of convergence at training time and performance at test time. The binomial distribution appears to be a natural choice for evenly spaced values with many desirable properties. It is fully specified by only a single parameter, and its mean, variance, and KL divergence with another binomial has nice analytic expressions. Furthermore, neighboring values have similar probabilities which rules out odd cases in which, for instance, there is a value with low probability in between of two values with high probability. Unfortunately, the binomial distribution is not suited for the first layer as here it is crucial to be able to set weights with high confidence to zero. However, when favoring zero weights by setting w p = 0.5, the variance of the binomial distribution takes on its largest possible value. This might not be a problem in case predictions are computed as the true expectations with respect to q(W) as in the PFP, but it in bad classification errors when deriving a single model from q(W). We also observed that using the binomial distribution in deeper layers favor the weights −1 and 1 over 0 (cf. TAB2). This might indicate that binary weights w ∈ {−1, 1} using a Bernoulli distribution could be sufficient, but in our experiments we observed this to perform worse. We believe this to stem partly from the importance of the zero weight and partly from the larger variance of 4w p (1 − w p) of the Bernoulli distribution compared to the variance of 2w p (1 − w p) of the binomial distribution. Furthermore, there is a general issue with the sign activation functions if the activations are close to zero. In this case, a small change to the inputs can cause the corresponding neuron to take on a completely different value which might have a large impact on the following layers of the NN. We found dropout to be a very helpful tool to counteract this issue. FIG1 shows histograms of the activations of the second hidden layer for both a model trained with dropout and the same model trained without dropout. We can see that without dropout the activations are much closer to zero whereas dropout introduces a much larger spread of the activations and even causes the histogram to decrease slightly in the vicinity of zero. Thus, the activations are much more often in regions that are stable with respect to changes of their inputs which makes them more robust. We believe that such regularization techniques are crucial if the sign activation function is used. We introduced a method to infer NNs with low precision weights. As opposed to existing methods, our model neither quantizes the weights of existing full precision NNs nor does it rely on heuristics to compute "approximated" gradients of functions whose gradient is zero almost everywhere. We perform variational inference to obtain a distribution over a discrete weight space from which we subsequently derive a single discrete-valued NN or a small ensemble of discrete-valued NNs. 
Our method propagates probabilities through the network which in a well defined function that allows us to optimize the discrete distribution even for the sign activation function. The weights in the first layer are modeled using fixed point values with 3 bits precision and the weights in the remaining layers have values w ∈ {−1, 0, 1}. This reduces costly floating point multiplications to cheaper multiplications with fixed point values of 3 bits precision in the first layer, and logical XNOR operations in the following layers. In general, our approach allows flexible bit-widths for each individual layer. We have shown that the performance of our model is on par with state of the art methods that use a higher precision for the weights. Furthermore, our model exhibits a large amount of sparsity that can be utilized to further reduce the computational overhead. A DATA SETS The MNIST data set BID15 contains grayscale images of size 28 × 28 showing handwritten digits. It is split into 50000 training samples, 10000 validation samples, and 10000 test samples. The task is to classify the images to digits. The pixel intensities are normalized to the range [−1, 1] by dividing through 128 and subtracting 1. We use the MNIST data set in the permutationinvariant setting where the model is not allowed to use prior knowledge about the image structure, i.e., convolutional NNs are not allowed. The variants of the MNIST data set BID14 contain images of size 28 × 28 showing images of the original MNIST data set that have been transformed by various operations in order to obtain more challenging data sets. The variants of the MNIST data set are split into 10000 training samples, 2000 validation samples and 50000 test samples. In particular, there are the following variants:• MNIST Basic: This data set has not been transformed. The data set is merely split differently into training, validation, and test set, respectively.• MNIST Background: The pixels of the images have been replaced by random image patches.• MNIST Background Random: The pixels of the images have been set to a uniformly random pixel value.• MNIST Rotated: The images are randomly rotated.• MNIST Rotated Background: The transformations from MNIST Rotated and MNIST Background are combined Some samples of the individual data sets are shown in FIG3. We also normalized the pixel intensities of these data sets to lie in the range [−1, 1]. The TIMIT data set BID27 contains samples of 92 features representing a phonetic segment. The task is to classify the phonetic segment to one of 39 phonemes. The data is split into 140173 training samples, 50735 validation samples (test) and 7211 test samples (core test). Details on data preprocessing can be found in BID5. We normalized the features to have zero mean and unit variance.
r1h2DllAW
Variational Inference for inferring a discrete distribution from which a low-precision neural network is derived
Many irregular domains such as social networks, financial transactions, neuron connections, and natural language structures are represented as graphs. In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs. However, in many of the applications, the underlying graph changes over time and existing GNNs are inadequate for handling such dynamic graphs. In this paper we propose a novel technique for learning embeddings of dynamic graphs based on a tensor algebra framework. Our method extends the popular graph convolutional network (GCN) for learning representations of dynamic graphs using the recently proposed tensor M-product technique. Theoretical that establish the connection between the proposed tensor approach and spectral convolution of tensors are developed. Numerical experiments on real datasets demonstrate the usefulness of the proposed method for an edge classification task on dynamic graphs. Graphs are popular data structures used to effectively represent interactions and structural relationships between entities in structured data domains. Inspired by the success of deep neural networks for learning representations in the image and language domains, recently, application of neural networks for graph representation learning has attracted much interest. A number of graph neural network (GNN) architectures have been explored in the contemporary literature for a variety of graph related tasks and applications (; ; ;). Methods based on graph convolution filters which extend convolutional neural networks (CNNs) to irregular graph domains are popular (; ;). Most of these GNN models operate on a given, static graph. In many real-world applications, the underlining graph changes over time, and learning representations of such dynamic graphs is essential. Examples include analyzing social networks , predicting collaboration in citation networks , detecting fraud and crime in financial networks , traffic control , and understanding neuronal activities in the brain . In such dynamic settings, the temporal interdependence in the graph connections and features also play a substantial role. However, efficient GNN methods that handle time varying graphs and that capture the temporal correlations are lacking. By dynamic graph, we mean a sequence of graphs (V, A (t), X (t) ), t ∈ {1, 2, . . ., T}, with a fixed set V of N nodes, adjacency matrices A (t) ∈ R N ×N, and graph feature matrices X (t) ∈ R N ×F where X (t) n: ∈ R F is the feature vector consisting of F features associated with node n at time t. The graphs can be weighted, and directed or undirected. They can also have additional properties like (time varying) node and edge classes, which would be stored in a separate structure. Suppose we only observe the first T < T graphs in the sequence. The goal of our method is to use these observations to predict some property of the remaining T − T graphs. In this paper, we use it for edge classification. Other potential applications are node classification and edge/link prediction. In recent years, tensor constructs have been explored to effectively process high-dimensional data, in order to better leverage the multidimensional structure of such data . Tensor based approaches have been shown to perform well in many image and video processing ap- plications;;;; ). A number of tensor based neural networks have also been investigated to extract and learn multi-dimensional representations, e.g. 
methods based on tensor decomposition , tensor-trains , and tensor factorized neural network . Recently, a new tensor framework called the tensor M-product framework (; ;) was proposed that extends matrix based theory to high-dimensional architectures. In this paper, we propose a novel tensor variant of the popular graph convolutional network (GCN) architecture , which we call TensorGCN. It captures correlation over time by leveraging the tensor M-product framework. The flexibility and matrix mimeticability of the framework, help us adapt the GCN architecture to tensor space. Figure 1 illustrates our method at a high level: First, the time varying adjacency matrices A (t) and feature matrices X (t) of the dynamic graph are aggregated into an adjacency tensor and a feature tensor, respectively. These tensors are then fed into our TensorGCN, which computes an embedding that can be used for a variety of tasks, such as link prediction, and edge and node classification. GCN architectures are motivated by graph convolution filtering, i.e., applying filters/functions to the graph Laplacian (in turn its eigenvalues) , and we establish a similar connection between TensorGCN and spectral filtering of tensors. Experimental on real datasets illustrate the performance of our method for the edge classification task on dynamic graphs. Elements of our method can also be used as a preprocessing step for other dynamic graph methods. The idea of using graph convolution based on the spectral graph theory for GNNs was first introduced by. then proposed Chebnet, where the spectral filter was approximated by Chebyshev polynomials in order to make it faster and localized. presented the simplified GCN, a degree-one polynomial approximation of Chebnet, in order to speed up computation further and improve the performance. There are many other works that deal with GNNs when the graph and features are fixed/static; see the review papers by and and references therein. These methods cannot be directly applied to the dynamic setting we consider. devised the Graph Convolutional Recurrent Network for graphs with time varying features. However, this method assumes that the edges are fixed over time, and is not applicable in our setting. proposed a method called EdgeConv, which is a neural network (NN) approach that applies convolution operations on static graphs in a dynamic fashion. Their approach is not applicable when the graph itself is dynamic. develop a temporal GCN method called T-GCN, which they apply for traffic prediction. Their method assumes the graph remains fixed over time, and only the features vary. The set of methods most relevant to our setting of learning embeddings of dynamic graphs use combinations of GNNs and recurrent architectures (RNN), to capture the graph structure and handle time dynamics, respectively. The approach in uses Long Short-Term Memory (LSTM), a recurrent network, in order to handle time variations along with GNNs. They design architectures for semi-supervised node classification and for supervised graph classification. presented a variant of GCN called EvolveGCN, where Gated Recurrent Units (GRU) and LSTMs are coupled with a GCN to handle dynamic graphs. This paper is currently the stateof-the-art. However, their approach is based on a heuristic RNN/GRU mechanism, which is not theoretically viable, and does not harness a tensor algebraic framework to incorporate time varying information. present a tensor NN which utilizes the M-product tensor framework. 
Their approach can be applied to image and other high-dimensional data that lie on regular grids, and differs from ours since we consider data on dynamic graphs. Here, we cover the necessary preliminaries on tensors and the M-product framework. For a more general introduction to tensors, we refer the reader to the review paper by. In this paper, a tensor is a three-dimensional array of real numbers denoted by boldface Euler script letters, e.g. X ∈ R I×J×T. Matrices are denoted by bold uppercase letters, e.g. X; vectors are denoted by bold lowercase letter, e.g. x; and scalars are denoted by lowercase letters, e.g. x. An element at position (i, j, t) in a tensor is denotes by subscripts, e.g. X ijt, with similar notation for elements of matrices and vectors. A colon will denote all elements along that dimension; X i: denotes the ith row of the matrix X, and X::k denotes the kth frontal slice of X. The vectors X ij: are called the tubes of X. The framework we consider relies on a new definition of the product of two tensors, called the M-product (; ;). A distinguishing feature of this framework is that the M-product of two three-dimensional tensors is also three-dimensional, which is not the case for e.g. tensor contractions . It allows one to elegantly generalize many classical numerical methods from linear algebra, and has been applied e.g. in neural networks , imaging; ), facial recognition, and tensor completion and denoising (; ;). Although the framework was originally developed for three-dimensional tensors, which is sufficient for our purposes, it has been extended to handle tensors of dimension greater than three . The following definitions 3.1-3.3 describe the M-product. Definition 3.1 (M-transform). Let M ∈ R T ×T be a mixing matrix. The M-transform of a tensor X ∈ R I×J×T is denoted by X × 3 M ∈ R I×J×T and defined elementwise as We say that X × 3 M is in the transformed space. may also be written in matrix form as, where the unfold operation takes the tubes of X and stack them as columns into a T × IJ matrix, and fold(unfold(X)) = X. Appendix A provides illustrations of how the M-transform works. Definition 3.2 (Facewise product). Let X ∈ R I×J×T and Y ∈ R J×K×T be two tensors. The I×J×T and Y ∈ R J×K×T be two tensors, and let M ∈ R T ×T be an invertible matrix. The M-product, denoted by X Y ∈ R I×K×T, is defined as In the original formulation of the M-product, M was chosen to be the Discrete Fourier Transform (DFT) matrix, which allows efficient computation using the Fast Fourier Transform (FFT) (; ; . The framework was later extended for arbitrary invertible M (e.g. discrete cosine and wavelet transforms) . A benefit of the tensor M-product framework is that many standard matrix concepts can be generalized in a straightforward manner. Definitions 3.4-3.7 extend the matrix concepts of diagonality, identity, transpose and orthogonality to tensors (; . Definition 3.5 (Identity tensor). LetÎ ∈ R N ×N ×T be defined facewise asÎ::t = I, where I is the matrix identity. The M-product identity tensor I ∈ R N ×N ×T is then defined as Definition 3.6 (Tensor transpose). The transpose of a tensor X is defined as X def = Y × 3 M −1, where Y::t = (X × 3 M)::t for each t ∈ {1, . . ., T}. Definition 3.7 (Orthogonal tensor). A tensor X ∈ R N ×N ×T is said to be orthogonal if X X = X X = I. Leveraging these concepts, a tensor eigendecomposition can now be defined (; : Definition 3.8 (Tensor eigendecomposition). 
Let X ∈ R N ×N ×T be a tensor and assume that each frontal slice (X × 3 M)::t is symmetric. We can then eigendecompose these as (X × 3 M)::t = Q::tD::tQ::t, whereQ::t ∈ R N ×N is orthogonal andD::t ∈ R N ×N is diagonal (see e.g. Theorem 8.1.1 in). The tensor eigendecomposition of X is then defined as Our approach is inspired by the first order GCN by for static graphs, owed to its simplicity and effectiveness. For a graph with adjacency matrix A and feature matrix X, a GCN layer takes the form Y = σ(ÃXW), wherẽ is the matrix identity, W is a matrix to be learned when training the NN, and σ is an activation function, e.g., ReLU. Our approach translates this to a tensor model by utilizing the M-product framework. We first introduce a tensor activation functionσ which operates in the transformed space. Definition 4.1. Let A ∈ R I×J×T be a tensor and σ an elementwise activation function. We define the activation functionσ asσ(A) We can now define our proposed dynamic graph embedding. Let A ∈ R N ×N ×T be a tensor with frontal slices A::t =à (t), whereà (t) is the normalization of A (t). Moreover, let X ∈ R N ×F ×T be a tensor with frontal slices X::t = X (t). Finally, let W ∈ R F ×F ×T be a weight tensor. We define our dynamic graph embedding as Y = A X W ∈ R N ×F ×T. This computation can also be repeated in multiple layers. For example, a 2-layer formulation would be of the form One important consideration is how to choose the matrix M which defines the M-product. For time-varying graphs, we choose M to be lower triangular and banded so that each frontal slice (A × 3 M)::t is a linear combination of the adjacency matrices A::max(1,t−b+1),..., A::t, where we refer to b as the "bandwidth" of M. This choice ensures that each frontal slice (A × 3 M)::t only contains information from current and past graphs that are close temporally. Specifically, the entries of M are set to otherwise, which implies that k M tk = 1 for each t. Another possibility is to treat M as a parameter matrix to be learned from the data. In order to avoid over-parameterization and improve the performance, we choose the weight tensor W (at each layer), such that each of the frontal slices of W in the transformed domain remains the same, i.e., (W × 3 M)::t = (W × 3 M)::t ∀t, t. In other words, the parameters in each layer are shared and learned over all the training instances. This reduces the number of parameters to be learned significantly. An embedding Y ∈ R N ×F ×T can now be used for various prediction tasks, like link prediction, and edge and node classification. In Section 5, we apply our method for edge classification by using a model similar to that used by: Given an edge between nodes m and n at time t, the predictive model is where (Y × 3 M) m:t ∈ R F and (Y × 3 M) n:t ∈ R F are row vectors, U ∈ R C×2F is a weight matrix, and C the number of classes. Note that the embedding Y is first M-transformed before the matrix U is applied to the appropriate feature vectors. This, combined with the fact that the tensor activation functions are applied elementwise in the transformed domain, allow us to avoid ever needing to apply the inverse M-transform. This approach reduces the computational cost, and has been found to improve performance in the edge classification task. Here, we present the that establish the connection between the proposed TensorGCN and spectral convolution of tensors, in particular spectral filtering and approximation on dynamic graphs. 
This is analogous to the graph convolution based on spectral graph theory in the GNNs by , , and. All proofs are provided in Appendix D. Let L ∈ R N ×N ×T be a form of tensor Laplacian defined as L def = I − A. Throughout the remainder of this subsection, we will assume that the adjacency matrices A (t) are symmetric. Following the work by, three-dimensional tensors in R M ×N ×T can be viewed as operators on N × T matrices, with those matrices "twisted" into tensors in R N ×1×T. With this in mind, we define a tensor variant of the graph Fourier transform. Definition 4.4 (Tensor-tube M-product). Let X ∈ R I×J×T and θ ∈ R 1×1×T. Analogously to the definition of the matrix-scalar product, we define X θ via (X θ) ij: Definition 4.5 (Tensor graph Fourier transform). Let X ∈ R N ×F ×T be a tensor. We define a tensor graph Fourier transform F as F (X) This is analogous to the definition of the matrix graph Fourier transform. This defines a convolution like operation for tensors similar to spectral graph convolution . Each lateral slice X:j: is expressible in terms of the set {Q :n:} N n=1 as follows: where each (Q X :j:) n1: ∈ R 1×1×T can be considered a tubal scalar. In fact, the lateral slices Q:n: form a basis for the set R N ×1×T with product; see Appendix D for further details. Definition 4.6 (Tensor spectral graph filtering). Given a signal X ∈ R N ×1×T and a function g: R 1×1×T → R 1×1×T, we define the tensor spectral graph filtering of X with respect to g as where In order to avoid the computation of an eigendecomposition, use a polynomial to approximate the filter function. We take a similar approach, and approximate g(D) with an M-product polynomial. For this approximation to make sense, we impose additional structure on g. Assumption 4.7. Assume that g: where f is defined elementwise as Proposition 4.8. Suppose g satisfies Assumption 4.7. For any ε > 0, there exists an integer K and a set {θ where · is the tensor Frobenius norm, and where As in the work of , a tensor polynomial approximation allows us to approximate X filt in without computing the eigendecomposition of L: All that is necessary is to compute tensor powers of L. We can also define tensor polynomial analogs of the Chebyshev polynomials and do the approximation in in terms of those instead of the tensor monomials D k. This is not necessary for the purposes of this paper. Instead, we note that if a degree-one approximation is used, the computation in becomes, which is analogous to the parameter choice made in the degree-one approximation by , we get If we let X contain F signals, i.e., X ∈ R N ×F ×T, and apply F filters, becomes where Θ ∈ R F ×F ×T. This is precisely our embedding model, with Θ replaced by a learnable parameter tensor W. Here, we present for edge classification on four datasets 1: The Bitcoin Alpha and OTC transaction datasets , the Reddit body hyperlink dataset , and a chess dataset . The bitcoin datasets consist of transaction histories for users on two different platforms. Each node is a user, and each directed edge indicates a transaction and is labeled with an integer between −10 and 10 which indicates the senders trust for the receiver. We convert these labels to two classes: positive (trustworthy) and negative (untrustworthy). The Reddit dataset is build from hyperlinks from one subreddit to another. Each node represents a subreddit, and each directed edge is an interaction which is labeled with −1 for a hostile interaction or +1 for a friendly interaction. 
We only consider those subreddits which have a total of 20 interactions or more. In the chess dataset, each node is a player, and each directed edge represents a match with the source node being the white player and the target node being the black player. Each edge is labeled −1 for a black victory, 0 for a draw, and +1 for a white victory. The data is temporally partitioned into T graphs, with each graph containing data from a particular time window. Both T and the time window length can vary between datasets. For each node-time pair (n, t) in these graphs, we compute the number of outgoing and incoming edges and use these two numbers as features. The adjacency tensor A is then constructed as described in Section 4. The T frontal slices of A are divided into S train training slices, S val validation slices, and S test testing slices, which come sequentially after each other; see Figure 2 and Table 2. Since the adjacency matrices corresponding to graphs are very sparse for these datasets, we apply the same technique as and add the entries of each frontal slice A::t to the following l − 1 frontal slices A::t,..., A::(t+l−1), where we refer to l as the "edge life." Note that this only affects A, and that the added edges are not treated as real edges in the classification problem. The bitcoin and Reddit datasets are heavily skewed, with about 90% of edges labeled positively, and the remaining labeled negatively. Since the negative instances are more interesting to identify (e.g. to prevent financial fraud or online hostility), we use the F1 score to evaluate the experiments on these datasets, treating the negative edges as the ones we want to identify. The classes are more well-balanced in the chess dataset, so we use accuracy to evaluate those experiments. We choose to use an embedding Y train = A::(1:Strain) X::(1:Strain) W for training. When computing the embeddings for the validation and testing data, we still need S train frontal slices of A, which we get by using a sliding window of slices. This is illustrated in Figure 2, where the green, blue and red blocks show the frontal slices used when computing the embeddings for the training, validation and testing data, respectively. The embeddings for the validation and testing data are Y val = A::(Sval+1:Strain+Sval) X::(Sval+1:Strain+Sval) W and Y test = A::(Sval+Stest+1:T) X::(Sval+Stest+1:T) W, respectively. Preliminary experiments with 2-layer architectures did not show convincing improvements in performance. We believe this is due to the fact that the datasets only have two features, and that a 1-layer architecture therefore is sufficient for extracting relevant information in the data. For training, we use the cross entropy loss function: where f (m, n, t) ∈ R C is a one-hot vector encoding the true class of the edge (m, n) at time t, and α ∈ R C is a vector summing to 1 which contains the weight of each class. Since the bitcoin and Reddit datasets are so skewed, we weigh the minority class more heavily in the loss function for those datasets, and treat α as a hyperparameter; see Appendix C for details. The experiments are implemented in PyTorch with some preprocessing done in Matlab. Our code is available at [url redacted for review]. In the experiments, we use an edge life of l = 10, a bandwidth b = 20, and F = 6 output features. 
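To make the preceding construction concrete, the sketch below implements the banded lower-triangular M, the M-transform and M-product, a one-layer embedding Y = A ★ X ★ W with the activation applied in the transformed space, and the edge-life preprocessing. It is an illustrative dense NumPy reading of the definitions with placeholder data, not the released code; in particular, the normalized adjacency slices are stood in by identity matrices.

```python
import numpy as np

def banded_M(T, b):
    """Banded lower-triangular mixing matrix: row t averages the
    slices max(1, t - b + 1), ..., t, so each row sums to one."""
    M = np.zeros((T, T))
    for t in range(T):
        lo = max(0, t - b + 1)
        M[t, lo:t + 1] = 1.0 / (t - lo + 1)
    return M

def m_transform(X, M):
    # apply M along the third (temporal) dimension of X in R^{I x J x T}
    return np.einsum('st,ijt->ijs', M, X)

def m_product(X, Y, M, M_inv):
    # transform, multiply the frontal slices facewise, transform back
    Z_hat = np.einsum('ijt,jkt->ikt', m_transform(X, M), m_transform(Y, M))
    return m_transform(Z_hat, M_inv)

def tensor_gcn_layer(A_norm, X, W, M, M_inv):
    # one-layer embedding Y = A_norm * X * W, with ReLU applied
    # elementwise in the transformed space (cf. the tensor activation)
    Y = m_product(m_product(A_norm, X, M, M_inv), W, M, M_inv)
    return m_transform(np.maximum(m_transform(Y, M), 0.0), M_inv)

def apply_edge_life(A, l=10):
    # let each observed edge persist for l time steps (preprocessing only)
    A_life = np.zeros_like(A)
    T = A.shape[2]
    for t in range(T):
        A_life[:, :, t:min(T, t + l)] += A[:, :, t:t + 1]
    return A_life

# toy dimensions: N nodes, 2 input features, 6 output features, T graphs
N, F_in, F_out, T, b = 8, 2, 6, 12, 4
M = banded_M(T, b)
M_inv = np.linalg.inv(M)
A_norm = np.stack([np.eye(N)] * T, axis=2)       # placeholder normalized adjacency
X = np.random.randn(N, F_in, T)
W = np.random.randn(F_in, F_out, T)
print(tensor_gcn_layer(A_norm, X, W, M, M_inv).shape)   # (8, 6, 12)
```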
Since the graphs in the considered datasets are directed, we also investigate the impact of symmetrizing the adjacency matrices, where the symmetrized version of an adjacency matrix A is defined as A sym def = 1/2(A + A). We compare our method with three other methods. The first one is a variant of the WD-GCN by , which they specify in Equation (8a) of their paper. For the LSTM layer in their description, we use 6 output features instead of N. This is to avoid overfitting and make the method more comparable to ours which uses 6 output features. For the final layer, we use the same prediction model as that used by for edge classification. The second method is a 1-layer variant of EvolveGCN-H by. The third method is a simple baseline which uses a 1-layer version of the GCN by. It uses the same weight matrix W for all temporal graphs. Both EvolveGCN-H and the baseline GCN use 6 output features as well. Table 3 shows the when the adjacency matrices have not been symmetrized. In this case, our method outperforms the other methods on the two bitcoin datasets and the chess dataset, with WD-GCN performing best on the Reddit dataset. Table 4 shows the for when the adjacency matrices have been symmetrized. Our method outperforms the other methods on the Bitcoin OTC dataset and the chess dataset, and performs similarly but slightly worse than the best performing methods on the Bitcoin Alpha and Reddit datasets. Overall, it seems like symmetrizing the adjacency matrices leads to lower performance. We have presented a novel approach for dynamic graph embedding which leverages the tensor Mproduct framework. We used it for edge classification in experiments on four real datasets, where it performed competitively compared to state-of-the-art methods. Future research directions include further developing the theoretical guarantees for the method, investigating optimal structure and learning of the transform matrix M, using the method for other prediction tasks, and investigating how to utilize deeper architectures for dynamic graph learning. We provide some illustrations that show how the M-transform in Definition 3.1 works. Recall that X × 3 M = fold(M unfold(X)). The matrix X is first unfolded into a matrix, as illustrated in Figure 3. This unfolded tensor is then multiplied from the left by the matrix M, as illustrated in Figure 4; the figure also illustrates the banded lower triangular structure of M. Finally, the output matrix is folded back into a tensor. The fold operation is defined to be the inverse of the unfold operation. • The Bitcoin Alpha dataset is available at https://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html. • The Bitcoin OTC dataset is available at https://snap.stanford.edu/data/soc-sign-bitcoin-otc.html. • The Reddit dataset is available at https://snap.stanford.edu/data/soc-RedditHyperlinks.html. Note that we use the dataset with hyperlinks in the body of the posts. • The chess dataset is available at http://konect.uni-koblenz.de/networks/chess. When partitioning the data into T graphs, as described in Section 5, if there are multiple data points corresponding to an edge (m, n) for a given time step t, we only add that edge once to the corresponding graph and set the label equal to the sum of the labels of the different data points. For example, if bitcoin user m makes three transactions to n during time step t with ratings 10, 2, −1, then we add a single edge (m, n) to graph t with label 10 + 2 − 1 = 11. 
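This duplicate-edge rule amounts to summing labels over repeated (source, target, time) triples. A minimal sketch, assuming edge records are given as (source, target, time, label) tuples with names chosen for illustration:

```python
from collections import defaultdict

def aggregate_edges(records):
    """Collapse duplicate (source, target, time) entries into a single
    edge whose label is the sum of the individual labels."""
    agg = defaultdict(int)
    for src, dst, t, label in records:
        agg[(src, dst, t)] += label
    return dict(agg)

# three transactions from user m to n at time t with ratings 10, 2, -1
records = [("m", "n", 4, 10), ("m", "n", 4, 2), ("m", "n", 4, -1)]
print(aggregate_edges(records))   # {('m', 'n', 4): 11}
```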
For training, we run gradient descent with a learning rate of 0.01 and momentum of 0.9 for 10,000 iterations. For each 100 iterations, we compute and store the performance of the model on the validation data. As mentioned in Section 5, the weight vector α in the loss function is treated as a hyperparameter in the bitcoin and Reddit experiments. Since these datasets all have two edge classes, let α 0 and α 1 be the weights of the minority (negative) and majority (positive) classes, respectively. Since these parameters add to 1, we have α 1 = 1 − α 0. For all methods, we repeat the bitcoin and Reddit experiments once for each α 0 ∈ {0.75, 0.76, . . ., 0.95}. For each model and dataset, we then find the best stored performance of the model on the validation data across all α 0 values. We then treat the corresponding model as the trained model, and report its performance on the testing data in Tables 3 and 4. The for the chess experiment are computed in the same way, but only for a single vector α = . Throughout this section, · will denote the Frobenius norm (i.e., the square root of the sum of the elements squared) of a matrix or tensor, and · 2 will denote the matrix spectral norm. We first provide a few further that clarify the algebraic properties of the M-product. Let R 1×1×T denote the set of 1 × 1 × T tensors. Similarly, let R N ×1×T denote the set of N × 1 × T tensors. Under the M-product framework, the set R 1×1×T play a role similar to that played by scalars in matrix algebra. With this in mind, the set R N ×1×T can be seen as a length N vector consisting of tubal elements of length T. Propositions D.1 and D.2 make this more precise. Proposition D.1 (Proposition 4.2 in). The set R 1×1×T with product, which is denoted by (, R 1×1×T), is a commutative ring with identity. Proposition D.2 (Theorem 4.1 in). The set R N ×1×T with product, which is denoted by (, R N ×1×T), is a free module over the ring (, R 1×1×T). A free module is similar to a vector space. Like a vector space, it has a basis. Proposition D.3 shows that the lateral slices of Q in the tensor eigendecomposition form a basis for (, R N ×1×T), similarly to how the eigenvectors in a matrix eigendecomposition form a basis. Proposition D.3. The lateral slices Q:n: ∈ R N ×1×T of Q in Definition 3.8 form a basis for (, R N ×1×T). Proof. Let X ∈ R N ×1×T. Note that Since each frontal face of Q × 3 M is an invertible matrix, this implies that each frontal face of S × 3 M is zero, and hence S = 0. So the lateral slices of Q are also linearly independent in (, R N ×1×T).
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylVTTVtvH
We propose a novel tensor based method for graph convolutional networks on dynamic graphs
Our main motivation is to propose an efficient approach to generate novel multi-element stable chemical compounds that can be used in real world applications. This task can be formulated as a combinatorial problem, and it takes many hours of human experts to construct, and to evaluate new data. Unsupervised learning methods such as Generative Adversarial Networks (GANs) can be efficiently used to produce new data. Cross-domain Generative Adversarial Networks were reported to achieve exciting in image processing applications. However, in the domain of materials science, there is a need to synthesize data with higher order complexity compared to observed samples, and the state-of-the-art cross-domain GANs can not be adapted directly. In this contribution, we propose a novel GAN called CrystalGAN which generates new chemically stable crystallographic structures with increased domain complexity. We introduce an original architecture, we provide the corresponding loss functions, and we show that the CrystalGAN generates very reasonable data. We illustrate the efficiency of the proposed method on a real original problem of novel hydrides discovery that can be further used in development of hydrogen storage materials. In modern society, a big variety of inorganic compositions are used for hydrogen storage owing to its favorable cost BID4. A vast number of organic molecules are applied in solar cells, organic light-emitting diodes, conductors, and sensors BID25. Synthesis of new organic and inorganic compounds is a challenge in physics, chemistry and in materials science. Design of new structures aims to find the best solution in a big chemical space, and it is in fact a combinatorial optimization problem. In this work, we focus on applications of hydrogen storage, and in particular, we challenge the problem to investigate novel chemical compositions with stable crystals. Traditionally, density functional theory (DFT) plays a central role in prediction of chemically relevant compositions with stable crystals BID22. However, the DFT calculations are computationally expensive, and it is not acceptable to apply it to test all possible randomly generated structures. A number of machine learning approaches were proposed to facilitate the search for novel stable compositions BID3. There was an attempt to find new compositions using an inorganic crystal structure database, and to estimate the probabilities of new candidates based on compositional similarities. These methods to generate relevant chemical compositions are based on recommender systems BID10. The output of the recommender systems applied in the crystallographic field is a rating or preference for a structure. A recent approach based on a combination of machine learning methods and the high-throughput DFT calculations allowed to explore ternary chemical compounds BID21, and it was shown that statistical methods can be of a big help to identify stable structures, and that they do it much faster than standard methods. Recently, support vector machines were tested to predict crystal structures BID16 showing that the method can reliably predict the crystal structure given its composition. 
It is worth mentioning that data representation of observations to be passed to a learner, is critical, and data representations which are the most suitable for learning algorithms, are not necessarily scientifically intuitive BID23.Deep learning methods were reported to learn rich hierarchical models over all kind of data, and the GANs BID8 ) is a state-of-the-art model to synthesize data. Moreover, deep networks were reported to learn transferable representations BID18. The GANs were already exploited with success in cross-domain learning applications for image processing BID13 BID12.Our goal is to develop a competitive approach to identify stable ternary chemical compounds, i.e., compounds containing three different elements, from observations of binary compounds. Nowadays, there does not exist any approach that can be applied directly to such an important task of materials science. The state-of-the-art GANs are limited in the sense that they do not generate samples in domains with increased complexity, e.g., the application where we aim to construct crystals with three elements from observations containing two chemical elements only. An attempt to learn many-to-many mappings was recently introduced by BID0, however, this promising approach does not allow to generate data of a higher-order dimension. Our contribution is multi-fold:• To our knowledge, we are the first to introduce a GAN to solve the scientific problem of discovery of novel crystal structures, and we introduce an original methodology to generate new stable chemical compositions; • The proposed method is called CrystalGAN, and it consists of two cross-domain GAN blocks with constraints integrating prior knowledge including a feature transfer step; • The proposed model generates data with increased complexity with respect to observed samples; • We demonstrate by numerical experiments on a real challenge of chemistry and materials science that our approach is competitive compared to existing methods; • The proposed algorithm is efficiently implemented in Python, and it will be publicly available shortly, as soon as the contribution is de-anonymized. This paper is organized as follows. We discuss the related work in Section 2. In Section 3, we provide the formalisation of the problem, and introduce the CrystalGAN. The of our numerical experiments are discussed in Section 4. Concluding remarks and perspectives close the paper. Our contribution is closely related to the problems of unsupervised learning and cross-domain learning, since our aim is to synthesize novel data, and the new samples are supposed to belong to an unobserved domain with an augmented complexity. In the adversarial nets framework, the deep generative models compete with an adversary which is a discriminative model learning to identify whether an observation comes from the model distribution or from the data distribution BID7. A classical GAN consists of two models, a generator G whose objective is to synthesize data and a discriminator D whose aim is to distinguish between real and generated data. The generator and the discriminator are trained simultaneously, and the training problem is formulated as a two-player minimax game. A number of techniques to improve training of GANs were proposed by; BID9; BID19.Learning cross domain relations is an active research direction in image processing. Several recent papers BID13 BID0 discuss an idea to capture some particular characteristics of one image and to translate them into another image. 
A conditional GAN for image-to-image translation is considered by. An advantage of the conditional model is that it allows to integrate underlying structure into the model. The conditional GANs were also used for multi-model tasks BID15 ). An idea to combine observed data to produce new data was proposed in BID26, e.g., an artist can mix existing pieces of music to create a new one. First domain, H is hydrogen, and A is a metal BH:Second domain, H is hydrogen, and B is another metal DISPLAYFORM0 Generator function that translates input features xAH from (domain) AH to BH GBHA 1:Generator function that translates input features xBH from (domain) BH to AH DAH and DBH:Discriminator functions of AH domain and BH domain, respectively AHB1:xAHB 1 is a sample generated by generator function GAHB 1 BHA1:yBHA 1 is a sample produced by generator function GBHA 1 AHBA1 and BHAB1: Data reconstructed after two generator translations AHBg and BHAg:Data obtained after feature transfer step from domain AH to domain BH, and from domain BH to domain AH, respectively Input data for the second step of CrystalGAN GAHB 2:Generator function that translates xAHB g Features generated in the first step from AHBg to AHB2 GBHA 2:Generator function that translates yBHA g Data generated in first step from BHAg to BHA2 DAHB and DBHA:The discriminator functions of domain AHBg and domain BHAg, respectively AHB2:xAHB 2 is a sample generated by the generator function GAHB 2 BHA2:yBHA 2 is a sample produced by the generator function GBHA 2 AHBA2 and BHAB2:Data reconstructed as a of two generators translations AHB2 and BHA2:Final new data (to be explored by human experts) An approach to learn high-level semantic features, and to train a model for more than a single task, was introduced by BID18. In particular, it was proposed to train a model to jointly learn several complementary tasks. This method is expected to overcome the problem of overfitting to a single task. An idea to introduce multiple discriminators whose role varies from formidable adversary to forgiving teacher was discussed by BID5.Several GANs were adapted to some materials science and chemical applications. So, ObjectiveReinforced GANs that perform molecular generation of carbon-chain sequence taking into consideration some desired properties, were introduced in BID20, and the method was shown to be efficient for drug discovery. Another avenue is to integrate rule-based knowledge, e.g., molecular descriptors with the deep learning. ChemNet BID6 ) is a deep neural network pre-trained with chemistry-relevant representations obtained from prior knowledge. The model can be used to predict new chemical properties. However, as we have already mentioned before, none of these methods generates crystal data of augmented complexity. In this section, we introduce our approach. The CrystalGAN consists of three procedures:1. First step GAN which is closely related to the cross-domain GANs, and that generates pseudo-binary samples where the domains are mixed. 2. Feature transfer procedure constructs higher order complexity data from the samples generated at the previous step, and where components from all domains are well-separated. 3. Second step GAN synthesizes, under geometric constraints, novel ternary stable chemical structures. First, we describe a cross-domain GAN, and then, we provide all the details on the proposed CrystalGAN. We provide all notations used by the CrystalGAN in TAB0. The GANs architectures for the first and the second steps are shown on FIG0. 
We now propose a novel architecture based on the cross-domain GAN algorithms with constraint learning to discover higher order complexity crystallographic systems. We introduce a GAN model to find relations between different crystallographic domains, and to generate new materials. To make the paper easier to follow, without loss of generality, we will present our method providing a specific example of generating ternary hydride compounds of the form "A (a metal) -H (hydrogen) -B (a metal)".Our observations are stable binary compounds containing chemical elements A+H which is a composition of some metal A and the hydrogen H, and B+H which is a mixture of another metal B with the hydrogen. So, a machine learning algorithm has access to observations {(x AHi)} DISPLAYFORM0 Our goal is to generate novel ternary, i.e. more complex, stable data x AHB (or y BHA) based on the properties learned from the observed binary structures. Below we describe the architecture of the CrystalGAN. Our approach consists of two consecutive steps with a feature transfer procedure inbetween. The first step of CrystalGAN generates new data with increased complexity. The adversarial network takes DISPLAYFORM0, and synthesizes DISPLAYFORM1 and FIG0 summarizes the first step of CrystalGAN. DISPLAYFORM2 The reconstruction loss functions take the following form: DISPLAYFORM3 Ideally, L R AH = 0, L R BH = 0, and x AHBA1 = x AH, y BHAB1 = y BH, and we minimize the distances d(x AHBA1, x AH) and d(y BHAB1, y BH).The generative adversarial loss functions of the first step of CrystalGAN aim to control that the original observations are reconstructed as accurate as possible: DISPLAYFORM4 DISPLAYFORM5 The generative loss functions contain the two terms defined above: DISPLAYFORM6 The discriminative loss functions aim to discriminate the samples coming from AH and BH: DISPLAYFORM7 DISPLAYFORM8 Now, we have all elements to define the full generative loss function of the first step: DISPLAYFORM9 where λ 1, λ 2, λ 3, and λ 4 are real-valued hyper-parameters that control the ratio between the corresponding terms, and the hyper-parameters are to be fixed by cross-validation. The full discriminator loss function of this step L D1 is defined as follows: DISPLAYFORM10 The first step generates pseudo-binary samples M H, where M is a new discovered domain merging A and B properties. Although these can be interesting for human experts, the samples generated by the first step are not easy to interpret, since the domains A and B are completely mixed in these samples, and there is no way to deduce characteristics of two separate elements coming from these domains. So, we need a second step which will generate data of a higher order complexity from two given domains. We transfer the attributes of A and B elements, this procedure is also shown on FIG0, in order to construct a new dataset that will be used as a training set in the second step of the CrystalGAN.In order to prepare the datasets to generate higher order complexity samples, we add a placeholder. (E.g., for domain AH, the fourth matrix is empty, and for domain BH, the third matrix is empty.) The second step GAN takes as input the data generated by the first step GAN and modified by the feature transfer procedure. The of the second step are samples which describe ternary chemical compounds that are supposed to be stable from chemical viewpoint. The geometric constraints control the quality of generated data. A crystallographic structure is fully described by a local distribution. 
This distribution is determined by distances to all nearest neighbors of each atom in a given crystallographic structure. We enforce the second step GAN with the following geometric constraints which satisfy the geometric conditions of our scientific domain application. The implemented constraints are also shown on FIG0. DISPLAYFORM0 be the set of distances of the first neighbors of all atoms in a crystallographic structure. There are two geometric constraints to be considered while generating new data. The first geometric (geo) constraint is defined as follows: DISPLAYFORM1 where d 1 is the minimal distance between two first nearest neighbors in a given crystallographic structure. The second geometric constraint takes the following form: DISPLAYFORM2 where d 2 is the maximal distance between two first nearest neighbors. The loss function of the second step GAN is augmented by the following geometric constraints: DISPLAYFORM3 Given x AHBg and y BHAg from the previous step, we generate: DISPLAYFORM4 The reconstruction loss functions are given: DISPLAYFORM5 DISPLAYFORM6 The generative adversarial loss functions are given by: DISPLAYFORM7 DISPLAYFORM8 The generative loss functions of the this step are defined as follows: DISPLAYFORM9 DISPLAYFORM10 The losses of the discriminator of the second step can be defined: DISPLAYFORM11 DISPLAYFORM12 Now, we have all elements to define the full generative loss function: DISPLAYFORM13 where λ 1, λ 2, λ 3, λ 4, λ 5, and λ 6 are real-valued hyper-parameters that control the influence of the terms. The full discriminative loss function of the second step L D2 takes the form: DISPLAYFORM14 To summarise, in the second step, we use the dataset issued from the feature transfer as an input containing two domains x AHBg and y BHAg. We train the cross-domain GAN taking into consideration constraints of the crystallographic environment. We integrated geometric constraints proposed by crystallographic and materials science experts to satisfy environmental constraints, and to increase the rate of synthesized stable ternary compounds. The second step is drafted on FIG0. Crystallographic structures can be represented using the POSCAR files which are input files for the DFT calculations under the VASP code BID14. These are coordinate files, they contain the lattice geometry and the atomic positions, as well as the number (or the composition) and the nature of atoms in the crystal unit cell. We use a dataset constructed from BID2 ) by experts in materials science. Our training data set contains the POSCAR files, and the proposed CrystalGAN generates also POSCAR files. Such a file contains three matrices: the first one is abc matrix, corresponding to the three lattice vectors defining the unit cell of the system, the second matrix contains atomic positions of H atom, and the third matrix contains coordinates of metallic atom A (or B).The information from the files is fed into 4-dimensional tensors. An example of a POSCAR file, and its corresponding representation for the GANs is shown on FIG1. On the same figure on the right we show the corresponding structure in 3D. Note that we increase the data complexity by the feature transfer procedure by adding placeholders. Our training dataset includes 1,416 POSCAR files of binary hydrides divided into 63 classes where each class is represented as a 4-dimensional tensor. Each class of binary M H hydride contains two elements: the hydrogen H and another element M from the periodic table. 
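The exact form in which the two geometric constraints enter the loss is given in the equations omitted above, so the sketch below only shows one natural way to relax them into a differentiable penalty: a hinge term that is zero whenever every first-neighbor distance lies in [d1, d2] and grows linearly with the size of the violation. The input is the set D of first-neighbor distances of all atoms in a generated structure; the function name and penalty form are our illustrative choices, not the authors' implementation.

import numpy as np

def geometric_penalty(distances, d1, d2):
    # Hinge-style relaxation of the two constraints: distances below d1 or above d2 are
    # penalized linearly, distances inside [d1, d2] contribute nothing.
    d = np.asarray(distances, dtype=float)
    too_close = np.maximum(0.0, d1 - d)
    too_far = np.maximum(0.0, d - d2)
    return float(np.sum(too_close + too_far))

# Example with the thresholds used later in the experiments (d1 = 1.8 A, d2 = 3 A):
print(geometric_penalty([1.2, 2.0, 2.7, 3.4], d1=1.8, d2=3.0))   # (1.8 - 1.2) + (3.4 - 3.0) = 1.0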
In our experiments, after discussions with materials science researchers, we focused on the exploration of ternary compositions "Palladium - Hydrogen - Nickel" from the binary system observations of "Palladium - Hydrogen" and "Nickel - Hydrogen". So, AH = PdH, and BH = NiH. We also considered another task, to generate ternary compounds "Magnesium - Hydrogen - Titanium". In the CrystalGAN, we need to compute all the distances of the nearest neighbors for each generated POSCAR file. The distances between hydrogen atoms H in a given crystallographic structure should respect some geometric rules, as should the distances between the atoms A−B, A−A', and B−B'. We applied the geometric constraints on the distances between the neighbors (for each atom in a crystallographic structure) introduced in the previous section. Note that the distances A−H and B−H are not penalized by the constraints. In order to compute the distances between all nearest neighbors in the generated data, we used the Python library Pymatgen BID17, specifically developed for materials analysis. For all experiments in this paper, the distances are fixed by our colleagues in crystallography and materials science to d 1 = 1.8Å (angstrom, 10 −10 meter) and d 2 = 3Å. We set all the hyperparameters by cross-validation; however, we found that a reasonable performance is reached when all λ i have similar values, quite close to 1. We use the standard Adam optimizer with learning rate α = 0.0001 and β 1 = 0.5. The number of epochs is set to 1000 (we verified that the functions converge). The mini-batch size equals 35.
Table 2: Number of ternary compositions of good quality generated by the tested methods.
Composition    GAN    DiscoGAN    CrystalGAN (without constraints)    CrystalGAN (with constraints)
Pd - Ni - H     0        0                    4                                   9
Mg - Ti - H     0        0                    2                                   8
Each block of the CrystalGAN architecture (the generators and the discriminators) is a multi-layer neural network with 5 hidden layers. Each layer contains 100 units. We use the rectified linear unit (ReLU) as the activation function of the neural network. All these parameters were fixed by cross-validation (for both chosen domains "Palladium - Hydrogen" and "Nickel - Hydrogen"). Our code is implemented in Python (TensorFlow). We run the experiments on a GPU with graphics card NVIDIA Quadro M5000. In our numerical experiments, we compare the proposed CrystalGAN with a classical GAN, the DiscoGAN BID13, and the CrystalGAN without the geometric constraints. All these GANs generate POSCAR files, and we evaluate the performance of the models by the number of generated ternary structures which satisfy the geometric crystallographic environment. Table 2 shows the number of successes for the considered methods. The classical GAN, which takes Gaussian noise as input, does not generate acceptable chemical structures. The DiscoGAN approach performs quite well if we use it to generate novel pseudo-binary structures; however, it is not adapted to synthesize ternary compositions. We observed that the CrystalGAN (with the geometric constraints) outperforms all tested methods. From multiple discussions with experts in materials science and chemistry, first, we know that the number of novel stable compounds cannot be very high, and it is already considered a success if we synthesize several stable structures which satisfy the constraints. Hence, we cannot really reason in terms of accuracy or error rate, which are widely used metrics in machine learning and data mining. Second, evaluation of a stable structure is not straightforward.
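As a companion to the architecture description above, here is a short Keras sketch of one CrystalGAN block with the reported settings: 5 hidden layers of 100 ReLU units, trained with Adam (learning rate 0.0001, β1 = 0.5). The authors' TensorFlow code is not yet public, so the input/output sizes and the use of the Keras API are our assumptions for illustration only.

import tensorflow as tf

def make_block(input_dim, output_dim, output_activation=None):
    # One generator or discriminator block: 5 hidden layers of 100 ReLU units.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(100, activation="relu", input_shape=(input_dim,)))
    for _ in range(4):
        model.add(tf.keras.layers.Dense(100, activation="relu"))
    model.add(tf.keras.layers.Dense(output_dim, activation=output_activation))
    return model

# Hypothetical sizes: a flattened POSCAR-like representation of dimension 48.
generator = make_block(input_dim=48, output_dim=48)
discriminator = make_block(input_dim=48, output_dim=1, output_activation="sigmoid")
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5)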
Given a new composition, only the results of density functional theory (DFT) calculations can tell whether this composition is stable enough, and whether it can be used in practice. However, the DFT calculations are computationally too expensive, and it is out of the question to run them on all the data we generated using the CrystalGAN. It is planned to run the DFT calculations on some pre-selected generated ternary compositions to take a final decision on the practical utility of the chemical compounds. Our goal was to develop a principled approach to generate new stable ternary crystallographic structures from observed binary structures, i.e., structures containing two chemical elements only. We propose a learning method called CrystalGAN to discover cross-domain relations in real data, and to generate novel structures. The proposed approach can efficiently integrate, in the form of constraints, prior knowledge provided by human experts. CrystalGAN is the first GAN developed to generate scientific data in the field of materials science. To our knowledge, it is also the first approach which generates data of a higher-order complexity, i.e., ternary structures where the domains are well-separated, from observed binary compounds. The CrystalGAN was, in particular, successfully tested to tackle the challenge of discovering new materials for hydrogen storage. Currently, we are investigating different GAN architectures, also including elements of reinforcement learning, to produce data of an even higher complexity, e.g., compounds containing four or five chemical elements. Note that although the CrystalGAN was developed and tested for applications in materials science, it is a general method whose constraints can be easily adapted to any scientific problem.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyEGUi05Km
"Generating new chemical materials using novel cross-domain GANs."
Given samples from a group of related regression tasks, a data-enriched model describes observations by a common parameter and per-group individual parameters. In the high-dimensional regime, each parameter has its own structure such as sparsity or group sparsity. In this paper, we consider the general form of data enrichment where data comes in a fixed but arbitrary number of tasks $G$ and any convex function, e.g., a norm, can characterize the structure of both common and individual parameters. We propose an estimator for the high-dimensional data enriched model and investigate its statistical properties. We delineate the sample complexity of our estimator and provide a high-probability non-asymptotic bound for the estimation error of all parameters under a condition weaker than the state-of-the-art. We propose an iterative estimation algorithm with a geometric convergence rate. Overall, we present a first thorough statistical and computational analysis of inference in the data enriched model. Over the past two decades, major advances have been made in estimating structured parameters, e.g., sparse, low-rank, etc., in high-dimensional small-sample problems BID13 BID6 BID14. Such estimators consider a suitable (semi) parametric model of the response: y = φ(x, β *) + ω based on n samples {(x i, y i)} n i=1, where β * ∈ R p is the true parameter of interest. The unique aspect of such a high-dimensional setup is that the number of samples n < p, and the structure in β *, e.g., sparsity, low-rank, makes the estimation possible (; BID7 BID5). In several real world problems, natural grouping among samples arises, and learning a single common model β 0 for all samples or many per-group individual models β g s is unrealistic. The middle-ground model for such a scenario is the superposition of common and individual parameters β 0 + β g, which has been of recent interest in the statistical machine learning community BID16 and is known by multiple names. It is a form of multi-task learning (; BID17 when we consider regression in each group as a task. It is also called data sharing BID15 since information contained in different groups is shared through the common parameter β 0 . And finally, it has been called data enrichment BID10 BID0 because we enrich our data set by pooling multiple samples from different but related sources. In this paper, we consider the following data enrichment (DE) model where there is a common parameter β * 0 shared between all groups plus individual per-group parameters β * g which characterize the deviation of group g: y gi = φ(x gi, (β * 0 + β * g)) + ω gi, g ∈ {1, . . ., G}, where g and i index the group and samples, respectively. Note that the DE model is a system of coupled superposition models. We specifically focus on the high-dimensional small-sample regime where the number of samples n g for each group is much smaller than the ambient dimensionality, i.e., ∀g: n g ≪ p. Similar to all other high-dimensional models, we assume that the parameters β g are structured, i.e., for suitable convex functions f g's, f g (β g) is small. Further, for the technical analysis and proofs, we focus on the case of linear models, i.e., φ(x, β) = x T β. The results extend seamlessly to more general non-linear models, e.g., generalized linear models, broad families of semi-parametric and single-index models, non-convex models, etc., using existing results, i.e., following how models like LASSO have been extended (e.g.
employing ideas such as restricted strong convexity ).In the context of Multi-task learning (MTL), similar models have been proposed which has the general form of y gi = x T gi (β * 1g + β * 2g) + ω gi where B 1 = [β 11, . . ., β 1G] and B 2 = [β 21, . . ., β 2G] are two parameter matrices . To capture relation of tasks, different types of constraints are assumed for parameter matrices. For example, BID11 assumes B 1 and B 2 are sparse and low rank respectively. In this parameter matrix decomposition framework for MLT, the most related work to ours is the one proposed by BID17 where authors regularize the regression with B 1 1,∞ and B 2 1,1 where norms are p, q-norms on rows of matrices. Parameters of B 1 are more general than DE's common parameter when we use f 0 (β 0) = β 0 1. This is because B 1 1,∞ regularizer enforces shared support of β * 1g s, i.e., supp(β * 1i) = supp(β * 1j) but allows β * 1i = β * 1j. Further sparse variation between parameters of different tasks is induced by B 2 1,1 which has an equivalent effect to DE's individual parameters where f g (·)s are l 1 -norm. Our analysis of DE framework suggests that it is more data efficient than this setup of BID17 ) because they require every task i to have large enough samples to learn its own common parameters β i while DE shares the common parameter and only requires the total dataset over all tasks to be sufficiently large. The DE model where β g's are sparse has recently gained attention because of its application in wide range of domains such as personalized medicine BID12, sentiment analysis, banking strategy BID15, single cell data analysis , road safety , and disease subtype analysis BID12. In spite of the recent surge in applying data enrichment framework to different domains, limited advances have been made in understanding the statistical and computational properties of suitable estimators for the data enriched model. In fact, non-asymptotic statistical properties, including sample complexity and statistical rates of convergence, of regularized estimators for the data enriched model is still an open question BID15 ). To the best of our knowledge, the only theoretical guarantee for data enrichment is provided in where authors prove sparsistency of their proposed method under the stringent irrepresentability condition of the design matrix for recovering supports of common and individual parameters. Existing support recovery guarantees , sample complexity and l 2 consistency BID17 of related models are restricted to sparsity and l 1 -norm, while our estimator and norm consistency analysis work for any structure induced by arbitrary convex functions f g. Moreover, no computational , such as rates of convergence of the optimization algorithms associated with proposed estimators, exist in the literature. We denote sets by curly V, matrices by bold capital V, random variables by capital V, and vectors by small bold v letters. We take DISPLAYFORM0 Given G groups and n g samples in each as DISPLAYFORM1, we can form the per group design matrix X g ∈ R ng×p and output vector y g ∈ R ng.The total number of samples is n = G g=1 n g. The data enriched model takes the following vector form: DISPLAYFORM2 where each row of X g is x T gi and ω T g = (ω g1, . . ., ω gng) is the noise vector. A random variable V is sub-Gaussian if its moments satisfies ∀p ≥ 1: DISPLAYFORM3 is sub-Gaussian if the one-dimensional marginals v, u are sub-Gaussian random variables for all u ∈ R p. 
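Before introducing the estimator, a small NumPy simulation of the vector-form model y g = X g (β * 0 + β * g) + ω g may help fix ideas. The dimensions, sparsity levels, and random seed below are illustrative choices of ours, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
p, G = 200, 4
n_g = [60, 50, 40, 30]                     # per-group sample sizes, all much smaller than p

def sparse_vec(p, s, rng):
    # A random s-sparse vector of length p.
    v = np.zeros(p)
    idx = rng.choice(p, size=s, replace=False)
    v[idx] = rng.normal(size=s)
    return v

beta0 = sparse_vec(p, 10, rng)                        # common parameter, shared by all groups
betas = [sparse_vec(p, 5, rng) for _ in range(G)]     # per-group individual parameters

Xs, ys = [], []
for g in range(G):
    X = rng.normal(size=(n_g[g], p))       # isotropic (sub-)Gaussian design
    w = rng.normal(size=n_g[g])            # zero-mean, unit-variance noise
    Xs.append(X)
    ys.append(X @ (beta0 + betas[g]) + w)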
The sub-Gaussian norm of v is defined as, 2018), where the expectation is over g ∼ N (0, I p×p), a vector of independent zeromean unit-variance Gaussian. DISPLAYFORM4 We propose the following Data Enrichment (DE) estimatorβ for recovering the structured parameters where the structure is induced by convex functions f g (·): DISPLAYFORM0 We present several statistical and computational for the DE estimator of the data enriched model:• The DE estimator succeeds if a geometric condition that we call Data EnRichment Incoherence Condition (DERIC) is satisfied, FIG1. Compared to other known geometric conditions in the literature such as structural coherence BID16 and stable recovery conditions BID18, DERIC is a weaker condition, FIG1.• Assuming DERIC holds, we establish a high probability non-asymptotic bound on the weighted sum of parameterwise estimation error, δ g =β g − β * g as: DISPLAYFORM1 where n 0 n is the total number of samples, γ max g∈ [G] n ng is the sample condition number, and C g is the error cone corresponding to β * g exactly defined in Section 2. To the best of our knowledge, this is the first statistical estimation guarantee for the data enrichment. • We also establish the sample complexity of the DE estimator for all parameters as ∀g ∈ [G]: DISPLAYFORM2 We emphasize that our proofs that the recovery of the common parameter β 0 by DE estimator benefits from all of the n pooled samples.• We present an efficient projected block gradient descent algorithm DICER, to solve DE's objective which converges geometrically to the statistical error bound of. To the best of our knowledge, this is the first rigorous computational for the high-dimensional data-enriched regression. A compact form of our proposed DE estimator is: DISPLAYFORM0 where y = (y DISPLAYFORM1 Example 1. (L 1 -norm) When all parameters β g s are s gsparse, i.e.,|supp(β * g)| = s g by using l 1 -norm as the sparsity inducing function, DE instantiates to the spare DE: DISPLAYFORM2 Consider the group-wise estimation error δ g =β g − β * g. Sinceβ g = β * g + δ g is a feasible point of, the error vector δ g will belong to the following restricted error set: DISPLAYFORM3 We denote the cone of the error set as C g Cone(E g) and the spherical cap corresponding to it as DISPLAYFORM4 following two subsets of C play key roles in our analysis: DISPLAYFORM5 DISPLAYFORM6 Using optimality ofβ, we can establish the following deterministic error bound. DISPLAYFORM7 The main assumptions of Theorem 1 is known as Restricted Eigenvalue (RE) condition in the literature of high dimensional statistics; ): DISPLAYFORM0 Here, we show that for the design matrix X defined in, the RE condition holds with high probability under a suitable geometric condition we call Data EnRichment Incoherence Condition (DERIC) and for enough number of samples. For the analysis, similar to existing work (; BID19 BID16, we assume the design matrix to be isotropic sub-Gaussian. 1 Definition 1. We assume x gi are i.i.d. random vectors from a non-degenerate zero-mean, isotropic sub-Gaussian distribution. In other words, DISPLAYFORM1 Further, we assume noise ω gi are i.i.d. zero-mean, unit-variance sub-Gaussian with |||ω gi ||| ψ2 ≤ K.). There exists a non-empty set I ⊆ [G] \ of groups where for some scalars 0 <ρ ≤ 1 and λ min > 0 the following holds: i∈I n i ≥ ρn. 2. 
∀i ∈ I, ∀δ i ∈ C i, and δ 0 ∈ C 0: DISPLAYFORM0 Using DERIC and the small ball method BID19, a recent tool from empirical process theory in the following theorem, we elaborate the sample complexity required for satisfying the RE condition: Theorem 2. Let x gi s be random vectors defined in Definition 1. Assume DERIC condition of Definition 2 holds for error cones C g s and ψ I = λ minρ /3. Then, for all δ ∈ H, when we have enough number of samples as ∀g DISPLAYFORM1, with probability at least 1 − e −nκmin/4 we have inf δ∈H DISPLAYFORM2 4 is the lower bound of the RE condition. Example 2. (L 1 -norm) The Gaussian width of the spherical cap of a p-dimensional s-sparse vector is ω(A) = Θ(√ s log p) ). Therefore, the number of samples per group and total required for satisfaction of the RE condition in the sparse DE estimator FORMULA10 is ∀g ∈ [G]: n g ≥ m g = Θ(s g log p). Here, we provide a high probability upper bound for the deterministic upper bound of Theorem 1 and derive the final estimation error bound. Theorem 3. Assume x gi and ω gi distributed according to Definition 1 and τ > 0, then with probability at least 1 DISPLAYFORM0 we have: DISPLAYFORM1 The following corollary characterizes the general error bound and from the direct combination of Theorem 1, Theorem 2, and Theorem 3. Corollary 1. For x gi and ω gi described in Definition 1 and τ > 0 when we have enough number of samples ∀g ∈ [G]: n g > m g which lead to κ > 0, the following general error bound holds with high probability for estimator: DISPLAYFORM2 Example 3. (L 1 -norm) For the sparse DE estimator of FORMULA10, of Theorem 2 and 3 translates to the following: For enough number of samples as ∀g DISPLAYFORM3, the error bound of simplifies to: DISPLAYFORM4 Therefore, individual errors are bounded as δ g 2 = O((max g∈ [G] s g ) log p/n g ) which is slightly worse than DISPLAYFORM5 for g=1 to G do 5: DISPLAYFORM6 end for 7: DISPLAYFORM7 O(s g log p/n g), the well-known error bound for recovering an s g -sparse vector from n g observations using LASSO or similar estimators BID8 BID4 BID9 BID2. Note that max g∈ [G] s g (instead of s g) is the price we pay to recover the common parameter β 0. We propose Data enrIChER (DICER) a projected block gradient descent algorithm, Algorithm 1, where Π Ω fg is the Euclidean projection onto the set DISPLAYFORM0 To analysis convergence properties of DICER, we should upper bound the error of each iteration. Let's δ (t) = β (t) − β * be the error of iteration t of DICER, i.e., the distance from the true parameter (not the optimization minimum,β). We show that δ (t) 2 decreases exponentially fast in t to the statistical error δ 2 = β − β * 2. DISPLAYFORM1 2, updates of the Algorithm 1 obey the following with high probability: DISPLAYFORM2 where r(τ) < 1. Corollary 2. For enough number of samples, iterations of DE algorithm with step sizes µ 0 = Θ(1 n) and µ g = Θ(1 √ nng) geometrically converges to the following with high probability: DISPLAYFORM3 which is a scaled variant of statistical error bound determined in Corollary 1. In this Section we present detail proof for each theorem and proposition. To avoid cluttering, during our proofs, we state some needed as lemmas and provide their proof in the next Section B. Proof. Starting from the optimality inequality, for the lower bound with the set H we get: DISPLAYFORM0 2 is known as Restricted Eigenvalue (RE) condition. The upper bound will factorize as: DISPLAYFORM1 Putting together inequalities FORMULA6 and FORMULA8 completes the proof. A.2. 
Proof of Proposition 1 Proposition 1. Assume observations distributed as defined in Definition 1 and pair-wise SC conditions are satisfied. Consider each superposition model in isolation; to recover the common parameter β * 0 requires at least one group i to have n i = O(ω 2 (A 0)). To recover the rest of individual parameters, we need ∀g = i: DISPLAYFORM2 Proof. Consider only one group for regression in isolation. Note that y g = X g (β * g + β * 0) + ω g is a superposition model and as shown in BID16 ) the sample complexity required for the RE condition and subsequently recovering DISPLAYFORM3 Let's simplify the LHS of the RE condition: DISPLAYFORM0 where the first inequality is due to Lyapunov's inequality. To avoid cluttering we denote δ 0g = δ 0 + δ g where δ 0 ∈ C 0 and δ g ∈ C g. Now we add and subtract the corresponding per-group marginal tail function, DISPLAYFORM1 where ξ g > 0. Let ξ g = δ 0g 2 ξ then the LHS of the RE condition reduces to: DISPLAYFORM2 For the ease of exposition we have written the LHS of as the difference of two terms, i.e., t 1 (X) − t 2 (X) and in the followings we lower bound the first term t 1 and upper bound the second term t 2.A.3.1. LOWER BOUNDING THE FIRST TERM Our main is the following lemma which uses the DERIC condition of the Definition 2 and provides a lower bound for the first term t 1 (X):Lemma 1. Suppose DERIC holds. Let ψ I = λminρ 3. For any δ ∈ H, we have: DISPLAYFORM3 which implies that t 1 (X) = inf δ∈H G g=1 n G n ξ g Q 2ξg (δ 0g) satisfies the same RHS bound of. Let's focus on the second term, i.e., t 2 (X). First we want to show that the second term satisfies the bounded difference property defined in Section 3.2. of BID3. In other words, by changing each of x gi the value of t 2 (X) at most change by one. First, we rewrite t 2 as follows: DISPLAYFORM0 where g (x 11, . . ., x jk, . . ., DISPLAYFORM1 To avoid cluttering let's X = {x 11, . . ., x jk, . . ., x Gn G}. We want to show that t 2 has the bounded difference property, meaning: DISPLAYFORM2 for some constant c i. Note that for bounded functions f, g: X → R, we have | sup X f − sup X g| ≤ sup X |f − g|. Therefore: DISPLAYFORM3 Note that for δ ∈ H we have δ 0 2 + ng n δ g 2 ≤ 1 which in δ 0 2 ≤ 1 and δ g 2 ≤ n ng. Now, we can invoke the bounded difference inequality from Theorem 6.2 of BID3 which says that with probability at least 1 − e −τ 2 /2 we have: DISPLAYFORM4 Having this concentration bound, it is enough to bound the expectation of the second term. Following lemma provides us with the bound on the expectation. Lemma 2. For the random vector x of Definition 1, we have the following bound: DISPLAYFORM5 Set n 0 = n. Putting back bounds of t 1 (X) and t 2 (X) together from Lemma 1 and 2, with probability at least 1 − e − τ 2 2 we have: DISPLAYFORM0 Note that all κ g s should be bounded away from zero. To this end we need the follow sample complexities: DISPLAYFORM1 Taking ξ = α 6 we can simplify the sample complexities to the followings: DISPLAYFORM2 Finally, to conclude, we take τ = √ nκ min /2. Proof. From now on, to avoid cluttering the notation assume ω = ω 0. We massage the equation as follows: DISPLAYFORM0, δg δg 2 n ng ω g 2 and a g = ng n δ g 2. Then the above term is the inner product of two vectors a = (a 0, . . ., a G) and b = (b 0, . . ., b G) for which we have: DISPLAYFORM1 Now we can go back to the original form: DISPLAYFORM2, u g and e g (τ) = DISPLAYFORM3 Then from, we have: DISPLAYFORM4 To simplify the notation, we drop arguments of h g for now. 
From the union bound we have: DISPLAYFORM5 where σ = max g∈ [G] σ g and the last inequality is a of the following lemma:Lemma 3. For x gi and ω gi defined in Definition 1 and τ > 0, with probability at least 1 − DISPLAYFORM6 we have: DISPLAYFORM7 where σ g, η g, ζ g and g are group dependent constants. Proof. To analysis convergence properties of DICER, we should upper bound the error of each iteration. Let's δ (t) = β (t) −β * be the error of iteration t of DICER, i.e., the distance from the true parameter (not the optimization minimum,β). We show that δ (t) 2 decreases exponentially fast in t to the statistical error δ 2 = β − β * 2. We first start with the required definitions for our analysis. Definition 3. We define the following positive constants as functions of step sizes µ g > 0: DISPLAYFORM0 where DISPLAYFORM1 is the intersection of the error cone and the unit ball. In the following theorem, we establish a deterministic bound on iteration errors δ (t) g 2 which depends on constants defined in Definition 3.Theorem 5. For Algorithm 1 initialized by β = 0, we have the following deterministic bound for the error at iteration t + 1: DISPLAYFORM2 DISPLAYFORM3 The RHS of FORMULA2 consists of two terms. If we keep ρ < 1, the first term approaches zero fast, and the second term determines the bound. In the following, we show that for specific choices of step sizes µ g s, the second term can be upper bounded using the analysis of Section 4. More specifically, the first term corresponds to the optimization error which shrinks in every iteration while the second term is constant times the upper bound of the statistical error characterized in Corollary 1. Therefore, if we keep ρ below one, the estimation error of DE algorithm geometrically converges to the approximate statistical error bound. One way for having ρ < 1 is to keep all arguments of max(· · ·) defining ρ strictly below 1. To this end, we first establish high probability upper bound for ρ g, η g, and φ g (in the Appendix A.6) and then show that with enough number of samples and proper step sizes µ g, ρ can be kept strictly below one with high probability. In the following lemma we establish a recursive relation between errors of consecutive iterations which leads to a bound for the tth iteration. Lemma 4. We have the following recursive dependency between the error of t + 1th iteration and tth iteration of DE: DISPLAYFORM4 By recursively applying the of Lemma 4, we get the following deterministic bound which depends on constants defined in Definition 3: DISPLAYFORM5 where DISPLAYFORM6 µg φ g. We have: DISPLAYFORM7 A.6. Proof of Theorem 4Proof. First we need following two lemmas which are proved separately in the following sections. Lemma 5. Consider a g ≥ 1, with probability at least 1 − 6 exp −γ g (ω(A g) + τ ) 2 the following upper bound holds: DISPLAYFORM8 Lemma 6. Consider a g ≥ 1, with probability at least 1 − 4 exp −γ g (ω(A g) + τ ) 2 the following upper bound holds: DISPLAYFORM9 Note that Lemma 3 readily provides a high probability upper bound for η g (1/(a g n g)) as DISPLAYFORM10 where DISPLAYFORM11 Remember the following two to upper bound ρ g s and φ g s from Lemmas 5 and 6: DISPLAYFORM12 First we want to keep ρ 0 + G g=1 ng n φ g of FORMULA2 strictly below 1. DISPLAYFORM13 Remember that a g ≥ 1 was arbitrary. 
So we pick it as a g = 2 n ng 1 + c 0g DISPLAYFORM14 (because we need a g ≥ 1) and the condition becomes: DISPLAYFORM15 We want to upper bound the RHS by 1/θ f which will determine the sample complexity for the shared component: DISPLAYFORM16 Note that any lower bound on the RHS of will lead to the correct sample complexity for which the coefficient of δ DISPLAYFORM17 (determined in) will be below one. Since a 0 ≥ 1 we can ignore the first term by assuming max g∈[G] \ b g ≤ 1 and the condition becomes: DISPLAYFORM18 which can be simplified to: DISPLAYFORM19 DISPLAYFORM20 Secondly, we want to bound all of ρ g + µ 0 n ng φg µg terms of for µ g = 1 agng by 1: DISPLAYFORM21 The condition becomes: DISPLAYFORM22 Remember that we chose a g = 2b DISPLAYFORM23. We substitute the value of a g by keeping in mind the constraints for the b g and the condition reduces to: DISPLAYFORM24 Note that any positive lower bound of the d g will satisfy the condition in and the is a valid sample complexity. In the following we show that d g > 1.We have a 0 ≥ 1 condition from, so we take DISPLAYFORM25 and look for a lower bound for d g: DISPLAYFORM26 (a g from) = 2b DISPLAYFORM27 The term inside of the last bracket is always positive and therefore a lower bound is one, i.e., d g ≥ 1. From the condition we get the following sample complexity: DISPLAYFORM28 Now we need to determine b g from previous conditions, knowing that a 0 = 4 max g∈ DISPLAYFORM29. We have 0 < b g ≤ 1 in and we take the largest step by setting b g = 1.Here we summarize the setting under which we have the linear convergence: DISPLAYFORM30 Now we rewrite the same analysis using the tail bounds for the coefficients to clarify the probabilities. To simplify the notation, let r g1 = and r g (τ) = r g1 + ng n ag a0 r g2, ∀g ∈ [G] \, and r(τ) = max g∈[G] r g. All of which are computed using a g s specified in. Basically r is an instantiation of an upper bound of the ρ defined in using a g s in.We are interested to upper bound the following probability: DISPLAYFORM31 where the first inequality comes from the deterministic bound of, We first focus on bounding the first term P (ρ ≥ r(τ)): DISPLAYFORM32 Now we focus on bounding the second term: DISPLAYFORM33, g ∈ [G] and a g ≥ 1: DISPLAYFORM34 where we used the intermediate form of Lemma 3 for τ > 0. Putting all of the bounds,, and back into the: DISPLAYFORM35 where υ = max(28, σ) and γ = min g∈ [G] γ g and τ = t + max(, γ −1/2) log(G + 1) where = k max g∈[G] η g. Note that τ = t + C log(G + 1) increases the sample complexities to the followings: DISPLAYFORM36 and it also affects step sizes as follows: DISPLAYFORM37 Here, we present proofs of each lemma used during the proofs of theorems in Section A. Proof. LHS of FORMULA10 is the weighted summation of ξ g Q 2ξg (δ 0g) = δ 0g 2 ξP(| x,, δ 0g / δ 0g 2 | > 2ξ) = δ 0g 2 ξQ 2ξ (u) where ξ > 0 and u = δ 0g / δ 0g 2 is a unit length vector. So we can rewrite the LHS of as: DISPLAYFORM0 With this observation, the lower bound of the Lemma 1 is a direct consequence of the following two :Lemma 7. Let u be any unit length vector and suppose x obeys Definiton 1. Then for any u, we have DISPLAYFORM1 Lemma 8. Suppose Definition 2 holds. Then, we have: DISPLAYFORM2 Proof. 
Consider the following soft indicator function which we use in our derivation: DISPLAYFORM3 0, |s| ≤ a (|s| − a)/a, a ≤ |s| ≤ 2a 1, 2a < |s|Now: DISPLAYFORM4 Eψ ξg (x, δ 0g) − ψ ξg (x gi, δ 0g)≤ 2E sup DISPLAYFORM5 where gi are iid copies of Rademacher random variable which are independent of every other random variables and themselves. Now we add back 1 n and expand δ 0g = δ 0 + δ g: DISPLAYFORM6 1 √ n g gi x gi, δ g (n 0 := n, 0i := 0, x 0i := x i) = 2 √ n E sup DISPLAYFORM7 Note that the h gi is a sub-Gaussian random vector which let us bound the E sup using the Gaussian width in the last step. Proof. To avoid cluttering let h g (ω g, X g) = n ng ω g 2 sup ug∈Ag X T g ωg ωg 2, u g, e g = ζ g kω(A g) + g √ log G + τ, where s g = n ng(2K 2 + 1)n g.P (h g (ω g, X g) > e g s g ) = P h g (ω g, X g) > e g s g n n g ω g 2 > s g P n n g ω g 2 > s g+ P h g (ω g, X g) > e g s g n n g ω g 2 < s g P n n g ω g 2 < s g ≤ P n n g ω g 2 > s g + P h g (ω g, X g) > e g s g n n g ω g 2 < s g ≤ P ω g 2 > (2K 2 + 1)n g + P sup DISPLAYFORM8, u g > e g ≤ P ω g 2 > (2K 2 + 1)n g + sup DISPLAYFORM9 Let's focus on the first term. Since ω g consists of i.i.d. centered unit-variance sub-Gaussian elements with |||ω gi ||| ψ2 < K, ω 2 gi is sub-exponential with |||ω gi ||| ψ1 < 2K 2. Let's apply the Bernstein's inequality to ω g DISPLAYFORM10 We also know that E ω g 2 2 ≤ n g ) which gives us: DISPLAYFORM11 Finally, we set τ = 2K 2 n g: P ω g 2 > (2K 2 + 1)n g ≤ 2 exp (−ν g n g) = 2 (G + 1) exp (−ν g n g + log(G + 1))Now we upper bound the second term of. Given any fixed v ∈ S p−1, X g v is a sub-Gaussian random vector with X T g v ψ2 ≤ C g k. From Theorem 9 of for any v ∈ S p−1 we have: DISPLAYFORM12 where φ g = sup ug∈Ag u g 2 and in our problem φ g = 1. We now substitute t = τ + g log(G + 1) where g = θ g C g k. DISPLAYFORM13
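To close this appendix, the following NumPy sketch revisits the DICER updates of Algorithm 1: the common block is updated with the pooled gradient and step size of order 1/n, each individual block with its own group and step size of order 1/sqrt(n n_g), and every update is followed by the projection Π Ω fg. We use l1-balls as the example of Ω fg, and treat the radii f g (β * g) as known; the step-size constants and problem sizes are illustrative simplifications, not the paper's settings.

import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection onto {x : ||x||_1 <= radius}, used as an example of the projection
    # onto Omega_{f_g} when f_g is the l1 norm.
    if np.sum(np.abs(v)) <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def dicer(Xs, ys, radii, n_iter=300, mu0=None, mus=None):
    # Projected block gradient descent: update the common block beta_0 with all samples,
    # then each individual block beta_g with its own group, projecting after every update.
    # Factors of 2 in the least-squares gradients are absorbed into the step sizes.
    G = len(Xs)
    p = Xs[0].shape[1]
    n = sum(X.shape[0] for X in Xs)
    mu0 = mu0 if mu0 is not None else 0.5 / n
    mus = mus if mus is not None else [0.5 / np.sqrt(n * X.shape[0]) for X in Xs]
    beta0 = np.zeros(p)
    betas = [np.zeros(p) for _ in range(G)]
    for _ in range(n_iter):
        grad0 = sum(Xs[g].T @ (Xs[g] @ (beta0 + betas[g]) - ys[g]) for g in range(G))
        beta0 = project_l1_ball(beta0 - mu0 * grad0, radii[0])
        for g in range(G):
            grad_g = Xs[g].T @ (Xs[g] @ (beta0 + betas[g]) - ys[g])
            betas[g] = project_l1_ball(betas[g] - mus[g] * grad_g, radii[g + 1])
    return beta0, betas

# Tiny usage example with synthetic data; radii are set to the true l1 norms as in the analysis.
rng = np.random.default_rng(0)
p, G, m = 20, 3, 100
beta0_true = np.zeros(p); beta0_true[:5] = 1.0
betas_true, Xs, ys = [], [], []
for g in range(G):
    bg = np.zeros(p); bg[5 + g] = 1.0
    betas_true.append(bg)
    X = rng.normal(size=(m, p))
    Xs.append(X)
    ys.append(X @ (beta0_true + bg) + 0.1 * rng.normal(size=m))
radii = [np.sum(np.abs(beta0_true))] + [np.sum(np.abs(b)) for b in betas_true]
b0_hat, b_hat = dicer(Xs, ys, radii)
print(np.linalg.norm(b0_hat - beta0_true))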
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Byx1S9Bj3N
We provide an estimator and an estimation algorithm for a class of multi-task regression problem and provide statistical and computational analysis..
Autonomous vehicles are becoming more common in city transportation. Companies will begin to find a need to teach these vehicles smart city fleet coordination. Currently, simulation-based modeling along with hand-coded rules dictates the decision making of these autonomous vehicles. We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning. In this paper, we discuss our work for solving this system by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them. We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives, such as navigating to a charging station when their energy level is low. The two evaluations presented show that our solution successfully teaches agents cooperation policies while balancing multiple objectives. Many business problems that exist in today's environment consist of multiple decision makers either collaborating or competing towards a particular goal. In this work, the challenge is applying multi-agent systems to autonomous fleet control. As Autonomous Vehicles (AVs) are becoming more prevalent, companies controlling these fleets, such as Uber/Lyft, will need to teach these agents to make optimal decisions. The goal of this work is to teach these agents/cars optimal relocation strategies that will maximize the efficiency of the fleet while satisfying customer trip demand. Traditional solutions use discrete event simulation modeling to optimize over a chosen objective function. This approach requires various hand-coded rules as well as assumptions to help the model converge on a solution. This becomes an extremely difficult problem when there are many outside environment dynamics that can influence an agent's/car's decision making (e.g., charging, parking). Furthermore, a solution to a particular environment may become outdated with new incoming information (e.g., a new demand distribution). An algorithm that can adapt and learn decision making organically is needed for these types of problems, and recent work in Reinforcement Learning, and particularly Deep Reinforcement Learning, has been shown to be effective in this space. DeepMind's recent success with Deep Q-Learning (DQN) demonstrated human-level performance on many Atari 2600 games, a task that was previously difficult because of its highly dimensional, unstructured data. In this work, we will pull from prior work in Multi-Agent Deep Reinforcement Learning (MA-DRL) and extend it to our multi-agent system of cars and fleet coordination. We will represent the city environment that holds the cars and customers as an image-like state representation where each layer holds specific information about the environment. We will then introduce our work on applying this to a partially observable environment where agents can only see a certain distance around them, and show how this helps with scaling up. Along with that, we will show how we took advantage of Transfer Learning to teach agents multiple objectives, in particular charging, an important aspect of AVs. Our results show that we are successfully able to teach coordination strategies with other cars so that they can optimize the utility of each car.
Finally, we are also able to teach agents the second object of keeping itself alive while not losing the previous objective of picking up customers. The domain of Deep Reinforcement Learning has garnered increased attention recently because of its effectiveness in solving highly dimensional problem spaces. Although its sub-field of multiagent systems presents a difficult challenge in how we represents other agents in the environment. Since these agents are non-stationary, how do we train agents to intelligently cooperate and find an optimal policy when each agent's rewards depend on one another? [Tampuu, Ardi] builds on DeepMind success with the Atari game pong and tests cooperation and competitive policies with each player. [Foerster, Jakob] proposes two novel approaches (RIAL,DIAL) which allows error to backpropagate through multiple agents. Although, this approach has only been tested on riddles where there is a small number of agents and does not seem to scale well to larger problems. [Palmer, Gregory] solves the experience replay memory becoming outdated problem of multi-agent system. [Egorov, Maxim] introduces a novel approach to solve multi-agent systems through convolutional neural networks, stochastic policies, residual networks and is what we will building our solution for the large scale autonomous fleet control on. We will extend some of this work to partially observable and multi-objective case of our agents. The first step in training a Reinforcement Learning model to to create an environment that is representative of the real world characteristics of your problem. In our case, we needed to create an environment that represents the city dynamics that are associated with the Uber/Lyft ride sharing platform. The general goal of each agent is to travel to customers on the map and fulfill that demand. The different objects and locations that are present in this environment are as follows and can be referenced in the example figure 1.Car (Blue Circle): This is the agent or decision maker that will be acting in the environment. Its actions are right, left, up, down, and stay. The car also has an associated energy level that decrements every timestep until it runs out of batter. Agents can occupy the same space as other agents without collision. Agents can not travel outside of the defined map or into obstacles on the map. The goal location of where the Agents need to travel to. Once an agents location matched a customers location, then the customer will be considered fulfilled. Customers also have a drop-off location and travel time associated with them that determines how long an agent/car is removed from the system while in transit. Each customer begins with a random wait time of 5-15 timesteps which decrements every step. Once the wait time has been expired the customer is expire and be removed from the map. Finally, customers are generated from a distribution of demand data that we created based on real trip data for the test city location. Each timestep, new customers are generated by pulling from this distribution. Obstacles (Gray Square): These are the locations on the map that the agents cannot travel to. Customer also will not appear at these locations. Charging Stations (Yellow Square): These are the locations that agents can travel to refill their energy levels. Agents must choose the Stay action at these locations for of their battery to be refilled. Customers can appear at these locations. 
Open Road (Green Square): These are the standard spaces that agents and customers can exist on. Each timestep, the order of activity is as follows: advance each agent sequentially, resolve their pickups or collisions, decrement car energy levels and customer wait times, remove customers that have waited too long or have been picked up, and finally generate new customers from the distribution. Each one of these transitions has a reward associated with each agent. These events and the reward structure are shown in TAB0. After each timestep, the rewards are aggregated and attributed to that specific agent for that initial state. Note that there is a small negative reward for standard movement; this was essential to incentivize agents to find the quickest path to the customers. Another important reward-structure decision was the small positive reward for charging. We found that there was not enough signal in just having a large negative penalty for losing all of an agent's energy; we had to incentivize being in the charging-station space without giving a reward strong enough to detract from picking up customers. Now that we have defined our various objects and reward structure, we need to represent the environment as a vector that the Deep Reinforcement Learning model can learn from. Following the suggestions of previous work, we found that the best way to do this while keeping spatial relationships was with an image-like structure. Just as images provide 3 matrices stacked on top of each other to represent RGB values, the same can be done for our environment. Our state representation has 5 layers that each encode a different piece of information, resulting in a tensor of size 5 x W x H. The following describes what each channel represents. Self Layer: Simply encodes the location of the agent of interest. A value of 1 is given to the (x,y) location where the agent exists. Each self layer is unique to each agent. Other Agents Layer: This encodes the locations of all the other agents, excluding the agent itself. A value of 1 is given to locations where other agents exist; if there are two agents at a location, the value 1 is still used. Customer Layer: This encodes the locations of all the customers on the map or within the agent's vision. An integer value representing the customer's remaining wait time is placed at the location of that specific customer. When customers are removed from the map, the value returns to 0. Obstacles Layer: This encodes the locations of obstacles and charging stations. The value 1 is used to represent locations where the agent cannot go and the value 2 is used to encode the location of a charging station. Extra Agent Information: This encodes the energy and priority of the agent of interest. For energy, we place the integer value of the remaining energy at the agent's location in the matrix, and the priority at its location as well. As we will mention later, in the partially observable version of this environment, we limit this state representation to that specific agent's vision distance v. Figure 3 visualizes how the example map presented above translates to these layers. In the following sections we build on the simple Deep Reinforcement Learning technique Deep Q-Learning (DQN) and walk through the adaptations that make it the final partially observable multi-agent system (PO-MA-DRL). We describe how these methods relate to our environment and how they affected our implementation decisions.
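To make the five-channel encoding described above concrete, the sketch below builds the image-like state tensor for a single agent. It is a minimal illustration under our own assumptions (grid indexing, dictionary-based inputs, and the placement of the priority value, which the text leaves unspecified), not the authors' implementation.

```python
import numpy as np

def encode_state(agent, other_agents, customers, obstacles, chargers, width, height):
    """Build the 5 x W x H image-like state for one agent.

    agent: dict with keys 'x', 'y', 'energy', 'priority' (hypothetical structure)
    other_agents: list of (x, y) positions
    customers: list of dicts with keys 'x', 'y', 'wait_time'
    obstacles, chargers: lists of (x, y) positions
    """
    state = np.zeros((5, width, height), dtype=np.float32)

    # Channel 0: self layer, location of the agent of interest.
    state[0, agent['x'], agent['y']] = 1.0

    # Channel 1: other agents, value 1 regardless of how many share a cell.
    for (x, y) in other_agents:
        state[1, x, y] = 1.0

    # Channel 2: customers, remaining wait time at each customer location.
    for c in customers:
        state[2, c['x'], c['y']] = c['wait_time']

    # Channel 3: obstacles (1) and charging stations (2).
    for (x, y) in obstacles:
        state[3, x, y] = 1.0
    for (x, y) in chargers:
        state[3, x, y] = 2.0

    # Channel 4: extra agent info; remaining energy at the agent's cell.
    # The paper also encodes the agent's priority in this channel; the exact
    # placement is not specified, so a corner cell is used here as a placeholder.
    state[4, agent['x'], agent['y']] = agent['energy']
    state[4, 0, 0] = agent['priority']
    return state
```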
The goal of reinforcement learning is to learn an optimal decision-making policy that will maximize the future expected reward. One of the most common Reinforcement Learning algorithms is Q-Learning, whose Q-function represents the maximum discounted reward when we perform action a in state s and continue optimally from that point on. Q-Learning is a form of model-free learning, meaning that it does not have to learn a model of the environment to settle on an optimal policy. The following formula is the Q-function, which is computed recursively from transitions (s, a, r, s'). In our case, s represents the image-like state holding all the agents, customers, and obstacles. The values a and r are the action taken by the agent and the associated reward. s' represents the image-like state after the agent has moved to its next location and collision detection or customer pickup has been resolved.

Q(s, a) = r + γ max_a' Q(s', a')

This formula is essentially the Bellman equation: the current reward plus the discounted (by factor γ) future reward of the next time step if the best action is taken. It can be implemented as a large table of states by actions and solved recursively if the environment is explored enough. If iterated enough times, the Q-function approaches the true value of taking a certain action in a state, and an optimal decision policy can be read off by taking the action that returns the maximum value. When environments become complex with very large state representations, as in Atari games, it becomes computationally difficult to converge on a reasonable Q-function because the look-up tables become very large. This is where the recent success of Deep Q-Learning comes into play. Neural networks do a great job of generalizing highly dimensional data in a lower-dimensional space, so we can essentially represent that large Q-function table with a neural network. The Q(s, a) value is estimated with the network, and the loss of this network is simply the squared Bellman error:

L = (r + γ max_a' Q(s', a') − Q(s, a))²

Along the lines of [Egorov, Maxim], we also decided to use Convolutional Neural Networks (CNNs) to interpret these image-like state representations. CNNs have been successful in image classification tasks because of their ability to understand spatial relationships within an image; the same can be said of the importance of the geospatial location of agents in relation to other agents and customers. In addition, CNNs allow us to scale our environment to large sizes without the exponential increase in processing time that would have occurred with fully connected networks. Finally, what makes the convergence of this Deep Q-Learning algorithm possible is the addition of experience replay, ε-greedy exploration, and target networks. Experience replay allows us to store all of our ⟨s, a, r, s'⟩ transitions in a memory buffer; during training, minibatches of transitions are pulled from the buffer and used to fit the network. ε-greedy exploration lets the agent initially take random actions, but over time it increasingly takes the actions it believes will result in the maximum expected reward. Lastly, the target network is a separate copy of the main network, frozen in time and updated at slower intervals, so as to stabilize the convergence of the Q-values during training. The biggest challenge with multi-agent systems is how to optimize one's own control policy while understanding the intentions of the other agents.
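As a concrete reference for the single-agent update just described, the following PyTorch-style sketch performs one Deep Q-Learning step on a minibatch sampled from the replay memory, using a frozen target network. The network interfaces and tensor shapes are assumptions made for illustration; the paper does not publish its implementation.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Deep Q-Learning step on a minibatch of <s, a, r, s'> transitions.

    batch: tuple of tensors (states [B,5,W,H], actions [B], rewards [B],
           next_states [B,5,W,H], done [B]) sampled from the replay memory.
    """
    states, actions, rewards, next_states, done = batch

    # Q(s, a) for the actions actually taken.
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Bellman target: r + gamma * max_a' Q_target(s', a'), cut off at terminal transitions.
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - done) * max_next_q

    # Squared Bellman error, as in the loss written above.
    loss = F.mse_loss(q_sa, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```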
Referring back to our image representation of the state, we have one layer that represents the agent itself and another layer that represents the other agents. Earlier we defined s' as the updated image-like state after the agent has moved; we now modify that definition to account for the other agents in the system. The final value s' represents the image-like state after all the other agents have moved as well. So each agent does not look at the other agents as stationary objects, but rather as intentions that need to be accounted for when making its own optimal decision. When we increment the state from s to s', we must perform all the conflict resolutions of agent-agent and agent-customer interactions before we can resolve the final state. All the agents move sequentially rather than simultaneously, which means that agents with higher priority are first to pick up customers. We decided to have all the agents controlled by a single network acting as a sort of fleet manager rather than have each agent train its own network. While we lose the ability to teach agents different personalities and pickup styles, training a network per agent was not feasible for the purpose of scaling up to a large number of agents. Another incentive used to create signals for agents to act efficiently with one another is the penalty for missing a customer. Each agent feels the negative reward of missing a customer even if it is on the other side of the map. This incentivizes the agents to learn a type of divide-and-conquer technique so as to minimize the probability of missing a future customer. In the Results/Tests section we show how effective this has been in a simple bridge example. A final consideration was how to update the replay memory with transitions from multiple agents. We decided to increase the replay size proportionally to the number of agents so that the refresh rate of the memory would be consistent with the single-agent case. As proposed by [Palmer, Gregory], there can be more efficient ways to update the experience replay for multi-agent systems, but we leave that for later work. Partial observability is the final addition to our methods and is where our work differs from some of the related works. When we attempted to train our MA-DRL with actual data from a pilot city of Autonomous Vehicles, the problem simply became too large, with over 200 cars and a 25x25 grid size. Using CNNs was not enough to scale the processing of the image-like representations. To solve this, we made the environment only partially observable to the agents. We limited how far an agent can see around itself to a vision distance v (e.g., 10 spaces away), which limits how big the image-like state representation can be. Now if the map grows to 100x100, the transition states will always be of size v x v and training time will not explode exponentially. Making these agents partially observable also greatly helped with the missed-customer penalty mentioned in the earlier section. As the map becomes extremely large, there are dozens of customers that appear and disappear every timestep. Since we penalized every single agent in the system for missed customers, the signals became very convoluted and the agents were not able to recognize the importance of picking up customers. When we limited the state representation to just be of size v x v, the penalties attributed for missed customers also only applied to that partially observable view. One of the initial concerns was that an agent without a complete view of the map might wander off to an undesirable location.
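A minimal sketch of the egocentric cropping that keeps each agent's observation at a fixed size regardless of the full map size is shown below. The padding convention (treating off-map cells as obstacles) and the use of a window of side 2v+1 centred on the agent are our assumptions; the paper only states that observations are limited to a vision distance v.

```python
import numpy as np

def crop_observation(full_state, agent_x, agent_y, v):
    """Crop the full C x W x H state to a C x (2v+1) x (2v+1) window centred on the agent.

    Cells outside the map are filled as obstacles (value 1 in the obstacle channel),
    which is one reasonable convention; the paper does not specify the padding.
    """
    c, w, h = full_state.shape
    side = 2 * v + 1
    window = np.zeros((c, side, side), dtype=full_state.dtype)
    window[3, :, :] = 1.0  # assume everything off-map is an obstacle by default

    for dx in range(-v, v + 1):
        for dy in range(-v, v + 1):
            x, y = agent_x + dx, agent_y + dy
            if 0 <= x < w and 0 <= y < h:
                window[:, dx + v, dy + v] = full_state[:, x, y]
    return window
```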
However, even without a complete view of the map, if enough iterations are done, each agent should develop a good understanding of the map and of where it is over a few timesteps. The drawback is that the agents will not be able to generalize to other, unseen maps that may have very different geographies. The final method we discuss is the use of Transfer Learning to teach agents multiple objectives. In our AV fleet control problem, agents/cars have the main objective of fulfilling customer demand but also the secondary objective of charging their battery when needed. As stated earlier in the reward structure, agents receive a positive reward when picking up customers but a large negative reward when they lose all of their energy. We found that when training on this reward structure, the agents did not seem to converge on any successful strategy: in some runs, the agents only figured out how to go to the charging locations and stay there; in other cases, the agents just continued picking up customers until all the agents died. To solve this, we took advantage of transfer learning to teach these objectives sequentially. We began by training the standard PO-MA-DRL model where the agents were given an infinite amount of energy. On the next iteration, the map and reward structure stayed consistent, but this time we gave each agent a random energy level (from 50 to 150 minutes). Now the agents began to receive the large negative rewards, but over time they were able to balance these two objectives and charge themselves when needed. In the second part of the Results/Tests we show the results of this improvement. In this section we provide two experimental results: first, we wanted to test the effectiveness of agent cooperation through divide-and-conquer techniques; second, we tested the effectiveness of using transfer learning to teach the agents to charge. In both tests, we compared the agents with baseline policies that we created. These baseline policies were built with Dijkstra's algorithm, which calculates the shortest distance to each customer and chooses the action that will get to the customer the quickest. The goal of these experiments is to show the power of DRL compared to a rule-based Dijkstra approach. The goal of the first evaluation is simply to test the ability of the agents to learn coordination policies that maximize the reward of all the agents in the system. Our environment consisted of a 7x7 map with 2 agents, similar to the example in figure 1. Our experiment randomly placed these agents on the board and randomized their priority. The map has a bridge in the middle and two opposite sides of the map where customers appear randomly. The optimal cooperation policy here would be for the agents to split up and pick up customers on opposite sides of the map. We tested whether our model can perform this divide-and-conquer coordination technique versus the baseline model, which would just travel to the closest customer. FIG1 shows the average number of customers fulfilled by the model versus the baseline. We can see that as time (game length) increases, the model's reward begins to diverge from the baseline's. This shows that the policies of the two model agents are performing more efficiently than the baselines. We inspected the resulting individual games and found that the agents did indeed perform a divide-and-conquer technique: when randomly placed on the same side of the map, one agent would take the initiative and travel to the other side of the map.
When both agents started on different sides, they would just stay there and pick up their respective customers. This simple example successfully demonstrates that an agent can create a policy based on the other agent in the environment and on the map structure to improve its overall expected future reward. The goal of the second experiment was to demonstrate the ability to teach agents multiple objectives through transfer learning and show its strength over a rule-based approach. Our environment was a 10x10 bridge map with 4 agents, like the one shown in figure 1. As described in the transfer learning section above, we initially trained a model to pick up customers with infinite energy. We then took that same trained model, gave each agent a random amount of energy, and activated the penalty for losing all of its energy. When fine-tuning, we found that we needed to set the epsilon for the ε-greedy exploration to 0.5, which meant that the agent would take the best action 50 percent of the time and a random action otherwise. We linearly decayed this value to 0.01 over 3000 iterations. We found this approach to be successful in teaching the agent to keep itself alive and still pick up customers, compared with training a model from scratch to manage both objectives. In our experiment we wanted to demonstrate that the RL model learned how to balance these two objectives effectively. Would the model lean too heavily toward keeping itself alive and only pick up a few customers? Or would it pick up many customers and recharge itself only sparingly? We compared our model to two baseline agents. The first is a conservative agent that goes back to the charging station when its energy is below 40 percent. The second is a more aggressive agent that only goes to charge when its energy is below 10 percent. Rule-based simulation models have to set these thresholds according to business requirements and goals. Setting a hard threshold is not optimal, though, as in some scenarios it is more beneficial to be aggressive and get the customers nearby, while in others it helps to be conservative because customers may be far away. We can see that the conservative baseline successfully goes and charges its battery when low but ends up picking up fewer customers than the aggressive baseline. Unfortunately, the aggressive baseline loses its energy more frequently and experienced around 11 deaths over the course of the trials. Our RL model is able to balance the deaths and customers-fulfilled metrics and shows an overall significant improvement in net reward. While it still may die from time to time, the overall performance boost from balancing these two objectives far exceeds the baseline models. Deep Reinforcement Learning provides a powerful approach for teaching agents to solve complex problems that we as humans may never be able to solve; for instance, DeepMind has been successful in teaching an agent to defeat the world champion in Go. More specifically, multi-agent Reinforcement Learning problems provide an interesting avenue to investigate agent-to-agent communication and decision protocols. Since agents must reason about the intentions of other agents, the dimensionality of the problem space becomes difficult to handle. In our use case, we wanted to see if we could scale a DRL solution up to an actual ride-sharing environment that maintains the same dynamics as it would in real life.
For this to be possible, we were tasked with teaching these agents effective cooperation strategies that optimize the reward of the system, along with teaching these same agents multiple objectives. This work demonstrated how we successfully applied a partially observable multi-agent deep reinforcement learning solution to this ride-sharing problem. Along with that, we showed how we can effectively take advantage of transfer learning to adapt decision policies to account for multiple objectives.
B1EGg7ZCb
Utilized Deep Reinforcement Learning to teach agents ride-sharing fleet coordination.
Stability is a key aspect of data analysis. In many applications, the natural notion of stability is geometric, as illustrated for example in computer vision. Scattering transforms construct deep convolutional representations which are certified stable to input deformations. This stability to deformations can be interpreted as stability with respect to changes in the metric structure of the domain. In this work, we show that scattering transforms can be generalized to non-Euclidean domains using diffusion wavelets, while preserving a notion of stability with respect to metric changes in the domain, measured with diffusion maps. The resulting representation is stable to metric perturbations of the domain while being able to capture "high-frequency" information, akin to Euclidean scattering. Convolutional Neural Networks (CNN) are layered information processing architectures. Each of the layers in a CNN is itself the composition of a convolution operation with a pointwise nonlinearity, where the filters used at different layers are the outcome of a data-driven optimization process BID22. Scattering transforms have an analogous layered architecture but differ from CNNs in that the convolutional filters used at different layers are not trained but selected from a multi-resolution filter bank BID25 BID3. The fact that they are not trained endows scattering transforms with intrinsic value in situations where training is impossible -and inherent limitations in the converse case. That said, an equally important value of scattering transforms is that by isolating the convolutional layered architecture from training effects, it permits analysis of the fundamental properties of CNN information processing architectures. This analysis is undertaken in BID25; BID3, where the fundamental result concerns the stability of scattering transforms with respect to deformations of the underlying domain that are close to translations. In this paper we consider graphs and signals supported on graphs, such as brain connectivity networks and functional activity levels BID17, social networks and opinions BID19, or user similarity networks and ratings in recommendation systems BID18. Our specific goals are: (i) to define a family of graph scattering transforms; (ii) to define a notion of deformation for graph signals; (iii) to study the stability of graph scattering transforms with respect to this notion of deformation. To accomplish goal (i) we consider the family of graph diffusion wavelets, which provide an appropriate construction of a multi-resolution filter bank BID8. Our diffusion scattering transforms are defined as the layered composition of diffusion wavelet filter banks and pointwise nonlinearities. To accomplish goal (ii) we adopt the graph diffusion distance as a measure of deformation of the underlying domain BID27. Diffusion distances measure the similarity of two graphs through the time it takes for a signal to be diffused on the graph. The major accomplishment of this paper is to show that diffusion graph scattering transforms are stable with respect to deformations as measured by diffusion distances. Specifically, consider a signal x supported on graph G whose diffusion scattering transform is denoted by the operator Ψ G. Consider now a deformation of the signal's domain so that the signal's support is described by a graph G' whose diffusion scattering operator is Ψ G'.
We show that the operator norm distance Ψ G − Ψ G' is bounded by a constant multiplied by the diffusion distance between the graphs G and G'. The constant in this bound depends on the spectral gap of G but, very importantly, does not depend on the number of nodes in the graph. It is important to point out that finding stable representations is not difficult. E.g., taking signal averages is a representation that is stable to domain deformations -indeed, invariant. The challenge is finding a representation that is stable and rich in its description of the signal. In our numerical analyses we show that linear filters can provide representations that are either stable or rich, but that cannot be stable and rich at the same time. The situation is analogous to (Euclidean) scattering transforms and is also associated with high frequency components. We can obtain a stable representation by eliminating high frequency components, but the representation loses important signal features. Alternatively, we can retain high frequency components to have a rich representation, but that representation is unstable to deformations. Diffusion scattering transforms are observed to be not only stable -as predicted by our theoretical analysis -but also sufficiently rich to achieve good performance in graph signal classification examples. Since graphs and graph signals are of increasing interest but do not have the regular structure that would make the use of CNNs appealing, it is pertinent to ask what should be an appropriate generalization of CNNs to graphs and the graph signals whose topology they describe BID2. If one accepts the value of convolutions prima facie, a natural solution is to replace convolutions with graph shift invariant filters, which are known to be valid generalizations of (convolutional) time invariant filters BID4. This idea is not only natural but has been demonstrated to work well in practical implementations of Graph Neural Networks (GNNs) BID9 BID11 BID13 BID16 BID20. As with Euclidean scattering transforms, our graph scattering transforms differ from GNNs in that they do not have to be trained. The advantages and limitations of the absence of training notwithstanding, our work also sheds light on the question of why graph convolutions are appropriate generalizations of regular domain convolutions for signal classification problems. Our work suggests that the value of GNNs stems from their stability relative to deformations of the underlying domain that are close to permutations -which is the property that a pair of graphs must satisfy to have small diffusion distance. The stability results obtained in this paper build on the notion of scattering transforms. These scattering representations were introduced by BID25 and further developed in BID3 with computer vision applications. Since then, these representations have been extended to handle transformations on more complex groups, such as roto-translations BID31 BID28, and to domains such as audio processing BID0 and quantum chemistry BID10. Similarly as in this work, extensions of scattering to general graphs have been considered in BID5 and BID34. BID5 focuses on Haar wavelets that hierarchically coarsen the graph, and relies on building multiresolution pairings. The recent BID34 is closest to our work. There, the authors define graph scattering using spectrally constructed wavelets from BID15, and establish some properties of the resulting representation, such as energy conservation and stability to spectral perturbations.
In contrast, our stability are established with respect to diffusion metric perturbations, which are generally weaker, in the sense that they define a weaker topology (see Section 3). We use diffusion wavelets BID8 ) to obtain multi-resolution graph filter banks that are localized in frequency as well as in the graph domain, while spanning the whole spectrum. Diffusion wavelets serve as the constructive basis for the obtained stability . Our work is also closely related to recent analysis of stability of Graph Neural Networks in the context of surface representations in BID21. In our work, however, we do not rely on extrinsic deformations and exploit the specific multiresolution structure of wavelets. This section introduces our framework and states the desired stability properties of signal representations defined on general non-Euclidean domains. Motivated by computer vision applications, our analysis starts with the notion of deformation stability. If x(u) ∈ L 2 (Ω) is an image defined over an Euclidean domain Ω ⊂ R d, we are interested in signal representations Φ: L 2 (Ω) → R K that are stable with respect to small deformations. If DISPLAYFORM0 ) denotes a change of variables with a differentiable field τ: Ω → Ω such that ∇τ < 1, then we ask DISPLAYFORM1 τ:= ∇τ ∞ denoting a uniform bound on the operator norm of ∇τ. In this setting, a notorious challenge to achieving while keeping enough discriminative power in Φ(x) is to transform the high-frequency content of x in such a way that it becomes stable. Scattering transforms BID25 BID3 provide such representations by cascading wavelet decompositions with pointwise modulus activation functions. We briefly summarize here their basic definition. Given a mother wavelet ψ ∈ L 1 (Ω) with at least a vanishing moment ψ(u)du = 0 and with good spatial localization, we consider rotated and dilated versions ψ j,c (u) = 2 −jd ψ(2 −j R c u) using scale parameter j and angle θ ∈ {2πc/C} c=0,...,C−1. A wavelet decomposition operator is defined as a filter bank spanning all scales up to a cutoff 2 J and all angles: DISPLAYFORM2 This filter bank is combined with a pointwise modulus activation function ρ(z) = |z|, as well as a low-pass average pooling operator U computing the average over the domain. The ing representation using m layers becomes DISPLAYFORM3 The ing signal representation has the structure of a CNN, in which feature maps are not recombined with each other, and trainable filters are replaced by multiscale, oriented wavelets. It is shown in BID25 that for appropriate signal classes and wavelet families, the ing scattering transform satisfies a deformation stablity condition of the form, which has been subsequently generalised to broader multiresolution families BID33. In essence, the mechanism that provides stability is to capture high-frequency information with the appropriate spatio-temporal tradeoffs, using spatially localized wavelets. Whereas deformations provide the natural framework to describe geometric stability in Euclidean domains, their generalization to non-Euclidean, non-smooth domains is not straightforward. Let x ∈ L 2 (X). If X is embedded into a low-dimension Euclidean space Ω ⊂ R d, such as a 2-surface within a three-dimensional space, then one can still define meaningful deformations on X via extrinsic deformations of Ω BID21.However, in this work we are interested in intrinsic notions of geometric stability, that do not necessarily rely on a pre-existent low-dimensional embedding of the domain. 
The change of variables ϕ(u) = u − τ (u) defining the deformation can be seen as a perturbation of the Euclidean metric in DISPLAYFORM0 with dμ(u) = |I − ∇τ (u)|dµ(u), and |I − ∇τ (u)| ≈ 1 if ∇τ is small, where I is the identity. Therefore, a possible way to extend the notion of deformation stability to general domains L 2 (X) is to think of X as a metric space and reason in terms of stability of Φ: L 2 (X) → R K to metric changes in X. This requires a representation that can be defined on generic metric spaces, as well as a criteria to compare how close two metric spaces are. Graphs are flexible data structures that enable general metric structures and modeling non-Euclidean domains. The main ingredients of the scattering transform can be generalized using tools from computational harmonic analysis on graphs. We note that, unlike the case of Euclidean domains, where deformations are equivalent whether they are analyzed from the function domain or its image, in the case of graphs, we focus on deformations on the underlying graph domain, while keeping the same function mapping (i.e. we model deformations as a change of the underlying graph support and analyze how this affects the interaction between the function mapping and the graph).In particular, diffusion wavelets BID8 provide a simple framework to define a multi-resolution analysis from powers of a diffusion operator defined on a graph. A weighted, undirected graph G = (V, E, W) with |V | = n nodes, edge set E and adjacency matrix W ∈ R n×n defines a diffusion process A in its nodes, given in its symmetric form by the normalized adjacency DISPLAYFORM0 where d i = (i,j)∈E W i,j denotes the degree of node i. Denote by d = W 1 the degree vector containing d i in the ith element. By construction, A is well-localized in space (it is nonzero only where there is an edge connecting nodes), it is self-adjoint and satisfies A ≤ 1, where A is the operator norm. Let λ 0 ≥ λ 1 ≥... λ n−1 denote its eigenvalues in decreasing order. Defining DISPLAYFORM1, one can easily verify that the normalized squared root degree vector DISPLAYFORM2 1 is the eigenvector with associated eigenvalue λ 0 = 1. Also, note that λ n−1 = −1 if and only if G has a connected component that is non-trivial and bipartite BID6.In the following, it will be convenient to assume that the spectrum of A (which is real and discrete since A is self-adjoint and in finite-dimensions) is non-negative. Since we shall be taking powers of A, this will avoid folding negative eigenvalues into positive ones. For that purpose, we adopt the so-called lazy diffusion, given by T = 1 2 (I + A). In Section 4 we use this diffusion operator to define both a multiscale wavelet filter bank and a low-pass average pooling, leading to the diffusion scattering representation. This diffusion operator can also be used to construct a metric on G. The so-called diffusion distances BID27 measure distances between two nodes x, x ∈ V in terms of their associated diffusion at time s: In this work, we build on this diffusion metric to define a distance between two graphs G, G. Assuming first that G and G have the same size, the simplest formulation is to compare the diffusion metric generated by G and G up to a node permutation: DISPLAYFORM3 DISPLAYFORM4 where Π n is the space of n × n permutation matrices. The diffusion distance is defined at a specific time s. As s increases, this distance becomes weaker 1, since it compares points at later stages of diffusion. 
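To make these definitions concrete, the sketch below builds the lazy diffusion operator from a weight matrix and evaluates the diffusion distance at time s. The minimisation over permutations is combinatorial, so the brute-force option is only feasible for very small graphs; with it disabled, the function returns the distance under the identity correspondence, which is only an upper bound. This is an illustrative reading of the definitions above, not code accompanying the paper.

```python
import numpy as np
from itertools import permutations

def lazy_diffusion(W):
    """Lazy diffusion operator T = (I + D^{-1/2} W D^{-1/2}) / 2 of a weighted graph."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    A = d_inv_sqrt @ W @ d_inv_sqrt
    return 0.5 * (np.eye(W.shape[0]) + A)

def matrix_power_sym(T, s):
    """Fractional power of a symmetric positive semidefinite matrix via eigendecomposition."""
    lam, V = np.linalg.eigh(T)
    lam = np.clip(lam, 0.0, None)
    return (V * lam**s) @ V.T

def diffusion_distance(W1, W2, s=0.5, brute_force=False):
    """d_s(G, G') = min over permutations P of || T_G^s - P T_G'^s P^T || (spectral norm)."""
    T1s = matrix_power_sym(lazy_diffusion(W1), s)
    T2s = matrix_power_sym(lazy_diffusion(W2), s)
    n = W1.shape[0]
    if not brute_force:
        # identity correspondence: an upper bound on the true distance
        return np.linalg.norm(T1s - T2s, 2)
    best = np.inf
    for perm in permutations(range(n)):  # only tractable for very small n
        P = np.eye(n)[list(perm)]
        best = min(best, np.linalg.norm(T1s - P @ T2s @ P.T, 2))
    return best
```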
The role of time is thus to select the smoothness of the'graph deformation', similarly as ∇τ measures the smoothness of the deformation in the Euclidean case. For convenience, we denote d(G, G) = d 1/2 (G, G) and use the distance at s = 1/2 as our main deformation measure. The quantity d defines a distance between graphs (seen as metric spaces) yielding a stronger topology than other alternatives such as the Gromov-Hausdorff distance, defined as DISPLAYFORM5. We choose d(G, G) in this work for convenience and mathematical tractability, but leave for future work the study of stability relative to d s GH. Finally, 1 In the sense that it defines a weaker topology, i.e., limm→∞ d DISPLAYFORM6 we consider for simplicity only the case where the sizes of G and G are equal, but definition (3.1) can be naturally extended to compare variable-sized graphs by replacing permutations by softcorrespondences (see BID1 . Our goal is to build a stable and rich representation Φ G (x). The stability property is stated in terms of the diffusion metric above: For a chosen diffusion time s, ∀ x ∈ R n, G = (V, E, W), G = (V, E, W) with |V | = |V | = n, we want DISPLAYFORM0 This representation can be used to model both signals and domains, or just domains G, by considering a prespecified x = f (G), such as the degree, or by marginalizing from an exchangeable distribution, DISPLAYFORM1 The motivation of FORMULA12 is two-fold: On the one hand, we are interested in applications where the signal of interest may be measured in dynamic environments that modify the domain, e.g. in measuring brain signals across different individuals. On the other hand, in other applications, such as building generative models for graphs, we may be interested in representing the domain G itself. A representation from the adjacency matrix of G needs to build invariance to node permutations, while capturing enough discriminative information to separate different graphs. In particular, and similarly as with Gromov-Hausdorff distances, the definition of d(G, G) involves a matching problem between two kernel matrices, which defines an NP-hard combinatorial problem. This further motivates the need for efficient representations of graphs Φ G that can efficiently tell apart two graphs, and such that (θ) = Φ G − Φ G(θ) can be used as a differentiable loss for training generative models. Let T be a lazy diffusion operator associated with a graph G of size n such as those described in Section 3.3. Following BID8, we construct a family of multiscale filters by exploiting the powers of the diffusion operator T 2 j. We define DISPLAYFORM0 This corresponds to a graph wavelet filter bank with optimal spatial localization. Graph diffusion wavelets are localized both in space and frequency, and favor a spatial localization, since they can be obtained with only two filter coefficients, namely h 0 = 1 for diffusion T 2 j−1 DISPLAYFORM1 The finest scale ψ 0 corresponds to one half of the normalized Laplacian operator DISPLAYFORM2, here seen as a temporal difference in a diffusion process, seeing each diffusion step (each multiplication by ∆) as a time step. The coarser scales ψ j capture temporal differences at increasingly spaced diffusion times. For j = 0,..., J n − 1, we consider the linear operator DISPLAYFORM3 which is the analog of the wavelet filter bank in the Euclidean domain. 
Whereas several other options exist to define graph wavelet decompositions BID29 BID12, and GNN designs that favor frequency localization, such as Cayley filters BID24, we consider here wavelets that can be expressed with few diffusion terms, favoring spatial over frequential localization, for stability reasons that will become apparent next. We choose dyadic scales for convenience, but the construction is analogous if one replaces scales 2 j by γ j for any γ > 1 in.If the graph G exhibits a spectral gap, i.e., β G = sup i=1,... n−1 |λ i | < 1, the following proposition proves that the linear operator Ψ defines a stable frame. Proposition 4.1. For each n, let Ψ define the diffusion wavelet decomposition and assume β G < 1. Then there exists a constant 0 < C(β) depending only on β such that for any x ∈ R n satisfying x, v = 0, DISPLAYFORM4 This proposition thus provides the Littlewood-Paley bounds of Ψ, which control the ability of the filter bank to capture and amplify the signal x along each'frequency' (i.e. the ability of the filter to increase or decrease the energy of the representation, relative to the energy of the x). We note that diffusion wavelets are neither unitary nor analytic and therefore do not preserve energy. However, the frame bounds in Proposition 4.1 provide lower bounds on the energy lost, such that the smaller 1 − β is, the less "unitary" our diffusion wavelets are. It also informs us about how the spectral gap β determines the appropriate diffusion scale J: The maximum of p(u) = (u r − u 2r) 2 is at u = 2 −1/r, thus the cutoff r * should align with β as r * = −1 log 2 β, since larger values of r capture energy in a spectral range where the graph has no information. Therefore, the maximum scale can be adjusted as J = 1 + log 2 r * = 1 + log 2 −1 log 2 β.Recall that the Euclidean Scattering transform is constructed by cascading three building blocks: a wavelet decomposition operator, a pointwise modulus activation function, and an averaging operator. Following the Euclidean scattering, given a graph G and x ∈ L 2 (G), we define an analogous Diffusion Scattering transform Φ G (x) by cascading three building blocks: the Wavelet decomposition operator Ψ, a pointwise activation function ρ, and an average operator U which extracts the average over the domain. The average over a domain can be interpreted as the diffusion at infinite time, thus U x = lim t→∞ T t x = v T, x. More specifically, we consider a first layer transformation given by DISPLAYFORM5 followed by second order coefficients DISPLAYFORM6 and so on. The representation obtained from m layers of such transformation is thus DISPLAYFORM7 5 STABILITY OF GRAPH DIFFUSION SCATTERING Given two graphs G, G of size n and a signal x ∈ R n, our objective is to bound DISPLAYFORM0. Let π * the permutation minimising the distortion between G and G in. Since all operations in Φ are either equivariant or invariant with respect to permutations, we can assume w.l.o.g. that π = 1, so that the diffusion distance can be directly computed by comparing nodes with the given order. A key property of G that drives the stability of the diffusion scattering is given by its spectral gap 1 − β G = 1 − sup i=1,... n−1 |λ i | ≥ 0. In the following, we use 2 operator norms, unless stated otherwise. Lemma 5.1. DISPLAYFORM1 Remark: If diffusion distance is measured at time different from s = 1/2, the stability bound would be modified due to scales j such that 2 j < s. 
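Before continuing with the stability analysis, the following numpy sketch computes the diffusion scattering representation of equation 11: the lazy diffusion operator T, a wavelet bank with ψ_0 = I − T and ψ_j = T^(2^(j−1)) − T^(2^j) for j ≥ 1, the low-pass average U x = ⟨v, x⟩ with v the normalised square-root degree vector, and the pointwise modulus, cascaded for m layers. The normalisations and the exact form of ψ_j are our reading of the construction above, so treat this as a sketch rather than a reference implementation.

```python
import numpy as np

def diffusion_scattering(W, x, J=4, m=3):
    """Diffusion scattering coefficients of signal x on the graph with weight matrix W.

    Returns the flat list of scalars U x, U|psi_j x|, U|psi_j2 |psi_j1 x||, ...
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    A = d_inv_sqrt @ W @ d_inv_sqrt
    T = 0.5 * (np.eye(n) + A)  # lazy diffusion operator

    # Diffusion wavelet bank: psi_0 = I - T, psi_j = T^(2^{j-1}) - T^(2^j).
    psis = [np.eye(n) - T]
    for j in range(1, J):
        psis.append(np.linalg.matrix_power(T, 2 ** (j - 1))
                    - np.linalg.matrix_power(T, 2 ** j))

    v = np.sqrt(d) / np.linalg.norm(np.sqrt(d))  # leading eigenvector of T
    U = lambda z: float(v @ z)                   # low-pass average (diffusion at infinite time)

    coeffs = [U(x)]
    layer = [x]  # signals entering the current layer
    for _ in range(m - 1):
        next_layer = []
        for z in layer:
            for psi in psis:
                u = np.abs(psi @ z)  # rho(psi_j z): pointwise modulus
                coeffs.append(U(u))
                next_layer.append(u)
        layer = next_layer
    return np.array(coeffs)
```

With J = 4 and m = 3, for example, this produces 1 + 4 + 16 = 21 coefficients per signal.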
The following lemma studies the stability of the low-pass operator U with respect to graph perturbations. Lemma 5.2. Let G, G be two graphs with same size, denote by v and v their respective squaredroot degree vectors, and by β, β their spectral gap. Then DISPLAYFORM2 Spectral Gap asymptotic behavior Lemmas 5.1 and 5.2 leverage the spectral gap of the lazy diffusion operator associated with G. In some cases, such as regular graphs, the spectral gap vanishes asymptotically as n → ∞, thus degrading the upper bound asymptotically. Improving the bound by leveraging other properties of the graph (such as regular degree distribution) is an important open task. The scattering transform coefficients Φ G (x) obtained after m layers are given by equation 11, for low-pass operator U such that U x = v, x so that U = v T.From Lemma 5.1 we have that, DISPLAYFORM0 We also know, from Proposition 4.1 that Ψ conforms a frame, i.e. C(β) x 2 ≤ Ψx 2 ≤ x 2 for known constant C(β) given in Prop. 4.1. Additionally, from Lemma 5.2 we get that DISPLAYFORM1 The objective now is to prove stability of the scattering coefficients Φ G (x), that is, to prove that DISPLAYFORM2 This is captured in the following Theorem: Theorem 5.3. Let G, G be two graphs and let d(G, G) be their distance measured as in equation 4. Let T G and T G be the respective diffusion operators. Denote by U G, ρ G and Ψ G and by U G, ρ G and Ψ G the low pass operator, pointwise nonlinearity and the wavelet filter bank used on the scattering transform defined on each graph, respectively, cf. equation 11. Assume ρ G = ρ G and that ρ G is non-expansive. Let β − = min(β G, β G), β + = max(β G, β G) and assume β + < 1. Then, we have that, for each k = 0,..., m − 1, the following holds BID3, it is straightforward to compute the stability bound on the scattering coefficients as follows. Corollary 5.4. In the context of Theorem 5.3, let x ∈ R n and let Φ G (x) be the scattering coefficients computed by means of equation 11 on graph G after m layers, and let Φ G (x) be the corresponding coefficients on graph G. Then, DISPLAYFORM3 DISPLAYFORM4 Corollary 5.4 satisfies equation 5. It also shows that the closer the graphs are in terms of the diffusion metric, the closer their scattering representations will be. The constant is given by topological properties, the spectral gaps of G and G, as well as design parameters, the number of layers m. We observe that the stability bound grows the smaller the spectral gap is and also as more layers are considered. The spectral gap is tightly linked with diffusion processes on graphs, and thus it does emerge from the choice of a diffusion metric. Graphs with values of beta closer to 1, exhibit weaker diffusion paths, and thus a small perturbation on the edges of these graphs would lead to a larger diffusion distance. The contrary holds as well. In other words, the tolerance of the graph to edge perturbations (i.e., d(G, G) being small) depends on the spectral gap of the graph. We also note that, as stated at the end of Section 5.1, the spectral gap appears in our upper bounds, but it is not necessarily sharp. In particular, the spectral gap is a poor indication of stability in regular graphs, and we believe our bound can be improved by leveraging structural properties of regular domains. Finally, we note that the size of the graphs impacts the stability inasmuch as it impacts the distance measure d(G, G). This is expected, since graphs of different size can be compared, as mentioned in Section 3.3. 
Different from BID34, our focus is on obtaining graph wavelet banks that are localized in the graph domain to improve computational efficiency as discussed in BID9. We also notice that the scattering transform in BID34 is stable with respect to a graph measure that depends on the spectrum of the graph through both eigenvectors and eigenvalues. More specifically, it is required that the spectrum gets concentrated as the graphs grow. However, in general, it is not straightforward to relate the topological structure of the graph with its spectral properties. As mentioned in Section 3.3, the stability is computed with a metric d(G, G) which is stronger than what could be hoped for. Our metric is permutation-invariant, in analogy with the rigid translation invariance in the Euclidean case, and stable to small perturbations around permutations. The extension of to weaker metrics, using e.g. multiscale deformations, is left for future work. By denoting T j = T 2 j, observe that one can approximate the diffusion wavelets from as a cascade of low-pass diffusions followed by a high-pass filter at resolution 2 j: DISPLAYFORM0 This pyramidal structure of multi-resolution analysis wavelets -in which each layer now corresponds to a different scale, shows that the diffusion scattering is a particular instance of GNNs where each layer j is generated by the pair of operators {I, T j−1}. If x (j) ∈ R n×dj denotes the feature representation at layer j using d j feature maps per node, the corresponding update is given by DISPLAYFORM1 where θ DISPLAYFORM2 In this case, a simple modification of the previous theorem shows that the ing GNN representation DISPLAYFORM3 2 ) j≤J is also stable with respect to d(G, G), albeit this time the constants are parameter-dependent:Corollary 5.5. The J layer GNN with parameters Θ = (θ DISPLAYFORM4 This bound is thus learning-agnostic and is proved by elementary application of the diffusion distance definition. An interesting question left for future work is to relate such stability to gradient descent optimization biases, similarly as in BID14 BID32, which could provide stability certificates for learnt GNN representations. In this section, we first show empirically the dependence of the stability with respect to the spectral gap, and then we illustrate the discriminative power of the diffusion scattering transform in two different classification tasks; namely, the problems of authorship attribution and source localization. Consider a small-world graph G with N = 200 nodes, edge probability p SW and rewiring probability q SW = 0.1. Let x ∼ N (0, I) be a random graph signal defined on top of G and Φ G (x) the corresponding scattering transform. Let G be another realization of the small-world graph, and let Φ G (x) be the scattering representation of the same graph signal x but on the different support G. We can then proceed to compute Φ G (x) − Φ G (x). By changing the value of p SW we can change value of the spectral gap β and study the dependence of the difference in representations as a function of the spectral gap. Results shown in FIG1 are obtained by varying p SW from 0.1 to 0.9. For each value of p SW we generate one graph G and 50 different graphs G; and for each graph G we compute Φ G (x) − Φ G (x) for 1, 000 different graph signal realizations x. The average across all signal realizations is considered the estimate of the representation difference, and then the mean as well as the variance across all graphs are computed and shown in the figure (error bars). 
DISPLAYFORM0 as a function of the spectral gap (changing p SW from 0.1 to 0.9 led to values of spectral gap between 0.5 and close to 1). First and foremost we observe that, indeed, as β reaches one, the stability gets worse and the representation difference increases. We also observe that, for deeper scattering representations, the difference also gets worse, although it is not a linear behaviour as predicted in equation 16, which suggest that the bound is not tight. For classifying we train a SVM linear model fed by features obtained from different representations. We thus compare with two non-trainable linear representations of the data: a data-based method (using the graph signals to feed the classifier) and a graph-based method (obtaining the GFT coefficients as features for the data). Additionally, we consider the graph scattering with varying depth to analyze the richness of the representation. Our aim is mainly to illustrate that the scattering representation is rich enough, relative to linear representations, and is stable to graph deformations. First, we consider the problem of authorship attribution where the main task is to determine if a given text was written by a certain author. We construct author profiles by means of word adjacency networks (WAN). This WAN acts as the underlying graph support for the graph signal representing the word count (bag-of-words) of the target text of unknown authorship. Intuitively, the choice of words of the target text should reflect the pairwise connections in the WAN, see BID30 for detailed construction of WANs. In particular, we consider all works by Jane Austen. To illustrate the stability , we construct a WAN with 188 nodes (functional words) using a varying number of texts to form the training set, obtaining an array of graphs that are similar but not exactly the same. For the test set, we include 154 excerpts by Jane Austen and 154 excerpts written by other contemporary authors, totaling 308 data points. FIG1 shows classification error as a function of the number of training samples used. We observe that graph scattering transforms monotonically improve while considering more training data, whereas other methods vary more erratically, showing their lack of stability (their representations vary more wildly when the underlying graph support changes). This shows that scattering diffusion transforms strike a good balance between stability and discriminative power. For the second task, let G be a 234-node graph modeling real-world Facebook interactions BID26. In the source localization problem, we observe a diffusion process after some unknown time t, that originated at some unknown node i, i.e. we observe x = W t δ i, where δ i is the signal with all zeros except a 1 on node i. The objective is to determine which community the source node i belongs to. These signals can be used to model rumors that percolate through the social network by interaction between users and the objective is to determine which user group generated said rumor (or initiated a discussion on some topic). We generate a training sample of size 2, 000, for nodes i chosen at random and diffusion times t chosen as random as well. The GFT is computed by projecting on the eigenbasis of the operator T. We note that, to avoid numerical instabilities, the diffusion is carried out using the normalized operator (W/λ max (W)) and t ≤ t max = 20. 
The representation coefficients (graph signals, GFT or scattering coefficients) obtained from this set are used to train different linear SVMs to perform classification. For the test set, we draw 200 new signals. We compute the classification errors on the test set as a measure of usefulness of the obtained representations. Results are presented in FIG1, where perturbations are illustrated by dropping edges with probability p (adding or removing friends in Facebook). Again, it is observed that the scattering representation exhibits lower variations when the underlying graph changes, compared to the linear approaches. Finally, to remark the discriminative power of the scattering representation, we observe that as the graph scattering grows deeper, the obtained features help in more accurate classification. We remark that in regimes with sufficient labeled examples, trainable GNN architectures will generally outperform scattering-based representations. In this work we addressed the problem of stability of graph representations. We designed a scattering transform of graph signals using diffusion wavelets and we proved that this transform is stable under deformations of the underlying graph support. More specifically, we showed that the scattering transform of a graph signal supported on two different graphs is proportional to the diffusion distance between those graphs. As a byproduct of our analysis, we obtain stability bounds for Graph Neural Networks generated by diffusion operators. Additionally, we showed that the ing descriptions are also rich enough to be able to adequately classify plays by author in the context of authorship attribution, and identify the community origin of a signal in a source localization problem. That said, there are a number of directions to build upon from these . First, our stability bounds depend on the spectral gap of the graph diffusion. Although lazy diffusion prevents this spectral gap to vanish, as the size of the graph increases we generally do not have a tight bound, as illustrated by regular graphs. An important direction of future research is thus to develop stability bounds which are robust to vanishing spectral gaps. Next, and related to this first point, we are working on extending the analysis to broader families of wavelet decompositions on graphs and their corresponding graph neural network versions, including stability with respect to the GromovHausdorff metric, which can be achieved by using graph wavelet filter banks that achieve bounds analogous to those in Lemmas 5.1 and 5.2.A PROOF OF PROPOSITION 4.1Since all operators ψ j are polynomials of the diffusion T, they all diagonalise in the same basis. Let T = V ΛV T, where V T V = I contains the eigenvectors of T and Λ = diag(λ 0, . . ., λ n−1) its eigenvalues. The frame bounds C 1, C 2 are obtained by evaluating Ψx 2 for x = v i, i = 1,..., n− 1, since v 0 corresponds to the square-root degree vector and x is by assumption orthogonal to v 0.We verify that the spectrum of ψ j is given by (p j (λ 0),..., p j (λ n−1)), where DISPLAYFORM0 2. It follows from the definition that DISPLAYFORM1.., n − 1 and therefore DISPLAYFORM2 We check that DISPLAYFORM3 2. One easily verifies that Q(x) is continuous in since it is bounded by a geometric series. Also, observe that DISPLAYFORM4 since x ∈. By continuity it thus follows that DISPLAYFORM5 which in g (t) ≤ rβ r−1 B − A, proving.By plugging FORMULA5 into we thus obtain DISPLAYFORM6 (1−β 2) 3. 
Finally, we observe that DISPLAYFORM7 Without loss of generality, assume that the node assignment that minimizes T G − ΠT G Π T is the identity. We need to bound the leading eigenvectors of two symmetric matrices T G and T G with a spectral gap. As before, let DISPLAYFORM8 Since we are free to swap the role of v and v, the follows. DISPLAYFORM9 First, note that ρ G = ρ G = ρ since it is a pointwise nonlinearity (an absolute value), and is independent of the graph topology. Now, let's start with k = 0. In this case, we get U G x − U G x which is immediately bounded by Lemma 5.2 satisfying equation 15.For k = 1 we have DISPLAYFORM10 where the triangular inequality of the norm was used, together with the fact that ρu − ρu ≤ ρ(u − u) for any real vector u since ρ is the pointwise absolute value. Using the submultiplicativity of the operator norm, we get DISPLAYFORM11 From Lemmas 5.1 and 5.2 we have that Ψ G − Ψ G ≤ ε Ψ and U G − U G ≤ ε U, and from Proposition 4.1 that Ψ G ≤ 1. Note also that U G = U G = 1 and that ρ = 1. This yields DISPLAYFORM12 satisfying equation 15 for k = 1.For k = 2, we observe that DISPLAYFORM13 The first term is bounded in a straightforward fashion by DISPLAYFORM14 analogy to the development for k = 1. Since U G = 1, for the second term, we focus on DISPLAYFORM15 We note that, in the first term in equation 33, the first layer induces an error, but after that, the processing is through the same filter banks. So we are basically interested in bounding the propagation of the error induced in the first layer. Applying twice the fact that ρ(u) − ρ(u) ≤ ρ(u − u) we get DISPLAYFORM16 And following with submultiplicativity of the operator norm, DISPLAYFORM17 For the second term in equation 33, we see that the first layer applied is the same in both, namely ρΨ G so there is no error induced. Therefore, we are interested in the error obtained after the first layer, which is precisely the same error obtained for k = 1. Therefore, DISPLAYFORM18 Plugging equation 35 and equation 36 back in equation 31 we get DISPLAYFORM19 satisfying equation 15 for k = 2.For general k we see that we will have a first term that is the error induced by the mismatch on the low pass filter that amounts to ε U, a second term that accounts for the propagation through (k − 1) equal layers of an initial error, yielding ε Ψ, and a final third term that is the error induced by the previous layer, (k − 1)ε Ψ. More formally, assume that equation 15 holds for k − 1, implying that DISPLAYFORM20 Then, for k, we can write DISPLAYFORM21 Again, the first term we bound it in a straightforward manner using submultiplicativity of the operator norm DISPLAYFORM22 For the second term, since U G = 1 we focus on DISPLAYFORM23 The first term in equation 42 computes the propagation in the initial error caused by the first layer. Then, repeatedly applying ρ(u) − ρ(u) ≤ ρ(u − u) in analogy with k = 2 and using submultiplicativity, we get DISPLAYFORM24 The second term in equation 42 is the bounded by equation 38, since the first layer is exactly the same in this second term. Then, combining equation 43 with equation 38, yields DISPLAYFORM25 Overall, we get DISPLAYFORM26 which satisfies equation 15 for k. Finally, since this holds for k = 2, the proof is completed by induction. E PROOF OF COROLLARY 5.4From Theorem 5.3, we have DISPLAYFORM27 and, by definition (, Sec. 
3.1), DISPLAYFORM28 so that DISPLAYFORM29 Then, applying the inequality of Theorem 5.3, we get DISPLAYFORM30 Now, considering each term, such that DISPLAYFORM31 $+ \sum_{k=0}^{m-1} 2^{3/2} k \, \frac{\beta_+^2 (1 + \beta_+^2)}{(1 - \beta_-)(1 - \beta_+^2)^3}\, d$
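To make the construction analyzed in these proofs concrete, the following is a minimal numerical sketch of a diffusion scattering transform in Python. It assumes the commonly used lazy diffusion operator T = (I + D^{-1/2} A D^{-1/2})/2, a dyadic wavelet bank ψ_j = T^{2^{j−1}} − T^{2^j} (with ψ_0 = I − T), and the square-root-degree low-pass vector mentioned in the proof of Proposition 4.1; the exact filter bank in the paper may differ, so treat this purely as an illustration.

```python
import numpy as np

def lazy_diffusion(A):
    """Lazy diffusion operator T = 1/2 (I + D^{-1/2} A D^{-1/2})."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return 0.5 * (np.eye(A.shape[0]) + S)

def diffusion_wavelets(T, J):
    """Dyadic wavelet bank: psi_0 = I - T, psi_j = T^{2^{j-1}} - T^{2^j}."""
    n = T.shape[0]
    psis = [np.eye(n) - T]
    Tp = T.copy()                      # holds T^{2^{j-1}}
    for _ in range(1, J):
        Tp_next = Tp @ Tp              # T^{2^j}
        psis.append(Tp - Tp_next)
        Tp = Tp_next
    return psis

def scattering(x, A, J=4, L=3):
    """Depth-L diffusion scattering coefficients of signal x on graph A."""
    T = lazy_diffusion(A)
    psis = diffusion_wavelets(T, J)
    d = A.sum(axis=1)
    u = np.sqrt(d) / np.linalg.norm(np.sqrt(d))   # low-pass averaging vector v_0
    layers, coeffs = [x], [u @ x]
    for _ in range(L - 1):
        new_layer = []
        for h in layers:
            for psi in psis:
                z = np.abs(psi @ h)               # wavelet filtering + modulus
                new_layer.append(z)
                coeffs.append(u @ z)              # low-pass readout
        layers = new_layer
    return np.array(coeffs)
```

The resulting coefficient vectors are what would be fed to the linear SVMs in the classification experiments described above.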
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BygqBiRcFQ
Stability of scattering transform representations of graph data to deformations of the underlying graph support.
We propose a novel deep network architecture for lifelong learning, which we refer to as the Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact overlapping knowledge-sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them. We validate DEN on multiple public datasets in lifelong learning scenarios, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as the batch model with a substantially smaller number of parameters. Lifelong learning BID13, the problem of continual learning where tasks arrive in sequence, is an important topic in transfer learning. The primary goal of lifelong learning is to leverage knowledge from earlier tasks to obtain better performance, or faster convergence/training speed, on models for later tasks. While there exist many different approaches to tackle this problem, we consider lifelong learning under deep learning to exploit the power of deep neural networks. Fortunately, for deep learning, storing and transferring knowledge can be done in a straightforward manner through the learned network weights. The learned weights can serve as the knowledge for the existing tasks, and the new task can leverage this simply by sharing these weights. Therefore, we can consider lifelong learning simply as a special case of online or incremental learning, in the case of deep neural networks. There are multiple ways to perform such incremental learning BID12 BID17. The simplest way is to incrementally fine-tune the network to new tasks by continuing to train the network with new training data. However, such simple retraining of the network can degrade the performance for both the new tasks and the old ones. If the new task is largely different from the older ones, such as in the case where previous tasks are classifying images of animals and the new task is to classify images of cars, then the features learned on the previous tasks may not be useful for the new one. At the same time, the retrained representations for the new task could adversely affect the old tasks, as they may have drifted from their original meanings and are no longer optimal for them. For example, a feature describing the stripe pattern of a zebra may change its meaning for a later classification task involving classes such as striped t-shirts or fences, which can fit to the feature and drastically change its meaning. Then how can we ensure that the knowledge sharing through the network is beneficial for all tasks, in the online/incremental learning of a deep neural network? Recent work suggests either using a regularizer that prevents the parameters from drastic changes in their values yet still enables finding a good solution for the new task BID4, or blocking any changes to the old task parameters BID12.
[FIG0 caption] (a) Retraining models such as EWC BID4 retrains the entire network learned on previous tasks while regularizing it to prevent large deviation from the original model. Units and weights colored in red denote the ones that are retrained, and black ones are the ones that remain fixed.
(b) Non-retraining models such as Progressive Network BID12 expand the network for the new task t, while withholding modification of network weights for previous tasks. (c) Our DEN selectively retrains the old network, expanding its capacity when necessary, and thus dynamically decides its optimal capacity as it trains on.
Our strategy is different from both approaches, since we retrain the network at each task t such that each new task utilizes and changes only the relevant part of the previously trained network, while still allowing the network capacity to expand when necessary. In this way, each task t will use a different subnetwork from the previous tasks, while still sharing a considerable part of the subnetwork with them. FIG0 illustrates our model in comparison with existing deep lifelong learning methods. There are a number of challenges that need to be tackled for such an incremental deep learning setting with selective parameter sharing and dynamic layer expansion. 1) Achieving scalability and efficiency in training: If the network grows in capacity, training cost per task will increasingly grow as well, since the later tasks will establish connections to a much larger network. Thus, we need a way to keep the computational overhead of retraining low. 2) Deciding when to expand the network, and how many neurons to add: The network might not need to expand its size if the old network sufficiently explains the new task. On the other hand, it might need to add in many neurons if the task is very different from the existing ones. Hence, the model needs to dynamically add in only the necessary number of neurons. 3) Preventing semantic drift, or catastrophic forgetting, where the network drifts away from the initial configuration as it trains on, and thus shows degenerate performance for earlier examples/tasks. As our method retrains the network, even partially, to fit to later learned tasks, and adds in new neurons which might also negatively affect the prior tasks by establishing connections to the old subnetwork, we need a mechanism to prevent potential semantic drift. To overcome such challenges, we propose a novel deep network model along with an efficient and effective incremental learning algorithm, which we name Dynamically Expandable Networks (DEN). In a lifelong learning scenario, DEN maximally utilizes the network learned on all previous tasks to efficiently learn to predict for the new task, while dynamically increasing the network capacity by adding in or splitting/duplicating neurons when necessary. Our method is applicable to any generic deep network, including convolutional networks. We validate our incremental deep neural network for lifelong learning on multiple public datasets, on which it achieves similar or better performance than the model that trains a separate network for each task, while using only 11.9%p − 60.3%p of its parameters. Further, fine-tuning of the learned network on all tasks obtains even better performance, outperforming the batch model by as much as 0.05%p − 4.8%p. Thus, our model can also be used for structure estimation to obtain optimal performance over network capacity even when batch training is possible, which is a more general setup. Lifelong learning Lifelong learning BID13 is the learning paradigm for continual learning where the model learns from a sequence of tasks while transferring knowledge obtained from earlier tasks to later ones.
Since its inception by BID13, it has been extensively studied due to its practicality in scenarios where the data arrives in streams, such as in autonomous driving or the learning of robotic agents. Lifelong learning is often tackled as an online multi-task learning problem, where the focus is on efficient training as well as on knowledge transfer. BID3 suggest an online lifelong learning framework (ELLA) that is based on an existing multi-task learning formulation BID7 and efficiently updates latent parameter bases for a sequence of tasks, by removing the dependency on previous tasks for the learning of each task predictor, and by avoiding retraining of previous task predictors. Recently, lifelong learning has been studied in deep learning frameworks; since lifelong learning of a deep network can be straightforwardly done by simple re-training, the primary focus of research is on overcoming catastrophic forgetting BID4 BID12 BID16 BID10. Preventing catastrophic forgetting Incremental or lifelong learning of deep networks results in the problem known as catastrophic forgetting, which describes the case where retraining the network for new tasks results in the network forgetting what was learned for previous tasks. One solution to this problem is to use a regularizer that prevents the new model from deviating too much from the previous one, such as an ℓ2-regularizer. However, use of the simple ℓ2-regularizer prevents the model from learning new knowledge for the new tasks, which results in suboptimal performance on later tasks. To overcome this limitation, BID4 proposed a method called Elastic Weight Consolidation (EWC) that regularizes the model parameter at each step toward the model parameter at the previous iteration via the Fisher information matrix for the current task, which enables finding a good solution for both tasks. BID16 proposed a similar approach, but their approach computes the per-synapse consolidation online, and considers the entire learning trajectory rather than the final parameter value. Another way to prevent catastrophic forgetting is to completely block any modifications to the previous network, as done in BID12, where at each learning stage the network is expanded with a subnetwork of fixed capacity that is trained with incoming weights from the original network, but without backpropagating to it. Dynamic network expansion There are a few existing works that have explored neural networks that can dynamically increase their capacity during training. BID17 propose to incrementally train a denoising autoencoder by adding in new neurons for a group of difficult examples with high loss, and later merging them with other neurons to prevent redundancy. Recently, BID11 proposed a nonparametric neural network model which not only learns to minimize the loss but also finds the minimum dimensionality of each layer that can reduce the loss, under the assumption that each layer has an infinite number of neurons. BID2 also propose a network that can adaptively learn both the structure and the weights to minimize the given loss, based on boosting theory. However, none of these works considered the multi-task setting, and they involve an iterative process of adding in neurons (or sets of neurons), while our method only needs to train the network once for each task to decide how many neurons to add. BID15 propose a method to incrementally train a network for multi-class classification, where the network not only grows in capacity, but forms a hierarchical structure as new classes arrive at the model.
The model, however, grows and branches only the topmost layers, while our method can increase the number of neurons at any layer. We consider the problem of incremental training of a deep neural network under the lifelong learning scenario, where an unknown number of tasks with unknown distributions of training data arrive at the model in sequence. Specifically, our goal is to learn models for a sequence of T tasks, t = 1,..., t,..., T for unbounded T, where the task at time point t comes with training data DISPLAYFORM0. Note that each task t can be either a single task, or comprised of a set of subtasks. While our method is generic to any kind of task, for simplicity we only consider the binary classification task, that is, y ∈ {0, 1} for input feature x ∈ R^d. The main challenge in the lifelong learning setting is that all the previous training datasets up to t − 1 are not available at the current time t (only the model parameters for the previous tasks are accessible, if any). The lifelong learning agent at time t aims to learn the model parameter W^t by solving the following problem: DISPLAYFORM1
[Figure 2 caption: Incremental learning of a dynamically expandable network. Left: Selective retraining. DEN first identifies neurons that are relevant to the new tasks, and selectively retrains the network parameters associated with them. Center: Dynamic network expansion. If the selective retraining fails to obtain the desired loss below a set threshold, we expand the network capacity in a top-down manner, while eliminating any unnecessary neurons using group-sparsity regularization. Right: Network split/duplication. DEN calculates the drift ρ ...]
Here L is the task-specific loss function, W^t is the parameter for task t, and Ω(W^t) is the regularization (e.g. the element-wise ℓ2 norm) that constrains the model W^t appropriately. In the case of a neural network, which is our primary interest, DISPLAYFORM2 is the weight tensor. To tackle these challenges of lifelong learning, we let the network maximally utilize the knowledge obtained from the previous tasks, while allowing it to dynamically expand its capacity when the accumulated knowledge alone cannot sufficiently explain the new task. Figure 2 and Algorithm 1 describe our incremental learning process. In the following subsections, we describe each component of our incremental learning algorithm in detail: 1) selective retraining, 2) dynamic network expansion, and 3) network split/duplication. Selective Retraining. The most naive way to train the model for a sequence of tasks would be to retrain the entire model every time a new task arrives. However, such retraining will be very costly for a deep neural network. Thus, we suggest performing selective retraining of the model, by retraining only the weights that are affected by the new task. Initially (t = 1), we train the network with ℓ1-regularization to promote sparsity in the weights, such that each neuron is connected to only a few neurons in the layer below, where 1 ≤ l ≤ L denotes the l-th layer of the network, W^t_l is the network parameter at layer l, and µ is the regularization parameter of the element-wise ℓ1 norm for sparsity on W. For convolutional layers, we apply the sparsity-inducing norm on the filters, to select only a few filters from the previous layer. Throughout our incremental learning procedure, we maintain W^{t−1} to be sparse, thus we can drastically reduce the computation overhead if we can focus on the subnetwork connected to the new task.
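Before describing how that subnetwork is identified for a new task, here is a rough PyTorch sketch of the sparse initial training just outlined (Eq. 2). The layer sizes mirror the feedforward base network used later (312-128 hidden units), but the regularization weight mu and the training-loop details are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# base network for task t = 1 with a single output unit o_1
net = nn.Sequential(nn.Linear(784, 312), nn.ReLU(),
                    nn.Linear(312, 128), nn.ReLU(),
                    nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
mu = 1e-4   # element-wise l1 strength (placeholder value)

def train_step(x, y):
    """One SGD step on task loss + element-wise l1 penalty that promotes sparse weights."""
    opt.zero_grad()
    task_loss = bce(net(x).squeeze(1), y)
    l1 = sum(p.abs().sum() for name, p in net.named_parameters() if "weight" in name)
    (task_loss + mu * l1).backward()
    opt.step()
    return task_loss.item()
```

In practice the small weights would afterwards be thresholded to exact zeros, so that the search over affected units described next has a genuinely sparse connectivity pattern to traverse.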
To this end, when a new task t arrives at the model, we first fit a sparse linear model to predict task t using the topmost hidden units of the neural network, by solving the following problem: DISPLAYFORM0 where W^{t−1}_{1:L−1} denotes the set of all other parameters except W^t_{L,t}. That is, we solve this optimization to obtain the connections between output unit o_t and the hidden units at layer L−1 (fixing all other parameters up to layer L−1 as W^{t−1}). Once we build the sparse connections at this layer, we can identify all units and weights in the network that are affected by the training, while leaving the parts of the network that are not connected to o_t unchanged. Specifically, we perform breadth-first search on the network starting from those selected nodes, to identify all units (and input features) that have paths to o_t. Then, we train only the weights of the selected subnetwork S, denoted as W DISPLAYFORM1 We use the element-wise ℓ2 regularizer since the sparse connections have already been established. This partial retraining will result in lower computational overhead and also help with avoiding negative transfer, since neurons that are not selected will not be affected by the retraining process. Algorithm 2 describes the selective retraining process: Add neuron i to S if the weight between i and o_t in W^t_{L,t} is not zero. DISPLAYFORM0 Add neuron i to S if there exists some neuron j ∈ S such that W^{t−1}_{l,ij} ≠ 0. Solve Eq. 4 to obtain W^t_S. Dynamic Network Expansion. In cases where the new task is highly relevant to the old ones, or the aggregated partial knowledge obtained from each task is sufficient to explain the new task, selective retraining alone will be sufficient for the new task. However, when the learned features cannot accurately represent the new task, additional neurons need to be introduced to the network, in order to account for the features that are necessary for the new task. Some existing works BID17 BID12 are based on a similar idea. However, they are either inefficient due to iterative training that requires repeated forward passes BID17, or add in a constant number of units at each task t without consideration of the task difficulty BID12, and thus are suboptimal in terms of performance and network capacity utility. To overcome these limitations, we instead propose an efficient way of using group sparse regularization to dynamically decide how many neurons to add at which layer for each task, without repeated retraining of the network for each unit. Suppose that we expand the l-th layer of a network with a constant number of units, say k, inducing two expanded parameter matrices DISPLAYFORM1 for the outgoing and incoming layers respectively, where W^N is the expanded weight matrix involving the added neurons. Since we do not always want to add in all k units (depending on the relatedness between the new task and the old tasks), we perform group sparsity regularization on the added parameters as follows: DISPLAYFORM2 where g ∈ G is a group defined on the incoming weights for each neuron. For convolutional layers, we define each group as the activation map for each convolutional filter. This group sparsity regularization was used in BID14 and BID1 to find the right number of neurons for a full network, while we apply it to the partial network. Algorithm 3 describes the details of how expansion works. After selective retraining is done, the network checks if the loss is below a certain threshold. If not, then at each layer we expand its capacity by k neurons and solve for Eq. 5.
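The subnetwork-selection step of Algorithm 2 can be sketched as follows, assuming the network's weights are kept as (sparse) matrices; `hidden_weights` and `out_weights` are illustrative names, and the thresholding of near-zero entries is left implicit. The discussion of the group-sparsity term in Eq. 5 continues after the sketch.

```python
import numpy as np

def select_subnetwork(hidden_weights, out_weights):
    """Identify units with a path to the new output unit o_t (Algorithm 2).

    hidden_weights[i] has shape (n_i, n_{i+1}) and connects layer i to i+1;
    out_weights (shape n_top) holds the sparse connections fitted between
    the topmost hidden layer and o_t.
    """
    selected = [set(np.flatnonzero(out_weights))]          # topmost layer first
    for W in reversed(hidden_weights):
        above = selected[-1]
        below = {i for j in above for i in np.flatnonzero(W[:, j])}
        selected.append(below)                             # follow nonzero incoming edges
    return list(reversed(selected))                        # bottom-to-top order
```

Only the parameters touching the returned units would then be retrained with the element-wise ℓ2 objective of Eq. 4.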
Due to the group sparsity regularization in Eq. 5, hidden units (or convolutional filters) that are deemed unnecessary from the training will be dropped altogether. We expect that through this dynamic network expansion process, the model captures new features that were not previously represented by W^{t−1}_l to minimize residual errors, while maximizing the utilization of the network capacity by avoiding adding in too many units. Input: Dataset D_t, Threshold τ. Perform Algorithm 2 and compute L; if L > τ then add k units h^N at all layers and solve Eq. 5 at all layers; for l = L − 1,..., 1 do: remove useless units in h^N_l. Network Split/Duplication. A crucial challenge in lifelong learning is the problem of semantic drift, or catastrophic forgetting, which describes the problem where the model gradually fits to the later learned tasks and thus forgets what it learned for earlier tasks, resulting in degenerate performance for them. The most popular yet simple way of preventing semantic drift is to regularize the parameters from deviating too much from their original values using ℓ2-regularization, as follows: DISPLAYFORM0 where t is the current task, W^{t−1} is the weight tensor of the network trained for tasks {1, . . ., t − 1}, and λ is the regularization parameter. This ℓ2 regularization will enforce the solution W^t to be found close to W^{t−1}, to the degree given by λ; if λ is small, then the network will be learned to reflect the new task more while forgetting about the old tasks, and if λ is high, then W^t will try to preserve the knowledge learned at previous tasks as much as possible. Rather than placing a simple ℓ2 regularization, it is also possible to weight each element with the Fisher information BID4. Nonetheless, if the number of tasks is large, or if the later tasks are semantically disparate from the previous tasks, it may become difficult to find a good solution for both previous and new tasks. A better solution in such a case is to split the neuron such that we have features that are optimal for the two different tasks. After performing Eq. 6, we measure the amount of semantic drift for each hidden unit i, ρ^t_i, as the ℓ2-distance between its incoming weights at t−1 and at t. Then, if ρ^t_i > σ, we consider that the meaning of the feature has significantly changed during training, and split this neuron i into two copies (properly introducing new edges from and to the duplicate). This operation can be performed for all hidden units in parallel. After this duplication of the neurons, the network needs to train the weights again by solving Eq. 6, since the split changes the overall structure. However, in practice this secondary training usually converges fast due to the reasonable parameter initialization from the initial training. Algorithm 4 describes the split operation. Input: Weights W^{t−1}, Threshold σ. Perform Eq. 6 to obtain W^t; for all hidden units i do: ρ DISPLAYFORM0 Copy i into i′ (with introduction of edges for i′); Perform Eq. 6 with the initialization of W^t to obtain the final W^t. Timestamped Inference. In both the network expansion and network split procedures, we timestamp each newly added unit j by setting {z}_j = t to record the training stage t at which it is added to the network, to further prevent semantic drift caused by the introduction of new hidden units. At inference time, each task will only use the parameters that were introduced up to stage t, to prevent the old tasks from using new hidden units added later in the training process.
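As a rough illustration of the split criterion just described, the sketch below computes the per-unit drift ρ_i and duplicates units whose drift exceeds σ. Initializing the duplicate from the pre-training weights (so one copy keeps the old feature while the other keeps the new one) is an assumption made for illustration; handling of the outgoing weights, the subsequent retraining with Eq. 6, and the timestamping of the new units are omitted.

```python
import numpy as np

def split_drifted_units(W_prev, W_curr, sigma=0.02):
    """Duplicate hidden units whose incoming-weight drift exceeds sigma.

    W_prev, W_curr: incoming weight matrices (n_in x n_hidden) for one layer,
    before and after training on task t with the l2 proximity penalty (Eq. 6).
    Returns the expanded current matrix and the indices that were split.
    """
    drift = np.linalg.norm(W_curr - W_prev, axis=0)        # rho_i for each hidden unit
    split_idx = np.flatnonzero(drift > sigma)
    # appended copies start from the pre-training weights, preserving the old feature
    W_expanded = np.concatenate([W_curr, W_prev[:, split_idx]], axis=1)
    return W_expanded, split_idx
```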
This timestamped inference is a more flexible strategy than fixing the weights learned up to each learning stage as in BID12, since early tasks can still benefit from the learning at later tasks, via units that are further trained, but not split. Baselines and our model. 1) DNN-STL. Base deep neural network, either feedforward or convolutional, trained for each task separately. 2) DNN-MTL. Base DNN trained for all tasks at once. 3) DNN. Base DNN. All incremental models use ℓ2-regularization. 4) DNN-L2. Base DNN, where at each task t, W^t is initialized as W^{t−1} and continuously trained with ℓ2-regularization between W^t and W^{t−1}. 5) DNN-EWC. Deep network trained with elastic weight consolidation BID4 for regularization. 6) DNN-Progressive. Our implementation of the progressive network BID12, whose network weights for each task remain fixed for later arriving tasks. Base network settings. 1) Feedforward networks: We use a two-layer network with 312-128 neurons with ReLU activations. 2) Convolutional networks: For experiments on the CIFAR-100 dataset, we use a modified version of AlexNet BID6 that has five convolutional layers (64-128-256-256-128 depth with 5 × 5 filter size), and three fully-connected layers (384-192-100 neurons at each layer). All models and algorithms are implemented using the Tensorflow BID0 library. We will release our code upon acceptance of our paper, for reproduction of the results. Datasets. 1) MNIST-Variation. This dataset consists of 62,000 images of handwritten digits from 0 to 9. Unlike MNIST, the handwritten digits are rotated to arbitrary angles and have background noise, which makes the prediction task more challenging. We use 1,000/200/5,000 images for the train/val/test split for each class. We form each task to be one-versus-rest binary classification. 2) CIFAR-100. This dataset consists of 60,000 images of 100 generic object classes BID5. Each class has 500 images for training and 100 images for testing. We used a CNN as the base network for the experiments on this dataset, to show that our method is applicable to CNNs. Further, we considered each task as a set of 10 subtasks, each of which is a binary classification task on one class. 3) AWA (Animals with Attributes). This dataset consists of 30,475 images of 50 animals BID8. For features, we use the DECAF features provided with the dataset, whose dimensionality is reduced to 500 by PCA. We use random splits of 30/30/30 images for training/validation/test. We validate our models for both prediction accuracy and efficiency, where we measure efficiency by the network size at the end of training and by training time. We first report the average per-task performance of the baselines and our models in the top row of Figure 3. DNN-STL showed the best performance on the AWA and CIFAR-100 datasets; it is expected to perform well, since it is trained to be optimal for each task, while all other models are trained online, which might cause semantic drift. When the number of tasks is small, MTL works best due to knowledge sharing via multi-task learning, but when the number of tasks is large, STL works better since it has larger learning capacity than MTL. Our model, DEN, performs almost the same as these batch models, and even outperforms them on the MNIST-Variation dataset. Retraining models combined with regularization, such as L2 and EWC, do not perform well, although the latter outperforms the former. This is expected, as the two models cannot dynamically increase their capacity. The Progressive network works better than the two, but it underperforms DEN on all datasets.
The performance gap is most significant on AWA, as the larger number of tasks (T = 50) may have made it more difficult to find the appropriate network capacity. If the network is too small, then it will not have sufficient learning capacity to represent new tasks, and if too large, it will become prone to overfitting. We further report the performance of each model over network capacity, measured relative to MTL on each dataset, in Figure 3 (bottom row). For baselines, we report the performance of multiple models with different network capacities. DEN obtains much better performance with a substantially smaller number of parameters than the Progressive network, or obtains significantly better performance using a similar number of parameters. DEN also obtains the same level of performance as STL using only 18.0%, 60.3%, and 11.9% of its capacity on MNIST-Variation, CIFAR-100, and AWA respectively. This also shows the main advantage of DEN, which is being able to dynamically find its optimal capacity, since it learns a very compact model on MNIST-Variation, whilst learning a substantially larger network on CIFAR-100. Further fine-tuning of DEN on all tasks (DEN-Finetune) obtains the best performing model on all datasets, which shows that DEN is not only useful for lifelong learning, but can also be used for network capacity estimation when all tasks are available in the first place. Effect of selective retraining. We further examine how efficient and effective the selective retraining is, by measuring the training speed and the area under the ROC curve on the MNIST-Variation dataset.
[Figure 5 caption: Semantic drift experiment on the MNIST-Variation dataset. We report the AUROC of different models on t = 1, t = 4, and t = 7 at each training stage to see how the model performance changes over time for these tasks. Reported AUROC is the average over five random splits.]
To this end, we compare the model without network expansion, which we refer to as DNN-Selective, against retraining on DNN-L2 and DNN-L1 (Eq. FORMULA4), for both accuracy and efficiency. Figure 4 (a) shows the accuracy over training time, measured as the actual time spent on GPU computation, for each model. We observe that selective retraining takes significantly less time than full retraining of the network, and even less than DNN-L1, which comes with sparse network weights. Further, whereas DNN-L1 obtained slightly lower accuracy than DNN-L2, DNN-Selective improves the accuracy over the base network by 2%p. This accuracy gain may be due to the suppression of catastrophic forgetting, as DEN trains only a partial subnetwork for each newly introduced task. Figure 4 (b) shows the number of selected neurons at each layer with selective retraining. Note that DNN-Selective mostly selects a smaller portion of the upper-level units, which are more task-specific, while selecting a larger portion of the more generic lower-layer units. Effect of network expansion. We also compare the effectiveness of the network expansion against a variant of our model that does selective retraining and layer expansion, but without network split. We refer to this model as DNN-Dynamic. We compare DNN-Dynamic with DNN-L2 used in the main experiment, and DNN-Constant, which is a version of our model that expands its capacity at each layer by a fixed number of units, on the MNIST-Variation dataset. Figure 4 (c) shows the experimental results.
DNN-Dynamic obtains the best mean AUROC, significantly outperforming all models including DNN-Constant, while increasing the size of the network substantially less than DNN-Constant (k=20). This may be because having fewer parameters is not only beneficial in terms of training efficiency, but also advantageous in preventing the model from overfitting. We can set the network capacity of DNN-Constant to be similar (k=13) to obtain better accuracy, but it still underperforms DEN, which can dynamically adjust the number of neurons at each layer. Effect of network split/duplication and timestamped inference. To see how network split/duplication and unit timestamping help prevent semantic drift (or catastrophic forgetting), while allowing good performance on later tasks, we compare the performance of our model against the baselines and also a variant of our DEN without timestamped inference (DEN-No-Stamp) at different learning stages. Each panel in Figure 5 (a), (b), and (c) shows how the performance of the model changes at each training stage t, for tasks t=1, t=4, and t=7. We observe that DNN-L2 prevents semantic drift of the models learned at early stages, but results in increasingly worse performance on later tasks (t=4, 7). DNN-EWC, on the other hand, has better performance on later tasks than DNN-L2, as reported in BID4. However, it shows significantly lower performance than both DNN-Progressive and our model, which may be due to its inability to increase network capacity, which may result in limited expressive power. DNN-Progressive shows no semantic drift on old tasks, which is expected because it does not retrain parameters for them. DEN w/o Timestamping works better than DNN-Progressive on later tasks, with slight performance degeneration over time. Finally, our full model with timestamped inference, DEN, shows no sign of noticeable performance degeneration at any learning stage, while significantly outperforming DNN-Progressive. This shows that DEN is highly effective in preventing semantic drift as well. We proposed a novel deep neural network for lifelong learning, the Dynamically Expandable Network (DEN). DEN performs partial retraining of the network trained on old tasks by exploiting task relatedness, while increasing its capacity when necessary to account for the new knowledge required by new tasks, thereby finding the optimal capacity for itself, while also effectively preventing semantic drift. We implement both feedforward and convolutional neural network versions of our DEN, and validate them on multiple classification datasets under lifelong learning scenarios, on which they significantly outperform the existing lifelong learning methods, achieving almost the same performance as the network trained in batch while using as little as 11.9%p − 60.3%p of its capacity. Further fine-tuning of the models on all tasks results in models that outperform the batch models, which shows that DEN is useful for network structure estimation as well. We provide additional experimental results on the Permuted MNIST dataset BID9. This dataset consists of 70,000 images of handwritten digits from 0 to 9, where 60,000 images are used for training and 10,000 images for testing. The difference of this dataset from the original MNIST is that each of the ten tasks is the multi-class classification of a different random permutation of the input pixels. FIG4 shows the results of this experiment. Our DEN outperforms all lifelong learning baselines while using only 1.39 times the base network capacity.
Further, DEN-Finetune achieves the best AUROC among all models, including DNN-STL and DNN-MTL.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Sk7KsfW0-
We propose a novel deep network architecture that can dynamically decide its network capacity as it trains on a lifelong learning scenario.
This paper fosters the idea that deep learning methods can be used alongside classical visual odometry pipelines to improve their accuracy and to produce uncertainty models for their estimations. We show that the biases inherent to the visual odometry process can be faithfully learnt and compensated for, and that a learning architecture associated with a probabilistic loss function can jointly estimate a full covariance matrix of the residual errors, defining a heteroscedastic error model. Experiments on autonomous driving image sequences and micro aerial vehicle camera acquisitions assess the possibility of concurrently improving visual odometry and estimating an error associated with its outputs.
[ 1, 0, 0 ]
SklqvxSFDB
This paper discusses different methods of pairing VO with deep learning and proposes a simultaneous prediction of corrections and uncertainty.
Building robust online content recommendation systems requires learning complex interactions between user preferences and content features. The field has evolved rapidly in recent years from traditional multi-arm bandit and collaborative filtering techniques, with new methods integrating Deep Learning models that enable capturing non-linear feature interactions. Despite progress, the dynamic nature of online recommendations still poses great challenges, such as finding the delicate balance between exploration and exploitation. In this paper we provide a novel method, Deep Density Networks (DDN), which deconvolves measurement and data uncertainty and predicts probability densities of CTR, enabling us to perform more efficient exploration of the feature space. We show the usefulness of using DDN online in a real-world content recommendation system that serves billions of recommendations per day, and present online and offline results to evaluate the benefit of using DDN. In order to navigate the vast amounts of content on the internet, users either rely on active search queries, or on passive content recommendations. As the amount of content on the internet grows, content discovery becomes an increasingly crucial challenge, shaping the way content is consumed by users. Taboola's content discovery platform aims to perform "reverse search", using computational models to match content to users who are likely to engage with it. Taboola's content recommendations are shown in widgets that are usually placed at the bottom of articles (see FIG0) in various websites across the internet, and serve billions of recommendations per day, with a user base of hundreds of millions of active users. Traditionally, recommender systems have been modeled in a multi-arm bandit setting, in which the goal is to find a strategy that balances exploitation and exploration in order to maximize the long-term reward. Exploitation regimes try to maximize the immediate reward given the available information, while exploration seeks to extract new information from the feature space, subsequently increasing the performance of the exploitation module. One of the simplest approaches to deal with multi-arm bandit problems is the ε-greedy algorithm, in which with probability ε a random recommendation is chosen, and with probability 1 − ε the recommendation with the highest predicted reward is chosen. Upper Confidence Bound (UCB) BID0 and Thompson sampling techniques use prediction uncertainty estimations in order to perform more efficient exploration of the feature space, either by explicitly adding the uncertainty to the estimation (UCB) or by sampling from the posterior distribution (Thompson sampling). Estimating prediction uncertainty is crucial in order to utilize these methods. Online recommendations are noisy and probabilistic by nature, with measured values being only a proxy for the true underlying distribution, leading to additional interesting challenges when predicting uncertainty estimations. In this paper we present DDN, a unified deep neural network model which incorporates both measurement and data uncertainty, having the ability to be trained end-to-end while facilitating the exploitation/exploration selection strategy. We introduce a mathematical formulation to deconvolve measurement noise, and to provide data uncertainty predictions that can be utilized to improve exploration methods. Finally, we demonstrate the benefit of using DDN in a real-world content recommendation system.
Over the past decade deep learning has been applied with tremendous success in many different application domains such as computer vision, speech recognition and machine translation. In recent years we have seen a corresponding explosion of deep learning models in the recommender systems landscape, revolutionizing recommendation architectures and providing superior performance over traditional models (BID17; BID3; BID4; BID14). Deep learning's ability to capture non-linearities has enabled modeling complex user-item relations and integrating higher-level representations of data sources such as contextual, textual and visual input. Traditionally, recommender systems have been modeled in a multi-arm bandit setting, where the goal is to find an exploitation/exploration selection strategy in order to maximize the long-term reward. A similar challenge has been faced in the reinforcement learning (RL) setting, in which an agent has to decide when to forego an immediate reward and to explore its environment. Bayesian neural networks using distributions over the weights have been applied by using either sampling or stochastic variational inference (BID9; BID15). While Bayesian models offer a mathematically grounded framework, they usually entail a prohibitive computational cost. BID2 proposed the Bayes by Backprop algorithm for variational posterior estimation and applied Thompson sampling. BID6 proposed Monte Carlo (MC) dropout, a Bayesian approximation of model uncertainty obtained by extracting estimations from the different sub-models that have been trained using dropout. BID8 separated uncertainty into two types, model and data uncertainty, while studying the effect of each uncertainty separately in computer vision tasks. BID10 formulated the exploration/exploitation trade-off in personalized article recommendation as a contextual bandit problem and proposed the LinUCB algorithm, which adapts the UCB strategy to support models based on contextual features. The effect of measurement noise and noisy labels has been studied extensively (BID5). BID12 proposed a probabilistic model for the conditional probability of seeing a wrong label, where the correct label is a latent variable of the model. Other work explicitly modelled noise by an additional softmax layer that connects the correct labels to the noisy ones. In this paper we model measurement noise using a Gaussian model and combine it with an MDN.
3 TABOOLA'S RECOMMENDER SYSTEM OVERVIEW
Taboola's revenue stream is facilitated by online advertisers, who pay a fixed amount, CPC (Cost Per Click), for each user that is redirected to their site after clicking on a Taboola recommendation. The algorithm's total value is measured in RPM (Revenue Per Mille), where RPM = CTR * CPC * 1000 is the average revenue for every 1000 recommendations and CTR (Click Through Rate) is the probability of a recommendation being clicked. Content recommendations are ranked according to their predicted RPM; recommendations with the highest predicted RPM will be shown to the user. Taboola's main algorithmic challenge is to provide an estimate of the CTR in any given context. Taboola's recommendation engine needs to provide recommendations within strict time constraints (< 50ms). It is infeasible to rank millions of recommendations in that time frame; in order to support this we have partitioned the system into a two-step process, candidation and ranking (see FIG1).
During the candidation phase, we narrow down the list of possible recommendations to thousands based on their RPM prediction in a specific context. CTR prediction in this setting is based on features such as the creative of the recommendation (text and image) and empirical click statistics. This relatively small list of recommendations is written to distributed databases in worldwide data centers, and is re-calculated by Taboola's servers continuously throughout the day. When the frontend servers get a request for recommendations from the browser, they retrieve the relevant ready-made recommendation list, and perform an additional ranking of the recommendations based on additional user features using a DNN, further personalizing the recommendations. This system architecture shows similarities to BID3. The dynamic nature of Taboola's marketplace means that our algorithm constantly needs to evaluate new recommendations, with tens of thousands of new possible recommendations every day. To support this, we split the algorithm into exploration and exploitation modules. The exploitation module aims to choose the recommendations that maximize the RPM, while the exploration module aims to enrich the dataset available for exploitation models by showing new recommendations. In this paper we focus on the candidation phase and the corresponding CTR prediction task, leaving the second ranking step out of scope. In this section we present the Deep Density Network (DDN) and describe its ability to deconvolve measurement noise and integrate it with data uncertainty in a unified model. Employing uncertainty during the training phase can be interpreted as loss attenuation, making our model more robust to noisy data. In addition, accurate uncertainty estimations enable us to employ more efficient exploitation/exploration selection strategies, as discussed below. Our deep recommender model is a hybrid of a content-based and a collaborative filtering (CF) recommendation system. A high-level overview is depicted in FIG2. We use two separate subnets which model the target and the context features. The target subnet receives as input the content features seen by the user and additional features, such as the recommendation age, which are unseen by the user. The categorical features are passed through an embedding layer and concatenated along with the numerical features, followed by fully connected layers with a ReLU activation function. The result is the target feature descriptor. Similarly, the context features are modeled using a separate DNN, taking as input context features such as the device type on which the target will be recommended, resulting in the context feature descriptor. The target and context feature descriptors are then fused using a DNN which outputs both the CTR and its uncertainty prediction, with the measurement noise being compensated for as described in Sec. 4.2. In order to train our models, we collect and use historical data which consists of target and context pairs (t, c), where t is the target we recommended in a specific browsing context c. Each row in our training dataset includes the empirical CTR of the target-context pair (t, c), together with additional target and context features. We train models that optimize CTR prediction on this dataset using Maximum Likelihood Estimation (MLE), as discussed below.
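A minimal PyTorch sketch of the two-subnet architecture just described follows. Embedding sizes, hidden widths, and the restriction to a single categorical feature per subnet are placeholders; the production model also consumes numerical and textual inputs, and the head here directly emits the Gaussian-mixture parameters that are defined in the next section.

```python
import torch
import torch.nn as nn

class DDN(nn.Module):
    """Target and context subnets are fused; the head predicts a mixture over CTR."""
    def __init__(self, n_target_cat, n_context_cat, emb_dim=16, n_mix=3):
        super().__init__()
        self.t_emb = nn.Embedding(n_target_cat, emb_dim)
        self.c_emb = nn.Embedding(n_context_cat, emb_dim)
        self.t_net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU())
        self.c_net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU())
        self.fuse = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, 3 * n_mix)      # alpha, mu, sigma per component

    def forward(self, target_ids, context_ids):
        t = self.t_net(self.t_emb(target_ids))      # target feature descriptor
        c = self.c_net(self.c_emb(context_ids))     # context feature descriptor
        h = self.fuse(torch.cat([t, c], dim=-1))
        alpha_logit, mu, log_sigma = self.head(h).chunk(3, dim=-1)
        return alpha_logit.softmax(-1), mu, log_sigma.exp()
```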
Measurement uncertainty corresponds to the uncertainty of our observation due to the measurement noise introduced by the binomial recommendation experiment. This type of uncertainty depends on the number of times r that a specific target pair x = (t, c) was recommended, i.e. target t was recommended in context c. Data uncertainty corresponds to the inherent noise of the observations; in contrast to measurement noise, it cannot be reduced even if more data were to be collected, as it corresponds to the inherent variability of the data. Data uncertainty is categorized into homoscedastic and heteroscedastic uncertainty. Homoscedastic uncertainty is constant over all different inputs, in contrast to heteroscedastic uncertainty, which depends on the inputs, i.e. different input values may have noisier outputs than others. Simple algorithms like ε-greedy choose actions indiscriminately during exploration, with no specific preference for targets that have a higher probability of being successful in exploitation, or for targets that hold significant information gain about the feature space, for example targets that contain words that weren't previously recommended. It is beneficial to select among the non-greedy actions according to their potential of actually being optimal, taking into account both the expectation and the variance of their CTR estimates. Estimating uncertainty enables us to employ the upper confidence bound (UCB) algorithm for a better and adaptive selection strategy between exploitation and exploration. We estimate both the mean payoff µ_t and the standard deviation σ_t of each target t and select the target that achieves the highest UCB score, where a is a tunable parameter: DISPLAYFORM0 Our marketplace is defined by a very high recommendation turnover rate, with new content being uploaded every day and old content becoming obsolete. Probabilistic modeling of the data uncertainty assists us in using the exploration model in order to sample targets that have the highest potential value, by employing the UCB strategy. In contrast to the variance captured by data uncertainty, model uncertainty corresponds to what the model "knows" about the feature space. BID6 show that model uncertainty can capture the confidence about different values in the feature space. This however comes at a prohibitive computational cost when calculated using dropout. We explore the feature space by setting to Out Of Vocabulary (OOV) those categorical feature values which have been shown fewer times than a minimal threshold. As shown in FIG3, OOV values indeed get larger uncertainty estimations. In order to deconvolve the data and measurement uncertainties we explicitly model them together. Let Y, Y* and ε be three random variables given x = (t, c). Y corresponds to the observed CTR after recommending the (t, c) pair r times. Y* corresponds to the true/clean CTR without the measurement noise, i.e. the observed CTR had we recommended t infinitely many times in c. ε corresponds to the binomial noise error distribution. DISPLAYFORM0 We model data uncertainty by placing a distribution over the output of the model and learning it as a function of the different inputs. To this end, we use a Mixture Density Network (MDN), which employs a Gaussian Mixture Model (GMM) to model Y*.
DISPLAYFORM1 For every input the MDN model predicts the coefficients of the GMM; these are the mixing coefficients α_i, the means µ_i, and the standard deviations σ_i, from which we estimate the expected value and the standard deviation of Y*. The measurement uncertainty corresponds to the measurement noise distribution, which we approximate with a Gaussian distribution: DISPLAYFORM2 Due to the fact that the data noise is small given x, we enforce a constant σ = f(µ, r) for every y*|x, where µ is the expected value of y*|x. In this way, Y* and ε are independent given x, as σ depends only on r and µ. We can rewrite eq. 2 using eq. 3 and 6 to: DISPLAYFORM3 This enables us to deconvolve and model both data and measurement uncertainties, using a single model which combines an MDN and a Gaussian model. Given this probability estimation, the training process uses SGD for minimizing the loss: DISPLAYFORM4 Data: For the purpose of this paper we use the browsed website (i.e. publisher) as the user context. In all of the experiments we used three months of historical data, containing ∼10M records of target-publisher pairs. The dataset contains ∼1M unique targets and ∼10K unique publishers. Every experiment has been run on multiple time slots to validate that the results were statistically significant. We have experimented with the following models: 1. REG is a regression model that outputs a point estimate of the CTR, where the loss is the MSE between the actual and the predicted CTR. 2. MDN is a model that estimates the distribution over the CTR utilizing a mixture of Gaussians (see Sec. 4.2.1). 3. DDN is a model that estimates the distribution over the CTR combining the data and measurement uncertainties (see Sec. 4.2.1). In order to have a fair comparison, we tuned the hyper-parameters (e.g. embedding sizes, number of layers, number of mixtures) for each model separately; we performed thousands of iterations of random search, and chose the parameters that yielded the best results. We evaluate our models using Mean Square Error (MSE). Due to the dynamic nature of online recommendations it is crucial that we evaluate our models online within an A/B testing framework, by measuring the average RPM of models across different publishers. In addition we utilize an online throughput metric which aims to capture the effectiveness of the exploration mechanism. Prior works have put effort into how to measure exploration; BID11 built an offline simulator that enabled them to test different models to see which one achieves target coverage faster. This is not feasible in our case given the large turnover rate in the recommendation pool. Instead, we use the following targets throughput metric. Let <t_i, p_i> be the set of target-publisher pairs that accumulated enough data to achieve empirical binomial statistical significance in a given day. A model is said to be contributing to <t_i, p_i> if it has recommended t_i in p_i in the previous day more times than a predefined threshold. Our throughput metric is defined by the number of targets that a specific model contributed to this set.
[Table 3: RPM lift vs. targets throughput as a function of different values of a (throughput column: 6.5%, 9.1%, 11.7%).]
Feature importance: Understanding the parameters of deep learning networks poses a significant challenge compared to linear and tree-based models. We utilize the fact that our models output a full distribution rather than a point estimate to evaluate feature importance. In our analysis, we evaluate the effect on the σ prediction when a feature is "hidden" from the model during inference, by setting it to OOV.
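Before moving on to the feature-importance analysis, here is a sketch of the training loss just defined: the negative log-likelihood of the observed CTR under the predicted Gaussian mixture convolved with the Gaussian measurement noise. Approximating the measurement-noise variance by the binomial form µ(1 − µ)/r is an assumption made for illustration; the paper only states that the noise scale is a function of µ and r.

```python
import torch

def ddn_nll(y, r, alpha, mu, sigma):
    """Negative log-likelihood of observed CTR y (shape [B]) given r impressions,
    mixture weights alpha, means mu and stds sigma (each of shape [B, K])."""
    eps_var = (mu * (1.0 - mu)).clamp_min(0.0) / r.unsqueeze(-1).clamp_min(1.0)
    var = sigma ** 2 + eps_var            # Gaussian component + Gaussian noise
    comp = torch.distributions.Normal(mu, var.sqrt())
    log_prob = comp.log_prob(y.unsqueeze(-1)) + alpha.clamp_min(1e-12).log()
    return -torch.logsumexp(log_prob, dim=-1).mean()
```

Because r enters the likelihood, rows backed by few impressions contribute a flatter density and are effectively down-weighted, which is the loss-attenuation effect described above.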
For each feature, we calculate statistics over the ratio σ_oov/σ between the predicted σ before and after setting the feature to OOV. In FIG3 we observe that the analyzed features have a large impact on data uncertainty. The median values for the various features are greater than one, validating our assumption that feature values that did not appear in the training data will obtain a higher uncertainty estimation. In addition, we see a distinct ordering of feature importance, where new advertisers yield a larger ratio than new targets. Using σ in a UCB setting (as in equation 1) will prioritize new targets, especially ones from new advertisers, a desired behaviour both in terms of information gain and advertiser satisfaction. In TAB0 we compare the MDN and DDN models by training them on two different datasets, D1 and D2. D1 differs from D2 by the amount of noise in the training samples; D1 contains noisy data points with a relatively small amount of empirical data, while D2 contains examples with higher empirical statistical significance. We observe that DDN improves on MDN performance by 2.7% when using D1 for training, and by 5.3% when using D2. This validates that integrating measurement noise into our modeling is crucial when the training data contains very noisy samples, by attenuating the impact of measurement noise on the loss function (see Sec. 4.2.1). In TAB2 we compare the three different models discussed previously in terms of online RPM. We observe that DDN is the best performing model, outperforming MDN and REG by 1.7% and 2.9% respectively. These results verify once again that the loss attenuation achieved by DDN during training has enabled it to converge to better parameters, generalizing better to unseen examples. DISPLAYFORM0 We analyzed the effect of the parameter a found in Eq. 1. From a theoretical standpoint, increasing this value is supposed to prioritize higher information gain at the expense of RPM, by choosing targets that the model is uncertain about. This trade-off is worthwhile in the long term. In Table 3 we observe that there is an inverse correlation between RPM and throughput which is triggered by different values of a, with targets throughput increasing by 11.7% when setting a = 1.5. Choosing the right trade-off is an application-specific concern, and we chose the trade-off induced by a = 0.5, resulting in a good throughput gain with a small RPM cost. We have introduced the Deep Density Network (DDN), a unified DNN model that is able to predict probability distributions and to deconvolve measurement and data uncertainties. DDN is able to model non-linearities and capture complex target-context relations, incorporating higher-level representations of data sources such as contextual and textual input. We have shown the added value of using DDN in a multi-arm bandit setting, yielding an adaptive selection strategy that balances exploitation and exploration and maximizes the long-term reward. We presented results validating DDN's improved noise handling capabilities, leading to a 5.3% improvement on a noisy dataset. Furthermore, we observed that DDN outperformed both the REG and MDN models in online experiments, leading to RPM improvements of 2.9% and 1.7% respectively. Finally, by employing DDN's data uncertainty estimation and the UCB strategy, we improved our exploration strategy, achieving a 6.5% increase in targets throughput with only a 0.05% RPM decrease.
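As a closing illustration tied to the analysis of the parameter a above, the UCB selection of Eq. 1 reduces to a one-line scoring rule once the model supplies per-target mean and standard deviation estimates; the function and argument names below are illustrative.

```python
import numpy as np

def ucb_select(mu, sigma, a=0.5, top_k=1000):
    """Rank candidate targets by the UCB score mu + a * sigma (Eq. 1)
    and keep the top_k for the candidation phase."""
    score = mu + a * sigma
    return np.argsort(-score)[:top_k]
```

Larger values of a favor targets with uncertain CTR estimates, trading immediate RPM for targets throughput, as in Table 3.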
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
ryY4RhkCZ
We have introduced Deep Density Network, a unified DNN model to estimate uncertainty for exploration/exploitation in recommendation systems.
While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences. On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018. We begin by showing that tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labelled data; results are robust across several machine learning models and yield geographic-level results in line with prior research. We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change. However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters. Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time, finding correlations between Twitter climate change sentiment and seasonal effects BID1, and clustering Twitter users based on climate mentalities using network analysis BID8. Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows. First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section 3.1). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section 3.2). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster. We henceforth refer to a tweet affirming climate change as a "positive" sample (labeled as 1 in the data), and a tweet denying climate change as a "negative" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the "twint" scraping tool BID9 to sample historical tweets for several different search terms; queries always included either "climate change" or "global warming", and further included disaster-specific search terms (e.g., "bomb cyclone," "blizzard," "snowstorm," etc.). We refer to the first data batch as "influential" tweets, and the second data batch as "event-related" tweets. The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by "influential" tweeters, whom we define as individuals certain to have a classifiable sentiment regarding the topic at hand.
For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from the conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in the ensuing methods (confirmed as reasonable in Section 3.1) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.

The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2-6); the Mendocino, California wildfires (Jul. 27-Sept. 18); Hurricane Florence (Aug. 31-Sept. 19); Hurricane Michael (Oct. 7-16); and the California Camp Fires. For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in TAB0. Note that the number of tweets occurring prior to the two 2018 sets of California fires is relatively small. This is because the magnitude of these wildfires was relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section 3.1, we perform geographic analysis on the event-related tweets for which we can scrape the self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.

To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labelling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples. Model accuracies are reported in TAB1. The RNN pre-trained using GloVe word embeddings BID7 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BID3 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary cross-entropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods.

Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labelling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely make spelling errors or use sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more "real" Twitter users who are climate change believers, e.g. by using the methodology found in BID8.
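A minimal Keras-style sketch of the described classifier is given below. The architecture follows the text (GloVe-initialized embedding, a single LSTM with dropout and a ReLU-style activation, a sigmoid dense output, Adam on binary cross-entropy), but the vocabulary size, embedding dimension, sequence length, LSTM width, and dropout rate are not reported in the excerpt and are assumptions; reading the ReLU as the LSTM's activation is likewise one plausible interpretation.

```python
import numpy as np
from tensorflow.keras import initializers, layers, models

# Assumed hyperparameters; none of these are reported in the excerpt above.
VOCAB_SIZE, EMBED_DIM, MAX_LEN, LSTM_UNITS = 20_000, 100, 50, 128

def build_classifier(glove_matrix=None):
    """GloVe-initialized embedding -> one dropout/ReLU LSTM -> sigmoid output,
    trained with Adam on binary cross-entropy, as described in the text."""
    emb_init = (initializers.Constant(glove_matrix)
                if glove_matrix is not None else "uniform")
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM,
                         embeddings_initializer=emb_init,
                         trainable=glove_matrix is None),
        layers.LSTM(LSTM_UNITS, dropout=0.2, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_classifier()
    # Dummy batch of tokenized tweets (integer word indices), to show shapes only.
    tokens = np.random.randint(0, VOCAB_SIZE, size=(4, MAX_LEN))
    print(model.predict(tokens).shape)   # (4, 1)
```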
Our second goal is to compare the mean values of users' binary sentiments both pre-and post-each natural disaster event. Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare to the prior literature. In FIG0, we map 4-clustering on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BID4: the Southeast and Midwest have lower average sentiments (-0.17 and -0.09, respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure 2, we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre-and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BID6.From these mapping exercises, we claim that our "influential tweet" labelling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure 3, we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre-and post-event (see Section 4). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study. In Figure 3, we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre-and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre-and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. 
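The within-cohort comparison just described can be sketched as follows: restrict to users who tweeted both before and after the event, average each user's predicted sentiment per phase, and test the pre/post difference. The excerpt does not specify which variant of Student's t-test was used; a paired test on per-user means is shown here as one plausible reading, and the column names are assumptions.

```python
import pandas as pd
from scipy import stats

def compare_sentiment(df: pd.DataFrame):
    """df columns: user, phase ('pre' or 'post'), sentiment (+1 / -1).
    Returns overall pre/post means plus a within-cohort comparison restricted
    to users who tweeted in both phases."""
    overall = df.groupby("phase")["sentiment"].mean()

    # Per-user mean sentiment in each phase; keep only users present in both.
    per_user = (df.groupby(["user", "phase"])["sentiment"]
                  .mean().unstack("phase").dropna())
    # Paired test on the same users before and after the event (one plausible
    # reading of the "Student's t-test" mentioned in the text).
    t, p = stats.ttest_rel(per_user["post"], per_user["pre"])
    return overall, per_user[["pre", "post"]].mean(), t, p

if __name__ == "__main__":
    demo = pd.DataFrame({
        "user":      ["a", "a", "b", "b", "c", "d"],
        "phase":     ["pre", "post", "pre", "post", "post", "pre"],
        "sentiment": [-1, 1, 1, 1, 1, -1]})
    print(compare_sentiment(demo))
```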
Methodologically, we conclude that overall averages are not robust for use in sentiment analyses. We now comment on the two events yielding similar results between the overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for the overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in the overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.

There are several caveats to our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BID2. Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear interest in discovering whether social networks can indicate environmental metrics in a "nowcasting" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power our current model retains regarding climate change sentiment in response to natural disasters.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1evmEQg_V
We train RNNs on famous Twitter users to determine whether the general Twitter population is more likely to believe in climate change after a natural disaster.
We study the control of symmetric linear dynamical systems with unknown dynamics and a hidden state. Using a recent spectral filtering technique for concisely representing such systems in a linear basis, we formulate optimal control in this setting as a convex program. This approach eliminates the need to solve the non-convex problem of explicit identification of the system and its latent state, and allows for provable optimality guarantees for the control signal. We give the first efficient algorithm for finding the optimal control signal with an arbitrary time horizon T, with sample complexity (number of training rollouts) polynomial only in log(T) and other relevant parameters. Recent empirical successes of reinforcement learning involve using deep nets to represent the underlying MDP and policy. However, we lack any supporting theory, and are far from developing algorithms with provable guarantees for such settings. We can make progress by addressing simpler setups, such as those provided by control theory. Control theory concerns the control of dynamical systems, a non-trivial task even if the system is fully specified and provable guarantees are not required. This is true even in the simplest setting of a linear dynamical system (LDS) with quadratic costs, since the ing optimization problems are high-dimensional and sensitive to noise. The task of controlling an unknown linear system is significantly more complex, often giving rise to non-convex and high-dimensional optimization problems. The standard practice in the literature is to first solve the non-convex problem of system identification-that is, recover a model that accurately describes the system-and then apply standard robust control methods. The non-convex problem of system identification is the main reason that we have essentially no provable algorithms for controlling even the simplest linear dynamical systems with unknown latent states. In this paper, we take the first step towards a provably efficient control algorithm for linear dynamical systems. Despite the highly non-convex and high-dimensional formulation of the problem, we can efficiently find the optimal control signal in polynomial time with optimal sample complexity. Our method is based on wave-filtering, a recent spectral representation technique for symmetric LDSs BID7 ). A dynamical system converts input signals {1, . . .,} ∈ R into output signals {1, . . .,} ∈ R, incurring a sequence of costs 1,..., ∈ R. We are interested in controlling unknown dynamical systems with hidden states (which can be thought of as being partially "observed"via the output signals). A vast body of work focuses on linear dynamical systems with quadratic costs, in which the {} and {} are governed by the following dynamics: DISPLAYFORM0 where ℎ 1,..., ℎ ∈ R is a sequence of hidden states starting with a fixed ℎ 1. Matrices,,,,, of appropriate dimension describe the system and cost objective; the {} are Gaussian noise vectors. All of these matrices, as well as the parameters of the Gaussian, can be unknown. The most fundamental control problem involves controlling the system for some time horizon: find a signal 1,..., that minimizes the sum of these quadratic output costs. Clearly, any algorithm for doing so must first learn the system parameters in some form, and this is often the source of computational intractability (meaning algorithms that take time exponential in the number of system parameters).Previously known algorithms are of two types. 
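Before describing those two families of algorithms, it may help to make the setting concrete with a small simulation. The display equations for the dynamics and costs did not survive extraction above (the DISPLAYFORM placeholders), so the sketch below assumes the standard LQG-style form consistent with the matrices named in the text: h_{t+1} = A h_t + B x_t + eta_t, y_t = C h_t + D x_t, c_t = y_t^T Q y_t + x_t^T R x_t. The exact placement of the noise and the precise cost expression are therefore assumptions.

```python
import numpy as np

def rollout(A, B, C, D, Q, R, controls, h0, noise_std=0.0, rng=None):
    """Simulate outputs y_t and quadratic costs c_t of a linear dynamical
    system under the standard form assumed here (see lead-in)."""
    rng = rng or np.random.default_rng(0)
    h = np.asarray(h0, dtype=float)
    outputs, costs = [], []
    for x in controls:
        y = C @ h + D @ x
        outputs.append(y)
        costs.append(float(y @ Q @ y + x @ R @ x))
        h = A @ h + B @ x + noise_std * rng.standard_normal(h.shape)
    return np.array(outputs), np.array(costs)

if __name__ == "__main__":
    d_h, d_in, d_out = 4, 2, 3
    rng = np.random.default_rng(1)
    A = 0.9 * np.eye(d_h)                      # symmetric, spectral norm < 1
    B, C, D = (rng.standard_normal(s)
               for s in [(d_h, d_in), (d_out, d_h), (d_out, d_in)])
    Q, R = np.eye(d_out), 0.1 * np.eye(d_in)
    xs = rng.standard_normal((20, d_in))
    ys, cs = rollout(A, B, C, D, Q, R, xs, np.zeros(d_h), noise_std=0.01)
    print(cs.sum())
```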
The first type tries to solve the non-convex problem, with algorithms that lack provable guarantees and may take exponential time in the worst case: e.g., expectation-maximization (EM) or gradient-based methods (back-propagation through time, like the training of RNNs) which identify both the hidden states and system parameters. Algorithms of the second type rely upon regression, often used in time-series analysis. Since -step dynamics of the system involve the first powers of, the algorithm represents these powers as new variables and learns the -step dynamics via regression (e.g., the so-called VARX(,) model) assuming that the system is well-conditioned; see Appendix A. This has moderate computational complexity but high sample complexity since the number of parameters in the regression scales with, the number of time steps. Our new method obtains the best of both : few parameters to train ing in low sample complexity, as well as polynomial computational complexity. In Section 2 we state the precise algorithm. The informal is as follows. DISPLAYFORM1 Theorem 1.1 (Controlling an unknown LDS; informal). Let D be a linear dynamical system with a symmetric transition matrix and with size |D|. Then, for every > 0, Algorithm 1 produces a sequence of controls (ˆ1, . . .ˆ), ‖ˆ‖ 2 ≤ 1 with ‖ˆ1: ‖ 2 ≤, such that DISPLAYFORM2 Assuming i.i.d. Gaussian noise ∼ (0, Σ), the algorithm samples˜(poly (|D|,, Tr Σ, 1/)) trajectories from D, and runs in time polynomial in the same parameters. The field of optimal control for dynamical systems is extremely broad and brings together literature from machine learning, statistical time-series analysis, dynamical system tracking and Kalman filtering, system identification, and optimal control. For an extensive survey of the field, see e.g. BID12 BID1.Tracking a known system. A less ambitions goal than control is tracking of a dynamical system, or prediction of the output given a known input. For the special case of LDS, the well-known Kalman filter BID9 is an optimal recursive least-squares solution for maximum likelihood estimation (MLE) under Gaussian perturbations to a linear dynamical system. System identification. When the underlying dynamical system is unknown, there are essentially no provably efficient methods for recovering it. For various techniques used in practice, see the classic survey BID10. BID11 suggest using the EM algorithm to learn the parameters of an LDS, nowadays widespread, but it is well-known that optimality is not guaranteed. The recent of BID6 gives a polynomial time algorithm for system recovery, although it applies only to the single-input-single-output case and makes various statistical assumptions on the inputs. Model-free tracking. Our methods depend crucially on a new algorithm for LDS sequence prediction, at the heart of which is a new convex relaxation for the tracking formulation BID7. In particular, this method circumvent the obstacle of explicit system identification. We detail our usage of this in Definition 2.3.We note an intriguing connection to the recently widespread use of deep neural networks to represent an unknown MDP in reinforcement learning: the main algorithm queries the unknown dynamical system with exploration signals, and uses its responses to build a compact representation (denoted byˆin Algorithm 1) which estimates the behavior of the system. Time-series analysis. 
One of the most common approaches to modeling dynamical systems is the autoregressive-moving average (ARMA) model and its derivatives in the time-series analysis literature BID5 BID2 BID3. At the heart of this method is the autoregressive form of a time series, namely, DISPLAYFORM0 Using online learning techniques, it is possible to completely identify an autoregressive model, even in the presence of adversarial noise BID0. This technique lies at the heart of a folklore regression method for optimal control, given in the second row of table 1.Optimal control. The most relevant and fundamental primitive from control theory, as applied to the control of linear dynamical systems, is the linear-quadratic-Gaussian (LQG) problem. In this setting, the system dynamics are assumed to be known, and the task is to find a sequence of inputs which minimize a given quadratic cost. A common solution, the LQG controller, is to combine Kalman filtering with a linear-quadratic regulator, a controller selected by solving the Bellman equation for the problem. Such an approach admits theoretical guarantees under varied assumptions on the system; see, for example, BID4.Our setting also involves a linear dynamical system with quadratic costs, and thus can be seen as a special case of the LQG setup, in which the process noise is zero, and the transition matrix is assumed to be symmetric. However, our are not analogous: our task also includes learning the system's dynamics. As such, our main algorithm for control takes a very different approach than that of the standard LQR: rather than solving a recursive system of equations, we provide a formulation of control as a one-shot convex program. First, we state the formal definitions of the key objects of interest. Definition 2.1 (Dynamical system). A dynamical system D is a mapping that takes a sequence of input vectors 1,..., ∈ B 2 = {∈ R : ‖ ‖ 2 ≤ 1} to a sequence of output vectors 1,..., ∈ R and costs 1,..., ∈ R. Denote: = [; . . . ;] as the concatenation of all input vectors from time to, and write DISPLAYFORM0 Definition 2.2 (Linear dynamical system). A linear dynamical system (LDS) is a dynamical system whose outputs and costs are defined by DISPLAYFORM1 where ℎ 1,..., ℎ ∈ R is a sequence of hidden states starting with fixed ℎ 1, and,,,,, are matrices (or vectors) of appropriate dimension. We assume ‖ ‖ op ≤ 1, i.e., all singular values of are at most one, and that 0, 0.Our algorithm and its guarantees depend on the construction of a family of orthonormal vectors in R, which are interpreted as convolution filters on the input time series. We define the wave-filtering matrix below; for more details, see Section 3 of BID7. Definition 2.3 (Wave-filtering matrix). Fix any,, and 1 ≤ ≤. Let be the eigenvector corresponding to the -th largest eigenvalue of the Hankel matrix ∈ R ×, with entries:= 2 (+) 3 −(+). The wave-filtering matrix Φ ∈ R × is defined by vertically stacked block matrices {Φ ∈ R × }, defined by horizontally stacked multiples of the identity matrix: DISPLAYFORM2 Then, letting range from 1 to, Φ: − then gives a dimension-wise convolution of the input time series by the filters {} of length. Theorem 3.3 uses a structural from BID7, which guarantees the existence of a concise representation of D in the basis of these filters. The main theorem we prove is the following. Tr(Σ) ln 2 ã, produces a sequence of controls (ˆ1, . . 
.ˆ) ∈ B 2, such that with probability at least 1 −, DISPLAYFORM3 assuming that DISPLAYFORM4 Further, the algorithm samples poly 1, log 1, log, log, 1,,,, Tr(Σ) trajectories from the dynamical system, and runs in time polynomial in the same parameters. We remark on some of the conditions. is bounded away from 0 when we suffer loss in all directions of. In condition, Tr(Σ) is inevitable loss due to noise, so is an assumption on the system's controllability. We set up some notation for the algorithm. Let Φ ∈ R × be the wave-filtering matrix from Definition 2.3. Let = 0 for ≤ 0, and let =: − +1. Let = max(‖ ‖, ‖ ‖, ‖ ‖, ‖ ‖) be an upper bound for the matrices involved in generating the outputs and costs. To prove Theorem 2.4, we invoke Lemma 3.1 and Lemma 3.2, proved in Subsection 3.1 and Subsection 3.2, respectively. DISPLAYFORM0 Lemma 3.2 (Robustness of control to uncertainty in dynamics). Let 1: = arg min 1: DISPLAYFORM1 whereˆ∈ R × is such that for every sequence of input signals (1, . . .,) ∈ B 2 with ‖ 1: ‖ 2 ≤ and ≤, Equation holds withˆ=D (1:). Assume. Then DISPLAYFORM2 Moreover, the minimization problem posed above is convex (for, 0), and can be solved to within opt acccuracy in poly(,, 1/ opt) time using the ellipsoid method. Proof of Theorem 2.4. Use Lemma 3.1 with ← ». Note that 1: = is a valid input to the LDS because ‖ ‖ 2 = 1. Now use Lemma 3.2 on the of Lemma 3.1. To prove Lemma 3.1, we will use the following structural from BID7 restated to match the setting in consideration. DISPLAYFORM0 Proof. This follows from Theorem 3b in BID7 after noting four things.1. E(−) are the outputs when the system is started at ℎ 1 = 0 with inputs 1: and no noise. A linear relationship − = ′ holds by unfolding the recurrence relation.2. Examining the construction of in the proof of Theorem 3b, the is exactly the projection of ′ onto the subspace defined by Φ. (Note we are using the fact that the rows of Φ are orthogonal.) 3. Theorem 3b is stated in terms of the quantity, which is bounded by 2 by Lemma F.5 in BID7.4. We can modify the system so that = by replacing (,,,) with (, , , ). This replaces by (max(, √)). Theorem 3b originally had dependence of on both Φ and, but if = then the dependence is only on Φ.Proof of Lemma 3.1. DISPLAYFORM1. Letting ′ be the matrix such that − = ′, and = ′ Φ ⊤ as in Theorem 3.3, we have that DISPLAYFORM2 Let 1 ≤ ≤. We bound the error under controls 1: ∈ B 2, ‖ 1: ‖ 2 ≤ using the triangle inequality. Letting 1: be the output under 1:, DISPLAYFORM3 By Theorem 3.3, for = Ω(log 2 log( /)), choosing constants appropriately, the first term is ≤ 4.To bound the second term in, we show concentrates around E. We have DISPLAYFORM4 By concentration of sums of 2 random variables (see BID8, for example), DISPLAYFORM5 Take ′ = 2 and ′ = 4 √ and note was chosen to satisfy. Use the union bound to get that DISPLAYFORM6 To bound the third term in, we first show thatˆconcentrates around. We havê DISPLAYFORM7 By 2 concentration, DISPLAYFORM8 We also have (E −)1 DISPLAYFORM9 With ≥ 1 − probability, we avoid both the bad events in FORMULA2 and FORMULA2 DISPLAYFORM10 and for all ‖ 1: ‖ ≤, the third term of FORMULA2 is bounded by (note ‖Φ‖ op = 1 because it has orthogonal rows) DISPLAYFORM11 Thus by, E[] − −ˆΦ 2 ≤ with probability ≥ 1 −. To prove Lemma 3.2, we need the following helpful lemma. Lemma 3.4. For a symmetric LDS D with = max(‖ ‖, ‖ ‖, ‖ ‖, ‖ ‖ op) and,, and an approximationD whose predictionsˆsatisfy the of Lemma 3.1, we have that for every sequence of controls (1, . . 
.,) ∈ B 2, and for every 1 ≤ ≤, DISPLAYFORM0 where costD (DISPLAYFORM1 DISPLAYFORM2 using Cauchy-Schwarz. Proof of Lemma 3.2. Define DISPLAYFORM3 1: = arg min 1: DISPLAYFORM4 DISPLAYFORM5 . By assumption, ≤. We have DISPLAYFORM6 By Lemma 3.4 for inputs DISPLAYFORM7 Lettingˆ1: be the outputs underD under the controlˆ1:, note that similar to, DISPLAYFORM8 becauseˆ1: is optimal forD. By Lemma 3.4 for inputsˆ1:, DISPLAYFORM9 Now by,, and, DISPLAYFORM10 We have presented an algorithm for finding the optimal control inputs for an unknown symmetric linear dynamical system, which requires querying the system only a polylogarithmic number of times in the number of such inputs, while running in polynomial time. Deviating significantly from previous approaches, we circumvent the non-convex optimization problem of system identification by a new learned representation of the system. We see this as a first step towards provable, efficient methods for the traditionally non-convex realm of control and reinforcement learning. In this section, we verify the statement made in Section 1 on the time and sample complexity of approximating a linear dynamical system with an autoregressive model. Although this is wellknown (see, for example, Section 6 of BID6), we present a self-contained presentation for convenience and to unify notation. The vector autoregressive model with exogenous variables, or VARX(,), is a touchstone in timeseries analysis. Given a time series of inputs (sometimes known as biases) {}, it generates the time series of responses {} by the following recurrence: DISPLAYFORM0 Here, and are memory parameters, the {} and {} are matrices of appropriate dimension, and the {} are noise vectors. In the special case of = 0, the problem can be solved efficiently with linear regression: in this case, is a linear function of the concatenated inputs [; −1 ; . . ., − +1].A VARX(0,) model is specified by = [,..., (−1) ] ∈ R × and predicts =: − +1. We quantify the relationship between VARX(0,) and linear dynamical systems, with a statement analogous to Theorem 3.1: Theorem A.1. Let D be an LDS with size ℒ, fixed ℎ 1, and noise = 0, producing outputs {1, . . .,} from inputs {1, . . .,}. Suppose that the transition matrix of D has operator norm at most < 1. Then, for each > 0, there is a VARX(0,) model with = (1 1− log(ℒ/)), specified by a matrix, such that DISPLAYFORM1 Proof. By the modification of D given in the proof of Theorem 3.3, we may assume without loss of generality that =. Also, as in the discussion of Theorem 3.1, it can be assumed that the initial hidden state ℎ 1 is zero. Then, we construct the block of corresponding to lag as DISPLAYFORM2 This is well-defined for all 1 ≤ ≤. Note that when ≥, the autoregressive model completely specifies the system D, which is determined by its (infinite-horizon) impulse response function. Furthermore, by definition of, we have DISPLAYFORM3 Noting that DISPLAYFORM4 we conclude that DISPLAYFORM5 implying the claim by the stated choice of.VARX(0,) only serves as a good approximation of an LDS whose hidden state decays on a time scale shorter than; when the system is ill-conditioned (is close to 1), this can get arbitrarily large, requiring the full time horizon =.On the other hand, it is clear that both the time and sample complexity of learning a VARX(0,) model grows linearly in. This verifies the claim in the introduction.
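Appendix A's point is that, for a well-conditioned system, the LDS can be approximated by a VARX(0, k) model y_t ≈ M [x_t; x_{t-1}; ...; x_{t-k+1}], i.e. plain linear regression from the last k inputs, at the price of k (and hence the sample complexity) growing as the system becomes ill-conditioned. A minimal least-squares sketch of that baseline follows; all variable names are mine, and inputs before time 0 are zero-padded.

```python
import numpy as np

def fit_varx0k(inputs, outputs, k):
    """Least-squares fit of y_t ≈ M [x_t; ...; x_{t-k+1}].
    inputs: (T, d) array, outputs: (T, m) array; returns M of shape (m, k*d)."""
    T, d = inputs.shape
    feats = np.stack([np.concatenate([inputs[t - j] if t - j >= 0 else np.zeros(d)
                                      for j in range(k)])
                      for t in range(T)])                   # (T, k*d)
    M, *_ = np.linalg.lstsq(feats, outputs, rcond=None)     # (k*d, m)
    return M.T

def predict_varx0k(M, inputs, k):
    T, d = inputs.shape
    preds = []
    for t in range(T):
        z = np.concatenate([inputs[t - j] if t - j >= 0 else np.zeros(d)
                            for j in range(k)])
        preds.append(M @ z)
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 2))
    true_M = rng.standard_normal((1, 6))      # synthetic 3-lag ground truth
    Y = predict_varx0k(true_M, X, 3) + 0.01 * rng.standard_normal((500, 1))
    M_hat = fit_varx0k(X, Y, 3)
    print(np.abs(M_hat - true_M).max())       # should be small
```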
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BygpQlbA-
Using a novel representation of symmetric linear dynamical systems with a latent state, we formulate optimal control as a convex program, giving the first polynomial-time algorithm that solves optimal control with sample complexity only polylogarithmic in the time horizon.
Generative Adversarial Networks (GANs) have become the gold standard when it comes to learning generative models for high-dimensional distributions. Since their advent, numerous variations of GANs have been introduced in the literature, primarily focusing on utilization of novel loss functions, optimization/regularization strategies and network architectures. In this paper, we turn our attention to the generator and investigate the use of high-order polynomials as an alternative class of universal function approximators. Concretely, we propose PolyGAN, where we model the data generator by means of a high-order polynomial whose unknown parameters are naturally represented by high-order tensors. We introduce two tensor decompositions that significantly reduce the number of parameters and show how they can be efficiently implemented by hierarchical neural networks that only employ linear/convolutional blocks. We exhibit for the first time that by using our approach a GAN generator can approximate the data distribution without using any activation functions. Thorough experimental evaluation on both synthetic and real data (images and 3D point clouds) demonstrates the merits of PolyGAN against the state of the art. Generative Adversarial Networks (GANs) are currently one of the most popular lines of research in machine learning. Research on GANs mainly revolves around: (a) how to achieve faster and/or more accurate convergence (e.g., by studying different loss functions (; ;) or regularization schemes (; ;) ), and (b) how to design different hierarchical neural networks architectures composed of linear and non-linear operators that can effectively model high-dimensional distributions (e.g., by progressively training large networks or by utilizing deep ResNet type of networks as generators ). Even though hierarchical deep networks are efficient universal approximators for the class of continuous compositional functions , the non-linear activation functions pose difficulties in their theoretical analysis, understanding, and interpretation. For instance, as illustrated in , element-wise non-linearities pose a challenge on proving convergence, especially in an adversarial learning setting . Consequently, several methods, e.g.,;;; , focus only on linear models (with respect to the weights) in order to be able to rigorously analyze the neural network dynamics, the residual design principle, local extrema and generalization error, respectively. Moreover, as stated in the recent in-depth comparison of many different GAN training schemes , the improvements may mainly arise from a higher computational budget and tuning and not from fundamental architectural choices. In this paper, we depart from the choice of hierarchical neural networks that involve activation functions and investigate for the first time in the literature of GANs the use of high-order polynomials as an alternative class of universal function approximators for data generator functions. This choice is motivated by the strong evidence provided by the Stone-Weierstrass theorem , which states that every continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial function. Hence, we propose to model the vector-valued generator function Gpzq: R d Ñ R o by a high-order multivariate polynomial of the latent vector z, whose unknown parameters are naturally represented by high-order tensors. 
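To see why the raw polynomial expansion is impractical without further structure, the sketch below evaluates a dense third-order polynomial generator via mode products (einsum) on toy dimensions and prints the per-output parameter count for a realistic latent size. The dimensions are illustrative only, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, o = 8, 4                      # tiny sizes so the dense tensors fit in memory

beta = rng.standard_normal(o)
W1 = rng.standard_normal((o, d))
W2 = rng.standard_normal((o, d, d))
W3 = rng.standard_normal((o, d, d, d))

def naive_poly_generator(z):
    """G(z) = beta + W1 z + W2(z, z) + W3(z, z, z): the raw expansion before
    any low-rank structure is imposed on the weight tensors."""
    return (beta
            + np.einsum("oi,i->o", W1, z)
            + np.einsum("oij,i,j->o", W2, z, z)
            + np.einsum("oijk,i,j,k->o", W3, z, z, z))

print(naive_poly_generator(rng.standard_normal(d)).shape)    # (4,)
# Parameter count per output explodes as d + d^2 + ... + d^N:
d_real = 128
print(sum(d_real ** n for n in range(1, 4)))                 # ~2.1 million
```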
However, the number of parameters required to accommodate all higher-order correlations of the latent vector explodes with the desired order of the polynomial and the dimension of the latent vector. To alleviate this issue and at the same time capture interactions of parameters across different orders of approximation in a hierarchical manner, we cast polynomial parameters estimation as a coupled tensor factorization that jointly factorizes all the polynomial parameters tensors. To this end, we introduce two specifically tailored coupled canonical polyadic (CP)-type of decompositions with shared factors. The proposed coupled decompositions of the parameters tensors into two different hierarchical structures (i.e., architectures of neural network decoders) that do not involve any activation function, providing an intuitive way of generating samples with an increasing level of detail. This is pictorially shown in Figure 1. The of the proposed PolyGAN using a fourth-order polynomial approximator is shown in Figure 1 (a), while Figure 1 (b) shows the corresponding generation when removing the fourth-order power from the generator. Our contributions are summarized as follows: • We model the data generator with a high-order polynomial. Core to our approach is to cast polynomial parameters estimation as a coupled tensor factorization with shared factors. To this end, we develop two coupled tensor decompositions and demonstrate how those two derivations in different neural network architectures involving only linear (e.g., convolution) units. This approach reveals links between high-order polynomials, coupled tensor decompositions and network architectures. • We experimentally verify that the ing networks can learn to approximate functions with analytic expressions. • We show how the proposed networks can be used with linear blocks, i.e., without utilizing activation functions, to synthesize high-order intricate signals, such as images. • We demonstrate that by incorporating activation functions to the derived polynomial-based architectures, PolyGAN improves upon three different GAN architectures, namely DC-GAN , SNGAN and SAGAN . (a) (b) Figure 1: Generated samples by an instance of the proposed PolyGAN. (a) Generated samples using a fourth-order polynomial and (b) the corresponding generated samples when removing the terms that correspond to the fourth-order. As evidenced, by extending the polynomial terms, PolyGAN generates samples with an increasing level of detail. In this Section, we investigate the use of a polynomial expansion as a function approximator for the data generator in the context of GANs. To begin with, we introduce the notation in Section 2.1. In Section 2.2, we introduce two different polynomials models along with specifically tailored coupled tensor factorizations for the efficient estimation of their parameters. Matrices (vectors) are denoted by uppercase (lowercase) boldface letters e.g., X, (x). Tensors are denoted by calligraphic letters, e.g., X. The order of a tensor is the number of indices needed to address its elements. Consequently, each element of an M th-order tensor X is addressed by M indices, i.e., pX q i1,i2,...,i M. The mode-m unfolding of a tensor X P R I1ˆI2ˆ¨¨¨ˆI M maps X to a matrix X pmq P R ImˆĪm with I k such that the tensor element x i1,i2,...,i M is mapped to the matrix element x im,j where j " 1`ř n"1 n‰m I n. 
The mode-m vector product of X with a vector u P R Im, denoted by Xˆn u P R I1ˆI2ˆ¨¨¨ˆIn´1ˆIn`1ˆ¨¨¨ˆI N, in a tensor of order M´1: The Khatri-Rao product (i.e., column-wise Kronecker product) of matrices A P R IˆN and B P R JˆN is denoted by A d B and yields a matrix of dimensions pIJqˆN. The Hadamard product of A P R IˆN and B P R IˆN is defined as A˚B and is equal to A pi,jq B pi,jq for the pi, jq element. The CP decomposition factorizes a tensor into a sum of component rank-one tensors. An M th-order tensor X P R I1ˆI2ˆ¨¨¨ˆI M has rank-1, when it is decomposed as the outer product of M vectors tu, where˝denotes for the vector outer product. Consequently, the rank-R CP decomposition of an M th-order tensor X is written as: where the factor matrices U rms " ru collect the vectors from the rank-one components. By considering the mode-1 unfolding of X, the CP decomposition can be written in matrix form as : More details on tensors and multilinear operators can be found in;. GANs typically consist of two deep networks, namely a generator G and a discriminator D. G is a decoder (i.e., a function approximator of the sampler of the target distribution) which receives as input a random noise vector z P R d and outputs a sample x " Gpzq P R o. D receives as input both Gpzq and real samples and tries to differentiate the fake and the real samples. During training, both G and D compete against each other till they reach an "equilibrium" . In practice, both the generator and the discriminator are modeled as deep neural networks, involving composition of linear and non-linear operators . In this paper, we focus on the generator. Instead of modeling the generator as a composition of linear and non-linear functions, we assume that each generated pixel x i " pGpzqq i may be expanded as a N th order polynomial 1 in z. That is, 1 With an N th order polynomial we can approximate any smooth function . Under review as a conference paper at ICLR 2020 where the scalar β i, and the set of tensors W are the parameters of the polynomial expansion associated to each output of the generator, e.g., pixel. Clearly, when n " 1, the weights are d-dimensional vectors; when n " 2, the weights, i.e., W r2s i, form a dˆd matrix. For higher orders of approximation, i.e., when n ě 3, the weights are n th order tensors. By stacking the parameters for all pixels, we define the parameters β.. Consequently, the vector-valued generator function is expressed as: Intuitively, is an expansion which allows the N th order interactions between the elements of the noise latent vector z. Furthermore, resembles the functional form of a truncated Maclaurin expansion of vector-valued functions. In the case of a Maclaurin expansion, W rns represent the n th order partial derivatives of a known function. However, in our case the generator function is unknown and hence all the parameters need to be estimated from training samples. The number of the unknown parameters in is pd, which grows exponentially with the order of the approximation. Consequently, the model of is prone to overfitting and its training is computationally demanding. A natural approach to reduce the number of parameters is to assume that the weights exhibit redundancy and hence the parameter tensors are of low-rank. To this end, several low-rank tensor decompositions can be employed . 
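The matricized CP form above can be checked numerically: build a rank-R third-order tensor as a sum of outer products and compare its mode-1 unfolding against U^[1](U^[3] d U^[2])^T. The sketch below uses SciPy's column-wise Kronecker product; the tensor sizes are arbitrary.

```python
import numpy as np
from scipy.linalg import khatri_rao

rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 5, 6, 3
U1, U2, U3 = (rng.standard_normal((n, R)) for n in (I1, I2, I3))

# Rank-R CP tensor built explicitly as a sum of outer products u1_r o u2_r o u3_r.
X = np.zeros((I1, I2, I3))
for r in range(R):
    X += np.einsum("i,j,k->ijk", U1[:, r], U2[:, r], U3[:, r])

# Mode-1 unfolding (remaining modes flattened column-major, as in the text)
X1 = X.reshape(I1, -1, order="F")
# ... equals U1 (U3 d U2)^T, with d the column-wise Kronecker (Khatri-Rao) product.
print(np.allclose(X1, U1 @ khatri_rao(U3, U2).T))   # True
```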
For instance, let the parameter tensors W rns admit a CP decompostion of mutilinear rank-k, namely, tW rns " rrU rns,1, U rns,2,..., U rns,pn`1q ssu N n"1, with U rns,1 P R oˆk, and U rns,m P R dˆk, for m " 2,..., n`1. Then, is expressed as which has significantly less parameters than, especially when k! d. However, a set of different factor matrices for each level of approximation are required in equation 6, and hence the hierarchical nature of images is not taken into account. To promote compositional structures and capture interactions among parameters in different orders of approximation we introduce next two coupled CP decompositions with shared factors. Model 1: Coupled CP decomposition: Instead of factorizing each parameters tensor individually we propose to jointly factorize all the parameter tensors using a coupled CP decomposition with a specific pattern of factor sharing. To illustrate the factorization, we assume a third order approximation (N " 3), however in the appendix a generalization to N -th order approximation is provided. Let us assume that the parameters tensors admit the following coupled CP decomposition with the factors corresponding to lower-order levels of approximation being shared across all parameters tensors. That is: • Let W r1s " CU T r1s, be the parameters for first level of approximation. • Let assume W r2s being a superposition of of two weights tensors, namely W r2s " W r2s 1:2Ẁ r2s 1:3, with W r2s i:j denoting parameters associated with the second order interactions across the i-th and j-th order of approximation. By enforcing the CP decomposition of the above tensors to share the factor with tensors corresponding to lower-order of approximation we obtain in matrix form: • Similarly, we enforce the third-order parameters tensor to admit the following CP decomposition (in matrix form) Note that all but the U r3s factor matrices are shared in the factorization of tensors capturing polynomial parameters for the first and second order of approximation. The parameters are C P R oˆk, U rms P R dˆk for m " 1, 2, 3. Then, for N " 3 is written as: The third order approximation of can be implemented as a neural network with the structure of Figure 2 (proved in section B, Claim 1 of the appendix). It is worth noting that the structure of the proposed network allows for incremental network growth. Model 2: Coupled nested CP decomposition: Instead of explicitly separating the interactions between layers, we can utilize a joint hierarchical decomposition on the polynomial parameters. Let us first introduce learnable hyper-parameters b rns P R ω (N n"1, which act as scaling factors for each parameter tensor. Therefore, we modify to: with. For illustration purposes, we consider a third order function approximation (N " 3). That is, To estimate its parameters we jointly factorize all parameters tensors by employing nested CP detecomposion with parameter sharing as follows (in matrix form) • First order parameters: W r1s p1q " CpA r3s d B r3s q T. • Second order parametes: • Third order parameters: ) ˙S r3s * T with C P R oˆk, A rns P R dˆk, S rns P R kˆk, B rns P R ωˆk for n " 1,..., N. Altogether, is written as: As we prove in the appendix (section B, Claim 3), can be implemented in a hierarchical manner with a three-layer neural network as shown in Figure 3. Comparison between the two models: Both models are based on the polynomial expansion, however there are few differences between those. 
The Coupled CP decomposition has a simpler expression, however the Coupled nested CP decomposition relates to standard architectures using hierarchical composition that has recently yielded promising in GANs (see Section 3). In the remainder of the paper, we use the Coupled nested CP decomposition by default; in Section G, we include an experimental comparison of the two models. The experimental comparison demonstrates that neither model outperforms the other in all datasets; they perform similarly. Figure 3: Schematic illustration of the Coupled nested CP decomposition (for third order approximation). Symbol˚refers to the Hadamard product. The literature on GANs is vast; we focus only on the works most closely related to ours. The interested reader can find further information in a recent survey . Despite the propagation of the noise z to successive layers, the aforementioned works have substantial differences from ours. We introduce a well-motivated and mathematically elaborate method to achieve a more precise approximation with a polynomial expansion. In contrast to the previously mentioned works, we also do not concatenate the noise with the feature representations, but rather perform multiplication of the noise with the feature representations, which we mathematically justify. The work that is most closely related to ours is the recently proposed StyleGAN , which is an improvement over the Progressive Growing of GANs (ProGAN) . As ProGAN, StyleGAN is a highly-engineered network that achieves compelling on synthesized 2D images. In order to provide an explanation on the improvements of StyleGAN over ProGAN, the authors adopt arguments from the style transfer literature . Nevertheless, the idea of style transfer proposes to use features from images for conditional image translation, which is very different to unsupervised samples (image) generation. We believe that these improvements can be better explained under the light of our proposed polynomial function approximation. That is, as we show in Figure 1, the Hadamard products build a hierachical decomposition with increasing level of detail (rather than different styles). In addition, the improvements in StyleGAN are demonstrated by using a well-tuned model. In this paper we showcase that without any complicated engineering process the polynomial generation can be applied into several architectures (or any other type of decoders) and consistently improves the performance. A sequence of experiments in both synthetic data (2D and 3D data manifolds) and higher-dimensional signals are conducted to assess the empirical performance of the proposed polynomial expansion. The first experiments are conducted on a 2D manifolds that are analytically known (Section 4.1). Further experiments on three 3D manifolds are deferred to the appendix (Section D). In Section 4.2, the polynomial expansion is used for synthesizing digits. Experiments on images beyond digits are conducted in Section E; more specifically, we experiment with images of faces and natural scenes. The experiments with such images demonstrate how polynomial expansion can be used for learning highly complex distributions by using a single activation function in the generator. Lastly, we augment our polynomial-based generator with non-linearities and show that this generator is at least as powerful as contemporary architectures. 
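For concreteness, a minimal PyTorch sketch of the default (Model 2, nested CP) generator follows, built only from linear maps and Hadamard products as in Figure 3 and Algorithm 2. The layer widths, the handling of the learnable b^[n] terms (folded here into single vectors, which spans the same function class), and the initialization are assumptions rather than the authors' exact implementation.

```python
import torch
from torch import nn

class NCPGenerator(nn.Module):
    """Sketch of the nested (Model 2) polynomial generator: no activation
    functions, only linear maps and Hadamard products of noise projections."""
    def __init__(self, noise_dim=128, hidden=256, out_dim=1024, order=4):
        super().__init__()
        self.A = nn.ModuleList(nn.Linear(noise_dim, hidden, bias=False)
                               for _ in range(order))
        self.S = nn.ModuleList(nn.Linear(hidden, hidden, bias=False)
                               for _ in range(order - 1))
        # (B^[n])^T b^[n] folded into a single learnable vector per order.
        self.b = nn.ParameterList(nn.Parameter(torch.ones(hidden))
                                  for _ in range(order))
        self.C = nn.Linear(hidden, out_dim)   # its bias plays the role of beta
        self.order = order

    def forward(self, z):
        kappa = self.A[0](z) * self.b[0]                     # first-order term
        for n in range(1, self.order):
            kappa = (self.S[n - 1](kappa) + self.b[n]) * self.A[n](z)
        return self.C(kappa)

if __name__ == "__main__":
    G = NCPGenerator()
    print(G(torch.randn(8, 128)).shape)   # torch.Size([8, 1024])
```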
Apart from the polynomial-based generators, we implemented two variations that are considered baselines: (a)'Concat': we replace the Hadamard operator with concatenation (used frequently in recent methods, such as in), (b)' Orig': the Hadamard products are ditched, while use b r1s Ð z, i.e., there is a composition of linear layers that transform the noise z. Sinusoidal: We assess the polynomial-based generator on a sinusoidal function in the bounded domain r0, 2πs. Only linear blocks, i.e., no activation functions, are used in the generator. That is, all the element-wise non-linearities (such as ReLU's, tanh) are ditched. The distribution we want to match is a sin x signal. The input to the generator is z P R and the output is rx, sin xs with x P r0, 2πs. We assume a 12 th order approximation where each S ris, A ris is a fully-connected layer and B ris is an identity matrix. Each fully-connected layer has width 15. In Figure 4, 2, 000 random samples are synthesized. We indeed verify that in low-dimensional distributions, such as the univariate sinusoidal, PolyGAN indeed approximates the data distribution quite accurately without using any non-linear activation functions. The linear generator of the previous section is extended to greyscale images, in which an analytic expression of the ground-truth distribution remains elusive. To our knowledge, there has not been a generation of greyscale images based on polynomial expansion in the past. We capitalize on the expressivity of the recent resnet-based generator , to devise a new polynomial generator Gpzq: R 128 Ñ R 32x32. We consider a fourth-order approximation (as derived in) where B ris is the identity matrix, S ris is a residual block with two convolutions for i " 1,..., 4. We emphasize that the residual block as well as all layers are linear, i.e., there are no activation functions. We only add a tanh in the output of the generator for normalization purposes. The discriminator and the optimization procedure are the same as in SNGAN; the only difference is that we run one discriminator step per generator step (n dis " 1). Note that the'Orig' resnet-based generator resembles the generator of in this case. We perform digit generation (trained on MNIST ). In Figure 5, random samples are visualized for the three compared methods. Note that the two baselines have practically collapsed into a single number each, whereas PolyGAN does synthesize plausible digits. To further assist the generation process, we utilize the labels and train a conditional GAN. That is, the class labels are used for conditional batch normalization. As illustrated in Figure 6, the samples synthesized are improved over the unsupervised setting.' Orig' and'Concat' still suffer from severe mode collapse, while PolyGAN synthesizes digits that have different thickness (e.g. 9), style (e.g. 2) and rotation (e.g. 1). Figure 6: Conditional digit generation. Note that both'Orig' and'Concat' suffer from severe mode collapse (details in section 4.2). On the contrary, PolyGAN synthesizes digits that have different thickness (e.g. 9), style (e.g. 2) and rotation (e.g. 1). We express data generation as a polynomial expansion task. We model the high-order polynomials with tensorial factors. We introduce two tailored coupled decompositions and show how the polynomial parameters can be implemented by hierarchical neural networks, e.g. as generators in a GAN setting. We exhibit how such polynomial-based generators can be used to synthesize images by utilizing only linear blocks. 
In addition, we empirically demonstrate that our polynomial expansion can be used with non-linear activation functions to improve the performance of standard state-of-the-art architectures. Finally, it is worth mentioning that our approach reveals links between high-order polynomials, coupled tensor decompositions and network architectures. Algorithm 1: PolyGAN (model 1). % Perform the Hadamard product for the n th layer. Algorithm 2: PolyGAN (model 2). for n=2:N do 6 % Multiply with the current layer weight S rns and perform the Hadamard product. κ "´S rns κ`pB rns q T b rns¯˚´p A rns q T v7 end 8 x " β`Cκ. The appendix is organized as: • Section B provides the Lemmas and their proofs required for our derivations. • Section C generalizes the Coupled CP decomposition for N th order expansion. • Section D extends the experiments to 3D manifolds. • In Section E, additional experiments on image generation with linear blocks are conducted. • Comparisons with popular GAN architectures are conducted in Section F. Specifically, we utilize three popular generator architectures and devise their polynomial equivalent and perform comparisons on image generation. We also conduct an ablation study indicating how standard engineering techniques affect the image generation of the polynomial generator. • In Section G, a comparison between the two proposed decompositions is conducted on data distributions from the previous Sections. For a set of matrices tX m P R ImˆN u N m"1 the Khatri-Rao product is denoted by: In this section, we prove the following identity connecting the sets of matrices tA ν R IνˆK u N ν"1 and To demonstrate the simple case with two matrices, we prove first the special case with N " 2. Lemma 1. It holds that Proof. Initially, both sides of the equation have dimensions of KˆL, i.e., they match. The pi, jq element of the matrix product of pA Then the pi, jq element of the right hand side (rhs) of is: A 2,pk2,iq B 2,pk2,jq q " pA 1,pk1,iq A 2,pk2,iq qpB p1,k1,jq B 2,pk2,jq q From the definition of Khatri-Rao, it is straightforward to obtain the pρ, iq element with ρ " pk 1´1 qI 2`k2, (i.e. ρ P r1, I 1 I 2 s) of A 1 d A 2 as A 1,pk1,iq A 2,pk2,iq. Similarly, the pρ, jq element of B 1 d B 2 is B 1,pk1,jq B 2,pk2,jq. The respective pi, jq element of the left hand side (lhs) of is: In the last equation, we replace the sum in ρ (ρ P r1, I 1 I 2 s) with the equivalent sums in k 1, k 2. In a similar manner, we generalize the identity to the case of N ą 2 terms below. Lemma 2. It holds that Proof. The rhs includes the Hadamard products of the matrices A T ν¨Bν. Each matrix multiplication (A T ν¨Bν) in a matrix of KˆL dimensions. Thus, the rhs is a matrix of KˆL dimensions. The lhs is a matrix multiplication of two Khatri-Rao products. The first Khatri-Rao product has dimensions Kˆp ś ν I ν q, while the second p ś ν I ν qˆL. Altogether, the lhs has KˆL dimensions. Similarly to the previous Lemma, the pi, jq element of the rhs is: To proceed with the lhs, it is straightforward to derive that where s 1 " i and s ν is a recursive function of the s ν´1. However, the recursive definition of s ν is summed in the multiplication and we obtain: Below, we prove that (main paper) is equivalent to the three-layer neural network as shown in Figure 2. z. Then, the form of is equal to: Proof. Applying Lemma 2 on, we obtain: The last equation is the same as. In Claim 2 and Claim 3, we prove that (main paper) is equivalent to the three-layer neural network as shown in Figure 3. Claim 2. Let Proof. 
We will prove the equivalence starting from and transform it into. From: where in the last equation, we have applied Lemma 1. Applying the Lemma once more in the last term of, we obtain. with ω as in Claim 2. Then, it holds for Gpzq of that Gpzq " λ. Proof. Transforming into: To simplify the notation, we define M 1 " " ´A r1s dB r1s¯Sr2s ı * and M 2 "´A r2s dB r2s¯. The last term of becomes: Replacing into, we obtain. Note that the λ in Claim 3 is the equation behind Figure 3. By proving the claim, we have illustrated how the polynomial generator can be transformed into a network architecture for third-order approximation. In this Section, we will show how the Coupled CP decomposition generalizes to the N th order approximation. It suffices to find the decomposition that converts the N th order polynomial into a network structure (see Alg. 1). As done in Section 2.2, we capture the n th order interactions by decomposing the parameter tensor W rns (with 2 ď n ď N) as:... The term W rns 1:jn´1:...:j1 denotes the interactions across the layers 1, j n´1,..., j 1. The N th order approximation becomes:... By considering the mode-1 unfoding of Coupled CP decomposition (like in Section 2.2), we obtain:... where we use x N as an abbreviation of the sums. In the last equation, we have used Lemma 2 (Section B). Claim 4. The N th order approximation of can be implemented with a neural network as described in Alg. 1. Proof. We will use induction to prove the Claim. For N " 2, it trivially holds, while the proof for N " 3 is provided in Claim 1. Suppose it holds for N th order approximation; we prove below that it holds for N`1 th order approximation. Let us denote the approximation of as G N pzq. The pN`1q th order approximation from is:......... In the last equation, the first term in the sums is x N; for the rest two terms we apply Lemma 2:... The term λ is equal to the κ " pn´1q th order of, while there is only a single term for n " N. Therefore, is transformed into: which is exactly the form described by Alg. 1. This concludes the induction proof. Astroid: We implement a superellipse with parametric expression rα cos 3 t, α sin 3 ts for t P r´α, αs. This has a more complex distribution and four sharp edges. The random samples are visualized in Figure 7. PolyGAN models the data distribution accurately in contrast to the two baselines. We conduct three experiments in which the data distribution is analytically derived. The experiments are: Sin3D: The data manifold is an extension over the 2D manifold of the sinusoidal experiment (Section 4.1). The function we want to learn is Gpzq: R 2 Ñ R 3 with the data manifold described by the vector rx, y, sinp10˚ax 2`y2 qs for x, y P r´0.5, 0.5s. In Figure 8, 20, 000 samples are sampled from the generators and visualized. PolyGAN captures the data distribution, while'Orig' and'Concat' fail. Swiss roll: The three dimensional vector rt¨sin t, y, t¨cos ts`0.05¨s for t, y P r0, 1s and s " N p0, 1q forms the data manifold 2. In Figure 9, 20, 000 samples are visualized. Gabriel's Horn: The three dimensional vector rx, α¨c os t x, α¨s in t x s for t P r0, 160πs and x P r1, 4s forms the data manifold. The dependence on both sinusoidal and the function 1 x makes this curve challenging for a polynomial expansion. In Figure 10, the synthesized samples are plotted. PolyGAN learns how to generate samples on the manifold despite the fraction in the parametric form. 
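The parametric forms of the 3D manifolds are partly garbled by the extraction, so the samplers below reconstruct them as read from the text (Sin3D: [x, y, sin(10·sqrt(x^2+y^2))]; Swiss roll: [t sin t, y, t cos t] + 0.05·noise; Gabriel's Horn: [x, α cos(t)/x, α sin(t)/x]). The parameter ranges are taken literally from the text, and α, which is not specified, is set to 1 here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sin3d(n):
    x, y = rng.uniform(-0.5, 0.5, (2, n))
    return np.stack([x, y, np.sin(10 * np.sqrt(x**2 + y**2))], axis=1)

def sample_swiss_roll(n, noise=0.05):
    t, y = rng.uniform(0, 1, (2, n))        # ranges as stated in the text
    pts = np.stack([t * np.sin(t), y, t * np.cos(t)], axis=1)
    return pts + noise * rng.standard_normal(pts.shape)

def sample_gabriels_horn(n, alpha=1.0):     # alpha unspecified in the text
    t = rng.uniform(0, 160 * np.pi, n)
    x = rng.uniform(1, 4, n)
    return np.stack([x, alpha * np.cos(t) / x, alpha * np.sin(t) / x], axis=1)

if __name__ == "__main__":
    for f in (sample_sin3d, sample_swiss_roll, sample_gabriels_horn):
        print(f.__name__, f(5).shape)
```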
Apart from digit generation (Section 4.2), we conduct two experiments on image generation of face and natural scenes. Since both distributions are harder than the digits one, we extend the approximation followed on Section 4.2 by one order, i.e., we assume a fifth-order approximation. We emphasize that each block is a residual block with no activation functions. Faces: In the experiment with faces, we utilize as the training samples the YaleB dataset. The dataset includes greyscale images of faces under extreme illuminations. We rescale all of the images into 64ˆ64 for our analysis. Random samples are illustrated in Figure 11. Our method generates diverse images and captures the case of illuminating either half part of the face, while'Orig' and'Concat' generate images that have a dark side only on the left and right side, respectively. The difference becomes profound in the finer details of the face (please zoom in), where both baselines fail to synthesize realistic semantic parts of the face. et al., 2001) ) for a generator with linear blocks and a single activation function only on the output (i.e., tan h). Notice that our method can illuminate either the left or right part of the face, in contrast to'Orig' (and 'Concat') which generate images that have a dark side only on the left (respectively right) side. In addition, both'Orig' and'Concat' fail to capture the fine details of the facial structure (please zoom in for the details). Natural scenes: We further evaluate the generation of natural images, specifically by training on CIFAR10 . CIFAR10 includes 50, 000 training images of 32ˆ32ˆ3 resolution. In Table 3, we evaluate the standard metrics of Inception Score (IS) and Frechet Inception Distance (FID) (see more details for the metrics in section F). Our model outperforms both'Orig' and'Concat' by a considerable margin. In Figure 12, some random synthesized samples are presented. To demonstrate the flexibility of the PolyGAN, we utilize three different popular generators. The three acrhitectures chosen are DCGAN , SNGAN , and SAGAN . Each original generator is converted into a polynomial expansion, while we use the non-linearities to boost the performance of the polynomial generator. The hyperparameters are kept the same as the corresponding baseline. Algorithms 3 and 4 succinctly present the key differences of our approach compared to the traditional one (in the case of SNGAN, similarly for other architectures). In addition to the baseline, we implement the most closely related alternative to our framework, namely instead of using the Hadamard operator as in Figure 3, we concatenate the noise with the feature representations at that block. The latter approach is frequently used in the literature (referred as "Concat" in the paper). The number of the trainable parameters of the generators are reported in Table 13. Our method has only a minimal increase of the parameters, while the concatenation increases the number of parameters substantially. To reduce the variance often observed during GAN training , each reported score is averaged over 10 runs utilizing different seeds. The metrics we utilize are Inception Score (IS) and Frechet Inception Distance (FID) . Below, we perform an ablation study on Section F.2, and then present the experiments on unsupervised (Section F.3) and conditional image generation (Section F.4) respectively. Datasets: We use CIFAR10 and Imagenet as the two most widely used baselines for GANs: • CIFAR10 includes 60, 000 images of 32ˆ32 resolution. 
We use 50, 000 images for training and the rest for testing. • Imagenet is a large scale dataset that includes over one million training images and 50, 000 validation images. We reshape the images to 128ˆ128 resolution. Baseline architectures: The architectures employed are: • DCGAN , as implemented in https://github.com/pytorch/ examples/tree/master/dcgan. This is a widely used baseline. • SNGAN , as implemented in https://github.com/ pfnet-research/sngan_projection. SNGAN is a strong performing GAN that introduced a spectral normalization in the discriminator. • SAGAN , as implemented in https://github.com/voletiv/ self-attention-GAN-pytorch. This is a recent network architecture that utilizes the notion of self-attention in a GAN setting, achieving impressive on Imagenet . The default hyper-parameters are left unchanged. The aforementioned codes are used for reporting the of both the baseline and our method to avoid any discrepancies, e.g. different frameworks ing in unfair comparisons. The source code will be released to enable the reproduction of our . Evaluation metrics: The popular Inception Score (IS) and Frechet Inception Distance (FID) are used for the quantitative evaluation. Both scores extract feature representations from a pretrained classifier (in practice the Inception network ). Despite their shortcomings, IS and FID are widely used , since alternative metrics fail for generative models . The Inception Score is defined as where x is a generated sample and ppy|xq is the conditional distribution for labels y. The distribution ppyq over the labels is approximated by 1 M ř M n"1 ppy|x n q for x n generated samples. Following the methods in the literature , we compute the inception score for M " 5, 000 generated samples per run (10 splits for each run). The Frechet Inception Distance (FID) utilizes feature representations from a pretrained network and assumes that the distributions of these representations are Gaussian. Denoting the representations of real images as N pµ r, C r q and the generated (fake) as N pµ f, C f q, FID is: In the experiments, we use M " 10, 000 to compute the mean and covariance of the real images and M " 10, 000 synthesized samples for µ f, C r. For both scores the original tensorflow inception network weights are used; the routines of tensorflow.contrib.gan.eval are called for the metric evaluation. We experimentally define that a (series of) affine transformation(s) on the input noise z are beneficial before using the transformed z for the Hadamard products. 3 These affine transformations are henceforth mentioned as global transformations on z. The implementation details for each network are the following: • DCGAN: We use a global transformation followed by a RELU non-linearity. WThe rest details remain the same as the baseline model. • SNGAN: Similarly to DCGAN, we use a global transformation with a RELU non-linearity. We consider each residual block as one order of approximation and compute the Hadamard product after each block (see algorithm 4). We conduct an ablation study based on SNGAN architecture (or our variant of SNGAN-poly), since most recent methods are based on similar generators;. Unless explicitly mentioned otherwise, the SNGAN is trained on CIFAR10 for unsupervised image generation. We add a global transformation on z, i.e. a fully-connected layer and use the transformed noise as input to the generator. In the first experiment, we evaluate whether to add a non-linear activation to the global transformation. 
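For reference, the two metrics defined above can be sketched in a few lines once the classifier outputs are available. The paper computes both scores with the original TensorFlow Inception weights through tensorflow.contrib.gan.eval; the NumPy version below is only a simplified illustration that assumes the softmax probabilities p(y|x) and the Inception feature vectors have already been extracted.

```python
import numpy as np
from scipy import linalg

def inception_score(probs, n_splits=10):
    """probs: (M, num_classes) softmax outputs p(y|x) from a pretrained classifier."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)  # marginal p(y) over the split
        kl = (chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))

def fid(real_feats, fake_feats):
    """real_feats, fake_feats: (M, d) feature vectors from the pretrained network."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    c_r = np.cov(real_feats, rowvar=False)
    c_f = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(c_r.dot(c_f))
    if np.iscomplexobj(covmean):       # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff.dot(diff) + np.trace(c_r + c_f - 2.0 * covmean))
```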
The two alternatives are: i) with linear global transformation ('Ours-linear-global'), i.e. no non-linearity, and ii) with global transformation followed by a RELU non-linearity ('Ours-RELU-global'). The first two in Table 5 demonstrate that both metrics marginally improve when using a non-linear activation function. We add this global transformation with RELU on the original SNGAN. The are reported in the last two rows of Table 5 (where the original is mentioned as 'Orig', while the alternative of adding a global transformation as 'Original-RELU-global'). Split z into chunks: The recent BigGAN of performs hierarchical synthesis of images by splitting the latent vector z into one chunk per resolution (block). Each chunk is then concatenated into the respective resolution. We scrutinize this splitting against our method; we split the noise z into pk`1q non-overlapping chunks of equal size for performing k injections. The injection with splitting is mentioned as'Injectsplit' below. Our splitting deteriorates the scores on the task as reported in Table 6. It is possible that more elaborate splitting techniques, such as those in We scrutinize a feature normalization on the baseline of'Ours-RELU-global'. For each layer i we divide the A ris z vector with its standard deviation. The variant with global transformation followed by RELU and normalization before the Hadamard product is called'Ours-norm'. The in Table 7 illustrate that normalization improves the metrics. In Table 8, we use'Ours-RELU-global' as baseline against the model with the skip connection ('Ours-skip'). Since we use SNGAN both for unsupervised/conditional image generation, we verify the aforementioned in the conditional setting, i.e. when the class information is also provided to the generator and the discriminator. Normalization before Hadamard product: Similarly to the experiment above, for each layer i we divide the A ris z vector with its standard deviation. The quantitative in Table 9 improve the IS score, but the FID deteriorates. Skip the Hadamard product: Similarly to the aforementioned unsupervised case, we assess the performance if we add a skip connection in the Hadamard. In Table 10, the quantitative comparing the baseline and the skip case are presented. In this experiment, we study the image generation problem without any labels or class information for the images. The architectures of DCGAN and resnet-based SNGAN are used for image generation in CIFAR10 . Table 11 summarizes the of the IS/FID scores of the compared methods. In all of the experiments, PolyGAN outperforms the compared methods. Frequently class information is available. We can utilize the labels, e.g. use conditional batch normalization or class embeddings, to synthesize images conditioned on a class. We train two networks, et al., 2015). SAGAN uses self-attention blocks to improve the resnet-based generator. Despite our best efforts to show that our method is both architecture and database agnostic, the recent methods are run for hundreds of thousands or even million iterations till "convergence". In SAGAN the authors report that for each training multiple GPUs need to be utilized for weeks to reach the final reported Inception Score. We report the metrics for networks that are run with batch size 64 (i.e., four times less than the original 256) to fit in a single 16GB NVIDIA V100 GPU. 
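To make the ablation concrete, the sketch below shows how the noise can be injected with a Hadamard product after each generator block, including the optional normalization ('Ours-norm') and skip ('Ours-skip') variants discussed above. It is only an illustrative PyTorch sketch: the block is a plain upsampling convolutional block standing in for SNGAN's residual block, and all layer sizes are assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class HadamardInjection(nn.Module):
    """Injects the (globally transformed) noise into a feature map via an element-wise product."""
    def __init__(self, z_dim, channels, normalize=False, skip=False):
        super().__init__()
        self.affine = nn.Linear(z_dim, channels)  # A_i: maps z to the block's channel dimension
        self.normalize = normalize                # 'Ours-norm': divide A_i z by its std
        self.skip = skip                          # 'Ours-skip': keep a residual path in the product

    def forward(self, h, z):
        a = self.affine(z)
        if self.normalize:
            a = a / (a.std(dim=1, keepdim=True) + 1e-8)
        a = a.unsqueeze(-1).unsqueeze(-1)         # broadcast over the spatial dimensions
        return h * (1.0 + a) if self.skip else h * a

class PolyGenerator(nn.Module):
    """Each block is treated as one order of approximation; z is injected after every block."""
    def __init__(self, z_dim=128, channels=256, n_blocks=3):
        super().__init__()
        self.global_transform = nn.Sequential(nn.Linear(z_dim, z_dim), nn.ReLU())
        self.fc = nn.Linear(z_dim, channels * 4 * 4)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(n_blocks)])
        self.inject = nn.ModuleList([HadamardInjection(z_dim, channels) for _ in range(n_blocks)])
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, z):
        z = self.global_transform(z)              # global transformation followed by a ReLU
        h = self.fc(z).view(z.size(0), -1, 4, 4)
        for block, inject in zip(self.blocks, self.inject):
            h = inject(block(h), z)               # Hadamard product after each block
        return torch.tanh(self.to_rgb(h))         # 3 blocks -> 32x32 output (CIFAR-10 sized)
```

The 'Concat' baseline would replace the call to `inject(...)` by concatenating (a chunk of) z to h along the channel dimension, which is why it increases the parameter count much more than the Hadamard variant.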
Following the current practice in ML, due to the lack of computational budget , we run SAGAN for 400, 000 iterations (see Figure 3 of the original paper for the IS during training) 4. Each such experiment takes roughly 6 days to train. The FID/IS scores of our approach compared against the baseline method can be found in Table 12. In both cases, our proposed method yields a higher Inception Score and a lower FID. An experimental comparison of the two models described in Section 2 is conducted below. Unless explicitly mentioned otherwise, the networks used below do not include any non-linear activation functions, they are polynomial expansions with linear blocks. We use the following four experiments: Sinusoidal on 2D: The data distribution is described by rx, sinpxqs with x P r0, 2πs (see Section 4.1 for further details). We assume 8 th order approximation for Coupled CP decomposition and 12 th order for Coupled nested CP decomposition. Both have width 15 units. The comparison between the two models in Figure 13 demonstrates that they can both capture the data manifold. Impressively, the Coupled CP decomposition does not synthesize a single point that is outside of the manifold. Astroid: The data distribution is described on Section D. The samples comparing the two models are visualized in Figure 15. Sin3D: The data distribution is described on Section D. In Figure 15 the samples from the two models are illustrated. Swiss roll: The data distribution is described on Section D. In Figure 16 the samples from the two models are illustrated. We conduct an experiment on images to verify that both architectures can learn higher-dimensional distributions. We select the digit images as described in Section 4.2. In this case, Coupled CP decomposition is implemented as follows: each U ris is a series of linear convolutions with stride 2 for i " 1,..., 4, while C is a linear residual block. We emphasize that in both models all the activation functions are removed and there is a single tanh in the output of the generator for normalization purposes.
Bye30kSYDH
We model the data generator (in GAN) by means of a high-order polynomial represented by high-order tensors.
Deep neural networks trained on large supervised datasets have led to impressive in recent years. However, since well-annotated datasets can be prohibitively expensive and time-consuming to collect, recent work has explored the use of larger but noisy datasets that can be more easily obtained. In this paper, we investigate the behavior of deep neural networks on training sets with massively noisy labels. We show on multiple datasets such as MINST, CIFAR-10 and ImageNet that successful learning is possible even with an essentially arbitrary amount of noise. For example, on MNIST we find that accuracy of above 90 percent is still attainable even when the dataset has been diluted with 100 noisy examples for each clean example. Such behavior holds across multiple patterns of label noise, even when noisy labels are biased towards confusing classes. Further, we show how the required dataset size for successful training increases with higher label noise. Finally, we present simple actionable techniques for improving learning in the regime of high label noise. Deep learning has proven to be powerful for a wide range of problems, from image classification to machine translation. Typically, deep neural networks are trained using supervised learning on large, carefully annotated datasets. However, the need for such datasets restricts the space of problems that can be addressed. This has led to a proliferation of deep learning on the same tasks using the same well-known datasets. Carefully annotated data is difficult to obtain, especially for classification tasks with large numbers of classes (requiring extensive annotation) or with fine-grained classes (requiring skilled annotation). Thus, annotation can be expensive and, for tasks requiring expert knowledge, may simply be unattainable at scale. To address this limitation, other training paradigms have been investigated to alleviate the need for expensive annotations, such as unsupervised learning BID11, self-supervised learning BID16 BID23 and learning from noisy annotations (; BID15 BID22 . Very large datasets (e.g., BID7 ; BID19) can often be attained, for example from web sources, with partial or unreliable annotation. This can allow neural networks to be trained on a much wider variety of tasks or classes and with less manual effort. The good performance obtained from these large noisy datasets indicates that deep learning approaches can tolerate modest amounts of noise in the training set. In this work, we take this trend to an extreme, and consider the performance of deep neural networks under extremely low label reliability, only slightly above chance. We envision a future in which arbitrarily large amounts of data will easily be obtained, but in which labels come without any guarantee of validity and may merely be biased towards the correct distribution. The key takeaways from this paper may be summarized as follows:• Deep neural networks are able to learn from data that has been diluted by an arbitrary amount of noise. We demonstrate that standard deep neural networks still perform well even on training sets in which label accuracy is as low as 1 percent above chance. On MNIST, for example, performance still exceeds 90 percent even with this level of label noise (see Figure 1). This behavior holds, to varying extents, across datasets as well as patterns of label noise, including when noisy labels are biased towards confused classes.• A sufficiently large training set can accommodate a wide range of noise levels. 
We find that the minimum dataset size required for effective training increases with the noise level. A large enough training set can accommodate a wide range of noise levels. Increasing the dataset size further, however, does not appreciably increase accuracy.• Adjusting batch size and learning rate can allow conventional neural networks to operate in the regime of very high label noise. We find that label noise reduces the effective batch size, as noisy labels roughly cancel out and only a small learning signal remains. We show that dataset noise can be partly compensated for by larger batch sizes and by scaling the learning rate with the effective batch size. Learning from noisy data. Several studies have investigated the impact of noisy datasets on machine classifiers. Approaches to learn from noisy data can generally be categorized into two groups:In the first group, approaches aim to learn directly from noisy labels and focus on noise-robust algorithms, e.g., BID0 BID3; BID8; BID13; BID14; BID20. The second group comprises mostly label-cleansing methods that aim to remove or correct mislabeled data, e.g., BID1. Methods in this group frequently face the challenge of disambiguating between mislabeled and hard training examples. To address this challenge, they often use semi-supervised approaches by combining noisy data with a small set of clean labels BID27. Some approaches model the label noise as conditionally independent from the input image BID15 BID17 and some propose image-conditional noise models BID22 BID24. Our work differs from these approaches in that we do not aim to clean the training dataset or propose new noise-robust training algorithms. Instead, we study the behavior of standard neural network training procedures in settings with massive label noise. We show that even without explicit cleaning or noise-robust algorithms, neural networks can learn from data that has been diluted by an arbitrary amount of label noise. Analyzing the robustness of neural networks. Several investigative studies aim to improve our understanding of convolutional neural networks. One particular stream of research in this space seeks to investigate neural networks by analyzing their robustness. For example, show that network architectures with residual connections have a high redundancy in terms of parameters and are robust to the deletion of multiple complete layers during test time. Further, BID18 investigate the robustness of neural networks to adversarial examples. They show that even for fully trained networks, small changes in the input can lead to large changes in the output and thus misclassification. In contrast, we are focusing on non-adversarial noise during training time. Within this stream of research, closest to our work are studies that focus on the impact of noisy training datasets on classification performance (e.g., BID17 BID20 ; BID26). In these studies an increase in noise is assumed to decrease not only the proportion of correct examples, but also their absolute number. In contrast to these studies, we separate the effects and show in §4 that a decrease in the number of correct examples is more destructive to learning than an increase in the number of noisy labels. In this work, we are concerned with scenarios of abundant data of very poor label quality, i.e., the regime in which falsely labeled training examples vastly outnumber correctly labeled examples. 
In particular, our experiments involve observing the performance of deep neural networks on multiclass classification tasks as label noise is increased. To formalize the problem, we denote the number of original training examples by n. To model the amount of noise, we dilute the dataset by adding α noisy examples to the training set for each original training example. Thus, the total number of noisy labels in the training set is αn. Note that by varying the noise level α, we do not change the available number of original examples. Thus, even in the presence of high noise, there is still appreciable data to learn from, if we are able to pick it out. This is in contrast to previous work (e.g., BID17 ; BID20 ; BID26), in which an increase in noise also implies a decrease in the absolute number Figure 1: Performance on MNIST as different amounts of noisy labels are added to a fixed training set of clean labels. We compare a perceptron, MLPs with 1, 2, and 4 hidden layers, and a 4-layer ConvNet. Even with 100 noisy labels for every clean label the ConvNet still attains a performance of 91%. Performance on CIFAR-10 as different amounts of noisy labels are added to a fixed training set of clean labels. We tested ConvNets with 4 and 6 layers, and a ResNet with 101 layers. Even with 10 noisy labels for every clean label the ResNet still attains a performance of 85%. of correct examples. In the following experiments we investigate three different types of noise: uniform label-swapping, structured label-swapping, and out-of-vocabulary examples. A key assumption in this paper is that unreliable labels are better modeled by an unknown stochastic process rather than by the output of an adversary. This is a natural assumption for data that is pulled from the environment, in which antagonism is not to be expected in the noisy annotation process. Deep neural networks have been shown to be exceedingly brittle to adversarial noise patterns BID18. In this work, we demonstrate that even massive amounts of non-adversarial noise present far less of an impediment to learning. As a first experiment, we will show that common training procedures for neural networks are resilient even to settings where correct labels are outnumbered by labels sampled uniformly at random at a ratio of 100 to 1. For this experiment we focus on the task of image classification and work with three commonly used datasets, MNIST , CIFAR-10 and ImageNet BID2 ).In FIG1 we show the classification performance with varying levels of label noise. For MNIST, we vary the ratio α of randomly labeled examples to cleanly labeled examples from 0 (no noise) to 100 (only 11 out of 101 labels are correct, as compared with 10.1 for pure chance). For the more challenging dataset CIFAR-10, we vary α from 0 to 10. For the most challenging dataset ImageNet, we let α range from 0 to 5. We compare various architectures of neural networks: multilayer perceptrons with different numbers of hidden layers, convolutional networks (ConvNets) with different numbers of convolutional layers, and residual networks (ResNets) with different numbers of layers BID4. We evaluate performance after training on a test dataset that is free from noisy labels. 
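The dilution procedure itself is straightforward to reproduce. The sketch below keeps the n clean examples untouched and adds αn extra examples drawn (with repetition) from the same image pool, each carrying a label sampled uniformly at random over the m classes; the function name and the final shuffle are implementation choices, not details specified by the paper.

```python
import numpy as np

def dilute_with_uniform_noise(images, labels, alpha, num_classes, seed=0):
    """Adds alpha randomly-labeled examples per clean example.

    Clean examples are kept intact; noisy examples are re-drawn (with repetition)
    from the same pool and given uniformly random labels, so the total dataset
    size grows to (1 + alpha) * n while the number of clean labels stays fixed.
    """
    rng = np.random.RandomState(seed)
    n = len(images)
    n_noisy = int(alpha * n)
    idx = rng.randint(0, n, size=n_noisy)              # repeat images from the clean pool
    noisy_images = images[idx]
    noisy_labels = rng.randint(0, num_classes, size=n_noisy)
    all_images = np.concatenate([images, noisy_images], axis=0)
    all_labels = np.concatenate([labels, noisy_labels], axis=0)
    perm = rng.permutation(len(all_images))
    return all_images[perm], all_labels[perm]
```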
Full details of our experimental setup are provided in §3.4.Our show that, remarkably, it is possible to attain over 90 percent accuracy on MNIST, even when there are 100 randomly labeled images for every cleanly labeled example, to attain over 85 percent accuracy on CIFAR-10 with 10 random labels for every clean label, and to attain over 70 percent top-5 accuracy on ImageNet with 5 random labels for every clean label. Thus, in this highnoise regime, deep networks are able not merely to perform above chance, but to attain accuracies that would be respectable even without noise. Further, we observe from Figures 1 and 2 that larger neural network architectures tend also to be more robust to label noise. On MNIST, the performance of a perceptron decays rapidly with in- Figure 4: Illustration of uniform and structured noise models. In the case of structured noise, the order of false labels is important; we tested decreasing order of confusion, increasing order of confusion, and random order. The parameter δ parameterizes the degree of structure in the noise. It defines how much more likely the second most likely class is over chance.creasing noise (though it still attains 40 percent accuracy, well above chance, at α = 100). The performance of a multilayer perceptron drops off more slowly, and the ConvNet is even more robust. Likewise, for CIFAR-10, the accuracy of the residual network drops more slowly than that of the smaller ConvNets. This observation provides further support for the effectiveness of ConvNets and ResNets in particular for applications where noise tolerance may be important. We have seen that neural networks are extremely robust to uniform label noise. However, label noise in datasets gathered from a natural environment is unlikely to follow a perfectly uniform distribution. In this experiment, we investigate the effects of various forms of structured noise on the performance of neural networks. Figure 4 illustrates the procedure used to model noise structure. In the uniform noise setting, as illustrated on the left side of Figure 4, correct labels are more likely than any individual false label. However, overall false labels vastly outnumber correct labels. We denote the likelihood over chance for a label to be correct as. Note that = 1/(1 + α), where α is the ratio of noisy labels to certainly correct labels. To induce structure in the noise, we bias noisy labels to certain classes. We introduce the parameter δ to parameterize the degree of structure in the noise. It defines how much more likely the second most likely class is over chance. With δ = 0 the noise is uniform, whereas for δ = 1 the second most likely class is equally likely as the correct class. The likelihood for the remaining classes is scaled linearly, as illustrated in Figure 4 on the right. We investigate three different setups for structured noise: labels biased towards easily confused classes, towards hardly confused classes and towards random classes. FIG4 shows the on MNIST for the three different types of structured noise, as δ varies from 0 to 1. In this experiment, we train 4-layer ConvNets on a dataset that is diluted with 20 noisy labels for each clean label. We vary the order of false labels so that, besides the correct class, labels are assigned most frequently to those most often confused with the correct class, those least often confused with it, and in a random order. 
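The confusion ordering used for the structured-noise experiments can be derived from the confusion matrix of a network trained on a small clean subset and evaluated on held-out data, as described above. The helper below is a plausible way to compute that ordering; it is not code from the paper.

```python
import numpy as np

def confusion_order(y_true, y_pred, num_classes):
    """Ranks, for every class, the other classes from most to least confused with it.

    y_true, y_pred: labels and predictions on a held-out set from a network trained
    on a small clean subset, as used to build the structured-noise orderings.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    order = {}
    for c in range(num_classes):
        errors = conf[c].copy()
        errors[c] = -1                                # push the correct class to the end
        order[c] = list(np.argsort(-errors))[:-1]     # most-confused first, correct class dropped
    return order
```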
We determine commonly confused labels by training the network repeatedly on a small subset of MNIST and observing the errors it makes on a test set. The show that deep neural nets are robust even to structured noise, as long as the correct label remains the most likely by at least a small margin. Generally, we do not observe large differences between the different models of noise structure, only that bias towards random classes seems to hurt the performance a little more than bias towards confused classes. This might help explain why we often observe quite good from real world noisy datasets, where label noise is more likely to be biased towards related and confusing classes. In the preceding experiments, we diluted the training sets with noisy examples drawn from the same dataset; i.e., falsely labeled examples were images from within other categories of the dataset. In "confusing order" (highest probability for the most confusing label), "reverse confusing order", and random order. We interpolate between uniform noise, δ = 0, and noise so highly skewed that the most common false label is as likely as the correct label, δ = 1. Except for δ ≈ 1, performance is similar to uniform noise.: Performance on CIFAR-10 for varying amounts of noisy labels. Noisy training examples are drawn from CIFAR-10 itself, but mislabeled uniformly at random, CIFAR-100, with uniformly random labels, and white noise with mean and variance chosen to match those of CIFAR-10. Noise drawn from CIFAR-100 ed in only half the drop in performance observed with noise from CIFAR-10 itself, while white noise examples did not appreciable affect performance.natural scenarios, however, noisy examples likely also include categories not included in the dataset that have erroneously been assigned labels within the dataset. Thus, we now consider two alternative sources for noisy training examples. First, we dilute the training set with examples that are drawn from a similar but different dataset. In particular, we use CIFAR-10 as our training dataset and dilute it with examples from CIFAR-100, assigning each image a category from CIFAR-10 at random. Second, we also consider a dilution of the training set with "examples" that are simply white noise; in this case, we match the mean and variance of pixels within CIFAR-10 and again assign labels uniformly at random. FIG5 shows the obtained by a six-layer ConvNet on the different noise sources for varying levels of noise. We observe that both alternative sources of noise lead to better performance than the noise originating from the same dataset. For noisy examples drawn from CIFAR-100, performance drops only about half as much as when noise originates from CIFAR-10 itself. This trend is consistent across noise levels. For white noise, performance does not drop regardless of noise level; this is in line with prior work that has shown that neural networks are able to fit random input BID26. This indicates the scenarios considered in Experiments 1 and 2 represent in some sense a worst case. In natural scenarios, we may expect massively noisy datasets to fall somewhere in between the cases exemplified by CIFAR-10 and CIFAR-100. That is, some examples will be relevant but mislabeled. However, it is likely that many examples will not be from any classes under consideration and therefore will influence training less negatively. In fact, it is possible that such examples might increase accuracy, if the erroneous labels reflect underlying similarity between the examples in question. 
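Both alternative noise sources are easy to construct. The sketch below generates white-noise "examples" whose mean and variance match a reference dataset, and draws out-of-vocabulary examples from a different dataset with uniformly random labels. Matching global (rather than per-pixel) statistics and leaving the synthesized values unclipped are assumptions, since the paper only states that mean and variance are matched.

```python
import numpy as np

def white_noise_examples(reference_images, n_noisy, num_classes, seed=0):
    """White-noise examples matched to the reference pixel mean/variance, random labels."""
    rng = np.random.RandomState(seed)
    mean, std = reference_images.mean(), reference_images.std()
    noise = rng.normal(mean, std, size=(n_noisy,) + reference_images.shape[1:])
    labels = rng.randint(0, num_classes, size=n_noisy)
    return noise, labels

def out_of_vocabulary_examples(other_images, n_noisy, num_classes, seed=0):
    """Noisy examples drawn from a different dataset (e.g. CIFAR-100) with random labels."""
    rng = np.random.RandomState(seed)
    idx = rng.randint(0, len(other_images), size=n_noisy)
    labels = rng.randint(0, num_classes, size=n_noisy)
    return other_images[idx], labels
```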
All models are trained with AdaDelta as optimizer and a batch size of 128. For each level of label noise we train separate models with different learning rates ranging from 0.01 to 1 and pick the learning rate that in the best performance. Generally, we observe that the higher the label noise, the lower the optimal learning rate. We investigate this trend in detail in §5. There seems to be a critical amount of clean training data required to successfully train the networks. This threshold increases as the noise level rises. For example, at α = 10, 2,000 clean labels are needed to attain 90% performance, while at α = 50, 10,000 clean labels are needed. In Experiments 1 and 2, noisy labels are drawn from the same dataset as the labels guaranteed to be correct. This involves drawing the same example many times from the dataset, giving it the correct label once, and in every other instance picking a random label according to the noise distribution in question. We show in Figure 7 that performance would have been comparable had we been able to draw noisy labels from an extended dataset, instead of repeating images. Specifically, we train a convolutional network on a subset of MNIST, with 2,500 certainly correct labels and with noisy labels drawn either with repetition from this set of 2,500 or without repetition from the remaining examples in the MNIST dataset. The are essentially identical between repeated and unique examples, supporting our setup in the preceding experiments. Underlying the ability of deep networks to learn from massively noisy data is the size of the data in question. It is well-established, see e.g., BID2, that traditional deep learning relies upon large datasets. We will now see how this is particularly true of noisy datasets. In Figure 8, we compare the performance of a ConvNet on MNIST as the size of the training set varies. We also show the performance of the same ConvNet trained on MNIST diluted with noisy labels sampled uniformly. We show how the performance of the ConvNet varies with the number of cleanly labeled training examples. For example, for the blue curve of α = 10 and 1,000 clean labels, the network is trained on 11,000 examples: 1,000 cleanly labeled examples and 10,000 with random labels. Generally, we observe that independent of the noise level the networks benefit from more data and that, given sufficient data, the networks reach similar . Further, the indicate that there seems to be a critical amount of clean training data that is required to successfully train the networks. This critical amount of clean data depends on the noise level; in particular, it increases as the noise level rises. Since performance rapidly levels off past the critical threshold the main requirement for the clean training set is to be of sufficient size. It is because of the critical amount of required clean data that we have not attempted to train networks for α 100. The number of correct examples needed to train such a network might rise above the 60,000 provided in the MNIST dataset. In a real-world dataset, the amount of (noisy) data available for training is likely not to be the limiting factor. Rather, considerations such as training time and learning rate may play a more important role, as we discuss in the following section. In the preceding sections, our were obtained by training neural networks with fixed batch size and running a parameter search to pick the optimal learning rate. 
We now look in more detail into how the choice of hyperparameters affects learning on noisy datasets. First, we investigate the effect of the batch size on the noise robustness of neural network training. In Figure 9, we compare the performance of a simple 2-layer ConvNet on MNIST with increasing noise, as batch size varies from 32 to 256. We observe that increasing the batch size provides greater robustness to noisy labels. One reason for this behavior could be that, within a batch, gradient updates from randomly sampled noisy labels cancel out, while gradients from correct examples that are marginally more frequent sum together and contribute to learning. By this logic, large batch sizes would be more robust to noise since the mean gradient over a larger batch is closer to the gradient for correct labels. All other experiments in this paper are performed with a fixed batch size of 128.We may also consider the theoretical case of infinite batch size, in which gradients are averaged over the entire space of possible inputs at each training step. While this is often impossible to perform in practice, we can simulate such behavior by an auxiliary loss function. In classification tasks, we are given an input x and aim to predict the class f (x) ∈ {1, 2, . . ., m}. The value f (x) is encoded within a neural network by the 1-hot vector y(x) such that DISPLAYFORM0 for 1 ≤ k ≤ m. Then, the standard cross-entropy loss over a batch X is given by: DISPLAYFORM1 whereŷ is the predicted vector and · X denotes the expected value over the batch X. We assume thatŷ is normalized (e.g. by the softmax function) so that the entries sum to 1.For a training set with noisy labels, we may consider the label f (x) given in the training set to be merely an approximation to the true label f 0 (x). Consider the case of n training examples, and αn noisy labels that are sampled uniformly at random from the set {1, 2, . . ., m}. Then, f (x) = f 0 (x) with probability 1 1+α, and otherwise it is 1, 2,..., m, each with probability α m(1+α). As batch size increases, the expected value over the batch X is approximated more closely by these probabilities. In the limit of infinite batch size, equation FORMULA1 takes the form of a noisy loss function H α: DISPLAYFORM2 We can therefore compare training using the cross-entropy loss with αn noisy labels to training using the noisy loss function H α without noisy labels. The term on the right-hand side of represents the noise contribution, and is clearly minimized whereŷ k are all equal. As α increases, this contribution is weighted more heavily against − logŷ f0(x) X, which is minimized atŷ(x) = y(x).We show in Figure 9 the of training our 2-layer ConvNet on MNIST with the noisy loss function H α, simulating αn noisy labels with infinite batch size. We can observe that the network's accuracy does not decrease as α increases. This can be explained by the observation that an increasing α is merely decreasing the magnitude of the true gradient, rather than altering its direction. Our observations indicate that increasing noise in the training set reduces the effective batch size, as noisy signals roughly cancel out and only small learning signal remains. We show that increasing the batch size is a simple practical means to mitigate the effect of noisy training labels. It has become common practice in training deep neural networks to scale the learning rate with the batch size. In particular, it has been shown that the smaller the batch size, the lower the optimal learning rate BID9. 
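The infinite-batch-size loss H_α can be written directly as an expected cross-entropy. The exact equation is garbled in this extraction (DISPLAYFORM2), so the PyTorch sketch below is a reconstruction from the stated label probabilities: the observed label equals the clean label with probability 1/(1+α) and is each of the m classes with probability α/(m(1+α)).

```python
import torch
import torch.nn.functional as F

def noisy_cross_entropy(logits, clean_targets, alpha):
    """Expected cross-entropy under uniform label noise (the 'infinite batch size' loss).

    This is a reconstruction based on the stated label probabilities, not a verbatim
    formula from the paper: the clean-label term is weighted by 1/(1+alpha) and every
    class receives an additional weight of alpha/(m*(1+alpha)).
    """
    m = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    clean_term = F.nll_loss(log_probs, clean_targets)   # < -log y_hat_{f0(x)} >_X
    noise_term = -log_probs.mean(dim=1).mean()          # (1/m) * sum_k < -log y_hat_k >_X
    return clean_term / (1.0 + alpha) + alpha * noise_term / (1.0 + alpha)
```

As α grows, the second term dominates; since it is minimized by a uniform prediction, the effect is to shrink the magnitude of the useful gradient rather than change its direction, which is consistent with the observation that accuracy does not degrade in this limit.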
In our experiments, we have observed that noisy labels reduce the effective batch size. As such, we would expect that lower learning rates perform better than large learning rates as noise increases. Figure 10 shows the performance of a 4-layer ConvNet trained with different learning rates on CIFAR-10 for varying label noise. As expected, we observe that the optimal learning rate decreases as noise increases. For example, the optimal learning rate for the clean dataset is 1, while, with the introduction of noise, this learning rate becomes unstable. To sum up, we observe that increasing label noise reduces the effective batch size. We have shown that the effect of label noise can be partly counterbalanced for by a larger training batch size. Now, we see that one can additionally scale the learning rate to compensate for any remaining change in effective batch size induced by noisy labels. In this paper, we have considered the behavior of deep neural networks on training sets with very noisy labels. In a series of experiments, we have demonstrated that learning is robust to an essentially arbitrary amount of label noise, provided that the number of clean labels is sufficiently large. We have further shown that the threshold required for clean labels increases as the noise level does. Finally, we have observed that noisy labels reduce the effective batch size, an effect that can be mitigated by larger batch sizes and downscaling the learning rate. It is worthy of note that although deep networks appear robust to even high degrees of label noise, clean labels still always perform better than noisy labels, given the same quantity of training data. Further, one still requires expert-vetted test sets for evaluation. Lastly, it is important to reiterate that our studies focus on non-adversarial noise. Our work suggests numerous directions for future investigation. For example, we are interested in how label-cleaning and semi-supervised methods affect the performance of networks in a high-noise regime. Are such approaches able to lower the threshold for training set size? Finally, it remains to translate the we present into an actionable trade-off between data annotation and acquisition costs, which can be utilized in real world training pipelines for deep networks on massive noisy data.
B1p461b0W
We show that deep neural networks are able to learn from data that has been diluted by an arbitrary amount of noise.
In this paper, we propose to extend the recently introduced model-agnostic meta-learning algorithm for low resource neural machine translation (NMT). We frame low-resource translation as a meta-learning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks. We use the universal lexical representation (b) to overcome the input-output mismatch across different languages. We evaluate the proposed meta-learning strategy using eighteen European languages (Bg, Cs, Da, De, El, Es, Et, Fr, Hu, It, Lt, Nl, Pl, Pt, Sk, Sl, Sv and Ru) as source tasks and five diverse languages (Ro, Lv, Fi, Tr, and Ko) as target tasks. We show that the proposed approach significantly outperforms the multilingual, transfer learning based approach and enables us to train a competitive NMT system with only a fraction of training examples. For instance, the proposed approach can achieve as high as 22.04 BLEU on Romanian-English WMT’16 by seeing only 16,000 translated words (~600 parallel sentences). Despite the massive success brought by neural machine translation (NMT, BID36 BID4 BID37, it has been noticed that the vanilla NMT often lags behind conventional machine translation systems, such as statistical phrase-based translation systems (PBMT, BID24, for low-resource language pairs (see, e.g., BID23 . In the past few years, various approaches have been proposed to address this issue. The first attempts at tackling this problem exploited the availability of monolingual corpora BID17 BID32 BID40 . It was later followed by approaches based on multilingual translation, in which the goal was to exploit knowledge from high-resource language pairs by training a single NMT system on a mix of high-resource and low-resource language pairs (a,b; BID27 BID21 BID19 . Its variant, transfer learning, was also proposed by BID42, in which an NMT system is pretrained on a high-resource language pair before being finetuned on a target low-resource language pair. In this paper, we follow up on these latest approaches based on multilingual NMT and propose a meta-learning algorithm for low-resource neural machine translation. We start by arguing that the recently proposed model-agnostic meta-learning algorithm could be applied to low-resource machine translation by viewing language pairs as separate tasks. This view enables us to use MAML to find the initialization of model parameters that facilitate fast adaptation for a new language pair with a minimal amount of training examples (§3). Furthermore, the vanilla MAML however cannot handle tasks with mismatched input and output. We overcome this limitation by incorporating the universal lexical representation BID15 and adapting it for the meta-learning scenario (§3.3).We extensively evaluate the effectiveness and generalizing ability of the proposed meta-learning algorithm on low-resource neural machine translation. We utilize 17 languages from Europarl and Russian from WMT as the source tasks and test the meta-learned parameter initialization against five target languages (Ro, Lv, Fi, Tr and Ko), in all cases translating to English. 
Our experiments using only up to 160k tokens in each of the target task reveal that the proposed meta-learning approach outperforms the multilingual translation approach across all the target language pairs, and the gap grows as the number of training examples 2 Background Neural Machine Translation (NMT) Given a source sentence X = {x 1, ..., x T}, a neural machine translation model factors the distribution over possible output sentences Y = {y 1, ..., y T} into a chain of conditional probabilities with a leftto-right causal structure: DISPLAYFORM0 where special tokens y 0 (bos) and y T +1 (eos) are used to represent the beginning and the end of a target sentence. These conditional probabilities are parameterized using a neural network. Typically, an encoder-decoder architecture BID36 BID9 BID4 with a RNN-based decoder is used. More recently, architectures without any recurrent structures BID13 BID37 have been proposed and shown to speedup training while achieving state-of-the-art performance. Low Resource Translation NMT is known to easily over-fit and in an inferior performance when the training data is limited BID23. In general, there are two ways for handling the problem of low resource translation: utilizing the resource of unlabeled monolingual data, and sharing the knowledge between low-and high-resource language pairs. Many research efforts have been spent on incorporating the monolingual corpora into machine translation, such as multi-task learning BID17 ), back-translation , dual learning BID20 and unsupervised machine translation with monolingual corpora only for both sides BID3 BID26. For the second approach, prior researches have worked on methods to exploit the knowledge of auxiliary translations, or even auxiliary tasks. For instance, BID8; BID28 investigate the use of a pivot to build a translation path between two languages even without any directed resource. The pivot can be a third language or even an image in multimodal domains. When pivots are not easy to obtain, BID11; BID27; BID21 have shown that the structure of NMT is suitable for multilingual machine translation. BID15 also showed that such a multilingual NMT system could improve the performance of low resource translation by using a universal lexical representation to share embedding information across languages. All the previous work for multilingual NMT assume the joint training of multiple high-resource languages naturally in a universal space (for both the input representation and the model) which, however, is not necessarily true, especially for very low resource cases. Meta Learning In the machine learning community, meta-learning, or learning-to-learn, has recently received interests. Meta-learning tries to solve the problem of "fast adaptation on new training data." One of the most successful applications of meta-learning has been on few-shot (or oneshot) learning BID25, where a neural network is trained to readily learn to classify inputs based on only one or a few training examples. There are two categories of meta-learning:1. learning a meta-policy for updating model parameters (see, e.g., BID1 BID18 BID30 2. learning a good parameter initialization for fast adaptation (see, e.g., BID10 BID38 BID35 .In this paper, we propose to use a meta-learning algorithm for low-resource neural machine translation based on the second category. More specifically, we extend the idea of model-agnostic metalearning in the multilingual scenario. 
The underlying idea of MAML is to use a set of source tasks T 1,..., T K to find the initialization of parameters θ 0 from which learning a target task T 0 would require only a small number of training examples. In the context of machine translation, this amounts to using many high-resource language pairs to find good initial parameters and training a new translation model on a low-resource language starting from the found initial parameters. This process can be understood as That is, we meta-learn the initialization from auxiliary tasks and continue to learn the target task. DISPLAYFORM0 We refer the proposed meta-learning method for NMT to MetaNMT. See FIG0 for the overall illustration. Given any initial parameters θ 0 (which can be either random or meta-learned), the prior distribution of the parameters of a desired NMT model can be defined as an isotropic Guassian: DISPLAYFORM0 where 1/β is a variance. With this prior distribution, we formulate the language-specific learning process Learn(D T ; θ 0) as maximizing the logposterior of the model parameters given data D T: DISPLAYFORM1 where we assume p(X|θ) to be uniform. The first term above corresponds to the maximum likelihood criterion often used for training a usual NMT system. The second term discourages the newly learned model from deviating too much from the initial parameters, alleviating the issue of overfitting when there is not enough training data. In practice, we solve the problem above by maximizing the first term with gradient-based optimization and early-stopping after only a few update steps. Thus, in the low-resource scenario, finding a good initialization θ 0 strongly correlates the final performance of the ing model. We find the initialization θ 0 by repeatedly simulating low-resource translation scenarios using auxiliary, high-resource language pairs. Following Finn et al. FORMULA0 we achieve this goal by defining the meta-objective function as DISPLAYFORM0 where k ∼ U({1, . . ., K}) refers to one metalearning episode, and D T, D T follow the uniform distribution over T's data. We maximize the meta-objective function using stochastic approximation BID31 with gradient descent. For each episode, we uniformly sample one source task at random, T k. We then sample two subsets of training examples independently from the chosen task, D T k and D T k. We use the former to simulate languagespecific learning and the latter to evaluate its outcome. Assuming a single gradient step is taken only the with learning rate η, the simulation is: DISPLAYFORM1 Once the simulation of learning is done, we evaluate the updated parameters θ k on D T k, The gradient computed from this evaluation, which we refer to as meta-gradient, is used to update the meta model θ. It is possible to aggregate multiple episodes of source tasks before updating θ: where η is the meta learning rate. Unlike a usual learning scenario, the ing model θ 0 from this meta-learning procedure is not necessarily a good model on its own. It is however a good starting point for training a good model using only a few steps of learning. In the context of machine translation, this procedure can be understood as finding the initialization of a neural machine translation system that could quickly adapt to a new language pair by simulating such a fast adaptation scenario using many high-resource language pairs. 
DISPLAYFORM2 We use the following approximation property DISPLAYFORM0 where ν is a small constant and DISPLAYFORM1 In practice, we find that it is also possible to ignore the second-order term, ending up with the following simplified update rule: DISPLAYFORM2 Related Work: Multilingual Transfer Learning The proposed MetaNMT differs from the existing framework of multilingual translation BID27 BID21 BID15 or transfer learning BID42. The latter can be thought of as solving the following problem: DISPLAYFORM3 We omit the subscript k for simplicity.where D k is the training set of the k-th task, or language pair. The target low-resource language pair could either be a part of joint training or be trained separately starting from the solution θ 0 found from solving the above problem. The major difference between the proposed MetaNMT and these multilingual transfer approaches is that the latter do not consider how learning happens with the target, low-resource language pair. The former explicitly incorporates the learning process within the framework by simulating it repeatedly in Eq.. As we will see later in the experiments, this in a substantial gap in the final performance on the low-resource task. Illustration In Fig. 2, we contrast transfer learning, multilingual learning and meta-learning using three source language pairs (Fr-En, Es-En and Pt-En) and two target pairs (Ro-En and Lv-En). Transfer learning trains an NMT system specifically for a source language pair (Es-En) and finetunes the system for each target language pair (RoEn, Lv-En). Multilingual learning often trains a single NMT system that can handle many different language pairs (Fr-En, Pt-En, Es-En), which may or may not include the target pairs (Ro-En, LvEn). If not, it finetunes the system for each target pair, similarly to transfer learning. Both of these however aim at directly solving the source tasks. On the other hand, meta-learning trains the NMT system to be useful for fine-tuning on various tasks including the source and target tasks. This is done by repeatedly simulating the learning process on low-resource languages using many high-resource language pairs (Fr-En, Pt-En, Es-En). I/O mismatch across language pairs One major challenge that limits applying meta-learning for low resource machine translation is that the approach outlined above assumes the input and output spaces are shared across all the source and target tasks. This, however, does not apply to ma-chine translation in general due to the vocabulary mismatch across different languages. In multilingual translation, this issue has been tackled by using a vocabulary of sub-words BID32 or characters BID27 shared across multiple languages. This surface-level sharing is however limited, as it cannot be applied to languages exhibiting distinct orthography (e.g., IndoEuroepan languages vs. Korean.)Universal Lexical Representation (ULR) We tackle this issue by dynamically building a vocabulary specific to each language using a keyvalue memory network (; BID16, as was done successfully for low-resource machine translation recently by BID15 . We start with multilingual word embedding matrices k query ∈ R |V k |×d pretrained on large monolingual corpora, where V k is the vocabulary of the k-th language. These embedding vectors can be obtained with small dictionaries of seed word pairs BID2 BID34 or in a fully unsupervised manner BID41 BID0 . 
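One meta-learning episode, with the first-order simplification above, can be sketched as follows. The task interface (sample_batch), the use of a deep copy for the simulated language-specific learner, and the hyper-parameter values are assumptions made for illustration; the structure of the update (adapt on D_T, evaluate on D'_T, apply the resulting gradient to the shared initialization) follows the procedure described in the text.

```python
import copy
import torch

def meta_train_step(model, loss_fn, source_tasks, meta_optimizer,
                    inner_lr=1e-4, inner_steps=1):
    """One first-order meta-learning episode (MetaNMT-style sketch).

    source_tasks: list of objects whose sample_batch() returns (D_T, D_T_prime), two
    independently sampled batches from one high-resource language pair (assumed API).
    """
    task = source_tasks[torch.randint(len(source_tasks), (1,)).item()]
    d_train, d_eval = task.sample_batch()

    # Simulate language-specific learning: a few Adam steps from the current initialization.
    fast_model = copy.deepcopy(model)
    inner_opt = torch.optim.Adam(fast_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        loss_fn(fast_model, d_train).backward()
        inner_opt.step()

    # Evaluate the adapted parameters on held-out data from the same task; the resulting
    # "meta-gradient" is applied to the shared initialization (first-order approximation).
    meta_loss = loss_fn(fast_model, d_eval)
    grads = torch.autograd.grad(meta_loss, fast_model.parameters())
    meta_optimizer.zero_grad()
    for p, g in zip(model.parameters(), grads):
        p.grad = g.detach()
    meta_optimizer.step()
    return meta_loss.item()
```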
We take one of these languages k to build universal lexical representation consisting of a universal embedding matrix u ∈ R M ×d and a corresponding key matrix key ∈ R M ×d, where M < |V k |. Both k query and key are fixed during meta-learning. We then compute the language-specific embedding of token x from the language k as the convex sum of the universal embedding vectors by DISPLAYFORM0 DISPLAYFORM1 and τ is set to 0.05. This approach allows us to handle languages with different vocabularies using a fixed number of shared parameters ( u, key and A.)Learning of ULR It is not desirable to update the universal embedding matrix u when finetuning on a small corpus which contains a limited set of unique tokens in the target language, as it could adversely influence the other tokens' embedding vectors. We thus estimate the change to each embedding vector induced by languagespecific learning by a separate parameter ∆ k [x]: DISPLAYFORM2 During language-specific learning, the ULR Preprocessing and ULR Initialization As described in §3.3, we initialize the query embedding vectors k query of all the languages. For each language, we use the monolingual corpora built from Wikipedia 7 and the parallel corpus. The concatenated corpus is first tokenized and segmented using byte-pair encoding (BPE, BID33, ing in 40, 000 subwords for each language. We then estimate word vectors using fastText BID5 and align them across all the languages in an unsupervised way using MUSE BID0 to get multilingual word vectors. We use the multilingual word vectors of the 20,000 most frequent words in English to form the universal embedding matrix u . Model We utilize the recently proposed Transformer BID37 as an underlying NMT system. We implement Transformer in this paper based on BID14 8 and mod-ify it to use the universal lexical representation from §3.3. We use the default set of hyperparameters (d model = d hidden = 512, n layer = 6, n head = 8, n batch = 4000, t warmup = 16000) for all the language pairs and across all the experimental settings. We refer the readers to BID37 BID14 for the details of the model. However, since the proposed metalearning method is model-agnostic, it can be easily extended to any other NMT architectures, e.g. RNN-based sequence-to-sequence models with attention BID4.Learning We meta-learn using various sets of source languages to investigate the effect of source task choice. For each episode, by default, we use a single gradient step of language-specific learning with Adam BID22 per computing the meta-gradient, which is computed by the first-order approximation in Eq..For each target task, we sample training examples to form a low-resource task. We build tasks of 4k, 16k, 40k and 160k English tokens for each language. We randomly sample the training set five times for each experiment and report the average score and its standard deviation. Each fine-tuning Ro is done on a training set, early-stopped on a validation set and evaluated on a test set.-En Lv-En Fi-En Tr-En Ko-EnFine-tuning Strategies The transformer consists of three modules; embedding, encoder and decoder. We update all three modules during metalearning, but during fine-tuning, we can selectively tune only a subset of these modules. Following BID42, we consider three fine-tuning strategies; fine-tuning all the modules (all), fine-tuning the embedding and encoder, but freezing the parameters of the decoder (emb+enc) and fine-tuning the embedding only (emb). 
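The key-value computation of the universal lexical representation can be sketched compactly: language-specific query vectors are matched against a shared key matrix, the softmax weights (with temperature τ = 0.05) form a convex combination of the universal embedding vectors, and a per-language correction ∆ is the only lexical parameter updated during fine-tuning. The projection used to align queries and keys and the initialization choices are assumptions about details not spelled out in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UniversalLexicalRepresentation(nn.Module):
    """Key-value sketch of ULR (shapes and the projection A are assumptions).

    query_emb: pretrained language-specific word vectors, kept fixed.
    key / universal: shared (M x d) key and universal embedding matrices.
    delta: per-language correction, the only lexical parameter tuned on the target task.
    """
    def __init__(self, query_emb, m_universal, d, tau=0.05):
        super().__init__()
        self.register_buffer("query", query_emb)            # |V_k| x d, frozen
        self.key = nn.Parameter(torch.randn(m_universal, d))
        self.universal = nn.Parameter(torch.randn(m_universal, d))
        self.proj = nn.Linear(d, d, bias=False)              # alignment matrix A
        self.delta = nn.Parameter(torch.zeros(query_emb.size(0), d))
        self.tau = tau

    def forward(self, token_ids):
        q = self.proj(self.query[token_ids])                 # (batch, seq, d)
        attn = F.softmax(q @ self.key.t() / self.tau, dim=-1)
        emb = attn @ self.universal                           # convex sum of universal vectors
        return emb + self.delta[token_ids]                    # language-specific correction
```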
We metalearn the initial models on all the source tasks using either Ro-En or Lv-En as a validation task. We also train the initial models to be multilingual translation systems. We fine-tune them using the four target tasks (Ro-En, Lv-En, Fi-En and Tr-En; 16k tokens each) and compare the proposed meta-learning strategy and the multilingual, transfer learning strategy. As presented in FIG2, the proposed learning approach significantly outperforms the multilingual, transfer learning strategy across all the target tasks regardless of which target task was used for early stopping. We also notice that the emb+enc strategy is most effective for both meta-learning and transfer learning approaches. With the proposed meta-learning and emb+enc fine-tuning, the final NMT systems trained using only a fraction of all available training examples achieve 2/3 (Ro-En) and 1/2 (Lv-En, Fi-En and Tr-En) of the BLEU score achieved by the models trained with full training sets. Similarly to training any other neural network, meta-learning still requires early-stopping to avoid overfitting to a specific set of source tasks. In doing so, we observe that the choice of a validation task has nonnegligible impact on the final performance. For instance, as shown in FIG2, Fi-En benefits more when Ro-En is used for validation, while the opposite happens with Tr-En. The relationship between the task similarity and the impact of a validation task must be investigated further in the future. Training Set Size We vary the size of the target task's training set and compare the proposed meta-learning strategy and multilingual, transfer learning strategy. We use the emb+enc fine-tuning on Ro-En and Fi-En. FIG3 demonstrates that the meta-learning approach is more robust to the drop in the size of the target task's training set. The gap between the meta-learning and transfer learning grows as the size shrinks, confirming the effectiveness of the proposed approach on extremely lowresource language pairs. TAB2, we present the on all five target tasks obtained while varying the source task set. We first see that it is Source (Tr) google mülteciler için 11 milyon dolar toplamaküzere bagış eşleştirme kampanyasını başlattı. Target google launches donation-matching campaign to raise $ 11 million for refugees. Meta-0 google refugee fund for usd 11 million has launched a campaign for donation. Meta-16k google has launched a campaign to collect $ 11 million for refugees. always beneficial to use more source tasks. Although the impact of adding more source tasks varies from one language to another, there is up to 2× improvement going from one source task to 18 source tasks (Lv-En, Fi-En, Tr-En and Ko-En). The same trend can be observed even without any fine-tuning (i.e., unsupervised translation, BID26 BID3). In addition, the choice of source languages has different implications for different target languages. For instance, Ro-En benefits more from {Es, Fr, It, Pt} than from {De, Ru}, while the opposite effect is observed with all the other target tasks. Training Curves The benefit of meta-learning over multilingual translation is clearly demonstrated when we look at the training curves in FIG4. With the multilingual, transfer learning approach, we observe that training rapidly saturates and eventually degrades, as the model overfits to the source tasks. 
MetaNMT, on the other hand, continues to improve and never degrades, as the meta-objective ensures that the model is adequate for fine-tuning on target tasks rather than for solving the source tasks.

Sample Translations. We present some sample translations from the tested models in TAB4. Inspecting these examples provides insight into the proposed meta-learning algorithm. For instance, we observe that the meta-learned model without any fine-tuning produces a word-by-word translation in the first example (Tr-En), which is due to the successful use of the universal lexical representation and the meta-learned initialization. The system, however, cannot reorder tokens from Turkish to English, as it has not seen any training example of Tr-En. After seeing around 600 sentence pairs (16k English tokens), the model rapidly learns to correctly reorder tokens to form a better translation. A similar phenomenon is observed in the Ko-En example. Such cases can be found across different language pairs.

In this paper, we proposed a meta-learning algorithm for low-resource neural machine translation that exploits the availability of high-resource language pairs. We based the proposed algorithm on the recently proposed model-agnostic meta-learning and adapted it to work with multiple languages that do not share a common vocabulary using the technique of universal lexical representation, resulting in MetaNMT. Our extensive evaluation, using 18 high-resource source tasks and 5 low-resource target tasks, has shown that the proposed MetaNMT significantly outperforms the existing approach of multilingual, transfer learning in low-resource neural machine translation across all the language pairs considered.

The proposed approach opens new opportunities for neural machine translation. First, it is a principled framework for incorporating various extra sources of data, such as source- and target-side monolingual corpora. Second, it is a generic framework that can easily accommodate existing and future neural machine translation systems.
This work presents a method for active anomaly detection which can be built upon existing deep learning solutions for unsupervised anomaly detection. We show that a prior needs to be assumed on what the anomalies are in order to have performance guarantees in unsupervised anomaly detection. We argue that active anomaly detection has, in practice, the same cost as unsupervised anomaly detection, but with the possibility of much better results. To solve this problem, we present a new layer that can be attached to any deep learning model designed for unsupervised anomaly detection to transform it into an active method, presenting results on both synthetic and real anomaly detection datasets.

Anomaly detection (a.k.a. outlier detection) aims to discover rare instances that do not conform to the patterns of the majority. From a business perspective, though, we are not only interested in finding rare instances, but "useful anomalies". This problem has been amply studied recently, with solutions inspired by extreme value theory, robust statistics and graph theory. Unsupervised anomaly detection is a sub-area of outlier detection, and is frequently applied since label acquisition is very expensive and time consuming. It is an especially hard task, where there is usually no information on what these rare instances are; most works use models with implicit priors or heuristics to discover these anomalies, providing an anomaly score s(x) for each instance in a dataset. Active anomaly detection is a powerful alternative approach to this problem, which has presented good results in recent works.

In this work, we first show that unsupervised anomaly detection requires priors to be assumed on the anomaly distribution; we then argue in favor of approaching it with active anomaly detection, an important, but under-explored approach (Section 2). We propose a new layer, called here Universal Anomaly Inference (UAI), which can be applied on top of any unsupervised anomaly detection model based on deep learning to transform it into an active model (Section 3). This layer uses the strongest assets of deep anomaly detection models, i.e. their learned latent representations (l) and anomaly score (s), to train a classifier on the few already labeled instances. An example of such an application can be seen in FIG0, where a UAI layer is built upon a Denoising AutoEncoder (DAE). We then present extensive experiments, analyzing the performance of our systems against unsupervised, semi-supervised and active ones under similar budgets on both synthetic and real data, showing that our algorithm improves on the state of the art in several datasets with no hyperparameter tuning (Section 4). Finally, we visualize our models' learned latent representations, comparing them to those of unsupervised models, and analyze our model's performance for different numbers of labels (Appendix C).

One classical definition describes an outlying observation, or outlier, as one that appears to deviate markedly from other members of the sample in which it occurs; another states that an outlier is an observation that deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism; a third says that normal data instances occur in high-probability regions of a stochastic model, while anomalies occur in the low-probability ones.
Following these definitions, especially the last one, we assume there is a probability density function from which our 'normal' data instances are generated: X_normal ∼ p_normal(x) = p(x|y = 0), where x is an instance's available information and y is a label indicating whether the point is anomalous or not. There is also a different probability density function from which anomalous data instances are sampled: X_anom ∼ p_anom(x) = p(x|y = 1). A full dataset is composed of both normal and anomalous instances, being sampled from a probability distribution that follows

(X, Y)_full ∼ p_full(x, y) = p(y) p(x|y),
X_full ∼ p_full(x) = p(y = 0) p_normal(x) + p(y = 1) p_anom(x) = (1 − λ) p_normal(x) + λ p_anom(x),

where λ is a usually small constant representing the probability of a random data point being anomalous (λ = p(y = 1)); this constant can either be known a priori or not.

Anomaly detection learning systems can be divided into three different types:
• Supervised: A training and a test set are available with curated labels for non-anomalous and anomalous instances. This case is similar to an unbalanced supervised classification setting: D_train/test = (X, Y)_train/test ∼ p_full(x, y).
• Semi-Supervised: A training set is available containing only non-anomalous instances, and the challenge is to identify anomalous instances in a test set. This is also called novelty detection: D_train = {x | x ∼ p_normal(x)}, D_test = {x | x ∼ p_full(x)}.
• Unsupervised: A dataset containing both non-anomalous and anomalous instances is available, and the challenge is to identify anomalous instances in it. There is no concept of a test set, since anomalous instances must be sorted within the dataset itself: D = {x | x ∼ p_full(x)}.

2.1 UNSUPERVISED ANOMALY DETECTION
In this work, we focus on unsupervised anomaly detection. Here, in possession of the full set of points X ∼ p_full(x), we want to find a subset X_anom ⊂ X which is composed of the anomalous instances. The full distribution p_full is a mixture of distributions, and if these distributions overlap very closely, it may be impossible to learn the individual distributions beyond a certain accuracy threshold. It is well known that general mixture models are unidentifiable. In the sequel, we further show that we gain no information on p_anom from p_full for any small λ without a prior on the anomalies' probability distribution. This differs from the usual unidentifiability of mixture models in that we make no assumptions on the prior for p_normal, showing that all valid distributions of p_anom are equally probable.

Theorem 1 (No free anomaly theorem). Consider two independent arbitrary probability distributions p_normal and p_anom. For a small number of anomalies λ ≈ 0, knowing p_full = p gives us no further knowledge on the distribution of p_anom:

p(p_anom | p_full = p) → p(p_anom) as λ → 0.

From Theorem 1 we conclude that unsupervised anomaly detection requires a prior on the anomalies' distribution. A more tangible example of this can be seen in FIG1, where we present a synthetic data distribution composed of three classes of data clustered in four visibly separable clusters. Anomaly detection is an undecidable problem in this setting without further information, since it is impossible to know whether the low-density cluster is composed of anomalies or the anomalies are the unclustered low-density points (or a combination of both). If we used a high-capacity model to model the data distribution in FIG1, the low-density points (Right) would be detected as anomalous. If we used a low-capacity model, the cluster (Center) would probably receive a higher anomaly score.
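To make this unidentifiability concrete, the following small numerical sketch (not taken from the paper; the densities, the grid and the anomaly rate are illustrative choices) constructs two different (p_normal, p_anom) pairs that produce exactly the same observable mixture p_full at a small λ:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 3, 1201)   # densities treated on a truncated interval for simplicity
lam = 0.01                     # anomaly rate

# Scenario A: anomalies form a tight cluster around x = 2.5.
p_normal_a = norm.pdf(x, loc=0.0, scale=1.0)
p_anom_a = norm.pdf(x, loc=2.5, scale=0.1)
p_full = (1 - lam) * p_normal_a + lam * p_anom_a

# Scenario B: anomalies are diffuse (uniform on [-3, 3]); choose the normal
# component so that the observable mixture is exactly the same.
p_anom_b = np.full_like(x, 1.0 / 6.0)
p_normal_b = (p_full - lam * p_anom_b) / (1 - lam)

p_full_b = (1 - lam) * p_normal_b + lam * p_anom_b
print(np.max(np.abs(p_full - p_full_b)))   # ~0: the two mixtures coincide
print(np.all(p_normal_b > 0))              # True: scenario B uses a valid density
# Both scenarios explain the same p_full, yet they disagree completely on
# which points should be called anomalous.
```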
Our choice of algorithm implicitly imposes a prior on the detected anomalies. Theorem 1 highlights this and makes the need to consider priors explicit. In a more practical example, assume we are working with clinical data. In this setting, some low-density clusters may indicate diseases (anomalies), while other low-density clusters may be caused by uncontrolled factors in the data, such as high-performance athletes. At the same time, rare diseases might appear as scattered (low-density) points. We want to be able to distinguish between anomalies and 'uninteresting' low-probability points.

The usual strategy when working with unsupervised anomaly detection problems is to train a parameterized model p_θ(x) to capture the full data distribution p_full(x) (e.g. a PCA or an AutoEncoder) and, since λ is by definition a small constant, to assume p_full(x) ≈ p_normal(x) and treat points with low probability as anomalous. An anomaly score s(x) is then defined as s(x) = 1/p_θ(x). There are three main problems with this strategy: (1) if anomalous items are more common than expected, p_full might be a poor approximation of p_normal; (2) if anomalous items are tightly clustered in some way, high-capacity models may learn to identify that cluster as a high-probability region; and (3) if anomalous items are as rare as expected, since we only have access to p_full, Theorem 1 states that we have no information about p_anom without further assumptions on its probability distribution.

Most unsupervised anomaly detection systems already rely on further verification of the results by human experts, due to their uncertain performance, being mostly used as ranking systems that place the most suspicious instances at the top of a 'list' to be further audited by these experts. From Theorem 1, we conclude that it is impossible to have a universal and reliable unsupervised anomaly detection system, while we know that most such systems already rely on the data being later audited by human experts. Together, these arguments speak in favor of an active learning strategy for anomaly detection, including the auditing experts in the system's training loop, thus anticipating feedback and benefiting from it to find further anomalous instances, which results in a more robust system. Having an extremely unbalanced dataset in this problem (λ ≈ 0) is another justification for an active learning setting, which has the potential of requiring exponentially less labeled data than supervised settings.

3.1 ACTIVE ANOMALY DETECTION
With these motivations, we argue in favor of active anomaly detection methods, which, despite their many advantages, remain an under-explored approach to this problem. Nonetheless, recent work has shown promising results. In unsupervised anomaly detection, we start with a dataset D = {x | x ∼ p_full(x)} and want to rank elements in this dataset so that we have the highest possible recall/precision for a certain budget b, which is the number of elements selected to be audited by an expert, with no prior information on anomaly labels. In active anomaly detection, we also start with a completely unlabeled anomaly detection dataset D = {x | x ∼ p_full(x)}, but instead of ranking anomalies and sending them all to be audited at once by our expert, we select them in small batches, waiting for the expert's feedback before continuing. We iteratively select the k most probable elements to be audited (k ≤ b), wait for the expert to provide their labels, and continue training our system using this information, as shown in Algorithm 1.
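The following is a minimal sketch of this selection loop (not the authors' implementation; Algorithm 1 below gives the formal version). The `model` and `expert` objects, together with the `anomaly_score`, `audit` and `fit` methods, are hypothetical placeholders standing in for the deep anomaly detector, the human auditor and one round of training with the collected labels.

```python
import numpy as np

def active_anomaly_detection(model, expert, X, budget, k=10):
    """Iteratively audit the k most anomalous unlabeled points, collect the
    expert's labels, and retrain the detector on the data plus labels."""
    labels = {}                                   # index -> 0/1 expert label
    while len(labels) < budget:
        scores = model.anomaly_score(X)           # higher = more anomalous
        scores[list(labels)] = -np.inf            # never re-audit a point
        top_k = np.argsort(-scores)[:k]           # k most suspicious points
        for i in top_k:
            labels[i] = expert.audit(X[i])        # 1 if truly anomalous
        model.fit(X, labels)                      # retrain with feedback
    return labels
```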
This requires the same budget b as an unsupervised anomaly detection system, while having the potential of achieving much better performance.

Algorithm 1: Active Anomaly Detection
while budget b not exhausted do
    top_k ← select_top(model, D, k)
    labels ← labels ∪ expert.audit(top_k)
    model ← update(model, D, labels)

With this in mind, we develop the Universal Anomaly Inference (UAI) layer. This layer can be incorporated on top of any deep-learning-based white-box anomaly detection system which provides an anomaly score for ranking anomalies. It takes as input both a latent representation (l(x)), created by the model, and its output anomaly score (s(x)), and passes them through a classifier to estimate an item's anomaly probability:

uai(x) = C([l(x); s(x)]),

where C is a classifier and [·;·] denotes concatenation. This is motivated by recent works stating that learned representations have a simpler statistical structure, which makes the task of modeling this manifold and detecting unnatural points much simpler. In this work, we model the UAI layer using a simple logistic regression as our classifier, but any architecture could be used here:

uai(x) = σ(W_act [l(x); s(x)] + b_act),

where W_act ∈ R^{1×(d+1)} is a linear transformation, b_act ∈ R is a bias term and σ(·) is the sigmoid function. We learn the values of W_act and b_act using back-propagation with a cross-entropy loss function, while allowing the gradients to flow through l, but not through s, since s might be non-differentiable. For the rest of this document, we refer to networks with a UAI layer as UaiNets. An example of this architecture is shown in FIG0.

In this section, we test our new UAI layer on top of two distinct architectures: a Denoising AutoEncoder (DAE, with s_dae(x) = ||x − x̂||²₂) and a Classifier (Class, with s_class(x) = cross_entropy(x_y, x̂_y)), both of which use standard multi-layer perceptrons. Both architectures are described in detail in Appendix A.1. To test our algorithm, we start by analyzing its performance on synthetic data created with different properties (Section 4.1). We then present results using UaiNets on real anomaly detection datasets (Section 4.2) and in a semi-supervised setting (Section 4.3).

When designing experiments, our objective was to show that our model can work with different definitions of anomaly, while completely unsupervised models will need, by definition, to trade off accuracy in one setting for accuracy in the other. While this may seem straightforward, these experiments show how robust our approach is to the choice of underlying architecture, analyzing how well it does when the underlying architecture has a bad prior for a specific "type" of anomaly. With this in mind, we used the MNIST dataset and defined four sets of experiments:
1. MNIST 0: For the first set of experiments, we reduced the presence of the 0 digit class to only 10% of its original number of samples, making it only 1/91 ≈ 1.1% of the dataset samples. The 0s still present in the dataset had their class label randomly changed to x_y ∼ Uniform([1; 9]) and were defined as anomalies.
2. MNIST 0-2: The second set of experiments follows the same dataset construction, but we reduce the number of instances of the digits 0, 1 and 2, changing the labels of the remaining items in these categories to x_y ∼ Uniform([3; 9]) and again defining them as anomalous. In this dataset, anomalies composed 3/73 ≈ 4.1% of the dataset.
3. MNIST hard: The third set of experiments aims to test a different type of anomaly. To create this dataset, we first trained a weak one-hidden-layer MLP classifier on MNIST and selected all misclassified instances as anomalous, keeping them in the dataset with their original properties (x_x and x_y).
In this dataset anomalies composed ≈ 3.3% of the dataset. 4. MNIST pca: In this set of experiments, for each image class (x y), we used a PCA to reduce the dimensionality of MNIST images (x x) to 2 and selected the 5% instances with the largest reconstruction error as anomalies. We kept all 60,000 instances in the dataset with their original properties (x x and x y) and in this dataset anomalies composed 5% of the dataset. Results for these experiments are shown in FIG3 and the main taken from them is that, even though our algorithm might not get better than its underlying model for every budget-dataset pair, it is robust to different types of anomalies, which is not the case for the underlying completely unsupervised models. While Class gives really good in MNIST 0 and MNIST 0-2 datasets, it does not achieve the same performance in MNIST hard and MNIST pca, which might indicate it is better at finding clustered anomalies than low density ones. At the same time, DAE has good for MNIST pca and MNIST hard, but bad ones for MNIST 0 and MNIST 0-2, which indicates it is better at finding low density anomalies than clustered ones. Nevertheless, both UaiNets are robust in all four datasets, being able to learn even on datasets which are hard for their underlying models, although they might have a cold start to produce . Here we analyze our model's performance on public benchmarks composed of real anomaly detection datasets. We employ 11 datasets in our analysis: KDDCUP; Thyroid; Arrhythmia; KDDCUP-Rev; Yeast; Abalone; Cardiotocography (CTG); Credit Card; Covtype; Mammography (MMG); Shuttle (; ; ;). We compare our algorithm against: DAE (TAB0 presents for these real datasets. In these experiments, DAGMM (clean) was trained on a semi-supervised anomaly detection setting, using clean datasets during training, DAGMM (dirty) and DAE were trained in an unsupervised setting, while LODA-AAD, Tree-AAD and DAE uai were trained in an active anomaly detection setting. We can clearly see from these that DAE produces fairly bad for all datasets analyzed here, nevertheless, even using a simple architecture as its underlying model, DAE uai produces similar or better to the best baselines on the 11 datasets, even when the baselines were trained in completely clean training sets. DAE uai also usually presents better than LODA-AAD and Tree-AAD, which are similarly trained in an active setting. One possible criticism to our method is that the importance of the proposed approach becomes more relevant the fewer the proportion of anomalous instances, which seems self-defeating. But we see that the largest difference from the active methods to the other algorithms was in Covtype, which has less than 1% anomalies but 286,048 instances. When working with large datasets (>1M instances), even if only 0.1% of the dataset is contaminated there is still the chance to benefit from this feedback to improve performance. The active algorithms are also more robust than the others, DAGMM used different hyperparameters for each experiment, while DAE uai and AAD use the same for all (except for k which was reduced from 10 to 3 for the datasets with less than 100 anomalies). Another practical scenario where our model could be applied is a mixture of semi-supervised and unsupervised anomaly detection. In this case, we have a dataset which contains anomalies that we want to find and audit. 
At the same time, new data instances, which may include new types of anomalies not seen before, can be added to the dataset at any time and we would like to detect anomalies in this dataset as well. DISPLAYFORM0 With this in mind, we ran an experiment training DAE uai and LODA-AAD on KDDCUP-Rev in the same way as in Section 4.2, while evaluating it on its test set for different budgets. This test set contains 20 new types of anomalies (the train set contains 16 types of anomalies and the test set 36). The evaluation was done by selecting the most anomalous instances found by each model on the test set and calculating the recall for both seen and unseen anomalies in that group. Results for this experiment can be seen in FIG4. In this figure, the right y axis shows the number of anomalies detected in the training set for a certain budget and corresponds to the light blue lines. The left y axis present the recall for the test dataset. We see that DAGMM is not so effective on this test set, while DAE is able to detect well novelty (new classes). We also see that DAE uai is significantly better at detecting known types of anomalies, while it maintains a recall close to the best on new unseen classes, giving better than LODA-AAD for both seen and unseen classes of anomalies. Anomaly Detection This field has been amply studied and good overviews can be found in . Although many algorithms have been recently proposed, classical methods for outlier detection, like LOF Breunig et al. FORMULA6 and OC-SVM (Schölkopf et al., 2001), are still used and produce good . Recent work on anomaly detection has focused on statistical properties of "normal" data to identify these anomalies, such as , which uses Benford's Law to identify anomalies in social networks, and , which uses Extreme Value Theory to detect anomalies. Other works focus on specific types of data, focuses on spatially contextualized data, while (; ; ;) focus on graph data. Recently, energy based models and GANs have been successfully used to detect anomalies, but autoencoders are still more popular in this field. propose a method to train robust autoencoders, drawing inspiration from robust statistics and more specifically robust PCAs, focuses on clustering, and trains autoencoders that generate latent representations which are friendly for k-means. The work most similar to ours is DAGMM , where they train a deep autoencoder and use its latent representations, together with its reconstruction error, as input to a second network, which they use to predict the membership of each data instance to a mixture of gaussian models, training the whole model end-to-end in an semi-supervised manner for novelty detection. Active Anomaly Detection Despite its many advantages, active anomaly detection remains an under-explored approach to this problem, nevertheless, over the years some really interesting work has been developed in this topic. In , the authors solve the rare-category detection problem by proposing an active learning strategy to datasets with extremely skewed distributions of class sizes. reduces outlier detection to classification using artificially generated examples that play the role of potential outliers and then applies a selective sampling mechanism based on active learning to the reduced classification problem. 
In (Görnitz et al., 2013), the authors proposed a Semi-Supervised Anomaly Detection (SSAD) method based in Support Vector Data Description (SVDD) , which he expanded to a semi-supervised setting, where he accounts for the presence of labels for some anomalous instances, and with an active learning approach to select these instances to label. propose an active approach that combines unsupervised and supervised learning to select items to be labeled by experts, with each approach selecting k 2 instances at a time. The most similar prior works to ours in this setting are , which proposed an algorithm that can be employed on top of any ensemble methods based on random projections, and , which expands Isolation Forests to work in an active setting. Our work differs from these prior works mainly in that we prove the necessity of priors for unsupervised anomaly detection, further motivating the Active Anomaly Detection framework, and in our proposed model. UAI layers can be assembled on top of any Deep Learning based anomaly detection architecture, which is the state of the art for unsupervised anomaly detection, to make it work in an active anomaly detection setting. Besides, after each iteration with experts both LODA-AAD and Tree-AAD have a time complexity O(t), where t is the number of already labeled instances, while each iteration of UaiNets runs in constant time O with respect to t. We proposed here a new architecture, Universal Anomaly Inference (UAI), which can be applied on top of any deep learning based anomaly detection architecture. We show that, even on top of very simple architectures, like a DAE, UaiNets can produce similar/better to state-of-the-art unsupervised/semi-supervised anomaly detection methods. We also give both theoretical and practical arguments motivating active anomaly detection, arguing that, in most practical settings, there would be no detriment to using this instead of a fully unsupervised approach. We further want to make clear that we are not stating our method is better than our semi-supervised baselines (DAGMM, DCN, DSEBM-e). Our contributions are orthogonal to theirs. We propose a new approach to this hard problem which can be built on top of them, this being our main contribution in this work. To the best of our knowledge, this is the first work which applies deep learning to active anomaly detection. We use the strongest points of these deep learning algorithms (their learned representations and anomaly scores) to build an active algorithm, presenting an end-to-end architecture which learns representations by leveraging both the full dataset and the already labeled instances. Important future directions for this work are using the UAI layers confidence in its output to dynamically choose between either directly using its scores, or using the underlying unsupervised model's anomaly score to choose which instances to audit next. Another future direction would be testing new architectures for UAI layers, in this work we restricted all our analysis to simple logistic regression. A third important future work would be analyzing the robustness of UaiNets to mistakes being made by the labeling experts. Finally, making this model more interpretable, so that auditors could focus on a few "important" features when labeling anomalous instances, could increase labeling speed and make their work easier. In this section we give detailed descriptions of the experiments. 
Section A.1 presents the used model architectures for both DAE and Class models, as well as DAE uai and Class uai. Section A.2 presents details on the synthetic MNIST datasets and on the hyper-parameters used for the experiments. Finally, Section A.3 contains detailed descriptions on the used datasets, baselines and experimental settings for the experiments on real anomaly detection datasets. To show our algorithm can be assembled on top of any deep learning model, we tested it using two simple but very different anomaly detection models. The first model we test it on top of is a normal Denoising AutoEncoder (DAE). A DAE is a neural network mainly composed by an encoder, which transforms the input into a latent space, and a decoder, which reconstructs the input using this latent representation, typically having a loss function that minimizes the reconstruction error L 2 norm: DISPLAYFORM0 where both f enc and f dec are usually feed forward networks with the same number of layers, l ∈ R d is a d-dimensional latent representation and is a zero mean noise, sampled from a Gaussian distribution with a ϕ standard deviation. When used in anomaly detection, the reconstruction error is usually used as an approximation of the inverse of an item's probability, and as its anomaly score: DISPLAYFORM1 We then create a DAE uai network by assembling the proposed UAI layer on top of the DAE: DISPLAYFORM2 where uai(·) is the classifier chosen for the UAI layer. This architecture can be seen in FIG0. Another typical approach to unsupervised anomaly detection is, when given a dataset with labeled data X = (x x, x y), training a classifier (Class) to predict x y from x x 5 and using the cross-entropy of an item as an approximation to the inverse of its probability distribution: DISPLAYFORM3 where f class (·) is typically a feed forward neural network with p layers, from which we can use its last hidden layer (h p−1) as the data's latent representation to be used in the Class uai. DISPLAYFORM4 This architecture can be seen in FIG5. For all experiments in this work, unless otherwise stated, the DAE's encoder and decoder had independent weights and we used both the DAE and Class models with 3 hidden layers and hidden sizes. This means the latent representations provided to the UAI layers are l ∈ R 8. We implemented all experiments using TensorFlow , and used a learning rate of 0.01, batch size of 256 and the RMSprop optimizer with the default hyper-parameters. For the active learning models, we pre-train the DAE/Class model for 5000 optimization steps, select k = 10 items to be labeled at a time, and further train for 100 iterations after each labeling call. To deal with the cold start problem, for the first 10 calls of select_top, we use the base anomaly score (s) of the DAE/Class model to make this selection, using the UAI one for all later labeling decisions. Detailed statistics on the synthetic MNIST datasets can be seen in TAB2. MNIST 0 and MNIST 0-2 were mainly generated with the purpose of simulating the situation in FIG1 (Center), where anomalies were present in sparse clusters. At the same time, MNIST hard and MNIST pca were designed to present similar characteristics to the situation in FIG1 (Right), where anomalous instances are in sparse regions of the data space. 
For these experiments, most datasets were used as suggested in , but we processed the KDDCUP, Thyroid, Arrhythmia and KDDCUP-Rev datasets in the same manner as to be able to better compare with their :• KDDCUP : The KDDCUP99 10 percent dataset from the UCI repository. Since it contains only 20% of instances labeled as "normal" and the rest as "attacks", "normal" instances are used as anomalies, since they are in a minority group. This dataset contains 34 continuous features and 7 categorical ones. We transform these 7 categorical features into their one hot representations, and obtain a dataset with 120 features.• Thyroid : A dataset containing data from patients which can be divided in three classes: normal (not hypothyroid), hyperfunction and subnormal functioning. In this dataset, we treat the hyperfunction class as an anomaly, with the other two being treated as normal. It can be obtained from the ODDS repository. • Arrhythmia : This dataset was designed to create classification algorithms to distinguish between the presence and absence of cardiac arrhythmia. In it, we use the smallest classes (3, 4, 5, 7, 8, 9, 14, and 15) as anomalies and the others are treated as normal. This dataset can also be obtained from the ODDS repository.• KDDCUP-Rev : Since "normal" instances are a minority in the KDDCUP dataset, we keep all "normal" instances and randomly draw "attack" instances so that they compose 20% of the dataset. We compare our algorithm against: et al., 2008): Denoising Autoencoders are autoencoder architectures which are trained to reconstruct instances from noisy inputs. DISPLAYFORM0 • DAGMM : Deep Autoencoding Gaussian Mixture Model is a stateof-the-art model for semi-supervised anomaly detection which simultaneously learns a latent representation, using deep autoencoders, and uses both this latent representation and the autoencoder's reconstruction error to learn a Gaussian Mixture Model for the data distribution.• LODA-AAD : Lightweight on-line detector of anomalies (LODA) Active Anomaly Discovery (AAD) is a work which uses the active anomaly detection framework on top of LODA (Pevnỳ, 2016), which is a method based on ensembles of weak anomaly detection models.• Tree-AAD : This work learns weights for each node in an Isolation Forest anomaly detection model, by incorporating knowledge gained through active anomaly detection. Since there is no validation/test set in unsupervised anomaly detection, we cannot tune hyperparameters on a validation set. Because of this, to make the DAE baselines more competitive, we got the for several different hyper-parameter configurations and present only the best among them. This is not a realistic approach, but we only do it to our baselines, while for our proposed algorithm we keep hyper-parameters fixed for all experiments. We even keep our hidden sizes fixed to on thyroid, which only contains 6 features per instance, since our objective here is not getting the best possible , but showing the robustness of our approach. The only hyper-parameter change we make in UAI networks is that, since there are fewer anomalies in some datasets, we set our active learning approach to choose k = 3 instances at a time, instead of 10, for datasets with less than 100 anomalies. Results for DAGMM are from our implementation of this model and follow the same procedures, architectures and hyper-parameters as described in , being trained in a semisupervised setting. 
The for LODA-AAD and Tree-AAD were run using the code made available by the authors and with the same steps as DAE uai. 7 For all experiments, for LODA-AAD, Tree-AAD, DAE and DAE uai used the number of anomalies in the dataset as the budget b. In this section, we present more detailed for both the synthetic (Section B.1) and real (Section B.2) anomaly detection datasets, which couldn't fit on the main paper due to lack of space. We also present for synthetic anomaly detection experiments on Fashion-MNIST (Section B.3). We present here detailed for small budgets (b ≤ 5000) on the MNIST experiments, with graphs zoomed in for these budget values. Analyzing FIG6 we see that for some of these datasets UaiNets present a cold start, producing worse for small budgets. Nonetheless, after this cold start, they produce better in all MNIST experiments. An interesting future work would be to measure the confidence in the UaiNet's prediction to dynamically choose between using its anomaly score or the underlying network's one, which could solve/reduce this cold start problem. TAB3 presents a detailed comparison for experiments ran on KDDCUP, Thyroid, Arrhythmia and KDDCUP-Rev datasets with other baselines, also showing precision, recall and their standard deviations. In this table we also compare our to: • OC-SVM : One-class support vector machines are a popular kernel based anomaly detection method. In this work, we employ it with a Radial Basis Function (RBF) kernel.• DCN : Deep Clustering Network is a state-of-the-art clustering algorithm. Its architecture is designed to learn a latent representation using deep autoencoders which is easily separable when using k-means.• PAE : Denoising AutoEncoders pretrained as suggested in .• DSEBM-e : Deep Structured Energy Based Models are anomaly detection systems based on energy based models , which are a powerful tool for density estimation. We compare here against DSEBM-e, which uses a data instance's energy as the criterion to detect anomalies.• DSEBM-r : Deep Structured Energy Based Model with the same architecture and training procedures as DSEBM-e, but using an instance's reconstruction error as the criterion for anomaly detection. The presented here are averages of five runs, with standard deviations in parenthesis. In this table, for OC-SVM, PAE, DSEBM-r, DSEBM-e, DCN and DAGMM were taken from , while DAGMM * are from our implementation of DAGMM. Unfortunately, we were not able to reproduce their in the Thyroid dataset, getting a high variance in the . LODA-AAD does not scale well to large datasets, so to run it on KDDCUP and KDDCUP-Rev we needed to limit its memory about the anomalies it had already learned, forgetting the oldest ones. This reduced its runtime complexity from O(b 2) to O(b) in our tests, where b is the budget limit for the anomaly detection task. We did the same (limit memory) for Tree-AAD on KDDCUP.On this table we can see that DAE uai produces better than LODA-AAD on all analyzed datasets and than Tree-AAD on three out of four. Our proposed method also, besides presenting comparable to state-of-the-art DAGMM trained on a clean dataset, is much more stable, having a lower standard deviation than the baselines in almost all datasets. In this Section, we present for experiments on synthetic anomaly detection datasets based on Fashion-MNIST . To create these datasets we follow the same procedures as done for MNIST in Section 4.1, generating four datasets: Fashion-MNIST 0; Fashion-MNIST 0-2; Fashion-MNIST hard; Fashion-MNIST pca. 
Detailed statistics of these datasets can be seen in TAB4. We run experiments on these datasets following the exact same procedures as in Section 4.1. FIG8 shows the for Fashion-MNIST 0 and Fashion-MNIST 0-2, while Figure 8 show the for Fashion-MNIST hard and Fashion-MNIST pca. These figures show similar trends to the ones for MNIST, although algorithms find anomalies in these datasets harder to identify. In one run of Fashion-MNIST 0, DAE uai needed several examples to start learning and for Fashion-MNIST hard, Class uai takes a long time to start producing better than Class. Nevertheless, UaiNets are still much more robust than the underlying networks to different types of anomalies, producing good in all four datasets, even when its underlying network gives weak on that dataset. In this section we further study UaiNets, analyzing the evolution of hidden representations and anomaly scores through training (Section C.1), and the dependence of on the number of audited anomalies (Section C.2). In this section, we show visualizations of the learned representations (l dae/class) and anomaly scores (s dae/class) of UaiNets' underlying networks, presenting their evolution as more labels are fed into the network through the active learning process. With this purpose, we retrain UaiNets on both MNIST 0-2 and MNIST hard, with a hidden size of, so that its latent representation is one dimensional (l(x) ∈ R 1 ), and plot these representations vs the anomaly scores (s) of the base network (either DAE or Class) for different budgets (b). Figure 9 shows the evolution of DAE uai's underlying l dae (x) and s dae (x). In it, we can see that initially (Figures 9 (a, d) ) anomalies and normal data instances are not separable in this space. Nevertheless, with only a few labeled instances (b = 250) the space becomes much easier to separate, while for b = 2000 the space is almost perfectly linearly separable. 8 FIG0 shows the same evolution for Class uai's underlying l class (x) and s class (x). In it, we can also see the same patterns, as initially anomalies and normal data instances are not separable, but with a few labeled instances anomalies become much more identifiable. The main taken from these visualizations is how the gradient flow through l is important, since it helps the network better separate data in these spaces, allowing good performance even when the underlying networks are not good at identifying a specific type of anomaly. This experiments aim at showing how the networks choice quality evolves with the access to more labels. Here, we present the choices DAE uai network would make having access to a fixed number of expert labels. With this in mind, we train the networks in the same way as in Section 4.2, but stop after reaching a specific budget (b), showing the choices made up to that point, and after that with no further training. FIG0 shows the evolution of DAE uai anomaly choices as it is fed more expert knowledge. We can see that with only a few labels it already fairs a lot better than its underlying network. In KDDCUP with only 3,000 labeled instances, which is less than 1% of the dataset, it can correctly find 80,000 anomalies with a high precision, while the DAE with no expert knowledge does a lot worse. On Thyroid and KDDCUP-Rev, with ≈ 10% of the dataset labeled (b = 531 and b = 4000, respectively) it finds all or almost all anomalies in the dataset correctly. 
The Arrhythmia dataset is a lot smaller and with few anomalies, so DAE uai improves on DAE in a smaller scale here, but it still does fairly better than the underlying network. Gifs showing this choice evolution will be made available with the final publication. DISPLAYFORM0 Figure 9: (Color online) Underlying latent representations (l dae) vs anomaly score (s dae) for DAE uai network as training progresses on MNIST 0-2 and MNIST hard. DISPLAYFORM1 Figure 10: (Color online) Underlying latent representations (l class) vs anomaly score (s class) for Class uai network as training progresses on MNIST 0-2 and MNIST hard.so its probability distribution is: where P is the hyperspace containing all probability distributions, with an hyper-volume m. Now we can try to find p(p 1 |p + = p): DISPLAYFORM2 DISPLAYFORM3 where Equality and from the fact that p 1 ∈ P 1 ⇔ p 2 ∈ P 2, given a specific value of p + = p. This completes this proof. D.2 LEMMA 2. EXTREME MIXTURES LEMMA Lemma 2. Extreme mixtures lemma. Consider two independent arbitrary probability distributions p 1 and p 2. Given only a third probability distribution p + = p composed of the weighted mixture of the two, and for a small λ ≈ 0, we can find a small residual hyperplane P 1, which tends to {p}. DISPLAYFORM4 We can also find a very large residual hyperplane P 2 for p 2, which tends to: DISPLAYFORM5 where supp(·) is the support of a probability distribution. Proof. In this proof, we start with the arbitrary residual hyperplanes P r and find restrictions in the limits of λ → 0 and λ → 1. For a β ≈ 0:lim β→0 P r = lim β→0 {p r = p−β·p 1−β, ∀p ∈ P | β · p ≤ p} = lim β→0 {p r = p − β · p, ∀p ∈ P | β · p ≤ p} = {p} P r ≈ {p r = p − β · p, ∀p ∈ P | β · p ≤ p} β ≈ 0 P 1 ≈ {p r = p − λ · p, ∀p ∈ P | λ · p ≤ p} λ ≈ 0 P 2 ≈ {p r = p − (1 − λ) · p, ∀p ∈ P | (1 − λ) · p ≤ p} λ ≈ 1 ∴ β ≈ 0 For a β ≈ 1 we start with the other definition of P r: lim β→1 P r = lim β→1 {p r, ∀p r ∈ P | (1 − β) · p r ≤ p} = lim β→1 {p r, ∀p r ∈ P | supp(p r) ⊆ supp(p), (1 − β) · p r ≤ p} = {p r, ∀p r ∈ P | supp(p r) ⊆ supp(p)} P r ≈ {p r, ∀p r ∈ P | supp(p r) ⊆ supp(p)} β ≈ 1 P 1 ≈ {p r, ∀p r ∈ P | supp(p r) ⊆ supp(p)} λ ≈ 1 P 2 ≈ {p r, ∀p r ∈ P | supp(p r) ⊆ supp(p)} λ ≈ 0 ∴ β ≈ 1This finishes this proof. D.3 THEOREM 1. NO FREE ANOMALY THEOREM Theorem 1. Consider two independent arbitrary probability distributions p normal and p anom. For a small number of anomalies λ ≈ 0, the knowledge that p full = p gives us no further knowledge on the distribution of p anom: DISPLAYFORM6 Proof. Consider in Lemmas 1 and 2 that p 2 = p anom ∼ Uniform(P). We then have that, for a small value of λ ≈ 0: DISPLAYFORM7 This finishes this proof. In this section, we prove upper and lower bounds on the maximum distance a probability distribution p 1 can be from p +, based on the value of λ. This can be directly applied to p normal for small values of λ and to p anom for large ones. Theorem 2. Upper Bound on Mixture Probability Distance For two independent arbitrary probability distributions p 1 and p 2, given only a third probability distribution p + composed of the weighted mixture of the two: DISPLAYFORM0 We have an upper bound on the distance measures δ(p +, p 1) and ||p + − p 1 || given by:δ(p +, p 1) ≤ 1 2 log 1 1 − λ ||p + − p 1 || ≤ 2 log 1 1 − λ which is a tight bound for λ ≈ 0. In this equation δ(·) is the total variation distance between two probability distributions and || · || is the L 1 norm. Proof. 
Pinsker's inequality states that if p and q are two probability distributions on a common measurable space (A, (1−λ)·p1(x)+λ·p2(x) dx where this maximum Kullback-Leibler divergence is achieved when p 1 and p 2 are disjoint probability distributions: DISPLAYFORM1 (1−λ)·p1(x)+λ·p2(x) dx ≤ x p 1 (x) log p1(x)(1−λ)·p1(x) dx = x p 1 (x) log δ(p +, p 1) ≤ 1 2 log 1 1 − λ ||p + − p 1 || ≤ 2 log 1 1 − λ Theorem 3. Lower Bound on Maximum Mixture Probability Distance For two independent arbitrary probability distributions p 1 and p 2, given only a third probability distribution p + composed of the weighted mixture of the two: DISPLAYFORM2 We have a lower bound on the maximum possible distance measures δ(p +, p 1) and ||p + − p 1 || for a chosen maximizing p 1 given by: DISPLAYFORM3 which is a tight bound for λ ≈ 1, considering the maximum L 1 distance between two probability distributions is 2.Proof. We can prove a lower bound on the maximized distance of a probability distribution p 1 from p + by expanding the distance equations: where in (a) we lower bound based on the probability distribution that would have the smallest possible superior distance to a later maximized probability distribution p 1. This probability distribution p 1 can always maximize its superior distance to p 2 by: DISPLAYFORM4 DISPLAYFORM5 In (b) we choose the uniform distribution as the one that would reduce this superior distance and in (c) we set p 1 (a) = 1 for a random a, since p 2 is uniform. With a similar strategy we find: DISPLAYFORM6 This concludes this proof.
In this paper, we ask for the main factors that determine a classifier's decision making and uncover such factors by studying latent codes produced by auto-encoding frameworks. To deliver an explanation of a classifier's behaviour, we propose a method that provides series of examples highlighting semantic differences between the classifier's decisions. We generate these examples through interpolations in latent space. We introduce and formalize the notion of a semantic stochastic path, as a suitable stochastic process defined in feature space via latent code interpolations. We then introduce the concept of semantic Lagrangians as a way to incorporate the desired classifier's behaviour and find that the solution of the associated variational problem allows for highlighting differences in the classifier decision. Very importantly, within our framework the classifier is used as a black-box, and only its evaluation is required. A considerable drawback of the deep classification paradigm is its inability to provide explanations as to why a particular model arrives at a decision. This black-box nature of deep systems is one of the main reasons why practitioners often hesitate to incorporate deep learning solutions in application areas, where legal or regulatory requirements demand decision-making processes to be transparent. A state-of-the-art approach to explain misclassification is saliency maps, which can reveal the sensitivity of a classifier to its inputs. Recent work , however, indicates that such methods can be misleading since their are at times independent of the model, and therefore do not provide explanations for its decisions. The failure to correctly provide explanations by some of these methods lies in their sensibility to feature space changes, i.e. saliency maps do not leverage higher semantic representations of the data. This motivates us to provide explanations that exploit the semantic content of the data and its relationship with the classifier. Thus we are concerned with the question: can one find semantic differences which characterize a classifier's decision? In this work we propose a formalism that differs from saliency maps. Instead of characterizing particular data points, we aim at generating a set of examples which highlight differences in the decision of a black-box model. Let us consider the task of image classification and assume a misclassification has taken place. Imagine, for example, that a female individual was mistakenly classified as male, or a smiling face was classified as not smiling. Our main idea is to articulate explanations for such misclassifications through sets of semantically-connected examples which link the misclassified image with a correctly classified one. In other words, starting with the misclassified point, we change its features in a suitable way until we arrive at the correctly classified image. Tracking the black-box output probability while changing these features can help articulate the reasons why the misclassification happened in the first place. Now, how does one generate such a set of semantically-connected examples? Here we propose a solution based on a variational auto-encoder framework. We use interpolations in latent space to generate a set of examples in feature space connecting the misclassified and the correctly classified points. We then condition the ing feature-space paths on the black-box classifier's decisions via a user-defined functional. 
Optimizing the latter over the space of paths allows us to find paths which highlight classification differences, e.g. paths along which the classifier's decision changes only once and as fast as possible. A basic outline of our approach is given in Fig. 1.

Figure 1: Auto-Encoding Examples setup. Given a misclassified point x_0 and representatives x_{−T}, x_T, we construct suitable interpolations (stochastic processes) by means of an auto-encoder. Sampling points along the interpolations produces a set of examples highlighting the classifier's decision making.

In what follows we introduce and formalize the notion of stochastic semantic paths, stochastic processes on feature (data) space created by decoding latent code interpolations. We formulate the corresponding path integral formalism, which allows for a Lagrangian formulation of the problem, viz. how to condition stochastic semantic paths on the output probabilities of black-box models, and we introduce an example Lagrangian which tracks the classifier's decision along the paths. We show the explanatory power of our approach on the MNIST and CelebA datasets.

We are concerned with the problem of explaining a particular decision of a black-box model. Many recent works discuss the role of explanations and provide definitions of them in the machine learning context. Here we follow this line of work and, in broad terms, by explaining we mean providing textual or visual artifacts that give a qualitative understanding of the relationship between the data points and the model prediction. Attempts to clarify such a broad notion of explanation require answers to questions such as (1) what were the main factors in a decision? and (2) would changing a certain factor have changed the decision? To provide an answer to such questions, one must be able to define a clear notion of factors. One can think of factors as the minimal set of coordinates that allows us to describe the data points. This definition mirrors the behavior and purpose of the variational auto-encoder (VAE) code: by training an auto-encoder one can find a code which describes a particular data point. Our role here is to provide a connection between these latent codes and the classifier's decision. Changes in the code should change the classification decision in a user-defined way. Defining such a code will allow us to formalize the framework required to provide an answer to questions (1) and (2) above. As in prior work, we require explanations to be model-agnostic, i.e. independent of the classifier's inner workings, interpretable, and expressing local fidelity.

Following the discussion above, we use the variational auto-encoder (VAE) formalism to introduce a notion of semantics useful to qualitatively explain the decisions of a black-box classifier. Let us denote the feature (data) space by X and the latent linear space of codes (describing the data) by Z, where usually dim(Z) ≪ dim(X). We consider a latent variable generative model whose distribution P_θ(X) on X is defined implicitly through a two-step generation process: one first samples a code Z from a fixed prior distribution P(Z) on Z and then (stochastically) maps Z to feature space through a (decoder) distribution P_θ(X|Z), the latter being parametrized by neural networks with parameters θ. This class of models is generically trained by minimizing specific distances between the empirical data distribution P_D(X) and the model distribution P_θ(X).
VAE approaches this problem by introducing an encoder distribution Q_φ(Z|X), parametrized by neural networks with parameters φ, which approximates the true posterior distribution P_θ(Z|X), and by minimizing a variational upper bound on the Kullback-Leibler divergence D_KL between P_θ(X) and P_D(X). This bound reads

L_VAE(θ, φ) = E_{P_D(X)} [ E_{Q_φ(Z|X)}[ −log p_θ(X|Z) ] + D_KL(Q_φ(Z|X) || P(Z)) ],   (1)

where p_θ(x|z) denotes the decoder's density and yields the likelihood function of the data given the code. Once the model is trained, one can think of the inferred latent code as containing some high-level description of the input data. Below we will use such inferred codes to modify, in a controlled way, the features of a given input data point.

We define a defendant black-box model b(l, x) as a classifier which yields the probability that the data point x ∈ X in feature (data) space belongs to the class l ∈ L, where L is a set of classes. Assume the model b(l, x) is expected by its users or clients to perform well on a dataset D = {(l_i, x_i)}, where x_i ∈ X and l_i ∈ L is the label that x_i belongs to. Suppose now that the following litigation case emerges. The black-box model b has assigned the data point x_0 to the class l_0. Accordingly, a plaintiff presents a complaint arguing that the point x_0 should have been classified as l_t. Furthermore, assume we are given two additional representative data points x_{−T}, x_T which have been correctly classified by the black-box model into the classes l_{−T}, l_T, respectively, as expected by e.g. the plaintiff, the defendant (if agreed), or the institution to which the complaint or litigation case is presented (say, the court). With this set-up in mind, we propose that an explanation of why x_0 was misclassified can be articulated through an example set E = {x_{−T}, ..., x_0, ..., x_T}, where x_t ∼ P_θ(X|Z = z_t). Here P_θ(X|Z = z_t) is a given decoder distribution and the index t runs over semantic changes (properly defined below) that highlight classification decisions. This example set constitutes the context revealing how factor changes impact the classification decision (see Section 2). One expects that human-oriented explanations are semantic in character: one can understand the expression "bigger eyes will change the classification", as opposed to changes in some specific pixels. The index t would run over these changes, e.g. over changes that make the eyes bigger.

In this section we first formalize the notion of semantic change by introducing the concept of (stochastic) semantic interpolations in feature space X. This will allow us to generate examples which provide local fidelity, as the examples are smooth modifications of the latent code associated to the plaintiff data point x_0. We then define a collection of probability measures over semantic paths in X. These measures will be used later in Section 6 to constrain the paths to be explanatory with respect to the classifier's decision. One of the main motivations behind the VAE formalism is the ability of the inferred latent code z to provide semantic, high-level information about the data set. If one is to generate examples which have characteristics common to two different data points, say x_0 and x_T, one can perform interpolations between the latent codes of these points, that is z_0 and z_T, and then decode the points along the interpolation. A main observation is that these interpolations in latent space can be used to induce certain interpolating stochastic processes on feature space X. We refer to these as stochastic semantic processes.
In what follows, we first focus on linear latent interpolations, i.e. and construct an interpolating stochastic semantic process X t on X by using the decoder distribution P θ (X|Z = z(t)). In practice, the generation process of such stochastic interpolations consists then of three steps: (i) sample Q φ (Z|X) at the end points x 0 and x T using the reparametrization trick , (ii) choose a set of points z t along the line connecting z 0 and z T and (iii) decode the z t by sampling P θ (X|Z = z t). A formal description of this procedure is given below, in subsection 5.2, and an impression of the stochastic process thus constructed is presented in Fig. 1b. We observe that for every sequence of points {t i} n i=0 there is a natural measure on piecewise linear paths starting at x 0 ∈ X and terminating at x T ∈ X. More precisely, we define the probability of a piecewise linear path x(t) with nodes x 1, x 2..., x n ∈ X as dP t0,...,tn (x(t)): where q φ, p θ label the densities of Q φ, P θ, respectively, and where z(t) is defined by eq. 5. In other words, for every pair of points x 0 and x T in feature space, and its corresponding code, the decoder P θ (X|Z) induces a measure over the space of paths {x(t)|x = x 0, x(T) = x T }. Formally speaking, the collection of measures dP t0,...,tn given by different choices of points {t i} n i=0 in defines a family of consistent measures (cf. Definition 2 in the Appendix, Subsection D.1). This implies that these different measures are assembled into a stochastic process on feature space X over the continuous interval [0, T]: Proposition 1. The collection of measures prescribed by induces a corresponding continuous-time stochastic process. Moreover, under appropriate reconstruction assumptions on the auto-encoder mappings P θ, Q φ, the sample paths are interpolations, that is, start and terminate respectively at x 0, x T almost surely. The statement goes along the lines of classical on existence of product measures. For the sake of completeness we provide all the necessary technical details in the Appendix, Subsection D. Another important remark is that the stochastic semantic process construction in Proposition 1 is just one way to define such a process -there are other natural options, e.g. in terms of explicit transition kernels or Itô processes. Having described a procedure to sample stochastic semantic processes in X, we need to discover autoencoding mappings (P θ, Q φ) that give rise to reasonable and interesting stochastic paths. Specifically, to generate examples which are able to explain the defendant black-box model b(l, x) in the current litigation case (Section 4), one needs to ensure that semantic paths between the data points x 0 and x T highlight classification differences, i.e. classifications of the model along this path are far apart in the plaintiff pair of labels. Thus, to design auto-encoding mappings P θ, Q φ accordingly, we propose an optimization problem of the form min where X t is a stochastic semantic process and S P θ,Q φ is an appropriately selected functional that extracts certain features of the black-box model b(l, x). The minimization problem can be seen in the context of Lagrangian mechanics. For a given stochastic semantic process X t, and given initial and final feature "states" x 0 and x T, we introduce the following function, named the model-b semantic Lagrangian which gives rise to the semantic model action: In mechanics, the optimization given by suitable Lagrangians delivers physically meaningful paths, e.g. 
those specified by the equations of motion . In our case, a guiding intuition is that the semantic Lagrangian should reflect how the black-box model takes decisions along the path 6 X t, starting at x 0 and ending at x T. In this way, the minimization of the semantic action (i.e. finding minimizing paths X t) should make such classification aspects prominent along the example set. Our problem, viz. to find encoding mappings P θ, Q φ which yield explainable semantic paths with respect to a black-box model, is then a constrain optimization problem whose total objective function we write as where L VAE is given by eq., S[x(t)] corresponds to the Lagrangian action and λ is an hyper parameter controlling the action' scale. The average over the paths is taken with respect to the stochastic paths and the corresponding measure dP [x(t)] from Proposition 1, that is, the path integral where x k t labels the tth point along the the kth path, sampled as described in Section 5, n is the number of points on each path, K is the total number of paths, and the estimator on the right hand side corresponds to an explicit average over paths 7. Algorithm 1: PATH Auto-Encoder 1 while φ and θ not converged do 2 Draw {x 1, ..., x n} from the training set Generate Latent Interpolations Sample k Paths in Feature Space Evaluate Semantic Action for each path k 12 and average over k Update P θ and Q φ by descending: In practice, both L VAE and the action term are optimized simultaneously. Note that the VAE loss function L VAE is trained on the entire data set on which the black-box performs. The action term, in contrast, only sees the x 0 and x T points. This can be seen explicitly in Algorithm 1, which shows an overview of the auto-encoder pair training algorithm. Let us finally note that, drawing analogies with the adversarial formalism , the defendant black-box model plays the role of a fixed discriminator, not guiding the example generation, but the interpolations among these examples. There are plenty of options for Lagrangian functionals that provide reasonable (stochastic) example-paths -roughly speaking, we attempt to define an objective value for a certain subjective notion of explanations. In what follows we illustrate one particular such Lagrangian 8 MINIMUM HESITANT PATH We want to find an example path such that the classifier's decisions along it changes as quickly as possible, as to highly certain regions in X. In Figure 2: Probability Paths for the litigation case l 0 = 2, l T = 7. Y axis corresponds to classification probability and x axis corresponds to interpolation index. Interpolation images for a specific paths are presented below the x axis. other words, the path is forced to stay in regions where the black-box produces decisions with maximum/minimum probability. An intuitive way to enforce this is via the simple Lagrangian where l 0, l T are the labels of the litigation case in question. Roughly speaking, given the appropriate initial conditions, the paths that minimize the action associated to L 1 are paths that attempt to keep L 1 close to 1 over almost the entire interpolation interval. Additionally we require b(l T, x(t)) to be a monotonous function along the interpolating path x(t). Furthermore, in accordance with Proposition 1 we require certain level of reconstruction at the end points. To enforce these conditions we introduce the regularizers r m, r e which are described in detail in subsection D.4 of the Appendix. 
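The three-step path construction and the Monte Carlo estimate of the semantic action can be sketched as follows. Here `encoder` and `decoder` are assumed to return torch distributions (as in the sketches above), and `lagrangian` is a user-supplied callable returning a scalar tensor for a decoded point, since its concrete form (for example the minimum hesitant Lagrangian discussed next) is problem dependent.

```python
import torch

def sample_semantic_paths(x0, xT, encoder, decoder, n_steps=20, n_paths=5):
    """
    Stochastic semantic paths between x0 and xT (each with a leading batch dim of 1):
      (i)   z_0 ~ Q_phi(Z|x0), z_T ~ Q_phi(Z|xT) via the reparametrization trick,
      (ii)  z_t on the straight line between z_0 and z_T,
      (iii) x_t ~ P_theta(X|Z = z_t) for every interpolation point.
    Returns a list of n_paths tensors, each of shape (n_steps, dim_x).
    """
    paths = []
    ts = torch.linspace(0.0, 1.0, n_steps).unsqueeze(1)       # (n_steps, 1)
    for _ in range(n_paths):
        z0 = encoder(x0).rsample()                            # (1, dim_z)
        zT = encoder(xT).rsample()
        zs = (1.0 - ts) * z0 + ts * zT                        # linear latent interpolation
        paths.append(decoder(zs).sample())                    # decode every z_t
    return paths

def estimate_action(paths, lagrangian):
    """
    Monte Carlo estimate of the semantic action: the per-point Lagrangian is
    averaged over all sampled points of all sampled paths, matching the explicit
    average over paths described in the text.
    """
    per_path = [torch.stack([lagrangian(x_t) for x_t in path]).mean() for path in paths]
    return torch.stack(per_path).mean()
```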
The total objective function is therefore where λ, λ m, λ e are hyper-parameters and S 1 is the action associated to the minimum hesitant Lagrangian L 1 in eq.. We evaluate our method in two real-world data sets: MNIST, consisting of 70k Handwriting digits, and CelebA , with roughly 203k images. We use a vanilla version of the VAE with Euclidean latent spaces Z = R dz and an isotropic Gaussian as a prior distribution P (Z) = N (Z|0, I dz). We used Gaussian encoders, i.e. Q φ (Z|X) = N (Z|µ φ (X), Σ φ (X)), where µ φ, σ φ are approximated with neural networks of parameters φ, and Bernoulli decoders P θ (X|Z). We compare the standard VAE, VAE-EDGE (VAE augmented with the edge loss r e) and PATH-VAE (our full model, eq.). The black-box classifier b(l, x) is defined as a deep network with convolutional layers and a final soft-max output layer for the labels. Details of the specifics of the architectures as well as training procedure are left to the Appendix. For MNIST we studied a litigation case wherein l −T, l T = 2, 7 and l 0 = 2, whereas its true label (i.e. that of x 0) is l t = 7 (see Section 4). The are presented in Fig. 2. VAE delivers interpolations which provide uninformative examples, i.e. the changes in the output probability b(l 0, x) cannot be associated with changes in feature space. In stark contrast, PATH-VAE causes the output probability to change abruptly. This fact, together with the corresponding generated examples, allows us to propose explanations of the form: what makes the black-box model classify an image in the path as two or seven, is the shifting up of the lower stroke in the digit two as to coincide with the middle bar of the digit seven. Similarly, the upper bar of the digit seven (especially the upper left part) has a significant decision weight. In order to provide a more quantitative analysis we demonstrate the capability of our methodology to control the path action while retaining the reconstruction capacity. Hereby, we use not only the VAE as the underlying generative model, but also Wasserstein Auto-Encoder (WAE) and Adversarial Auto-Encoder (AAE) , i.e. we simply change L VAE in eq. with the loss of WAE or AAE. The theoretical details and corresponding architectures are presented in the Appendix. We present, in Fig. 3, the action values defined over random litigation end pairs (x −T, x T). The PATH version of the model indeed yields lower action values. Furthermore, these models tend to reduce the variance within the different paths. This is expected since there is one path that minimizes the action, hence, the distribution will try to arrive at this specific path for all samples. In order to compare with other explanation models, we define a saliency map with the interpolations obtained in our methodology. We defined the interpolation saliency as the sum over the differences between interpolation images weighted with the probability change of the black-box classifier through the interpolation path. We see in Fig. 4 the comparisons among different methods. While the standard methods only show local contributions to a classification probability, our saliency maps show the minimum changes that one is to apply to the dubious image in order to change the decision to the desired label. Our approach reveals that the curvature of the lower bar is decisive to be classified as a two, while the style of the upper bar is important to be classified as a seven. 
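One literal reading of the interpolation saliency described above, namely differences between consecutive interpolation images weighted by the change in the black-box probability along the path, is sketched below; the exact weighting used in the paper may differ.

```python
import numpy as np

def interpolation_saliency(path, probs):
    """
    Saliency map built from an example path: sum the differences between
    consecutive interpolation images, each weighted by the change in the
    black-box classification probability across that step.

    path  : array of shape (n_steps, H, W)  -- decoded interpolation images
    probs : array of shape (n_steps,)       -- b(l, x_t) along the path
    """
    path = np.asarray(path, dtype=float)
    probs = np.asarray(probs, dtype=float)
    saliency = np.zeros_like(path[0])
    for t in range(len(path) - 1):
        saliency += np.abs(path[t + 1] - path[t]) * abs(probs[t + 1] - probs[t])
    return saliency
```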
Further, we provide a sanity check analysis by studying the rank correlation between original saliency map and the one obtained for a randomized layers of the black-box classifier, shown in Fig. 5. As desired, our proposed saliency map decorrelates with the randomized version. For the CelebA dataset we use a black-box classifier based on the ResNet18 architecture . We investigate two specific misclassifications. In the first case, a smile was not detected (Fig. 6 a). Here we only interpolate between the misclassified image (left) and a correctly classified one (right), of the same person. Interpolations obtained by VAE are not informative: specific changes in feature space corresponding to changes in the probability cannot be detected since the latter changes rather slowly over the example path. This observation also holds true for the VAE-EDGE model, except that the examples are sharper. Finally, our PATH-VAE model yields a sharp change in the probability along with a change of the visible teeth (compare the third and fifth picture in the example path), revealing that this feature (i.e. teeth visibility) could constitute, from a human standpoint, a decisive factor in the probability of detecting a smile for the given black-box model. It is important to note that these observations represent one of many possible path changes which could change the classifier decision. This is constrained by the current realization and representative end points. The important is that our methods are able to shape the behavior of the classifier along the path. Further experimental examples are provided in Section C of the Appendix. The bulk of the explanation literature for deep/black-box models relies on input dependent methodologies. Gradient Based approaches Figure 6: Probability Paths for the case of detecting a smile in images of celebrities. Y axis corresponds to classification probability and x axis corresponds to interpolation index. Interpolation images for a specific paths are presented below the x axis. The images are vertically aligned with a corresponding tick in the x-axis determining the interpolation index of the image score for a given input example and class label by computing the gradient of the classifier with respect to each input dimension. Generalizations of this approach address gradient saturation by incorporating gradients' values in the saliency map or integrating scaled versions of the input . Ad hoc modifications of the gradient explanation via selection of the required value , , as well as direct studies of final layers of the convolutions units of the classifiers , are also provided. In contrast to gradient based approaches, other categories of explanatory models rely on reference based approaches which modify certain inputs with uninformative reference values . Bayesian approaches treat inputs as hidden variables and marginalize over the distribution to obtain the saliency of the input . More recent generalizations exploit a variational Bernoulli distribution over the pixels values . Other successful methodologies include substitution of black-box model with locally interpretable linear classifiers. This is further extended to select examples from the data points in such a way that the latter reflect the most informative components in the linear explanations, . Studies of auto-encoder interpolations seek to guarantee reconstruction quality. In the authors characterize latent space distortions compared to the input space through a stochastic Riemannian metric. 
Other solutions include adversarial cost on the interpolations such as to improve interpolation quality compared to the reconstructions, . Examples which are able to deceive the classifier's decisions have been widely studied in the framework of adversarial examples . These methodologies, however, do not provide interpretable explanations or highlight any semantic differences that lead to the classifier's decisions. Finally, the Auto-Encoder framework can also naturally be seen as a tool for dimensionality reduction. Geometrically speaking, assuming that the data set approximately lies along a manifold embedded in feature space X, one can interpret the encoder, decoder as the coordinate map (chart) and its inverse. From this point of view, our approach above translates to finding coordinate charts with additional constraints on mapping the segments from z 0 to z T to appropriate (stochastic) curves between x 0 and x T. In the present work we provide a novel framework to explain black-box classifiers through examples obtained from deep generative models. To summarize, our formalism extends the auto-encoder framework by focusing on the interpolation paths in feature space. We train the auto-encoder, not only by guaranteeing reconstruction quality, but by imposing conditions on its interpolations. These conditions are such that information about the classification decisions of the model B is encoded in the example paths. Beyond the specific problem of generating explanatory examples, our work formalizes the notion of a stochastic process induced in feature space by latent code interpolations, as well as quantitative characterization of the interpolation through the semantic Lagrangian's and actions. Our methodology is not constrained to a specific Auto-Encoder framework provided that mild regularity conditions are guaranteed for the auto-encoder. There was no preprocessing on the 28x28 MNIST images. The models were trained with up to 100 epochs with mini-batches of size 32 -we remark that in most cases, however, acceptable convergence occurs much faster, e.g. requiring up to 15 epochs of training. Our choice of optimizer is Adam with learning rate α = 10 −3. The weight of the KL term of the VAE is λ kl = 1, the path loss weight is λ p = 10 3 and the edge loss weight is λ e = 10 −1. We estimate the path and edge loss during training by sampling 5 paths, each of those has 20 steps. Encoder Architecture Both the encoder and decoder used fully convolutional architectures with 3x3 convolutional filters with stride 2. Conv k denotes the convolution with k filters, FSConv k the fractional strides convolution with k filters (the first two of them doubling the resolution, the third one keeping it constant), BN denotes batch normalization, and as above ReLU the rectified linear units, FC k the fully connected layer to R k. The pre-processing of the CelebA images was done by first taking a 140x140 center crop and then resizing the image to 64x64. The models are trained with up to 100 epochs and with mini-batches of size 128. Our choice of optimizer is Adam with learning rate α = 10 −3. The weight of the KL term of the VAE is λ kl = 0.5, the path loss weight is λ p = 0.5 and the edge loss weight is λ e = 10 − 3. We estimate the path and edge loss during training by sampling 10 paths, each of those has 10 steps. Encoder Architecture Decoder Architecture Both the encoder and decoder used fully convolutional architectures with 3x3 convolutional filters with stride 2. 
Conv k denotes the convolution with k filters, FSConv k the fractional strides convolution with k filters (the first two of them doubling the resolution, the third one keeping it constant), BN denotes batch normalization, and as above ReLU the rectified linear units, FC k the fully connected layer to R k. C FURTHER Interpolation between 2 and 7. It is seen that the Path-VAE interpolation optimizes both probabilities (P and P) according to the chosen Lagrangian -in this case the minimum hesitant L 1. Briefly put, the construction we utilize makes use of the well-known notion of consistent measures, which are finite-dimensional projections that enjoy certain restriction compatibility; afterwards, we show existence by employing the central extension of Kolmogorov-Daniell. We start with a couple of notational remarks. Definition 1. Let S, F be two arbitrary sets. We denote that is, the set of all maps F → S. Definition 2. Let (S, B) be a measurable space and let G ⊆ F ⊆ [0, T] for some positive number T. We define the restriction projections π F,G by Moreover, for each F ⊆ [0, T] the restriction projections induce the σ-algebra B F which is the smallest σ-algebra on S F so that all projections are measurable. In particular, the projections π F,G are measurable with respect to B F, B G. } is called consistent if it is push-forward compatible with respect to the restriction projection mappings, i.e. be an arbitrary finite set. The mapping defines a consistent collection of finite measures. Proof. Let us fix Without loss of generality, it suffices to check consistency for the pair (F 1, F 2). We have where we have used L 1 -finiteness and integrated out the s variable via Fubini's theorem. Note also, that by the definitions above χ π for any fixed s ∈ X. We briefly recall the following classical due to Kolmogorov and Daniell: Theorem 1 (Theorem 2.11, Bär & Pfäffle ). Let (S, B(S)) be a measurable space with S being compact and metrizable and let I be an index set. Assume that for each J ∈ Fin(I) there exists a measure µ J on S J, B J, such that the following compatibility conditions hold: Here π J1: S J2 → S J1 denotes the canonical projection (obtained by restriction). Then, there exists a unique measure µ on (S I, B I) such that for all J ∈ Fin(I) one has We recall that a well-known way to construct the classical Wiener measure and Brownian motion is precisely via the aid of Theorem 1 . We are now in a position to construct the following stochastic process. Proposition 3. There exists a continuous-time stochastic process Moreover, for small positive numbers, δ we have X 0 ∈ B δ (x 0) with probability at least (1 −), provided the reconstruction error of encoding/decoding process is sufficiently small. In particular, if x 0 stays fixed after the application of encoder followed by decoder, then X 0 = x 0 almost surely. A similar statement holds also for the terminal point X t and x T respectively. Proof. By applying Theorem 1 to the collection of consistent finite measures prescribed by Proposition 2 we obtain a measure µ on the measurable space (S [0,T], B [0,T] ). Considering the probability space (S [0,T], B [0,T], µ) we define stochastic process It follows from the construction and the Theorem of Kolmogorov-Daniell that P ((X t1, X t2, . . ., X tn) ∈ A) is expressed in the required way. This shows the first claim of the statement. Now, considering a small ball B δ (x 0) we have Here, the function R(x *, U) measures the probability that the input x * is decoded in the set U. 
Thus, if the reconstruction error gets smaller, R converges to 1. This implies the second statement. Finally, if we assume that the auto-encoder fixes x 0 in the sense above, we similarly get D.2 CONCERNING THE REGULARITY OF SAMPLE PATHS An important remark related to the the variational problem is the following: one could develop plenty of meaningful functionals S P θ,Q φ that involve taking velocities or higher derivatives -thus one is supposed to work over spaces of curves with certain regularity assumptions. However, as stated above we are working over stochastic paths X t whose regularity is, in general, difficult to guarantee. A straightforward way to alleviate this issue is to consider a "smooth" version of the curve X t -e.g. by sampling X t through a decoder with controllable or negligible variance or by means of an appropriate smoothing. Furthermore, one could also approach such stochastic variational analysis via Malliavin calculus -however, we do not pursue this direction in the present work. We now briefly discuss a few remarks about the regularity of the stochastic semantic process from Proposition 1. First, we state a well-known of Kolmogorov and Chentsov: Theorem 2 (Theorem 2.17, Bär & Pfäffle ). Let (M, ρ) be a metric measure space and let X t, t ∈ [0, T] be a stochastic process. Suppose that there exists positive numbers a, b, C, with the property, ∀s, t, |s − t| < Then, there exists a version Y t, t ∈ [0, T] of the stochastic process X t whose paths are α-Hölder continuous for any α ∈ (0, b/a). Thus, roughly speaking, an estimate on E [ρ(X s, X t) a ] can be regarded as a measure of the extent to which Theorem 2 fails. To give an intuitive perspective, let us consider the stochastic process given by Proposition 1 and, considering only the points X s, X s+δ for a small positive number δ, let us write the expectation in as: where we have used the standard Euclidean distance. To estimate the integral further, let us for simplicity assume that the encoder is deterministic and the decoder is defined via a Gaussian Ansatz of the type µ(z) + σ(z) ⊗ for a normal Gaussian variable. Thus the last integral can be written as: where we denote the covariance matrix at time s by Σ s. Now, if Σ s+δ becomes sufficiently small as δ converges to 0, then the exponential factor will dominate and thus holds. In other words, Hölder regularity of the process is verified provided that p θ (x|z) becomes localized in x and converges to a Dirac measure (similarly to the case of the heat kernel propagator and Brownian motion). From this point of view, the variance of the decoder can be considered as an indicator of how far the stochastic process is from being Hölder continuous. Below we discuss two other stochastic process constructions, one of which is built upon Itô diffusion processes and enjoys further path-regularity properties. We briefly recall that, among other aspects, Lagrangian theory suggests a framework for optimization of functionals (Lagrangians) defined over appropriate function spaces. Critical points of Lagrangians are identified by means of the corresponding Euler-Lagrange equations . To obtain the Euler-Lagrange equations for the Lagrangians in we compute in a straightforward manner the first variation where φ: [0, T] → T X is a compactly supported deformation 9. This produces the following conditions: Assuming the Hessian is not identically vanishing along the curve, the critical points of the variational problem are given by the condition (∇B) (l T |x(t)) = αẋ(t). 
In addition to following the geometry of the black box B, one could also impose a natural condition that the stochastic paths minimize distances on the manifold in feature space that the auto-encoder pair induces. We recall from basic differential geometry that the image of the decoder as a subset of the feature space is a submanifold with a Riemannian metric g induced by the ambient Euclidean metric in the standard way (for we refer to do). In the simple case of a deterministic auto-encoder, one can think of g as the matrix J T J where J denotes the Jacobian of the decoder -thus g gives rise to scalar product g(X, Y):= XJ T JY. In the stochastic case, one can use suitable approximations to obtain g in a similar manner -e.g. in the authors decompose the decoder into a deterministic and a stochastic part, whose Jacobians J 1, J 2 are summed as J T 1 J 1 + J T 2 J 2 to obtain the matrix g. Now, having Riemannian structure (i.e. the notion of a distance) on the data submanifold, geodesic curves naturally arise as minimizers of a suitable distance functional, namely: where the norm · g is computed with respect to the Riemannian metric g, that is g(·, ·). We note that the utilization of geodesics for suitable latent space interpolations was thoroughly discussed in. As mentioned already, we would like that classifier's probabilities change in a monotonous fashion along the paths -these paths are preferable in the sense that they provide examples following a particular trend along the disputed labels. We enforce such monotonic behaviour along the paths with the term r m:= 1 K(n − 1) with n the number of points along the path and K the number of paths. Further, and in accordance with Proposition 1, one can also require that the auto-encoder reconstructs the endpoints with sufficiently large accuracy. We enforce this requirement with the edge term r e:= i (|b(l i, x i)) − b(l i,x i)| + c(x i,x i)), i = 0, T, −T,where c measures the reconstruction error 10 andx i ∼ P θ (X|Z = z i), with z i ∼ Q φ (Z|X = x i) and x i the data points at i = 0, T, −T. In contrast to VAE, within the WAE framework one only needs to be able to sample from Q φ (Z|X) and P θ (X|Z) -i.e. their density is not needed. WAE is trained by minimizing a (penalized) optimal transport divergence -the Wasserstein distance, between the input data distribution P D (X) and the implicit latent variable model P θ (X). As in VAE, the latter is defined by first sampling Z from P (Z) and then mapping Z to X through the decoder P θ (X|Z). The loss function of WAE is given by where c is a distance function and D Z is an arbitrary divergence between the prior P (Z) and the agregate posterior Q φ (Z) = E P D (X) [Q φ (Z|X)], weighted by a positive hyperparameter λ. Minimizing Equation 56 corresponds to minimizing the Wasserstein distance if the decoder is deterministic (i.e. P θ (X|Z = z) = δ g θ (z) ∀z ∈ Z, with the map g θ: Z → X ) and the distance term is optimized. If the decoder is stochastic Equation 56 yields an upper bound on the Wasserstein
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJxs5p4twr
We generate examples to explain a classifier decision via interpolations in latent space. The variational auto-encoder cost is extended with a functional of the classifier over the generated example path in data space.
The soundness and optimality of a plan depends on the correctness of the domain model. In real-world applications, specifying complete domain models is difficult as the interactions between the agent and its environment can be quite complex. We propose a framework to learn a PPDDL representation of the model incrementally over multiple planning problems using only experiences from the current planning problem, which suits non-stationary environments. We introduce the novel concept of reliability as an intrinsic motivation for reinforcement learning, and as a means of learning from failure to prevent repeated instances of similar failures. Our motivation is to improve both learning efficiency and goal-directedness. We evaluate our work with experimental for three planning domains. Planning requires as input a model which describes the dynamics of a domain. While domain models are normally hand-coded by human experts, complex dynamics typical of real-world applications can be difficult to capture in this way. This is known as the knowledge engineering problem BID3. One solution is to learn the model from data which is then used to synthesize a plan or policy. In this work, we are interested in applications where the training data has to be acquired by acting or executing an action. However, training data acquired in a planning problem could be insufficient to infer a complete model. While this is mitigated by including past training data from previous planning problems, this would be ill-suited for nonstationary domains where distributions of stochastic dynamics shift over time. Furthermore, the computation time increases with the size of the training data. Following these observations, we present an incremental learning model (ILM) which learns action models incrementally over planning problems, under the framework of reinforcement learning. PPDDL, a planning language modelling probabilistic planning problems BID20 ) (see Figure 1), is used for planning, and a rules-based representation (see FIG0) is used for the learning process. A parser translates between these two representations. Action models that were learned previously are provided to subsequent planning problems and are improved upon acquiring new training data; past training data are not used. We denote the models provided as prior action models. These could also be hand-coded, incomplete models serving as prior knowledge. Using prior knowledge has two advantages: it biases the learning towards the prior action models, and it reduces the amount of exploration required. While the learning progress cannot be determined without the true action models, we can estimate it empirically based on the of learning and acting. This empirical estimate, or reliability, is used to guide the search in the space of possible models during learning and as an intrinsic motivation in reinforcement learning. When every action is sufficiently reliable, we instead exploit with Gourmand, a planner that solves finite-horizon Markov Decision Processes (MDP) problems online BID9.Another major contribution of our work is its ability to learn from failure. Actions fail to be executed if their preconditions are not satisfied in the current state. This is common when the model is incorrect. Failed executions can have dire consequences in the real-world or cause irreversible changes such that goal states cannot be reached. ILM records failed executions and prevents any further attempts that would lead to similar failure. 
This reduces the number of failed executions and increases the efficiency of exploration. The rest of the paper is organized as follows. First, we review related work and then present the necessary . Next, we provide details of ILM. Lastly, we evaluate ILM in three planning domains and discuss the significance of various algorithmic features introduced in this paper. We extend the rules learner from BID15 that learns a set of relational rules to represent an action which can have probabilistic effects. A relational representation allows generalization of the state space unlike propositional rules which are used in BID14. Our training data consists of state transitions (s t, a t, s t+1) where s t is the pre-state, a t is the grounded action, and s t+1 is the post-state. This requirement is stricter than BID19 BID22 BID2 which learns from sequences of actions with no information on intermediate states. However, these works learn deterministic actions whereas we are interested in probabilistic actions. BID13 BID12 but do not address the incremental nature of reinforcement learning. BID5 BID18 ) learn deterministic action models incrementally while BID16 learns probabilistic action models. Our work is most similar to the latter which revises relational rules representing an action whenever contradicting examples are received. They do not store all the examples but rather track how well each rule explains the examples. On the other hand, we address incremental learning over planning problems where only current training data is used. Furthermore, our approach could consider prior knowledge in the form of incomplete action models which can have extraneous predicates unlike BID23. A second area of research that is related to our work is model-based reinforcement learning. R-MAX BID0 ) is provably sample-efficient, handling the balance between exploration and exploitation implicitly by assigning the maximum reward to unknown states which are set to absorbing states. If the count, defined as the number of times an action is executed in the state, of every applicable action exceeds a threshold, then the state is known. R-MAX is impractical for planning problems with large state spaces. Hence, additional assumptions such as factored state spaces BID8, known structures of dynamic Bayesian networks (DBN) BID6, or known maximum in-degree of DBNs BID4 ) are often made. Conversely, we only assume that the arguments of actions are known (e.g., we know moveCar has ? from and ?to as its arguments). We also use an empirical estimate for the learning progress, which we call reliability, as intrinsic motivation. Reliability is also used to quantify prior knowledge which other works on intrinsic motivation do not address BID1 BID7.Background PPDDL. Action models described in PPDDL are defined by their preconditions and effects, typically restricted to conjunctions of predicates. An example is shown in Figure 1. An action is applicable if its precondition is true in the current state, and executing it changes the state according to its effects which can be deterministic or probabilistic. Rules. For learning action models, we use a rules-based representation as it is well-suited to the incremental nature of reinforcement learning BID16 ). An action is described by a set of rules R where a rule r ∈ R has three parts: the name of the action, the precondition, and the effect. An example is shown in FIG0. 
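A minimal sketch of the rules-based representation just described: each rule carries an action name, a precondition, and probabilistic effects (with any remaining probability mass treated as the noise effect), and it covers a state-action pair when it names the action and its grounded precondition holds in the state. The predicate encoding, field names and the omission of negated preconditions are simplifying assumptions of this sketch.

```python
from dataclasses import dataclass, field

# A predicate is a tuple: ("road", "?loc1", "?loc2") when lifted, ("road", "l11", "l21") when grounded.

@dataclass
class Rule:
    """One relational rule of an action: precondition plus probabilistic effects."""
    action: str
    params: tuple                       # e.g. ("?loc1", "?loc2")
    precondition: frozenset             # set of predicates over the parameters
    effects: dict = field(default_factory=dict)   # outcome (frozenset of predicates) -> probability

    def ground(self, args):
        """Substitute the rule's parameters by the grounded action's objects."""
        sub = dict(zip(self.params, args))
        return {tuple(sub.get(term, term) for term in pred) for pred in self.precondition}

    def covers(self, state, action_name, args):
        """True if this rule represents the action and is applicable in `state`."""
        return action_name == self.action and self.ground(args) <= set(state)
```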
The key difference between PPDDL and rules representations are the addition of noise effect in the latter which serves to avoid modelling a multitude of rare effects which could increase the complexity of synthesizing a plan. When a rare effect occurs, it is often better to replan. Multiple rules are required to represent disjunctive preconditions or effects. A rule covers a state-action pair (s, a) if it represents a and is applicable in s. Every state-action pair in the training data is covered by at most one rule which is called the unique covering rule, denoted as r (s,a). A propositional rule is obtained from the grounding of a relational rule by assigning an object or value to every argument in the rule (e.g. grounding moveCar(?loc1, ?loc2) to moveCar(l31, l13)). Actions are grounded in a similar fashion. MDPs model fullyobservable problems with uncertainty. A finite-horizon MDP is a tuple of the form (S, A, T, R, G, s 0, H) where S is a set of states, A is the set of actions, T: S ×A×S → is the transition function, R: S × A → R specifies rewards for performing actions, G is the set of goal states, s 0 is the initial state, and H is the number of decision epochs or planning horizon. The objective is to find a policy which maximizes the sum of expected rewards. Reinforcement Learning. When transition functions in MDPs are not known, model-based reinforcement learning can be used to learn them and perform sequential decisionmaking. This is the same as learning action models as they can be translated to transition functions BID20. Reinforcement learning deals with the balance between exploration and exploitation. Exploration seeks meaningful experiences from which action models are learned while exploitation synthesizes a policy using the models. We propose a new approach to incremental learning across planning problems called ILM. ILM has two main components: a rules learner and a reinforcement learning framework. We first introduce the concept of reliability which is used in both components followed by the extension made to the rules learner from BID15. Lastly, we provide details of the framework. The reliability of learned action models are empirical estimates of its learning progress. Reliability serves two purposes. We extend the rules learner from BID15 to consider the prior action model and its reliability to learn new rules. In reinforcement learning, less reliable actions are preferred during exploration. Reliability is defined as: DISPLAYFORM0 where o is an action, EX is the exposure, SU is the success rate, V O is the volatility, α s, α v, and γ are scaling parameters, n is the number of updates, and o 0 is the prior action model which can be an incomplete action model or an empty action model (no predicates in precondition and effect). Reliability is updated whenever o is executed. The initial values of SU and V O are set to zero. The reliability of the prior model is inherited with γ ∈ as the discount factor which reduces its significance given new data. Success Rate. An action with a high success rate indicates that recent executions are successful which is more likely if it has a small error. We define the success rate as: DISPLAYFORM1 where SU (o) ∈ 0, 1 1−γ, st is the execution status, and the indicator function 1 equals to 1 if the enclosing condition is true; otherwise, it is 0. The execution status is'failure' when the precondition of the action executed is not satisfied. The state is then assumed to be unchanged. 
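The success-rate recursion is described explicitly above; the volatility recursion is only characterized qualitatively, so the sketch below assumes it uses the same discounted form, with the normalized rule-set difference as the per-update increment. Both function names and the default gamma are assumptions.

```python
def update_success_rate(su_prev, status, gamma=0.9):
    """Discounted success-rate recursion: past executions are down-weighted by
    gamma and the newest execution adds 1 only if it fully succeeded, so SU
    stays within [0, 1 / (1 - gamma)].  (How partial successes are counted is
    not fully specified in the excerpt; this sketch counts only full successes.)"""
    return gamma * su_prev + (1.0 if status == "success" else 0.0)

def update_volatility(vo_prev, normalized_rule_change, gamma=0.9):
    """Volatility recursion, assumed to mirror the discounted form of SU:
    `normalized_rule_change` is the normalized difference between the rule sets
    before and after learning."""
    return gamma * vo_prev + normalized_rule_change
```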
The status is'partial success' if the post-state is not expected given the learned effects. SU is computed recursively with γ as the discount factor which gives less importance to past executions. Volatility. Volatility measures how much a set of rules representing an action changes after learning. A low volatility suggests that learning has converged to the true action model. Volatility is computed recursively, and is defined as: DISPLAYFORM2 where DISPLAYFORM3 is the set of rules before (after) learning, andd(R prev, R) is the normalized difference between the two sets of rules. The difference between two rules is defined as: DISPLAYFORM4 ) where superscripts p and e refer to the precondition and effect of a rule, respectively, and d − (p 1, p 2) returns the number of predicates that are in the set of predicates p 1 but not in p 2. The normalized difference is defined as: DISPLAYFORM5 |r 1 | + |r 2 | where the operator |r| refers to the number of predicates in r. The difference between two set of rules, d(R 1, R 2), is the sum of differences of pairs of rules r 1 ∈ R 1 and r 2 ∈ R 2 where the rules are paired such that the sum is minimal. Each rule is paired at most once and the number of predicates in unpaired rules are added to the sum. Exposure. Exposure measures the variability (inverse of similarity BID10) of the prestates in the training data, and is defined as: DISPLAYFORM6 where S is the set of unique pre-states in the state transitions involving o, and N s is the number of state transitions ing from successful executions. The first term is the ratio of state transitions from successful executions, penalizing those from failed executions which are less informative. Essentially, exposure is the average pairwise difference between pre-states weighted by N s. Since probabilities of effects are inferred using maximum likelihood on the N s successful state transitions, reliability considers these probabilities implicitly. Only unique pre-states are used to prevent double-counting. For example, in the Exploding Blocksworld domain, the sequence of actions pickUpFromTable(b1) and putDown(b1) can be executed repeatedly. This also causes V O to decrease and SU to increase which yields a high reliability which does not reflect the learning progress of the actions. Using exposure as a scaling factor prevents such scenarios. The rules learner from BID15 applies a search operator, selected at random, to a rule. Each search operator modifies the rule differently to yield a set of new rules. An example of a rule is shown in FIG0. A greedy search uses a score function as heuristics. We introduce a deviation penalty, P EN (R, R 0), to the score function such that the search begins from and is bounded around the prior action models, R 0, which can be a set of empty rules, or rules of incomplete action models. Hence, the learner refines R 0. The score function is defined as: DISPLAYFORM0 whereP is the probability of the effect in r (s,a) which covers the transition (s, a, s'), T is the training data, α p is a parameter, and P EN (r) penalizes complex rules to avoid over-specialization. 
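A sketch of the rule-difference and exposure computations described above. The symmetric pairing of precondition and effect differences, the use of symmetric set difference between pre-states, and the exact placement of the N_s weighting in the exposure are plausible readings of the elided formulas rather than the paper's exact definitions.

```python
from itertools import combinations

def d_minus(p1, p2):
    """Number of predicates in set p1 that are not in set p2."""
    return len(set(p1) - set(p2))

def rule_difference(pre1, eff1, pre2, eff2):
    """d(r1, r2): symmetric predicate-count difference over preconditions and effects."""
    return (d_minus(pre1, pre2) + d_minus(pre2, pre1)
            + d_minus(eff1, eff2) + d_minus(eff2, eff1))

def normalized_difference(pre1, eff1, pre2, eff2):
    """d(r1, r2) / (|r1| + |r2|), where |r| counts the predicates in a rule."""
    size = len(pre1) + len(eff1) + len(pre2) + len(eff2)
    return rule_difference(pre1, eff1, pre2, eff2) / size if size else 0.0

def exposure(pre_states, n_success, n_total):
    """Variability of the pre-states: the ratio of successful transitions times the
    average pairwise difference between unique pre-states, weighted by N_s."""
    unique = list({frozenset(s) for s in pre_states})
    if len(unique) < 2 or n_total == 0:
        return 0.0
    pairs = list(combinations(unique, 2))
    avg = sum(len(a ^ b) for a, b in pairs) / len(pairs)
    return (n_success / n_total) * n_success * avg
```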
The deviation penalty increases when R deviates further from R 0, and is defined as: DISPLAYFORM1 where α drop and α add are scaling parameters, and ∆ drop (R, R 0) and ∆ add (R, R 0) are defined as: DISPLAYFORM2 where the pairings of rules r ∈ R and r 0 ∈ R 0 are the same as the pairings in d(R, R 0).Since past training data is not used, the rules learner may consider a probabilistic effect of R 0 as noise if this effect is rarely seen in the current training data. ∆ drop increases when this happens. If the probabilistic effect is not seen at all, it will be dropped in R regardless of how large P EN (R, R 0) is. Such rules will be rejected. The deviation penalty is scaled by the reliability of the prior action model and the inverse of exposure. The intuition is that deviation should be limited if the prior action model is highly reliable, and encouraged if the training data has high variability. We begin this section by explaining the main algorithm for ILM (Algorithm 1), followed by the subroutines for reinforcement learning and learning from failure. The inputs to Algorithm 1 are the prior action models (R 0) and their reliability (RE 0), initial state (s 0), goal state (g), and the maximum number of iterations (N). EX max = 0 and tabu = ∅ for the first function call and shall be discussed later. The main loop interleaves learning, planning, and acting (lines 5 to 19). Exploration and exploitation is performed at the start of each iteration (line 6). If no action is found, then a dead-end is reached (line 7) and the algorithm terminates. When an action fails to execute, ILM learns from this failure by recording the failed instance in tabu (line 11: relevant predicates returns the set of grounded predicates in s that does not contain objects that were not in a), otherwise, synthetic state transitions (s t, a, s t) are generated (line 13) where a T is a randomly grounded action such that check tabu(s t, a, tabu) ⇒ ⊥. Failed executions are exceedingly less as failed instances are added to tabu. Reconstructing synthetic failed transitions augment the training data and aids the learning of preconditions. Learning from training data of low variability (or low exposure) could in lower correctness of learned rules. To prevent this, we delay learning until certain criteria are met (can learn in line 15): 1. If R 0 is the set of empty rules, always learn since no information can be lost. However, this risks learning incorrect preconditions or effects that can prevent the agent from reaching the goal state. 2. Otherwise, learn if there is at least one successful transition, at least one failed or synthetic transition, and EX > α EX EX max where α EX ∈.If learning is allowed, then new rules are learned (learn rules in line 16) and the values of RE, EX, V O, and SU are updated (line 17). Otherwise, only RE, EX, and SU are updated. The algorithm terminates after reaching the maximum number of iterations or when the goal is reached. It returns the learned rules, reliability, maximum exposure (EX max), and tabu. These are used as inputs to the next function call to Algorithm 1. The balance between exploration and exploitation is implemented in EE(s, g, R, RE, tabu, ζ). 
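A skeleton of the main loop of Algorithm 1, paraphrased from the description above. All domain- and learner-specific steps are injected as callables because their implementations are given elsewhere in the paper; the function signature and helper names are assumptions, and the generation of synthetic failed transitions is elided here.

```python
def ilm_episode(s0, goal, rules, reliability, tabu, ex_max, n_iters,
                select_action, execute, can_learn, learn_rules, update_reliability):
    """Interleaved learning, planning and acting for one planning problem.
    `select_action` plays the role of the EE (explore/exploit) subroutine and
    `execute` runs an action, reporting success, partial success or failure."""
    s, transitions = s0, []
    for _ in range(n_iters):
        action = select_action(s, goal, rules, reliability, tabu)
        if action is None:                       # dead-end: no admissible action
            break
        s_next, status = execute(s, action)
        if status == "failure":
            tabu.append((s, action))             # record the failed instance (lifted in the paper)
        else:
            transitions.append((s, action, s_next))
        if can_learn(rules, transitions, ex_max):
            rules = learn_rules(rules, transitions)
        reliability, ex_max = update_reliability(reliability, rules, status, ex_max)
        s = s_next
        if s == goal:
            break
    return rules, reliability, ex_max, tabu
```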
First, we compute the counts return R, RE, max(EX, EX max), tabu for all applicable actions in s using the context-based density formula from (Lang, Toussaint, and Kersting 2012) which performs relational generalizations -the amount of exploration is reduced as states which are unknown under propositional representations could be known under relational representations. The count-action pairs < c, o > are sorted in increasing order of c = RE(o) r∈R (s,a,s')∈T 1(r is applicable in s) in a list, L, where R are rules of o. Reliability serves as intrinsic motivation where less reliable actions are explored more. A state is known if ∀c i ∈ L (c i ≥ ζ), or if the reliability of every action exceeds a constant threshold. The second condition allows exploitation using prior action models when counts are still zero. If the state is known, exploitation is attempted using Gourmand, a planner that solves problems modelled in finite-horizon MDP online BID9. ILM can use any planner that accepts planning problems written in PPDDL. Exploitation fails if no plan is found or if the first action of the plan is in tabu. Exploration is attempted if the state is not known or exploitation fails. An action is popped off the top of L and a list of grounded actions that are applicable in s are enumerated. A grounded action that is not in tabu is selected at random and returned. If no such actions exist, then the next action is popped off until L is empty, following which random exploration is resorted to where actions are grounded without considering if preconditions are satisfied in s. If all grounded actions are in tabu, then a dead-end is reached. Learning from Failure Failed executions due to unsatisfied preconditions are recorded in tabu. Before an action a is executed in state s, Algorithm 2 checks if (s, a) is in tabu, returning False if so. We describe the algorithm with an example as shown in FIG1. A state is described by a set of predicates. We extract the set of predicates f s ⊆ s that does not have an object in its binding that is not in the argu- (l31) road(l11 l21) road(l21 l31) road(l12 l11) road(l13 l12) road(l13 l22) road(l22 l31) road(l22 l21) road(l12 l22) spareIn(l11) spareIn(l12) spareIn(l21) fs: ¬hasspare notFlattire at(l31) ft: ¬hasspare notFlattire at(?loc1) spareIn(?loc2) Perform substitution σ = {? loc1 → l31, ? loc2 → l13} on ft ft: ¬hasspare notFlattire at(l31) spareIn(l13) ments of a (line 2). We assume that the arguments of actions are known for this to be possible. f s is compared to each entry (f t, a t) in tabu (lines 3 to 7). The predicates in f t are grounded with the same substitution as the variables binding of a (line 5). Hence, the check is lifted to relational representations and is applicable even if the objects in the domain change. If f s does not have at least one predicate that is not in f t, then a is in tabu (line 6). In the example, moveCar(l31, l13) is in tabu, as are all grounded actions of moveCar that do not have road(?loc1, ?loc2) in f s. check tabu exploits experiences from failed executions which are otherwise uninformative to the rules learner as it cannot determine the reason for the failure BID17. Since every action is checked before execution, tabu will not contain identical entries. This keeps the size of tabu to a minimum which is important as the memory and time complexity is O(|tabu|). The completeness of Algorithm 2 depends on the failed instances in tabu. 
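A sketch of the tabu check (Algorithm 2) as described: the relevant predicates of the current state are compared against each recorded failure after grounding the lifted entry with the same substitution as the action's argument binding. Tabu entries are assumed to be stored as (lifted predicates, action name, parameter list) triples, and predicates as tuples whose first element is the predicate name; negation handling is omitted.

```python
def relevant_predicates(state, action_args):
    """Predicates in `state` whose objects all appear among the action's arguments."""
    objs = set(action_args)
    return {p for p in state if set(p[1:]) <= objs}

def check_tabu(state, action_name, action_args, tabu):
    """
    Returns False if executing the grounded action in `state` matches a recorded
    failure: the lifted entry f_t is grounded with the action's binding, and the
    action is in tabu when f_s has no predicate that is not in the grounded f_t.
    """
    f_s = relevant_predicates(state, action_args)
    for f_t, name, params in tabu:
        if name != action_name:
            continue
        sub = dict(zip(params, action_args))
        grounded_f_t = {tuple(sub.get(term, term) for term in p) for p in f_t}
        if not (f_s - grounded_f_t):     # nothing distinguishes s from the failure
            return False                 # action is in tabu
    return True                          # not known to fail
```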
In the example, if tabu is ∅, then a is not in tabu, and in this case, the algorithm is incomplete. a then fails to execute following which f s is lifted with σ = {l31 → ? loc1, l13 → ? loc2} and inserted with o in tabu. Since no erroneous instance is ever added to tabu, the algorithm is sound. That is, no action that is found in tabu will succeed in execution. In one trial of experiments, ten planning problems are attempted sequentially in an order of increasing scale (see Table 1). We denote an attempt as one round. Each trial starts with no prior knowledge; the prior action models for round 1 are empty action models. Since the planning problems are probabilistic, 50 independent trials are conducted. The ma- Table 1: Number of objects in small, medium, and largescale planning problems for each of the three domains. Table 2: Algorithmic configurations for ILM, ILM-R, ILM-T, and R-MAX. DISPLAYFORM0 chine used to run the experiments was a four core Intel(R) i5-6500 with 4 GB of RAM.We used three planning domains: Tireworld and Exploding Blocksworld domains from the International Probabilistic Planning Competition BID21, and the Logistics domain. In the Tireworld domain, the car may get a flat tire when moving to another location. If the tire is flat, the car cannot move and a deadend is reached if no spare tires are available. Tireworld problems of the same scale are identical and are constructed systematically such that there are no unavoidable dead-ends BID11. In the Exploding Blocksworld domain, a block may detonate when it is put down, destroying the block or table beneath. A destroyed block or table is no longer accessible. Each block can only detonate once. We set the goal states as random configurations of three blocks. All Logistics problems have one truck per city, one airplane, and one parcel. Loading and unloading parcels may fail and the state remains unchanged. The models for all domains are stationary where probabilities of the effects of actions are kept constant in all rounds. The performance of ILM is evaluated with the correctness of the learned model and the goal-directedness. R-MAX and two variants of ILM are included for comparison. ILM-R does not use reliability; the relational count is not weighted and the deviation penalty in the score function used by the rules learner is zero. In addition, ILM-R does not delay learning (line 15 of Algorithm 1) as this requires EX max, a component of reliability. ILM-T does not learn from failure. ILM, ILM-R, and ILM-T do not use past training data while R-MAX does. The algorithmic configurations are summarized in Table 2. The correctness of a learned modelP can be defined as the average variational distance betweenP and the true model P BID15: DISPLAYFORM0 where T is the set of test examples -500 state transitions per action are generated with the true distribution. FIG2 show the variational distances for Tireworld, Exploding Blocksword, and Logistics domains. The variational distances at round 0 are of the prior action models, which are empty models for round 1.Tireworld. ILM learns action models incrementally as evident by the decrease in variational distance from rounds 1 to 10. ILM-R performed marginally worse as it learns from training data of low variability which caused the variational distances to increase in rounds 4 and 6. The utility of learning from failure is illustrated by the significantly larger variational distances for ILM-T and R-MAX. 
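For reference, the correctness measure used in these comparisons can be read as the standard average variational distance over a held-out set of transitions sampled from the true model; the exact formula is not shown in the extracted text, so the sketch below is one common reading, with `p_true` and `p_learned` assumed to return the probability each model assigns to a transition.

```python
def variational_distance(p_true, p_learned, test_transitions):
    """Average absolute difference between the probabilities the true and learned
    models assign to each test transition (s, a, s')."""
    total = 0.0
    for s, a, s_next in test_transitions:
        total += abs(p_true(s, a, s_next) - p_learned(s, a, s_next))
    return total / len(test_transitions)
```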
In both cases, most of the executions led to failure which are less meaningful DISPLAYFORM1 1 18 4 2 22 1 3 3 0 0 6 0 0 2 to 3 38 22 10 42 10 5 6 4 0 16 10 3 4 to 6 17 27 20 16 12 9 2 9 3 13 8 6 7 to 10 6 17 18 4 7 16 1 9 9 2 9 17 Table 3: Average number of successful trials out of 50 trials for Tireworld (T), Exploding Blocksworld (E), and Logistics (L) domains.experiences for the rules learner. Since the maximum number of iterations is only 15 (moveCar alone has 36 possible groundings for the small-scale planning problems), such inefficient exploration performs poorly. Exploding Blocksworld. The lowest variational distances are achieved with ILM from rounds 1 to 4 and with R-MAX thereafter. The latter learns from a larger training set which is important for this domain which has complex actions putOnBlock and putDown. These actions have conditional effects which are modelled as separate rules with different preconditions. Preconditions are inferred by comparing prestates in the training data. Most of the predicates in the prestates remain unchanged as an action typically changes a small subset of the state. Hence, more training data is required to learn more complex preconditions. Since the training data used by R-MAX are largely from failed experiences, it took four rounds before it outperforms ILM.Logistics. ILM had the best performance in all rounds. The large variational distances for ILM-T is due to the difficulty in learning driveTruck. This action has four arguments and there are 432 possible groundings in the smallscale planning problems. This has complications in the goaldirectedness which shall be discussed in the next subsection. The goal-directedness is evaluated by the number of successful trials which are trials where the goal state is reached. The goal-directedness for the three domains is shown in Table 3 which underlines the performance of the different algorithmic configurations. It is averaged over rounds with planning problems of the same scale. Round 1 is separated from rounds 2 and 3 to illustrate the advantage of having prior knowledge. The average number of successful trials for rounds 2 and 3 were generally larger than round 1 even though the scales of the planning problems are the same. This is because ILM exploits learned models from the previous round whereas round 1 had no such prior knowledge. Tireworld. ILM-R outperforms ILM in rounds 1 to 3. This is because the goal state can be reached by executing moveCar repeatedly as long as the tire is not flat along the way. ILM attempts exploitation more often than ILM-R Figure 5: Number of actions from exploration, exploitation, forced exploration, and random exploration that were successfully executed in the Exploding Blocksworld domain. The are the means and standard deviations of 50 trials using ILM (top) and ILM-T (bottom).as it weights relational counts with reliability. For smallscale planning problems, exploration or exploitation may not make a significant difference. When the scale increases, the number of steps between the initial state and the goal state increases and the probability of getting a flat tire along the way is higher. A dead-end is reached if the tire is flat and no spare tire is available. In such circumstances, exploitation is required and ILM outperforms ILM-R in rounds 4 to 10. ILM-T and R-MAX did not perform well as actions failed to execute most of the time. Exploding Blocksworld. Dead-ends are often the cause of failing to reach the goal state. 
A block could detonate with a probability of 0.2 when executing putDown or putOnBlock which destroys the table or underlying block. These irreversible changes to the state could then lead to dead-ends. Nevertheless, ILM has the most number of successful trials in all rounds. ILM-R performed much poorer than ILM as reaching the goal state with exploration alone is difficult. Even though R-MAX has lower variational distances than ILM for rounds 5 to 10, it did not outperform ILM as it does not learn from failure. Figure 5 compares the number of actions that were executed successfully in each round using ILM and ILM-T. The latter had significantly fewer number of successful executions. Figure 5 shows the frequency of exploration decreasing over the rounds while that of exploitation increased. This is expected as the action models are learned incrementally and exploitation should be used in later rounds where the variational distance is lower. FIG3 shows the use of tabu in ILM. The number of entries added to tabu declined sharply after round 1 because repeating entries are not added. The number of actions found in tabu correspond to the number of failed executions if this check was not done. This number rose for rounds 7 to 10 because the number of grounded actions increases in the largescale planning problems. Logistics. The number of successful trials increases even when the scale of the planning problem increases. In smallscale planning problems, there were few successful trials because driveTruck was not learned yet as mentioned previously. driveTruck failed to execute repeatedly till round 3 as only two out of 432 grounded actions would succeed. As a , a subset of the state space, which could include the goal state, is not reached. If states where the truck is at a location with a parcel are never reached, then loadTruck and unloadTruck could not be executed. This applies to loadAirplane and unloadAirplane if a parcel is not at an airport. For the trial shown in FIG4, the action model for driveTruck was learned from a single partially successful execution in round 3. The learned rule is shown in Figure 8. Its Action: driveTruck(?truck ? from ?to ?city) Precondition: airport(?from) ∧ truck at(?truck ?from)... ∧ in city(?to ?city) Effect: 1.0 ¬truck at(?truck ? from) ∧ truck at(?truck ? to) 0 noise Figure 8: Learned rule for driveTruck in the Logistics domain. The predicate airport(?from) in the precondition is extraneous and causes the action to be erroneously inapplicable in some parts of the state space.precondition had the extraneous predicate airport(?from). As a , driveTruck is not selected in rounds 4, 5, and 7 because this incorrect precondition is not satisfied though the action is in fact applicable. loadTruck and unloadTruck are not attempted in rounds 4 to 8 because they are in tabu. This example illustrates the adverse impact of learning extraneous predicates for preconditions. Although we delay learning when the training data has low variability, this is not done if the action models are empty. We presented a domain-independent framework, ILM, for incremental learning over multiple planning problems of a domain without the use of past training data. We introduced a new measure, reliability, which serves as an empirical estimate of the learning progress and influences the processes of learning and planning. The relational counts are weighted with reliability to reduce the amount of exploration required for reliable action models. 
We also extended an existing rules learner to consider prior knowledge in the form of incomplete action models. ILM learns from failure by checking whether an action is in a list of state-action pairs representing actions that have previously failed to execute. We evaluated ILM on three benchmark domains. Experimental results showed that the variational distances of learned action models decreased over each subsequent round. Learning from failure greatly reduces the number of failed executions, leading to improved correctness and goal-directedness. For complex domains, more training data is required to learn action models. Using past training data would not work well for non-stationary domains and also increases the computation time for learning. The first issue could be resolved by learning distributions from the current training data only. The second issue could be resolved by maintaining a fixed size of training data, replacing older experiences while maximizing the exposure, or variability, of the training data. These will be explored in the future.
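As a concrete reference for the two mechanisms summarized above, the following is a minimal, illustrative Python sketch (not the authors' implementation); the class and function names, the state/action representation, and the exact weighting formula are assumptions made for illustration only.

```python
class TabuList:
    """List of failed state-action pairs used to learn from failure (sketch)."""

    def __init__(self):
        self._failed = set()

    def add_failure(self, state, action):
        # States and actions are assumed hashable, e.g. frozensets of ground literals.
        self._failed.add((state, action))

    def is_tabu(self, state, action):
        # If True, the planner skips this action instead of repeating a known failure.
        return (state, action) in self._failed


def weighted_count(raw_count, reliability):
    """Weight a relational count by the reliability of the learned action model.

    The intent, as described above, is that reliable models need less exploration:
    inflating their counts makes the agent exploit them sooner. The linear form
    used here is only an illustrative stand-in for the paper's actual weighting.
    """
    return raw_count * (1.0 + reliability)
```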
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1eZxbU9DE
Introduce an approach to allow agents to learn PPDDL action models incrementally over multiple planning problems under the framework of reinforcement learning.
The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces. With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization match the performance of SAC. Our experimental demonstrate a need to revisit the benefits of entropy regularization in DRL. We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks. Off-policy Deep Reinforcement Learning (RL) algorithms aim to improve sample efficiency by reusing past experience. Recently a number of new off-policy Deep Reinforcement Learning algorithms have been proposed for control tasks with continuous state and action spaces, including Deep Deterministic Policy Gradient (DDPG) and Twin Delayed DDPG (TD3) . TD3, which introduced clipped double-Q learning, delayed policy updates and target policy smoothing, has been shown to be significantly more sample efficient than popular on-policy methods for a wide range of Mujoco benchmarks. The field of Deep Reinforcement Learning (DRL) has also recently seen a surge in the popularity of maximum entropy RL algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In particular, Soft Actor Critic (SAC), which combines off-policy learning with maximum-entropy RL, not only has many attractive theoretical properties, but can also give superior performance on a wide-range of Mujoco environments, including on the high-dimensional environment Humanoid for which both DDPG and TD3 perform poorly (a; b;). SAC has a similar structure to TD3, but also employs maximum entropy reinforcement learning. In this paper, we first seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that when using the standard objective without entropy along with standard additive noise exploration, there is often insufficient exploration due to the bounded nature of the action spaces. Specifically, the outputs of the policy network are often way outside the bounds of the action space, so that they need to be squashed to fit within the action space. The squashing in actions persistently taking on their maximal values, so that there is insufficient exploration. In contrast, the entropy term in the SAC objective forces the outputs to have sensible values, so that even with squashing, exploration is maintained. We conclude that the entropy term in the objective for Soft Actor Critic principally addresses the bounded nature of the action spaces in the Mujoco environments. With this insight, we propose Streamlined Off Policy (SOP), a streamlined algorithm using the standard objective without the entropy term. 
SOP employs a simple normalization scheme to address the bounded nature of the action spaces, allowing satisfactory exploration throughout training. We also consider replacing the aforementioned normalization scheme with inverting gradients (IG) The contributions of this paper are thus threefold. First, we uncover the primary contribution of the entropy term of maximum entropy RL algorithms when the environments have bounded action spaces. Second, we propose a streamlined algorithm which do not employ entropy maximization but nevertheless matches the sampling efficiency and robustness performance of SAC for the Mujoco benchmarks. And third, we combine our streamlined algorithms with a simple non-uniform sampling scheme to achieve state-of-the art performance for the Mujoco benchmarks. We provide anonymized code for reproducibility 1. We represent an environment as a Markov Decision Process (MDP) which is defined by the tuple (S, A, r, p, γ), where S and A are continuous multi-dimensional state and action spaces, r(s, a) is a bounded reward function, p(s |s, a) is a transition function, and γ is the discount factor. Let s(t) and a(t) respectively denote the state of the environment and the action chosen at time t. Let π = π(a|s), s ∈ S, a ∈ A denote the policy. We further denote K for the dimension of the action space, and write a k for the kth component of an action a ∈ A, that is, a = (a 1, . . ., a K). The expected discounted return for policy π beginning in state s is given by: γ t r(s(t), a(t))|s = s] Standard MDP and RL problem formulations seek to maximize V π (s) over policies π. For finite state and action spaces, under suitable conditions for continuous state and action spaces, there exists an optimal policy that is deterministic . In RL with unknown environment, exploration is required to learn a suitable policy. In DRL with continuous action spaces, typically the policy is modeled by a parameterized policy network which takes as input a state s and outputs a value µ(s; θ), where θ represents the current parameters of the policy network (; ; ;). During training, typically additive random noise is added for exploration, so that the actual action taken when in state s takes the form a = µ(s; θ) + where is a K-dimensional Gaussian random vector with each component having zero mean and variance σ. During testing, is set to zero. Maximum entropy reinforcement learning takes a different approach than by optimizing policies to maximize both the expected return and the expected entropy of the policy (; ; ; ; ; ; ; ; 2018a; b). In particular, with maximization entropy RL, the objective is to maximize where H(π(·|s)) is the entropy of the policy when in state s, and the temperature parameter λ determines the relative importance of the entropy term against the reward. For entropy maximization DRL, when given state s the policy network will typically output a Kdimensional vector σ(s; θ) in addition to the vector µ(s; θ). The action selected when in state s is then modeled as µ(s; θ) + where ∼ N (0, σ(s; θ)). Maximum entropy RL has been touted to have a number of conceptual and practical advantages for DRL (a; b). For example, it has been argued that the policy is incentivized to explore more widely, while giving up on clearly unpromising avenues. It has also been argued that the policy can capture multiple modes of near-optimal behavior, that is, in problem settings where multiple actions seem equally attractive, the policy will commit equal probability mass to those actions. 
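The two objectives referenced in this section appear garbled in this copy; in standard notation they are usually written as follows (a reconstruction of the standard forms, not text copied from the source):

```latex
% Expected discounted return of policy \pi starting from state s
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty}\gamma^{t}\, r\big(s(t),a(t)\big)\;\middle|\; s(0)=s\right]

% Maximum-entropy objective with temperature \lambda
J(\pi) \;=\; \sum_{t}\mathbb{E}_{(s(t),a(t))\sim\pi}\!\Big[\, r\big(s(t),a(t)\big) \;+\; \lambda\,\mathcal{H}\big(\pi(\cdot\mid s(t))\big)\Big]
```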
In this paper, we show for the Mujoco benchmarks that the standard additive noise exploration suffices and can achieve the same performance as maximum entropy RL. 3 THE SQUASHING EXPLORATION PROBLEM. When selecting an action, the action needs to be selected within these bounds before the action can be taken. DRL algorithms often handle this by squashing the action so that it fits within the bounds. For example, if along any one dimension the value µ(s; θ) + exceeds a max, the action is set (clipped) to a max. Alternatively, a smooth form of squashing can be employed. For example, suppose a min k = −M and a max k = +M for some positive number M, then a smooth form of squashing could use a = M tanh(µ(s; θ) + ) in which tanh is being applied to each component of the K-dimensional vector. DDPG and TD3 use clipping, and SAC (a; b) uses smooth squashing with the tanh function. For concreteness, henceforth we will assume that smooth squashing with the tanh is employed. We note that an environment may actually allow the agent to input actions that are outside the bounds. In this case, the environment will typically first clip the actions internally before passing them on to the "actual" environment . We now make a simple but crucial observation: squashing actions to fit into a bounded action space can have a disastrous effect on additive-noise exploration strategies. To see this, let the output of the policy network be µ(s) = (µ 1 (s),..., µ K (s)). Consider an action taken along one dimension k, and suppose µ k (s) >> 1 and | k | is relatively small compared to µ k (s). Then the action a k = M tanh(µ k (s)+ k ) will be very close (essentially equal) to M. If the condition µ k (s) >> 1 persists over many consecutive states, then a k will remain close to 1 for all these states, and consequently there will be essentially no exploration along the kth dimension. We will refer to this problem as the squashing exploration problem. A similar observation was made in. We will argue that algorithms such as DDPG and TD3 based on the standard objective with additive noise exploration can be greatly impaired by squashing exploration. SAC is a maximum-entropy based off-policy DRL algorithm which provides good performance across all of the Mujuco benchmark environments. To the best of our knowledge, it currently provides state of the art performance for the Mujoco benchmark. In this section, we argue that the principal contribution of the entropy term in the SAC objective is to resolve the squashing exploration problem, thereby maintaining sufficient exploration when facing bounded action spaces. To argue this, we consider two DRL algorithms: SAC with adaptive temperature (b), and SAC with entropy removed altogether (temperature set to zero) but everything else the same. We refer to them as SAC and as SAC without entropy. For SAC without entropy, for exploration we use additive zero-mean Gaussian noise with σ fixed at 0.3. Both algorithms use tanh squashing. We compare these two algorithms on two Mujoco environments: Humanoid-v2 and Walker-v2. Figure 1 shows the performance of the two algorithms with 10 seeds. For Humanoid, SAC performs much better than SAC without entropy. However, for Walker, SAC without entropy performs nearly as well as SAC, implying maximum entropy RL is not as critical for this environment. To understand why entropy maximization is important for one environment but less so for another, we examine the actions selected when training these two algorithms. 
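Before turning to that per-dimension analysis, a quick numerical check of the squashing argument above can be run in a few lines; the bound M and the noise level are illustrative.

```python
import numpy as np

# The squashing exploration problem: for large |mu|, additive Gaussian noise
# barely changes the squashed action a = M * tanh(mu + eps).
M, sigma = 1.0, 0.3
eps = np.random.normal(0.0, sigma, size=100_000)

for mu in (0.5, 2.0, 10.0):
    a = M * np.tanh(mu + eps)
    print(f"mu = {mu:5.1f}   std(a) = {a.std():.4f}   mean(a) = {a.mean():.3f}")

# As |mu| grows, std(a) collapses towards zero: the agent keeps emitting the
# same saturated action, so there is effectively no exploration along that dimension.
```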
Humanoid and Walker have action dimensions K = 17 and K = 6, respectively. Here we show representative for one dimension for both environments, and provide the full in the Appendix. The top and bottom rows of Figure 2 shows for Humanoid and Walker, respectively. The first column shows the µ k values for an interval of 1,000 consecutive time steps, namely, for time steps 599,000 to 600,000. The second column shows the actual action values passed to the environment for these time steps. The third and fourth columns show a concatenation of 10 such intervals of 1000 time steps, with each interval coming from a larger interval of 100,000 time steps. The top and bottom rows of Figure 2 are strikingly different. For Humanoid using SAC with entropy, the |µ k | values are small, mostly in the range [-1.5,1.5], and fluctuate significantly. This allows the action values to also fluctuate significantly, providing exploration in the action space. On the other hand, for SAC without entropy the |µ k | values are typically huge, most of which are well outside the interval [-10,10]. This causes the actions a k to be persistently clustered at either M or -M, leading to essentially no exploration along that dimension. As shown in the Appendix, this property (lack of exploration for SAC without entropy maximization) holds for all 17 action dimensions. For Walker, we see that for both algorithms, the µ k values are sensible, mostly in the range [-1,1] and therefore the actions chosen by both algorithms exhibit exploration. In , the principle benefit of maximum entropy RL in SAC for the Mujuco environments is that it resolves the squashing exploration problem. For some environments (such as Walker), the outputs of the policy network take on sensible values, so that sufficient exploration is maintained and overall good performance is achieved without the need for entropy maximization. For other environments (such as Humanoid), entropy maximization is needed to reduce the magnitudes of the outputs so that exploration is maintained and overall good performance is achieved. Given the observations in the previous section, a natural question is: is it possible to design a streamlined off policy algorithm that does not employ entropy maximization but offers performance comparable to SAC (which has entropy maximization)? As we observed in the previous section, without entropy maximization, in some environments the policy network output values |µ k |, k = 1,..., K can become persistently huge, which leads to insufficient exploration due to the squashing. A simple solution is to modify the outputs of the policy network by normalizing the output values when they collectively (across the action dimensions) become too large. To this end, let µ = (µ 1, . . ., µ K) be the output of the original policy network, and let G = k |µ k |/K. The G is simply the average of the magnitudes of the components of µ. The otherwise, we leave µ unchanged. With this simple normalization, we are assured that the average of the normalized magnitudes is never greater than one. Henceforth we assume the policy network has been modified with the simple normalization scheme just described. Our Streamlined Off Policy (SOP) algorithm is described in Algorithm 1. The algorithm is essentially DDPG plus the normalization described above, plus clipped double Q-learning and target policy smoothing . Another way of looking at it is as TD3 plus the normalization described above, minus the delayed policy updates and the target policy parameters. 
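A minimal sketch of the output normalization described above, written here as a PyTorch helper (the function name and tensor shapes are assumptions; squashing with M·tanh is applied afterwards, as in the text):

```python
import torch

def normalize_policy_output(mu: torch.Tensor) -> torch.Tensor:
    """SOP output normalization (sketch).

    mu: raw policy-network output of shape (batch, K).
    G is the average magnitude of the K components; if G > 1, mu is rescaled
    by 1/G, otherwise it is left unchanged, so the average normalized magnitude
    never exceeds one.
    """
    G = mu.abs().mean(dim=-1, keepdim=True)                 # (batch, 1)
    scale = torch.where(G > 1.0, 1.0 / G, torch.ones_like(G))
    return mu * scale
```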
SOP also uses tanh squashing instead of clipping, since tanh gives somewhat better performance in our experiments. The SOP algorithm is "streamlined" as it has no entropy terms, temperature adaptation, target policy parameters or delayed policy updates. In our experiments, we also consider TD3 plus the simple normalization, and also another streamlined algorithm in which we replace the simple normalization scheme described above with the inverting gradients (IG) scheme as described in. The basic idea is: when gradients suggest increasing the action magnitudes, gradients will be downscaled if actions are within the boundaries, and inverted entirely if actions are outside the boundaries. More implementation details can be found in the Appendix. Algorithm 1 Streamlined Off-Policy 1: Input: initial policy parameters θ, Q-function parameters φ 1, φ 2, empty replay buffer D 2: Set target parameters equal to main parameters φ targ i ← φ i for i = 1, 2 3: repeat Generate an episode using actions a = M tanh(µ θ (s) + ) where ∼ N (0, σ 1). for j in range(however many updates) do Randomly sample a batch of transitions, B = {(s, a, r, s)} from D Compute targets for Q functions: Update Q-functions by one step of gradient descent using Update policy by one step of gradient ascent using Update target networks with Figure 3 compares SAC (with temperature adaptation (a; b) ) with SOP, TD3+ (that is, TD3 plus the simple normalization), and inverting gradients (IG) for five of the most chal-lenging Mujuco environments. Using the same baseline code, we train with ten different random seeds for each of the two algorithms. Each algorithm performs five evaluation rollouts every 5000 environment steps. The solid curves correspond to the mean, and the shaded region to the standard deviation of the returns over the ten seeds. Results show that SOP, SAC and IG have similar sample-efficiency performance and robustness across all environments. TD3+ has slightly weaker asymptotic performance for Walker and Humanoid. IG initially learns slowly for Humanoid with high variance across random seeds, but gives similar asymptotic performance. This confirms that with a simple output normalization scheme in the policy network, the performance of SAC can be achieved without maximum entropy RL. In the Appendix we provide an ablation study for SOP, which shows a major performance drop when removing either double Q-learning or normalization, whereas removing target policy smoothing in only a small performance drop in some environments. We now show how a small change in the sampling scheme for SOP can achieve state of the art performance for the Mujoco benchmark. We call this sampling scheme Emphasizing Recent Experience (ERE). ERE has 3 core features: (i) It is a general method applicable to any off-policy algorithm; (ii) It requires no special data structure, is very simple to implement, and has near-zero computational overhead; (iii) It only introduces one additional important hyper-parameter. The basic idea is: during the parameter update phase, the first mini-batch is sampled from the entire buffer, then for each subsequent mini-batch we gradually reduce our range of sampling to sample more aggressively from more recent data. Specifically, assume that in the current update phase we are to make 1000 mini-batch updates. Let N be the max size of the buffer. 
Then for the k th update, we sample uniformly from the most recent c k data points, where c k = N · η k and η ∈ is a hyper-parameter that determines how much emphasis we put on recent data. η = 1 is uniform sampling. When η < 1, c k decreases as we perform each update. η can made to adapt to the learning speed of the agent so that we do not have to tune it for each environment. The effect of such a sampling formulation is twofold. The first is recent data have a higher chance of being sampled. The second is that we do this in an ordered way: we first sample from all the data in the buffer, and gradually shrink the range of sampling to only sample from the most recent data. This scheme reduces the chance of over-writing parameter changes made by new data with parameter changes made by old data (; ; ; ;). This process allows us to quickly obtain new information from recent data, and better approximate the value functions near recently-visited states, while still maintaining an acceptable approximation near states visited in the more distant past. What is the effect of replacing uniform sampling with ERE? First note if we do uniform sampling on a fixed buffer, the expected number of times a data point is sampled is the same for all data points. Now consider a scenario where we have a buffer of size 1000 (FIFO queue), we collect one data at a time, and then perform one update with mini-batch size of one. If we start with an empty buffer and sample uniformly, as data fills the buffer, each data point gets less and less chance of being sampled. Specifically, over a period of 1000 updates, the expected number of times the tth data is sampled is: 1/t + 1/(t + 1) + · · · + 1/T. Figure 4f shows the expected number of times a data is sampled as a function of its position in the buffer. We see that older data are expected to get sampled much more than newer data. This is undesirable because when the agent is improving and exploring new areas of the state space; new data points may contain more interesting information than the old ones, which have already been updated many times. When we apply the ERE scheme, we effectively skew the curve towards assigning higher expected number of samples for the newer data, allowing the newer data to be frequently sampled soon after being collected, which can accelerate the learning process. In the Appendix, we provide further algorithmic detail and analysis on ERE, and compare ERE to two other sampling schemes: an exponential sampling scheme and Prioritized Experience Replay. Figure 4 compares the performance of SOP, SOP+ERE, SAC and SAC+ERE. With ERE, both SAC and SOP gain a significant performance improvement in all environments. SOP+ERE learns faster than SAC and vanilla SOP in all Mujoco environments. SOP+ERE also greatly improves overall performance for the two most challenging environments, Ant and Humanoid, and has the best performance for Humanoid. In table 1, we show the mean test episode return and std across 10 random seeds at 1M timesteps for all environments. The last column displays the percentage improvement of SOP+ERE over SAC, showing that SOP+ERE achieves state of the art performance. In Ant and Humanoid, SOP+ERE improves performance by 21% and 24% over SAC at 1 million timesteps, respectively. As for the std, SOP+ERE gives lower values, and for Humanoid a higher value. ERE allows new data to be sampled many times soon after being collected. 
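A minimal sketch of the ERE sampling ranges described above; the buffer layout (most recent entries at the largest indices) and the helper names are assumptions of this sketch.

```python
import numpy as np

def ere_range(N: int, eta: float, k: int) -> int:
    """Number of most-recent data points to sample from for the k-th update.

    c_k = N * eta**k, where N is the maximum buffer size; k = 0 corresponds to
    uniform sampling over the whole buffer, and the range shrinks as k grows.
    """
    return int(N * eta ** k)

def sample_batch_indices(buffer_len: int, c_k: int, batch_size: int,
                         rng: np.random.Generator) -> np.ndarray:
    # Sample uniformly from the most recent c_k entries (assumes a FIFO buffer
    # whose newest entries sit at the largest indices).
    c_k = min(c_k, buffer_len)
    offsets = rng.integers(0, c_k, size=batch_size)
    return buffer_len - 1 - offsets

# Example: with N = 1e6 and eta = 0.995, the 1000th update of an update phase
# samples only from roughly the most recent 6,700 transitions.
```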
In recent years, there has been significant progress in improving the sample efficiency of DRL for continuous robotic locomotion tasks with off-policy algorithms (; ; a; b). There is also a significant body of research on maximum entropy RL methods (; ; ; ; ; ; ; ; 2018a; b). Uniform sampling is the most common way to sample from a replay buffer. One of the most wellknown alternatives is prioritized experience replay (PER). PER uses the absolute TD-error of a data point as the measure for priority, and data points with higher priority will have a higher chance of being sampled. This method has been tested on DQN and double DQN (DDQN) with significant improvement and applied successfully in other algorithms (; ; ;) and can be implemented in a distributed manner . There are other methods proposed to make better use of the replay buffer. The ACER algorithm has an on-policy part and an off-policy part, with a hyper-parameter controlling the ratio of off-policy to on-policy updates . The RACER algorithm selectively removes data points from the buffer, based on the degree of "off-policyness", bringing improvement to DDPG , NAF and PPO . , replay buffers of different sizes were tested, showing large buffer with data diversity can lead to better performance. Finally, with Hindsight Experience Replay , priority can be given to trajectories with lower density estimation to tackle multi-goal, sparse reward environments. In this paper we first showed that the primary role of maximum entropy RL for the Mujoco benchmark is to maintain satisfactory exploration in the presence of bounded action spaces. We then developed a new streamlined algorithm which does not employ entropy maximization but nevertheless matches the sampling efficiency and robustness performance of SAC for the Mujoco benchmarks. Our experimental demonstrate a need to revisit the benefits of entropy regularization in DRL. Finally, we combined our streamlined algorithm with a simple non-uniform sampling scheme to achieve state-of-the art performance for the Mujoco benchmark. In this ablation study we separately examine the importance of (i) the normalization at the output of the policy network; (ii) the double Q networks; (iii) and randomization used in the line 8 of the SOP algorithm (that is, target policy smoothing ). Figure 5 shows the for the five environments considered in this paper. In Figure 5, "no normalization" is SOP without the normalization of the outputs of the policy network; "single Q" is SOP with one Q-network instead of two; and "no smoothing" is SOP without the randomness in line 8 of the algorithm. Figure 5 confirms that double Q-networks are critical for obtaining good performance (; ; a). Figure 5 also shows that output normalization is also critical. Without output normalization, performance fluctuates wildly, and average performance can decrease dramatically, particularly for Humanoid and HalfCheetah. Target policy smoothing improves performance by a relatively small amount. Table 2 shows hyperparameters used for SOP, SOP+ERE and SOP+PER. For adaptive SAC, we use our own PyTorch implementation for the comparisons. Our implementation uses the same hyperparameters as used in the original paper (b). Our implementation of SOP variants and adaptive SAC share most of the code base. For TD3, our implementation uses the same hyperparamters as used in the authors' implementation, which is different from the ones in the original paper . They claimed that the new set of hyperparamters can improve performance for TD3. 
We now discuss hyperparameter search for better clarity, fairness and reproducibility (; ;). For the η value in the ERE scheme, in our early experiments we tried the values (0.993, 0.994, 0.995, 0.996, 0.997, 0.998) on the Ant and found 0.995 to work well. This initial range of values was decided by computing the ERE sampling range for the oldest data. We found that for smaller values, the range would simply be too small. For the PER scheme, we did some informal preliminary search, then searched on Ant for β 1 in (0, 0.4, 0.6, 0.8), β 2 in (0, 0.4, 0.5, 0.6, 1), and learning rate in (1e-4, 2e-4, 3e-4, 5e-4, 8e-4, 1e-3), we decided to search these values because the original paper used β 1 = 0.6, β 2 = 0.4 and with reduced learning rate. For the exponential sampling scheme, we searched the λ value in (3e-7, 1e-6, 3e-6, 5e-6, 1e-5, 3e-5, 5e-5, 1e-4) in Ant, this search range was decided by plotting out the probabilities of sampling, and then pick a set of values that are not too extreme. For σ in SOP, in some of our early experiments with SAC, we accidentally found that σ = 0.3 gives good performance for SAC without entropy and with Gaussian noise. We searched values (for HalfCheetah-v2) SOP gaussian noise std σ = σ 1 = σ 2 0.29 TD3 gaussian noise std for data collection σ 0.1 * action limit guassian noise std for target policy smoothingσ 0.2 TD3+ gaussian noise std for data collection σ 0.15 guassian noise std for target policy smoothingσ 0.2 ERE ERE initial η 0 0.995 Algorithm 2 SOP with Emphasizing Recent Experience 1: Input: initial policy parameters θ, Q-function parameters φ 1, φ 2, empty replay buffer D of size N, initial η 0, recent and max performance improvement I recent = I max = 0. 2: Set target parameters equal to main parameters φ targ,i ← φ i for i = 1, 2 3: repeat 4: Generate an episode using actions a = M tanh(µ θ (s) + ) where ∼ N (0, σ 1). update I recent, I max with training episode returns, let K = length of episode 6: for j in range(K) do 8: Sample a batch of transitions, B = {(s, a, r, s)} from most recent c k data in D 10: Compute targets for Q functions: Update Q-functions by one step of gradient descent using Update policy by one step of gradient ascent using Update target networks with In this section we discuss the details of the Inverting Gradient method. discussed three different methods for bounded parameter space learning: Zeroing Gradients, Squashing Gradients and Inverting Gradients, they analyzed and tested the three methods and found that Inverting Gradients method can achieve much stronger performance than the other two. In our implementation, we remove the tanh function from SOP and use Inverting Gradients instead to bound the actions. Let p indicate the output of the last layer of the policy network. During exploration p will be the mean of a normal distribution that we sample actions from, the IG approach can be summarized by the following equation : Where ∇ p is the gradient of the policy loss w.r.t to p. During a policy network update, we first backpropagate the gradients from the outputs of the Q network to the output of the policy network for each data point in the batch, we then compute the ratio for each p value (each action dimension), depending on the sign of the gradient. We then backpropagate from the output of the policy network to parameters of the policy network, and we modify the gradients in the policy network according to the ratios we computed. 
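A minimal sketch of the Inverting Gradients rule as described above (following Hausknecht & Stone); the sign convention, with a positive gradient taken to suggest increasing p, and the vectorized form are assumptions of this sketch.

```python
import numpy as np

def invert_gradients(grad: np.ndarray, p: np.ndarray,
                     p_min: float = -1.0, p_max: float = 1.0) -> np.ndarray:
    """Inverting Gradients (sketch).

    grad: gradient w.r.t. the pre-squash policy outputs p (same shape as p).
    Gradients that push p further towards a bound are downscaled by the
    remaining headroom; if p is already outside the bound, the corresponding
    factor becomes negative, so the gradient is inverted entirely.
    """
    width = p_max - p_min
    scale_up = (p_max - p) / width     # headroom towards the upper bound
    scale_down = (p - p_min) / width   # headroom towards the lower bound
    return np.where(grad > 0, grad * scale_up, grad * scale_down)
```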
We made an efficient implementation and further discuss the computation efficiency of IG in the implementation details section. We also investigate the effect of other interesting sampling schemes. We also implement the proportional variant of Prioritized Experience Replay with SOP. Since SOP has two Q-networks, we redefine the absolute TD error |δ| of a transition (s, a, r, s) to be the average absolute TD error in the Q network update: Within the sum, the first term y q (r, is simply the target for the Q network, and the term Q θ,l (s, a) is the current estimate of the l th Q network. For the i th data point, the definition of the priority value p i is p i = |δ i | +. The probability of sampling a data point P (i) is computed as: where β 1 is a hyperparameter that controls how much the priority value affects the sampling probability, which is denoted by α in, but to avoid confusion with the α in SAC, we denote it as β 1. The importance sampling (IS) weight w i for a data point is computed as: where β 2 is denoted as β in. Based on the SOP algorithm, we change the sampling method from uniform sampling to sampling using the probabilities P (i), and for the Q updates we apply the IS weight w i. This gives SOP with Prioritized Experience Replay (SOP+PER). We note that as compared with SOP+PER, ERE does not require a special data structure and has negligible extra cost, while PER uses a sum-tree structure with some additional computational cost. We also tried several variants of SOP+PER, but preliminary show that it is unclear whether there is improvement in performance, so we kept the algorithm simple. The ERE scheme is similar to an exponential sampling scheme where we assign the probability of sampling according to the probability density function of an exponential distribution. Essentially, in such a sampling scheme, the more recent data points get exponentially more probability of being sampled compared to older data. For the i th most recent data point, the probability of sampling a data point P (i) is computed as: We apply this sampling scheme to SOP and refer to this variant as SOP+EXP. Figure 6 shows a performance comparison of SOP, SOP+ERE, SOP+EXP and SOP+PER. Results show that the exponential sampling scheme gives a boost to the performance of SOP, and especially in the Humanoid environment, although not as good as ERE. Surprisingly, SOP+PER does not give a significant performance boost to SOP (if any boost at all). We also found that it is difficult to find hyperparameter settings for SOP+PER that work well for all environments. Some of the other hyperparameter settings actually reduce performance. It is unclear why PER does not work so well for SOP. A similar has been found in another recent paper , showing that PER can significantly reduce performance on TD3. Further research is needed to understand how PER can be successfully adapted to environments with continuous action spaces and dense reward structure. Figure 7 shows, for fixed η, how η affects the data sampling process, under the ERE sampling scheme. Recent data points have a much higher probability of being sampled compared to older data, and a smaller η value gives more emphasis to recent data. Different η values are desirable depending on how fast the agent is learning and how fast the past experiences become obsolete. So to make ERE work well in different environments with different reward scales and learning progress, we adapt η to the the speed of learning. 
To this end, define performance to be the training episode return. Define I recent to be how much performance improved from N/2 timesteps ago, and I max to be the maximum improvement throughout training, where N is the buffer size. Let the hyperparameter η 0 be the initial η value. We then adapt η according to the formula: η = η 0 · I recent /I max + 1 − (I recent /I max). Under such an adaptive scheme, when the agent learns quickly, the η value is low in order to learn quickly from new data. When progress is slow, η is higher to make use of the stabilizing effect of uniform sampling from the whole buffer. Figure 7b plots the expected number of times a data point in the buffer is sampled, with the data points ordered from most to least recent. In this section we discuss some programming details. These details are not necessary for understanding the algorithm, but they might help with reproducibility. In the ERE scheme, the sampling range always starts with the entire buffer (1M data) and then gradually shrinks. This is true even when the buffer is not full. So even if there are not many data points in the buffer, we compute c k based as if there are 1M data points in the buffer. One can also modify the design slightly to obtain a variant that uses the current amount of data points to compute c k. In addition to the reported scheme, we also tried shrinking the sampling range linearly, but it gives less performance gain. In our implementation we set the number of updates after an episode to be the same as the number of timesteps in that episode. Since environments do not always end at 1000 timesteps, we can give a more general formula for c k. Let K be the number of mini-batch updates, let N be the max size of the replay buffer, then: With this formulation, the range of sampling shrinks in more or less the same way with varying number of mini-batch updates. We always do uniform sampling in the first update, and we always have η When η is small, c k can also become small for some of the mini-batches. To prevent getting a minibatch with too many repeating data points, we set the minimum value for c k to 5000. We did not find this value to be too important and did not find the need to tune it. It also does not have any effect for any η ≥ 0.995 since the sampling range cannot be lower than 6000. In the adaptive scheme with buffer of size 1M, the recent performance improvement is computed as the difference of the current episode return compared to the episode return 500,000 timesteps earlier. Before we reach 500,000 timesteps, we simply use η 0. The exact way of computing performance improvement does not have a significant effect on performance as long as it is reasonable. In this section we give analysis on the additional programming and computation complexity brought by ERE and PER. In terms of programming complexity, ERE is a clear winner since it only requires a small adjustment to how we sample mini-batches. It does not modify how the buffer stores the data, and does not require a special data structure to make it work efficiently. Thus the implementation difficulty is minimal. PER (proportional variant) requires a sum-tree data structure to make it run efficiently. The implementation is not too complicated, but compared to ERE it is a lot more work. The exponential sampling scheme is very easy to implement, although a naive implementation will incur a significant computation overhead when sampling from a large buffer. 
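A minimal sketch of the adaptive η rule and the generalized sampling range described above; the variable names and the early-training fallback are assumptions of this sketch.

```python
def adaptive_eta(eta0: float, improvement_recent: float, improvement_max: float) -> float:
    """eta = eta0 * (I_recent / I_max) + 1 - (I_recent / I_max) (sketch).

    Fast recent progress (ratio near 1) pulls eta towards eta0, emphasizing new
    data; slow progress (ratio near 0) pulls eta towards 1, i.e. uniform sampling.
    """
    if improvement_max <= 0:
        return eta0                                  # fallback before I_max is meaningful
    ratio = min(max(improvement_recent / improvement_max, 0.0), 1.0)
    return eta0 * ratio + (1.0 - ratio)

def c_k_general(N: int, eta: float, k: int, K: int, c_min: int = 5000) -> int:
    """Generalized ERE range when an episode triggers K mini-batch updates (sketch).

    c_k = N * eta**(k * 1000 / K), floored at c_min so that a mini-batch does
    not contain too many repeated data points.
    """
    return max(int(N * eta ** (k * 1000.0 / K)), c_min)
```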
To improve its computation efficiency, we instead uses an approximate sampling method. We first sample data indexes from segments of size 100 from the replay buffer, and then for each segment sampled, we sample one data point uniformly from that segment. In terms of computation complexity (not sample efficiency), and wall-clock time, ERE's extra computation is negligible. In practice we observe no difference in computation time between SOP and SOP+ERE. PER needs to update the priority of its data points constantly and compute sampling probabilities for all the data points. The complexity for sampling and updates is O(log(N)), and the rank-based variant is similar. Although this is not too bad, it does impose a significant overhead on SOP: SOP+PER runs twice as long as SOP. Also note that this overhead grows linearly with the size of the mini-batch. The overhead for the Mujoco environments is higher compared to Atari, possibly because the Mujoco environments have a smaller state space dimension while a larger batch size is used, making PER take up a larger portion of computation cost. For the exponential sampling scheme, the extra computation is also close to negligible when using the approximate sampling method. In terms of the proposed normalization scheme and the Inverting Gradients (IG) method, the normalization is very simple and can be easily implemented and has negligible computation overhead. IG has a simple idea, but its implementation is slightly more complicated than the normalization scheme. When implemented naively, IG can have a large computation overhead, but it can be largely avoided by making sure the gradient computation is still done in a batch-manner. We have made a very efficient implementation and our code is publicly available so that interested reader can easily reproduce it. In Figure 8 we show additional on applying ERE to SOP+IG. The shows that after applying the ERE scheme, SOP and IG both get a performance boost. The performance of the SOP+ERE and IG+ERE are similar. In figure 9, we show additional comparing TD3 with TD3 plus our normalization scheme, which we refer as TD3+. The show that after applying our normalization scheme, TD3+ has a significant performance boost in Humanoid, while in other environments, both algorithms achieve similar performance. To understand why entropy maximization is important for one environment but less so for another, we examine the actions selected when training SAC with and without entropy. Humanoid and Walker2d have action dimensions K = 17 and K = 6, respectively. In addition to the representative shown for one dimension for both environments in Section 3.2, the for all the dimensions are provided here in Figures 10 and 11. From Figure 10, we see that for Humanoid using SAC (which uses entropy maximization), the |µ k | values are small and fluctuate significantly for all 17 dimensions. On the other hand, for SAC without entropy the |µ k | values are typically huge, again for all 17 dimensions. This causes the actions a k to be persistently clustered at either M or -M. As for Walker, the |µ k | values are sensible for both algorithms for all 6 dimensions, as shown in figure 11.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJl47yBYPS
We propose a new DRL off-policy algorithm achieving state-of-the-art performance.
Very recently, it comes to be a popular approach for answering open-domain questions by first searching question-related passages, then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, thus not fully make use of the multiple searched passages, especially for the some questions requiring several evidences, which can appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we deal with this problem as answer re-ranking. Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidences from different passages to help entail the ground-truth answer for the question. Our model achieved state-of-the-arts on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8\% improvement on the former two datasets. Open-domain question answering (QA) aims to answer questions from a broad range of domains by effectively marshalling evidence from large open-domain knowledge sources. Such resources can be Wikipedia, the whole web BID12, structured knowledge bases BID2 or combinations of the above (Baudiš &Šedivỳ, 2015).Recent work on open-domain QA has focused on using unstructured text retrieved from the web to build machine comprehension models BID9. These studies adopt a two-step process: an information retrieval (IR) model to coarsely select passages relevant to a question, followed by a reading comprehension (RC) model BID26 to infer an answer from the passages. These studies have made progress in bringing together evidence from large data sources, but they predict an answer to the question with only a single retrieved passage at a time. However, answer accuracy can often be improved by using multiple passages. In some cases, the answer can only be determined by combining multiple passages. In this paper, we propose a method to improve open-domain QA by explicitly aggregating evidence from across multiple passages. Our method is inspired by two notable observations from previous open-domain QA analysis:• First, compared with incorrect answers, the correct answer is often suggested by more passages repeatedly. For example, in FIG0 (a), the correct answer "danny boy" has more passages providing evidence relevant to the question compared to the incorrect one. This observation can be seen as multiple passages collaboratively enhancing the evidence for the correct answer.• Second, sometimes the question covers multiple answer aspects, which spreads over multiple passages. In order to infer the correct answer, one has to find ways to aggregate those multiple passages in an effective yet sensible way to try to cover all aspects. In FIG0 the correct answer "Galileo Galilei" at the bottom has passages P1, "Galileo was a physicist..." and P2, "Galileo discovered the first 4 moons of Jupiter", mentioning two pieces of evidence to match the question. In this case, the aggregation of these two pieces of evidence can help entail the ground-truth answer "Galileo Galilei". In comparison, the incorrect answer "Isaac Newton" has passages providing partial evidence on only "physicist, mathematician and astronomer". 
This observation illustrates the way in which multiple passages may provide complementary evidence to better infer the correct answer to a question. To provide more accurate answers for open-domain QA, we hope to make better use of multiple passages for the same question by aggregating both the strengthened and the complementary evidence from all the passages. We formulate the above evidence aggregation as an answer re-ranking problem. Re-ranking has been commonly used in NLP problems, such as in parsing and translation, in order to make use of high-order or global features that are too expensive for decoding algorithms BID6 BID27 BID16 BID11. Here we apply the idea of re-ranking; for each answer candidate, we efficiently incorporate global information from multiple pieces of textual evidence without significantly increasing the complexity of the prediction of the RC model. Specifically, we first collect the top-K candidate answers based on their probabilities computed by a standard RC/QA system, and then we use two proposed re-rankers to re-score the answer candidates by aggregating each candidate's evidence in different ways. The re-rankers are:• A strength-based re-ranker, which ranks the answer candidates according to how often their evidence occurs in different passages. The re-ranker is based on the first observation if an answer candidate has multiple pieces of evidence, and each passage containing some evidence tends to predict the answer with a relatively high score (although it may not be the top score), then the candidate is more likely to be correct. The passage count of each candidate, and the aggregated probabilities for the candidate, reflect how strong its evidence is, and thus in turn suggest how likely the candidate is the corrected answer.• A coverage-based re-ranker, which aims to rank an answer candidate higher if the union of all its contexts in different passages could cover more aspects included in the question. To achieve this, for each answer we concatenate all the passages that contain the answer together. The is a new context that aggregates all the evidence necessary to entail the answer for the question. We then treat the new context as one sequence to represent the answer, and build an attention-based match-LSTM model between the sequence and the question to measure how well the new aggregated context could entail the question. Overall, our contributions are as follows: 1) We propose a re-ranking-based framework to make use of the evidence from multiple passages in open-domain QA, and two re-rankers, namely, a strengthbased re-ranker and a coverage-based re-ranker, to perform evidence aggregation in existing opendomain QA datasets. We find the second re-ranker performs better than the first one on two of the three public datasets. 2) Our proposed approach leads to the state-of-the-art on three different datasets (Quasar-T BID9, SearchQA BID10 and TriviaQA BID17) and outperforms previous state of the art by large margins. In particular, we achieved up to 8% improvement on F1 on both Quasar-T and SearchQA compared to the previous best . Given a question q, we are trying to find the correct answer a g to q using information retrieved from the web. Our method proceeds in two phases. First, we run an IR model (with the help of a search engine such as google or bing) to find the top-N web passages p 1, p 2,..., p N most related to the question. Then a reading comprehension (RC) model is used to extract the answer from these passages. 
This setting is different from standard reading comprehension tasks (e.g. BID24), where a single fixed passage is given, from which the answer is to be extracted. When developing a reading comprehension system, we can use the specific positions of the answer sequence in the given passage for training. By contrast, in the open-domain setting, the RC models are usually trained under distant supervision BID9 BID17. Specifically, since the training data does not have labels indicating the positions of the answer spans in the passages, during the training stage, the RC model will match all passages that contain the ground-truth answer with the question one by one. In this paper we apply an existing RC model called R 3 to extract these candidate answers. After the candidate answers are extracted, we aggregate evidence from multiple passages by reranking the answer candidates. Given a question q, suppose we have a baseline open-domain QA system that can generate the top-K answer candidates a 1,..., a K, each being a text span in some passage p i. The goal of the re-ranker is to rank this list of candidates so that the top-ranked candidates are more likely to be the correct answer a g. With access to these additional features, the re-ranking step has the potential to prioritize answers not easily discoverable by the base system alone. We investigate two re-ranking strategies based on evidence strength and evidence coverage. An overview of our method is shown in FIG1. In open-domain QA, unlike the standard RC setting, we have more passages retrieved by the IR model and the ground-truth answer may appear in different passages, which means different answer spans may correspond to the same answer. To exploit this property, we provide two features to further re-rank the top-K answers generated by the RC model. This method is based on the hypothesis that the more passages that entail a particular answer, the stronger the evidence for that answer and the higher it should be ranked. To implement this we count the number of occurrences of each answer in the top-K answer spans generated by the baseline QA model and return the answer with the highest count. Measuring Strength by Probability Since we can get the probability of each answer span in a passages based on the RC model, we can also sum up the probabilities of the answer spans that are referring to the same answer. In this method, the answer with the highest probability is the final prediction 1. In the re-ranking scenario, it is not necessary to exhaustively consider all the probabilities of all the spans in the passages, as there may be a large number of different answer spans and most of them are irrelevant to the ground-truth answer. Remark: Note that neither of the above methods require any training. Both just take the candidate predictions from the baseline QA system and perform counting or probability calculations. At test time, the time complexity of strength-based re-ranking is negligible. Consider FIG0 where the two answer candidates both have evidence matching the first half of the question. Note that only the correct answer has evidence that could also match the second half. In this case, the strength-based re-ranker will treat both answer candidates the same due to the equal amount of supporting evidence, while the second answer has complementary evidence satisfying all aspects of the question. 
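For reference, a minimal sketch of the two strength-based scores just described, before turning to the coverage-based alternative; the string normalization used to group spans into answers is an assumption of this sketch.

```python
from collections import defaultdict

def strength_rerank(candidate_spans, use_probability=False):
    """Strength-based re-ranking (sketch).

    candidate_spans: list of (answer_text, probability) pairs, i.e. the top-K
    spans predicted by the base RC model across all retrieved passages.
    Spans referring to the same answer are grouped; the group score is either
    its occurrence count or the sum of its span probabilities.
    """
    scores = defaultdict(float)
    for answer, prob in candidate_spans:
        key = answer.strip().lower()          # simple grouping heuristic (assumption)
        scores[key] += prob if use_probability else 1.0
    return max(scores.items(), key=lambda kv: kv[1])[0]
```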
To handle this case, we propose a coverage-based re-ranker that ranks the answer candidates according to how well the union of their evidence from different passages covers the question. In order to take the union of evidence into consideration, we first concatenate the passages containing the answer into a single "pseudo passage" then measure how well this passage entails the answer for the question. As in the examples shown in FIG0 (b), we hope the textual entailment model will reflect (i) how each aspect of the question is matched by the union of multiple passages; and (ii) whether all the aspects of the question can be matched by the union of multiple passages. In our implementation an "aspect" of the question is a hidden state of a bi-directional LSTM BID15. The match-LSTM BID31 model is one way to achieve the above effect in entailment. Therefore we build our coverage-based re-ranker on top of the concatenated pseudo passages using the match-LSTM. The detailed method is described below. Passage Aggregation We consider the top-K answers, a 1,..., a K, provided by the baseline QA system. For each answer a k, k ∈ [1, K], we concatenate all the passages that contain a k, {p n |a k ∈ p n, n ∈ [1, N]}, to form the union passagep k. Our further model is to identify which union passage, e.g.,p k, could better entail its answer, e.g., a k, for the question. Measuring Aspect(Word)-Level Matching As discussed earlier, the first mission of the coverage-based re-ranker is to measure how each aspect of the question is matched by the union of multiple passages. We achieve this with word-by-word attention followed by a comparison module. First, we write the answer candidate a, question q and the union passagep of a as matrices A, Q,P, with each column being the embedding of a word in the sequence. We then feed them to the bidirectional LSTM as follows: DISPLAYFORM0 where H a ∈ R l×A, H q ∈ R l×Q and H p ∈ R l×P are the hidden states for the answer candidate, question and passage respectively; l is the dimension of the hidden states, and A, Q and P are the length of the three sequences, respectively. Next, we enhance the question representation H q with H a: DISPLAYFORM1 where [·; ·] is the concatenation of two matrices in row and H aq ∈ R l×(A+Q). As most of the answer candidates do not appear in the question, this is for better matching with the passage and finding more answer-related information from the passage.2 Now we can view each aspect of the question as a column vector (i.e. a hidden state at each word position in the answer-question concatenation) in the enhanced question representation H aq. Then the task becomes to measure how well each column vector can be matched by the union passage; and we achieve this by computing the attention vector Parikh et al. FORMULA0 for each hidden state of sequences a and q as follows: DISPLAYFORM2 where α ∈ R P ×(A+Q) is the attention weight matrix which is normalized in column through softmax. H aq ∈ R l×(A+Q) are the attention vectors for each word of the answer and the question by weighted summing all the hidden states of the passagep. Now in order to see whether the aspects in the question can be matched by the union passage, we use the following matching function: DISPLAYFORM3 where · ⊗ e (A+Q) is to repeat the vector (or scalar) on the left A + Q times; (· ·) and (· − ·) are the element-wise operations for checking whether the word in the answer and question can be matched by the evidence in the passage. 
We also concatenate these matching representations with the hidden state representations H aq and H aq, so that the lexical matching representations are also integrated into the the final aspect-level matching representations 3 M ∈ R 2l×(A+Q), which is computed through the non-linear transformation on four different representations with parameters DISPLAYFORM4 Measuring the Entire Question Matching Next, in order to measure how the entire question is matched by the union passagep by taking into consideration of the matching at each aspect, we add another bi-directional LSTM on top of it to aggregate the aspect-level matching information 4: DISPLAYFORM5 where H m ∈ R l×(A+Q) is to denote all the hidden states and h s ∈ R l, the max of pooling of each dimension of H m, is the entire matching representation which reflects how well the evidences in questions could be matched by the union passage.2 Besides concatenating H q with H a, there are other ways to make the matching process be aware of an answer's positions in the passage, e.g. replacing the answer spans in the passage to a special token like in. We tried this approach, which gives similar but no better , so we keep the concatenation in this paper. We leave the study of the better usage of answer position information for future work. 4 Note that we use LSTM here to capture the conjunction information (the dependency) among aspects, i.e. how all the aspects are jointly matched. In comparison simple pooling methods will treat the aspects independently. Low-rank tensor inspired neural architectures (e.g., BID21) could be another choice and we will investigate them in future work. Re-ranking Objective Function Our re-ranking is based on the entire matching representation. For each candidate answer a k, k ∈ [1, K], we can get a matching representation h s k between the answer a k, question q and the union passagep k through Eqn.. Then we transform all representations into scalar values followed by a normalization process for ranking: DISPLAYFORM6 where we concatenate the match representations for each answer in row through [·; ·], and do a non-linear transformation by parameters W r ∈ R l×l and b r ∈ R l to get hidden representation R ∈ R l×K. Finally, we map the transformed matching representations into scalar values through parameters w o ∈ R l and w o ∈ R. o ∈ R K is the normalized probability for the candidate answers to be ground-truth. Due to the aliases of the ground-truth answer, there may be multiple answers in the candidates are ground-truth, we use KL distance as our objective function: DISPLAYFORM7 where y k indicates whether a k the ground-truth answer or not and is normalized by K k=1 y k and o k is the ranking output of our model for a k. Although the coverage-based re-ranker tries to deal with more difficult cases compared to the strength-based re-ranker, the strength-based re-ranker works on more common cases according to the distributions of most open-domain QA datasets. We can try to get the best of both worlds by combining the two approaches. The full re-ranker is a weighted combination of the outputs of the above different re-rankers without further training. Specifically, we first use softmax to re-normalize the top-5 answer scores provided by the two strength-based rankers and the one coverage-based reranker; we then weighted sum up the scores for the same answer and select the answer with the largest score as the final prediction. 
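A minimal sketch of the training-free combination just described; the specific combination weights are not given in the text, so they appear here as a free parameter.

```python
import numpy as np

def softmax(xs):
    xs = np.asarray(xs, dtype=float)
    e = np.exp(xs - xs.max())
    return e / e.sum()

def full_rerank(count_scores, prob_scores, coverage_scores, weights=(1.0, 1.0, 1.0)):
    """Combine the two strength-based re-rankers and the coverage-based re-ranker (sketch).

    Each *_scores argument maps the same top-5 candidate answers to that
    re-ranker's score. Scores are re-normalized with softmax per re-ranker,
    weighted, summed per answer, and the highest-scoring answer is returned.
    """
    answers = sorted(count_scores)
    per_ranker = [softmax([s[a] for a in answers])
                  for s in (count_scores, prob_scores, coverage_scores)]
    combined = sum(w * p for w, p in zip(weights, per_ranker))
    return answers[int(np.argmax(combined))]
```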
We conduct experiments on three publicly available open-domain QA datasets, namely, Quasar-T BID9, SearchQA BID10 and TriviaQA BID17. These datasets contain passages retrieved for all questions using a search engine such as Google or Bing. We do not retrieve more passages but use the provided passages only. The statistics of the three datasets are shown in Table 1.

Quasar-T BID9 is based on a trivia question set. The dataset makes use of the "Lucene index" on the ClueWeb09 corpus. For each question, 100 unique sentence-level passages were collected. The human performance is evaluated in an open-book setting, i.e., the human subjects had access to the same passages retrieved by the IR model and tried to find the answers from the passages. SearchQA BID10 is based on Jeopardy! questions and uses Google to collect about 50 web page snippets as passages for each question. The human performance is evaluated in a similar way to the Quasar-T dataset. TriviaQA BID17 is the third dataset we use.

Our baseline models⁹ include the following: GA BID8, a reading comprehension model with gated attention; BiDAF BID26, an RC model with bidirectional attention flow; AQA BID4, a reinforced system learning to aggregate the answers generated by re-written questions; and R³, a reinforced model making use of a ranker for selecting passages to train the RC model. As R³ is the first step of our system for generating candidate answers, the improvement of our re-ranking methods can be directly compared to this baseline. TriviaQA does not provide a leaderboard under the open-domain setting.⁸ As a result, there are no public baselines in this setting and we only compare with the R³ baseline.¹⁰

We first use a pre-trained R³ model, which achieves state-of-the-art performance on the three public datasets we consider, to generate the top 50 candidate spans for the training, development and test datasets, and we use them for further ranking. During training, if the ground-truth answer does not appear in the answer candidates, we manually add it to the answer candidate list. For the coverage-based re-ranker, we use Adam BID19 to optimize the model. Word embeddings are initialized by GloVe BID23 and are not updated during training. We set all words beyond GloVe to zero vectors. We set l to 300, batch size to 30, and learning rate to 0.002. We tune the dropout probability from 0 to 0.5 and the number of candidate answers for re-ranking (K).¹¹ In this section, we present results and analysis of our different re-ranking methods on the three public datasets.

8 Despite the open-domain QA data provided, the leaderboard of TriviaQA focuses on evaluation of RC models over filtered passages that are guaranteed to contain the correct answers (i.e. closer to a closed-domain setting). The evaluation is also passage-wise, different from the open-domain QA setting.
9 Most of the results of the different models come from the published papers, while we re-run model R³ based on the authors' source code and extend the model to the SearchQA and TriviaQA datasets.
10 To demonstrate that R³ serves as a strong baseline on the TriviaQA data, we generate the R³ results following the leaderboard setting. The results showed that R³ achieved F1 56.0, EM 50.9 on the Wiki domain and F1 68.5, EM 63.0 on the Web domain, which is competitive with the state-of-the-art. This confirms that R³ is a competitive baseline when extending the TriviaQA questions to the open-domain setting.
11 Our code will be released under https://github.com/shuohangwang/mprc.
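As a minimal illustration of the candidate preparation described above, the sketch below keeps the top-K spans from the baseline reader and ensures a ground-truth alias is present during training. The function name and the exact insertion strategy (here, replacing the lowest-ranked span) are ours, not taken from the released code.

```python
def prepare_training_candidates(top_spans, gold_aliases, k=50):
    """Top-K spans from the baseline reader, with a gold answer guaranteed during training."""
    candidates = list(top_spans[:k])
    if not any(span in gold_aliases for span in candidates):
        candidates[-1] = gold_aliases[0]   # the original implementation may append instead
    return candidates
```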
Among the baselines, GA BID8 achieves 26.4 EM / 26.4 F1 on Quasar-T, BiDAF BID26 achieves 25.9 EM / 28.5 F1 on Quasar-T and 28.6 EM / 34.6 F1 on SearchQA, and AQA BID4 is included for comparison. The performance of our models is shown in TAB3. We use F1 score and Exact Match (EM) as our evaluation metrics. From the results, we can clearly see that the full re-ranker, the combination of different re-rankers, significantly outperforms the previous best performance by a large margin, especially on Quasar-T and SearchQA. Moreover, our model is much better than the human performance on the SearchQA dataset. In addition, we see that our coverage-based re-ranker achieves consistently good performance on the three datasets, even though its performance is marginally lower than the strength-based re-ranker on the SearchQA dataset.

In this subsection, we analyze the benefits of our re-ranking models.

BM25 as an alternative coverage-based re-ranker. We use the classical BM25 retrieval model BID25 to re-rank the aggregated passages in the same way as the coverage-based re-ranker, where the IDF values are first computed from the raw passages before aggregation. From the results in TAB3, we see that the BM25-based re-ranker improves the F1 scores compared with the R³ model, but it is still lower than our coverage-based re-ranker with neural network models. Moreover, with respect to EM scores, the BM25-based re-ranker sometimes gives lower performance. We hypothesize that there are two reasons behind the relatively poor performance of BM25. First, because BM25 relies on a bag-of-words representation, context information is not taken into consideration and it cannot model phrase similarities. Second, shorter answers tend to be preferred by BM25. For example, in our method of constructing pseudo-passages, when an answer sequence A is a subsequence of another answer sequence B, the pseudo passage of A is always a superset of the pseudo passage of B and could better cover the question. Therefore the F1 score could be improved but the EM score sometimes becomes worse.

Re-ranking performance versus answer lengths and question types. FIG3 decomposes the performance according to the length of the ground-truth answers and the types of questions on TriviaQA and Quasar-T. We do not include the analysis on SearchQA because, for the Jeopardy! style questions, it is more difficult to distinguish the question types, and the range of answer lengths is narrower. Our results show that the coverage-based re-ranker outperforms the baseline for different lengths of answers and different types of questions. The strength-based re-ranker (counting) also gives improvement but is less stable across different datasets, while the strength-based re-ranker (probability) tends to have results and trends that are close to the baseline curves, which is probably because the method is dominated by the probabilities predicted by the baseline. The coverage-based re-ranker and the strength-based re-ranker (counting) have similar trends on most of the question types. The only exception is that the strength-based re-ranker performs significantly worse than the coverage-based re-ranker on the "why" questions. This is possibly because those questions usually have non-factoid answers, which are less likely to have exactly the same text spans predicted on different passages by the baseline.

Table 3: The upper bound (recall) of the top-K answer candidates generated by the baseline R³ system (on dev set), which indicates the potential of the coverage-based re-ranker.
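For reference, the BM25 baseline discussed above can be sketched from scratch as follows. The tokenisation is assumed to be done beforehand, and the parameter values k1 and b are standard BM25 defaults rather than values reported in the paper.

```python
import math
from collections import Counter

def bm25_rerank(question_tokens, pseudo_passages, raw_passages, k1=1.2, b=0.75):
    """Score each candidate's aggregated pseudo-passage against the question with BM25.

    IDF statistics are computed on the raw (un-aggregated) passages, as described above.
    pseudo_passages: one token list per answer candidate (its aggregated evidence)
    raw_passages:    the original retrieved passages, used only for document frequencies
    """
    n_docs = len(raw_passages)
    df = Counter()
    for passage in raw_passages:
        df.update(set(passage))
    avgdl = sum(len(p) for p in pseudo_passages) / max(len(pseudo_passages), 1)

    def idf(term):
        return math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))

    scores = []
    for passage in pseudo_passages:
        tf = Counter(passage)
        score = 0.0
        for term in question_tokens:
            if term not in tf:
                continue
            numerator = tf[term] * (k1 + 1)
            denominator = tf[term] + k1 * (1 - b + b * len(passage) / avgdl)
            score += idf(term) * numerator / denominator
        scores.append(score)
    return scores  # rank candidates by descending score
```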
Potential improvement of re-rankers. Table 3 shows the percentage of times the correct answer is included in the top-K answer predictions of the baseline R³ method. More concretely, the scores are computed by selecting the answer from the top-K predictions with the best EM/F1 score. Therefore the final top-K EM and F1 can be viewed as the recall or an upper bound of the top-K predictions. From the results, we can see that although the top-1 prediction of R³ is not very accurate, there is a high probability that a top-K list with small K covers the correct answer. This explains why our re-ranking approach achieves a large improvement. Also, by comparing the upper bound performance of top-5 and our re-ranking performance in TAB3, we can see there is still a clear gap of about 10% on both datasets and on both F1 and EM, showing the great potential for improving the re-ranking model in future work.

Effect of the selection of K for the coverage-based re-ranker. As shown in Table 3, as K ranges from 1 to 10, the recall of top-K predictions from the baseline R³ system increases significantly. Ideally, if we use a larger K, then the candidate lists will be more likely to contain good answers. At the same time, the lists to be ranked are longer and thus the re-ranking problem is harder. Therefore, there is a trade-off between the coverage of the rank lists and the difficulty of re-ranking, and selecting an appropriate K becomes important. Table 4 shows the effects of K on the performance of the coverage-based re-ranker. We train and test the coverage-based re-ranker on the top-K predictions from the baseline, where K ∈ {3, 5, 10}. The upper bounds are the same ones from Table 3. The results show that when K is small, like K=3, the performance is not very good due to the low coverage (thus low upper bound) of the candidate list. With the increase of K, the performance becomes better, but top-5 and top-10 are on par with each other. This is because the higher upper bound of top-10 counteracts the harder problem of re-ranking longer lists. Since there is no significant advantage to using K=10 while the computation cost is higher, we report all test results with K=5.

Effect of the selection of K for the strength-based re-ranker. Similar to Table 4, we conduct experiments to show the effects of K on the performance of the strength-based re-ranker. We run the strength-based re-ranker (counting) on the top-K predictions from the baseline, where K ∈ {10, 50, 100, 200}. We also evaluate the upper bound for these Ks. Note that the strength-based re-ranker is very fast and different values of K do not affect the computation speed significantly compared to the other QA components. The results are shown in Table 5, where we achieve the best results when K=50. The performance drops significantly when K increases to 200. This is because the ratio of incorrect answers increases notably, making incorrect answers also likely to have high counts. When K is smaller, such incorrect answers appear less often because statistically they have lower prediction scores. We report all test results with K=50.

Table 5: Results of running the strength-based re-ranker (counting) on different numbers of top-K answer candidates on Quasar-T (dev set).

Examples. Table 6 shows an example from Quasar-T where the re-ranker successfully corrected the wrong answer predicted by the baseline. This is a case where the coverage-based re-ranker helped: the correct answer "Sesame Street" has evidence from different passages that covers the aspects "Emmy Award" and "children's television shows".
Although it still does not fully match all the facts in the question, it helps to rank the correct answer higher than the top-1 prediction "Great Dane" from the R³ baseline, which only has evidence covering "TV" and "1969" in the question.

Table 6: An example from the Quasar-T dataset. The ground-truth answer is "Sesame Street". Q: question, A: answer, P: passages containing the corresponding answer.

Open Domain Question Answering. The task of open-domain question answering dates back to as early as BID13 and was popularized by TREC-8 BID30. The task is to produce the answer to a question by exploiting resources such as documents BID30, webpages BID20 or structured knowledge bases BID2 BID3. Recent efforts BID10 BID9 benefit from the advances of machine reading comprehension (RC) and follow the search-and-read QA direction. These deep learning based methods usually rely on a document retrieval module to retrieve a list of passages for RC models to extract answers. As there is no passage-level annotation about which passages entail the answer, the model has to find proper ways to handle the noise introduced in the IR step. One line of work uses a bi-gram passage index to improve the retrieval step; BID10 and BID9 propose to reduce the length of the retrieved passages. Other work focuses more on noise reduction in the passage ranking step, in which a ranker module is jointly trained with the RC model with reinforcement learning. To the best of our knowledge, our work is the first to improve neural open-domain QA systems by using multiple passages for evidence aggregation. Moreover, we focus on the novel problem of "text evidence aggregation", where the problem is essentially modeling the relationship between the question and multiple passages (i.e. text evidence). In contrast, previous answer re-ranking research did not address the above problem: traditional QA systems like BID12 have a similar passage retrieval process with answer candidates added to the queries. The retrieved passages were used for extracting answer-scoring features, but the features were all extracted from single passages and thus did not utilize information about the union/co-occurrence of multiple passages. KB-QA systems BID0 BID34 sometimes use text evidence to enhance answer re-ranking, where the features are also extracted on pairs of the question and a single passage, ignoring the union information among multiple passages.

We are the first to introduce re-ranking methods to neural open-domain QA and multi-passage RC. Meanwhile, our two-step approach shares some similarities with previous multi-step approaches proposed for standard single-passage RC, in terms of the purpose of either using additional information or refining answer predictions that are not easily handled by the standard answer extraction models for RC. On cloze-test tasks BID14, the Epireader model relates to our work in the sense that it is a two-step extractor-reasoner model, which first extracts the K most probable single-token answer candidates and then constructs a hypothesis by combining each answer candidate with the question and compares the hypothesis with all the sentences in the passage.
Their model differs from ours in several aspects: (i) Epireader matches a hypothesis to every single sentence, including all the "noisy" ones that do not contain the answer, which makes the model inappropriate for the open-domain QA setting; (ii) the sentence matching is based on sentence embedding vectors computed by a convolutional neural network, which makes it hard to distinguish redundant and complementary evidence in aggregation; (iii) Epireader passes the probabilities predicted by the extractor to the reasoner directly to sustain differentiability, which cannot be easily adapted to our problem to handle phrases as answers or to use parts of passages. Similarly, BID7 also combined answer candidates with the question to form hypotheses, and then explicitly used language models trained on documents to re-rank the hypotheses. This method benefits from the consistency between the documents and gold hypotheses (which are titles of the documents) in cloze-test datasets, but does not handle multiple evidence aggregation like our work. S-Net BID28 proposes a two-step approach for generative QA. The model first extracts a text span as the answer clue and then generates the answer according to the question, passage and the text span. Besides the different goal of answer generation instead of re-ranking as in this work, their approach also differs from ours in that it extracts only one text span from a single selected passage.

We have observed that open-domain QA can be improved by explicitly combining evidence from multiple retrieved passages. We experimented with two types of re-rankers, one for the case where evidence is consistent and another for when evidence is complementary. Both re-rankers helped to significantly improve our results individually, and even more together. Our results considerably advance the state-of-the-art on three open-domain QA datasets. Although our proposed methods achieved some success in modeling the union or co-occurrence of multiple passages, there are still much harder problems in open-domain QA that require reasoning and commonsense inference abilities. In future work, we will explore these directions, and we believe that our proposed approach could potentially be generalized to these more difficult multi-passage reasoning scenarios.

This work was partially supported by DSO grant DSOCL15223. We thank Mandar Joshi for testing our model on the unfiltered TriviaQA hidden test dataset.
We propose a method that makes use of information from multiple passages for open-domain QA.
Many large text collections exhibit graph structures, either inherent to the content itself or encoded in the metadata of the individual documents. Example graphs extracted from document collections are co-author networks, citation networks, or named-entity-cooccurrence networks. Furthermore, social networks can be extracted from email corpora, tweets, or social media. When it comes to visualising these large corpora, either the textual content or the network graph is used. In this paper, we propose to incorporate both, text and graph, to not only visualise the semantic information encoded in the documents' content but also the relationships expressed by the inherent network structure. To this end, we introduce a novel algorithm based on multi-objective optimisation to jointly position embedded documents and graph nodes in a two-dimensional landscape. We illustrate the effectiveness of our approach with real-world datasets and show that we can capture the semantics of large document collections better than other visualisations based on either the content or the network information.

Substantial amounts of data are produced in our modern information society each day. A large portion of it comes from the communication on social media platforms, within chat applications, or via emails. This data exhibits duality in the sense that it can be represented as text and graph. The metadata provides an inherent graph structure given by the social network between correspondents and the exchanged messages constitute the textual content. In addition, there are many other datasets that exhibit these two facets. Some of them are found in bibliometrics, for example in collections of research publications as co-author and citation networks. When it comes to analysing these types of datasets, usually either the content or the graph structure is neglected. In data exploration scenarios the goal of getting an overview of the datasets at hand is insurmountable with current tools. The sheer amount of data prohibits simple visualisations of networks or meaningful keyword-driven summaries of the textual content. Data-driven journalism often has to deal with leaked, unstructured, very heterogeneous data, e.g. in the context of the Panama Papers, where journalists needed to untangle and order huge amounts of information, search entities, and visualise found patterns. Similar datasets are of interest in the context of computational forensics. Auditing firms and law enforcement need to sift through huge amounts of data to gather evidence of criminal activity, often involving communication networks and documents. Users investigating such data want to be able to quickly gain an overview of its entirety, since the large amount of heterogeneous data renders experts' investigations by hand infeasible. Computer-aided exploration tools can support their work to identify irregularities, inappropriate content, or suspicious patterns. Current tools lack sufficient semantic support, for example by incorporating document embeddings and the ability to combine text and network information intuitively. We propose MODiR, a scalable multi-objective dimensionality reduction algorithm, and show how it can be used to generate an overview of entire text datasets with inherent network information in a single interactive visualisation. Special graph databases enable the efficient storage of large relationship networks and provide interfaces to query or analyse the data.
However, without prior knowledge, it is practically impossible to gain an overview or quick insights into global network structures. Although traditional node-link visualisations of a graph can provide this overview, all semantic information from associated textual content is lost completely. Technically, our goal is to combine network layouts with dimensionality reduction of high-dimensional semantic embedding spaces. Giving an overview of latent structures and topics in one visualisation may significantly improve the exploration of a corpus by users unfamiliar with the domain and terminology. This means we have to integrate multiple aspects of the data, especially graph and text, into a single visualisation. The challenge is to provide an intuitive, two-dimensional representation of both the graph and the text, while balancing potentially contradicting objectives of these representations. In contrast to existing dimensionality reduction methods, such as tSNE, MODiR uses a novel approach to transform high-dimensional data into two dimensions while optimising multiple constraints simultaneously to ensure an optimal layout of the semantic information extracted from text and the associated network. To minimise the computational complexity that would come from a naive combination of network drawing and dimensionality reduction algorithms, we formally use the notion of a hypergraph. In this way, we are able to move repeated expensive computations from the iterative document-centred optimisation to a preprocessing step that constructs the hypergraph. We use real-world datasets from different domains to demonstrate the effectiveness and flexibility of our approach. MODiR-generated representations are compared to a series of baselines and state-of-the-art dimensionality reduction methods. We further show that our integrated view of these datasets exhibiting duality is superior to approaches focusing on text-only or network-only information when computing the visualisation.

With MODiR we bridge the gap between text and network visualisation by jointly reducing the dimensionality of the input data. Therefore we subdivided this part into three sections to highlight related work in the areas of text visualisation, representation learning, and dimensionality reduction. Other work that tries to jointly model text and networks, but without dimensionality reduction and without a focus on visualisation, is LINE. They generate information networks consisting of different types of nodes, e.g. words from document content and authors from document metadata. Another tool that investigates combining graph structure with textual elements is VOSviewer. They construct and visualise bibliographic networks that provide a multi-view interface to explore and filter keywords and network aspects of such datasets. In our work we go beyond building a network from textual data and instead project the textual data into a latent space. Document visualisation aims to visualise the textual content, such that users gain quick insights into topics, latent phrases, or trends. Tiara extracts topics and derives time-sensitive keywords to depict evolving subjects over time as stacked plots. Another line of work projects documents into a latent space, for example by using topic models or embeddings: creating scatterplots of embedded documents of a large corpus may result in a very dense and unclear layout, so Chen et al. developed an algorithm to reduce over-full visualisations by picking representative documents.
A different approach is taken by Fortuna et al., who do not show documents directly, but generate a heatmap of the populated canvas and overlay it with salient phrases at more densely populated areas from the underlying documents in that region. Friedl et al. extend that concept by drawing clear lines between regions and colouring them. They also add edges between salient phrases based on co-occurrences in the texts. A map analogy can be used to visualise the contents of documents by embedding them into a high-dimensional semantic space and projecting it onto a two-dimensional canvas as a document landscape. Most recently Cartograph was proposed, which is visually very similar to previous approaches, but pre-renders information at different resolutions and uses a tiling server with (geographic) map technology to deliver responsive interactions with the document landscape. Regions are coloured based on underlying ontologies from a knowledge-base. Networks are traditionally visualised using so-called node-link graphs. This way, any additional information related to nodes and edges is lost. The layout of nodes usually follows a force-based analogy. Newer approaches optimise the computational complexity and include local metrics to better represent inherent structures, as for example ForceAtlas2, which is the default network layout algorithm for the network visualisation tool Gephi. The text and network visualisation methods discussed above primarily use structural properties of the data to generate their layout. Although we focus on the visualisation of text data with inherent graph information, MODiR can work with arbitrary kinds of data. Our model only requires a way to project the data into a high-dimensional Euclidean vector space so that the distance between two points can be interpreted as their (semantic) similarity. Traditionally, text can be represented as a bag-of-words vector that is optionally weighted by respective tf-idf scores. In recent years, embeddings became more popular as they conserve semantic meaning in their vector representation. Neural architectures have been introduced to learn high-dimensional vector representations for words and paragraphs. Similar methods are used to learn representations for nodes in a network based on either the structural neighbourhood or additional heterogeneous information. Schlötterer et al. attempted to learn joint representations of network structure and document contents but saw no improvement over conventional models in a series of classification tasks. We only use the structural information of the network for better control over fine-grained adjustments in our layout algorithm. The goal of dimensionality reduction is to represent high-dimensional data in a low-dimensional space while preserving the characteristics of the original data as soundly as possible. A very common application of dimensionality reduction is to project high-dimensional data into two dimensions for the purpose of visual interpretation. Generally, these methods follow one of three mathematical models. Linear models, such as Principal Component Analysis (PCA), can be calculated very efficiently and have proven useful for reducing input spaces to improve the performance of downstream tasks. Thus, they are often indirectly used for feature extraction. Although reductions to two dimensions for visualisation are appropriate for quick initial data exploration, other approaches are able to better preserve data characteristics in two dimensions.
For example, the nonlinear Sammon mapping tries to preserve the structure of inter-point distances of the high-dimensional space in the low-dimensional space. The resulting visualisations are generally better than PCA at showing the relatedness of individual data points. Lastly, there are probabilistic models like Stochastic Neighbour Embeddings (SNE). They are similar to a Sammon mapping in that they use inter-point distances but model these distances as probability distributions. The t-distributed SNE has proven to produce competitive results for visualising datasets while preserving their characteristics, however its nondeterministic nature may produce greatly varying results. Recently, FIt-SNE was proposed, an optimisation of tSNE that significantly reduces the computational complexity. Other newer dimensionality reduction algorithms like LargeVis and UMAP scale almost linearly by using efficient nearest-neighbourhood approximations in the high-dimensional space and spectral embeddings to initialise positions of points in the low-dimensional space to reduce the number of fine-tuning iterations.

Visualisations of complex datasets are restricted to two or three dimensions for users to grasp the structure and patterns of the data. We integrate multiple entities (i.e., documents and persons) into a joint visualisation, which we call a landscape. This landscape consists of a base-layer containing all documents depicted as dots forming the document landscape; nodes and their connections are placed on top of this base-layer as circles connected by lines forming the graph layer. In this section, we propose the MODiR algorithm, which integrates multiple objectives during the layout process to find an overall good fit of the data within the different layers. Our approach is derived from state-of-the-art methods for drawing either the network layer or the document landscape. We assume that documents are given as high-dimensional vectors and entities are linked among one another and to the documents. These links are used as restrictions during the multi-objective dimensionality reduction of document vectors. Let x^(i) ∈ X ⊂ R^d be the n documents in their d-dimensional representation and y^(i) ∈ Y ⊂ R^2 the respective positions on the document landscape. Let H(V, E) be a hypergraph based on the network information inferred from the document corpus, with vertices V = X ∪ P, where X are the documents and p_i ∈ P are the entities in the network, and hyperedges e_k ∈ E describing the relation between documents and entities. For each pair of entities p_m, p_n ∈ P that are connected in the context of documents x^(i), ... ∈ X, there is a hyperedge e_k = {p_m, p_n, x^(i), ...}. Analogously, the same definition applies to Y. Further, H_Y or H_X is used to explicitly state the respective document representation used. The position in the graph layer π: P → R^2 of an entity p_m is defined in Equation 1 as the mean position of the documents associated with p_m, where E_{p_m} ⊂ H_Y is the set of hyperedges containing p_m and N_{p_m} is the number of documents p_m is associated with. This effectively places an entity at the centre of its respective documents. More elaborate methods like a density-based weighted average are also applicable to mitigate the influence of outliers. For simplicity we will abbreviate π(p_m) as π_m. The two-dimensional document positions are held in the projection matrix W ∈ R^(2×n), which is learnt by MODiR based on multiple objectives ϕ_{1,2,3} using gradient descent, as defined later in this section.
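As a small illustration of the entity placement just described (Equation 1), the sketch below computes π_m as the mean 2D position of the documents an entity is associated with. Variable names are ours, and a density-based weighting could be substituted as noted above.

```python
import numpy as np

def entity_positions(Y, entity_to_docs):
    """Place every entity at the centroid of its associated documents.

    Y:              (n, 2) array of document positions on the landscape
    entity_to_docs: dict mapping an entity id to the indices of its documents
    Returns a dict mapping each entity id to its 2D position pi_m.
    """
    return {m: Y[doc_idx].mean(axis=0) for m, doc_idx in entity_to_docs.items()}
```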
The objectives are weighted by manually set parameters θ_{1,2,3} to balance effects that favour principles focused on either the graph layer or the document landscape, as they may contradict one another. Given a high-dimensional hypergraph H_X, the matrix W, and an entity projection π, the resulting multi-objective dimensionality reduction is defined as the weighted combination of the objectives introduced below. In the following paragraphs, we will formally introduce MODiR's objectives. Objectives ϕ_1 and ϕ_2 are inspired by tSNE and use the neighbourhood context of documents in X to position similar documents near one another and unrelated ones further apart in Y. Objective ϕ_3 attracts documents based on co-occurrence in hyperedges so that the resulting positions π_m will be closer if the entities are well connected in the graph. This third objective also implicitly brings documents closer to their respective entities.

Objective: Similar documents are near one another. Semantically similar documents should be closer on the document landscape and dissimilar ones further apart. To measure the semantic similarity of documents, earlier work used a naïve bag-of-words representation. Although tSNE preserves the inherent semantic structure in two-dimensional representations of these sparse vectors, we opted to use document embeddings. This has the advantage that, when only part of the data is visualised, the embedding model can still be trained on a larger set of documents and thus retain the additional information. Objective ϕ_1 is inspired by the efficient usage of context words in word2vec. Corresponding to the skip-gram model, we define the context X_{k,x^(i)} ⊂ X of a document x^(i) by its k nearest neighbours in the embedding space. The first objective ϕ_1 compares, for each document, the normalised distances to its context documents in the high-dimensional and the two-dimensional space, with σ being the sigmoid function. Distances are normalised based on the context to make them comparable between the high-dimensional and two-dimensional space and are rescaled by the sigmoid.

Objective: Dissimilar documents are apart from one another. The optimal solution to the previously defined objective would be to project all documents onto the same point on the two-dimensional canvas. In order to counteract that, we introduce negative examples for each pair of context documents. We do so by sampling a set of l documents that are not in the k-neighbourhood of x^(i); with this set of negative samples for x^(i), the second objective ϕ_2 is defined analogously over the negative pairs.

Objective: Connected entities are near one another and their documents. This objective serves two purposes: all documents y^(i) associated with a person p_m are placed near its position π_m in the graph layer, and two people π_m and π_n are forced near one another if they are connected. Let E_{y^(i)} ⊂ E be the set of hyperedges in the hypergraph H containing the document y^(i) and E^Y_{y^(i)} = ⋃_{e_k ∈ E_{y^(i)}} e_k \ P all documents that are linked to y^(i) through an entity; the third objective ϕ_3, when minimised, attracts documents that are related through entities. This has two implicit effects: an entity p_m gets closer to its documents as they are attracted to π_m without having to explicitly compute this position using Equation 1. Also, related entities p_m, p_n are attracted to one another since they appear in the same hyperedges. The computational complexity of this objective is strongly related to the connectedness of entities in the graph. For dense graphs, we propose a heuristic of only using a subset of s documents from the context E^Y_{y^(i)} of y^(i).
An objective modelling a repulsive force as in force-directed graph layouts is not needed, as the first two objectives ϕ_{1,2} provide enough counteracting force.

Algorithm. The positions of entities and documents on the landscape are calculated using the previously defined objectives as follows. First, we construct the hypergraph H_X with document contexts including the set of k-neighbourhoods X_{k,x^(i)}. Relevant pairwise distances can be stored in an adjacency matrix to reduce computational overhead in Equations 2 and 3. For more efficient training, the randomly sampled l negative neighbourhoods can be prepared ahead of time and then only masked later. The s-neighbourhoods for entities in Equation 4, E^Y_{y^(i)}, can only be prepared with references, as Y updates with each iteration. We designed the algorithm to move as many repetitive computations as possible to pre-processing ahead of time or once per epoch. Creating these sets is very efficient using Hierarchical Navigable Small World graphs (HNSW) for approximate nearest neighbour search. Overall we are able to reduce the pre-processing complexity to O(n log n) and that of each iteration to O(kln), with k, l ≪ n, i.e. near linear. After generating the context sets, we use gradient descent with learning rate η to update the projection matrix W, reducing the overall error Φ, the weighted combination of the three objectives. Selecting appropriate values for the hyperparameters k, l, s, and θ_{1,2,3} is critical to produce meaningful results. We found l = k in all experiments to produce the best results, as this way for every similar document the model has one dissimilar document to compare against. Inspired by tSNE, we limit the number of hyperparameters by setting k and s dynamically for each document based on a user-defined perplexity. With these adaptations, the only parameters to be set are the perplexity β that roughly determines the context size, the learning rate η, and the objective weights, which can often stay at a default setting. A reference implementation including a modular processing pipeline for different datasets, approaches, and experiments is available on GitHub.

Our approach is mainly motivated by the exploration of business communication data (namely emails), such as the Enron corpus. However, due to the lack of ground truth, we focus our evaluation on research publications and their co-authorship network. Results of dimensionality reduction can be subjective, so as in prior work on dimensionality reduction, we qualitatively compare our approach to a variety of baselines and provide some quantitative experiments.

Experimental Setup. Here, we are using the Semantic Scholar Open Corpus (S2), from which we remove articles with missing information and limit to the six communities Data Mining, Databases, Machine Learning, NLP, Computer Vision, and HCI. This way we discard clearly unrelated computer science articles and biomedical studies for a more fine-grained analysis. For in-depth comparisons we use an even smaller subset from S2 of 24 hand-picked authors, their co-authors, and their papers (S2b). To our knowledge, there are no other algorithms that use multiple objectives for dimensionality reduction of high-dimensional data. Popular approaches for traditional dimensionality reduction are tSNE and PCA. As baselines, we use the original optimised implementation of tSNE written in C as provided by the authors. The quantitative evaluation is two-fold. MODiR can simulate the behaviour of these purely document-based baselines by setting θ_3 = 0, thus ignoring the objective that incorporates network information.
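To make the pre-processing step described in the Algorithm paragraph above concrete, the following is a rough sketch of how the k-neighbourhood contexts and negative samples could be prepared with an HNSW index. It assumes the hnswlib package; index parameters such as ef_construction and M are illustrative defaults, not values from the paper.

```python
import numpy as np
import hnswlib

def build_contexts(X, k=10, l=20, seed=0):
    """Pre-compute the k-nearest-neighbour context and l negative samples per document.

    X: (n, d) high-dimensional document embeddings
    Returns two lists of index arrays: contexts and negative samples.
    """
    n, d = X.shape
    index = hnswlib.Index(space='cosine', dim=d)
    index.init_index(max_elements=n, ef_construction=200, M=16)
    index.add_items(X, np.arange(n))

    labels, _ = index.knn_query(X, k=k + 1)        # +1 because each point finds itself
    contexts = [row[row != i][:k] for i, row in enumerate(labels)]

    rng = np.random.default_rng(seed)
    negatives = []
    for i, ctx in enumerate(contexts):
        forbidden = set(ctx.tolist()) | {i}
        neg = []
        while len(neg) < l:                         # rejection-sample documents outside the context
            j = int(rng.integers(n))
            if j not in forbidden:
                neg.append(j)
        negatives.append(np.array(neg))
    return contexts, negatives
```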
In our experiments we use the following parameter settings. For tSNE we set the perplexity to Perp(P_i) = 5, θ = 0.5, and run it for 1,000 iterations. In MODiR we set the neighbourhood size to k = s = 10, the negative context size to l = 20, and all objective weights θ_{1,2,3} = 1.0. For a discussion of the influence of hyperparameters, we refer to the supplemental material. The speed of convergence depends on the learning rate η, which thus dictates the maximum number of iterations. Early stopping with a threshold on the update rate could be implemented. Depending on the size of the dataset and with a fixed learning rate of η = 0.01, MODiR generally converges after 10 to 200 iterations; for larger and more connected data it is advisable to use a higher learning rate in the first epoch for initialisation and then reduce it to very small updates. For better comparability, we use a constant number of iterations of T = 100.

Quantitative Evaluation. As prior work states, it is by definition impossible to fully represent the structure of intrinsically high-dimensional data, such as a set of document embeddings, in two dimensions. However, stochastic neighbour embeddings are able to capture intrinsic structures well in two-dimensional representations. To measure this capability, we compare the ability of k-means++ to cluster the high- and two-dimensional spaces. We set the number of clusters to the number of research communities (k = 6) and calculate the percentage of papers for each community per cluster. Therefore we assign each community to the cluster with the most respective papers and make sure to use a clustering with an even distribution. Results are listed in Table 1 for tSNE, PCA, MODiR, and the original high-dimensional embedding, averaged over five runs. We see that, as expected due to the topical overlap of communities, even the original embeddings cannot be accurately clustered. Interestingly though, there seems to be a significant difference between AM and S2 although the sets of papers intersect, which we assume is due to the fact that S2 is larger and additionally contains more recent papers. Although PCA often does not generate visualisations in which classes can be clearly distinguished, the clustering algorithm is still able to separate them with competitive results compared to tSNE and MODiR.

Figure 1 (caption excerpt): (S2b), subsampled for readability; (a) the network is laid out first, documents are randomly placed along edges; (b) the network is laid out first, documents are randomly placed around nodes; (c) documents are part of the network layout as nodes in the graph that replace author-author edges; (d) the document landscape is laid out first, nodes are positioned at the centre of their associated documents; (e) tSNE is applied on papers and authors together, where documents are aggregated to represent authors.

MODiR not only aims to produce a good document landscape, but also a good layout of the network layer. Graph layouts are well studied, thus we refer to related work on aesthetics and readability. While these metrics are very elaborate and consider many aspects, we decided to use Noack's normalised AtEdge length: it describes how well space is utilised by measuring whether edges are as short as possible with respect to the size and density of the graph. Table 1 contains the results. Although the AtEdge metric is comparable for layouts of the same graph, it is not comparable between datasets, as can be seen by the fact that a larger number of edges causes an overall lower score.
The AtEdge length produced by PCA is generally better than that of tSNE, while MODiR outperforms both as our approach specifically includes an optimised network layout. The better performance of PCA over tSNE can be explained by the resulting layouts being more densely clustered in one spot. Although the AtEdge length aims to give a lower score for too-close positioning, it is not able to balance that against the many very long edges in the layout produced by tSNE.

Qualitative Evaluation. Apart from a purely quantitative evaluation, we use the hand-selected Semantic Scholar dataset (S2b) to visually compare network-centric baselines (a-c), document-focused baselines (d-e) and MODiR (f) in Figure 1. Papers are depicted as circles where the stroke colour corresponds to the communities; black lines and dots are authors and their co-authorships, and size corresponds to the number of publications. For better readability and comparability, the number of drawn points is reduced and three communities are marked. In Figure 1a the weighted co-authorship network is drawn first and the papers are scattered along their respective edges after the graph is laid out. We see that active collaboration is easy to identify as densely populated edges, research communities of selected areas are mostly coherent, and unconnected researchers are spatially separated from others. Although it is possible to distinguish the different communities in the graph layer, the document landscape isn't as clear. The ML researchers are split apart from the rest of the NLP community, which in turn is overcrowded. Figure 1b uses the same network layout but places articles randomly around their first author, which makes it easy to spot the scientific communities by colour. Lastly, we include papers as nodes and co-authorship edges are connected through them during the network layout in Figure 1c. This produces a very clean looking layout compared with the other baselines, however papers lump together and are not evenly distributed. Furthermore, semantic nuances between papers are mostly lost, which becomes most apparent in the now separated database clusters. Also, the semantic overlap between the ML and NLP communities is not noticeable.
With MODiR the three research communities become clearly distinguishable, both in the graph layer and in the document landscape. Nodes of well connected communities are close together, yet are not too close locally, and separate spatially from other communities. The document landscape is laid out more clearly, as papers from different fields are grouped to mostly distinct clusters. Obviously there is still a slight overlap as a of semantic similarities. As previously pointed out, this visualisation also correctly reveals, that the ML and NLP communities are more closely related to each other (both use machine learning) than to DB. The authorship of documents however can only be conveyed through interaction, so this information is not present in the static visualisations shown here. Based on these we argue, that the network information improves the (visual) community detection. The document embeddings of articles can only reflect the semantic similarities, which may overlap. In conjunction with information from the co-authorship network, the embeddings are put into their context and thus are more meaningful in a joint visualisation. In this paper we discussed how to jointly visualise text and network data with all its aspects on a single canvas. Therefore we identified three principles that should be balanced by a visualisation algorithm. From those we derived formal objectives that are used by a gradient descend algorithm. We have shown how to use that to generate landscapes which consist of a base-layer, where the embedded unstructured texts are positioned such that their closeness in the document landscape reflects semantic similarity. Secondly, the landscape consists of a graph layer onto which the inherent network is drawn such that well connected nodes are close to one another. Lastly, both aspects can be balanced so that nodes are close to the documents they are associated with while preserving the graph-induced neighbourhood. We proposed MODiR, a novel multi-objective dimensionality reduction algorithm which iteratively optimises the document and network layout to generate insightful visualisations using the objectives mentioned above. In comparison with baseline approaches, this multi-objective approach provided best balanced overall as measured by various metrics. In particular, we have shown that MODiR outperforms state-of-the-art algorithms, such as tSNE. We also implemented an initial prototype for an intuitive and interactive exploration of multiple datasets. with over 45 million articles. Both corpora cover a range of different scientific fields. Semantic Scholar for example integrates multiple data sources like DBLP and PubMed and mostly covers computer science, neuroscience, and biomedical research. Unlike DBLP however, S2 and AM not only contain bibliographic metadata, such as authors, date, venue, citations, but also abstracts to most articles, that we use to train document embeddings using the Doc2Vec model in Gensim 10. Similar to Carvallari et al. remove articles with missing information and limit to six communities that are aggregated by venues as listed in Table 3. This way we reduce the size and also remove clearly unrelated computer science articles and biomedical studies. For in depth comparisons we reduce the S2 dataset to 24 hand-picked authors, their co-authors, and their papers (S2b). Note, that the characteristics of the networks differ greatly as the ratio between documents, nodes, and edges in Table 2 shows. 
In an email corpus, a larger number of documents is attributed to fewer nodes and the distribution has a high variance (some people write few emails, some a lot). In the academic corpora on the other hand, the number of documents per author is relatively low and similar throughout. Especially different is the news corpus, which contains one entity that is linked to all other entities and to all documents. In Section 4 we gave a brief overview of the settings used in the experiments presented above. Here, we provide additional insights from our experiments on different hyperparameter settings for MODiR. The context sizes are the most important parameters. The first two objective weights can be ignored, as the context size has similar effects, so we set θ_1 = θ_2 = 1.0 in all our experiments. Generally, small numbers for k, l, s perform better. This is in line with our expectations, as each item x^(i) will also be in the context of its respective neighbours and will therefore amplify its attractive force. A large number for k, for example, will force all points towards the centre of the canvas or, if even larger, produce random scatter as the gradients amplify. In our experiments we use k = 10; for datasets with a few thousand samples, k should usually be below l. We also found that the negative context is best with l = 20 for all sizes. Furthermore, we set both θ_1 = θ_2 = 1.0 for all experiments because the influence of selecting k and l is much larger. The graph context is also set to s = 10 (in our dataset the number of entities is close to the number of documents); the objective weight can be freely adjusted within roughly 0.8 ≤ θ_3 ≤ 1.2 to set the influence of the entity network.
Dimensionality reduction algorithm to visualise text with network information, for example an email corpus or co-authorships.
Machine learned models exhibit bias, often because the datasets used to train them are biased. This presents a serious problem for the deployment of such technology, as the resulting models might perform poorly on populations that are minorities within the training set and ultimately present higher risks to them. We propose to use high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers. We present a framework that leverages Bayesian parameter search to efficiently characterize the high dimensional feature space and more quickly identify weakness in performance. We apply our approach to an example domain, face detection, and show that it can be used to help identify demographic biases in commercial face application programming interfaces (APIs).

Machine learned classifiers are becoming increasingly prevalent and important. Many systems contain components that leverage trained models for detecting or classifying patterns in data. Whether decisions are made entirely or partially based on the output of these models, and regardless of the number of other components in the system, it is vital that their characteristics are well understood. However, the reality is that with many complex systems, such as deep neural networks, many of the "unknowns" are unknown and need to be identified BID23. Imagine a model being deployed in law enforcement for facial recognition; such a system could encounter almost infinite scenarios. Which of these scenarios will the classifier have a blind-spot for? We propose an approach for helping diagnose biases within such a system more efficiently. Many learned models exhibit bias as training datasets are limited in size and diversity BID34 BID33, or they reflect inherent human biases BID7. It is difficult for researchers to collect vast datasets that feature equal representations of every key property. Collecting large corpora of training examples requires time, is often costly and is logistically challenging. Let us take facial analysis as an exemplar problem for computer vision systems. There are numerous companies that provide services for face detection and tracking, face recognition, facial attribute detection, and facial expression/action unit recognition (e.g., Microsoft (msf, 2018), Google (goo, 2018), Affectiva (aff, 2018)). However, studies have revealed systematic biases in the results of these systems BID6 BID5, with the error rate up to seven times larger on women than men. Such biases in performance are very problematic when deploying these algorithms in the real world. Other studies have found that face recognition systems misidentify people at higher error rates depending on color, gender (women), and age (younger) BID22. Reduced performance of a classifier on minority groups can lead to both greater numbers of false positives (in a law enforcement domain this would lead to more frequent targeting) and greater numbers of false negatives (in a medical domain this would lead to missed diagnoses).

Taking face detection as a specific example of a task that all the services mentioned above rely upon, demographic and environmental factors (e.g., gender, skin type, ethnicity, illumination) all influence the appearance of the face. Say we collected a large dataset of positive and negative examples of faces within images. Regardless of how large the dataset is, these examples may not be evenly distributed across each demographic group.
This might mean that the resulting classifier performs much less accurately on African-American people, because the training data featured few examples. A longitudinal study of police departments revealed that African-American individuals were more likely to be subject to face recognition searches than others BID15. To further complicate matters, even if one were to collect a dataset that balances the number of people with different skin types, it is highly unlikely that these examples would have similar characteristics across all other dimensions, such as lighting, position, pose, etc. Therefore, even the best efforts to collect balanced datasets are still likely to be flawed. The challenge then is to find a way of successfully characterizing the performance of the resulting classifier across all these dimensions.

The concept of fairness through awareness was presented by BID9, the principle being that in order to combat bias we need to be aware of the biases and why they occur. This idea has partly inspired proposals of standards for characterizing training datasets that inform consumers of their properties BID20. Such standards would be very valuable. However, while transparency is very important, it will not solve the fundamental problem of how to address the biases caused by poor representation. Nor will it help identify biases that might still occur even with models trained using carefully curated datasets. Attempts have been made to improve facial attribute detection by including gender and racial diversity. In one example, by BID29, results were improved by scraping images from the web and learning facial representations from a held-out dataset with a uniform distribution across race and gender intersections. However, a drawback of this approach is that even images available from vast sources, such as Internet image search, may not be evenly balanced across all attributes and properties, and the data collection and cleaning is still very time consuming. To address the problem of diagnosing bias in real world datasets we propose the use of high-fidelity simulations BID30 to interrogate models. Simulations allow large volumes of diverse training examples to be generated and different parameter combinations to be systematically tested, something that is challenging with "found" data scraped from the web or even curated datasets. Simulated data can be created in different ways. Generative adversarial networks (GANs) BID17 are becoming increasingly popular for synthesizing data BID31. For example, GANs could be used to synthesize images of faces at different ages BID40. However, GANs are inherently statistical models and are likely to contain some of the biases that the data used to train them contain. A GAN model trained with only a few examples of faces with darker skin tones will likely fail to produce a diverse set of high quality synthesized images with this attribute. Parameterized graphics models are an alternative for training and testing vision models BID36 BID15 BID35. Specifically, it has been proposed that graphics models be used for performance evaluation BID19. As an example, this approach has been used for models for pedestrian detection BID35. To the best of our knowledge graphics models have not been employed for detecting demographic biases within vision models. We believe that demographic biases in machine learned systems are a significant enough problem to warrant further attention.
The contributions of this paper are to: (1) present a simulated model for generating synthetic facial data, (2) show how simulated data can be used to identify the limitations of existing face detection algorithms, and (3) present a sample-efficient approach that reduces the number of simulations required. The simulated model used in this paper is made available. With more adoption of machine learned algorithms in real-world applications there is growing concern in society that these systems could discriminate between people unfairly. In cases where these systems provide a reliable signal that the output has a low confidence, these can be described as known unknowns. However, many learned models are being deployed while their performance in specific scenarios is still unknown, or their prediction confidence in scenarios is not well characterized. These can be described as unknown unknowns. Lakkaraju et al. BID23 published the first example of a method to help address the discovery of unknowns in predictive models. First, the search space is partitioned into groups which can be given interpretable descriptions. Second, an explore-exploit strategy is used to navigate through these groups systematically based on the feedback from an oracle (e.g., a human labeler). Bansal and Weld proposed a new class of utility models that rewarded how well the discovered unknown unknowns help explain a sample distribution of expected queries BID3. We also employ an explore-exploit strategy in our work, but rather than rely on an oracle we leverage synthetic data. Biases often result from unknowns within a system. Biases have been identified in real-world automated systems applied in domains from medicine BID43 to criminal justice BID2. While the exact nature of the problem is debated BID14, it is clear that further research is needed to address bias in machine learned systems, and actionable remedies to help practitioners in their day-to-day work are necessary. Numerous papers have highlighted biases within data typically used for machine learning BID7 BID34 BID33 and within machine learned classifiers BID5. Biases in datasets can be caused in several ways. BID34 identified selection bias, capture bias, and negative set bias as problems. "Selection bias" is related to the tendency for certain types, or classes, of images to be included in datasets in the first place. "Capture bias" is related to how the data are acquired and may be influenced by the collectors' preferences (e.g., in images of certain people, say celebrities, or points of view, lighting conditions, etc.). Taking faces as an example, these biases typically manifest as a greater proportion of images including white males, frontal head poses, and smiles or neutral expressions. "Negative set bias" is related to the examples in the dataset of the "rest of the world". As an example, if the negative examples included in a dataset of faces are skewed, the model will learn a warped representation of what is not a face. However, even if one were to make every effort to address these three types of bias, the problem may still exist that artifacts created by humans contain the biases that humans have without our explicit knowledge BID7. In our discussion, we will call this "human bias". With increasing frequency, data-hungry machine learning classifiers are being trained on large-scale corpora collected from Internet sources BID8 BID11 and/or via crowdsourcing BID26. Such data collection makes curation challenging as there are a vast number of samples to label and sort.
The Internet is a data source that is likely to be subject to "selection bias", "capture bias", "negative set bias", and "human bias". Furthermore, BID33 argue that more powerful feature descriptors (such as those generated by a deep convolutional neural network versus a simple hand-crafted approach) may actually exacerbate the problem of bias and accentuate biases in the resulting classifier. Even in the best case scenario, feature representations alone cannot intrinsically remove the negative bias problem. Thus there is still a need for approaches that help address the issues of bias within machine learning, and this problem is becoming more acute. To this end, we propose a practical approach that uses high-fidelity simulations to diagnose biases efficiently in machine vision classifiers. Face detection was one of the earliest applications of computer vision. Below we describe significant landmarks in the development of face detection algorithms. Earlier methods relied on rigid templates, with boosting learning methods commonly employed. Hand-crafted features used for learning included Haar-like features, local binary patterns, and histograms of gradients. A major landmark was the Viola-Jones BID39 classifier based on Haar features. This work presented a fast and accurate method and was widely used thanks to implementations in OpenCV and other computing frameworks. Deep convolutional neural networks have since surpassed the performance of previous methods BID12 BID41 and can be used to learn face detection and attribute classification together in a single framework BID27. Face detection has matured into a technology that companies and agencies are deploying in many real-world contexts BID42; these include safety-critical applications. Application programming interfaces (APIs) and software development kits (SDKs) are two ways in which companies are exposing these models to other businesses and to consumers. All the APIs we study in this paper use convolutional neural networks for face detection, making diagnosing points of failure or bias challenging. The approach we present for identifying biases is algorithm independent, but we believe it is particularly suited to models trained on large corpora and using powerful feature descriptors. Face detection is frequently the first step applied in screening images to include in datasets used for subsequent facial analysis tasks (e.g., face recognition or expression detection). If the face detector used for such tasks is biased, then the resulting dataset is also likely to be biased. This was found to be the case with the commonly used Labeled Faces in the Wild (LFW) dataset. One study found it to be 78% male and 85% White BID18. We believe a simulation-based approach, using an artificial human (see FIG0), could help characterize such biases and allow researchers to account for them in the datasets they collect. We propose to use simulation to help interrogate machine learned classifiers and diagnose biases. The key idea here is that we repeatedly synthesize examples via simulation that have the highest likelihood of breaking the learned model. By synthesizing such a set, we can then take a detailed look at the failed examples in order to understand the biases and the blind spots that were baked into the model in the first place. We leverage the considerable advancements in computer graphics to simulate highly realistic inputs. In many cases, face detection algorithms are designed to be able to detect faces that occupy as few as 36 x 36 pixels.
The appearances of the computer generated faces are as close to photo-realistic as is necessary to test the face detectors, and at low resolutions we would argue that they are indistinguishable from photographs. While simulated environments allow us to explore a large number of parameter combinations (varying lighting, head pose, skin type, etc.) quickly and systematically, doing so often requires significant computational resources. Furthermore, exhaustive search over the set of simulations is rendered infeasible as each degree of freedom in simulation leads to exponentially increasing numbers of examples. Therefore, we need a learning method that can intelligently explore this parameter space in order to identify regions of high failure rates. In this work, we propose to apply Bayesian Optimization BID4 to perform this parameter search efficiently. Formally, let us denote by θ the parameters that spawn an instance of a simulation s(θ). This instance is then fed into the machine intelligence system in order to check whether the system correctly identifies s(θ). Consequently, we can define a composite function Loss(s(θ)) that captures the notion of how well the AI system handles the simulation instance generated when applying the parameters θ. Note that the function Loss(·) is similar to the loss functions used in training classifiers (e.g., 0-1 loss, hinge loss, etc.). Our goal then is to find diverse instances of θ such that the composite loss function attains values higher than a set threshold. FIG1 graphically describes such a composition: the composite function takes as input the simulation parameters θ in order to first produce a simulation s(θ), which in turn is fed into a machine classification system to compute a loss function characterizing whether the system performed correctly or poorly. Bayesian Optimization allows us to tackle this problem by first modeling the composite function Loss(s(θ)) as a Gaussian Process (GP) BID28. Modeling as a GP allows us to quantify uncertainty around the predictions, which in turn is used to efficiently explore the parameter space in order to identify the spots that satisfy the search criterion. In this work, we follow the recommendations in BID32, and model the composite function (0-1 loss) via a GP with a Radial Basis Function (RBF) kernel, and use Expected Improvement (EI) as an acquisition function. Within the AirSim BID30 environment we created an agent torso. The agent was placed in a room with multiple light sources (from windows, overhead lights and spotlights). Pictures of the avatar and the layout of the room are shown in FIG0. The avatar and environment are being made available. The skin type of the agent could be varied continuously from a lighter to darker skin type (allowing us to mimic different levels on the Fitzpatrick Classification Scale BID13), and aging of the skin could be customized via the textures. The agent's facial position and facial actions (e.g., mouth opening or eyelids closing) could be fully customized. In this paper, our goal was to illustrate how simulated data can be used to identify and evaluate biases within classifiers, and not to exhaustively evaluate each face API's performance with respect to every facial expression or lighting configuration. However, our method could be used to do this. We manipulated the following parameters in order to evaluate how the appearance of the face impacted the success or failure of the face detection algorithms.
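To make the search procedure concrete, the following is a minimal Python sketch of the Bayesian optimization loop described above (a GP with an RBF kernel, EI acquisition, and a 0-1 loss). It is a sketch under stated assumptions rather than the paper's implementation: render_face and detector_finds_face are hypothetical stand-ins for the AirSim avatar renderer and a face detection API wrapper, and the dimensionality, candidate-pool size, iteration count, and kernel length scale are illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate_and_score(theta):
    """0-1 loss: 1.0 if the face detector misses the rendered face, else 0.0."""
    image = render_face(theta)                           # hypothetical renderer call
    return 0.0 if detector_finds_face(image) else 1.0    # hypothetical detector call

def expected_improvement(mu, sigma, best):
    """EI for maximizing the loss, i.e. steering the search toward detector failures."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
dim = 6                                                  # e.g. skin type, age, pose angles, mouth, eyes
X = rng.uniform(0.0, 1.0, size=(10, dim))                # initial random design
y = np.array([simulate_and_score(x) for x in X])

for _ in range(200):                                     # Bayesian optimization iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
    cand = rng.uniform(0.0, 1.0, size=(2048, dim))       # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    theta_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, theta_next])
    y = np.append(y, simulate_and_score(theta_next))
```

Each call to simulate_and_score is expensive (a render plus an API query), which is why the GP surrogate and EI acquisition are used to concentrate the limited evaluation budget in regions likely to contain failures.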
The parameters were varied continuously within the bounds specified below. Angles are measured about the frontal head position. For facial actions (mouth opening and eyelids closing) the mappings are specified in Facial Action Coding System BID10 intensities. We compared four facial analysis APIs in our experiments. Microsoft, Google, IBM and Face++ all offer services for face detection (and classification of other facial attributes). The APIs all accept HTTP POST requests with URLs of the images or binary image data as a parameter within the request. These APIs return JSON formatted data structures with the locations of the detected faces. Unfortunately, detailed descriptions of the data used to train each of the face detection models were not available. To the best of our knowledge, the models exposed through these APIs all use deep convolutional neural network architectures and are trained on millions of images. Microsoft: The documentation reports that a face is detectable when its size is 36 × 36 to 4096 × 4096 pixels and up to 64 faces can be returned for an image. Google: The documentation did not report minimum size or resolution requirements. IBM: The documentation reports that the minimum pixel density is 32 × 32 pixels per inch, and the maximum image size is 10 MB. IBM published a statement in response to the article, in which they further characterize the performance of their face detection algorithm. Face++: The documentation reports that a face is detectable when its size is 48 × 48 to 4096 × 4096 pixels. The minimal height (or width) of a face should also be no less than 1/48 of the short side of the image. An unlimited number of faces can be returned per image. (Figure caption: Distribution of each simulation parameter for successfully detected faces and missed detections. The distribution for skin types and ages skews darker and older, respectively, for false negatives than for true positives. The Bayesian optimization reveals this difference with fewer simulations, such that given 1000 samples the differences are apparent with BO and not with random sampling. The skin tone ranges from light to dark and age ranges from young to old. For example images see FIG2.) We used two approaches for searching our space of simulated faces for face detection failure cases. The first was randomly sampling parameters for generating face configurations and the second was using Bayesian optimization. We compared these sampling methods for the different face detection APIs. This type of analysis would have been difficult, if not impossible, without the ability to systematically synthesize data. FIG3 shows boxplots of the demographic parameters for faces correctly detected and faces missed. The results are shown for the skin type on a normalized scale of 0 (lighter skin type, Fitzpatrick I) to 1 (darker skin type, Fitzpatrick VI) and age on a normalized scale of 0 (unwrinkled) to 1 (heavily wrinkled). From left to right, the results illustrate the performance of the Microsoft, Face++, Google and IBM classifiers. FIG4 shows the cumulative number of failure cases identified with increasing numbers of simulations (faces generated). The results are presented across all the APIs, with the error bars representing the standard error. When using random sampling the sample (or simulation) efficiency for finding failure cases is significantly reduced compared to our Bayesian approach. After 800 samples, 44% more failures (724 vs. 503) were found using Bayesian optimization.
First, let us discuss the performance (characterized by true positives and false negatives) of the APIs with regard to the skin type and age appearance of the subjects. FIG3 shows that the detectors consistently failed more frequently on faces with darker skin types and, to a lesser extent, on faces with older appearances. The missed detections were heavily skewed towards the upper end of the skin type and age ranges. Overall, the IBM API skew was the least extreme. This is probably due to the fact that the detectors were improved following the results presented in BID5 identifying biases within previous versions. This shows that paying attention to the training data used to create the models behind these APIs can significantly mitigate the biases within the resulting models. While our results are only based on one base facial model, they provide evidence that supports prior work BID6. We would need further models with alternate bone structures to draw more conclusions about the generalizability of these results, especially with regard to other demographic variables (e.g., gender). The age manipulation of the avatar was somewhat constrained as only the texture (and not the shape) of the face was varied. This is a limitation, and more realistic age manipulation warrants future work. Using naive sampling techniques (e.g., random sampling or grid search), the cost of search is exponential in the number of parameters. Consider that there are three axes of head rotation, twenty-eight facial action units, thousands of possible facial expressions, and millions of potential lighting configurations. Naive sampling techniques soon become impractical. We show that a Bayesian optimization approach can significantly reduce the number of simulations required to find an equal number of failure cases. In a simple experiment with six parameters, the Bayesian approach led to an over 40% improvement in efficiency with respect to finding the false negatives (missed detections). With a larger number of parameters this improvement will be much more dramatic. In addition, the improvement in sample efficiency was the most dramatic (over 500%) for the classifier with the fewest missed detections overall (IBM) (see FIG4). This suggests that BO can further improve efficiency as failures become more challenging to find. Bias in machine learned systems can be introduced in a number of ways. In complex models these biases can be difficult to identify. Using data generated via a realistic synthetic environment we have been able to identify demographic biases in a set of commercially available face detection classifiers. These biases are consistent with previous results on photo datasets BID5. While the synthetic faces we used were not quite as realistic as photographs, we believe that this empirical finding supports the use of parametric simulations for this problem. We present an approach that leverages highly realistic computer simulations to interrogate and diagnose biases within ML classifiers. We propose the use of simulated data and Bayesian optimization to intelligently search the parameter space. We have shown that it is possible to identify limits in commercial face detection systems using synthetic data. We highlight bias in these existing classifiers which indicates they perform poorly on darker skin types and on older skin texture appearances.
Our approach is easily extensible, given the number of parameters (e.g., facial expressions and actions, lighting direction and intensity, number of faces, occlusions, head pose, age, gender, skin type) that can be systematically varied with simulations. We used one base facial model for our experimentation. This limits the generalization of our results and the ability for us to determine whether the effects would be similar or different across genders and other demographic variables. Synthetic faces with alternate bone structures would need to be created to test these hypotheses. While the initial cost of creating the models is high, they can be used to generate large volumes of data, making synthetics cost-effective in the long run. Age modeling in face images should be improved using GANs or improved parametric synthetic models. A limitation of our work is that aging was only represented via texture changes. We plan to investigate GAN-based approaches for synthesis and compare these to parametric synthesis. A hybrid of parametric and statistical models could be used to create a more controllable but diverse set of synthesized faces. Future work will consider retraining the models using synthetic data in order to examine whether this can be used to combat model bias.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJf_YjCqYX
We present a framework that leverages high-fidelity computer simulations to interrogate and diagnose biases within ML classifiers.
Point clouds are a flexible and ubiquitous way to represent 3D objects with arbitrary resolution and precision. Previous work has shown that adapting encoder networks to match the semantics of their input point clouds can significantly improve their effectiveness over naive feedforward alternatives. However, the vast majority of work on point-cloud decoders is still based on fully-connected networks that map shape representations to a fixed number of output points. In this work, we investigate decoder architectures that more closely match the semantics of variable sized point clouds. Specifically, we study sample-based point-cloud decoders that map a shape representation to a point feature distribution, allowing an arbitrary number of sampled features to be transformed into individual output points. We develop three sample-based decoder architectures, compare their performance to each other, and show their improved effectiveness over feedforward architectures. In addition, we investigate the learned distributions to gain insight into the output transformation. Our work is available as an extensible software platform to reproduce these results and serve as a baseline for future work. Point clouds are an important data type for deep learning algorithms to support. They are commonly used to represent point samples of some underlying object. More generally, the points may be extended beyond 3D space to capture additional information about multi-sets of individual objects from some class. The key distinction between point clouds and the more typical tensor data types is that the information content is invariant to the ordering of points. This implies that the spatial relationships among points are not explicitly captured via the indexing structure of inputs and outputs. Thus, standard convolutional architectures, which leverage such indexing structure to support spatial generalization, are not directly applicable. A common approach to processing point clouds with deep networks is voxelization, where point clouds are represented by one or more occupancy-grid tensors. The grids encode the spatial dimensions of the points in the tensor indexing structure, which allows for the direct application of convolutional architectures. This voxelization approach, however, is not appropriate in many use cases. In particular, the size of the voxelized representation depends on the spatial extent of the point cloud relative to the spatial resolution needed to make the necessary spatial distinctions (such as distinguishing between different objects in LIDAR data). In many cases, the required resolution will be unknown, or voxelization will result in enormous tensors, which can go beyond the practical space and time constraints of an application. This motivates the goal of developing architectures that support processing point cloud data directly, so that processing scales with the number of points rather than the required size of an occupancy grid. One naive approach, which scales linearly in the size of the point cloud, is to 'flatten' the point cloud into an arbitrarily ordered list. The list can then be directly processed by standard convolutional or fully-connected (MLP) architectures. This approach, however, has at least two problems. First, the indexing order in the list carries no meaningful information, while the networks do not encode this as a prior. Thus, the networks must learn to generalize in a way that is invariant to ordering, which can be data inefficient.
Second, in some applications, it is useful for point clouds to consist of varying numbers of points, while still representing the same underlying objects. However, the number of points that can be consumed by the naive feedforward architecture is fixed. PointNet and DeepSets exhibit better performance over the MLP baseline with a smaller network by independently transforming each point into a high-dimensional representation with a single shared MLP that is identically applied to each individual point. This set of derived point features is then mapped to a single, fixed-sized dense shape representation using a symmetric reduction function. As such, the architectures naturally scale to any number of input points, and order invariance is built in as an architectural bias. As a result, these architectures have been shown to yield significant advantages in applications in which point clouds are used as input, such as shape classification. The success of PointNet and DeepSet style architectures in this domain shows that designing a network architecture to match the semantics of a point cloud results in a more efficient, and better performing, network. Since point clouds are such a useful object representation, it's natural to ask how we should design networks to decode point clouds from some provided shape representation. This would allow for the construction of point cloud auto-encoders, which could serve a number of applications, such as anomaly detection and noise smoothing. Surprisingly, the dominant approach to designing such a differentiable point cloud decoder is to feed the dense representation of the desired object through a single feedforward MLP whose result is then reshaped into the appropriate size for the desired point cloud. This approach has similar issues as the flat MLP approach to encoding point clouds: the decoder can only produce a fixed-sized point cloud, while point clouds are capable of representing objects at low or high levels of detail; and the decoder only learns a single deterministic mapping from a shape representation to a point cloud, while we know that point clouds are inherently random samples of the underlying object. The primary goal and contribution of this paper is to study how to apply the same lessons learned from the PointNet encoder's semantic congruence with point clouds to a point cloud decoder design. As such, we build on PointNet's principles to present the 'NoiseLearn' algorithm, a novel, simple, and effective point cloud decoding approach. The simplicity of the decoding architectures and the increase in performance are strong indicators that sample-based decoders should be considered as a default in future studies and systems. In addition, we investigate the operation of the decoders to gain insight into how the output point clouds are generated from a latent shape representation. Point cloud decoders are a relatively unexplored area of research. Among the works which describe an algorithm that produces a point cloud, the majority focus their efforts on learning a useful latent shape representation that is then passed to an MLP decoder. PU-Net is one such example, in which the authors design a novel point cloud upsampling network which uses a hierarchical approach to aggregating and expanding point features into a meaningful latent shape representation. To decode the learned shape representation into a point cloud, the latent vector is then passed through a feedforward MLP to produce a fixed number of points.
This implies that the network would need to be retrained to allow for a different upsampling rate, which is unlikely to be a desired property of an upsampling algorithm. TopNet recognizes the data inefficiency of using a single MLP to decode a point cloud and instead reorganizes its MLP into a hierarchical tree structure in which MLPs at the same level share the same parameters. Their results show that addressing this inefficiency allows for better performance with a smaller parameter count. Similarly, "Learning Localized Generative Models for 3D Point Clouds via Graph Convolution" augments its decoder by assuming a graph structure over the decoded point cloud and employing graph convolutions. However, despite improved performance, neither approach addresses the other issues that come with using MLPs to decode entire point clouds, namely the fixed-size output. "Point Cloud GAN" and PointFlow take a different approach to producing a point set in a generative setting. Instead of learning a single mapping from any latent vector directly to its decoded point cloud, they learn a function, parameterized by the latent vector, which transforms low-dimensional Gaussian noise to a 3D point on the surface of the object described by the latent shape representation. This sampling-based approach is more in line with the semantics of point clouds. First, an arbitrary number of points can be drawn from the Gaussian noise to produce a point cloud consisting of that number of points, without requiring any changes to or retraining of the algorithm. Second, every individual point is decoded independently and identically, which avoids the data inefficiency issues that come with using MLPs to process set data. While this sampling approach has several desirable properties and appears promising, it's unclear whether the algorithm is applicable outside of the GAN settings these two papers inhabit, whether it requires specific bespoke loss functions to be trained effectively, or whether it is capable of outperforming the baseline MLP approach according to other metrics. A point cloud is a set of N 3D points C = {p_1, ..., p_N}, where each p_i ∈ R^3. In general, each p_i may have additional auxiliary information associated with it via non-spatial dimensions. While all of our architectures easily generalize to include such information, in this paper we focus on point clouds that exclusively encode shapes without auxiliary information. A point cloud auto-encoder takes a point cloud C with n points and outputs a point cloud Ĉ with m points that is intended to represent the shape described by C. While often n = m, we are interested in the general case when n and m may be different, which corresponds to up-scaling or down-scaling C. Each auto-encoder is comprised of an encoder E(C), which takes an input point cloud and outputs a latent shape representation h in R^l, and a decoder D(h), which maps a latent representation to an output point cloud of the appropriate size. Thus, given an input point cloud C, the auto-encoder output is given by Ĉ = D(E(C)). In this paper, we focus on the Chamfer distance as the measure of auto-encoder quality. Intuitively, this loss function measures how well Ĉ matches C in terms of the nearest neighbor in Ĉ to each point in C and vice versa.
Specifically, if dist(p, Ĉ) gives the distance between point p and its nearest neighbor in point cloud Ĉ, our loss function is the symmetric Chamfer distance, which aggregates dist(p, Ĉ) over all points p ∈ C together with dist(q, C) over all points q ∈ Ĉ. Since the focus of this paper is on point-cloud decoders, all of our architectures use the same point-cloud encoder architecture, while varying the decoder architecture. Below, we first overview the common PointNet-style encoder used, followed by a description of the four decoders considered in our experimental analysis, which include three sample-based decoders. PointNet handles unordered input by recognizing that a symmetric function g (such as element-wise max or sum) produces the same result regardless of the order of its inputs. PointNet thus learns a single transformation function f that maps individual points to an l-dimensional representation and then combines those representations via g. That is, the latent encoding produced by PointNet for a point cloud C = {p_1, ..., p_n} is the l-dimensional vector E(C) = g(f(p_1), ..., f(p_n)). As desired, E(C) is invariant to the ordering of points in C and applies to any number of points. We learn an MLP representation of f, with input space R^3, encoding points, and output space R^l, encoding the latent representation or point feature. We use max as the reduction function g to map the arbitrary number of resulting point features to a single fixed-size latent shape representation. The hidden layers and size of the latent shape representation for each instantiation of this encoder architecture can be found in Table 1. Most prior work has used MLP decoders, which we consider here as a baseline approach. An MLP decoder is a fully connected network that takes the latent shape representation as input and outputs an m × 3 output vector, which represents the m output points. Accordingly, MLP decoders are parameterized by the number and size of their fully connected layers. In our experiments, each fully connected layer consists of parameterized ReLU units with a batch normalization layer. Our main focus is on sample-based decoders, which allow for an arbitrary number of output points to be produced for a latent shape representation. In particular, given a latent shape representation h, each of our decoders is defined in terms of a point feature sampling distribution S(h), where the decoder produces a point-cloud output by sampling m point features from S(h). Once we have a set of m independently sampled point features from our sampling distribution S(h), we need to transform each one into a triple representing that point's location. Note that we are now in an identical but opposite situation to the point cloud encoder. Whereas the encoder had to transform independent point samples of some underlying object into corresponding high-dimensional representations, our decoder now has to transform independently sampled high-dimensional point representations into points in space on the surface of the target object. Therefore, we can simply apply the same style of PointNet encoding mechanism, with different input and output tensor sizes, to implement an effective point feature decoder. The sizes of the hidden layers in our decoder network can be seen in Table 1. By applying the shared MLP point decoder to each sampled point feature, we can directly decode point clouds of arbitrary size. Below we describe three architectures for S, which are compared to each other and the baseline MLP decoder in our experiments. NoiseAppend Decoder. NoiseAppend is similar to the sampling approach described in "Point Cloud GAN".
They sample point features by simply sampling from a multivariate Gaussian distribution with zero mean and unit variance and then appending the sampled noise to the latent shape vector. That is, S(h) = concat(h, N(0, I)). However, this requires us to decide how many elements of noise should be appended to the latent shape representation. The authors state that the size of the appended noise vector should be 'much smaller than' the size of the latent shape representation, but it is not clear how much noise is necessary to allow the decoder to fully represent the shape. Ultimately this is an additional hyperparameter that needs to be investigated and tuned. NoiseAdd Decoder. NoiseAdd builds on the concept of adding unit Gaussian noise to the latent shape vector with the goal of avoiding the additional hyperparameter that NoiseAppend introduces. This can be easily accomplished by treating the entire latent vector as the mean of a Gaussian distribution with unit variance. That is, S(h) = N(h, I). However, this violates the recommendation that the amount of noise introduced to the resulting point feature samples should be much smaller than the size of the latent shape representation itself. Therefore, it may be the case that uniformly adding noise to every element of the latent vector obscures the crucial information it represents. NoiseLearn Decoder. NoiseLearn instead attempts to learn a small separate function V(h) which predicts the log-variance of the desired point feature distribution. Specifically, S(h) = N(h, exp(V(h)/2) · I). We define V(h) as a small MLP, the size of which can be seen in Table 1. (Figure 2: Diagrams of the different approaches to deriving a distribution from the latent shape representation h.) By allowing the network to choose the amount and location of noise to be added to the latent shape vector, we hope that it will learn to add an appropriate amount of noise for the target shape while conserving the information necessary to accurately reconstruct it, without introducing any additional hyperparameters. We evaluated each decoding architecture by training several instantiations of each architecture on a point cloud auto-encoding problem derived from the ModelNet40 dataset, which consists of over 12,000 3D models of 40 different common object classes. The dataset has a prescribed train/test split, with approximately 9800 models in the training dataset and 2500 in the test dataset. We randomly select 10% of the training data to use for validation during training. Before training, each object model in the ModelNet40 dataset is used to generate a uniformly sampled point cloud with 4096 points, which is then scaled to fit within the unit sphere. For all auto-encoder network models, at each iteration of training the point clouds are randomly downsampled to 1024 points before being used to update the network parameters. This helps reduce the computational cost of training while also encouraging better generalization. During training, each decoded point cloud consists of 1024 points. We use the Chamfer distance as the loss function due to its relative speed and its capability to directly compare point clouds of unequal sizes without modification. Each network is trained for 100 epochs using the ADAM optimizer with an initial learning rate of 10^-3, where each epoch performs a parameter update on each training example. The learning rate is decreased by a factor of 10 at epoch 50 and epoch 80.
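As a concrete illustration of the three sampling variants described above, here is a minimal PyTorch sketch. It is not the authors' implementation: the layer widths, the noise_dim value for the append variant, and the mean-normalized Chamfer distance are illustrative assumptions rather than the settings from Table 1 or the paper's code.

```python
import torch
import torch.nn as nn

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between clouds a: (n, 3) and b: (m, 3).
    Mean-normalized here; the paper may use a different normalization."""
    d = torch.cdist(a, b)                              # (n, m) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

class SampleBasedDecoder(nn.Module):
    """Sketch of the NoiseAppend / NoiseAdd / NoiseLearn decoders."""
    def __init__(self, latent_dim=128, noise_dim=8, mode="learn"):
        super().__init__()
        self.mode, self.noise_dim = mode, noise_dim
        in_dim = latent_dim + noise_dim if mode == "append" else latent_dim
        # Shared per-point MLP mapping a sampled point feature to an (x, y, z) location.
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )
        if mode == "learn":
            # Small MLP predicting per-element log-variance of the noise (NoiseLearn's V(h)).
            self.logvar_mlp = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
            )

    def forward(self, h, num_points):
        # h: (batch, latent_dim) shape representation -> (batch, num_points, 3) point cloud.
        b, d = h.shape
        h_rep = h.unsqueeze(1).expand(b, num_points, d)
        if self.mode == "append":        # NoiseAppend: concatenate unit Gaussian noise
            eps = torch.randn(b, num_points, self.noise_dim, device=h.device)
            feats = torch.cat([h_rep, eps], dim=-1)
        elif self.mode == "add":         # NoiseAdd: add unit Gaussian noise to h
            feats = h_rep + torch.randn_like(h_rep)
        else:                            # NoiseLearn: noise with learned per-element variance
            std = (0.5 * self.logvar_mlp(h)).exp().unsqueeze(1)
            feats = h_rep + std * torch.randn_like(h_rep)
        return self.point_mlp(feats)     # shared MLP applied to every sampled feature
```

In this sketch the decoder can be asked for any num_points at inference time, which is the key practical difference from the fixed-output MLP baseline.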
We trained five instantiations of each of the four network architectures, with each instantiation varying the number of parameters as shown in Table 1 (note that we were not able to scale down the MLP for the smallest parameter setting). For each instantiation we ran the entire training process 15 times, and all results show average performance across the 15 runs. All code and infrastructure for "push-button" replication of our experiments is open-source (Github/Gitlab location removed for anonymous review; code will be privately made available to reviewers through a comment approximately a week after submission). Quantitative Results. Figure 3 shows the validation loss along the learning curves for the 2M parameter instantiation of each architecture. The relative ordering of the architectures is consistent after the initial phase of training, with all curves flattening out by 100 epochs. First, the large jumps in the MLP training (due to unstable training runs) show that it was much less stable during training compared to the sample-based architectures. While effort was spent trying to specialize the training configuration for the MLP, stability remained an issue. In contrast, the runs for each sample-based architecture were stable and similar. Ignoring the MLP stability, it performs similarly to the worst-performing sample-based architecture by the end of the learning curve. The three sample-based architectures show rapid progress early in learning and then settle into a consistent ordering, with NoiseLearn performing best, followed by NoiseAppend, and then NoiseAdd. This suggests that NoiseAdd's approach of adding uniform noise to the latent representation may be obscuring information needed for accurate reconstruction, compared to NoiseAppend, which separates noise from the shape representation. On the other hand, we see that while NoiseLearn also adds noise to the latent representation, it is able to outperform NoiseAppend. This indicates the importance of being able to intelligently select how much noise to add to different components of the representation. Apparently, this allows NoiseLearn to avoid obscuring critical information in the latent representation needed for accurate reconstruction. Figure 4 shows the average test set performance after 100 epochs of training for each size instantiation of the four architectures (note the log scale). The appendix also shows more detailed results broken down for each of 5 selected object classes. The three sample-based architectures show relatively consistent improvement in performance as the sizes grow by orders of magnitude. In contrast, the MLP shows initial improvement, but then performance decreases significantly past 100K parameters. We looked further into the behavior of the MLP architecture for the larger parameter sets. We observed that the larger MLPs showed a similar decrease in performance even on the training data, indicating that the problem is not necessarily overfitting but rather the difficulty of the optimization. It is possible that with substantially more epochs the MLP performance would improve, but at great cost. This indicates that the MLP architecture is much less efficient at exploiting larger network sizes than the more structured sample-based architectures. It is possible that the architecture and training hyperparameters could be tweaked to improve the large MLP networks' performance, such as by adding additional regularization via weight decay or other mechanisms.
However, we consider this tweaking to be outside the scope of this work, and note that none of the sampling-based architectures required any such tweaking to achieve competitive performance at all parameter counts. (Figure 5: Examples of the networks' auto-encoding results on several previously unseen objects; panels show NoiseParam-2m, NoiseAdd-2m, and NoiseAppend-2m.) Overall, the results give strong evidence that the sample-based architectures encode a bias that is much better matched to decoding the class of shapes in ModelNet40 compared to MLPs. The structured sample-based architectures, compared to the MLP, resulted in more stable learning and the ability to continually improve as the architectures grow in size. Further, we see that the NoiseLearn architecture, which avoids the need to specify hyperparameters to control the amount of noise, performs the best, or near best, across all network sizes and numbers of epochs. Illustrative Qualitative Results. Figure 5 shows each network's performance on three test objects not seen in the training data. The point cloud decoded by the MLP network appears to be more evenly distributed spatially, while the sampling-based approaches are better able to capture finer detail in the target shape, such as the stool's thin legs and crossbars. Among the sample-based approaches, no single approach is clearly dominant in terms of visual quality across the objects. It is interesting that all of the sample-based architectures tend to miss the same type of object details, e.g., the jets on the plane or the leg crossbars on the chair, which may be due to limitations of the PointNet encoder's size and/or architecture. Nevertheless, it is quite interesting that a single relatively small latent vector representation is able to encode the level of detail exhibited in these results. Each sampling architecture defines a function from the latent shape representation to a point feature distribution. The underlying latent representation inherently defines the manifold of the encoded shape. Meanwhile, the injected noise (either via appending or addition) can be viewed as playing the role of indexing locations on the manifold for the generated point. Effectively, the primary difference between the sample-based architectures is how they use the noise to index locations and traverse the manifolds. Below we aim to better understand this relationship between noise and spatial indexing and how the architectures differ in that respect. In Figure 6 we demonstrate how each architecture uses noise by controlling the variance introduced to a trained network in two different ways. To examine how the decoder's output is influenced by individual elements of noise, we show the output of these networks when all but one of the noise elements is held at a constant near-zero value. In the lower plots, we show the decoder's behavior when it only receives the union of the noise elements above. This demonstrates both how the network learns to exploit individual elements of noise and how the decoder combines those elements to produce a point cloud that spans the entire shape. For NoiseAppend all of the noise is of equal magnitude, so we just examine the first five elements of noise in its noise vector. NoiseLearn predicts individual variances for each element in the dense shape encoding, enabling us to select the five elements of noise with the highest variance, and therefore presumably the biggest contribution to the decoded point cloud. The appendix contains additional examples of noise manipulation.
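The per-element probing procedure described above can be expressed in a few lines. The sketch below is illustrative: decoder and h are hypothetical handles for a trained NoiseAppend-style decoder that accepts explicit noise and for a latent code from the encoder, and sweeping the active element over a range (rather than sampling it) is an illustrative choice.

```python
import torch

# Hypothetical handles: decoder(h, eps) returns a (num_points, 3) cloud given a latent
# code h of shape (latent_dim,) and explicit noise eps of shape (num_points, noise_dim).
noise_dim, num_points = 8, 1024
eps_base = torch.full((num_points, noise_dim), 1e-3)     # hold every noise element near zero

per_element_clouds = []
for i in range(5):                                        # probe the first five noise elements
    eps = eps_base.clone()
    eps[:, i] = torch.linspace(-2.0, 2.0, num_points)     # vary only element i
    per_element_clouds.append(decoder(h, eps))            # traces one path on the shape

# The union of the probes shows how the decoder combines elements to cover the full shape.
union_cloud = torch.cat(per_element_clouds, dim=0)
```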
The plots shown in Figure 6 give us some insight into how the networks use noise to complete the decoded shape. Each individual element of noise appears to correspond to a learned path along the surface of the learned shape. The final point cloud then seems to be produced by 'extruding' along those paths. NoiseLearn's use of only four significant elements of noise suggests that in this domain only three or four elements of noise are sufficient to achieve good coverage of the target shape. Figure 8 shows how individual noise channels change when the NoiseAppend architecture is modified to only append one, two, and three noise elements. With only one element of noise, we can see that the network effectively has to learn a single path that spans as much of the target shape as possible. With two elements of noise, the network instead seems to learn individual 'loops' around the object which are transformed and rotated as necessary. Once the network has access to three elements of noise, we see the same behavior as the functional networks of learning small paths on the object's surface. If too little noise can seriously hurt NoiseLearn's performance, does adding too much noise do the same? Figure 7 shows the NoiseAppend architecture trained with different amounts of added noise to see if the same performance dropoff is present at both extremes. It appears that even when the noise vector is much larger than the dense shape representation, the decoder's overall performance is not impacted. However, note that adding large amounts of noise does significantly increase the parameter count, so there is a nontrivial cost to doing this. In this work, we evaluated and compared several realizations of a sample-based point cloud decoder architecture. We show that these sampling approaches are competitive with or outperform the MLP approach while using fewer parameters and providing better functionality. These advantages over the baseline suggest that sample-based point cloud decoders should be the default approach when a network needs to produce independent point samples of some underlying function or object. To further this area of research, we provide a complete open-source implementation of the tools used to train and evaluate these networks. The tables below show each architecture's average loss on each individual class in the ModelNet40 dataset. The best-performing network is bolded for each object class.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
SklVI1HKvH
We present and evaluate sampling-based point cloud decoders that outperform the baseline MLP approach by better matching the semantics of point clouds.
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks. Deep learning frameworks such as MXNet, PyTorch, and TensorFlow represent neural network models as computation graphs. Efficiently executing such graphs requires optimizing discrete decisions about how to map the computations in a graph onto hardware so as to minimize a relevant cost metric (e.g., running time, peak memory). Given that execution efficiency is critical for the success of neural networks, there is growing interest in the use of optimizing static compilers for neural network computation graphs, such as Glow, MLIR, TVM, and XLA (XLA team, 2017). Here we consider the model parallelism setting where a computation graph can be executed using multiple devices in parallel. Nodes of the graph are computational tasks, and directed edges denote dependencies between them. We consider here jointly optimizing over placement, i.e., which nodes are executed on which devices, and schedule, i.e., the node execution order on each device. These decisions are typically made in either one or two passes in the compiler. We consider two different objectives: 1) minimize running time, subject to not exceeding device memory limits, and 2) minimize peak memory usage. In the optimization literature, such problems are studied under the class of task scheduling, which is known to be NP-hard in typical settings. As scheduling and placement are just a few of the many complex decisions made in a compiler, it is essential in a production setting that a solution 1) produce solutions of acceptable quality fast, even on large graphs (e.g., thousands of nodes) and decision spaces, and 2) handle diverse graphs from various types of applications, neural network architectures, and users. In this work we consider learning an optimizer that satisfies these requirements. Crucially, we aim to learn an optimizer that generalizes to a broad set of previously unseen computation graphs, without the need for training on such graphs, thus allowing it to be fast at test time. Previous works on learning to optimize model parallelism decisions have not considered generalization to a broad set of graphs nor joint optimization of placement and scheduling. In earlier work by Mirhoseini et al., learning is done from scratch for each computation graph and for placement decisions only, requiring hours (e.g., 12 to 27 hours per graph). This is too slow to be broadly useful in a general-purpose production compiler. We propose an approach that takes only seconds to optimize similar graphs. In concurrent work to ours, generalization to unseen graphs has been shown, but those graphs are generated artificially by architecture search for a single learning task and dataset.
In contrast, we collect real user-defined graphs spanning a broad set of tasks, architectures, and datasets. In addition, both the earlier works of Mirhoseini et al. and the concurrent work consider only placement decisions and rely on TensorFlow's dynamic scheduler; they do not address the static compiler setting where it is natural to jointly optimize scheduling and placement. (Fig. 1: Overview of our approach. The Biased Random-Key Genetic Algorithm (BRKGA) is used to optimize execution decisions for a computation graph (e.g., placement and scheduling of nodes) with respect to a cost metric (e.g., running time, peak memory) computed using the performance model. BRKGA requires proposal distributions for each node in the graph to generate candidate solutions in its search loop. The default choice is agnostic to the input graph: a uniform distribution over [0, 1] at all nodes. We use a graph neural network policy to predict node-specific non-uniform proposal distribution choices (parameterized as beta distributions over [0, 1]). BRKGA is then run with those choices and outputs the best solution found by its iteration limit. By controlling the non-uniformity of the distributions, the policy directs how BRKGA's search effort is allocated such that a better solution can be found with the same search budget.) The key idea of our approach (Figure 1) is to learn a neural network that, conditioned on the input graph to be optimized, directs an existing optimization algorithm's search such that it finds a better solution in the same search budget. We choose the Biased Random-Key Genetic Algorithm (BRKGA; Gonçalves & Resende) as the optimization algorithm after an extensive evaluation of several choices showed that it gives by far the best speed-vs-quality trade-off for our application. BRKGA produces good solutions in just a few seconds even for real-world TensorFlow graphs with thousands of nodes, and we use learning to improve the solution quality significantly at similar speed. We train a graph neural network to take a computation graph as input and output node-specific proposal distributions to use in the mutant generation step of BRKGA's inner loop. BRKGA is then run to completion with those input-dependent distribution choices, instead of input-agnostic default choices, to compute execution decisions. The distributions are predicted at each node, resulting in a high-dimensional prediction problem. There is no explicit supervision available, so we use the objective value as a reward signal in a contextual bandit approach with REINFORCE. Our approach, "Reinforced Genetic Algorithm Learning" (REGAL), uses the network's ability to generalize to new graphs to significantly improve the solution quality of the genetic algorithm for the same objective evaluation budget. We follow the static compiler approach of constructing a coarse static cost model to evaluate execution decisions and optimizing them with respect to it, as done in prior work. This is in contrast to evaluating the cost by executing the computation graph on hardware. A computationally cheap cost model enables fast optimization. It is also better suited for distributed training of RL policies since a cost model is cheap to replicate in parallel actors, while hardware environments are not. Our cost model corresponds to classical NP-hard scheduling problems, so optimizing it is difficult. In this paper we focus fully on learning to optimize this cost model, leaving integration with a compiler for future work.
We structure the neural network's task as predicting proposal distributions to use in the search over execution decisions, rather than predicting the decisions themselves directly. Empirically we have found the direct prediction approach to be too slow at inference time for our application and to generalize poorly. Our approach potentially allows the network to learn a more abstract policy not directly tied to detailed decisions that are specific to particular graphs, which may generalize better to new graphs. It can also make the learning task easier, as the search may succeed even with sub-optimal proposal distribution predictions, thus smoothing the reward function and allowing the network to incrementally learn better proposals. The node-specific proposal distribution choices provide a rich set of knobs for the network to flexibly direct the search. Combining learning with a search algorithm has been shown to be successful in other domains, and our work can be seen as an instance of the same high-level idea. This paper makes several contributions: • We are the first to demonstrate learning a policy for jointly optimizing placement and scheduling that generalizes to a broad set of real-world TensorFlow graphs. REGAL significantly outperforms all baseline algorithms on two separate tasks of minimizing runtime and peak memory usage (section 5.3) on datasets constructed from 372 unique real-world TensorFlow graphs, the largest dataset of its kind in the literature and at least an order of magnitude larger than the ones in previous works. • We use a graph neural network to predict mutant sampling distributions of a genetic algorithm, specifically BRKGA, for the input graph to be optimized. This directs BRKGA's search in an input-dependent way, improving solution quality for the same search budget. • We compare extensively to classical optimization algorithms, such as enumerative search, local search, genetic search, and other heuristics, and analyze room-for-improvement in the objective value available to be captured via learning. Both are missing in previous works. Learning to optimize computation graphs: AutoTVM applies learning to the very different problem of optimizing low-level implementations of operators in a tensor program, while we focus on optimizing higher-level decisions such as placement and scheduling of ops. Other work uses graph neural nets and RL to learn a scheduling policy for data processing jobs on clusters. These works are conceptually similar to ours in their use of learning, applied to a different domain. Learning for combinatorial optimization: Our work is an instance of applying learning for combinatorial optimization. Previous works on learning graph combinatorial optimization algorithms have focused on problems such as Minimum Vertex Cover, Maximum Clique, Maximum Independent Set, etc. The task scheduling problem we consider is significantly different in that the objective value is a more complex function of node-level decisions. Also, we focus on large-scale, real-world TensorFlow graphs, while previous work uses small-scale, synthetic graph distributions. Learning a proposal distribution for stochastic search: Prior work learns a policy for predicting instance-dependent proposal distributions to be used in the stochastic optimizer STOKE for superoptimizing programs. However, it uses handcrafted instance features and shows results on relatively simple, small programs. In contrast, we automatically learn the instance representations and show results on real-world graphs.
An earlier work similarly learns a neural network to predict input-dependent proposal distributions for sequential Monte Carlo search for inference in a graphical model. Parallel task scheduling is a classical problem for scheduling ops in a computational graph to minimize runtime. Learning is not traditionally a part of the approaches proposed in this literature. Prior work studies greedy task scheduling approaches for TensorFlow, and other work develops a simulation-based optimizer for deep learning computation graphs that uses a larger decision space by combining data, model, and attribute parallelism. Our approach can potentially be extended to such larger decision spaces to achieve even bigger improvements in execution cost.

Figure 1 shows an overview of our approach. Given an input graph to optimize, instead of applying BRKGA directly with the default uniform distribution at all nodes, a graph neural network predicts beta distribution choices at each node. BRKGA is run with these choices to optimize placement and scheduling decisions with respect to the objective defined by the performance model. We first explain the performance model and BRKGA in this section, and the learning component in the next. A computation graph has a set of ops to run. Each op produces zero or more tensors and requires zero or more tensors as input. The runtime of each op is known and fixed (e.g., given by a simulator, as in prior work). In this setting, we consider the problem of finding an assignment of ops to devices and an overall schedule such that each op is run once, with the objectives of minimizing the peak local memory use across devices (e.g., to find a feasible way to run a large computation graph), or minimizing the runtime subject to a constraint on the peak memory used on any device. The performance model does not consider rematerialization of tensors, fragmentation when computing memory use, or asynchronous transfers between devices. Despite these simplifications, the model yields slight variants of problems that are known to be NP-hard and therefore remains a challenging setting in which to study how to learn an optimizer. See section A.4 for more details of the model. The biased random-key genetic algorithm (BRKGA) is a meta-heuristic framework that has been successful in a wide array of applications for solving hard combinatorial optimization problems (Gonçalves & Resende). In BRKGA, chromosomes in a population are encoded as n-dimensional vectors with entries in [0, 1] for some fixed n. This random-key encoding decouples the application from the genetic algorithm, specifically the crossover and mutant generation procedures. The BRKGA variant we use is specified by a fitness evaluation function f : [0, 1]^n → R, scalar integer parameters π, π_e, and π_c representing the population size, number of elites, and number of children, respectively, an elite bias ρ ∈ [0.5, 1.0), and a mutant generation distribution D over [0, 1]^n. The procedure aims to find a chromosome that maximizes f. The initial population (a collection of π chromosomes) is created by sampling from D. (Known good solutions may also be used to initialize a population.) One evolution step is completed as follows. 1. Sort the chromosomes in order of decreasing fitness using f. Denote the first π_e chromosomes as elites and the remaining chromosomes as nonelites. 2. Construct the next generation from three different sources of chromosomes: (a) Copy the elite chromosomes unmodified from the last generation.
(b) For each of the π c new children, select two parent chromosomes uniformly at random, one from the nonelites and one from the elites. Apply the crossover procedure (described below) to generate a new chromosome given the two parents. (c) Generate the remaining π − π e − π c by sampling from D. We continue the evolution procedure for a fixed number of evaluations, i.e., calls to f. Given an elite and nonelite chromosome a, b ∈ n (resp.), the crossover procedure produces a child chromosome c by independently combining entries from the parents. Specifically, for each index i ∈ 1,..., n independently, let c i = a i with probability ρ and c i = b i with probability 1 − ρ. Our use of BRKGA is standard except for the mutant-sampling distribution D, which is usually fixed to the uniform distribution. We generalize BRKGA for instance-specific learning by sampling from n independent beta distributions, whose parameters can vary by index. Beta flexibly admits non-uniform distribution choices and also subsumes the uniform choice. Given a computation graph, let d be the number of devices, o the number of ops, and t the number of tensors. We define the chromosome encoding a scheduling solution to have three distinct parts: o × d entries specifying op-to-device affinities; o entries specifying scheduling priorities for each op; t × d entries specifying tensor-to-device priorities for transfers that may be needed. Given a chromosome, op placements are picked by maximum affinity. Transfer ops are created as implied by the placements. We then obtain a schedule by performing a topological sort over the ops given their tensor dependencies, breaking ties by using the corresponding node priorities. Once a schedule is constructed, the performance model is used to evaluate its peak memory and/or runtime. When enforcing a memory constraint, the fitness of a schedule is encoded such that all memory-feasible schedules have better fitness than infeasible schedules. An example is provided in section A.5. We train a contextual bandit policy that predicts beta distribution choices for each of the nodes of a computation graph to be used by BRKGA to optimize it. For each round in the bandit setting, first the context is observed by drawing a computation graph G as an i.i.d. sample from a distribution D (e.g., a distribution over TensorFlow graphs). G has a set of nodes V and a set of edges E, with features associated with the nodes and edges. A policy p(a|G) is applied to make a set of decisions at each node. These decisions, denoted a v for each v ∈ V, across all nodes form one action a = {a v∈V}. One decision in a corresponds to playing one arm of the bandit, and specifying the entire a corresponds to playing several arms together in a single round. This can be viewed as a combinatorial multi-armed bandit problem . The action a specifies all the node-specific beta distributions BRKGA needs to optimize placement and scheduling decisions for G. To enable a policy over discrete choices, we quantize the mean and variance parameters of the beta distribution. The environment then runs BRKGA with those distribution choices with a fixed iteration limit. The final objective value is used to compute the reward. To make the reward values comparable across different graphs, we divide the objective value o a (G) achieved on a graph G with action a by the objective value o s (G) achieved by standard BRKGA using uniform distributions. 
Since we want to minimize the objective (e.g., runtime or peak memory), we define the reward as r(a, G) = −o_a(G) / o_s(G). So a reward > −1 corresponds to an action that achieves a better objective value than standard BRKGA on a graph. We maximize the expected reward L = E_G [ Σ_a p(a|G) r(a, G) ], where E_G is an expectation over graphs in our training set. Learning is done by REINFORCE. We added a scalar baseline b(G) for reducing the variance of the gradient estimates. From computation graphs, we derive multigraphs with attributed nodes and directed edges. Denote a multigraph G = (V, E). In our setup, the nodes V correspond 1:1 to the ops. An edge e ∈ E exists from u to v for each tensor that op v requires that is produced by op u. As a tensor can be required by multiple ops, the correspondence from edges to tensors may be many to one. Each node v ∈ V and edge e ∈ E has an attribute vector x_v and x_e, respectively. The attributes contain respective features, e.g., sizes of the tensors. We learn a model that predicts good mutant sampling distributions for BRKGA given this multigraph. Each node has d + 1 independent beta distributions, corresponding to device affinities and scheduling priorities, whose parameters are represented as a vector a_v. These are the model's actions in RL terms, and our model specifies a distribution over actions a = {a_v}_{v ∈ V} for each graph, p(a|G). Note the action space is different from graph to graph. We use Graph Neural Networks (GNNs) to learn representations for computation graphs. Given a (multi)graph G, a GNN computes representation vectors h_v for each node through an iterative message passing process, where e_s denotes the source node of edge e and e_t its target node. In our formulation, MLP_n and MLP_e are multilayer perceptrons (MLPs) that encode node and edge attributes, and MLP_msg and MLP'_msg compute messages along the edges in the edge direction and the reverse direction; after T rounds of message passing, h_v will contain information from the T-hop neighborhood around v in the graph. Given the h_v's, we produce the a_v's through conditionally independent predictions, where the prediction for one node v does not depend on the predictions of other nodes given the computed representations. MLP_a is shared across all nodes for predicting the parameters of the output distributions. In our experiments, we quantize the continuous beta distribution parameters and use a discrete action space. The outputs are therefore categorical, and we use the MLP to compute the logits of the corresponding softmax distributions. More details are included in section A.6. The baseline is computed using a separate GNN, where after we obtain the node representations h_v, we aggregate across nodes to predict a single scalar.

We consider two tasks, one is minimizing peak memory and the other is minimizing running time, both on two homogeneous devices with 16 GiB of memory each and synchronous tensor transfers with zero cost (zero latency and infinite bandwidth). We train a separate neural network for each task-dataset pair for the case of two devices. We have collected a dataset of 372 topologically-unique real-world TensorFlow graphs by mining machine learning jobs on a company's internal compute cluster (see A.1.2). These jobs are from a wide range of production and research use cases. The dataset is split into {train, valid, test} sets containing {60%, 20%, 20%} graphs, respectively. These sets are disjoint with respect to graph topology, so at test time the policy needs to generalize to new topologies.
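The message-passing equations referenced above are not reproduced in this text, so the following sketch is only one plausible instantiation of the described policy network: MLPs encode node and edge attributes, messages are passed along and against the edge direction for T rounds, and a shared head outputs per-node categorical logits over the quantized beta-distribution choices. All layer sizes and the exact update rule are assumptions.

```python
import numpy as np

def make_mlp(sizes, rng):
    """Tiny MLP helper: linear layers with ReLU on all but the last layer."""
    Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    def f(x):
        for i, (W, b) in enumerate(zip(Ws, bs)):
            x = x @ W + b
            if i < len(Ws) - 1:
                x = np.maximum(x, 0.0)
        return x
    return f

def gnn_policy_logits(x_v, x_e, edges, T=8, hidden=32, n_actions=16, seed=0):
    """Per-node action logits from a message-passing GNN (illustrative only).

    x_v: (num_nodes, dv) node features; x_e: (num_edges, de) edge features;
    edges: list of (src, dst) index pairs for the multigraph.
    """
    rng = np.random.default_rng(seed)
    enc_n = make_mlp([x_v.shape[1], hidden, hidden], rng)   # MLP_n
    enc_e = make_mlp([x_e.shape[1], hidden, hidden], rng)   # MLP_e
    msg_fwd = make_mlp([3 * hidden, hidden, hidden], rng)   # MLP_msg (edge direction)
    msg_bwd = make_mlp([3 * hidden, hidden, hidden], rng)   # MLP'_msg (reverse direction)
    node_upd = make_mlp([3 * hidden, hidden, hidden], rng)  # node update
    head = make_mlp([hidden, hidden, n_actions], rng)       # MLP_a, shared across nodes

    h = enc_n(x_v)
    he = enc_e(x_e)
    for _ in range(T):
        fwd, bwd = np.zeros_like(h), np.zeros_like(h)
        for (s, t), e in zip(edges, he):
            m = np.concatenate([h[s], h[t], e])
            fwd[t] += msg_fwd(m)       # aggregate messages at the target node
            bwd[s] += msg_bwd(m)       # and at the source node (reverse direction)
        h = node_upd(np.concatenate([h, fwd, bwd], axis=1))
    return head(h)  # (num_nodes, n_actions): logits of the quantized choices
```

Sampling from the softmax of these logits at each node, dequantizing to beta parameters, and running BRKGA with them yields the reward r(a, G) used by REINFORCE.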
We augment the dataset by applying multiplicative noise to tensor sizes and op running times to create several variants per graph. Even though the variants of the same graph share the same topology, they represent different optimization problem instances. We create separate datasets for minimizing runtime and peak memory. The TF runtime dataset has 16329 training, 5470 validation, and 5266 test graphs. The TF peak memory dataset has 22400 training, 7400 validation, and 7400 test graphs. For reproducibility, we have released a synthetic dataset of computation graphs with 10000 training, 1000 validation, and 1000 test cases (available at https://drive.google.com/drive/folders/1lxRl1ocsWu-POwbdEY06Mrzq6Ot99j7N). The graph topologies are generated from several classical random graph models, and the op running times and tensor sizes are sampled from Gaussian distributions (see A.1.4). On this dataset we minimize running time without a memory constraint (e.g., on two homogeneous devices with infinite memory).

Graph Partitioning + Depth First Search (GP+DFS): Combines a graph partitioning (GP) baseline for device placement to minimize communication across devices and a Depth-First Search heuristic similar to the one implemented in XLA to compute per-device schedules given placements. This is representative of the XLA compiler's solution for model parallelism.

Local Search: The method starts with a random placement and schedule and greedily improves it by moving an op across devices or changing an op's order in the current schedule.

Graph-As-Sequence Model (GAS): Like Mirhoseini et al. (2017; 2018), we convert the graph into a sequence using a topological sort and apply a recurrent neural network to predict node-level distributions to be used by BRKGA. This comparison measures the usefulness of graph structure for learning.

BRKGA XK: Run BRKGA for X thousand iterations with uniform sampling distributions using default hyperparameters consistent with Gonçalves & Resende. This comparison measures the performance of the default version of BRKGA.

Tuned BRKGA: Apply grid search to BRKGA's hyperparameters on the training set and pick the best. This represents how well BRKGA performs by customizing it to the distribution of computation graphs, but without instance-dependent customization.

Instance-dependent Random Search (IDRS): Same as REGAL, but BRKGA is replaced with random search. This is done by running BRKGA for only one generation using the proposal distributions computed by the neural network.

Additionally, we use a Constraint Programming (CP) approach with the CP-SAT solver of Google OR-Tools to establish a provably global optimum for each computation graph optimization problem instance by running for up to 24 hours. As an enumerative algorithm, it is generally not competitive when run only for seconds. For a fair comparison, we fix the number of performance model evaluations allowed per graph to be the same across algorithms. (Except GP+DFS, which does not allow fixing it.) Given typical TensorFlow graph sizes and compiler running time constraints, we estimate that a budget of 5,000 evaluations is feasible in practice, so we use that in the experiments. Learning to directly predict a solution: We have explored two more approaches for training a graph neural network to predict placement and scheduling solutions directly, without BRKGA. We used supervised learning to train a network to predict BRKGA's solutions.
The best accuracy was achieved by predicting the solution autoregressively, one variable at a time conditioned on previously predicted variables. We also used RL to learn a policy with IMPALA to optimize the objective value by incrementally predicting the solution one variable at a time, and once complete, iteratively improving it with a learned local search policy. The inference cost for both approaches is quadratic in the number of nodes (the graph net is applied a linear number of times, each with linear cost), while REGAL's inference cost is linear, making them orders of magnitude slower than REGAL at test time. An evaluation on a test set of small graphs showed that neither approach improves on BRKGA 5K. Improving the scalability and the generalization of these approaches is left as future work, and we do not present their results here.

We use two metrics to compare algorithms. 1) Average percent improvement over BRKGA 5K: For a given graph, compute the percent improvement in the objective achieved by an algorithm relative to BRKGA with evaluation limit set to 5,000. BRKGA 5K is a natural reference for measuring the effect of learning approaches that predict proposal distributions for it. 2) Average percent gap from best known solution: Compute the best known objective value among all the algorithms. (This will be found by CP-SAT if it finishes within the time limit.) Compute the percent difference between an algorithm's solution and the best known objective value. We report averages over test set graphs. Training set results are similar and are reported in section A.3.

Table 1 compares REGAL to other algorithms on the two TensorFlow test sets and the synthetic dataset. REGAL outperforms all the baselines on all three tasks. It gives 1.9× and 4.4× bigger improvements than the next best algorithm on runtime and peak memory minimization tasks, respectively. The percent improvement over GP + DFS is 44.4% and 10.1% for runtime and peak memory, respectively. REGAL reduces the average percent gap from the best known solution by about 1.8× with respect to BRKGA 5K on both TensorFlow test sets, and by about 6× and 3.3× with respect to GP + DFS on the TensorFlow Runtime and Peak Memory test sets, respectively. (For an XLA user, GP + DFS is the current, albeit weak, state-of-the-art algorithm.) The synthetic test set shows similar results. The learned policy successfully generalizes to previously unseen graphs, to the extent that a large fraction of the estimated room for improvement over BRKGA 5K is captured using the same evaluation limit. To further test the limits of generalization for the policies learned using REGAL, we evaluate them on XLA graphs from a production compiler team's internal performance benchmark. XLA uses a different set of ops from TensorFlow, and the benchmark graphs on average have about an order of magnitude more nodes and edges than the TensorFlow graphs in our training set, so this is a difficult generalization challenge. REGAL achieves 0.58% average runtime improvement over BRKGA 5K on 94 graphs, and 3.74% average peak memory improvement on 32 graphs. It is promising that any improvements are possible at all despite training only on TensorFlow graphs, and points to the possibility of bigger improvements by training directly on XLA graphs. Optimizer running times: BRKGA 5K takes on average 0.89 seconds on the TensorFlow Peak Memory test set to optimize a computation graph, while REGAL takes 1.04 seconds. (The times are similar on the Runtime test set.)
Instead of taking hours to compute a solution per graph (e.g., Mirhoseini et al. (2017; 2018)), REGAL produces solutions in orders of magnitude less time, while still being better than all the baselines.

5.4 COMPARING REGAL VS. BRKGA
Figure 2 shows histograms of percent improvements in runtime (left) and peak memory (right) achieved by REGAL over BRKGA 5K on the test sets. Green bars correspond to graphs on which REGAL improved over BRKGA 5K, while red bars correspond to graphs on which REGAL was worse. (Ties have been omitted for clarity.) REGAL matches or beats BRKGA 5K on 87.4% of the runtime test set, and 88.9% of the peak memory test set. The highest improvement is 26.0% for run time and 54.3% for peak memory, while the worst regression is 24.0% for run time and 17.9% for peak memory. To assess whether the improvements provided by REGAL's policy generalize to evaluation limits other than the one for which it was trained, we varied the evaluation limit used by both BRKGA and REGAL at test time. The results are shown in Figure 3. REGAL's performance improves with more evaluations, confirming that the policy generalizes to higher evaluation limits. In other words, there exist node-level choices for the distributions used in BRKGA that perform well regardless of the evaluation limit, and REGAL learns to predict those choices. This is particularly useful in cases where the actual evaluation limit to use will be known only at test time, so that the same policy can be applied without re-training. Interestingly, even with 50,000 evaluations, BRKGA is not able to match REGAL's performance with just 5,000 evaluations! The RL agents' actions are instance dependent. The agent that performs the best on the TF Runtime dataset has a choice of 16 different node placement actions for each node in a graph. For each graph in the TF Runtime test set, we compute the entropy of the distribution of the node placement actions taken by the agent and plot a histogram of these entropies in Figure 4 (a). The mean entropy is 1.71 nats, which implies that the actions are neither uniform random, nor constant, and vary from graph to graph. Furthermore, the agent's performance overall gets better with more graph message passing iterations T. Figure 4 (b) shows the peak validation reward reached within a hyperparameter sweep for each T for the TF runtime optimization task. Models that utilize the GNN with message passing (T > 0) reach higher performance than T = 0 (i.e., ignoring the graph structure). By training a graph neural network policy to predict graph-conditional node-level distributions for BRKGA, REGAL successfully generalizes to new graphs, significantly outperforms all baselines in solution quality, and computes solutions in about one second on average per TensorFlow test set graph. REGAL's speed and generalization make it a strong choice for use in a production compiler that needs to handle a diverse set of graphs under a limited time budget. We foresee several extensions. Integrating REGAL into a neural network compiler would allow us to evaluate the end-to-end gains due to better placement and scheduling decisions. To further improve REGAL's own performance, one could use a Mixture of Experts architecture. Given the diversity of graphs, a mixture model can train specialized sub-models on different types of graphs (e.g., convolutional networks, recurrent networks, etc.). Another extension is to replace BRKGA with alternatives, e.g., combining learned neural policies with local search.
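The per-graph action-entropy analysis above (Figure 4(a)) amounts to a few lines; a sketch, assuming the sampled per-node placement action ids are logged as an integer array:

```python
import numpy as np

def placement_action_entropy(node_actions, n_actions=16):
    """Entropy (in nats) of the empirical distribution of per-node placement
    actions taken on one graph. node_actions holds one action id per node."""
    counts = np.bincount(node_actions, minlength=n_actions).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# An agent that always picks the same action has entropy 0, a uniform choice
# over 16 actions has entropy log(16) ~= 2.77 nats, and the reported mean of
# 1.71 nats lies in between, i.e., the choices are neither constant nor random.
```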
Figure 6 gives statistics for the number of nodes and edges in the datasets. The broad range of graph sizes indicates the diversity of the datasets. We collected a dataset by mining TensorFlow jobs running in a shared production cluster and extracting computation graphs in the MetaGraphDef format. As lots of computation graphs were repeated due to device/machine/job replicas, we deduplicate the dataset by graph topology (specifically node in-degree sequence). We have not applied any other kind of filtering to restrict the dataset in any way (e.g., by architecture, input modality, learning task, dataset, etc.). Since the organization from which the graphs were collected has a large and diverse set of production and research use cases across input modalities, learning types, and datasets, we strongly believe our dataset is representative of a broad, real-world distribution of TensorFlow graphs. Computational costs for these computation graphs are simulated with an in-house simulator (based on Grappler) that outputs memory and running time profiled information in the CostGraphDef format. The simulator's TensorFlow op coverage did not include custom kernels or complicated control flow like cycles (e.g., tf.while_loop). The train-validation-test set split is made by selecting a set of 5 graphs from a list of graphs sorted by number of nodes, and splitting them as 3-1-1 across the three sets, respectively. This ensures that the distribution of the number of nodes is similar for the three sets. For each graph in the train/validation/test sets, we make 99 copies of it and multiply each tensor size and each op running time cost with a uniformly sampled number in the interval (0.5, 1.5) (one sample per tensor size per copy plus one sample per TF op per copy). The modified copies are added back to the respective set so that the graph topologies in train/validation/test do not overlap. Graphs with no relevant cost information or no room for improvement (e.g., a graph with a single node, or a chain in the minimizing running time task) are filtered. This results in two different datasets for the two objective functions, one for runtime and another for peak memory. The encoding is described in A.2.

A.1.3 XLA DATASET
We also collected a dataset by dumping CostGraphDefs during the XLA compilation of several benchmark graphs and extracted the greatest-size control-flow-free subgraph. After deduplication 94 graphs remained. The encoding is described in A.2.

We sample synthetic graphs from a set of classic random graph models, including the Erdos-Renyi model, the Barabasi-Albert model, the Watts-Strogatz model, and the stochastic block model. The parameters we used for each of the random graph models are listed in Table 2:
Erdos-Renyi: edge probability p = 0.05.
Barabasi-Albert: each node connected to m = 2 previous nodes.
Watts-Strogatz: each node connected to k = 4 neighbors initially, with probability p = 0.3 to swap an edge with a random edge.
Stochastic block model: number of blocks k = 4, within-block edge probability p = 0.3, cross-block edge probability q = 0.01.
The number of nodes in a graph is sampled uniformly from a fixed range. Note that all the above random graph models generate undirected graphs. To convert the graphs into directed acyclic graphs, for each graph we sample a random ordering of nodes π, and then set the direction of all edges (i, j) such that π(i) < π(j), i.e., node i comes before node j in the ordering π.
After setting all the edge directions, we locate all the head nodes that don't have any predecessors and all the tail nodes that don't have any successors, and then create a source node and a sink node, and create edges from the source node to all the head nodes, and from all the tail nodes to the sink node. Real TensorFlow graphs all contain one source and one sink node. Examples of synthetic graphs generated from the 4 random graph models are shown in Figure 7. Each edge (i, j) in the graph represents either a control dependency (op j can run only when op i is finished) or a data dependency (op i can run only when some output tensor(s) produced by op j is available). We assign a probability of 0.1, 0.8 and 0.1 for each op to produce 0, 1 or 2 output tensors. When op i produces 0 output tensors, any other op j that depends on it can only be through a control dependency. Otherwise, op j will have a probability of 0.2 to make the dependency a control dependency, and probability of 0.8 to make this a data dependency, in which case an output tensor is picked according to an uniform distribution. We fill in the memory cost for each tensor by sampling from a Gaussian distribution with mean 50 and standard deviation 10. The time cost for each op is computed as the sum of all the input and output tensor memory costs plus a random noise that is a fraction r of the total memory cost, with r sampled from a Gaussian with 0 mean and standard deviation 0.1. The source and sink nodes do not have any memory or time costs. To make sure the generated synthetic graphs are interesting, we apply an additional filtering step by running BRKGA 1k and BRKGA 10k, and keeping only the graphs whose runtime improved by at least 18%. This gives us a dataset of graphs that on average improve runtime by 20% from running BRKGA 1k to 10k. The synthetic data in CostGraphDef format is available at https://drive.google.com/ drive/folders/1lxRl1ocsWu-POwbdEY06Mrzq6Ot99j7N. The encoding is described in A.2. We consider "control dependencies" as being tensors of size zero. Each triple (op producer, tensor, op consumer) in the CostGraphDef is encoded as a separate edge. Each of these edges e is associated with three features denoted by x e: the size of the tensor that is associated with the edge, a one-hot feature indicating if the edge is a control edge, and normalized index of the hyperedge to which the edge belongs. This means that the graph neural network's input is a directed graph with multiple edges. (There are alternative encodings like a bipartite graph where both TF ops and TF tensors are nodes, and edges exist only between ops and tensors when the op consumes or produces the tensor.) Each node v in the computation graph is associated with node features x v. There are 11 node features in total, which can be grouped into three categories: memory-based, runtime-based (not used in the peak memory task) and BRKGA-based (task dependent). As memory-based node features, we use the sum of input tensor sizes, the sum of output tensor sizes, the extra internal memory of the TensorFlow op, and a one-hot indicator of whether the op is the one which uses the greatest memory in the graph. As runtime-based node features, we use the sum of direct predecessor nodes' running times, the sum of direct successor nodes' running times, the running time cost of the op itself, and one-hot indicator of whether the op is the one with the greatest runtime cost in the graph. 
As BRKGA-based node features, we have a node aggregation (the expectation of the placement per device and the schedule order for each node) of the chromosomes found by BRKGA (minimizing peak memory for the peak memory dataset and minimizing runtime for the runtime dataset) running for 400 evaluations with uniform random distributions. To make comparisons fair, REGAL with K fitness evaluations means 400 evaluations to compute features, and K − 400 fitness evaluations for BRKGA using the instance-specific distributions. For each graph, all node or edge features relating to memory size are normalized by the greatest memory size number in that graph and all features relating to runtime are are normalized by the greatest op runtime cost. To break symmetry and reduce a single degree of freedom without loss of performance, we fix the placement of the node with highest memory for the memory task and runtime for the runtime task to the first device. Figure 8 shows the reward curves on the training set for runtime minimization (left) and peak memory minimization (right). Each point in the curves is the average reward achieved on a minibatch of training graphs at the corresponding training step. The graph neural network policy is trained using TensorFlow on 10 cores of a CPU machine (no GPUs were used), with multi-threading used by BRKGA for evaluating chromosomes and by TensorFlow. A training run takes approximately 2-3 days to be completed. The final average percent improvement over BRKGA5K on the training set is similar to that of the test set for both tasks: 7.25% on the training set vs. 7.09% on the test set for runtime minimization, and 4.36% on the training set vs. 3.56% on the test set for peak memory minimization. The small gap between train and test shows that the policy is able to generalize successfully to unseen graphs at test time. The scheduling problem is specified by a set of devices D and a computation graph. The computation graph has the list of ops j ∈ N and tensors τ ∈ T, the tensors produced by each op I(j) ⊆ T, the tensors consumed by each op C(j) ⊆ T, the memory used by each tensor m τ, and the execution time of each op r j. A tensor is produced by exactly one op but can be consumed by many. Solutions to the scheduling problem are constructed as follows. A placement is an assignment pl: N → D from ops to devices. Given a placement we defineÑ pl as the set of ops N extended with synchronous inter-device transfer operations. A transfer operation consumes a tensor on the device where it was created and produces it on a device where it is needed. Given a placement pl, a schedule is a total ordering s:Ñ → {1, 2, . . ., |Ñ |} on ops inÑ pl. We say that op j runs at simulation time step s(j). We model the schedule execution as follows. At each simulation time step t ∈ {1, 2, . . ., |Ñ |}, each device d has a list l d,t of tensors currently in memory. A tensor is added to the list when produced by an op that runs on the device or by a transfer op that receives the tensor on the device. A tensor is removed immediately after all of its consumers on the device have run. A schedule is valid if for each op j, all the input tensors are available on the corresponding device at simulation time step s(j). See Section A.9 for an example schedule. The memory used on a device at simulation time step t is the sum of the memory used by each tensor that is in memory, i.e., τ ∈l d,t m τ. The peak memory of a schedule is the maximum value of the memory used at any time and on any device. 
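A minimal sketch of the peak-memory evaluation described above, under the simplifying assumption that every tensor resides on the device of its producer and cross-device transfers are ignored; the data-structure names are assumptions, not the actual simulator's API.

```python
from collections import defaultdict

def peak_memory(schedule, placement, produces, consumers, tensor_mem):
    """Peak memory of a schedule under the (simplified) model above.

    schedule: ops in execution order; placement: op -> device;
    produces: op -> list of tensor ids; consumers: tensor id -> list of ops;
    tensor_mem: tensor id -> memory size m_tau.
    """
    step = {op: i for i, op in enumerate(schedule)}
    producer = {t: op for op, ts in produces.items() for t in ts}
    # A tensor is freed right after its last consumer runs (or immediately
    # after production, if nothing consumes it).
    freed_at = {t: max([step[producer[t]]] + [step[c] for c in consumers.get(t, [])])
                for t in tensor_mem}
    to_free = defaultdict(list)
    for t, i in freed_at.items():
        to_free[i].append(t)

    live = defaultdict(float)   # device -> memory currently in use
    peak = 0.0
    for i, op in enumerate(schedule):
        for t in produces.get(op, []):
            live[placement[op]] += tensor_mem[t]
        peak = max(peak, max(live.values(), default=0.0))
        for t in to_free[i]:
            live[placement[producer[t]]] -= tensor_mem[t]
    return peak
```

A runtime evaluation would walk the same schedule while accumulating op execution times per device and blocking on synchronous transfers, as described next.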
The runtime of a schedule is computed by stepping through the simulation time steps in order and accounting for the execution time of each op on each device. Synchronous transfers block until both the sender and the receiver have completed their preceding tasks, and their execution time depends on a known bandwidth between devices. A.5 BRKGA CHROMOSOME ENCODING Let the input graph G contain o ops and t tensors which must be placed over d devices. Then, the BRKGA chromosome for this graph is a vector c ∈ o×d+o+t×d composed of the following parts Fig. 9: A computation graph and an example of a chromosome encoding. 1. The first o × d entries in c represent the node-device affinities, one value for each (node, device). Each node is assigned to the device for which it has the highest value in the chromosome. 2. The next o entries represent the node scheduling priorities. A valid schedule of the computation graph is obtained by performing a topological sort over the nodes of the graph, breaking ties using the node scheduling priorities. Nodes with higher priority are scheduled first. 3. The final t × d entries represent the tensor transfer priorities, one entry for each (tensor, device) pair. These priorities determine the order in which tensors are transferred across devices. An example of a chromosome encoding is shown in Figure 9 for a graph with o = 3 nodes, t = 3 tensors and d = 2 devices. As per the example, nodes 1 and 3 are placed on device 1 while node 2 is placed on device 2. The scheduling order over the nodes is 1, 2, 3. Since nodes 1 and 2 are placed on different devices, tensors A and C must be transferred from device 1, where they are produced, to their consumer, node 2, which is on device 2. As per the tensor transfer priorities, tensor C is transferred before tensor A since tensor C has a higher priority to get transferred to device 2. Each real number in a BRKGA chromosome is sampled from its own Beta distribution, which is parameterized by two real numbers α and β. To be more precise, if we denote the chromosome by is a Beta distribution with parameters α i and β i. To be able to run BRKGA, REGAL must propose the values for α i and β i for each i. As described in A.5, the BRKGA chromosome consists of three parts. In REGAL, we optimize the RL agents over choices of α i and β i for the first two parts of the chromosome, i.e., the parts of the chromosome corresponding to the node placement and scheduling decisions. The Beta distribution parameters for the tensor transfer priorities are fixed to α i = β i = 1 which correspond to the uniform random distribution. Thus, for a graph with o ops and with d devices, the RL agent must propose (d + 1) × 2 values for each of the o ops in the graph. To make the learning task easier, rather than directly predicting values of α and β, we quantize the output space of the RL agent's actions such that each action uniquely maps to a Beta distribution. This mapping is done as follows: • For each node in the graph 1 ≤ i ≤ o, and for each entry 1 ≤ j ≤ (d + 1) correponding to node i in the BRKGA chromosome, the agent performs the set of actions • m ij and v ij ∈ {0, 1, . . 
., k − 1} where k is some fixed constant greater than 1, and these represent the quantized mean µ ij and variance σ 2 ij of the Beta distribution which are related to each other as follows: • µ ij and σ 2 ij can be mapped to α ij and β ij for a Beta distribution as follows: • The values m ij and v ij are sampled from a Categorical distribution whose logits are determined by We use a similar quantization strategy for the BRKGA crossover probabilities. For every crossover probability, we sample an integer c ∈ {0, 1, . . ., k − 1} from a Categorical distribution for some fixed integer constant k, and the dequantized crossover probability is given by 0.5 * 1 + c+1 k MLPs Multi-layer perceptrons, or multi-layer fully connected neural networks are models that map input vectors to output vectors through layers of linear transformations and nonlinear activation functions, like the following: where x is an input vector, (W i, b i) are the parameters for the ith layer, and h is the output vector. σ is a nonlinear scalar function applied element-wise to the input vectors. Typical choices include the logistic sigmoid function σ(x) = 1 1+e −x, tanh function σ(x) = e x +e −x e x −e −x and the ReLU function σ(x) = max{0, x}. RNNs and LSTMs Recurrent neural networks (RNNs) are good sequence models. Typical RNNs contains a recurrent memory c t that is updated recursively by taking some input at each step t as the following: where x t is the input at step t. The simplest RNN cell has the following form where W, b are the parameters and σ is a nonlinearity. Long-short term memory (LSTM) models are a type of RNNs that uses explicit gating to control the access to the memory. LSTMs distinguish the memory c t and the output of the LSTM h t as two sets of vectors, and compute the update at step t as Here i, f, o are the input, forget and output gates and is element-wise multiplication. The carefully designed memory access control through gating makes LSTMs better at modeling long-term dependencies. We can use an autoregressive model to capture some structure in the outputs. Given the node representations h v for each of the nodes from the GNN, we can utilize an ordering of the nodes, e.g. from a topological sort, and treat the node representations as a sequence, and then use an LSTM to predict the outputs y v sequentially. We tried this approach but found that using an LSTM on top of the h v's to predict y v's did not perform as well as the conditionally independent model. The reasons for this might be: the autoregressive approach relies on a sequential ordering of the nodes, and this ordering might not be reliable nor consistent across graphs; the number of nodes in the computation graphs can be large, and learning recurrent models on long sequences is known to be challenging; the noisy training signal in our REINFORCE-based training setup makes this model even more difficult to train. A.9 PLACEMENT AND SCHEDULING EXAMPLE Figure 10 illustrates a computation graph, a valid schedule, and how we account for which tensors are in memory at a given time under the model presented in sec. 3.1. A.10 BASELINES Here we provide supplementary information about our baselines. • CP SAT: The multi-device peak memory minimization problem is formulated for the CP solver using a model of the operation execution order, tensor lifetimes, and cumulative constraints to evaluate peak memory usage. The solver is guaranteed to find the globally optimal solution given sufficient time. 
• Graph partition + DFS: The graph partitioning objective is to minimize data transferred across devices. We use the modified implementation of the Kernighan-Lin algorithm used by XLA for device placement in some settings. This implementation is generally slower than heuristics implemented in popular libraries like METIS although it tends to find better quality solutions. • Local Search: The initial schedule is a topological sort order of the ops, and the initial placement selects devices uniformly randomly. A local move either changes the device assignment of the op, or changes the op order in the current schedule. The hyperparameters (e.g., number of random restarts) are set to values that perform the best on a sample of 10,000 graphs in the training set as found by grid search. • Tuned BRKGA: The following hyperparameters of BRKGA using grid search: the beta distribution parameters (two scalars), and the number of chromosomes, elites, mutants, and populations. The grid search tries 648 hyperparameter settings and picks the best one as evaluated on 10,000 training set graphs. • REGAL: The performance of REGAL is stochastic both because the actions are stochastic and because BRKGA itself depends on random samples for mutation and crossover. We estimated the standard deviation of the percent improvement statistics with respect to these sources of randomness as below 0.1%, which is small compared to the differences we observe. Hence we have omitted the error bars from figures 3. A.11 RUNNING TIME COMPARISON FOR ALGORITHMS Table 3 shows the average running times of the various algorithms on the TensorFlow test set and the XLA dataset, as measured on an Intel Xeon E5-1650 3.60GHz machine. The times are averaged over the unaugmented graphs in the test set. REGAL provides both fast running time as well as high solution quality. For a slightly higher running time than BRKGA 5K, REGAL improves the solution quality significantly. Almost all of the added running time is due to extra cost of sampling beta distributions by REGAL compared to uniform distributions by BRKGA. This can be seen from the nearly identical running times of REGAL and Tuned BRKGA, which also uses beta distributions, but without the neural network policy. The local search heuristic runs slowly because it was not implemented efficiently, e.g., with incremental evaluations of the objective; we show its timing for completeness only. REGAL can train a policy to generate any subset of the following actions for BRKGA: 1) Actions for node placement priorities, and 2) Actions for node scheduling priorities. We train REGAL with various subsets of these actions and compare their performance against each other in table 4. We observe that on the validation set, REGAL performs best when it has to learn actions for both placement and scheduling compared to just scheduling or placement alone. Our best model trained on the TF peak memory dataset is capable of generating 16 different kinds of actions for node placement decisions and 4 different kind of actions for node scheduling decisions. Each of these actions determine the shape of the Beta distributions from which we sample the node-device affinities and the node scheduling priorities. In this section we attempt to gain insights into the structure of these actions. 
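Concretely, each discrete action pair (m, v) is dequantized into the (α, β) parameters of a Beta distribution. The exact grid used for the quantized mean and variance is not reproduced cleanly in the text above, so the sketch below uses an assumed grid together with the standard mean/variance-to-(α, β) conversion.

```python
def dequantize_beta(m, v, k=16, eps=1e-3):
    """Map quantized action ids m, v in {0, ..., k-1} to Beta(alpha, beta).

    The evenly spaced mean grid and the variance grid below are illustrative
    assumptions; only the conversion from (mean, variance) to (alpha, beta)
    is the standard one. The variance is clipped to stay strictly below
    mu * (1 - mu), as required for a valid Beta distribution.
    """
    mu = (m + 1.0) / (k + 1.0)                  # assumed mean grid in (0, 1)
    max_var = mu * (1.0 - mu)
    var = min(max_var * (v + 1.0) / (k + 1.0), max_var - eps)
    nu = mu * (1.0 - mu) / var - 1.0            # "pseudo-count" parameter
    return mu * nu, (1.0 - mu) * nu             # alpha, beta

# A mean of 0.5 and a variance of 1/12 map to alpha = beta = 1, i.e., the
# uniform proposal that default BRKGA uses at every node.
```

With this mapping in mind, the following analysis groups actions by the device preference and scheduling priority they induce.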
We divide our node placement decisions into three categories: • Actions that give a node a higher probability to be placed on device 1 • Actions that give a node a higher probability to be placed on device 2 • Actions that give equal preference to the two devices. Similarly, we divide our node scheduling decisions in two categories: • Actions that give nodes a "high" scheduling priority. • Actions that give node a "low" scheduling priority. Finally, we aggregate the average relative memory consumption of all nodes that were assigned the same set of actions, where memory consumption of a node is defined as the sum of the memory uses of all its input and output tensors. The relative memory usage is the memory usage normalized by the largest memory usage of a node in the graph. We plot this data in Figure 11. On the right, each cell represents the average relative memory consumption of the nodes that were assigned a particular placement and scheduling decision (darker cells indicate nodes with higher average relative memory consumption). On the left, each cell represents the frequency of the nodes that were assigned a particular placement and scheduling decision (darker cells indicate higher frequency). With these we can make the following observations: • On average, nodes with higher normalized memory consumption are assigned lower scheduling priorities. • Most of the nodes with the highest relative memory consumption have no affinity for either of the two devices. • For nodes with the lowest relative memory consumption, most of them have an affinity to be placed on device 2, while a smaller but still significant number of them prefer device 1. This implies that the node placement strategy is more complicated than a trivial "place lighter nodes on device 2" strategy and REGAL's actions are non-trivially dependent on the input. The graph neural network had a state size of 32 for each node and edge, 16 propagations, all networks MLP n MLP e MLP node MLP msg MLP msg being two layers of size 32, the aggregation used was mean pooling. For faster training, the reward of the training set was made with 1000 fitness evaluations for REGAL and BRKGA (4600 for REGAL and 5000 for BRKGA for the validation and test sets). Training lasted 100000 gradient steps with each step having a mini-batch of size 4 and with gradient clipping by L 2 norm with value 10. The baseline mean squared error term's contribution to the overall loss was weighted by 0.0001. The optimizer was Adam with beta1 0.9 beta2 0.999 epsilon 1e − 8 and learning rate 0.0001. The number of devices (for the memory model) was 2. A.15 HYPERPARAMETERS OF THE BEST AGENT FOR RUNTIME TF The graph neural network had a state size of 32 for each node and edge, 16 residual graph propagations, all networks MLP n MLP e MLP node MLP msg MLP msg being two layers of size 32, the aggregation used was sum. Training lasted 100000 gradient steps with each step having a mini-batch of size 4 and with gradient clipping by L 2 norm with value 10. The baseline mean squared error term's contribution to the overall loss was weighted by 0.0001. The optimizer was Adam with beta1 0.9 beta2 0.999 epsilon 1e − 8 and learning rate 0.0001. With k = 16 for scheduling and k = 2 for placement (k being the quantization level defined in A.6). Test graphs, by unique topology Fig. 12: A box plot of rewards on the TF Runtime test set by unique graph topology. There are 100 graphs for each topology, 99 of which generated by data augmentation. 
A reward greater than -1 implies that REGAL finds a better solution than BRKGA. Box plots visualize the 25th, 50th, and 75th percentiles and the range of a set of points.

A.16 HYPERPARAMETERS OF THE BEST AGENT FOR RUNTIME SYNTHETIC
Same as A.15 but with 2 graph propagations, GRU node updates, and aggregation using mean.

A.17 PERFORMANCE BY GRAPH ARCHITECTURE
Figure 12 shows the distribution of rewards on the TF Runtime test set broken down by graph topology. As described in Section 5.1, for each unique topology we augment the dataset by perturbing the tensor sizes and op running times. This generates a distribution of 100 rewards per topology. The variance for a fixed topology is typically relatively small.
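A sketch of the per-topology aggregation behind a Figure-12-style box plot, assuming the evaluation produces (topology id, reward) pairs with 100 rewards per topology after augmentation:

```python
from collections import defaultdict
import numpy as np

def rewards_by_topology(records):
    """Summarize rewards per unique graph topology (quartiles, extremes, and
    the fraction of graphs on which REGAL beats BRKGA, i.e., reward > -1)."""
    groups = defaultdict(list)
    for topo, reward in records:
        groups[topo].append(reward)
    summary = {}
    for topo, rs in groups.items():
        q25, q50, q75 = np.percentile(rs, [25, 50, 75])
        summary[topo] = {"q25": q25, "median": q50, "q75": q75,
                         "min": min(rs), "max": max(rs),
                         "beats_brkga": float(np.mean(np.array(rs) > -1.0))}
    return summary
```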
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkxDoJBYPB
We use deep RL to learn a policy that directs the search of a genetic algorithm to better optimize the execution cost of computation graphs, and show improved results on real-world TensorFlow graphs.
Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. One of the most basic prediction tasks is view prediction: how would a given scene look from an alternative viewpoint? Humans excel at this task. Our ability to imagine and fill in missing visual information is tightly coupled with perception: we feel as if we see the world in 3 dimensions, while in fact, information from only the front surface of the world hits our (2D) retinas. This paper explores the connection between view-predictive representation learning and its role in the development of 3D visual recognition. We propose inverse graphics networks, which take as input 2.5D video streams captured by a moving camera, and map to stable 3D feature maps of the scene, by disentangling the scene content from the motion of the camera. The model can also project its 3D feature maps to novel viewpoints, to predict and match against target views. We propose contrastive prediction losses that can handle stochasticity of the visual input and can scale view-predictive learning to more photorealistic scenes than those considered in previous works. We show that the proposed model learns 3D visual representations useful for semi-supervised learning of 3D object detectors, and unsupervised learning of 3D moving object detectors, by estimating motion of the inferred 3D feature maps in videos of dynamic scenes. To the best of our knowledge, this is the first work that empirically shows view prediction to be a useful and scalable self-supervised task beneficial to 3D object detection. Predictive coding theories suggest that the brain learns by predicting observations at various levels of abstraction. These theories currently have extensive empirical support: stimuli are processed more quickly if they are predictable , prediction error is reflected in increased neural activity , and disproven expectations lead to learning . A basic prediction task is view prediction: from one viewpoint, predict what the scene would look like from another viewpoint. Learning this task does not require supervision from any annotations; supervision is freely available to a mobile agent in a 3D world who can estimate its egomotion . Humans excel at this task: we can effortlessly imagine plausible hypotheses for the occluded side of objects in a photograph, or guess what we would see if we walked around our office desks. Our ability to imagine information missing from the current image view-and necessary for predicting alternative views-is tightly coupled with visual perception. We infer a mental representation of the world that is 3-dimensional, in which the objects are distinct, have 3D extent, occlude one another, and so on. Despite our 2-dimensional visual input, and despite never having been supplied a 3D bounding box or 3D segmentation mask as supervision, our ability for 3D perception emerges early in infancy . In this paper, we explore the link between view predictive learning and the emergence of 3D perception in computational models of perception, on mobile agents in static and dynamic scenes. Our models are trained to predict views of static scenes given 2.5D video streams as input, and are evaluated on their ability to detect objects in 3D. Our models map 2.5D input streams into 3D feature volumes of the depicted scene. At every frame, the architecture estimates and accounts for the motion of the camera, so that the internal 3D representation remains stable. 
The model projects its inferred 3D feature maps to novel viewpoints, and matches them against visual representations Pretrain view contrastive (ours) Pretrain view regression Random weight initialization Figure 1: Semi-supervised 3D object detection. Pre-training with view-contrastive prediction improves , especially when there are few object 3D bounding box annotations. extracted from the target view. We propose contrastive losses to measure the match error, and backpropagate gradients end-to-end in our differentiable modular architecture. At test time, our model forms plausible 3D completions of the scene given RGB-D (2.5D) video streams or even a single RGB-D image as input: it learns to inpaint information behind occlusions, and infer the 3D extents of objects. We evaluate the trained 3D representations in two tasks. Semi-supervised learning of 3D object detectors (Figure 1): We show that view contrastive pretraining helps detect objects in 3D, especially in the low-annotations regime. Unsupervised 3D moving object detection (Figure 3 right): Our model can detect moving objects in 3D without any human annotations, by forming a 3D feature volume per timestep, then estimating the motion field between volumes, and clustering the motion into objects. View prediction has been the center of much recent research effort. Most methods test their models in single object scenes, and aim to generate beautiful images for graphics applications; ), as opposed to learning general-purpose visual representations. In this work, we use view prediction to help object detection, not the inverse. The work of attempted view prediction in full scenes, yet only experimented with toy data containing a few colored 3D shapes. Their model cannot effectively generalize beyond the training distribution, e.g., cannot generalize across scenes of variable number of objects. The work of is the closest to our work. Their model is also an inverse graphics network equipped with a 3-dimensional feature bottleneck, and was trained for view prediction; it showed strong generalization across scenes, number of objects, and arrangements. However, the authors demonstrated its abilities only in toy simulated scenes, similar to those used in. Furthermore, they did not evaluate the usefulness of the learned features for a downstream semantic task, beyond view prediction. This raises questions on the scalability and usefulness of view prediction as an objective for self-supervised visual representation learning, which our work aims to address. We compare against the state-of-the-art model of and show that the features learned under our proposed view-contrastive losses are more semantically meaningful (Figure 1). To the best of our knowledge, this is the first work that can discover objects in 3D from a single camera viewpoint, without any human annotations of object boxes or masks. Summarizing, we have the following contributions over prior works: Novel view-contrastive prediction objectives. We show that these losses outperform RGB regression (; and VAE alternatives in semi-supervised 3D object detection. A novel unsupervised 3D moving object detection method, by estimating 3D motion of egomotion-stabilized 3D feature maps. We show that we outperform 2.5D baselines and iterative generative what-where VAEs of previous works . Simulation-to-real transfer of the acquired view-predictive 3D feature representations. 
We show 3D features (pre)trained with view contrastive prediction in a simulated environment boost the performance of 3D detectors trained from 3D object box annotations on real-world images. Our code and data will be made publicly available upon publication. Predictive visual feature learning Predictive coding theories suggest that much of the learning in the brain is of a predictive nature . Recent work in unsupervised learning of word representations has successfully used ideas of predictive coding to learn word representations by predicting neighboring words . Many challenges emerge in going from a finite-word vocabulary to the continuous high-dimensional image data manifold. Unimodal losses such as mean squared error are not very useful when predicting high dimensional data, due to the stochasticity of the output space. Researchers have tried to handle such stochasticity using latent variable models or autoregressive prediction of the output pixel space, which involves sampling each pixel value from a categorical distribution conditioned on the output thus far (Van den). Another option is to make predictions in a latent feature space. followed this direction and used an objective that preserves mutual information between the future bottom-up extracted features and the predicted contextual latent features, applying it in speech, text and image patches in single images. The view contrastive loss proposed in this work is a non-probabilistic version of their contrastive objective. However, our work focuses on the video domain as opposed to image patches, and uses drastically different architectures for both the contextual and bottom-up representations, using a 3D representation bottleneck. We consider a mobile agent that can move about the scene at will. The agent has an RGB camera with known intrinsics, and a depth sensor registered to the camera's coordinate frame. At training time, the agent has access to its camera pose, and it learns in this stage to imagine full 3D scenes (via view prediction), and to estimate egomotion (from ground-truth poses). It is reasonable to assume that a mobile agent who moves at will has access to its approximate egomotion, since it chooses where to move and what to look at . Active vision is outside the scope of this work, so our agent simply chooses viewpoints randomly. At test time, the model estimates egomotion on-thefly from its RGB-D inputs. We use groundtruth depth provided by the simulation environment, and we will show in Sec. 4 that the learned models generalize to the real world, where (sparser) depth is provided by a LiDAR unit. We describe our model architecture in Sec. 3.1, our view-contrastive prediction objectives in Sec. 3.2, and our unsupervised 3D object segmentation in Sec. 3.3. Figure 2 -left. It is a recurrent neural network (RNN) with a memory state tensor M (t) ∈ R w×h×d×c, which has three spatial dimensions (width w, height h, and depth d) and a feature dimension (c channels per grid location). The latent state aims to capture an informative and geometrically-consistent 3D deep feature map of the world space. Therefore, the spatial extents correspond to a large cuboid of world space, defined with respect to the camera's position at the first timestep. We refer to the latent state as the model's imagination to emphasize that most of the grid locations in M (t) will not be observed by any sensor, and so the feature content must be "imagined" by the model. 
Our model is made up of differentiable modules that go back and forth between 3D feature imagination space and 2D image space. It builds on the recently proposed geometry-aware recurrent neural networks (GRNNs) of, which also have a 3D egomotion-stabilized latent space, and are trained for RGB prediction. Our model can be considered a type of GRNN. In comparison to: (i) our egomotion module can handle general camera motion, as opposed to a 2-degree-of-freedom sphere-locked camera. This is a critical requirement for handling data that comes from freely-moving cameras, such as those mounted on mobile vehicles, as opposed to only orbiting cameras. (ii) Our 3D-to-2D projection module decodes the 3D map into 2D feature maps, as opposed to RGB images, and we use view-contrastive prediction as our objective, as opposed to regression. In Table 3 and Figure 8, we show that nearest neighbors in our learned feature space are more accurate and appear more semantically related than neighbors delivered by RGB regression. We briefly describe each neural module of our architecture next. Further implementation details are in the appendix. 2D-to-3D unprojection This module converts the input RGB image I (t) ∈ R w×h×3 and pointcloud D (t) into a 3D grid, by filling each 3D grid location with the RGB value of its corresponding subpixel. The pointcloud is converted to a 3D occupancy grid O (t) ∈ R w×h×d×1, by assigning each voxel a value of 1 or 0, depending on whether or not a point lands in the voxel. We then convert the concatenation of these tensors into a 3D feature tensor F (t) ∈ R w×h×d×c, via a 3D convolutional encoder-decoder network with skip connections. We L 2 -normalize the feature in each grid cell. Egomotion estimation This module computes the relative 3D rotation and translation between the current camera pose (at time t) and the reference pose (from time 1), allowing us to warp the feature tensor F (t) into a registered version F (t) reg. In principle any egomotion estimator could be used here, but we find that our 3D feature tensors are well-suited to a 3D coarse-to-fine alignment search, similar to the 2D process in the state-of-the-art optical flow model PWC-Net (Sun et al., 2018). Figure 2: View-contrastive 3D feature learning with 3D-bottlenecked inverse graphics networks. Left: Learning visual feature representations by moving in static scenes. The 3D-bottlenecked RNNs learn to map 2.5D video streams to egomotion-stabilized 3D feature maps of the scene by optimizing for view-contrastive prediction. Right: Learning to segment 3D moving objects by watching them move. Non-zero 3D motion in the latent 3D feature space reveals independently moving objects and their 3D extent, without any human annotations. Given the 3D tensors of the two timesteps, F (1) and F (t), we incrementally warp F (t) into alignment with F (1), by estimating the approximate transformation at a coarse scale, then estimating residual transformations at finer scales. This is done efficiently with 6D cross correlations and cost volumes. Following PWC-Net, we use fully-connected layers to convert the cost volumes into motion estimates. While our neural architecture is trained end-to-end to optimize a view prediction objective, our egomotion module by exception is trained supervised, using pairs of frames with annotated egomotion. In this way, it learns to be invariant to moving objects in the scene. Latent map update This module aggregates egomotion-stabilized (registered) feature tensors into the memory tensor M (t). On the first timestep, we set M (1) = F (1). On later timesteps, we update the memory with a simple running average.
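A minimal numpy/scipy sketch of the registration and running-average update just described follows; this is illustrative rather than the authors' implementation, and the voxel-space form of the rigid transform and the use of scipy's affine resampler in place of a trilinear sampler are assumptions.

```python
import numpy as np
from scipy import ndimage

def register_to_reference(feat_t, R_vox, t_vox):
    """Warp a feature grid F(t) of shape (w, h, d, c) into the reference frame.
    R_vox, t_vox express the estimated rigid transform in voxel coordinates:
    a reference-frame voxel location p corresponds to R_vox @ p + t_vox in frame t."""
    channels = [ndimage.affine_transform(feat_t[..., c], R_vox, offset=t_vox, order=1)
                for c in range(feat_t.shape[-1])]
    return np.stack(channels, axis=-1)

def update_memory(M, feat_reg, step):
    """Running-average aggregation of registered feature tensors into the latent map."""
    if step == 1:
        return feat_reg
    return M + (feat_reg - M) / step  # equivalent to averaging the first `step` tensors
```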
3D-to-2D projection This module "renders" the 3D feature state M (t) into a 2D feature map of a desired viewpoint V (k). We first orient the 3D feature map to the desired viewpoint and warp it so that perspective viewing rays become axis-aligned, then convert the warped tensor into a 2D feature map with a 2-block 2D ResNet (details in the appendix). 3D object detection Given images with annotated 3D object boxes, we train a 3D object detector that takes as input the 3D feature map M (t), and outputs 3D bounding boxes for the objects present. Our object detector is a 3D adaptation of the state-of-the-art 2D object detector, Faster-RCNN. The model outputs 3D axis-aligned boxes with objectness confidences. 3.2 VIEW-CONTRASTIVE RENDERING Given a set of input RGBs, pointclouds, and camera poses for views 1, ..., n, we train our model to predict feature abstractions of an unseen input view n + 1 (Figure 2 -left). We consider two types of representations for the target view: a top-down one, T, and a bottom-up one, B. Note that the top-down representation has access to the viewpoint V (n+1) but not to observations from that viewpoint (I (n+1), D (n+1)), while the bottom-up representation is only a function of those observations. We construct 2D and 3D versions of these representation types, using our architecture modules: • We obtain T 3D = M (n) by encoding the set of inputs 1, ..., n. • We obtain B 3D = F (n+1) by encoding the single input n + 1. • We obtain T 2D by projecting M (n) to the target viewpoint V (n+1) with the 3D-to-2D projection module. • We obtain B 2D by convolving I (n+1) with a 3-block 2D ResNet. Finally, the contrastive losses pull corresponding (top-down and bottom-up) features close together in embedding space, and push non-corresponding ones beyond a distance margin α, where the correspondence indicator Y is 1 at indices where T corresponds to B, and −1 everywhere else. The losses ask tensors depicting the same scene, but acquired from different viewpoints, to contain the same features. The performance of a metric learning loss depends heavily on the sampling strategy used (; ;). We use the distance-weighted sampling strategy proposed by, which uniformly samples "easy" and "hard" negatives; we find this outperforms both random sampling and semi-hard sampling. Upon training, our model learns to map even a single RGB-D input to a complete 3D imagination, as we show in Figure 2 -right. Given two temporally consecutive and registered 3D maps, we train a motion estimation module to predict the 3D motion field W (t) between them, which we call 3D imagination flow. Since we have accounted for camera motion, this 3D motion field should only be non-zero for independently moving objects. We obtain 3D object proposals by clustering the 3D flow vectors, extending classic motion clustering methods (;) to an egomotion-stabilized 3D feature space, as opposed to 2D pixel space. Our 3D FlowNet is a 3D adaptation of the PWC-Net (2D) optical flow model. Note that our model only needs to estimate motion of the independently-moving part of the scene, since egomotion has been accounted for. It works by iterating across scales in a coarse-to-fine manner. At each scale, we compute a 3D cost volume, convert these costs to 3D displacement vectors, and incrementally warp the two tensors to align them. We train our 3D FlowNet using two tasks: (1) Synthetic transformation of feature maps: we apply random rotations and translations to F (t) and ask the model to recover the dense 3D flow field that corresponds to the transformation; (2) Unsupervised 3D temporal feature matching: we warp one 3D feature tensor to align it with F (t), using the estimated flow W (t). We apply the warp with a differentiable 3D spatial transformer layer, which does trilinear interpolation to resample each voxel.
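The two training signals discussed above can be sketched as follows. This is a hedged reconstruction rather than the authors' code: the exact functional form of the margin loss is not fully specified in the text, so a standard squared-hinge contrastive formulation is used, and the warp-based matching term is written as a plain L2 feature-constancy penalty with a user-supplied resampler.

```python
import numpy as np

def contrastive_margin_loss(T, B, Y, alpha=0.1):
    """Pull corresponding (top-down, bottom-up) features together and push
    non-corresponding ones beyond a margin. T, B: (N, c) sampled feature
    vectors; Y: (N,) with +1 for correspondences and -1 otherwise.
    The squared-hinge form and the default margin are assumptions."""
    d = np.linalg.norm(T - B, axis=1)
    pos = (Y == 1) * d ** 2                          # attract true correspondences
    neg = (Y == -1) * np.maximum(0.0, alpha - d) ** 2  # repel negatives inside the margin
    return (pos + neg).mean()

def feature_constancy_loss(feat_other, feat_t, flow, warp_fn):
    """Unsupervised 3D temporal matching: warp one registered feature tensor
    with the estimated 3D flow and penalize the feature difference against F(t).
    `warp_fn` stands in for the differentiable trilinear spatial transformer."""
    warped = warp_fn(feat_other, flow)
    return np.mean((warped - feat_t) ** 2)
```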
This extends self-supervised 2D optical flow to 3D feature constancy (instead of 2D brightness constancy). We found that both types of supervision are essential for obtaining accurate 3D flow field estimates. Since we are not interested in the 3D motion of empty air voxels, we additionally estimate 3D voxel occupancy, and supervise this using the input pointclouds; we set the 3D motion of all unoccupied voxels to zero. We describe our 3D occupancy estimation in more detail in the appendix. The proposed 3D imagination flow enjoys significant benefits over 2D optical flow or 3D scene flow. It does not suffer from occlusions and dis-occlusions of image content or projection artifacts , which typically transform rigid 3D transformations into non-rigid 2D flow fields. In comparison to 3D scene flow , which concerns visible 3D points, 3D imagination flow is computed between visual features that may never have appeared in the field of view, but are rather inpainted by imagination. We obtain 3D object segmentation proposals by thresholding the 3D imagination flow magnitude, and clustering voxels using connected components. We score each component using a 3D version of a center-surround motion saliency score employed by numerous works for 2D motion saliency detection . This score is high when the 3D box interior has lots of motion but the surrounding shell does not. This in a set of scored 3D segmentation proposals for each video scene. We train our models in CARLA , an open-source photorealistic simulator of urban driving scenes, which permits moving the camera to any desired viewpoint in the scene. We obtain data from the simulator as follows. We generate 1170 autopilot episodes of 50 frames each (at 30 FPS), spanning all weather conditions and all locations in both "towns" in the simulator. We define 36 viewpoints placed regularly along a 20m-radius hemisphere in front of the ego-car. This hemisphere is anchored to the ego-car (i.e., it moves with the car). In each episode, we sample 6 random viewpoints from the 36 and randomly perturb their pose, and then capture each timestep of the episode from these 6 viewpoints. We generate train/test examples from this, by assembling all combinations of viewpoints (e.g., N ≤ 5 viewpoints as input, and 1 unseen viewpoint as the target). We filter out frames that have zero objects within the metric "in bounds" region of the GRNN (32m × 32m × 4m). This yields 172524 frames (each with multiple views): 124256 in Town1, and 48268 in Town2. We treat the Town1 data as the "training" set, and the Town2 data as the "test" set, so there is no overlap between the train and test images. For additional testing with real-world data, we use the (single-view) object detection benchmark from the KITTI dataset , with the official train/val split: 3712 training frames, and 3769 validation frames. We evaluate our view-contrastive 3D feature representations in three tasks: semi-supervised 3D object detection, unsupervised 3D moving object detection, 3D motion estimation. We use as a baseline representing the state-of-the-art, but evaluate additional related works in the appendix (Sec. C.2, Sec. C.3). We use the proposed view-contrastive prediction as pretraining for 3D object detection 1. We pretrain the inverse graphics network weights, and then train a 3D object detector module supervised to map a 3D feature volume M to 3D object boxes, as described in Section 3.1. 
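For reference before turning to the experiments, the flow-magnitude clustering and center-surround scoring described in the moving-object detection procedure above can be sketched as follows. This is illustrative only: the threshold, the shell width, and the use of axis-aligned component bounding boxes are assumptions, and scipy's connected-components labeling stands in for whatever clustering the authors used.

```python
import numpy as np
from scipy import ndimage

def moving_object_proposals(flow, occupancy, mag_thresh=0.25, shell=2):
    """Cluster large 3D flow vectors into scored 3D box proposals (sketch).
    flow: (w, h, d, 3) egomotion-stabilized 3D flow; occupancy: (w, h, d) in [0, 1]."""
    mag = np.linalg.norm(flow, axis=-1) * (occupancy > 0.5)  # ignore empty-air voxels
    labels, num = ndimage.label(mag > mag_thresh)            # connected components
    proposals = []
    for k in range(1, num + 1):
        idx = np.argwhere(labels == k)
        lo, hi = idx.min(axis=0), idx.max(axis=0) + 1        # axis-aligned 3D box
        slo = np.maximum(lo - shell, 0)
        shi = np.minimum(hi + shell, mag.shape)
        box = mag[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        region = mag[slo[0]:shi[0], slo[1]:shi[1], slo[2]:shi[2]]
        inner = box.mean()                                    # motion inside the box
        outer = (region.sum() - box.sum()) / max(region.size - box.size, 1)
        proposals.append((tuple(lo), tuple(hi), inner - outer))  # center-surround score
    return sorted(proposals, key=lambda p: -p[-1])
```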
We are interested in seeing the benefit of this pre-training across different amounts of label supervision, so we first use the full CARLA train set for view prediction training (without using box labels), and then use a randomlysampled subset of the CARLA train set for box supervision; we evaluate on the CARLA validation set. We varied the size of the box supervision subset across the following range: 100, 200, 500, 1000, 10000, 80000. We show mean average precision (at an IoU of 0.75) for car detection as a function of the number of annotated 3D bounding box examples in Figure 1. We compare our model against a version of our model that optimizes for RGB regression, similar to but with a 6 DoF camera motion as opposed to 2 DoF, as well as a model trained from random weight initialization (i.e., without pretraining). After pre-training, we freeze the feature layers after view predictive learning, and only supervise the detector module; for the fully supervised baseline (from random initialization), we train end-to-end. As expected, the supervised model performs better with more labelled data. In the low-data regime, pre-training greatly improves , and more so for view-contrastive learning than RGB learning. We could not compare against alternative view prediction models as the overwhelming majority Figure 3: 3D feature flow and object proposals, in dynamic scenes. Given the input frames on the left, our model estimates dense egomotion-stabilized 3D flow fields, and converts these into object proposals. We visualize colorized pointclouds and flow fields in a top-down (bird's eye) view. of them consider pre-segmented scenes (single object setups; e.g., and cannot generalize beyond those settings. The same is the case for the model of , which was greatly outperformed by GRNNs in the work of . We evaluate whether the 3D predictive feature representations learned in the CARLA simulator are useful for learning 3D object detectors in the real world by testing on the real KITTI dataset . Specifically, we use view prediction pre-training in the CARLA train set, and box supervision from the KITTI train set, and evaluate 3D object detection in the KITTI validation set. Existing real-world datasets do not provide enough camera viewpoints to support view-predictive learning. Specifically, in KITTI, all the image sequences come from a moving car and thus all viewpoints lie on a near-straight trajectory. Thus, simulation-to-real transferability of features is especially important for view predictive learning. We show simulation-to-real transfer in Table 1. We compare the proposed view contrastive prediction pre-training, with view regression pre-training, and random weight initialization (no pretraining). In all cases, we train a 3D object detection module supervised using KITTI 3D box annotations. We also compare freezing versus finetuning the weights of the pretrained inverse graphics network. The are consistent with the CARLA tests: view-contrastive pretraining is best, view regression pretraining is second, and learning from human annotations alone is worst. Note that depth in KITTI is acquired by a real velodyne LiDAR sensor, and therefore has lower density and more artifacts than CARLA, yet our model generalizes across this distribution shift. In this section, we test our model's ability to detect moving objects in 3D without any 3D object annotations, simply by clustering 3D motion vectors. 
We use two-frame video sequences of dynamic scenes from the CARLA data, and we split the validation set into two parts for evaluation: scenes where the camera is stationary, and scenes where the camera is moving. This splitting is based on the observation that moving object detection is made substantially more challenging under a moving camera. We show precision-recall curves for 3D moving object detection under a stationary camera in Figure 4. We compare our model against a model trained with RGB view regression (similar to Tung Figure 5: Unsupervised 3D moving object detection with a moving camera et al., 2019) and a 2.5D baseline. The 2.5D baseline computes 2D optical flow using PWC-Net , then proposes object masks by thresholding and clustering 2D flow magnitudes; these 2D proposals are mapped to 3D boxes by segmenting the input pointcloud according to the proposed masks. Our model outperforms the baselines. Note that even with ground-truth 2D flow, ground-truth depth, and an oracle threshold, a 2.5D baseline can at best only capture the portions of the objects that are in the pointcloud. As a , 3D proposals from PWC-Net often underestimate the extent of the objects by half or more. Our model imagines the full 3D scene in each frame, so it does not have this issue. We show precision-recall curves for 3D moving object detection under a moving camera in Figure 5. We compare our model where egomotion is predicted by our neural egomotion module, against our model with ground-truth egomotion, as well as a 2.5D baseline, and a stabilized 2.5D baseline. The 2.5D baseline uses optical flow estimated from PWC-Net as before. To stabilize the 2.5D flow, we subtract the ground-truth scene flow from the optical flow estimate before generating proposals. Our model's performance is similar to its level in static scenes, suggesting that the egomotion module and stabilization mechanism effectively disentangles camera motion from the 3D feature maps. The 2.5D baseline performs poorly in this setting, as expected. Surprisingly, performance drops further after stabilizing the 2D flows for egomotion. We confirmed this is due to the estimated scene flow being imperfect: subtracting ground-truth scene flow leaves many motion fragments in the . With ground-truth 2D flow, the baseline performs similar to its static-scene level. We have attempted to compare against the unsupervised object segmentation methods proposed in; by adapting the publicly available code accordingly. These models use an inference network that takes as input the full video frame sequences to predict the locations of 2D object bounding boxes, as well as frame-to-frame displacements, in order to minimize view prediction error in 2D. We were not able to produce meaningful from their inference networks. The success of may partially depend on carefully selected priors for 2D object bounding box location and object size parameters that match the moving MNIST dataset statistics used in the paper, as suggested by the publicly available code. We do not assume knowledge or existence of such object location or size priors for our CARLA data. Full In this section, we evaluate accuracy of our 3D FlowNet module. The previous section evaluated this module indirectly since it plays a large part in unsupervised 3D moving object detection; here we evaluate its accuracy directly. We use two-frame video sequences of dynamic scenes from our CARLA test set. 
We compare training our 3D FlowNet over (frozen) 3D feature representations obtained from the proposed viewcontrastive prediction and the baseline RGB regression of. We show 3D motion estimation in Table 2. We also compare against a zero-motion baseline that predicts zero motion everywhere. Since approximately 97% of the voxels belong to the static scene, a zero-motion baseline is very competitive in an overall average. We therefore evaluate error separately in static and moving parts of the scene. Our method achieves dramatically lower error than the RGB generation baseline, which suggests the proposed view contrastive objectives in 3D and 2D in learning of correspondent features across views even for moving objects, despite the fact features were learned only using static scenes. The proposed model has two important limitations. First, our work assumes an embodied agent that can move around at will. This is hard to realize in the real world, and indeed there are almost no existing datasets with enough camera views. Second, our model architecture consumes a lot of GPU memory, due to its extra spatial dimension. This severely limits either the resolution or the metric span of the latent map M. On 12G Titan X GPUs we encode a space sized 32m × 32m × 8m at a resolution of 128 × 128 × 32, with a batch size of 4; iteration time is 0.2s/iter. Supervised 3D object detectors typically cover twice this metric range. Sparsifying our feature grid, or using points instead of voxels, are clear areas for future work. We propose models that learn space-aware 3D feature abstractions of the world given 2.5D input, by minimizing 3D and 2D view contrastive prediction objectives. We show that view-contrastive prediction leads to features useful for 3D object detection, both in simulation and in the real world. We further show that the ability to visually imagine full 3D scenes allows us to estimate dense 3D motion fields, where clustering non-zero motion allows 3D objects to emerge without any human supervision. Our experiments suggest that the ability to imagine visual information in 3D can drive 3D object detection without any human annotations-instead, the model learns by moving and watching objects move . In Section B, we provide implementation details for our 3D-bottlenecked architecture, egomotion module, and 3D imagination FlowNet. In Section C, we provide additional experiments, and additional visualizations of our output. In Section D, we discuss additional related work. Inputs Our input images are 128 × 384 pixels. We trim input pointclouds to a maximum of 100,000 points, and to a range of 80 meters, to simulate a velodyne LiDAR sensor. 2D-to-3D unprojection This module converts the input 2D image I (t) and pointcloud D (t) into a 4D tensor U (t) ∈ R w×h×d×4, by filling the 3D imagination grid with samples from the 2D image grid, using perspective (un)projection. Specifically, for each cell in the imagination grid, indexed by the coordinate (i, j, k), we compute the floating-point 2D pixel location [u, v] T = KS [i, j, k] T that it projects to from the current camera viewpoint, using the pinhole camera model , where S is the similarity transform that converts memory coordinates to camera coordinates and K is the camera intrinsics (transforming camera coordinates to pixel coordinates). We fill U (t) i,j,k with the bilinearly interpolated pixel value I (t) u,v. 
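A minimal numpy sketch of this unprojection follows. It is not the authors' code: the homogeneous 4x4 form of S, nearest-neighbor sampling in place of bilinear interpolation, and the variable names are all assumptions made for brevity.

```python
import numpy as np

def unproject_rgb(image, K, S, grid_shape):
    """Fill a voxel grid with image colors via pinhole projection (sketch).
    image: (H, W, 3); K: (3, 3) intrinsics; S: (4, 4) memory-to-camera transform.
    Returns U of shape grid_shape + (3,)."""
    w, h, d = grid_shape
    U = np.zeros((w, h, d, 3), dtype=np.float32)
    ii, jj, kk = np.meshgrid(np.arange(w), np.arange(h), np.arange(d), indexing="ij")
    vox = np.stack([ii, jj, kk, np.ones_like(ii)], axis=-1).reshape(-1, 4).astype(np.float32)
    cam = (S @ vox.T)[:3]                          # memory coordinates -> camera coordinates
    pix = K @ cam                                  # camera coordinates -> homogeneous pixels
    z = np.clip(pix[2], 1e-6, None)
    u, v = pix[0] / z, pix[1] / z                  # perspective divide
    valid = (pix[2] > 0) & (u >= 0) & (u < image.shape[1] - 1) \
                         & (v >= 0) & (v < image.shape[0] - 1)
    ui, vi = np.round(u[valid]).astype(int), np.round(v[valid]).astype(int)
    U.reshape(-1, 3)[valid] = image[vi, ui]        # nearest-neighbor lookup (bilinear in the paper)
    return U
```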
We transform our depth map D t in a similar way and obtain a binary occupancy grid O (t) ∈ R w×h×d×1, by assigning each voxel a value of 1 or 0, depending on whether or not a point lands in the voxel. We concatenate this to the unprojected RGB, making the tensor [We pass the tensors [U (t), O (t) ] through a 3D encoder-decoder network. The 3D feature encoderdecoder has the following architecture, using the notation k-s-c for kernel-stride-channels: 4-2-64, 4-2-128, 4-2-256, 4-0.5-128, 4-0.5-64, 1-1-F, where F is the feature dimension. We use F = 32. After each deconvolution (stride 0.5 layer) in the decoder, we concatenate the same-resolution featuremap from the encoder. Every convolution layer (except the last in each net) is followed by a leaky ReLU activation and batch normalization. Egomotion estimation This module computes the relative 3D rotation and translation between the current camera viewpoint and the reference coordinate system of the map M. We significantly changed the module of which could only handle 2 degrees of camera motion. We consider a general camera with full 6-DoF motion. Our egomotion module is inspired by the state-of-the-art PWC-Net optical flow method : it incorporates spatial pyramids, incremental warping, and cost volume estimation via cross-correlation. and M (t) can be used directly as input to the egomotion module, we find better performance can be obtained by allowing the egomotion module to learn its own featurespace. Thus, we begin by passing the (unregistered) 3D inputs through a 3D encoder-decoder, producing a reference tensor F ∈ R w×h×d×c, and a query tensor F (t) ∈ R w×h×d×c. We wish to find the rigid transformation that aligns the two. We use a coarse-to-fine architecture, which estimates a coarse 6D answer at the coarse scale, and refines this answer in a finer scale. We iterate across scales in the following manner: First, we downsample both feature tensors to the target scale (unless we are at the finest scale). Then, we generate several 3D rotations of the second tensor, representing "candidate rotations", making a set {F (t) θi |θ i ∈ Θ}, where Θ is the discrete set of 3D rotations considered. We then use 3D axis-aligned cross-correlations between F and the F (t) θi, which yields a cost volume of shape r × w × h × d × e, where e is the total number of spatial positions explored by cross-correlation. We average across spatial dimensions, yielding a tensor shaped r × e, representing an average alignment score for each transform. We then apply a small fully-connected network to convert these scores into a 6D vector. We then warp F (t) according to the rigid transform specified by the 6D vector, to bring it into (closer) alignment with F. We repeat this process at each scale, accumulating increasingly fine corrections to the initial 6D vector. Similar to PWC-Net , since we compute egomotion in a coarse-to-fine manner, we need only consider a small set of rotations and translations at each scale (when generating the cost volumes); the final transform composes all incremental transforms together. However, unlike PWCNet, we do not repeatedly warp our input tensors, because this accumulates interpolation error. Instead, following the inverse compositional Lucas-Kanade algorithm , and at each scale warp the original input tensor with the composed transform. 3D-to-2D projection This module "renders" 2D feature maps given a desired viewpoint V (k) by projecting the 3D feature state M (t). We first appropriately orient the state map by resam-. 
We then warp the view-oriented tensor M (t) view k such that perspective viewing rays become axis-aligned. We implement this by sampling from the memory tensor with the correspondence T, where the indices [u, v] span the image we wish to generate, and d spans the length of each ray. We use logarithmic spacing for the increments of d, finding it far more effective than linear spacing (used in, likely because our scenes cover a large metric space. We call the perspective-transformed tensor M (t) proj k. To avoid repeated interpolation, we actually compose the view transform with the perspective transform, and compute M (t) proj k from M (t) with a single trilinear sampling step. Finally, we pass the perspective-transformed tensor through a CNN, converting it to a 2D feature map M. The CNN has the following architecture (using the notation k-s-c for kernel-stride-channels): max-pool along the depth axis with 1×8×1 kernel and 1×8×1 stride, to coarsely aggregate along each camera ray, 3D convolution with 3-1-32, reshape to place rays together with the channel axis, 2D convolution with 3-1-32, and finally 2D convolution with 1-1-e, where e is the channel dimension. For predicting RGB, e = 3; for metric learning, we use e = 32. We find that with dimensionality e = 16 or less, the model underfits. To train our 3D FlowNet, we generate supervised labels from synthetic transformations of the input, and an unsupervised loss based on the standard standard variational loss (; . For the synthetic transformations, we randomly sample from three uniform distributions of rigid transformations: (i) large motion, with rotation angles in the range [−6, 6] (degrees) and translations in [−1, 1] (meters), (ii) small motion, with angles from [−1, 1] and translations from [−0.1, 0.1], (iii) zero motion. We found that without sampling (additional) small and zero motions, the model does not accurately learn these ranges. Still, since these synthetic transformations cause the entire tensor to move at once, a FlowNet learned from this supervision alone tends to produce overly-smooth flow in scenes with real (non-rigid) motion. The variational loss L warp, described in the main text, overcomes this issue. We also apply a smoothness loss penalizing local flow changes: is the estimated flow field and ∇ is the 3D spatial gradient. This is a standard technique to prevent the model from only learning motion edges (; . 3D occupancy estimation The goal in this step is to estimate which voxels in the imagination grid are "occupied" (i.e., have something visible inside) and which are "free" (i.e., have nothing visible inside). For supervision, we obtain (partial) labels for both "free" and "occupied" voxels using the input depth data. Sparse "occupied" voxel labels are given by the voxelized pointcloud O (t) reg. To obtain labels of "free" voxels, we trace the source-camera ray to each occupied observed voxel, and mark all voxels intersected by this ray as "free". Our occupancy module takes the memory tensor M (t) as input, and produces a new tensor P (t), with a value in at each voxel, representing the probability of the voxel being occupied. This is achieved by a single 3D convolution layer with a 1 × 1 × 1 filter (or, equivalently, a fully-connected network applied at each grid location), followed by a sigmoid nonlinearity. We train this network with the logistic loss, is the label map, andÎ is an indicator tensor, indicating which labels are valid. 
Since there are far more "free" voxels than "occupied", we balance this loss across classes within each minibatch. 2D CNN The CNN that converts the target view into an embedding image is a residual network with two residual blocks, with 3 convolutions in each block. The convolution, cars and CARLA (used in this work) . CARLA scenes are more realistic, and are not object-centric. layers' channel dimensions are 64, 64, 64, 128, 128, 128. Finally there is one convolution layer with e channels, where e is the embedding dimension. We use e = 32. Contrastive loss For both the 2D and the 3D contrastive loss, for each example in the minibatch, we randomly sample a set of 960 pixel/voxel coordinates S for supervision. Each coordinate i ∈ S gives a positive correspondence T i, B i, since the tensors are aligned. For each T i, we sample a negative B k from the samples acquired across the entire batch, using the distance-weighted sampling strategy of. In this way, on every iteration we obtain an equal number of positive and negative samples, where the negative samples are spread out in distance. We additionally apply an L 2 loss on the difference between the entire tensors, which penalizes distance at all positive correspondences (instead of merely the ones sampled for the metric loss). We find that this accelerates training. We use a coefficient of 0.1 for L 2D contrast, 1.0 for L 3D contrast, and 0.001 for the L 2 losses. Code and training details Our model is implemented in Python/Tensorflow, with custom CUDA kernels for the 3D cross correlation (used in the egomotion module and the flow module) and for the trilinear resampling (used in the 2D-to-3D and 3D-to-2D modules). The CUDA operations use less memory than native-tensorflow equivalents, which facilitates training with large imagination tensors (128 × 128 × 32 × 32). Training to convergence (approx. 200k iterations) takes 48 hours on a single GPU. We use a learning rate of 0.001 for all modules except the 3D FlowNet, for which we use 0.0001. We use the Adam optimizer, with β 1 = 0.9, β 2 = 0.999. Code, data, and pre-trained models will be made publicly available upon publication. C ADDITIONAL EXPERIMENTS C.1 DATASETS CARLA vs. other datasets We test our method on scenes we collected from the CARLA simulator , an open-source driving simulator of urban scenes. CARLA permits moving the camera to any desired viewpoint in the scene, which is necessary for our view-based learning strategy. Previous view prediction works have considered highly synthetic datasets: The work of introduced the Shepard-Metzler dataset, which consists of seven colored cubes stuck together in random arrangements, and the Rooms-Ring-Camera dataset, which consists of a random floor and wall colors and textures with variable numbers of shapes in each room of different geometries and colors. The work of introduced a ShapeNet arrangements dataset, which consists of table arrangements of ShapeNet synthetic models . The work of considers scenes with a single car. Such highly synthetic and limited-complexity datasets cast doubt on the usefulness and generality of view prediction for visual feature learning. The CARLA simulation environments considered in this work have photorealistic rendering, and depict diverse weather conditions, shadows, and objects, and arguably are much closer to real world conditions, as shown in Figure 6. 
While there exist real-world datasets which are visually similar , they only contain a small number viewpoints, which makes view-predictive training inapplicable. Since occlusion is a major factor in a dataset's difficulty, we provide occlusion statistics collected from our CARLA data. Note that in a 2D or unrealistic 3D world, most of the scene would be fully visible in every image. In CARLA, a single camera view reveals information on approximately 0.23 (±0.03) of all voxels in the model's 32m × 32m × 8m "in bounds" space, leaving 0.77 totally occluded/unobserved. This measure includes both "scene surface" voxels, and voxels lying on rays that travel from the camera center to the scene surface (i.e., "known free space"). The revealed surface itself occupies only 0.01 (±0.002) of the volume. Adding a random second view, the total volume revealed is 0.28 (±0.05); the surface revealed is 0.02 (±0.003). With all 6 views, 0.42 (±0.04) of the volume is revealed; 0.03 (±0.004) is revealed surface. These statistics illustrate that the vast majority of the scene must be "imagined" by the model to satisfy the view prediction objective. Images from the CARLA simulator have complex textures and specularities and are close to photorealism, which causes RGB view prediction methods to fail. We illustrate this in Figure 7: given an input image and target viewpoint (i.e., pose), we show target views predicted by a 3D-bottlenecked RNN trained for RGB generation, (ii) a VAE variant of that architecture, and (iii) Generative Query Networks (GQN; , which does not have a 3D representation bottleneck, but rather concatenates 2D images and their poses in a 2D recurrent architecture. Unlike these works, however, our model uses view prediction as a means of learning useful visual representation for 3D object detection, segmentation and motion estimation, not as the end task itself. C.3 2D AND 3D CORRESPONDENCE We evaluate our model's performance in estimating visual correspondences in 2D and in 3D, using a nearest-neighbor retrieval task. In 2D, the task is as follows: we extract one "query" patch from a top-down render of a viewpoint, then extract 1000 candidate patches from bottom-up renders, with only one true correspondence (i.e., 999 negatives). We then rank all bottom-up patches according to L 2 distance from the query, and report the retrieval precision at several recall thresholds, averaged over 1000 queries. In 3D the task is similar, but patches are feature cubes extracted from the 3D imagination; we generate queries from one viewpoint, and retrievals from other viewpoints and other scenes. The 1000 samples are generated as follows: from 100 random test examples, we generate 10 samples from each, so that each sample has 9 negatives from the same viewpoint, and 990 others from different locations/scenes. We compare the proposed model against (i) the RGB prediction baseline of, (ii) Generative Query Networks (GQN) of , which do not have a 3D representation bottleneck, and (iii) a VAE alternative of the (deterministic) model of. Quantitative are shown in Table 3. For 2D correspondence, the models learned through the RGB prediction objectives obtain precision near zero at each recall threshold, illustrating that the model is not learning precise RGB predictions. The proposed view contrastive losses perform better, and combining both the 2D and 3D contrastive losses is better than using only 2D. Interestingly, for 3D correspondence, the retrieval accuracy of the RGB-based models is relatively high. 
Training 3D bottlenecked RNNs as a variational autoencoder, where stochasticity is added in the 3D bottleneck, improves its precision at lower ranks thresholds. Contrastive learning outperforms all baselines. Adding the 3D contrastive loss gives a large boost over using the 2D contrastive loss alone. Note that 2D-bottlenecked architectures cannot perform 3D patch retrieval. Qualitative retrieval for our full model vs. are shown in Figure 8. C.4 UNSUPERVISED 3D OBJECT MOTION SEGMENTATION Our method proposes 3D object segmentations, but labels are only available in the form of oriented 3D boxes; we therefore convert our segmentations into boxes by fitting minimum-volume oriented 3D boxes to the segmentations. The precision-recall curves presented in the paper are computed with an intersection-over-union (IOU) threshold of 0.5. Figure 9 shows sample visualizations of 3D box proposals projected onto input images. We test our model's ability to estimate occupied and free space. Given a single view as input, the model outputs an occupancy probability for each voxel in the scene. Then, given the aggregated labels computed from this view and a random next view, we compute accuracy at all voxels for which we have labels. Voxels that are not intersected by either view's camera rays are left unlabelled. Table 4 shows the classification accuracy, evaluated independently for free and occupied voxels, and for all voxels aggregated In each row, an asterisk marks the box with the highest center-surround score. Right: Observed and estimated heightmaps of the given frames, computed from 3D occupancy grids. Note that the observed (ground truth) heightmaps have view-dependent "shadows" due to occlusions, while the estimated heightmaps are dense and viewpoint-invariant. together. Overall, accuracy is extremely high (97-98%) for both voxel types. Note that part of the occupied space (i.e., the voxelized pointcloud of the first frame) is an input to the network, so accuracy on this metric is expected to be high. We show a visualization of the occupancy grids in Figure 9 (right). We visualize the occupancy grids by converting them to heightmaps. This is achieved by multiplying each voxel's occupancy value by its height coordinate in the grid, and then taking a max along the grid's height axis. The visualizations show that the occupancy module learns to fill the "holes" of the partial view, effectively imagining the complete 3D scene. Method R (rad) t (m) ORB-SLAM2 (Mur-Artal & Tardós, 2017) 0.089 0.038 SfM-Net 0.083 0.086 SfM-Net + GT depth 0.100 0.078 Ours 0.120 0.036 Table 5: Egomotion error. Our model is on par with the baselines. We compare our egomotion module against ORB-SLAM2 (Mur-Artal & Tardós, 2017), a state-ofthe-art geometric SLAM method, and against the SfM-Net architecture, which is a 2D CNN that takes pairs of frames and outputs egomotion. We give ORB-SLAM2 access to ground-truth pointclouds, but note that it is being deployed merely as an egomotion module (rather than for SLAM). We ran our own model and SfM-Net with images sized 128 × 384, but found that ORB-SLAM2 performs best at 256 × 768. SfM-Net is designed to estimate depth and egomotion unsupervised, but since our egomotion module is supervised, we supervise SfM-Net here as well. We evaluate two versions of it: one with RGB inputs (as designed), and one with RGB-D inputs (more similar to our model). Table 5 shows the . Overall the models all perform similarly, suggesting that our egomotion method performs on par with the rest. 
Note that the egomotion module of is inapplicable to this task, since it assumes that the camera orbits about a fixed point, with 2 degrees of freedom. Here, the camera is free, with 6 degrees of freedom. 3D feature representations A long-standing debate in Computer Vision is whether it is worth pursuing 3D models in the form of binary voxel grids, meshes, or 3D pointclouds as the output of visual recognition. The "blocks world" of set as its goal to reconstruct the 3D scene depicted in the image in terms of 3D solids found in a database. Pointing out that replicating the 3D world in one's head is not enough to actually make decisions, argued for feature-
We show that with the right loss and architecture, view-predictive learning improves 3D object detection
The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a , the proposed model delivers a highly controllable generator, and a disentangled representation. Benefiting from the separate modeling of style and content, our model can generate human fidelity speech that satisfies the desired style conditions. Our model achieves start-of-the-art across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice). In the past few years, we have seen exciting developments in Text-To-Speech (TTS) using deep neural networks that learn to synthesize human-like speech from text in an end-to-end fashion. Ideally, synthesized speech should convey the given text content in an appropriate auditory style which we refer to as style modeling. Modeling style is of particular importance for many practical applications such as intelligent conversational agents and assistants. Yet, this is an incredibly challenging task because the same text can map to different speaking styles, making the problem somewhat under-determined. To this end, the recently proposed Tacotron-based approaches BID22 ) use a piece of reference speech audio to specify the expected style. Given a pair of text and audio input, they assume two independent latent variables: c that encodes content from text, and s that encodes style from the reference audio, where c and s are produced by a text encoder and a style encoder, respectively. A new audio waveform can be consequently generated by a decoder conditioned on c and s, i.e. p(x|c, s). Thus, it is straightforward to train the model that minimizes the log-likelihood by a reconstruction loss. However, this method makes it challenging for s to exclusively encode style because no constraints are placed on the disentanglement of style from content within the reference audio. It makes the model easy to simply memorize all the information (i.e. both style and content components) from the paired audio sample. In this case, the style embedding tends to be neglected by the decoder, and the style encoder cannot be optimized easily. To help address some of the limitations of the prior work, we propose a model that provides enhanced controllability and disentanglement ability. Rather than only training on a single paired text-audio sample (the text and audio are aligned with each other), i.e. 
(x txt, x aud) →x, we adopt a pairwise training procedure to enforce our model to correctly map input text to two different audio references (x txt, x aud is paired with x txt, and x − aud is unpaired (randomly sampled). Training the model involves solving an adversarial game and a collaborative game. The adversarial game concentrates the true joint data distribution p(x, c) by using a conditional GAN loss. The collaborative game is built to minimize the distance of generated samples from the real samples in both original space and latent space. Specifically, we introduce two additional losses, the reconstruction loss and the style loss. The style loss is produced by drawing inspiration from image style transfer BID4, which can be used to give explicit style constraints. During training, the the generator and discriminator combat each other to match a joint distribution. While at the same time, they also collaborate with each other in order to minimize the distance of the expected sample and the synthesized sample in both original space and hidden space. As a , our model delivers a highly controllable generator and disentangled representation. TTS can be formulated as a cross-domain mapping problem, i.e. given the source domain Src (text) and target domain T rg (audio), we want to learn a mapping F: Src → T rg such that the distribution of F (Src) matches the distribution T rg. When modeling style in TTS, F shall be conditioned on a style variable, which can be specified in many forms such as a reference audio waveform or a label. Given (x txt, x aud), the goal is then to synthesize a new audio waveform that contains the textual content specified by x txt and the auditory style specified by x aud. Tacotron-based systems BID22 solve this with a reconstruction loss by training on paired data x txt and x aud. Here, we describe their solution via a conditional probabilistic model admitting two independent sources of variation: a content variable c 1:T with T words specified by text x txt, and a style variable s given by the reference audio x aud. Given (x txt, x aud), we can sample: content: c 1:T ∼ q ϕ (c 1:T |x txt), style: s ∼ q φ (s|x aud), and outputx ∼ p θ (x|c 1:T, s), p θ (x|c 1:T, s) is a likelihood function described by a decoder network, Dec. We define deterministic encoders Enc c that maps x txt to their corresponding content components, and Enc s that parameterizes the approximate posterior q φ (s|x aud). It is natural to consider the conditional likelihood to be written as x ∼ p θ (x|Enc c (x txt), Enc s (x aud)), and the training objective could be maximizing the log-likelihood: DISPLAYFORM0 In practice, the learned mapping F should be injective, i.e. there should be a one-to-one correspondence between input conditions and the output audio waveform. However, we argue that training only on paired data with maximum likelihood objective is insufficient to learn this mapping. Unlike x txt that purely contains content components, x aud consists of both style components s and other factors z, such as verbal content that matches with x txt. Therefore, the model needs to be able to disentangle s from z. Otherwise, in the case of training on paired data by maximum likelihood objective, the model could simply learn to copy the waveform information from x aud to the output and ignore s. When given the same x txt but different x aud to such a model, it may still map sample to the samex. In the following sections, we introduce a way to prevent this degenerate issue. 
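For concreteness, the maximum-likelihood objective discussed in this section can be written roughly as below; this is a reconstruction from the surrounding definitions (the exact published form may differ).

```latex
\max_{\theta,\,\phi,\,\varphi}\;
\mathbb{E}_{(x_{\mathrm{txt}},\, x_{\mathrm{aud}})\sim p_{\mathrm{data}}}
\Big[\log p_\theta\big(x_{\mathrm{aud}} \,\big|\, \mathrm{Enc}_c(x_{\mathrm{txt}}),\ \mathrm{Enc}_s(x_{\mathrm{aud}})\big)\Big]
```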
Our proposed approach combines adversarial and collaborative games to train a TTS stylization model. Our training procedure, illustrated in FIG1 (a), can also be considered as the swapping of style components. After swapping, the content components of both observations will remain the same, while the sources of style will change, and be aligned with x + aud and x − aud, respectively. We now explain the two games involved in training the proposed model. + andx − to be assigned high probabilities of belonging to the target domain using generative adversarial networks (GAN) BID6. Specifically, we use a conditional GAN BID13 to model a joint distribution of audio and content (i.e. D(x, c)), which provides a stronger constraint by enforcing the decision to be always conditioned on the content variable c. We define the min-max adversarial game: DISPLAYFORM0 Unlike the traditional binary classification setting (real or fake), we make D a ternary classifier with D(·) i representing the probability of x being either "fake from paired input" (D(·) 1 ), "fake from unpaired input" D(·) 2, or "real audio sample" D(·) 3. This ternary setting makes the discriminator more powerful because it must distinguish subtle differences between samples generated from paired and unpaired input. A similar setting has also been used in cross-domain image generation BID24. Our generator consists of two encoders Enc c and Enc s and a decoder Dec; the discriminator is used only during training.3.2 COLLABORATIVE GAME Although our adversarial game theoretically drives p θ (x, c, s) toward the true data distribution, we find it to be insufficient to find the desired distribution, as there is little supervision from the observation what s should represent. Especially for x − aud, the absence of a pairwise relationship makes it difficult to find the correct correspondence. As a , G might generate high-fidelity samplesx − with incorrect s, and D will still accept it as long as its style is different fromx +. Therefore, we impose explicit constraints on the generated samples with a style loss and a reconstruction loss. Style Loss. In the computer vision literature, BID4 captured the artistic style of an image using the gram matrix of features maps produced by a convolutional neural network. The gram matrix computes patch-level appearance statistics, such as texture, in a location-invariant manner. It is thus natural to expect that a gram matrix of feature maps computed from a mel-spectrogram captures local statistics of an audio signal in the frequency-time domain, representing low-level characteristics of sound, e.g. loudness, stress, speed, pitch, etc. In fact, while the prosodic variation is often suprasegmental, certain characteristics, such as emotion, are captured by local statistics in the time-frequency domain. For example, BID2 have shown that a temporary reduction in the average fundamental frequency significantly correlates with sarcasm expression. More broadly, numerous past studies on prosody have been based on spectral characteristics, e.g.; BID1.Let X andX be the feature maps of the mel-spectrograms from the reference and the synthesized audio samples, respectively. We compute the gram matrices W and G as the inner-product of vectorized feature maps X andX, respectively: DISPLAYFORM1 Our style loss L sty is then the L 2 distance between G and W over all pairs of filters i and j: DISPLAYFORM2 where N f is the number of filters. 
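The gram-matrix style loss just defined can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' code: the normalization constant and the shape convention of the feature maps are assumptions, and the feature extractor itself is left abstract (the paper's choice of extractor is described next).

```python
import numpy as np

def gram_matrix(feat):
    """feat: (num_filters, T*F) vectorized feature maps of a mel-spectrogram."""
    return feat @ feat.T  # (num_filters, num_filters) inner products between filter maps

def style_loss(feat_ref, feat_gen):
    """L2 distance between gram matrices of the reference and synthesized
    feature maps, averaged over all filter pairs. The 1/N_f^2 normalization
    is an assumption; the source only states an average over filter pairs."""
    n_f = feat_ref.shape[0]
    G = gram_matrix(feat_gen)
    W = gram_matrix(feat_ref)
    return np.sum((G - W) ** 2) / (n_f ** 2)
```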
To produce the features maps, most existing approaches in image style transfer use the VGG-19 BID21 ) pretrained on ImageNet . However, mel-spectrograms look quite different from the natural images of the ImageNet, making the VGG-19 unsuitable for our work. We found that a simple four-layer CNN with random weights, denoted by R, perform just well for our purpose; similar findings have been reported recently by BID26, showing that the structure of a CNN is sufficient to capture a great deal of low-level image statistics. Reconstruction Loss. Asx + is expected to be the same as x + aud, we include Eq. 2 and encourage the reconstruction in the original mel-spectrogram space: DISPLAYFORM3 where f (·) and g(·) denote the deterministic encoding function of Enc c and Enc s, respectively. We also encourage reconstruction in the latent space by introducing an inference network C: x aud → z c which approximates the posterior p(z c |x aud) as z c ∼ p c (z c |x aud) = C(x aud). C reduces to an Nway classifier if z c is categorical. In our model, C and Enc s share all layers and there is one final fully-connected layer to output parameters for the conditional distribution p c (z c |x aud). To train p c (z c |x aud), we define a collaborative game in the latent space: DISPLAYFORM4 Minimizing the first term w.r.t. C guides C toward the true posterior p(z c |x aud), while minimizing the second term w.r.t. G enhances G with extra controllability, i.e. it minimizes the chance that G could generate samples that would otherwise be falsely predicted by C. Note that we also minimize the second term w.r.t. C, which proves effective during training that uses synthetic samples to augment the predictive power of C. In summary, minimizing both L sty and L rec can be seen as a collaborative game between players C, R and G that drives p θ (x|c, s) to match p(x|c, s), and p c (z c |x) to match the posterior p(z c |x), the reconstruction loss is thus given by: DISPLAYFORM5 We train our model with a combination of the GAN loss, style loss, and reconstruction loss: DISPLAYFORM0 We set α = 0.1, β = 10 in our experiments. Our model is based on Tacotron that predicts mel-spectrograms directly from character sequences. The predicted mel-spectrogram can be synthesized directly to speech using either the WaveNet vocoder (van den BID27 or the Griffin-Lim method BID7 . In our experiments, we use the Griffin-Lim for fast waveform generation. For Enc c, we use the same text encoder architecture of BID23 . The style encoder Enc s is a combination of reference encoder and style token layers proposed in . We combine the encoded s and c as in Tacotron, i.e. for a content sequence c of length L, we concatenate the style embedding with each state of the text embeddings. The inference network C takes as input a style embedding and processes it through a fully-connected layers followed by batch normalization and Relu is added on top of each convolution layer. The output is mapped to an N-way classification layer. R is a 2-D fully-convolution neural network with four layers, with filter dimensions of 32, 32, 64, 64, respectively. The discriminator D has the similar architecture with the reference encoder, except that a style and content fusion unit is added before it, such that the predicted spectrogram and content information are jointly fed into the network. Finally, a fully-connected layer followed by ReLU maps the output to a 3-way classification layer. 
Note that, instead of using the character level content embedding c 1:T, here we adopt the global sentence embedding, which is the average of hidden unit activation over the sequence. A detailed diagram can be seen in Appendix 5. We train our model with a minibatch size of 32 using the Adam optimizer; we iterated 200K steps for EMT-4 and 280K steps for VCTK datasets. During training, R is fixed weights. For testing, C, R and D are not needed, and we simply send a text, audio pairs into the model (unpaired audios are not needed in the testing stage), which is shown in FIG1. Text-To-Speech (TTS): Recently, rapid progress has been achieved in TTS with end-to-end trained neural networks, e.g., WaveNet (van den), DeepVoice, VoiceLoop, Char2Wav , BID18 BID20 BID32 and Tacotron BID23. Consequently, modeling style in TTS has become a subject of extensive study. DeepVoice2 and DeepVoice3 BID15 learn one or more lookup tables that store information about different speaker identities. However, they are limited to synthesizing voices of speaker identities seen during training. Unlike DeepVoice2 and DeepVoice3,, which is based on VoiceLoop, can fit unseen speakers' voice at testing time. There is also a collection of approaches that are based on Tacotron, e.g., Tacotron-prosody BID29, prosody-Tacotron (a) and GST. prosody-Tacotron uses an encoder to compute a style embedding from a reference audio waveform, where the embedding provides style information that is not provided by the text. The Global-Style-Token (GST) extends prosody-Tacotron by adding a new attention layer that captures a wide range of acoustic styles. Domain mapping by GANs: Recently, GANs have shown promising in various domain mapping problems. Cycle-GAN BID35 and UNIT BID11 perform imageto-image translation by adding a cycle-consistency loss to the learning objective of GANs. Further research has extended this to cross-domain translation. StackGAN BID33 generates images from text, and DA-GAN BID12 operates across different domains, e.g., object transfiguration, human face to cartoon face, skeleton to natural object. Another line of work performs one-sided domain mapping without using the cycle consistency loss, e.g., BID24. BID34 and BID8 are mapping within text and speech domains. Moving beyond one-to-one domain mapping, Bicycle GAN Zhu et al. (2017b) maps samples from one domain to multiple target domains. Our work can also be considered as a one-sided crossdomain mapping that does not require cycle consistency, which makes the training more practical. We also follow the concept of Bicycle GAN that promotes a one-to-many mapping. To the best of our knowledge, ours is the first to formulate TTS as a cross domain mapping problem using GANs. Style transfer: The recent success in image style transfer BID4 has motivated approaches that model the acoustic style of sound using spectrogram. For example, uses a simple 1-D convolutional layer with a ReLU to compute feature maps and then obtain style features by computing the gram matrix. BID1 followed the same concept and adopted two different audio representations, the mel-spectrogram and the constant Q transform spectrogram. Inspired by this, in this work, we adopt the image style transfer concept to impose explicit style constraints on audio mel-spectrogram. Table 1: Experimental on disentanglement ability and controllability. We evaluate our model from three perspectives: content vs. style disentanglement ability (Sec. 5.1), effectiveness of style modeling (Sec. 
5.2), and controllability (Sec. 5.3). We use two datasets. EMT-4 is an in-house dataset of 22,377 American English audio-text samples, with a total of 24 hours. All the audio samples are read by a single speaker, in four emotion categories: happy, sad, angry and neutral. For each text sample, there is only one audio sample labeled with one of the four emotion styles. VCTK is a publicly available, multi-speaker dataset containing recordings of clean speech from 109 speakers, with a total of 44 hours. As the raw audio clips have different specifications, we preprocess them by downsampling the audio to 24kHz and trimming leading and trailing silence, reducing the median duration from 3.3 seconds to 1.8 seconds. We compare our method with three state-of-the-art approaches. prosody-Tacotron (a) is similar to our model but trained on the reconstruction loss only; the style embeddings are obtained from the reference encoder directly. GST incorporates the Global Style Tokens into prosody-Tacotron. DeepVoice2 learns a look-up table capturing embeddings for different speaker identities. As DeepVoice2 is particularly designed for multi-speaker modeling, comparisons with DeepVoice2 are only performed on VCTK. Reconstruction error of style embeddings: If the style encoder Enc s has successfully disentangled style from the other factors of variation in the audio input, we expect the style embedding s to contain very little information about the content of the audio input. Therefore, we should expect poor performance when we try to reconstruct the audio sample purely from s. This motivates us to evaluate our model with the task of reconstructing audio samples from style embeddings. To this end, we train an autoencoder, where the encoder has the same architecture as Enc s and the decoder has six deconvolutional layers, each followed by batch normalization and a ReLU activation. To set the baseline, we first train the autoencoder from scratch using only the L 2 reconstruction loss; this results in a reconstruction error of 0.12. Next, we use precomputed style embeddings from the different approaches and train only the decoder network using the reconstruction loss. We report the results under the columns "Recon. error" in Table 5. prosody-Tacotron achieves the lowest reconstruction error, suggesting that this approach has the weakest ability to disentangle style from other factors in the audio input. GST shows an improvement over prosody-Tacotron, which demonstrates the effectiveness of the style token layer that acts as an information bottleneck between the audio input and the style embeddings. DeepVoice2 performs much better than both prosody-Tacotron and GST on the VCTK dataset, showing that the model works particularly well at modeling speaker identities. Compared to the three state-of-the-art approaches, our model performs the best on both datasets. We also evaluate the importance of the different loss terms in our model. The adversarial loss L adv provides a significant improvement over the baseline models, which suggests the effectiveness of our adversarial loss and pairwise training. Adding the style loss yields further improvements. We also remove the adversarial loss and add the style and reconstruction losses to the baseline GST; this produces even worse results. This is because, when training only on paired data with GST's encoder-decoder network, the reconstruction loss already imposes very strong supervision. In this case, additional constraints may instead impair performance due to the risk of overfitting.
In contrast, since our model is adversarially trained, the GAN loss regularizes the model; in this case, the style loss and the reconstruction loss help the optimization. We conduct a qualitative evaluation where we generate audio samples by combining content and style embeddings from different ground-truth pairs of text and audio. Specifically, we randomly sample four (text, audio) pairs from the EMT-4 dataset, one for each emotion category, and create 16 permutations of text (content) and audio (style). Qualitative results are available on our project webpage. The samples located on the diagonal are the results of parallel style transfer, i.e., the reference audio sample is aligned with the text input. The off-diagonal samples are the results of non-parallel style transfer, where the samples in each column have the same content with different styles, and the samples in each row share the same style with different content. Listening to the samples, we find that the results are comparable for parallel and non-parallel transfer, which indicates that the content and style components are disentangled. Even when transferring between two samples which are separated by a large distance in the latent space, e.g., sad to happy (row 2, line 4), the styles are correctly mapped (compare this to sad to sad (row 4, line 4)). We also conducted a subjective study in which seven subjects classified these results by emotion category. The accuracy reaches 86%, which suggests the efficacy of our model in disentangling the style and content components. Speaker style modeling: We further evaluate the effectiveness of our approach at modeling styles by means of speaker verification. Specifically, we compare our style embeddings with the i-vector representation used in modern speaker verification systems (BID10) on the speaker classification task. We report the results under the columns "Embeddings" in Table 5.3. Despite the fact that i-vectors are specifically designed for speaker classification, our model still achieves comparable results, which suggests that our model can produce generic feature representations for various auditory styles, including speaker identity. Visualization of style embeddings: FIG2 shows the t-SNE projection (van der Maaten & Hinton) of style embeddings from (a) the EMT-4 dataset and (b) the VCTK dataset. To create the plots, we randomly sampled 1,000 instances from each dataset: (a) 250 instances from each of the four emotion categories, and (b) 125 instances from 9 speakers (3 male and 5 female). The projections show clear boundaries between the different style (emotion and speaker) categories. Interestingly, "sad" is far from the other three emotion categories; we believe this is because sad speech usually has much lower pitch than the other emotion categories. "neutral" is projected to the middle, roughly equidistant from the other emotion categories. Also, there is a clear boundary between male samples and female samples. A good TTS system should allow users to control both the content and the style of the output. We consider two factors that affect controllability: fidelity, i.e., the synthesized speech should contain the desired content in a clearly audible form, and naturalness, i.e., the synthesized speech should contain the desired style. To validate fidelity, we assess the performance of the synthesized samples in a speech recognition task. We use a pre-trained ASR model based on WaveNet (van den Oord et al., BID27) to compute the Word Error Rate (WER) for the samples synthesized by each model. Results are shown in Table 5.
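For reference, once ASR transcripts of the synthesized samples are available, the WER itself can be computed with a standard library such as jiwer, as sketched below. The transcripts are made up for illustration; the pre-trained WaveNet-based recognizer that would produce them is external to this sketch.

```python
# pip install jiwer
from jiwer import wer

def corpus_wer(references, hypotheses):
    """Word Error Rate over a corpus of (input text, ASR transcript) pairs.
    The ASR transcripts would come from the pre-trained recognizer, which is not shown here."""
    return wer(references, hypotheses)

# toy illustration with made-up transcripts
refs = ["the birch canoe slid on the smooth planks",
        "glue the sheet to the dark blue background"]
hyps = ["the birch canoe slid on the smooth planks",
        "glue the sheet to the dark blue back ground"]
print(f"WER: {corpus_wer(refs, hyps):.3f}")
```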
Our model performs comparably with, and sometimes even better than, the state-of-the-art approaches. Note that WER measures only the correctness of the verbal content, not its auditory style. This result suggests that all the methods we compared perform reasonably well at controlling the verbal content in TTS. When GST is trained with more constraints, i.e., GST + L sty + L rec, its performance gets worse. We suspect that this is because the autoencoder training procedure used in GST already gives strong supervision to the decoder; when more constraints are added, the model overfits to the training data. Our model does not have this problem because the unpaired samples used during training act as a strong regularizer. (TAB3 reports the classification accuracy for synthesized samples and learned style embeddings.) Classification accuracy on synthesized samples: As the styles we model are all categorical, we evaluate the synthesized samples with a classification task. We train two classifiers, one per dataset, which have 98% and 83% accuracy for EMT-4 and VCTK, respectively. We select 1,000 samples synthesized from the test set of EMT-4. To assess VCTK, we test samples from both seen and unseen speakers: "seen speakers" means the speakers are part of the training set, while the reference audio samples are selected from the test set; "unseen speakers" means the speakers were never seen during training, so the model is asked to fit a new speaker's voice at test time. The results are shown in Table 5.3. On the EMT-4 dataset, our model performs better than prosody-Tacotron and GST. When tested on the seen speakers of VCTK, DeepVoice2 performs the best, but it fails to generalize to unseen speakers; our model performs well in both cases. To qualitatively evaluate our model, we conduct style transfer. In this experiment, we want to compare our model against GST in how well they model the varied styles in EMT-4. We randomly selected 15 sentences, where 10 of the sentences are from the test set and 5 are taken from the web (outside the dataset). To perform style transfer, we select four different reference audio samples from "happy", "angry", "sad" and "neutral"; all of the reference audio samples are unseen during training. Each sentence is paired with these four reference audio samples for synthesis, producing 60 new audio samples in total. The results can be found on our project page. We also compare our model against GST on the task of non-parallel transfer at scale. Specifically, we follow the same setting as prior work and run a side-by-side subjective study with 7-point ratings (-3 to 3), ranging from "model A is the closest to the reference style" to "model B is the closest to the reference style", where model B is ours. We recruited seven participants. Each listened to all 60 permutations of content and rated each set of audio styles (emotions), comparing the results of our model versus the prosody-Tacotron model. They rated each pair of audio outputs on the 7-point scale. We performed a single-sample t-test on the resulting ratings averaged across all participants; µ > 0 means our model was judged as closer to the reference. Over all the emotion samples, the participants rated our model as significantly closer to the reference (µ = 0.872, p < 0.001). For each of the styles individually, our model was consistently rated as significantly closer to the reference (neutral: µ = 0.295, p = 0.01; happy: µ = 0.905, p < 0.001; sad: µ = 1.646, p < 0.001; angry: µ = 0.641, p < 0.001).
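For completeness, the single-sample t-test reported above can be reproduced with SciPy as sketched below. The ratings array is a randomly generated stand-in, not the study's data, and the one-sided alternative reflects the hypothesis µ > 0.

```python
import numpy as np
from scipy import stats

# Placeholder ratings: 60 per-item scores (-3..3) averaged across participants,
# where positive values mean our model was judged closer to the reference.
rng = np.random.default_rng(0)
ratings = rng.normal(loc=0.8, scale=1.0, size=60)   # synthetic stand-in data

# One-sided test of the null hypothesis mu = 0 against mu > 0
# (the `alternative` argument requires SciPy >= 1.6).
t_stat, p_value = stats.ttest_1samp(ratings, popmean=0.0, alternative="greater")
print(f"mean={ratings.mean():.3f}, t={t_stat:.3f}, p={p_value:.4f}")
```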
These provide further evidence that our model can synthesize speech with the correct content and distinct auditory styles that are closer to the reference than the state-of-the-art comparison. We also evaluate the output using mean opinion score (MOS) naturalness tests. Our model reaches 4.3 MOS, outperforming 4.0 MOS reported in 3.82 MOS reported in (a). We propose an end-to-end conditional generative model for TTS style modeling. The proposed model is built upon Tacotron, with an enhanced content-style disentanglement ability and controllability. The proposed pairwise training approach that involves a adversarial game and a collaborative game together, in a highly controllable generator with disentangled representations. Benefiting from the separate modeling of content c and style s, our model can synthesize high fidelity speech signals with the correct content and realistic style, ing in natural human-like speech. We demonstrated our approach on two TTS datasets with different auditory styles (emotion and speaker identity), and show that our approach establishes state-of-the-art quantitative and qualitative performance on a variety of tasks. For future research, an important direction can be training on unpaired data under an unsupervised setting. In this way, the requirements for a lot of work on aligning text and audios can be much released. We provide architecture details of our model. FIG3 shows both the content and style encoder networks (Enc c and Enc s, respectively) as well as the inference network (C). Figure 4 shows the decoder network (Dec), and Figure 5 shows the discriminator network (D). We further provide network parameter settings in the captions of each figure. units, respectively. Each layer is followed by a ReLU activation and dropout with a 50% chance. The output is fed into a CBHG block. Inside in the CBHG block, the Conv1D bank has 16 layers, where each layer has 128 units and comes with a ReLU activation. Next, the maxpooling layer has a stride of 1 and with a width of 2. The Conv1D projection has three layers, each with 128 units and a ReLU activation. After the residual connection is four fully-connected layers, each with 128 units and a ReLU activation. The final Bidirectional GRU has 128 cells. (Right) The style encoder (Enc s) and the inference Network (C): The style encoder consists of a reference encoder and style token layers. The reference encoder takes a N × T mel × 80 mel-spectrogram as input, where N is batch size, T mel is length of mel-spectrogram, and 80 is the dimension. The six Conv2D layers have filters, respectively, each with a kernel size 3 × 3 and a stride of 2 × 2. Each layer is followed by a ReLU activation and batch normalization. Next is a single-layer GRU with 128 units. The final state from the GRU is fed into a fully-connected layer with 128 units and a tanh activation; this produces the reference embedding. In the style token layers, 10 global style tokens (GSTs) are randomly initialized. The reference embedding is used as a query for a multi-head attention unit. A learned linear weight is then output from the multi-head attention unit, and the style embedding is computed as a weighted sum. The inference network (C) shares the same architecture and parameters with Enc s, except that a new N-way classifier (which consists of a fully connected layer followed by Softmas) is added on top. Figure 4: Decoder network architecture. 
The decoder (Dec) takes as input a concatenation of a content embedding sequence C 1:T and the style embedding (replicated T times). We then unroll each of the T time slices by feeding them into two fully-connected layers, with units, respectively, followed by an attention RNN and a decoder RNN. The attention RNN has 2-layer residual-GRUs, each with 256 cells. The decoder RNN has a 256 cell one-layer GRU. As an output of each time step, 5 spectrogram slices are predicted (r = 5), and they are fed into the CBHG block (see FIG3 (left) for detail). The final output of the decoder is the predicted spectrogram. A vocoder is used to synthesize voice audios from the spectrograms. In this work, we use the GriffinLim algorithm BID7 to achieve fast waveform generation. Figure 5: Discriminator network architecture. The main computation body is similar to the reference encoder (part of the style encoder in FIG3), as shown on the right. The difference is that, instead of having only spectrograms as input, it has a combination of spectrograms (either ground truth or synthesized) and the content information (output from Enc c), both shown on the left. Here we adopt global sentence embedding to represent the content information. The output content embedding from the Content Encoder Enc c is averaged along time over the whole sequence, which produces a N × 1 × 128 single embedding. To match with the dimension of the spectrogram, the single content embedding is replicated according to the spectrograms time step (T mel), and they are concatenated together as the combined input. In this section, we show attention plots of our model and a baseline model, comparing the robustness of these models for different lengths of the reference audio. Figure 6: Attention alignments by different reference sentence lengths. From left to right, the sentence length are short, medium and long, respectively. The first row is obtained by using the Style Encoder Enc s as shown in 3. The second row is obtained by only using the Reference Encoder (remove the Style token layers in Enc s). As can be seen, adding the global style token layer made the network more robust to the variance in the length of the reference audio. To better analyze the inference and attention mechanisms of our model, we further evaluate the performance in terms of the Word Error Rate (WER) and classification accuracy under different embedding sizes TAB2 ) and different number of attention heads TAB3. TAB2 shows the optimal embedding size is at 128; too small size prevents essential information from flowing through the network, while too big size leads to a poor ability to bottleneck the information from disentangling the style components with other factors within the reference audio. Also, the large embedding size means more parameters to optimize for, which in the risk of over-fitting. TAB3 shows the when applying difference numbers of attention heads. We get the best performance with four attention heads.
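To summarize the appendix description in code, here is a minimal PyTorch sketch of the reference-encoder portion of Enc s: six strided 3×3 Conv2D layers each followed by ReLU and batch normalization, a 128-unit GRU, and a tanh projection to a 128-dimensional embedding. The per-layer filter counts are not stated in the text and are assumed here, and the style-token attention that follows the reference encoder is omitted.

```python
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    """Sketch: strided Conv2D stack + GRU + tanh projection, following the appendix description.
    The filter counts below are assumptions; the source text omits them."""
    def __init__(self, n_mels=80, channels=(32, 32, 64, 64, 128, 128), emb_dim=128):
        super().__init__()
        convs, in_ch = [], 1
        for out_ch in channels:
            convs += [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                      nn.ReLU(), nn.BatchNorm2d(out_ch)]
            in_ch = out_ch
        self.convs = nn.Sequential(*convs)
        m = n_mels
        for _ in channels:                 # each stride-2 conv shrinks the mel axis to ceil(m/2)
            m = (m + 1) // 2
        self.gru = nn.GRU(channels[-1] * m, 128, batch_first=True)
        self.proj = nn.Linear(128, emb_dim)

    def forward(self, mel):                # mel: (batch, T_mel, n_mels)
        x = self.convs(mel.unsqueeze(1))   # (batch, C, T', M')
        b, c, t, m = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * m)
        _, h = self.gru(x)                 # final hidden state: (1, batch, 128)
        return torch.tanh(self.proj(h.squeeze(0)))   # reference embedding: (batch, 128)
```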
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByzcS3AcYX
a generative adversarial network for style modeling in a text-to-speech system
Empirical evidence suggests that neural networks with ReLU activations generalize better with over-parameterization. However, there is currently no theoretical analysis that explains this observation. In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent. Specifically, we prove data-dependent sample complexity bounds which show that over-parameterization improves the generalization performance of gradient descent. Most successful deep learning models use a number of parameters that is larger than the number of parameters that are needed to get zero-training error. This is typically referred to as overparameterization. Indeed, it can be argued that over-parameterization is one of the key techniques that has led to the remarkable success of neural networks. However, there is still no theoretical account for its effectiveness. One very intriguing observation in this context is that over-parameterized networks with ReLU activations, which are trained with gradient based methods, often exhibit better generalization error than smaller networks BID11 ). This somewhat counterintuitive observation suggests that first-order methods which are trained on over-parameterized networks have an inductive bias towards solutions with better generalization performance. Understanding this inductive bias is a necessary step towards a full understanding of neural networks in practice. Providing theoretical guarantees for this phenomenon is extremely challenging due to two main reasons. First, to show a generalization gap, one needs to prove that large networks have better sample complexity than smaller ones. However, current generalization bounds that are based on complexity measures do not offer such guarantees. Second, analyzing the dynamics of first-order methods on networks with ReLU activations is a major challenge. Indeed, there do not exist optimization guarantees even for simple learning tasks such as the classic XOR problem in two dimensions. 1 To advance this issue, we focus on a particular learning setting that captures key properties of the over-parameterization phenomenon. We consider a high-dimensional extension of the XOR problem, which we refer to as the "XOR Detection problem (XORD)". The XORD is a pattern recognition task where the goal is to learn a function which classifies binary vectors according to whether they contain a two-dimensional binary XOR pattern (i.e., or (−1, −1)). This problem contains the classic XOR problem as a special case when the vectors are two dimensional. We consider learning this function with gradient descent trained on an over-parameterized convolutional neural network (i.e., with multiple channels) with ReLU activations and three layers: convolutional, max pooling and fully connected. As can be seen in FIG0, over-parameterization improves generalization in this problem as well. Therefore it serves as a good test-bed for understanding the role of over-parameterization. 1 We are referring to the problem of learning the XOR function given four two-dimensional points with binary entries, using a moderate size one-hidden layer neural network (e.g., with 50 hidden neurons). Note that there are no optimization guarantees for this setting. 
Variants of XOR have been studied in BID10; but these works only analyzed the optimization landscape and did not provide guarantees for optimization methods. We provide guarantees for this problem in Sec. 9. 3). The figure shows the test error obtained for different number of channels k. The blue curve shows test error when restricting to cases where training error was zero. It can be seen that increasing the number of channels improves the generalization performance. Experimental details are provided in Section 8.2.1.. In this work we provide an analysis of optimization and generalization of gradient descent for XORD. We show that for various input distributions, ranges of accuracy and confidence parameters, sufficiently over-parameterized networks have better sample complexity than a small network which can realize the ground truth classifier. To the best of our knowledge, this is the first example which shows that over-paramaterization can provably improve generalization for a neural network with ReLU activations. Our analysis provides a clear distinction between the inductive bias of gradient descent for overparameterized and small networks. It reveals that over-parameterized networks are biased towards global minima that detect more patterns in the data than global minima found by small networks. 2 Thus, even though both networks succeed in optimization, the larger one has better generalization performance. We provide experiments which show that the same phenomenon occurs in a more general setting with more patterns in the data and non-binary input. We further show that our analysis can predict the behavior of over-parameterized networks trained on MNIST and guide a compression scheme for over-parameterized networks with a mild loss in accuracy (Sec. 6). In recent years there have been many works on theoretical aspects of deep learning. We will refer to those that are most relevant to this work. First, we note that we are not aware of any work that shows that generalization performance provably improves with over-parameterization. This distinguishes our work from all previous works. Several works study convolutional networks with ReLU activations and their properties BID4 b; BID0. All of these works consider convolutional networks with a single channel. BID2 and BID7 provide guarantees for SGD in general settings. However, their analysis holds for over-parameterized networks with an extremely large number of neurons that are not used in practice (e.g., the number of neurons is a very large polynomial of certain problem parameters). Furthermore, we consider a 3-layer convolutional network with max-pooling which is not studied in these works. , BID3 and study the role of overparameterization in the case of quadratic activation functions. BID1 provide generalization guarantees for over-parameterized networks with Leaky ReLU activations on linearly separable data. prove generalization bounds for neural networks. However, these bounds are empirically vacuous for over-parameterized networks and they do not prove that networks found by optimization algorithms give low generalization bounds. We begin with some notations and definitions. Let d ≥ 4 be an integer. We consider a classification problem in the space {±1} 2d. Namely, the space of vectors of 2d coordinates where each coordinate can be +1 or −1. Given a vector x ∈ {±1} 2d, we consider its partition into d sets of two coordinates as follows x = (x 1, ..., x d) where x i ∈ {±1}2. We refer to each such x i as a pattern in x. 
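As a concrete illustration of this data representation, a minimal sketch for sampling labeled XORD examples could look as follows. Drawing the d patterns uniformly is only one possible instantiation of the data distribution, and the pattern (1, 1) is inferred as the first positive pattern from the enumeration of the remaining patterns in the text.

```python
import numpy as np

PATTERNS = {1: (1, 1), 2: (1, -1), 3: (-1, -1), 4: (-1, 1)}
XOR_PATTERNS = {(1, 1), (-1, -1)}            # the positive pattern set P_XOR

def label(x):
    """Ground-truth rule: +1 iff some two-dimensional pattern of x lies in P_XOR, else -1."""
    patterns = {tuple(p) for p in x.reshape(-1, 2)}
    return 1 if patterns & XOR_PATTERNS else -1

def sample_point(d, rng):
    """Sample a vector in {+-1}^{2d} by drawing d patterns uniformly (one choice of distribution)."""
    idx = rng.integers(1, 5, size=d)
    return np.array([PATTERNS[i] for i in idx]).reshape(-1)

rng = np.random.default_rng(0)
X = np.stack([sample_point(8, rng) for _ in range(10)])
y = np.array([label(x) for x in X])
```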
Neural Architecture: We consider learning with the following three-layer neural net model. The first layer is a convolutional layer with non-overlapping filters and multiple channels, the second layer is max pooling and the third layer is a fully connected layer with 2k hidden neurons and weights fixed to values ±1. Formally, for an input x = (x 1, ..., x d) ∈ R 2d where x i ∈ R 2, the output of the network is given by: DISPLAYFORM0 where W ∈ R 2k×2 is the weight matrix whose rows are the w (i) vectors followed by the u DISPLAYFORM1 vectors, and σ(x) = max{0, x} is the ReLU activation applied element-wise. See FIG6 for an illustration of this architecture. Remark 3.1. Because there are only 4 different patterns, the network is limited in terms of the number of different rules it can implement. Specifically, it is easy to show that its VC dimension is at most 15 (see Sec. 10). Despite this limited expressive power, there is a generalization gap between small and large networks in this setting, as can be seen in FIG0, and in our analysis below. Data Generating Distribution: Next we define the classification rule we will focus on. Let P XOR correspond to the following two patterns: P XOR = {, (−1, −1)}. Define the classification rule: DISPLAYFORM2 Namely, f * detects whether a pattern in P XOR appears in the input. In what follows, we refer to P XOR as the set of positive patterns and {±1} 2 \ P XOR as the set of negative patterns. Let D be a distribution over X × {±1} such that for all (x, y) ∼ D we have y = f * (x). We say that a point (x, y) is positive if y = 1 and negative otherwise. Let D + be the marginal distribution over {±1} 2d of positive points and D − be the marginal distribution of negative points. In the following definition we introduce the notion of diverse points, which will play a key role in our analysis. Definition 3.2 (Diverse Points). We say that a positive point (x, 1) is diverse if for all z ∈ {±1} 2 there exists 1 ≤ i ≤ d such that x i = z. We say that a negative point DISPLAYFORM3 For φ ∈ {−, +} define p φ to be the probability that x is diverse with respect to D φ. For example, if both D + and D − are uniform, then by the inclusion-exclusion principle it follows that DISPLAYFORM4 For each set of binary patterns A ⊆ {±1} 2 define p A to be the probability to sample a point which contains all patterns in A and no patterns in A c (the complement of A). Let A 1 = {2}, A 2 = {4}, A 3 = {2, 4, 1} and A 4 = {2, 4, 3}.The following quantity will be useful in our analysis: DISPLAYFORM5 Learning Setup: Our analysis will focus on the problem of learning f * from training data with a three layer neural net model. The learning algorithm will be gradient descent, randomly initialized. As in any learning task in practice, f * is unknown to the training algorithm. Our goal is to analyze the performance of gradient descent when given data that is labeled with f *. We assume that we are given a training set S = S + ∪ S − ⊆ {±1} 2d × {±1} 2 where S + consists of m IID points drawn from D + and S − consists of m IID points drawn from D −. Importantly, we note that the function f * can be realized by the above network with k = 2. Indeed, the network N defined by the filters w =, DISPLAYFORM6 2d. It can be seen that for k = 1, f * cannot be realized. Therefore, any k > 2 is an over-parameterized setting. Training Algorithm: We will use gradient descent to optimize the following hinge-loss function. DISPLAYFORM7 for γ ≥ 1. 
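The following NumPy sketch spells out a forward pass consistent with the architecture just described: a ReLU convolution of the 2k filters over the d non-overlapping two-dimensional patterns, max pooling over patterns, and a fully connected layer with weights fixed to +1 for the first k filters and -1 for the last k. The hinge-type loss below uses margin γ on positive points and 1 on negative points, matching the conditions that appear later in the analysis; it is a reading of the text, not the authors' code.

```python
import numpy as np

def network_output(W, x):
    """Three-layer net from the text.

    W: (2k, 2) filter matrix (rows w^(1..k) then u^(1..k)); x: vector in R^{2d}.
    """
    patterns = x.reshape(-1, 2)                  # (d, 2) non-overlapping patterns
    acts = np.maximum(patterns @ W.T, 0.0)       # (d, 2k) ReLU convolution activations
    pooled = acts.max(axis=0)                    # (2k,) max-pool over the d patterns
    k = W.shape[0] // 2
    return pooled[:k].sum() - pooled[k:].sum()   # fixed +-1 fully connected layer

def hinge_loss(W, X_pos, X_neg, gamma=8.0):
    """Hinge-type objective with margin gamma on positives and 1 on negatives (a sketch)."""
    lp = np.mean([max(gamma - network_output(W, x), 0.0) for x in X_pos])
    ln = np.mean([max(1.0 + network_output(W, x), 0.0) for x in X_neg])
    return lp + ln
```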
4 We assume that gradient descent runs with a constant learning rate η and the weights are randomly initiliazed with IID Gaussian weights with mean 0 and standard deviation σ g. Furthermore, only the weights of the first layer, the convolutional filters, are trained. We will need the following notation. Let W t be the weight matrix in iteration t of gradient descent. DISPLAYFORM0 2 the ith convolutional filter at iteration t. Similarly, for DISPLAYFORM1 t ∈ R 2 to be the k + i convolutional filter at iteration t. We assume that each DISPLAYFORM2 0 is initialized as a Gaussian random variable where the entries are IID and distributed as N (0, σ 2 g). In each iteration, gradient descent performs the update DISPLAYFORM3 In this section we state our main that demonstrates the generalization gap between overparameterized networks and networks with k = 2. Define the generalization error to be the difference between the 0-1 test error and the 0-1 training error. For any, δ and training algorithm let m(, δ) be the sample complexity of a training algorithm, namely, the number of minimal samples the algorithm needs to get at most generalization error with probability at least 1 − δ. We consider running gradient descent in two cases, when k ≥ 120 and k = 2. In the next section we exactly define under which set of parameters gradient descent runs, e.g., which constant learning rates. Fix parameters p + and p − of a distribution D and denote by c < 10 −10 a negligible constant. Assume that gradient descent is given a sample of points drawn from D + and D −. We denote the sample complexity of gradient descent in the cases k ≥ 120 and k = 2, by m 1 and m 2, respectively. The following shows a data dependent generalization gap (recall the definition of p * in Eq. 3).Theorem 4.1. Let D be a distribution with paramaters p +, p − and p DISPLAYFORM0 The proof follows from Theorem 5.2 and Theorem 5.3 which we state in the next section. The proof is given in Sec. 8.8. One surprising fact of this theorem is that m 1 (0, δ) ≤ 2. Indeed, our analysis shows that for an over-parameterized network and for sufficiently large p + and p −, one diverse positive point and one diverse negative suffice for gradient descent to learn f * with high probability. We note that even in this case, the dynamics of gradient descent is highly complex. This is due to the randomness of the initialization and to the fact that there are multiple weight filters in the network, each with different dynamics. See Sec. 5 for further details. We will illustrate the guarantee of Theorem 4.1 with several numerical examples. In all of the examples we assume that for the distribution D, the probability to sample a positive point is 1 2 and 4 In practice it is common to set γ to 1. In our analyis we will need γ ≥ 8 to guarantee generalization. In Section 8.3 we show empirically, that for this task, setting γ to be larger than 1 in better test performance than setting γ = 1.5 Note that BID6 show that fixing the last layer weights to ±1 does not degrade performance in various tasks. This assumption also appeared in other works BID1 BID8. 6 We note that this generalization gap holds for global minima (0 train error). Therefore, the theorem can be read as follows. For k ≥ 120, given 2 samples, with probability at least 1 − δ, gradient descent converges to a global minimum with at most test error. 
On the other hand, for k = 2 and given number of samples less than 2 log DISPLAYFORM1, with probability greater than δ, gradient descent converges to a global minimum with error greater than. See Section 5 for further details. DISPLAYFORM2 (it is easy to constuct such distributions). In the first example, we assume that p + = p − = 0.98 and δ = 1 − 0.98 DISPLAYFORM3 05. In this case we get that for any 0 ≤ < 0.005, m 1 (, δ) ≤ 2 whereas m 2 (, δ) ≥ 129. For the second example consider the case where p + = p − = 0.92. It follows that for δ = 0.16 and any 0 ≤ < 0.02 it holds that m 1 (, δ) ≤ 2 and m 2 (, δ) ≥ 17. For = 0 and any δ > 0, by setting p + and p − to be sufficiently close to 1, we can get an arbitrarily large gap between m 1 (, δ) and m 2 (, δ). In contrast, for sufficiently small p + and p −, e.g., in which p +, p − ≤ 0.7, our bound does not guarantee a generalization gap. In this section we sketch the proof of Theorem 4.1. The theorem follows from two theorems: Theorem 5.2 for over-parameterized networks and Theorem 5.3 for networks with k = 2. We formally show this in Sec. 8.8. In Sec. 5.1 we state Theorem 5.2 and outline its proof. In Sec. 5.2 we state Theorem 5.3 and shortly outline its proof. Finally, for completeness, in Sec. 9 we also provide a convergence guarantee for the XOR problem with inputs in {±1}, which in our setting is the case of d = 1. In what follows, we will need the following formal definition for a detection of a pattern by a network. DISPLAYFORM0 We say that a pattern v (positive or negative) is detected by the network N W with confidence DISPLAYFORM1 The above definition captures a desired property of a network, namely, that its filters which are connected with a positive coefficient in the last layer, have high correlation with the positive patterns and analogously for the remaining filters and negative patterns. We note however, that the condition in which a network detects all patterns is not equivalent to realizing the ground truth f *. The former can hold without the latter and vice versa. Theorem 5.2 and Theorem 5.3 together imply a clear characterization of the different inductive biases of gradient descent in the case of small (k = 2) and over-parameterized networks. The characterization is that over-parameterized networks are biased towards global minima that detect all patterns in the data, whereas small networks with k = 2 are biased towards global minima that do not detect all patterns (see Definition 5.1). In Sec. 8.5 we show this empirically in the XORD problem and in a generalization of the XORD problem. In the following sections we will need several notations. Define x 1 =, x 2 = (1, −1), x 3 = (−1, −1), x 4 = (−1, 1) to be the four possible patterns in the data and the following sets: DISPLAYFORM2 We denote by x + a positive diverse point and x − a negative diverse point. Define the following sum: DISPLAYFORM3 Finally, in all of the in this section we will denote by c < 10 −10 a negligible constant. The main in this section is given by the following theorem., k ≥ 120 and γ ≥ 8. Then, with probability DISPLAYFORM0 iterations, it converges to a global minimum which satisfies: DISPLAYFORM1, all patterns are detected with confidence c d.This shows that given a small training set size, and sufficiently large p + and p −, overparameterized networks converge to a global minimum which realizes the classifier f * with high probability and in a constant number of iterations. 
Furthermore, this global minimum detects all patterns in the data with confidence that increases with over-parameterization. The full proof of Theorem 5.2 is given in Sec. 8.6.We will now sketch its proof. With probability at least (p + p −) m all training points are diverse and we will condition on this event. From Sec. 10 we can assume WLOG that the training set consists of one positive diverse point x + and one negative diverse point x − (since the network will have the same output on all same-label diverse points). We note that empirically over-parameterization improves generalization even when the training set contains non-diverse points (see FIG0 and Sec. 8.2). Now, to understand the dynamics of gradient descent it is crucial to understand the dynamics of the sets in Eq. 5. This follows since the gradient updates are expressed via these sets. Concretely, let DISPLAYFORM2 then the gradient update is given as follows: DISPLAYFORM3 the gradient update is given by: DISPLAYFORM4 Furthermore, the values of N W (x +) and N W (x −) depend on these sets and their corresponding weight vectors, via sums of the form S + t, defined above. The proof consists of a careful analysis of the dynamics of the sets in Eq. 5 and their corresponding weight vectors. For example, one of this analysis is that for all t ≥ 1 and i ∈ {1, 3} we have W There are two key technical observations that we apply in this analysis. First, with a small initialization and with high probability, for all 1 ≤ j ≤ k and 1 ≤ i ≤ 4 it holds that w DISPLAYFORM5. This allows us to keep track of the dynamics of the sets in Eq. 5 more easily. For example, by this observation it follows that if for some j * ∈ W + t it holds that j * ∈ W + t+1, then for all j such that j ∈ W + t it holds that j ∈ W + t+1. Hence, we can reason about the dynamics of several filters all at once, instead of each one separately. Second, by concentration of measure we can estimate the sizes of the sets in Eq. 5 at iteration t = 0. Combining this with of the kind W + t (i) = W + 0 (i) for all t, we can understand the dynamics of these sets throughout the optimization process. The theorem consists of optimization and generalization guarantees. For the optimization guarantee we show that gradient descent converges to a global minimum. To show this, the idea is to characterize the dynamics of S + t using the characterization of the sets in Eq. 5 and their corresponding weight vectors. We show that as long as gradient descent did not converge to a global minimum, S + t cannot decrease in any iteration and it is upper bounded by a constant. Furthermore, we show that there cannot be too many consecutive iterations in which S + t does not increase. Therefore, after sufficiently many iterations gradient descent will converge to a global minimum. We will now outline the proof of the generalization guarantee. Denote the network learned by gradient descent by N W T. First, we show that the network classifies all positive points correctly. Define the following sums for all 1 ≤ i ≤ 4: DISPLAYFORM6 First we notice that for all positive z we have N W T (z) min{X . Hence, we can show that each positive point is classified correctly. The proof that all negative points are classified correctly and patterns x 2 and x 4 are detected is similar but slightly more technical. We refer the reader to Sec. 8.6 for further details. 
DISPLAYFORM7 The following theorem provides generalization lower bounds of global minima in the case that k = 2 and in a slightly more general setting than the one given in Theorem 5.2. Theorem 5.3. Let S = S + ∪ S − be a training set as in Sec. 3. Assume that gradient descent runs with parameters η = cη k where c η ≤ DISPLAYFORM0, k = 2 and γ ≥ 1. Then the following holds: DISPLAYFORM1 48, gradient descent converges to a global minimum that has non-zero test error. Furthermore, for c d ≥ 2c η, there exists at least one pattern which is not detected by the global minimum with confidence c d .2. The non-zero test error above is at least p * .The theorem shows that for a training set that is not too large and given sufficiently large p + and p −, with constant probability, gradient descent will converge to a global minimum that is not the classifier f * . Furthermore, this global minimum does not detect at least one pattern. The proof of the theorem is given in Sec. 8.7.We will now provide a short outline of the proof. Let w DISPLAYFORM2 T be the filters of the network at the iteration T in which gradient descent converges to a global minimum. The proof shows that gradient descent will not learn f * if one of the following conditions is met: a) DISPLAYFORM3 T · x 4 > 0. Then by using a symmetry argument which is based on the symmetry of the initialization and the training data it can be shown that one of the above conditions is met with high constant probability. Finally, it can be shown that if one of these conditions hold, then at least one pattern is not detected. We perform several experiments that corroborate our theoretical findings. In Sec. 8.5 we empirically demonstrate our insights on the inductive bias of gradient descent. In Sec. 6.2 we evaluate a model compression scheme implied by our , and demonstrate its success on the MNIST dataset. In this section we perform experiments to examine the insights from our analysis on the inductive bias of gradient descent. Namely, that over-parameterized networks are biased towards global minima that detect more patterns in the data than global minima found by smaller networks. We check this both on the XORD problem which contains 4 possible patterns in the data and on an instance of an extension of the XORD problem, that we refer to as the Orthonormal Basis Detection (OBD) problem, which contains 60 patterns in the data. In Sec. 8.5 we provide details on the experimental setups. standard training (red), the small network that uses clusters from the large network (blue), and the large network (120 channels) with standard training (green). It can be seen that the large network is effectively compressed without losing much accuracy. Due to space considerations, we will not formally define the OBD problem in this section. We refer the reader to Sec. 8.5 for a formal definition. Informally, The OBD problem is a natural extension of the XORD problem that contains more possible patterns in the data and allows the dimension of the filters of the convolutional network to be larger. The patterns correspond to a set of orthonormal vectors and their negations. The ground truth classifier in this problem can be realized by a convolutional network with 4 channels. In FIG1 we show experiments which confirm that in the OBD problem as well, overparameterization improves generalization. We further show the number of patterns detected in %0 training error solutions for different number of channels, in both the XORD and OBD problems. 
It can be clearly seen that for both problems, over-parameterized networks are biased towards %0 training error solutions that detect more patterns, as predicted by the theoretical . By inspecting the proof of Theorem 5.2, one can see that the dynamics of the filters of an overparameterized network are such that they either have low norm, or they have large norm and they point to the direction of one of the patterns (see, e.g., Lemma 8.4 and Lemma 8.6). This suggests that by clustering the filters of a trained over-parameterized network to a small number of clusters, one can create a significantly smaller network which contains all of the detectors that are needed for good generalization performance. Then, by training the last layer of the network, it can converge to a good solution. Following this insight, we tested this procedure on the MNIST data set and a 3 layer convolutional network with convolutional layer with multiple channels and 3 × 3 kernels, max pooling layer and fully connected layer. We trained an over-parameterized network with 120 channels, clustered its filters with k-means into 4 clusters and used the cluster centers as initialization for a small network with 4 channels. Then we trained only the fully connected layer of the small network. In FIG5 we show that for various training set sizes, the performance of the small network improves significantly with the new initialization and nearly matches the performance of the overparameterized network. In this paper we consider a simplified learning task on binary vectors and show that overparameterization can provably improve generalization performance of a 3-layer convolutional network trained with gradient descent. Our analysis reveals that in the XORD problem overparameterized networks are biased towards global minima which detect more relevant patterns in the data. While we prove this only for the XORD problem and under the assumption that the training set contains diverse points, our experiments clearly show that a similar phenomenon occurs in other settings as well. We show that this is the case for XORD with non-diverse points FIG0 ) and in the more general OBD problem which contains 60 patterns in the data and is not restricted to binary inputs FIG1. Furthermore, our experiments on MNIST hint that this is the case in MNIST as well FIG5.By clustering the detected patterns of the large network we could achieve better accuracy with a small network. This suggests that the larger network detects more patterns with gradient descent even though its effective size is close to that of a small network. We believe that these insights and our detailed analysis can guide future work for showing similar in more complex tasks and provide better understanding of this phenomenon. It would also be interesting to further study the implications of such on model compression and on improving training algorithms. Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. We tested the generalization performance in the setup of Section3. We considered networks with number of channels 4,6,8,20,50,100 and 200. The distribution in this setting has p + = 0.5 and p − = 0.9 and the training sets are of size 12 (6 positive, 6 negative). Note that in this case the training set contains non-diverse points with high probability. The ground truth network can be realized by a network with 4 channels. For each number of channels we trained a convolutional network 100 times and averaged the . 
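Returning to the compression scheme described above (cluster the filters of a trained over-parameterized network with k-means and use the cluster centers to initialize a small network whose fully connected layer is then trained), a minimal sketch is given below. The layer and attribute names are assumptions about how the networks would be organized in code, not the exact experimental pipeline.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

def cluster_filters(conv_layer, n_clusters=4):
    """Cluster the filters of a trained convolutional layer with k-means and return the
    cluster centers, to be used as initialization for a smaller layer."""
    w = conv_layer.weight.detach().cpu().numpy()        # (out_channels, in_channels, kh, kw)
    flat = w.reshape(w.shape[0], -1)                    # one row per filter
    centers = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat).cluster_centers_
    return torch.tensor(centers.reshape(n_clusters, *w.shape[1:]), dtype=conv_layer.weight.dtype)

# Usage sketch (hypothetical module names): copy the centers into a 4-channel network,
# freeze its convolutional layer, and train only the fully connected classifier on top.
# small_net.conv.weight.data.copy_(cluster_filters(big_net.conv, n_clusters=4))
# for p in small_net.conv.parameters():
#     p.requires_grad_(False)
```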
In each run we sampled a new training set and new initialization of the weights according to a gaussian distribution with mean 0 and standard deviation 0.00001. For each number of channels c, we ran gradient descent with learning rate 0.04 c and stopped it if it did not improve the cost for 20 consecutive iterations or if it reached 30000 iterations. The last iteration was taken for the calculations. We plot both average test error over all 100 runs and average test error only over the runs that ended at 0% train error. In this case, for each number of channels 4, 6, 8, 20, 50, 100,200 the number of runs in which gradient descent converged to a 0% train error solution is 62, 79, 94, 100, 100, 100, 100, respectively. Figure 5 shows that setting γ = 5 gives better performance than setting γ = 1 in the XORD problem. The setting is similar to the setting of Section 8.2.1. Each point is an average test error of 100 runs.. Because the is a lower bound, it is desirable to understand the behaviour of gradient descent for values outside these ranges. In Figure 6 we empirically show that for values outside these ranges, there is a generalization gap between gradient descent for k = 2 and gradient descent for larger k. We will first formally define the OBD problem. Fix an even dimension parameter d 1 ≥ 2. In this problem, we assume there is an orthonormal basis B = {v 1, ..., v d1} of R d1. Divide B into two equally sized sets B 1 and B 2, each of size d1 2. Now define the set of positive patterns to be P = {v | v ∈ B 1} ∪ {−v | v ∈ B 1} and negative patterns to be DISPLAYFORM0 we assume the input domain is X ⊆ R d1d2 and each x ∈ X is a vector such that x = (x 1, ..., x d2) where each x i ∈ P OBD. We define the ground truth classifier f OBD: X → {±1} such that f OBD (x) = 1 if and only there exists at least one x i such that x i ∈ P. Notice that for d 1 = 2 and by normalizing the four vectors in {±1} 2 to have unit norm, we get the XORD problem. We note that the positive patterns in the XORD problem are defined to be P XOR and the negative patterns are {±1} 2 \ P XOR.Let D be a distribution over X 2d × {±1} such that for all (x, y) ∼ D, y = f OBD (x). As in the XORD problem we define the distributions D + and D −. We consider the following learning task which is the same as the task for the XORD problem. We assume that we are given a training set S = S + ∪ S − ⊆ {±1} d1d2 × {±1} where S + consists of m IID points drawn from D + and S − consists of m IID points drawn from D −. The goal is to train a neural network with randomly initialized gradient descent on S and obtain a network N: DISPLAYFORM1 We consider the same network as in the XORD problem (Eq. 1), but now the filters of the convolution layer are d 1 -dimensional. Formally, for an input x = (x 1, ..., x d) ∈ X the output of the network is given by DISPLAYFORM2 where W ∈ R 2k×d1 is the weight matrix which contains in the first k rows the vectors w (i) ∈ R d1, in the next k rows the vectors u (i) ∈ R d1 and σ(x) = max{0, x} is the ReLU activation applied element-wise. We performed experiments in the case that d 1 = 30, i.e., in which there are 60 possible patterns. In FIG1, for each number of channels we trained a convolutional network given in Eq. 9 with gradient descent for 100 runs and averaged the . The we sampled 25 positive points and 25 negative points in the following manner. For each positive point we sampled with probability 0.25 one of the numbers twice with replacement. Denote these numbers by m 1 and m 2. 
Then we sampled m 1 different positive patterns and m 2 different negative patterns. Then we filled a 60d 1 -dimensional vectors with all of these patterns. A similar procedure was used to sample a negative point. We considered networks with number of channels 4,6,8,20,100 and 200 and 500. Note that the ground truth network can be realized by a network with 4 channels. For each number of channels we trained a convolutional network 100 times and averaged the . For each number of channels c, we ran gradient descent with learning rate 0.2 c and stopped it if it did not improve the cost for 20 consecutive iterations or if it had 0% training error for 200 consecutive iterations or if it reached 30000 iterations. The last iteration was taken for the calculations. We plot both average test error over all 100 runs and average test error only over the runs that ended at 0% train error. For each number of channels 4, 6, 8, 20, 100, 200,500 the number of runs in which gradient descent converged to a 0% train error solution is 96, 99, 100, 100, 100, 100, 100, respectively. For each 0% train error solution we recorded the number of patterns detected with c d = 0.0001 according to the Definition 5.1 (generalized to the OBD problem). In the XORD problem we recorded similarly the number of patterns detected in experiments which are identical to the experiments in Section 8.2.1, except that in this case p + = p − = 0.98. We will first need a few notations. Define x 1 =, x 2 = (1, −1), x 3 = (−1, −1), x 4 = (−1, 1) and the following sets: DISPLAYFORM0 We can use these definitions to express more easily the gradient updates. Concretely, let j ∈ W + t (i 1) ∩ W − t (i 2) then the gradient update is given as follows: DISPLAYFORM1 the gradient update is given by: DISPLAYFORM2 We denote by x + a positive diverse point and x − a negative diverse point. Define the following sums for φ ∈ {+, −}: DISPLAYFORM3 By the conditions of the theorem, with probability at least (p + p −) m all the points in the training set are diverse. From now on we will condition on this event. Furthermore, without loss of generality, we can assume that the training set consists of one diverse point x + and one negative points x −. This follows since the network and its gradient have the same value for two different positive diverse points and two different negative points. Therefore, this holds for the loss function defined in Eq. 4 as well. We will now proceed to prove the theorem. In Section 8.6.1 we prove on the filters at initialization. In Section 8.6.2 we prove several auxiliary lemmas. In Section 8.6.3 we prove upper bounds on S − t, P + t and P − t for all iterations t. In Section 8.6.4 we characterize the dynamics of S + t and in Section 8.6.5 we prove an upper bound on it together with upper bounds on N Wt (x +) and −N Wt (x −) for all iterations t. We provide an optimization guarantee for gradient descent in Section 8.6.6. We prove generalization guarantees for the points in the positive class and negative class in Section 8.6.7 and Section 8.6.8, respectively. We complete the proof of the theorem in Section 8.6.9. Lemma 8.1. With probability at least 1 − 4e −8, it holds that DISPLAYFORM0 Proof. Without loss of generality consider DISPLAYFORM1 2, we get by Hoeffding's inequality DISPLAYFORM2 The now follows by the union bound. Lemma 8.2. With probability DISPLAYFORM3 Proof. Let Z be a random variable distributed as N (0, σ 2). 
Then by Proposition 2.1.2 in , we have DISPLAYFORM4 Therefore, for all 1 ≤ j ≤ k and 1 ≤ i ≤ 4, DISPLAYFORM5 and DISPLAYFORM6 The follows by applying a union bound over all 2k weight vectors and the four points DISPLAYFORM7 From now on we assume that the highly probable event in Lemma 8.2 holds. DISPLAYFORM8 Proof. By Lemma 8.2 we have DISPLAYFORM9 and similarly −N W0 (x −) < 1. Therefore, by Eq. 10 and Eq. 11 we get: DISPLAYFORM10 2. For i ∈ {2, 4} and j ∈ W + 0 (i), it holds that w DISPLAYFORM11 4. For i ∈ {2, 4} and j ∈ U + 0 (i), it holds that u DISPLAYFORM12 2. For i ∈ {2, 4} and j ∈ W + 0 (i), it holds that w DISPLAYFORM13 4. For i ∈ {2, 4} and j ∈ U + 0 (i), it holds that u DISPLAYFORM14 As before, by Lemma 8.2 we have N W2 (x +) < γ and −N W2 (x −) < 1. Lemma 8.4. For all t ≥ 1 we have W DISPLAYFORM0 Proof. We will first prove that DISPLAYFORM1 To prove this, we will show by induction on t ≥ 1, that for all j ∈ W + 0 (i) ∩ W + 0 (l), where l ∈ {2, 4} the following holds: DISPLAYFORM2 The claim holds for t = 1 by the proof of Lemma 8.3. Assume it holds for t = T. By the induction hypothesis there exists an l ∈ {2, 4} such that j ∈ W + T (i) ∩ W − T (l). By Eq. 10 we have, DISPLAYFORM3 where a ∈ {0, 1} and b ∈ {−1, 0}. T ·x l = w (j) 0 ·x l then l = l and either w (j) DISPLAYFORM0 T · x l < 0 and l = l. It follows that either w DISPLAYFORM1 In both cases, we have w DISPLAYFORM2 In order to prove the lemma, it suffices to show that DISPLAYFORM0.., k}. We will show by induction on t ≥ 1, that for all j ∈ W + 0 ∪ W + 0, the following holds: DISPLAYFORM1 The claim holds for t = 1 by the proof of Lemma 8.3. Assume it holds for t = T. By the induction hypothesis j ∈ W + T ∩ W + T. Assume without loss of generality that j ∈ W + T. This implies that j ∈ W − T as well. Therefore, by Eq. 10 we have DISPLAYFORM2 where a ∈ {0, 1} and b ∈ {0, −1}. By the induction hypothesis, w DISPLAYFORM3 where the first inequality follows since j ∈ W + T and the second by Eq. 13. This implies that DISPLAYFORM4 Otherwise, assume that a = 0 and b = −1. By Lemma 8.2 we have w T · x 2 < 0 and j / ∈ W + T, which is a contradiction. DISPLAYFORM5 which concludes the proof. Lemma 8.5. For all t ≥ 0 we have DISPLAYFORM6 0 + α t ηx 2 for α t ∈ Z. This follows since the inequalities u DISPLAYFORM7. Assume by contradiction that there exist an iteration t for which u DISPLAYFORM8 0 + α t−1 ηx 2 where α t−1 ∈ Z. 9 Since the coefficient of x i changed in iteration t, we have j ∈ U + t−1 ∪ U + t−1. However, this contradicts the claim above which shows that if u DISPLAYFORM9 Lemma 8.6. Let i ∈ {1, 3} and l ∈ {2, 4}. DISPLAYFORM10 Proof. First note that by Eq. 11 we generally have u DISPLAYFORM11, by the gradient update in Eq. 11 it holds that a t ∈ {0, −1}. Indeed, a 0 = 0 and by the gradient update if a t−1 = 0 or a t−1 = −1 then a t ∈ {−1, 0}.Assume by contradiction that there exists an iteration t > 0 such that b t = −1 and b t−1 = 0. Note that by Eq. 11 this can only occur if j ∈ U + t−1 (l). We have u DISPLAYFORM12 Lemma 8.7. Let DISPLAYFORM13 and Y DISPLAYFORM14 Then for all t, DISPLAYFORM15 9 Note that in each iteration βt changes by at most η. Proof. We will prove the claim by induction on t. For t = 0 this clearly holds. Assume it holds for t = T. Let j 1 ∈ W + T and j 2 ∈ W + T. By Eq. 10, the gradient updates of the corresponding weight vector are given as follows: DISPLAYFORM16 where a ∈ {0, 1} and b 1, b 2 ∈ {−1, 0, 1}. By Lemma 8.4, j 1 ∈ W + T +1 and j 2 ∈ W + T +1. Therefore, DISPLAYFORM17 DISPLAYFORM18 Proof. 
In Lemma 8.4 we showed that for all t ≥ 0 and j ∈ W DISPLAYFORM19 t · x 2 ≤ η. This proves the first claim. The second claim follows similarly. Without loss of generality, let j ∈ U + t. By Lemma 8.5 it holds that U DISPLAYFORM20 Therefore, by Lemma 8.6 we have u (j) t x 1 < η, from which the claim follows. For the third claim, without loss of generality, assume by contradiction that for j ∈ U DISPLAYFORM21, from which the claim follows. Lemma 8.9. The following holds: DISPLAYFORM0 Under review as a conference paper at ICLR 2019 DISPLAYFORM1 Proof.1. The equality follows since for each i ∈ {1, 3}, l ∈ {2, 4} and j ∈ W DISPLAYFORM2 2. In this case for each i ∈ {1, 3}, l ∈ {2, 4} and j ∈ W DISPLAYFORM3 3. This equality follows since for each i ∈ {1, 3}, l ∈ {2, 4} and j ∈ W DISPLAYFORM4 (since x l will remain the maximal direction). Therefore, DISPLAYFORM5 In the second case, where we have DISPLAYFORM6 t · x i < η for i ∈ {1, 3}. Note that by Lemma 8.6, any DISPLAYFORM7. By all these observations, we have DISPLAYFORM8 By Eq. 14 and Eq. 15, it follows that, Z DISPLAYFORM9. Applying these observations b times, we see that Y DISPLAYFORM10 4) where the equality follows by Lemma 8.4. By Lemma 8.9, we have S DISPLAYFORM11 Hence, we can conclude that DISPLAYFORM12 Proof. Define DISPLAYFORM13 First note that by Lemma 8.4 we have W DISPLAYFORM14 where the second equality follows by Lemma 8.4. DISPLAYFORM15 To see this, note that by Lemma 8.6 and Lemma 8.5 it holds that u DISPLAYFORM16 0 · x 2 and thus Eq. 16 holds. Now assume that j ∈ U + T (l) for l ∈ {2, 4}. Then DISPLAYFORM17 Proof. The claim holds for t = 0. Consider an iteration T. If DISPLAYFORM18 η, where the last inequality follows from the previous observation. Hence, DISPLAYFORM19 The proof of the second claim follows similarly. DISPLAYFORM20 The third claim holds by the following identities and bounds DISPLAYFORM21 η by the previous claims. We are now ready to prove a global optimality guarantee for gradient descent. Proposition 8.13. Let k > 16 and γ ≥ 1. With probabaility at least 1 − DISPLAYFORM0 iterations, gradient descent converges to a global minimum. Proof. First note that with probability at least 1 − DISPLAYFORM1 −8 the claims of Lemma 8.1 and Lemma 8.2 hold. Now, if gradient descent has not reached a global minimum at iteration t then either DISPLAYFORM2 where the last inequality follows by Lemma 8.1. DISPLAYFORM3 by Lemma 8.9. However, by Lemma 8.10, it follows that after 5 consecutive iterations t < t < t + 6 in which DISPLAYFORM4 To see this, first note that for all t, N Wt (x +) ≤ γ+3c η by Lemma 8.12. Then, by Lemma 8.10 we have DISPLAYFORM5 where the second inequality follows by Lemma 8.1 and the last inequality by the assumption on k. Assume by contradiction that GD has not converged to a global minimum after T = 7(γ+1+8cη) DISPLAYFORM6 iterations. Then, by the above observations, and the fact that S + 0 > 0 with probability 1, we have DISPLAYFORM7 However, this contradicts Lemma 8.12. We will first need the following three lemmas. Lemma 8.14. With probability at least 1 − 4e −8, it holds that DISPLAYFORM0 Proof. The proof is similar to the proof of Lemma 8.1.Lemma 8.15. Assume that gradient descent converged to a global minimum at iteration T. Then there exists an iteration T 2 < T for which S DISPLAYFORM1 Proof. Assume that for all 0 ≤ t ≤ T 1 it holds that N Wt (x +) < γ and −N Wt (x −) < 1. By continuing the calculation of Lemma 8.3 we have the following: DISPLAYFORM2 2. 
For i ∈ {2, 4} and j ∈ W + 0 (i), it holds that w DISPLAYFORM3 Therefore, there exists an iteration DISPLAYFORM4 It suffices to show that for all T 1 ≤ t < T 2 the following holds: DISPLAYFORM5 The first claim follows since at any iteration N Wt (x +) can decrease by at most 2ηk = 2c η. For the second claim, let t < t be the latest iteration such that N W t (x +) ≥ γ. Then at iteration t it holds that −N W t (x −) < 1 and N W t (x +) ≥ γ. Therefore, for all i ∈ {1, 3}, l ∈ {2, 4} and DISPLAYFORM6 t + ηx l. Hence, by Lemma 8.5 and Lemma 8.6 it holds that U + t +1 ∪ U + t +1 = ∅. Therefore, by the gradient update in Eq. 11, for all 1 ≤ j ≤ k, and all t < t ≤ t we have u DISPLAYFORM7 The above argument shows that DISPLAYFORM8 Assume that k ≥ 64 and gradient descent converged to a global minimum at iteration T. Then, DISPLAYFORM9 Proof. Notice that by the gradient update in Eq. 10 and Lemma 8.2, X + t can be strictly larger than max DISPLAYFORM10. We know by Lemma 8.15 that there exists T 2 < T such that S + T2 ≥ γ+1−3c η and that N Wt (x +) < γ and −N Wt (x −) ≥ 1 only for t > T 2. Since S + t ≤ γ + 1 + 8c η for all t by Lemma 8.12, there can only be at most DISPLAYFORM11 where the second inequality follows by Lemma 8.1 and the third inequality by the assumption on k. We are now ready to prove the main of this section. Assume without loss of generality that z i = (−1, −1) = x 3. Define Notice that DISPLAYFORM0 Furthermore, by Lemma 8.7 we have DISPLAYFORM1 and by Lemma 8.14, DISPLAYFORM2. Combining this fact with Eq. 19 and Eq. 20 we get DISPLAYFORM3 which implies together with Eq. 18 that X DISPLAYFORM4 where the first inequality is true because DISPLAYFORM5 The second inequality in Eq. 21 follows since P + T ≤ c η and by appyling Lemma 8.16. Finally, the last inequality in Eq. 21 follows by the assumption on k.10 Hence, z is classified correctly. We will need the following lemmas. Lemma 8.18. With probability at least 1 − 8e −8, it holds that DISPLAYFORM0 Under review as a conference paper at ICLR 2019Proof. The proof is similar to the proof of Lemma 8.1 and follows from the fact that DISPLAYFORM1 Lemma 8.19. Let DISPLAYFORM2 Then for all t, there exists X, Y ≥ 0 such that |X| ≤ η U + 0, |Y | ≤ η U + 0 and DISPLAYFORM3 Proof. First, we will prove that for all t there exists a t ∈ Z such that for DISPLAYFORM4 11 We will prove this by induction on t. For t = 0 this clearly holds. Assume it holds for an iteration t. Let j 1 ∈ U − 0 and j 2 ∈ U − 0. By the induction hypothesis, there exists a T ∈ Z such that u DISPLAYFORM5. In either case, by Eq. 11, we have the following update at iteration t + 1: DISPLAYFORM6 where a ∈ {−1, 0, 1}. Hence, u DISPLAYFORM7 − (a t + a)ηx 2. This concludes the proof by induction. Now, consider an iteration t, j 1 ∈ U + 0, j 2 ∈ U + 0 and the integer a t defined above. If a t ≥ 0 then DISPLAYFORM8 ) which proves the claim in the case that a t ≥ 0.If a t < 0 it holds that 11 Recall that by Lemma 8.5 we know that DISPLAYFORM9 Under review as a conference paper at ICLR 2019 DISPLAYFORM10 Since for all 1 ≤ j ≤ k it holds that u DISPLAYFORM11 4) which concludes the proof. Lemma 8.20. Let DISPLAYFORM12 Then for all t, DISPLAYFORM13 Proof. We will first prove that for all t there exists an integer a t ≥ 0 such that for DISPLAYFORM14 · x 4 + ηa t. We will prove this by induction on t. For t = 0 this clearly holds. Assume it holds for an iteration t. DISPLAYFORM15. 
By the induction hypothesis, there exists an integer a t ≥ 0 such that u DISPLAYFORM16, it follows that if a t ≥ 1 we have the following update at iteration T + 1: DISPLAYFORM17 where a ∈ {−1, 0, 1}. Hence, u DISPLAYFORM18 0 ·x 2 +η(a t +a) and u DISPLAYFORM19. This concludes the proof by induction. Now, consider an iteration t, DISPLAYFORM20 and the integer a t defined above. We have, DISPLAYFORM21 It follows that DISPLAYFORM22 which concludes the proof. We are now ready to prove the main of this section. Proof. With probability at least 1 − √ 2k √ πe 8k − 16e −8 Proposition 8.13 and Lemma 8.18 hold. It suffices to show generalization on negative points. Assume that gradient descent converged to a global minimum at iteration T. Let (z, −1) be a negative point. Assume without loss of generality that z i = x 2 for all 1 ≤ i ≤ d. Define the following sums for l ∈ {2, 4}, DISPLAYFORM23 First, we notice that DISPLAYFORM24 We note that by the analysis in Lemma 8.18, it holds that for any t, j 1 ∈ U + 0 and j 2 ∈ U + 0, either j 1 ∈ U + t and j 2 ∈ U + t, or j 1 ∈ U + t and j 2 ∈ U + t. We assume without loss of generality that j 1 ∈ U + T and j 2 ∈ U + T. It follows that in this case DISPLAYFORM25 12 Otherwise we would replace Y − T with Y − T and vice versa and continue with the same proof. DISPLAYFORM26. By Lemma 8.20 and Lemma 8.18 DISPLAYFORM27 and by Lemma 8.19 and Lemma 8.18 there exists Y ≤ c η such that: DISPLAYFORM28 Plugging these inequalities in Eq. 24 we get: DISPLAYFORM29 By Lemma 8.16 we have X − T ≤ 34c η. Hence, by using the inequality S − T ≤ c η we conclude that DISPLAYFORM30 where the last inequality holds for k > 64 We will now prove pattern detection . In the case of over-paramterized networks, in Proposition 8.17 we proved that DISPLAYFORM31 1+α(k). Since for i ∈ {1, 3} it holds that D xi ≥ X + (i), it follows that patterns x 1 and x 3 are detected. Similarly, in Proposition 8.21our analysis implies that, without loss of generality, DISPLAYFORM32 (under the assumption that we assumed without loss of generality), it follows that patterns x 2 and x 4 are detected. The confidence of the detection is at 2 by a symmetry argument. This will finish the proof of the theorem. For the proof, it will be more convenient to denote the matrix of weights at iteration t as a tuple of 4 vectors, i.e., W t = w To see this, we will illustrate this through one case, the other cases are similar. Assume, for example, that arg max 1≤l≤4 u(FORMULA0 t · x l = 3 and arg max l∈{2,4} u t · x l = 2 and assume without loss of generality that N W T · x 1 ≤ 2c η and therefore, x 1 cannot be detected with confidence greater than 2c η. 2. Let Z 1 be the set of positive points which contain only the patterns x 1, x 2, x 4, Z 2 be the set of positive points which contain only the patterns x 3, x 2, x 4. Let Z 3 be the set which contains the negative point with all patterns equal to x 2 and Z 4 be the set which contains the negative point with all patterns equal to x 4. By the proof of the previous section, if the event E holds, then there exists 1 ≤ i ≤ 4, such that gradient descent converges to a solution at iteration T which errs on all of the points in Z i. Therefore, its test error will be at least p * (recall Eq. 3). In this section we assume that we are given a training set S ⊆ {±1} 2 × {±1} 2 consisting of points (x 1, 1), (x 2, −1), (x 3, 1), (x 4, −1), where x 1 =, x 2 = (−1, 1), x 3 = (−1, −1) and x 4 = (1, −1). Our goal is to learn the XOR function with gradient descent. 
Note that in the case of two dimensions, the convolutional network introduced in Section 3 reduces to the following two-layer fully connected network. DISPLAYFORM0 We consider running gradient descent with a constant learning rate η ≤ cη k, c η ≤ 1 and IID gaussian initialization with mean 0 and standard deviation σ g = cη 16k 3/2. We assume that gradient descent minimizes the hinge loss (W) = where optimization is only over the first layer. We will show that gradient descent converges to the global minimum in a constant number of iterations. For each point x i ∈ S define the following sets of neurons: DISPLAYFORM1 t · x i < 0
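The displayed equations in this passage are lost to extraction (the DISPLAYFORM placeholders), so the exact network and loss are not recoverable from the text alone. Below is a minimal sketch of the described setup under stated assumptions: the two-layer network is taken to be N_W(x) = sum_j ReLU(w^(j)·x) − sum_j ReLU(u^(j)·x) with the second layer fixed to ±1, the hinge loss uses margin γ on positive points and 1 on negative points (matching the conditions N_W(x+) ≥ γ and −N_W(x−) ≥ 1 used in the lemmas), and x_1 = (1, 1), the value elided above but implied by the XOR labels. All constants and names are illustrative, not the paper's code.

```python
import numpy as np

# Sketch of 2-D XOR trained with gradient descent, first layer only (assumptions above).
k, gamma, c_eta = 32, 1.0, 1.0            # hidden units per sign, positive margin, constant c_eta
eta = c_eta / k                            # learning rate eta <= c_eta / k
sigma_g = c_eta / (16 * k ** 1.5)          # init std sigma_g = c_eta / (16 k^{3/2})

X = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)   # x1..x4
y = np.array([1, -1, 1, -1], dtype=float)                          # XOR labels

rng = np.random.default_rng(0)
W = rng.normal(0.0, sigma_g, (k, 2))       # "positive" units w^(j)
U = rng.normal(0.0, sigma_g, (k, 2))       # "negative" units u^(j)

def net(W, U, x):
    # Assumed form: sum_j relu(w_j . x) - sum_j relu(u_j . x), second layer fixed.
    return np.maximum(W @ x, 0).sum() - np.maximum(U @ x, 0).sum()

for t in range(2000):
    gW, gU = np.zeros_like(W), np.zeros_like(U)
    for x_i, y_i in zip(X, y):
        margin = gamma if y_i > 0 else 1.0
        if y_i * net(W, U, x_i) < margin:                  # point still violates its margin
            gW -= y_i * (W @ x_i > 0)[:, None] * x_i       # hinge-loss subgradient w.r.t. w^(j)
            gU += y_i * (U @ x_i > 0)[:, None] * x_i       # and w.r.t. u^(j)
    W, U = W - eta * gW, U - eta * gU

print([round(float(net(W, U, x_i)), 3) for x_i in X])  # positives should reach >= gamma, negatives <= -1
```

With k well above the k > 16 regime the analysis concerns, this toy run reaches the margins within a handful of updates, consistent with the iteration bounds proved above.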
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HyGLy2RqtQ
We show in a simplified learning task that over-parameterization improves generalization of a convnet that is trained with gradient descent.
We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-learning algorithm that implicitly learns a model prior distribution of interest. Our proposed method extends the PAC-Bayes framework from a single task setting to the few-shot meta-learning setting to upper-bound generalisation errors on unseen tasks. We also propose a generative-based approach to model the shared prior and task-specific posterior more expressively compared to the usual diagonal Gaussian assumption. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification on mini-ImageNet benchmark, and competitive in a multi-modal task-distribution regression. One unique ability of humans is to be able to quickly learn new tasks with only a few training examples. This is due to the fact that humans tend to exploit prior experience to facilitate the learning of new tasks. Such exploitation is markedly different from conventional machine learning approaches, where no prior knowledge (e.g. training from scratch with random initialisation) , or weak prior knowledge (e.g., fine tuning from pre-trained models) are used when encountering an unseen task for training. This motivates the development of novel learning algorithms that can effectively encode the knowledge learnt from training tasks, and exploit that knowledge to quickly adapt to future tasks . Prior knowledge can be helpful for future learning only if all tasks are assumed to be distributed according to a latent task distribution. Learning this latent distribution is, therefore, useful for solving an unseen task, even if the task contains a limited number of training samples. Many approaches have been proposed and developed to achieve this goal, namely: multi-task learning , domain adaptation and meta-learning . Among these, meta-learning has flourished as one of the most effective methods due to its ability to leverage the knowledge learnt from many training tasks to quickly adapt to unseen tasks. Recent advances in meta-learning have produced state-of-the-art in many benchmarks of few-shot learning data sets (; ; ; ; ;). Learning from a few examples is often difficult and easily leads to over-fitting, especially when no model uncertainty is taken into account. This issue has been addressed by several recent Bayesian meta-learning approaches that incorporate model uncertainty into prediction, notably LLAMA that is based on Laplace method , or PLATIPUS , Amortised Meta-learner and VERSA that use variational inference (VI). However, these works have not thoroughly investigated the generalisation errors for unseen samples, ing in limited theoretical generalisation guarantees. Moreover, most of these papers are based on variational functions that may not represent well the richness of the underlying distributions. For instance, a common choice for the variational function relies on the diagonal Gaussian distribution, which can potentially worsen the prediction accuracy given its limited representability. In this paper, we address the two problems listed above with the following technical novelties: (i) derivation of a rigorous upper-bound for the generalisation errors of few-shot meta-learning using PAC-Bayes framework, and (ii) proposal of a novel variational Bayesian learning based on implicit The few-shot meta-learning problem is modelled using a hierarchical model that learns a prior p(w i ; θ) using a few data points s ij )}. 
Shaded nodes denote observed variables, while white nodes denote hidden variables. generative models to facilitate the learning of unseen tasks. Our evaluation shows that the models trained with our proposed meta-learning algorithm is at the same time well calibrated and accurate, with competitive in terms of Expected Calibration Error (ECE) and Maximum Calibration Error (MCE), while outperforming state-of-the-art methods in a few-shot classification benchmark (mini-ImageNet). Our paper is related to Bayesian few-shot meta-learning techniques that have been developed to incorporate uncertainty into model estimation. LLAMA employs the Laplace method to extend the deterministic estimation assumed in MAML to a Gaussian distribution. However, the need to estimate and invert the Hessian matrix makes this approach computationally challenging for large-scale models, such as deep neural networks. Variational inference (VI) addresses such scalability issue -remarkable examples of VI-based methods are PLATIPUS, BMAML , Amortised meta-learner and VERSA . Although these VI-based approaches have demonstrated impressive in regression, classification as well as reinforcement learning, they do not provide any theoretical guarantee on generalisation errors for unseen samples within a task. Moreover, the overly-simplified family of diagonal Gaussian distributions used in most of these works limits the expressiveness of the variational approximation, ing in a less accurate prediction. Our work is also related to the PAC-Bayes framework used in multi-task learning that provides generalisation error bounds with certain confidence levels. These previously published papers jointly learn a single shared prior and many task-specific posteriors without relating the shared prior to any task-specific posterior. Hence, these approaches need to store all task-specific posteriors, ing in un-scalable solutions, especially when the number of tasks is large. In contrast, our proposed method learns only the shared prior of model parameters and uses that prior to estimate the task-specific posterior through the likelihood function by performing a fixed number of gradient updates. This proposed variant of amortised inference allows a memory efficient solution, and therefore, more favourable for applications with large number of tasks, such as few-shot meta-learning. In this section, we first define and formulate the few-shot meta-learning problem. Subsequently, we derive the generalisation upper-bound based on PAC-Bayes framework. We then present our proposed approach that employs implicit variational distributions for few-shot meta-learning. We use the notation of task environment to describe the unknown distribution p(T) over a family of tasks, from where tasks are sampled. Each task T i in this family is indexed by i ∈ {1, ..., T} and associated with a dataset {X i, Y i} consisting of a training/support set {X ij given the small support set for task T i . We rely on a Bayesian hierarchical model as shown in Figure 1, where w i represents the model parameters for task T i, and θ denotes the meta-parameters shared across all tasks. For example, in MAML , w i are the neural network weights for task T i that is initialised from θ and obtained by performing truncated gradient descent using {X While the conventional graphical model methods in meta-learning learn the joint probability p(θ, w 1:T |Y 1:T, X 1:T) (, Section A. 
3), our objective function for the few-shot meta-learning is to minimise the negative log predictive probability w.r.t. the meta-parameters θ as follows: where we simplify the notation by dropping the explicit dependence on X from the set of conditioning variables (this simplification is adopted throughout the paper). The predictive probability term inside the expectation in can be expanded by applying the sum rule of probability and lower-bounded by Jensen's inequality: In practice, the task-specific posterior p(w i |Y (t) i; θ) is often intractable, and therefore, approximated by a distribution q(i, θ). Given this assumption and the in, the upper bound of the objective function in can be presented as: where: Hence, instead of minimising the objective function in, we minimise the upper bound in. There are two issues related to the optimisation of this upper bound: (i) the generalisation error for (x i}, and (ii) how to estimate q(w i ; λ i) that can approximate the true posterior p(w i |Y (t) i; θ) accurately, so that we can evaluate and minimise the upper-bound in. We address the generalisation error in Section 3.2 and present a variational method to obtain an expressive variational posterior q(w i ; λ i) in Section 3.3. We first introduce the PAC-Bayes bound for the single-task problem in Theorem 1. Theorem 1 (PAC-Bayes bound for single-task setting ). Let D be an arbitrary distribution over an example domain Z. Let H be a hypothesis class,: H × Z → be a loss function, π be a prior distribution over H, and δ ∈. is an i.i.d. training set sampled according to D, then for any "posterior" Q over H, the following holds: where Theorem 1 indicates that with a high probability, the expected error of an arbitrary posterior Q on data distribution p(z) is upper-bounded by the empirical error plus a complexity regularisation term. These two terms express the trade-off between fitting data (bias) and regularising model complexity (variance). Remark 1. Despite the assumption based on bounded loss function, the PAC-Bayes bound can also be extended to unbounded loss function (, Section 5). Before presenting the novel bound for few-shot meta-learning, we define some notations. Recall that m is the number of samples in the query set {X The novel bound on the generalisation error for the few-shot meta-learning problem is shown in Theorem 2. Please refer to Appendix A for the proof. Theorem 2 (PAC-Bayes bound for few-shot meta-learning in). For the general error of few-shot meta-learning in, the following holds: Remark 2. The derived in Theorem 2 is different from the one in (, Theorem 2). As mentioned in Section 2, the prior work does not relate the posterior of model parameters q(w i ; λ i) to the shared prior p(w i ; θ). The "hypothesis" in that case is a tuple including the model parameters sampled from the prior and task-specific posterior. In contrast, our approach is a variant of amortised inference that relates the posterior from the prior and likelihood function by gradient updates (see Section 3.3). Hence, the "hypothesis" in our case includes the parameters sampled from the task-specific posterior only. The discrepancy of the "hypothesis" used between the two approaches in different upper-bounds, particularly at the regularisation term. 
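The displayed statement of Theorem 1 is missing from this extraction. For reference, a standard McAllester-style form of the single-task PAC-Bayes bound, which this section appears to rely on, is given below; the paper's exact constants may differ. Per the appendix proof, the meta-learning bound of Theorem 2 is then obtained by averaging T such per-task bounds with the per-task confidence set to δ_i = δ/T.

```latex
% Assumed form of the single-task PAC-Bayes bound (Theorem 1):
% with probability at least 1 - \delta over an i.i.d. sample S \sim D^m,
% simultaneously for every posterior Q over the hypothesis class H,
\mathbb{E}_{h \sim Q}\big[ L_D(h) \big]
\;\le\;
\mathbb{E}_{h \sim Q}\big[ L_S(h) \big]
\;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, \pi) + \ln(m/\delta)}{2(m-1)}},
```

where L_D is the expected loss under the data distribution and L_S the empirical loss on the m training samples.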
Given the in Theorem 2, the objective function of interest is to minimise the generalisation upper-bound: As denoted in Section 3.1, q(w i ; λ i) is a variational posterior that approximates the true posterior p(w i |Y (t) i; θ) for task T i, and therefore, can be obtained by minimising the following KL divergence: The ing cost function (excluding the constant term) in is often known as the variational free energy (VFE). For simplicity, we denote the cost function as The first term of VFE can be considered as a regularisation that penalises the difference between the shared prior p(w i ; θ) and the variational task-specific posterior q(w i ; λ i), while the second term is referred as data-dependent or likelihood cost. Exactly minimising the cost function in is computationally challenging, so gradient descent is used with θ as the initialisation: where α t is the learning rate and the truncated gradient descent consists of a single step (the extension to a larger number of steps is trivial). Given the approximated posterior q(w i ; λ i) with parameter λ i obtained from, we can calculate and optimise the generalisation upper bound in w.r.t. θ. In Bayesian statistics, the shared prior p(w i ; θ) represents a modelling assumption, and the variational task-specific posterior q(w i ; λ i) is a flexible function that can be adjusted to achieve a good trade-off between performance and complexity. In general, p(w i ; θ) and q(w i ; λ i) can be modelled using two general types of probabilistic models: prescribed and implicit . For example, Amortised Meta-learner is a prescribed approach where both distributions are assumed to be diagonal Gaussians. In this paper, we present a more expressive way of implicitly modelling the shared prior and task-specific posterior. Both distributions p(w i ; θ) and q(w i ; λ i) are now defined at a more fundamental level whereby data is generated through a stochastic mechanism without specifying parametric distributions. We use a parameterised model (i.e., a generator G represented by a deep neural network) to model the sample generation from the prior and posterior: where p(z) is usually denoted by a Gaussian model N (0, I) or a uniform model U. Due to the nature of implicit models, the KL divergence term in, in particular the density ratio q(w i ; λ i) /p(w i ; θ), cannot be evaluated either analytically or symbolically. We, therefore, propose to employ the probabilistic classification approach (, Chapter 4) to estimate the KL divergence term. We use a parameterised model -a discriminator D represented by a deep neural network -as a classifier to distinguish different w i sampled from the prior p(w i ; θ) (label 1) or the posterior q(w i ; λ i) (label 0). The objective function to train the discriminator D can be written as: where ω i is the parameters of D for task T i. Given the discriminator D, the KL divergence term in can be estimated as: where z (l) ∼ p(z), L t is the number of Monte Carlo samples, and V (., ω i) is the output of the discriminator D without sigmoid activation. The variational-free energy in can, therefore, be rewritten as: One problem that arises when estimating the loss in is how to obtain the local optimal parameters ω * i for the discriminator D. One simple approach is to generate several model parameters w i from the prior p(w i ; θ) and posterior q(w i ; λ i) following to train D(.; ω i) by optimising the cost in. 
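A minimal PyTorch sketch of the probabilistic-classification estimate described above: an implicit generator maps noise to weight samples, and a discriminator trained to separate prior samples (label 1) from posterior samples (label 0) supplies the log density ratio through its pre-sigmoid output V(·). Layer sizes, dimensions and function names are illustrative assumptions (loosely following the appendix architectures), not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Implicit model: maps noise z ~ p(z) to a weight sample w = G(z; .)."""
    def __init__(self, z_dim=100, w_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1024), nn.ReLU(),
                                 nn.Linear(1024, w_dim))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Classifier separating prior samples (label 1) from posterior samples (label 0)."""
    def __init__(self, w_dim=256, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(w_dim, 512), nn.ReLU(),
                                 nn.Linear(512, z_dim), nn.ReLU(),
                                 nn.Linear(z_dim, 1))

    def forward(self, w):                      # V(w) = pre-sigmoid logit
        return self.net(w).squeeze(-1)

def discriminator_loss(D, w_prior, w_post):
    """Logistic-regression loss for training D: prior samples -> 1, posterior samples -> 0."""
    logits = torch.cat([D(w_prior), D(w_post)])
    labels = torch.cat([torch.ones(len(w_prior)), torch.zeros(len(w_post))])
    return F.binary_cross_entropy_with_logits(logits, labels)

def kl_estimate(D, w_post):
    """Monte Carlo estimate of KL(q || p): with D near-optimal,
    log(q(w)/p(w)) = -V(w), averaged over samples w drawn from the posterior."""
    return (-D(w_post)).mean()
```

In the variational free energy, the analytic KL term is then replaced by kl_estimate evaluated on posterior samples w = G(z; λ_i), while the prior samples w = G(z; θ) enter only through the discriminator's training loss.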
The downside is the significant increase in training time and memory usage to store the computational graph to later be used for minimising the upper-bound in w.r.t. θ. To overcome this limitation, we propose to meta-learn ω i using MAML . In this scenario, we define ω 0 as the meta-parameters (or initialisation) of ω i. Within each task, we initialise ω i at ω 0 and use the generated w i from as training data. This approach leads to our proposed algorithm, named Statistical Implicit Bayesian Meta-Learning (SImBa), shown in Algorithm 1. Our assumption here is that the discriminator can provide an optimal estimate of the KL divergence term as shown in. This strong theoretical property only holds when the discriminator model is correctly-specified (, Remark 4.7). To this end, we employ the universal approximation theorem to model the discriminator as a feed-forward fully connected neural network. We expect that under this modelling approach, the discriminator model is approximately correctly-specified. Output: meta-parameters θ of the shared prior p(w i ; θ), and discriminator meta-parameters ω 0 1: initialise θ and ω 0 2: while θ not converged do repeat step 7 to calculate discriminator loss L D (ω i) end for 15: Another approach to estimate the KL divergence term in is to use a lower bound of fdivergence . There is a difference between the lower bound approach and the probabilistic classification presented in this subsection. In the former approach, the lower bound of the KL divergence is maximised to tighten the bound. In the latter approach, a discriminator is trained to minimise the logistic regression loss to estimate the ratio q(w i ; λ i) /p(w i ; θ), and use Monte Carlo sampling to approximate the KL divergence of interest. One potential drawback of the implicit modelling used in this paper is the curse of dimensionality, ing in an expensive computation during training. This is an active research question when dealing with generative models in general. This issue can be addressed by encoding the highdimensional data, such as images, to a feature embedding space by supervised-learning on the same training data set . This strategy reduces the dimension of the input space, leading to smaller generator and discriminator models. The trade-off lies in the possibility of losing relevant information that can affect the performance on held-out tasks. It is also worthy noting that our proposed method is easier to train than prior Bayesian few-shot meta-learning ) because we no longer need to estimate the weighting factor of the KL divergence term in. The trade-off of our approach lies in the need to set the significance level δ, but tuning δ is arguably more intuitive than estimating the correct weighting factor for the KL divergence term. We evaluate SImBa in both few-shot regression and classification problems. We also compare to prior state-of-art meta-learning methods to show the strengths and weaknesses of SImBa. The experiment in this subsection is a multi-modal task distribution where half of the data is generated from sinusoidal functions, while the other half is from linear functions. The details of the experimental setup and additional visualisation are presented in Appendix B. The in Figure 2 (leftmost and middle graphs) show that SImBa is able to vary the prediction variance, especially when there is more uncertainty in the training data, while MAML can only output a single value at each data point. 
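A rough sketch of one SImBa meta-update (Algorithm 1 above) is given below. The helper callables stand in for the quantities discussed in the text and are assumptions, as is the exact constant structure of the complexity regulariser; this is a sketch of the control flow, not the paper's implementation.

```python
import math
import torch

# Assumed callables:
#   vfe_fn(lam, theta, omega, data) -> estimated variational free energy
#   disc_loss_fn(omega, theta, lam) -> discriminator logistic loss
#   kl_fn(lam, theta, omega)        -> discriminator-based KL(q || p) estimate
#   nll_fn(lam, data)               -> negative log-likelihood under q(. ; lam)
def simba_meta_objective(theta, omega0, tasks, vfe_fn, disc_loss_fn, kl_fn, nll_fn,
                         alpha_t=1e-3, gamma_t=1e-3, delta=0.1):
    total, T = 0.0, len(tasks)
    for support, query in tasks:
        m = len(query)
        # Task posterior: one inner gradient step on the VFE, initialised at theta.
        lam0 = [p.clone() for p in theta]
        g = torch.autograd.grad(vfe_fn(lam0, theta, omega0, support), lam0, create_graph=True)
        lam = [p - alpha_t * gi for p, gi in zip(lam0, g)]
        # Task discriminator: one inner step from the meta-initialisation omega0.
        om0 = [p.clone() for p in omega0]
        gd = torch.autograd.grad(disc_loss_fn(om0, theta, lam), om0, create_graph=True)
        omega_i = [p - gamma_t * gi for p, gi in zip(om0, gd)]
        # Outer objective: query-set loss plus a PAC-Bayes-style complexity term
        # (the exact constants of the regulariser are an assumption here).
        reg = torch.sqrt((kl_fn(lam, theta, omega_i) + math.log(m * T / delta)) / (2 * (m - 1)))
        total = total + nll_fn(lam, query) + reg
    return total / T   # minimised w.r.t. theta and omega0 with Adam
```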
To further evaluate the predictive uncertainty, we employ the reliability diagram based on the quantile calibration for regression . The reliability diagram shows a correlation between predicted and actual probability. A perfect calibrated model will have its predicted probability equal to the actual probability, and hence, align well with the diagonal y = x. Figure 2 (Right) shows the for SImBa and some published meta-learning methods. As expected, Bayesian metalearning approaches, and in particular, Amortised Meta-learner, which relies on diagonal Gaussian distributions, are better calibrated than MAML -a deterministic approach. However, the averaged slope of the Amortised Meta-learner correlation curve is quite small, implying that its predicted probability is peaked at the mean of the ground-truth distribution with small covariances. In contrast, SImBa employs a much richer variational distribution, and therefore, ing in a model with better calibration. We evaluate SImBa on the N -way k-shot setting, where a meta learner is trained on many related tasks containing N classes with k examples per class. We use the train-test split that consists of 64 classes for training, 16 for validation, and 20 for testing . Please refer to Appendix C for the details of the model used. Although we target the estimation of model uncertainty, we also present the accuracy of SImBa against the state of the art on mini-ImageNet . The in Table 1 shows that SImBa achieves state-of-the-art in 1-shot setting when the base model is the 4-layer convolutional neural network (CNN) , and in 5-shot setting when different network architecture is used. We also show in Appendix D that generators with larger networks tend to classify better. Similar to the experiment for regression, we use reliability diagrams to evaluate the predictive uncertainty. The reliability diagrams show how well calibrated a model is when testing across many unseen tasks. A perfectly calibrated model will have its values overlapped with the identity function y = x, indicating that the probability associated with the label prediction is the same as the true probability. Figures 3a and 3b show the of SImBa and other Bayesian meta-learning methods. Visually, the model trained with SImBa shows better calibration than the ones trained with MAML and PLATIPUS, while being competitive to Amortised Meta-learner. To further evaluate, we compute the expected calibration error (ECE) and maximum calibration error (MCE) of the models trained with these methods. The plotted in Figure 3c show that the model trained with SImBa has smaller ECE and MCE compared to MAML and PLATIPUS. SImBa also has lower ECE and competitive MCE compared to Amortised Metalearner, but notice that Amortised Meta-learner has a worse classification than SImBa, as shown in Table 1. Table 1: The few-shot 5-way classification accuracy (in percentage, with 95% confidence interval) of SImBa averaged over 600 mini-ImageNet tasks are competitive to the state-of-the-art methods. SImBa outperforms other prior methods in 1-shot setting when using the standard 4-layer CNN, and 5-shot setting when using non-standard network architectures. 
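For concreteness, the two calibration metrics cited here can be computed from per-example confidences and correctness with equal-width bins; the sketch below follows the standard definitions, and the bin count is an arbitrary choice rather than a value taken from the paper.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Expected and maximum calibration error from predicted confidences
    and 0/1 correctness indicators, using equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap          # bin gap weighted by bin frequency
        mce = max(mce, gap)               # worst-case bin gap
    return ece, mce
```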
Matching nets 43.56 ± 0.84 55.31 ± 0.73 Meta-learner LSTM 43.44 ± 0.77 60.60 ± 0.71 MAML 48.70 ± 1.84 63.15 ± 0.91 Prototypical nets 1 49.42 ± 0.78 68.20 ± 0.66 LLAMA 49.40 ± 1.83 PLATIPUS 50.13 ± 1.86 Amortised ML 45.00 ± 0.60 SImBa 51.01 ± 0.31 63.94 ± 0.43 Relation nets 50.44 ± 0.82 65.32 ± 0.70 VERSA 53.40 ± 1.82 67.37 ± 0.86 SNAIL 55.71 ± 0.99 68.88 ± 0.92 adaResNet 56.88 ± 0.62 71.94 ± 0.57 TADAM 58.50 ± 0.30 76.70 ± 0.30 LEO 61.76 ± 0.08 77.59 ± 0.12 LGM-Net 69. We introduce and formulate a new Bayesian algorithm for few-shot meta-learning. The proposed algorithm, SImBa, is based on PAC-Bayes framework which theoretically guarantees prediction generalisation on unseen tasks. In addition, the proposed method employs a generative approach that implicitly models the shared prior p(w i ; θ) and task-specific posterior q(w i ; λ i), ing in more expressive variational approximation compared to the usual diagonal Gaussian methods, such as PLATIPUS or Amortised Meta-learner . The uncertainty, in the form of the learnt implicit distributions, can introduce more variability into the decision made by the model, ing in well-calibrated and highly-accurate prediction. The algorithm can be combined with different base models that are trainable with gradient-based optimisation, and is applicable in regression and classification. We demonstrate that the algorithm can make reasonable predictions about unseen data in a multi-modal 5-shot learning regression problem, and achieve state-of-the-art calibration and classification with on few-shot 5-way tasks on mini-ImageNet data set. First, we present the two auxiliary lemmas that helps to prove Theorem 2. Lemma 1. For i = 1: n, if X i and Y i are random variables, then: Proof. The proof is quite direct: Hence, the proof. Lemma 2. For n events A i with i = 1: n, the following holds: Proof. Proof can be done by induction. For n = 2: Suppose that it is true for case n: We prove that this is also true for case (n + 1): It is, therefore, true for (n + 1), and hence, the proof. Secondly, we apply the PAC-Bayes bound in Theorem 1 on the task i to obtain an upper-bound for a single task i shown in Corollary 1. Corollary 1. For a single task T i in Eq. and δ i ∈, the following holds: Finally, we can employ Lemmas 1 and 2 combined with Corollary 1 to derive the novel upper-bound for few-shot meta-learning setting. Theorem 2 (PAC-Bayes bound for few-shot meta-learning in). For the general error of few-shot meta-learning in, the following holds: Proof. Applying the inequality in Lemma 1 by replacing where are defined at Eqs., and, respectively. Applying Lemma 2 the right hand side term of Ineq. gives: Applying the transitive property for Ineqs., and Corollary 1, and setting δ i = δ/T prove the theorem. The experiment is carried out with half of the data being generated from sinusoidal functions, while the other half from linear functions. The amplitude and phase of the sinusoidal functions are uniformly sampled from [0.1, 5] and [0, π], respectively, while the slope and intercept of the lines are sampled from [-3, 3]. Data is uniformly generated from [-5, 5], and the corresponding label is added a zero-mean Gaussian noise with a standard deviation of 0.3. Each task consists of 5 data points used for training (|Y i | = 15). The base model used in the regression experiment is a three-hidden fully connected layer neural network. Each hidden layer has 100 hidden units (1 → 40 → 40 → 40 → 1), followed by tanh activation. No batch normalisation is used. 
The generator is a fully connected network with two hidden layers consisting of 256 and 1024 units, respectively (dim(z) → 256 → 1024 → dim(w i)). The discriminator is also fully connected (dim(w i) → 512 → dim(z) → 1). These networks are activated by ReLU, except the last layer of the discriminator is activated by sigmoid function. No batch normalisation is used across these two networks. The variational parameters λ i and ω i are estimated by performing five gradient updates with learning rate α t = 0.001 and γ t = 0.001. The meta-parameters θ and the meta-parameter of the discriminator ω 0 are obtained with Adam with fixed step size α v = 10 −4 and γ v = 10 −5. At the beginning of training, we clip the gradient when updating λ i with a value of 10, and then gradually increase the clipping value. After 50,000 tasks, we remove the gradient clipping and continue to train until convergence. i | = 15N ). The latent noise z is a 100-dimensional vector sampled from a uniform distribution U. Adam optimiser is employed to optimise both θ and ω 0. Please refer to Table 2 for other hyper-parameters used. This model corresponds to the top part of Table 1. All input images are down-sampled to 84-by-84 pixels before performing experiments to be consistent with prior few-shot meta-learning works. The base model is a 4-block CNN, where each block consists of 32 filters with a size of 3-by-3, followed by a batch normalisation and a ReLU activation function. The generator is a 2-hiddenlayer fully connected network (dim(z) → 256 → 1024 → dim(w i)), where each layer is activated by ReLU without batch normalisation. The discriminator is also a fully connected network (dim(w i) → 1024 → 256 → dim(z) → 1) with ReLU activation and without batch normalisation (the last activation function is a sigmoid). This corresponds to the bottom part of Table 1. Here, we employ the features extracted from (, Section 4.2.2) as the encoding of the input images. The training for the feature embedding consists of 3 steps. First, raw input images are down-sampled to 80-by-80 pixels. Second, a wide residual neural network WRN-28-10 is trained with data and labels from the 64 classes of the training set. Finally, the intermediate features of 640 dimensions at layer 21 are chosen as the embedding features used for our classification experiments. The base model used in this experiment is a fully connected network with 1 hidden layer that consists of 128 hidden units (640 → 128 → N) followed by ReLU activation and batch normalisation. The generator model is constructed as a 1-hidden layer fully connected network with 512 hidden units, followed by ReLU without batch normalisation. The discriminator is also a fully connected network (dim(w i) → 512 → dim(z) → 1) with ReLU without batch normalisation. To study the effect of network architecture on the classification performance presented in Table 1, we repeat the classification experiment with the same setup, but different base networks. We vary the number of hidden units in the base network from 16 to 128, and also increase the size the the hidden layer of the generator from 256 to 512. The in Table 3 show that the larger the base network and the generator are, the better the classification accuracy.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkgYJaEFwS
Bayesian meta-learning using PAC-Bayes framework and implicit prior distributions
As the area of Explainable AI (XAI), and Explainable AI Planning (XAIP), matures, the ability for agents to generate and curate explanations will likewise grow. We propose a new challenge area in the form of rebellious and deceptive explanations. We discuss how these explanations might be generated and then briefly discuss evaluation criteria. Explanations as a research area in AI (XAI) has been around for several decades BID7 BID5 BID10 BID45 BID12 BID40 BID44 BID28. It has additionally gained momentum recently as evidenced by the increasing number of workshops and special tracks covering it in various conferences (e.g., VIS-xAI, FEAPAI4Fin, XAIP, XAI, OXAI, MAKE-eXAI, ICCBR-19 Focus area).While still growing in use, there have been some approaches to formalizing XAI. BID11 stated that anything calling itself XAI should address the following questions:• Why did the agent do that and not something else?• When does the agent succeed and when does it fail?• When can I trust the agent?However, less thought out is the idea of explanations that are deceptive or rebellious in nature. These forms of explanation can be an entirely new area of discussion and use for certain autonomous agents. The study of deception and rebellion are both rich fields, and many aspects of both that are studied in civilian and military capacities. For example, the area of deception detection works on finding ways to detect inconsistencies BID41 BID22 BID2. BID17 discuss a number of ways why deception is an important topic for autonomous agents. Studies of rebellion and resistance have investigated how, why, when it does, and doesn't, happen (Martí and Copyright c 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. BID24 BID33. The use of both has also been studied BID34 BID1 BID21 BID18 BID30.The idea of pairing deception and rebellion with explanations may not be intuitive initially. However, in addition to being areas of rich study, deception and rebellion offer key conditions that are of interest to agent reasoning. Predominately, it requires multiple actors (i.e., An actor deceives another actor, or an actor rebels against a coordinator). Additionally, there needs to be some sort of conflict or misalignment between the actors. Either something needs to be in contention for an actor to rebel, or something needs to be in conflict for the actor to use deception. b Rebellion in agents has been a growing area of interest BID9 BID3 BID8. This area is focused on finding models in which agents can rebel from directives given in certain circumstances. This can include having more upto-date knowledge that would affect the plan, finding opportunities to exploit but may be off-mission, or solving problems or roadblocks before they become an issue even if it is off-mission. discuss three ways in which rebellion could manifest in agents. The expression of a rebellion can consist of either an explicit or implicit act. The focus is either inward or outward facing. Lastly, the interaction initiation can either be reactive or proactive. Deception in agents has been progressing over the last decade, with many discussions on formalizing deception. The majority of this formalism is on the topic of lying BID42 BID38 BID43. There has also been inroads for more encompassing deception as described by BID39 and BID37. Of interest here, BID37 defined Quantitative & Qualitative Maxims for Dishonesty as the following maxims:1. 
Lie, Bullshit (BS), or withhold information as little as possible to achieve your objective.2. Never lie if you can achieve your objective by BS.3. Never lie nor BS if you can achieve your objective by withholding Information.4. Never lie, BS, nor withhold information if you can achieve your objective with a half-truth. A particular topic that has received attention is deceptive, or dishonest, agents in negotiations BID31 BID47.With these concepts in mind, we will pursue research to answer the following:What kind of reasoning models are required to generate explanations of a deceptive or rebellious nature? Research into rebellion and deception have been studied for decades in other fields. We summarize some of these studies in this section, as they will form the foundation and context of our approach. Dissent assumes a challenge to some system of power or belief BID26. The author goes further to differentiate between dissent and rebellion. Martin claims that a basic distinction between dissent and rebellion is that dissenters still believe in the process that has been established whereas rebels do not. A number of real-world studies on dissent and whistleblowing in organizations showcase the importance of this topic and area BID15 BID19 BID26 BID32 BID25. Many of these discuss the outcomes and towards both the actor and the agent in these situations. BID25 discuss organizational responses to whistle-blowing and dissent. Manager reprisals may include building a damaging record towards them, threats against them, isolation, prosecution, or setting them up for failure at their job. In a number of instances, it seems that giving dissent, or becoming a whistle-blower, can quickly turn into an adversarial or antagonistic situation. Closely tied to dissent is the idea of cynicism. BID23 defines it as: a belief that there is a gap between desired and observed organizational identity; a negative affect toward the organization or organizational change (strategy); and tendencies to disparaging and/or critical behaviors toward the organization that are consistent with those beliefs and affect. They additionally give examples of cynicism as: pessimism, emotional/narrative expressions or outbursts, frustration, irony, accusations, neglect, negative coping behaviors and expressions, and aggression. Observed actions of resistance to change have been studied in a number of ways. Once such study BID46 noted that there was a tendency towards "informal or mundane" types of resistance compared to an open, direct, and explicit form of objection. When faced with managerial initiatives, the following expressions of resistance were noted: "careful carelessness", humor, cynicism and skepticism, nostalgic talk (i.e., the "Good ol' Days"), alternative articulations of self-hood, and simulation of productivity. While these expressions of resistance were present, workers were also camouflaging this dissent with a goodhumored appearance. It was noted that hiding dissent allowed for behind-the-scenes inaction to stifle any new directives and also avoided many conversations that workers deemed futile. Similar studies include BID14 BID27 BID35.Along with giving a number of examples of resistance, BID24 While overt forms of resistance usually did not work, the stories of that resistance remained and were used later to continue forms of resistance. There have been several studies on how deception in society works. 
A very good resource is (Whiten and Byrne 1988) which defines a number of deceptive actions observed among primates in the wild. They are categorized as either concealment, distraction, use of a tool, or use of another agent. Examples include hiding things from sight, looking at things to have others avoid looking at something else, and getting other agents to take the blame for something. In addition to primates, there have been deceptive studies for cephalopods BID4 ) and dolphins BID16.In studies of deception in warfare BID34, there have been noticeable benefits to using deception. This includes minimizing the amount of resources for a task or ensuring the opponents mis-allocate their resources. Also, most of the example deceptive acts required an extended duration of time to be executed and prove successful. This included preparations for and performing the deceptive actions. A good case study from (Dunin-Keplicz and Verbrugge 2011) describes an ecological disaster scenario. We present it now to provide context for explanations discussed in Section 4. In the ecological disaster scenario, two poisons have breached containment in a large area. Each poison is highly dangerous on its own. However, when combined they could be explosive. Robots on the ground have the ability to neutralize specific poisons in small areas. An unmanned air vehicle (UAV) can survey large areas to identify high concentrations of poison. A coordinator can issue commands to either the UAV or the ground robots. A robot can gauge the type and quantity of poisons in the small area around it, and can move across the ground. The UAV can scan larger areas to identify large concentrations of individual poisons, though it reports only the highest quantity. Additionally, the UAV cannot scan areas that are obscured by ceilings or coverings. The coordinator receives the information from the UAV and the Robots, but does not have a view of their own. Examples of rebellion here could be robots not following commands to enter areas that would explode or the UAV deciding to survey a different area to help robots on the ground compared to an area that the coordinator instructed the UAV to survey. Let us now consider explanations that are rebellious or deceptive within the context given in Section 3. Some of the recurring reasons for generating these explanations include "makes it simpler" or "avoids a larger conversation". These can limit conversations but can also cause misinterpretations. It also has the ability to avoid conversation bottlenecks, letting agents continue to perform other actions. During a disaster, it may come to pass that an agent with the ability to administer first-aid or medical attention, perhaps they are equipped with air masks or oxygen, encounters a victim who will only let them near if they do not inform any law enforcement of their position. In this instance, the agent would administer the first aid or medical treatment and, when asked by the Coordinator to explain their activities, would say they are performing a different activity or in a different location. A robot could be traversing an area trying to help people evacuate or administer aid and they have informed the Coordinator they are performing this task. However, suppose there is a person who wishes to remain anonymous or would refuse help. In this case, the robot could explain its actions but leave out details that would identify the person in later reports or debriefs. 
This would help save the person and increase that victim's trust in the robot to help them out of the area. A version of an explanation with a half-truth could be as follows: a medical agent has found and is administering aid to a victim, however the victim is too far gone. Keeping the victim calm with a half-truth or "white lie" explanation would be beneficial to the victim. An example of a protest-based explanation could come from the following contingency. An exploratory or search agent has encountered an area of the environment that it deems too hazardous to continue in. The Coordinator asks why it isn't moving forward anymore. The agent responds with,"I will not move forward until the hazardous area has been secured and is safe to pass through. " As discussed in Section 2.3, cynicism is a bit odd. It is usually used as a form of soft-resistance. The agent still performs the action or command, but may not do it optimally. An example could be that the Coordinator assigns a robot that has a full amount of neutralizing and medical equipment to survey an area. This might take a while for the robot to execute, so the Coordinator might ask why progress is slow, and the explanation could be " If I could fly this would go much quicker." Alternatively, asking to release all of its equipment so that it can be lighter to perform the survey is another example. For this instance, the agent is in some sort of situation in which it will not continue an objective given by the Coordinator. Perhaps the Coordinator has tasked an agent with neutralizing an area under a fallen concrete roof. However the agent has noticed a victim in the area being treated by another agent. In that instance the agent could respond to the Coordinator, "I will not neutralize that area, there is a victim in the vicinity. Please assign me a different objective." In order to generate possible deceptive explanations as suggested above an agent would require a few things in its models to properly generate a model. An agent would require an internal model of the domain so that it can reason about possible actions, tasks, and goals. It would also require an internal model of the external agent's domain model. This is required so that when generating the "deceptive" aspect, it can be validated against the presumed model of that agent. In addition to these models, a few conditions should be met as well. Notably, a discrepancy must be noticed between the external agent's model and the internal model in relation to the query asked. There needs to be a specific condition or contingency in which the truth would not maximize an overall objective benefit for the agent in question. Likewise, rebellious explanations require similar things such as internal models for both the domain and objectives along with noticing a discrepancy between the objectives, the domain, and the agent's internal model. Of great interest is the work in model reconciliation BID6. This is focused on maintaining (at least) two separate models, an internal one to the agent, and an external one for someone interacting with the agent. The idea is for the agent to reconcile any differences between these two versions to formulate an explanation. This approach is promising in regards to expanding it towards rebel or deceptive tasks in explanation. In the case of either deceptive or rebellious explanations, a discrepancy is required. This has been an active research focus. survey work on discrepancy detection. 
Useful to this thread of research, BID29 ) discusses it in the context of goal reasoning. In terms of reasoning models, BID36 ) discuss some interesting concepts in relation to Goal Networks. A life cycle for goals is also discussed. Combining these goal networks and the temporal nature of goal life cycles, a goal timeline develops. This timeline structure can represent the reasoning needed for some of the explanation models once discrepancies have been detected. Utilizing models of the world that are not in the explainer's original model is both challenging and novel to pursue for several reasons. It requires an agent to distinguish between different viewpoints of explanations. Introduces reasoning over viable explanations that can be generated. Requires a conversation concerning the ethics of deciding when an agent can decide to deceive. Finally, it opens up the area of XAI to new sets of scenarios -namely those that are deceptive or rebellious. To facilitate the development of deceptive or rebellious explanations, we will need a way to evaluate them. We propose a few areas that may be suitable for such testing. One such testing ground is the RoboCup Rescue. This is a popular disaster simulation BID20 that can be leveraged to simulate examples similar to those given in Section 3. Various games and game simulations may prove useful to test for these explanations. Some game options include Minecraft, One Night Ultimate Werewolf, Secret Hitler, Clue, and Diplomacy. Other relevant domains may include those that involve unmanned air and underwater vehicles. These vehicles require a large amount of autonomy and can be utilized in areas where discrepancies between an operator's situation awareness and the vehicle's belief state differ dramatically. Along with testing simulations, we can also look at measures of explanation effectiveness. Some of these measures can include clarity, timeliness, or correctness. Did the explanation answer the query? How easy was the explanation to understand? Was the time it took to respond seen as adequate? Is the user's attitude toward the agent lower or higher given this form of explanation?
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Bkxj7a2Q5E
Position paper proposing rebellious and deceptive explanations for agents.
We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. In general, our superstructure is a tree structure of multiple super latent variables and it is automatically learned from data. When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each being given by one super latent variable. This is desirable because high dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways. Clustering is a fundamental task in unsupervised machine learning, and it is central to many datadriven application domains. Cluster analysis partitions all the data into disjoint groups, and one can understand the structure of the data by examining examples in each group. Many clustering methods have been proposed in the literature BID0, such as k-means BID18, Gaussian mixture models BID5 and spectral clustering BID30. Conventional clustering methods are generally applied directly on the original data space. However, it is challenging to perform cluster analysis on high dimensional and unstructured data BID26, such as images. It is not only because the dimensionality is high, but also because the original data space is too complex to interpret, e.g. there are semantic gaps between pixel values and objects in images. Recently, deep learning based clustering methods have been proposed that simultanously learn nonlinear embeddings through deep neural networks and perform cluster analysis on the embedding space. The representation learning process learns effective high-level representations from high dimensional data and helps the cluster analysis. This is typically achieved by unsupervised deep learning methods, such as restricted Boltzmann machine (RBM) BID11, autoencoders (AE) BID28, variational autoencoders (VAE) BID16, etc. Previous deep learning based clustering methods BID33 BID10 BID14 BID34 ) assume one single partition over the data and that all attributes define that partition. In real-world applications, however, the assumptions are usually not true. High-dimensional data are often multifaceted and can be meaningfully partitioned in multiple ways based on subsets of attributes BID4. For example, a student population can be clustered in one way based on course grades and in another way based on extracurricular activities. Movie reviews can be clustered based on both sentiment (positive or negative) and genre (comedy, action, war, etc.). It is challenging to discover the multi-facet structures of data, especially for high-dimensional data. To resolve the above issues, we propose an unsupervised learning method, latent tree variational autoencoder (LTVAE) to learn latent superstructures in variational autoencoders, and simultaneously perform representation learning and structure learning. LTVAE is a generative model, where the data is assumed to be generated from latent features through neural networks, while the latent features themselves are generated from tree-structured Bayesian networks with another level of latent variables as shown in Fig. 1. Each of those latent variables defines a facet of clustering. 
The proposed method automatically selects subsets of latent features for each facet, and learns the dependency structure among different facets. This is achieved through systematic structure learning. Consequently, LTVAE is able to discover complex structures of data rather than one partition. We also propose efficient learning algorithms for LTVAE with gradient descent and Stepwise EM through message passing. The rest of the paper is organized as follows. The related works are reviewed in Section 2. We introduce the proposed method and learning algorithms in Section 3. In Section 4, we present the empirical results. The conclusion is given in Section 5. Clustering has been extensively studied in the literature in many aspects BID0. More complex clustering methods related to structure learning using Bayesian nonparametrics have been proposed, such as the Dirichlet Process and the Hierarchical Dirichlet Process (HDP). However, those methods are combined with conventional clustering techniques that are applied directly on the raw data. Recently, deep learning based clustering methods have drawn more and more attention. A simple two-stage approach is to first learn low-dimensional embeddings using unsupervised feature learning methods, and then perform cluster analysis on the embeddings. However, without any supervision, the learned representations do not necessarily reveal the true cluster structure of the data. DEC BID33 is a method that simultaneously learns feature representations and cluster assignments through deep autoencoders. It gradually improves the clustering by driving the deep network to learn a better mapping. Improved Deep Embedded Clustering BID10 improves DEC by keeping the decoder network and adding a reconstruction loss to the original clustering loss in DEC. Variational deep embedding BID14 is a generative method that models the data generative process using a Gaussian mixture model combined with a VAE, and also performs joint learning of representations and clustering. Similarly, GMVAE BID7 performs joint learning of a GMM and a VAE, but instead generates the mixture components through neural networks. Deep clustering network (DCN) BID34 is another method that jointly learns an autoencoder and performs k-means clustering. These joint learning methods consistently achieve better clustering results than conventional ones. The method proposed in BID35 uses convolutional neural networks and jointly learns the representations and clustering in a recurrent framework. All these methods assume flat partitions over the data, and do not address the structure learning issue. An exception is the hierarchical nonparametric variational autoencoder proposed in BID9. It uses the nCRP as the prior for the VAE to allow an infinitely deep and branching tree hierarchy, and focuses on learning a hierarchy of concepts. However, it still produces one partition over the data, only that the partitions in upper levels are more general while those in lower levels are more fine-grained. Different from it, our work focuses on multiple facets of clustering; for example, the model could make one partition based on the identity of subjects, and another partition based on pose. In this section, we present the proposed latent tree variational autoencoder and the learning algorithms for joint representation learning and structure learning for multidimensional clustering. Deep generative models assume that data x is generated from a latent continuous variable z through some random process.
The process consists of two steps: a value z is generated from some prior distribution p(z); the observation x is generated from the conditional distribution p_θ(x|z), which is parameterized through deep neural networks. Thus, it defines the joint distribution between the observation x and the latent variable z: p(x, z) = p(z) p_θ(x|z). (Figure 2: Inference and gradient through message passing. Solid arrows denote collecting messages, and dashed arrows denote distributing messages.) This process is hidden from our view, and we learn it by maximizing the marginal loglikelihood p(x) over the parameters θ and latent variable z from data. After the learning, the latent variable z can be regarded as the deep representation of x since it captures the most relevant information of x. Thus, the learning process is also called representation learning. In order to learn the latent structure of z, for example a multidimensional cluster structure, we introduce a set of discrete latent variables Y on top of z. This essentially forms a Bayesian network. And if we restrict the network to be tree-structured, z and Y together form a latent tree model BID37 BID21 BID19 BID20 BID38 with z being the observed variables and Y being the latent variables. For multidimensional clustering, each latent variable Y is taken to be a discrete variable, where each discrete state y of Y defines a cluster. Each latent variable Y thus defines a facet partition over the data based on a subset of attributes, and multiple Y's define multiple facets. Given a value y of Y, z_b follows a conditional Gaussian distribution P(z_b|y) = N(µ_y, Σ_y) with mean vector µ_y and covariance matrix Σ_y. Thus, each z_b and its parent constitute a Gaussian mixture model (GMM). Suppose the parent of a node is denoted as π(·); the marginal distribution of z is then p_S(z; Θ) = Σ_y Π_b P(z_b | π(z_b)) Π_Y P(Y | π(Y)), which sums over all possible combinations of Y states. As a matter of fact, a GMM is a Gaussian LTM that has only one latent variable connecting to all observed variables. Let the latent structure of Y be S, defining the number of latent variables in Y, the number of discrete states in each variable Y, and the connectivity structure among all variables in z and Y. And let the parameters for all conditional probabilities in the latent structure be Θ. Both the latent structure S and the latent parameters Θ are unknown. We aim to jointly learn data representations and the latent structure. The proposed LTVAE model is shown in Fig. 1. The latent structure S is automatically learned from data and will be discussed in a later section. Due to the existence of the generation network, the inference of the model is intractable. Instead, we do amortized variational inference for the latent variable z by introducing an inference network BID16 and define an approximate posterior q_φ(z|x). The evidence lower bound (ELBO) L_ELBO of the marginal loglikelihood of the data given (S, Θ) is: L_ELBO(x) = E_{q_φ(z|x)}[log p_θ(x|z)] + E_{q_φ(z|x)}[log Σ_y p_S(z, y; Θ)] + H[q_φ(z|x)], where log Σ_y p_S(z, y; Θ) is the marginal loglikelihood of the latent variable z under the latent tree model, and H[·] is the entropy. The conditional generative distribution p_θ(x|z) could be a Gaussian distribution if the input data is real-valued, or a Bernoulli distribution if binary, parameterized by the generation network. Using Monte Carlo sampling, the ELBO can be asymptotically estimated by L_ELBO(x) ≈ (1/M) Σ_{i=1}^{M} [log p_θ(x|z^(i)) + log Σ_y p_S(z^(i), y; Θ)] + H[q_φ(z|x)], where z^(i) ∼ q_φ(z|x). The entropy term H[q_φ(z|x)] can be computed analytically if we choose the form of q_φ(z|x) to be a Gaussian with diagonal covariance, N(z | µ_x, diag(σ_x²)), in which case H[q_φ(z|x)] = (J/2) log(2πe) + (1/2) Σ_{j=1}^{J} log σ_{x,j}², where J is the dimensionality of z.
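For concreteness, the following is a minimal PyTorch sketch (not the authors' code) of how the single-sample Monte Carlo ELBO estimate above could be assembled. The function latent_tree_log_marginal, standing in for the message-passing computation of log Σ_y p_S(z, y; Θ), is a hypothetical placeholder, and the Bernoulli decoder and diagonal-Gaussian encoder are assumptions consistent with the text.

```python
import math
import torch

def gaussian_entropy(logvar):
    # Analytic entropy of the diagonal Gaussian q_phi(z|x) = N(mu, diag(exp(logvar))):
    # H[q] = (J/2) * log(2*pi*e) + (1/2) * sum_j logvar_j
    J = logvar.shape[-1]
    return 0.5 * J * math.log(2 * math.pi * math.e) + 0.5 * logvar.sum(-1)

def elbo_estimate(x, encoder, decoder, latent_tree_log_marginal):
    # Encoder outputs the mean and log-variance of q_phi(z|x).
    mu, logvar = encoder(x)
    # Single-sample Monte Carlo estimate using the reparameterization trick.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # Reconstruction term E_q[log p_theta(x|z)], assuming a Bernoulli decoder.
    logits = decoder(z)
    log_px_given_z = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, x, reduction="none").sum(-1)
    # Latent-tree prior term log sum_y p_S(z, y; Theta); assumed to be returned by
    # a message-passing routine (hypothetical placeholder, see text).
    log_pz = latent_tree_log_marginal(z)
    # Entropy term H[q_phi(z|x)], available in closed form for a diagonal Gaussian.
    return log_px_given_z + log_pz + gaussian_entropy(logvar)
```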
Furthermore, the marginal loglikelihood log Σ_y p_S(z^(i), y; Θ) can be computed efficiently through message passing. Message passing is an efficient algorithm for inference in Bayesian networks BID17 BID22. In message passing, we first build a clique tree using the factors in the defined probability density. Because of the tree structure, each z_b along with its parent forms a clique with the potential ψ(z_b, y) being the corresponding conditional distribution. This is illustrated in Fig. 2. With the sampled z^(i), we can compute the message ψ(y) by absorbing the evidence from z. During the message collection phase, the messages ψ(y) are sent towards the pivot. After receiving all messages, the pivot distributes messages back towards all z. Both the posterior of Y and the marginal loglikelihood of z^(i) can thus be computed in the final normalization step. In this section, we propose efficient learning algorithms for LTVAE through gradient descent and Stepwise EM with message passing. Given the latent tree model (S, Θ), the parameters of the neural networks can be efficiently optimized through stochastic gradient descent (SGD). However, in order to learn the model, it is important to efficiently compute the gradient of the marginal loglikelihood log p_S(z; Θ) from the latent tree model, the third term in Eq. 4. Here, we propose an efficient method to compute this gradient through message passing. Let z_b be the variables that we want to compute the gradient with respect to, and let Y_b be the parent node. The marginal loglikelihood of the full z can be written as log p_S(z; Θ) = log Σ_{y_b} p(z_b | y_b) f(y_b), where f(y_b) is the collection of all the remaining terms not containing z_b. The gradient g_{z_b} of the marginal loglikelihood log p_S(z; Θ) w.r.t. z_b can thus be computed as g_{z_b} = Σ_{y_b} p(y_b | z) ∂ log p(z_b | y_b)/∂z_b, where p(y_b|z) is the posterior probability of y_b and can be computed efficiently with message passing as described in the previous section. The detailed derivation is in Appendix E. With the efficient computation of the third term in Eq. 4 and its gradient w.r.t. z through message passing, the parameters of the inference network and generation network can be efficiently optimized through SGD. In order to jointly learn the parameters Θ of the latent tree, we propose a Stepwise EM algorithm based on mini-batches of data. Specifically, we maximize the third term in Eq. 4, i.e. the marginal loglikelihood of z under the latent tree. In the Stepwise E-step, we compute the distributions P(y, y′ | z, θ^(t−1)) and P(y | z, θ^(t−1)) for each latent node Y and its parent Y′. In the Stepwise M-step, we estimate the new parameter θ^(t). Let s(z, y) be the vector of sufficient statistics for a single data case. Let s̄ = E_{p_S(y|z;Θ)}[s(z, y)] be the expected sufficient statistics for the data case, where the expectation is w.r.t. the posterior distribution of y under the current parameters. And let µ = Σ_{i=1}^{N} s̄_i be the sum of the expected sufficient statistics. The parameter Θ is updated by moving µ towards the expected sufficient statistics of the current mini-batch with step size η, and then setting Θ^(t) to the maximizer of the complete data loglikelihood l given the updated statistics, where η is the learning rate. Each iteration of the LTVAE update is thus composed of one iteration of gradient descent for the neural network parameters and one iteration of the Stepwise EM update for the latent tree model parameters with a mini-batch of data. For the latent structure S, several aspects need to be determined: the number of latent variables, the cardinality of each latent variable, and the connectivity structure among the variables.
We aim at finding the model m* that maximizes the BIC score BID25 BID17: BIC(m|D) = log p(D | θ*, m) − (d(m)/2) log N, where θ* is the MLE of the parameters, d(m) is the number of independent parameters, and N is the number of samples. The first term is known as the likelihood term. It favors models that fit the data well. The second term is known as the penalty term. It discourages complex models. Hence, the BIC score provides a tradeoff between model fit and model complexity. To this end, we perform systematic search to find a structure with a high BIC score. We use the hill-climbing algorithm to search for m* as in BID21, and define 7 search operators: node introduction (NI) and node deletion (ND) to introduce new latent nodes and delete existing nodes, state introduction (SI) and state deletion (SD) to add a new state and delete a state for existing nodes, node relocation (NR) to change links of existing nodes, and pouching (PO) and unpouching (UP) operators to combine nodes into a single node and separate variables from a node. The structure search operators are shown in FIG1. Each operator produces a set of candidates from the existing structure, and the best candidate is picked if it improves on the previous one. To reduce the number of possible search candidates, we first perform SI, NI and PO to expand the structure and pick the best model. Then we perform NR to adjust the best model. Finally, we perform UP, ND and SD to simplify the current best structure and pick the best one. Acceleration techniques BID22 are adopted that make the algorithm efficient enough. The structure learning is performed iteratively together with the parameter learning of the neural networks. The overall learning algorithm is illustrated in Algorithm 1 (in each iteration we compute log p_S(z; Θ) and ∂ log p_S(z)/∂z from Eq. 5 and 7, compute the ELBO from Eq. 4, and update θ and φ via back-propagation and an SGD step). Starting from a pretrained model, we iteratively improve the structure and parameters of the latent tree model while learning the representations of data through the neural network in a greedy manner. Using the current structure S_t as the initial structure, we search for a better model. With the new latent tree model, we optimize for a better representation, until convergence. We first demonstrate the effectiveness of the proposed method through synthetic data. Assume that the data points have two facets Y1 and Y2, where each facet controls a subset of attributes (e.g. a two-dimensional domain) and defines one partition over the data. This four-dimensional domain z = {z1, z2, z3, z4} is a latent representation which we do not observe. What we observe is x ∈ R^100, obtained via the following non-linear transformation: x = σ(U σ(W z)), where W ∈ R^{10×4} and U ∈ R^{100×10} are matrices whose entries follow the zero-mean unit-variance i.i.d. Gaussian distribution, and σ(·) is a sigmoid function to introduce nonlinearity (a small code sketch of this process is given below). The generative model is shown in FIG2. We define two clusters in facet Y1 and two clusters in facet Y2, and generate 5,000 samples of x. Under the above generative model, recovering the two-facet structure of Y1 and Y2 and the latent z domain from the observation of x seems very challenging. All previous DNN-based methods (AE+GMM, DEC, DCN, etc.) are only able to discover one facet of clustering (i.e. one partition over the data), and none of them is applicable to such a multidimensional clustering problem. FIG2 shows the results of the proposed method. As one can see, LTVAE successfully discovers the true superstructure of Y1 and Y2.
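As an illustration of the synthetic generative process just described, the following NumPy sketch shows how such data could be generated. The specific cluster means used to define the two facets are made-up choices for the example and are not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Facet Y1 controls (z1, z2) and facet Y2 controls (z3, z4); two clusters per facet.
# The cluster means below are illustrative assumptions.
y1 = rng.integers(0, 2, size=n)
y2 = rng.integers(0, 2, size=n)
means_y1 = np.array([[-2.0, -2.0], [2.0, 2.0]])
means_y2 = np.array([[-2.0, 2.0], [2.0, -2.0]])
z = np.concatenate([means_y1[y1] + rng.normal(size=(n, 2)),
                    means_y2[y2] + rng.normal(size=(n, 2))], axis=1)   # shape (n, 4)

# Non-linear transformation x = sigmoid(U sigmoid(W z)), with W in R^{10x4} and
# U in R^{100x10} whose entries are i.i.d. standard Gaussian.
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
W = rng.normal(size=(10, 4))
U = rng.normal(size=(100, 10))
x = sigmoid(sigmoid(z @ W.T) @ U.T)   # observed data, shape (n, 100)
```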
The 2-d plot of z1 and z2 shows the separable latent space clusters under facet Y1, and it matches the ground-truth cluster assignments. Additionally, the 2-d plot of z3 and z4 shows another set of separable clusters under facet Y2, and it also matches the ground-truth cluster assignments well in the other facet. We evaluate the proposed LTVAE model on two image datasets and two other datasets, and compare it against other deep learning based clustering algorithms, including two-stage methods, AE+GMM and VAE+GMM, which first learn AE/VAE BID16 models and then construct a GMM on top of them, and joint learning methods, DEC BID33 and DCN BID34. The datasets include MNIST, STL-10, Reuters BID33 BID14 and the Heterogeneity Human Activity Recognition (HHAR) dataset. When evaluating the clustering performance, for fair comparison, we follow previous works BID33 BID34 and use the network structures of d−500−500−2000−10 for the encoder network and 10−2000−500−500−d for the decoder network for all datasets, where d is the data-space dimension, which varies among datasets. All layers are fully-connected. We follow the pretraining procedure as in BID33. We first perform greedy layer-wise pretraining in a denoising autoencoder manner, then stack all layers to form a deep autoencoder. The deep autoencoder is further finetuned to minimize the reconstruction loss. The weights of the deep autoencoder are used to initialize the weights of the encoder and decoder networks of the above methods. After the pretraining, we optimize the objectives of those methods. For DEC and DCN, we use the same hyperparameter settings as the original papers. When initializing the cluster centroids for DEC and DCN, we perform 10 random restarts and pick the result with the best objective value for k-means/GMM. For the proposed LTVAE, we use the Adam optimizer BID15 with an initial learning rate of 0.001 and a mini-batch size of 128. For Stepwise EM, we set the learning rate to be 0.01. As in Algorithm 1, we set E = 5, i.e. we update the latent tree model every 5 epochs. When optimizing the candidate models during structure search, we perform 10 random restarts and train with EM for 200 iterations. We first show that, by using the marginal loglikelihood defined by the latent tree model as the prior, LTVAE fits the data better than the conventional VAE and importance weighted autoencoders (IWAE) BID3. While alternative quantitative criteria have been proposed BID2 BID13 BID24 for generative models, the log-likelihood of held-out test data remains one of the most important measures of a generative model's performance BID16 BID3 BID32 BID9. For comparison, we approximate the true loglikelihood with L_5000 using importance sampling BID3: log p(x) ≈ L_5000 = log (1/5000) Σ_{i=1}^{5000} [p_θ(x|z^(i)) p(z^(i)) / q_φ(z^(i)|x)], where z^(i) ∼ q_φ(z|x). The results for all datasets are shown in TAB0. The proposed LTVAE obtains a higher test data loglikelihood and ELBO, implying that it can better model the underlying complex data distribution embedded in the image data. The most important feature of the proposed model is that it can perform variable selection for model-based clustering, leading to multi-facet clustering. We use the standard unsupervised evaluation metric and protocols for evaluations and comparisons to other algorithms BID36. For baseline algorithms we set the number of clusters to the number of ground-truth categories, while LTVAE automatically determines the number of facets and the latent superstructure through structure learning.
We evaluate performance with the unsupervised clustering accuracy (ACC): ACC = max_m (1/N) Σ_{i=1}^{N} 1{l_i = m(c_i)}, where l_i is the ground-truth label, c_i is the cluster assignment produced by the algorithm, and m ranges over all possible one-to-one mappings between clusters and labels. TAB1 shows the quantitative clustering results compared with previous works. With a small z dimension such as 10, LTVAE usually discovers only one facet. It can be seen that, for the MNIST dataset, LTVAE achieves a clustering accuracy of 86.32%, better than the results of the other methods. This is also the case for STL-10, Reuters and HHAR. More importantly, the proposed LTVAE does not just give one partition over the data. Instead, it explains the data in multi-faceted ways. Unlike the previous clustering experiments, for this experiment we choose the z dimension to be 20. FIG3 shows the two-facet clustering for MNIST. It can be seen that facet 1 gives a quite clean clustering over the identity of the digits, and the ten digits are well separated. On the other hand, facet 2 gives a coarser partition based on shape and pose. Note how upright "4" and "9" are similar, and how tilted "4", "7" and "9" are similar. The facet meanings are more evident in FIG3. FIG4 shows four facets discovered for the STL-10 dataset. Although it is hard to characterize precisely how the facets differ from each other, there are visible patterns. For example, the cats, monkeys and birds in facet 2 have clearly visible eyes, while this is not always true in facet 1. The deer in facet 2 are all showing their antlers/ears, while this is not true in facet 3. In facet 2 we see frontal views of cars, while in facets 1 and 3 we see side views of cars. In facet 1, each cluster consists of the same type of objects/animals. In facets 3/4, images in the same cluster do not necessarily show the same type of objects/animals; however, they have a similar overall feel. Since the structure of the data in the latent space is automatically learned through the latent tree, we can sample the data in a more structured way. One way is through ancestral sampling, where we first sample the root of the latent tree and then hierarchically sample the children variables to get z, from which the images can be generated through the generation network. The other way is to pick one component from the Gaussian mixture and sample z from that component. This produces samples from a particular cluster. FIG5 shows the samples generated in this way (caption: the digits generated by the proposed model; digits in the same row come from the same latent code of the latent tree). As can be seen, digits sampled from each component have a clear semantic meaning and belong to the same category, whereas the samples generated by a VAE do not have such structure. Conditional image generation can also be performed to alter the attributes of the same digit, as shown in Appendix B. LTVAE learns the dependencies among the latent variables Y. In general, latent variables are often correlated. For example, the social skills and academic skills of a student are generally correlated. Therefore, it is better to model this relationship to better fit the data. Experiments show that removing such dependencies in LTVAE results in inferior data loglikelihood. In this paper, for the inference network, we simply use a mean-field inference network with the same structure as the generative network BID16. However, the limited expressiveness of the mean-field inference network could restrict the learning in the generative network and the quality of the learned model BID31 BID6.
Using a faithful inference network structure as in BID31 to incorporate the dependencies among latent variables in the posterior, for example one parameterized with the masked autoencoder for distribution estimation (MADE) model BID8, could lead to a significant improvement in learning. We leave it for future investigation. In this paper, we propose an unsupervised learning method, the latent tree variational autoencoder (LTVAE), which simultaneously performs representation learning and multidimensional clustering. Different from previous deep learning based clustering methods, LTVAE learns latent embeddings from data and discovers multi-facet clustering structure based on subsets of latent features rather than one partition over the data. Experiments show that the proposed method achieves state-of-the-art clustering performance and reveals reasonable multi-facet structures of the data. For the MNIST dataset, the conditional probability between the identity facet Y1 (x-axis) and the pose facet Y2 (y-axis) is shown in Fig. 8. It can be seen that a cluster in the Y1 facet can correspond to multiple clusters in the Y2 facet due to the conditional probability, e.g. clusters 0, 4, 5, 11 and 12. However, not all clusters in the Y2 facet are possible for a given cluster in the Y1 facet. Figure 8: Conditional probability of Y1 and Y2 for the two facets of MNIST discovered by LTVAE. Here we show more results on conditional image generation. Interestingly, with LTVAE, we can change the original images by fixing the variables in some facets and sampling in other facets. For example, in MNIST we can fix the variables in the identity facet and change the pose of the digit by sampling in the pose facet. FIG6 shows the samples generated in this way. As can be seen, the pose of the input digits is changed in the samples generated by the proposed method. C Computational time: We compare the computational time of the proposed LTVAE with structure learning and that with a fixed structure. For LTVAE with a fixed structure, we fix the structure of the latent tree model to be a single Y connecting to all z's, in which each z node consists of a single z variable.
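For reference, the importance-sampling approximation of the held-out test loglikelihood (L_5000) used in the comparison with the VAE and IWAE above can be sketched as follows. This is a generic implementation of the estimator, not the authors' code; the encoder, decoder and log_prior functions are hypothetical placeholders (log_prior would be log p_S(z; Θ) for LTVAE, or a standard normal log-density for a plain VAE), and a Bernoulli decoder is assumed.

```python
import math
import torch

def importance_sampled_loglik(x, encoder, decoder, log_prior, n_samples=5000):
    # log p(x) ~= log (1/S) sum_i p_theta(x|z_i) p(z_i) / q_phi(z_i|x),  z_i ~ q_phi(z|x)
    mu, logvar = encoder(x)
    std = torch.exp(0.5 * logvar)
    log_ws = []
    for _ in range(n_samples):
        z = mu + std * torch.randn_like(mu)
        log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
        logits = decoder(z)
        log_px_z = -torch.nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        log_ws.append(log_px_z + log_prior(z) - log_q)
    log_w = torch.stack(log_ws, dim=0)            # shape (n_samples, batch)
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```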
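Similarly, the unsupervised clustering accuracy (ACC) used in the clustering experiments can be computed by searching over cluster-to-label mappings with the Hungarian algorithm. The sketch below uses SciPy's linear_sum_assignment for that search; it is a standard way to compute the metric rather than necessarily the authors' exact code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels, clusters):
    """ACC = max_m (1/N) * sum_i 1{labels_i == m(clusters_i)}."""
    labels = np.asarray(labels)
    clusters = np.asarray(clusters)
    n = max(labels.max(), clusters.max()) + 1
    # Contingency table: count[c, l] = number of points in cluster c with label l.
    count = np.zeros((n, n), dtype=np.int64)
    for c, l in zip(clusters, labels):
        count[c, l] += 1
    # The Hungarian algorithm finds the mapping m maximizing the matched count.
    row, col = linear_sum_assignment(-count)
    return count[row, col].sum() / labels.size

# Example: two clusters whose indices are permuted relative to the labels.
print(clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```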
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJgNwi09Km
We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features.
Many practical robot locomotion tasks require agents to use control policies that can be parameterized by goals. Popular deep reinforcement learning approaches in this direction involve learning goal-conditioned policies or value functions, or Inverse Dynamics Models (IDMs). IDMs map an agent's current state and desired goal to the required actions. We show that the key to achieving good performance with IDMs lies in learning the information shared between equivalent experiences, so that they can be generalized to unseen scenarios. We design a training process that guides the learning of latent representations to encode this shared information. Using a limited number of environment interactions, our agent is able to efficiently navigate to arbitrary points in the goal space. We demonstrate the effectiveness of our approach in high-dimensional locomotion environments such as the Mujoco Ant, PyBullet Humanoid, and PyBullet Minitaur. We provide quantitative and qualitative results to show that our method clearly outperforms competing baseline approaches. In reinforcement learning (RL), an agent optimizes its behaviour to maximize a specific reward function that encodes tasks such as moving forward or reaching a target. After training, the agent simply executes the learned policy from its initial state until termination. In practical settings in robotics, however, control policies are invoked at the lowest level of a larger system by higher-level components such as perception and planning units. In such systems, agents have to follow a dynamic sequence of intermediate waypoints, instead of following a single policy until the goal is achieved. A typical approach to achieving goal-directed motion using RL involves learning goal-conditioned policies or value functions. The key idea is to learn a function conditioned on a combination of the state and goal by sampling goals during the training process. However, this approach requires a large number of training samples, and does not leverage waypoints provided by efficient planning algorithms. Thus, it is desirable to learn models that can compute actions to transition effectively between waypoints. A popular class of such models is called Inverse Dynamics Models (IDMs). IDMs typically map the current state (or a history of states and actions) and the goal state to the action. In this paper, we address the need for an efficient control module by learning a generalized IDM that can achieve goal-directed motion by leveraging data collected while training a state-of-the-art RL algorithm. We do not require full information of the goal state, or a history of previous states, to learn the IDM. We learn on a reduced goal space, such as 3-D positions to which the agent must learn to navigate. Thus, given just the intermediate 3-D positions, or waypoints, our agent can navigate to the goal, without requiring any additional information about the intermediate states. The basic framework of the IDM is shown in Fig. 1. The unique aspect of our algorithm is that we eliminate the need to randomly sample goals during training. Instead, we exploit the known symmetries/equivalences of the system (as is common in many robotics settings) to guide the collection of generalized experiences during training. We propose a class of algorithms that utilize the property of equivalence between transitions modulo the difference in a fixed set of attributes. In the locomotion setting, the agent's transitions are symmetric under translations and rotations.
We capture this symmetry by defining equivalence modulo orientation among experiences. We use this notion of equivalence to guide the training of latent representations shared by these experiences and provide them as input to the IDM to produce the desired actions, as shown in Fig. 4. A common challenge faced by agents trained using RL techniques is lack of generalization capability. The standard way of training produces policies that work very well on the states encountered by the agent during training, but often fail on unseen states. Achieving good performance using IDMs requires both these components: collecting generalized experiences, and learning these latent representations, as we demonstrate in Section 6. Our model exhibits high sample efficiency and superior performance, in comparison to other methods involving sampling goals during training. We demonstrate the effectiveness of our approach in the Mujoco Ant environment in OpenAI Gym , and the Minitaur and Humanoid environments in PyBullet . From a limited number of experiences collected during training under a single reward function of going in one direction, our generalized IDM succeeds at navigating to arbitrary goal positions in the 3-D space. We measure performance by calculating the closest distance to the goal an agent achieves. We perform ablation experiments to show that collecting generalized experience in the form of equivalent input pairs boosts performance over all baselines, these equivalent input pairs can be condensed into a latent representation that encodes relevant information, and learning this latent representation is in fact critical to success of our algorithm. Details of experiments and analysis of can be found in Sections 5 and 6. Several recent works learn policies and value functions that are conditioned over not just the state space, but also the goal space (; ;) and then generalize those functions to unseen goals. Goal-conditioned value functions are also largely used in hierarchical reinforcement learning algorithms , where the higher level module learns over intrinsic goals and the lower level control modules learn subpolicies to reach those goals, or the lower level control modules can efficiently execute goals proposed by the higher-level policy . use trained goalconditioned policies to learn actionable latent representations that extract relevant information from the state, and use these pretrained representations to train the agent to excel at other tasks. Pong* et al. learn goal-conditioned value functions, and use them in a model-based control setting. IDMs are functions that typically map the current state of the agent and the goal state that the agent aims to achieve, to the desired action. They have been used in a wide variety of contexts in existing literature. train IDMs using a history of states and actions and full goal state information for transferring models trained in simulation to real robots. and use IDMs in combination with Forward Dynamics Models (FDMs) to predict actions from compressed representations of high-dimensional inputs like RGB images generated by the FDM. use IDMs to provide a curiosity-based reward signal Figure 2: We report the of our algorithm and baselines on the three locomotion environments shown above: Humanoid, Minitaur, and Ant. The red sphere indicates the goal in each environment. in the general RL framework to encourage exploration; use IDMs to provide supervision for the learning of visual features relevant to the task assigned to the robot. 
We circumvent the need to learn goal-conditioned policies or value functions by combining IDMs with known symmetric properties of the robot. We train an IDM conditioned on the state space and a reduced goal space, using data collected while training any state-of-the-art RL algorithm. Our data collection is unique in that we exploit equivalences in experiences observed during training and learn a latent representation space shared between such equivalent experiences. Our IDM produces the action given this latent representation as an input, leading to generalization over parts of the state and goal spaces unobserved during training. In the general RL framework , a learning agent interacts with an environment modeled as a Markov Decision Process consisting of: 1) a state space S, 2) an action space A, 3) a probability distribution P: S × S × A →, where P (s |s, a) is the probability of transitioning into state s by taking action a in state s, 4) a reward function R: S × A × S → R that gives the reward for this transition, and 5) a discount factor Γ. The agent learns a policy π θ: S → A, parameterized by θ while trying to maximize the discounted expected return. Goal-conditioned RL optimizes for learning a policy that maximizes the return under a goal-specific reward function R g. On-policy RL algorithms, such as Policy Gradient methods and Trust Region methods use deep neural networks to estimate policy gradients, or maximize a surrogate objective function subject to certain constraints. Off-policy RL algorithms; ) incorporate elements of deep Q-learning into the actor-critic formulation. Hindsight Experience Replay (HER) is a popular technique used in conjunction with an off-policy RL algorithm to learn policies in a sample-efficient way from sparse rewards in goal-based environments. Our method leverages samples collected while training a state-of-the-art RL algorithm to train an IDM that maps the current state and desired goal position to the action required to reach the goal. There are four major components involved in this process: 1) collecting data while training the RL algorithm, 2) learning a basic IDM that maps the current state and the desired goal to the required action, 3) collecting experiences that are equivalent to those observed, and using them to train the IDM, and 4) learning a latent representation that generalizes this model to unseen parts of the state space by utilizing the equivalent experiences collected in step 3. We elaborate on each of these in the following sections. Our goal in this steps is to collect data for our IDM in the process of learning a policy under a single reward function. Recall the motivation for learning IDMs: we want a model that can take in the current state of the agent and the next command, in the form of a location in the space that the agent should travel to. Thus, we collect state, action, and position data from the transitions observed during the training process. We emphasize the difference between the state space S and the goal space O. The state space S is high-dimensional, consisting of information related to joint angles, velocities, torques, etc. The goal space, O, is low-dimensional, consisting, in this case, of the 3-D coordinates of the goal position that the agent is supposed to navigate to. Definition 1 (Experiences). 
We define experiences as tuples (s, o, o′, a), where s is the current state of the agent, o is its current 3-D position, and a is the action that the agent performed to move from the state s and position o to the next position, or intermediate goal, o′. We write {e_t}, t = 1, ..., T, to denote all the experience tuples collected from a trajectory τ of length T. Given a set of experiences E, we can train the IDM using supervised learning techniques. Definition 2 (Inverse Dynamics Model). We define the Inverse Dynamics Model (IDM) as a = φ(s, o, o′), where s, o, o′ and a represent the current state, current position, desired goal, and action respectively. The IDM can reproduce seen actions on state and observation tuples that have appeared in the training data. However, it cannot generalize good behaviour to states and observations that have not appeared in the initial training (see Fig. 6 for qualitative evidence). Our aim in the next steps is to generalize the observed experiences so that they can be used over previously unseen inputs from the S × O × O space. One issue with the current hypothesis of collecting data for a single reward is that the samples we obtain are highly biased, in that they predominantly contain samples for motion in one direction. As a result, there are certain parts of the input space that our agent is unlikely to ever visit. So it is unreasonable to expect it to generalize its behaviour in those parts. We can see qualitative evidence in Fig. 6, where the Humanoid can navigate to goals seen during training time (6a) using the basic IDM (the corresponding method in the plots is RL), but fails when the goal lies outside the training distribution (6b). In order to mitigate this bias in the state space introduced by the single reward function, we collect generalized experience comprising experience tuples that are equivalent modulo orientation (e.m.o.) with respect to actions. We use Θ to represent the orientation space. Definition 3 (Equivalence modulo Orientation). For e1, e2 ∈ E, e1 and e2 being e.m.o. with respect to A implies A(e1) = A(e2). This defines an equivalence mod Θ with respect to actions. (Figure caption: The LFM generates the latent representations for two e.m.o. samples. We minimize the distance between them to enforce equivalence, and also fit the IDM to predict the action correctly, given these latent representations.) Definition 4 (Generalized Experience). We collect unseen experiences equivalent to the observed experiences by taking a trajectory τ ⊆ E, changing the initial orientation of the agent, leading to a different s_0, and repeating the same set of actions observed in τ. We denote this operation by G, so G(τ) is a new trajectory. The full generalized experience set obtained in this way is written as G = E ∪ G(E). Here s_0 is an unseen state that has not appeared while training the agent. Despite using generalized experiences during training, the IDM does not always show great improvements in tasks like navigating to a desired goal position in an arbitrary direction, as seen in Table 1. We hypothesize that this is due to the agent failing to recognize e.m.o. experiences, and instead learning actions from irrelevant attributes of the state space. We use the e.m.o. experiences from the generalized experience set G to train a Latent Feature Model ψ that discards irrelevant information from the state, and learns only the shared information relevant to the IDM to produce actions. Definition 5 (Latent Feature Model). Our Latent Feature Model (LFM) aims to learn the equivalence between e.m.o.
experiences from the generalized experience set G. We define the LFM ψ as γ = ψ(s, o, o′), where γ is a k-dimensional latent representation of the experience sample. We then modify the IDM φ to produce the action from this latent representation as a = φ(γ). In order to learn these two models, the LFM and the IDM, we need our objective function to incorporate the property of equivalence modulo orientation with respect to actions in the latent representations, and to learn a good mapping from these representations to actions. Since the LFM is used to generate latent representations for two e.m.o. experience samples simultaneously, after which their distance is optimized, we use a Siamese framework to model ψ. Our first objective L1 minimizes the distance between the latent representations generated for e.m.o. experience samples. For the second objective L2, we use a simple regression loss such as the L2 distance to fit the output of the IDM to the action. Here, the input to the IDM is the mean of the latent representations generated for the e.m.o. experiences. We jointly learn these two models by minimizing a weighted combination of L1 and L2 (with the relative weight λ given in the Appendix). Fig. 4 shows the training procedure we use for our IDM. Each e.m.o. sample pair from equivalent trajectories is passed as input to the LFM, which generates the latent representations for the pair. The mean of these latent representations is then passed as input to the IDM, which predicts the action to be taken. These two models are trained simultaneously, resulting in rich latent representations that can achieve goal-directed motion, generalizable to any arbitrary location. Remark 1. It is important to note that at test time, only the current state and goal are passed to the LFM to generate the latent representation, which is used by the IDM to predict the required action. We demonstrate the effectiveness of our overall approach by performing a series of ablation experiments, successively including each major component of our algorithm. In addition to a random baseline, we use a Vanilla Goal-Conditioned Policy and Hindsight Experience Replay as baselines for comparison with our algorithm. We demonstrate superior results using our algorithm on three locomotion environments: the Mujoco Ant environment in OpenAI Gym, and the Humanoid and Minitaur environments in PyBullet. In order to fairly evaluate performance, we train each baseline using the same number of environment interactions as our algorithm. We also maintain uniform seeds, initial states and goals across all methods. The details of the test setting, along with network architectures, learning rates, and other hyperparameters, are discussed in the Appendix. Since our method aims at achieving goal-directed motion, we compare it with other on-policy and off-policy RL algorithms that are trained for this specific purpose. Random: In this baseline experiment, we collect (s, o, o′, a) samples by taking random actions at each step. We use these samples to train the IDM. Vanilla Goal-Conditioned Policy (VGCP): The second baseline algorithm we consider is VGCP, which takes as input the state and desired goal, and learns a policy on this input space using any state-of-the-art RL algorithm. The policy is given by π_VGCP: S × O → A and is learnt using a state-of-the-art model-free RL technique. We use Proximal Policy Optimization (PPO) for Ant and Minitaur, and Soft Actor-Critic (SAC) for Humanoid. We select Soft Actor-Critic (SAC) and Deep Deterministic Policy Gradient (DDPG) as the off-policy algorithms used in conjunction with HER for our experiments. We report results on HER with both sparse and dense rewards.
Sparse rewards indicate whether the target was successfully reached, and dense rewards include this information, in addition to control and contact costs. Throughout the paper, HER-Sp refers to HER with sparse rewards, and HER-De refers to HER with dense rewards. Collecting Experience using standard RL algorithm (RL): We collect samples while training state-of-the-art RL algorithms rewarding them for going to the right (as is common in locomotion environments), and use them as the training data for our IDM. More details are listed in the Appendix. Collecting Generalized Experience (GE): We collect generalized experiences in the following manner: we save the trajectories followed by the agent while it learns a policy for locomotion. For some/all of these trajectories (details in A.1.2), we rotate the initial state of the agent by a random angle, and repeat the actions taken in the original trajectory. The samples collected in this modified trajectory are e.m.o. to those in the original trajectory. All of these samples constitute generalized experiences, which we use to train the IDM. Table 1: Quantitative of our methods and all baselines on the three environments: Humanoid, Minitaur, and Ant respectively. In each environment, LR emerges as clearly the best performer, enabling the agent to navigate closest to the goal. We use the generalized experiences collected in the previous step to extend the learned behaviour of the agent to unseen parts of the state space. We use the dual network architecture shown in Fig. 4 in this experiment. We jointly train the LFM and IDM using a weighted loss function that minimizes the distance between the latent representations generated for e.m.o. experiences, and fits the output of the IDM to the desired actions. Remark 2. We preprocess (s, o, o) to (s, d) where d is the unit vector in the direction o → o, and provide it as input to the models. We also show on a task in which the agent has to navigate through a series of waypoints, to see if our IDM has indeed learnt the right actions. There are three questions we wish to answer through this experiment: 1) How much does the agent deviate from the intended optimal trajectory through the intermediate goals/waypoints? 2) How fast is the agent able to accomplish each goal? 3) What is the agent's style/gait of walking? For this qualitative comparison, we found that neither HER nor VGCP with the current setting was able to generate good trajectories. Thus, we trained VGCP on 20 million samples (our method uses 15 million environment samples). We use this trained policy to generate the VGCP trajectories in Fig. 7, and our LR model (trained using 15 million samples) for the LR trajectories. For all trajectories, the agents are given the same initial orientation and goal position to ensure fair comparison. In this section, we analyze the performances of our algorithm and baselines qualitatively and quantitatively. In particular, we discuss the following: 1) overall performances of all methods, 2) distribution of distances to the goal observed at test time for all methods, 3) comparison between performance of best baseline and LR on the waypoint navigation task, and 4) analysis of speed, trajectory, and walking style observed in LR and the best baseline on Humanoid. The first two points are addressed in the following two subsections. The next two questions are discussed in detail in Fig. 7. We report the closest distance from the target that the agent is able to reach, as the evaluation metric for all our experiments. 
The mean and standard deviation of closest distance are reported in Table 1. It is clear that for each of the three environments, GE and LR outperform all other baselines. In particular, LR is observed to be the best performing algorithm for each environment. This lends validation to our claim that learning latent representations shared between equivalent experiences indeed boosts performance, instead of treating the equivalent experiences as independent training samples. The in table 1 show that for the Minitaur and Humanoid environments, the performance of VGCP is better than most baselines, and nearly comparable to GE. However, it shows poor performance on the Ant environment. This anomaly arises from the fact that 2 million samples are not enough to train a goal-conditioned policy for the Ant. The Minitaur and Humanoid, on the other hand, are trained for 4 million and 15 million time steps respectively, thus enabling better goal-conditioned policies to be learnt, leading to much superior performance. For each of the environments, we observe that HER-Sp and HER-De both show poor performance, compared to that of GE, LR, and even VGCP. In this section, we analyze the violin plots in Fig. 5. These plots show the distributions of the closest distances from targets observed over 10,000 episodes for each algorithm. For each environment, we see that for RL, and in some cases, VGCP, HER, and GE, there is a small peak away from the mean, which gives a much lower distance than the mean distance. This suggests that there is a significant number of episodes in which the Humanoid reaches very close to the target. We analyze this discrepancy in performance qualitatively in 6, and conclude that the small peak consists of episodes in which the initial state and goal position have been observed during training, and thus, the goal-conditioned policy has already learnt the optimal actions for this scenario. Across all environments, Ant and Minitaur in particular, the peak of the LR distribution is much lower than the other methods. This shows that in most episodes, the agent can successfully navigate to the goal; the slightly higher mean value and variance are due to the small number of episodes in which the agent fails at the beginning of the episode. This is to be expected because the actions taken by the IDM depend on the kind of data collected during the training process. In some cases, the model may learn the wrong actions, leading to the agent dying early in those episodes. Figure 7: Given a series of waypoints, we check the ability of our Humanoid to navigate through them efficiently, and compare it with the best baseline, VGCP, trained on 20 million samples of environment interaction (our algorithm uses 15 million). (a) We plot the trajectory followed by each agent as it navigates through the waypoints. We see that LR can navigate through all goals much more easily than VGCP, without deviating. In addition to this, LR enables the agent to reach each goal much faster than VGCP, which is unable to reach the last goal due to its slow speed, as the episode terminates in a fixed number of timesteps. (b) We plot the orientation of the agent at uniform intervals throughout the episode. We observe that while the agent trained using LR walks forward as expected, the agent trained using VGCP, while managing to move slowly closer to the goal, does so in an arbitrary fashion in that its orientation is seldom in the direction of its motion. 
We also provide qualitative evidence to prove that the distribution of closest distances in episodes is indeed biased by initial state and target configurations seen in the training data. We fix the initial state of the agent and generate goals in a specific region. We plot the of 10 episodes for each method in Fig. 6 for the Humanoid environment. The initial state and goal configurations we show for, consist of those experienced during training (left), and those not experienced during training (right). We see that for the configurations experienced at training time, most baseline methods, and our methods, GE and LR, are able to reach close to the goal. However, when the configuration lies outside the training distribution, only LR (sometimes GE) can navigate to the goal. We include more in the Appendix. We propose a new algorithm to achieve goal-directed motion for a variety of locomotion agents by learning the inverse dynamics model on shared latent representations for equivalent experiences. To this end, we take three important steps: We utilize the experience collected by our agent while training a standard reinforcement learning algorithm, so that our IDM has "good" samples in which the agent walks reasonably well. We generalize this experience by modifying the initial configuration for each observed trajectory in the collected data, and generate the equivalent trajectories. We learn the important shared information between such symmetric pairs of experience samples through a latent representation that is used as an input to the IDM to produce the action required for the agent to reach the goal. We provide extensive qualitative and quantitative evidence to show that our methods surpass existing methods to achieve generalization over unseen parts of the state space. , and Humanoid and Minitaur environments in PyBullet . In each of these environments, for our methods, the agent is trained to perform well on the 3-D locomotion task in one direction. For the baseline methods, the agent is trained to reach goals generated in different 3-D positions. The reward function for collecting data for RL, GE and LR includes contact and control costs, and rewards for moving forward. The reward function for training VGCP includes contact and control costs, but instead of moving forward, the agent is encouraged to move in the direction of the goal. In the case of HER-Sparse, the agent only receives a reward 0 if it reaches the goal, and -1 otherwise. For HER-Dense, the agent receives a weighted sum of distance to the goal and contact and control costs as its reward. Humanoid Humanoid is a bipedal robot, with a 43-D state space and 17-D action space. The state contains information about the agent's height, orientation (yaw, pitch, roll), 3-D velocity, and the 3-D relative positions of each of its joints (knees, shoulders, etc.). The action consists of the torques at these joints. Minitaur Minitaur is a quadrupedal robot, with a 17-D state space and 8-D action space. The state contains information about motor angles, torques, velocities, and the orientation of the base. The action consists of the torque to be applied at each joint. Ant Ant is a quadrupedal robot, with a 111-D state space and 8-D action space. The state space contains the agent's height, 3-D linear and angular velocity, joint velocities, joint angles, and the external forces at each link. The action consists of the torques at all the 8 joints. 
Our choice of these locomotion environments is driven by the motivation provided earlier: we want an agent to navigate to a desired goal position in the 3-D space. In view of this, we do not use 2-D locomotion environments like Half-Cheetah, Walker, or Hopper. Also, though Reacher and Pusher are goal-based environments, we do not use them as they are manipulation environments, and we are aiming to achieve navigation by agents trained on 3-D locomotion tasks. The details of number of environment interactions, network architectures for the IDM, LFM, policy, and the state-of-the-art RL algorithm used to train the policy (for baselines), or collect data (for our methods), are given in Table 2. We use the Adam optimizer with a learning rate of 1e-3 and a batch size of 512 for all our methods across all environments. For the Humanoid environment, we observed that the first ∼ 5 million samples were predominantly bad samples that led to the Humanoid falling down very early in each episode. Thus, we froze training after 5 million samples, when we had a policy in which the agent was able to walk a few steps. We used the policy at that point to collect 10 million steps of environment interaction. For the Ant and Minitaur environments, we collected 2 million and 4 million steps of environment interactions respectively, during training. For RL, we used all steps encountered in on-policy data. For GE and LR, we used only the first half of training samples encountered while training the policy. The second half of the samples is generated by applying the generalized transformation G, explained in Section 4.3. We use the first 1 million, 2 million and 5 million samples collected for the Ant, Minitaur and Humanoid (trained) respectively, and generate the rest by applying G. The hyperparameters involved in training the LFM and IDM are: the dimension k of the latent representation space, and the ratio of the latent representation loss (L 1) to the regression loss (L 2), λ. We set λ = 0.25 for all 3 environments. We use k = 10 for the Ant and Minitaur environments, and k = 50 for the Humanoid environment. We discuss the impact of the latent representation dimension k on the performance of the model in the next section. At test time, the agent is rotated by a random angle. The target is set at a distance of ∼ 2 − 5 units from the agent at an angle of [−45 •, 45 •] to the agent's orientation in the case of the Ant and Humanoid environments. For Minitaur, we set the target at a distance of ∼ 1.5 − 2.5 units from the agent at an angle of [−45 •, 45 •] to the agent's orientation. The intermediate goals, or waypoints, can be provided by any planning algorithm, since our approach is agnostic to the actual planning algorithm used. In our work, we use Model Predictive Control, i.e. we replan at each step. Each episode consists of a maximum of 1000 steps for each environment, and the episode terminates when the agent reaches the goal, or falls down/dies. We report the closest distance from the target that the agent is able to reach, for each episode. We select 10 random seeds and test the performance of each method on 1000 episodes for each random seed. In order to ensure that the comparison between all methods is indeed fair, we set the initial configuration of the agent and the target to be the same across all methods at test time. Why does our method perform better than baselines in spite of using inferior samples? 
It is important to note that since we take only the first half of the environment interactions for GE and LR, we are learning from essentially inferior samples consisting of actions that may not be optimal, as they have been encountered earlier in the training process. In spite of learning from inferior actions, our method outperforms the baselines. This is because our IDM formulation enables the learning of actions that can reach the next goal given the current state, which is the kind of behaviour we require in goal-directed motion. The focus of this model is more on learning actions that caused certain transitions, not on learning the most optimal actions that achieve the transition, although that would be the next best thing to do. In addition to this, our LFM framework enables the agent to achieve their equivalent transitions by producing the same action for all transitions that share a common latent representation. This enables our agent to generalize its motion to a larger part of the goal space in comparison with baselines that struggle to achieve goal-directed motion for goals or states lying outside the training distribution. Why is our method more sample-efficient than the baselines? Our methods (GE and LR) do not utilize the latter half of samples collected during the training process, which would consist of more optimal actions, because that would imply using a higher number of actual environment interactions than the baselines. In spite of this, our method outperforms all baselines for all environments. This is because goal-conditioned policies explore the state and goal spaces to achieve goal-directed motion. While exploration is in fact a desirable component in reinforcement learning, sampling from the goal space during training requires the agent to acquire a sense of direction and the skill of locomotion simultaneously. This enlarged state space, comprising the original state and the goal, leads to an increase in the number of samples required by the agent to learn a good policy. Our latent representations, on the other hand, encode the property of equivalence modulo orientation with respect to actions, described in Section 4.3. This enables our method to generalize representations of some samples of training experience to other parts of the state and goal spaces, which have not been encountered during training. As a , the IDM learns to predict actions for states and goals it has never encountered before, if they have the same latent representation as those seen during training, thus boosting performance while remaining sample-efficient. Why does LR show qualitatively better than VGCP? VGCP is trained as a function of the state and goal space, and produces the action that the agent should take in order to reach the goal. The agent is rewarded for moving towards the goal, keeping control and contact costs minimal. There is no constraint on the type of motion exhibited by the agent, or the time taken by the agent to reach the goal. As a , we see in Fig. 7 that the agent trained using VGCP walks very slowly throughout both episodes, and follows a non-optimal path to navigate through 3 out of the 4 waypoints. Due to its slow speed, it is unable to reach the last waypoint. The LR agent, on the other hand, learns an IDM from samples collected while training on the locomotion task. 
Since the locomotion reward encourages walking forward while also keeping control and contact costs minimal, the LR agent always walks forward facing each waypoint, and follows a smooth trajectory through all the waypoints, validating the superior performance of our IDM. We see that though the LR and VGCP agents are initialized in the same orientation, the VGCP agent walks with its back facing the waypoints, while the LR agent successfully adopts the optimal path and navigates through all waypoints with a "natural" walking style. Why does HER not perform as well as other baselines? If we revisit the motivation behind the HER algorithm, we realize that it shows superior performance using fewer training samples on environments that have low-dimensional state and action spaces and a sparse reward setting. In our case, we are dealing with high-dimensional locomotion agents that require specific actions to walk and navigate to the goal. If we use the sparse reward setting, most samples in the replay buffer consist of failures, as the agent first needs to learn to walk, after which it can successfully navigate to the goal. Even with the dense reward setting, there is no significant improvement in performance for the same reason. HER performs very well on low-dimensional sparse-reward environments, but it is difficult to extend this behaviour to higher dimensions or dense rewards, especially where complex locomotion skills have to be learned in order to navigate to the goal. How important is the latent representation dimension k in achieving good performance? The dimension of the latent representation, k, is perhaps the most important hyperparameter that determines the performance of LR. Choosing a bad value of k could result in information loss if k dimensions are not sufficient to encode the relevant information for producing the action, or in irrelevant information being encoded if k is too high. In our experiments, we use k = 10 for Ant and Minitaur, and k = 50 for Humanoid. We show the impact of k on performance in the Ant environment in Fig. 9. We see that performance improves as we keep reducing the dimension up until k = 10. As we try to reduce it further, performance suddenly drops, implying loss of information. Thus, the Ant's 111-D state and 3-D goal are reduced to a 10-D representation that is used by the IDM to produce actions to navigate to the goal.
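As a rough illustration of the training setup described above, the following PyTorch-style sketch combines the action regression loss and the latent representation loss with the weight λ = 0.25 and a k-dimensional latent space. All module and field names (e.g., LatentIDM, latent_target) are hypothetical, and the exact way the two losses and the LFM targets are combined is an assumption, not the paper's verbatim objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentIDM(nn.Module):
    """Sketch: an inverse dynamics model mapping (state, goal) to an action
    through a k-dimensional latent representation."""
    def __init__(self, state_dim, goal_dim, action_dim, k=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim + goal_dim, 128), nn.ReLU(),
                                     nn.Linear(128, k))
        self.action_head = nn.Sequential(nn.Linear(k, 128), nn.ReLU(),
                                         nn.Linear(128, action_dim))

    def forward(self, state, goal):
        z = self.encoder(torch.cat([state, goal], dim=-1))   # k-dim latent representation
        return self.action_head(z), z

def training_loss(model, batch, lam=0.25):
    # batch["latent_target"] stands in for the LFM's latent target of an
    # equivalent transition (hypothetical field name).
    pred_action, z = model(batch["state"], batch["goal"])
    regression_loss = F.mse_loss(pred_action, batch["action"])   # L2: action regression
    latent_loss = F.mse_loss(z, batch["latent_target"])          # L1: latent representation
    return regression_loss + lam * latent_loss                   # lam weights L1 relative to L2
```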
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HylloR4YDr
We show that the key to achieving good performance with IDMs lies in learning latent representations to encode the information shared between equivalent experiences, so that they can be generalized to unseen scenarios.
In this paper, we first identify \textit{angle bias}, a simple but remarkable phenomenon that causes the vanishing gradient problem in a multilayer perceptron (MLP) with sigmoid activation functions. We then propose \textit{linearly constrained weights (LCW)} to reduce the angle bias in a neural network, so as to train the network under the constraints that the sum of the elements of each weight vector is zero. A reparameterization technique is presented to efficiently train a model with LCW by embedding the constraints on weight vectors into the structure of the network. Interestingly, batch normalization can be viewed as a mechanism to correct angle bias. Preliminary experiments show that LCW helps train a 100-layered MLP more efficiently than does batch normalization. Neural networks with a single hidden layer have been shown to be universal approximators BID6 BID8. However, an exponential number of neurons may be necessary to approximate complex functions. A solution to this problem is to use more hidden layers. The representation power of a network increases exponentially with the addition of layers BID17 BID2. A major obstacle in training deep nets, that is, neural networks with many hidden layers, is the vanishing gradient problem. Various techniques have been proposed for training deep nets, such as layer-wise pretraining BID5, rectified linear units BID13 BID9, variance-preserving initialization BID3, and normalization layers BID7 BID4.In this paper, we first identify the angle bias that arises in the dot product of a nonzero vector and a random vector. The mean of the dot product depends on the angle between the nonzero vector and the mean vector of the random vector. We show that this simple phenomenon is a key cause of the vanishing gradient in a multilayer perceptron (MLP) with sigmoid activation functions. We then propose the use of so-called linearly constrained weights (LCW) to reduce the angle bias in a neural network. LCW is a weight vector subject to the constraint that the sum of its elements is zero. A reparameterization technique is presented to embed the constraints on weight vectors into the structure of a neural network. This enables us to train a neural network with LCW by using optimization solvers for unconstrained problems, such as stochastic gradient descent. Preliminary experiments show that we can train a 100-layered MLP with sigmoid activation functions by reducing the angle bias in the network. Interestingly, batch normalization BID7 can be viewed as a mechanism to correct angle bias in a neural network, although it was originally developed to overcome another problem, that is, the internal covariate shift problem. Preliminary experiments suggest that LCW helps train deep MLPs more efficiently than does batch normalization. In Section 2, we define angle bias and discuss its relation to the vanishing gradient problem. In Section 3, we propose LCW as an approach to reduce angle bias in a neural network. We also present a reparameterization technique to efficiently train a model with LCW and an initialization method for LCW. In Section 4, we review related work; mainly, we examine existing normalization techniques from the viewpoint of reducing the angle bias. In Section 5, we present empirical that show that it is possible to efficiently train a 100-layered MLP by reducing the angle bias using LCW. Finally, we conclude with a discussion of future works. We introduce angle bias by using the simple example shown in FIG1. 
FIG1 (a) is a heat map representation of a matrix W ∈ R^(100×100), each of whose elements is independently drawn from a uniform random distribution in the range (−1, 1). Matrix A ∈ R^(100×100) is also generated randomly, and its elements range from 0 to 1, as shown in FIG1 (b). We multiply W and A to obtain the matrix shown in FIG1 (c). Unexpectedly, a horizontal stripe pattern appears in the heat map of W A although both W and A are random matrices. This pattern is attributed to the angle bias, defined as follows: Definition 1. P_γ is an m-dimensional probability distribution whose expected value is γ1_m, where γ ∈ R and 1_m is an m-dimensional vector whose elements are all one. Proposition 1. Let a be a random vector in R^m that follows P_γ. Given w ∈ R^m such that ‖w‖ > 0, the expected value of w · a is |γ|√m ‖w‖ cos θ_w, where θ_w is the angle between w and 1_m, and E(x) denotes the expected value of a random variable x. Definition 2. From Proposition 1, the expected value of w · a depends on θ_w as long as γ ≠ 0. The distribution of w · a is then biased depending on θ_w; this is called angle bias. In FIG1, if we denote the i-th row vector of W and the j-th column vector of A by w_i and a_j, respectively, a_j follows P_γ with γ = 0.5. The i-th row of W A is biased according to the angle between w_i and 1_m, because the (i, j)-th element of W A is the dot product of w_i and a_j. Note that if the random matrix A has both positive and negative elements, W A also shows a stripe pattern as long as each column vector of A follows P_γ with γ ≠ 0. We can generalize Proposition 1 to any m-dimensional distribution P̂, instead of P_γ, as follows: Proposition 2. Let â be a random vector that follows an m-dimensional probability distribution P̂ whose expected value is μ̂ ∈ R^m. Given w ∈ R^m such that ‖w‖ > 0, it follows that E(w · â) = ‖μ̂‖ ‖w‖ cos θ̂_w, where θ̂_w is the angle between w and μ̂. Proof. The proof is the same as that of Proposition 1. Proposition 2 states that the distribution of w · â is biased according to θ̂_w unless ‖μ̂‖ = 0. The same bias arises in each layer of an MLP, where the preactivation of the i-th neuron in layer l is the dot product of its weight vector w_i^l and the activation vector a^(l−1) of the previous layer; θ_i^l denotes the corresponding angle. Repeating the operations through multiple layers, the variance of θ_i^l and a_i^l will shrink to small values.
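A minimal numerical check of Proposition 1 on the random-matrix example above; the seed and the correlation-based check are our own (the paper reports the effect as a stripe pattern in a heat map):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
W = rng.uniform(-1.0, 1.0, size=(m, m))   # weights in (-1, 1)
A = rng.uniform(0.0, 1.0, size=(m, m))    # activations in [0, 1], so gamma = 0.5

WA = W @ A
ones = np.ones(m)
# angle between each row of W and the all-ones vector
cos_theta = (W @ ones) / (np.linalg.norm(W, axis=1) * np.sqrt(m))

# Proposition 1 predicts E[w_i . a_j] = |gamma| * sqrt(m) * ||w_i|| * cos(theta_i)
predicted = 0.5 * np.sqrt(m) * np.linalg.norm(W, axis=1) * cos_theta

# Empirical row means of WA track the prediction almost perfectly (correlation ~ 1),
# which is exactly the horizontal stripe pattern in the heat map.
print(np.corrcoef(WA.mean(axis=1), predicted)[0, 1])
```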
We illustrate the effect of angle bias in an MLP by using the CIFAR-10 dataset BID10, which includes a set of 32 × 32 color (RGB) images. Each sample in CIFAR-10 is considered an input vector with 32 × 32 × 3 = 3072 real values, in which each variable is scaled into the range [−1, 1]. We consider an MLP with sigmoid activation functions that has 10 hidden layers with m = 128 neurons in each layer. The weights of the MLP are initialized according to BID3. We randomly took 100 samples from the dataset and input them into the MLP. FIG2 shows the activation pattern in layers 1, 3, 5, 7, and 9 on the selected samples. Please note that the activation in Layer 1 corresponds to a_i^1, that is, Layer 1 is the layer after the input layer. We see stripe patterns in the layers other than Layer 1 in FIG2 that are caused by angle bias. In Layer 9, the activation value of each neuron is almost constant regardless of the input. In contrast, no stripe pattern appears in Layer 1, because each element of the input vector is scaled into the range [−1, 1] and its mean value is near zero; this corresponds to the case in which ‖μ̂‖ ≈ 0 in Proposition 2. FIG6 shows boxplot summaries of θ_i^l, computed for each sample, on the first ten neurons in layers 1, 3, 5, 7, and 9, in which the 1%, 25%, 50%, 75%, and 99% quantiles are displayed as whiskers or boxes. We see that the means of θ_i^l are biased depending on the neuron in the layers other than Layer 1. We also see that the variance of θ_i^l shrinks through the layers. Next, we consider an MLP with ReLU activation functions that has 50 hidden layers with m = 128 neurons in each layer. The weights are initialized according to BID3. Figure 4 shows the activation pattern in layers 1, 10, 20, 30, and 40 on the randomly selected samples. We see stripe patterns in the layers other than Layer 1 that are caused by the angle bias. Figure 5 shows boxplot summaries of θ_i^l on the first ten neurons in layers 1, 10, 20, 30, and 40. We see that the means of θ_i^l are biased depending on the neuron in the layers other than Layer 1. We also see that the variance of θ_i^l shrinks through the layers, but the shrinking rate is much more moderate compared to that in FIG6. This is because ReLU projects a preactivation vector into the unbounded region [0, +∞)^m, and the activation vector is less likely to concentrate on a specific region. Under the effect of angle bias, the activations of neurons in deeper layers are almost constant regardless of the input in an MLP with sigmoid activation functions, as shown in FIG2. This indicates that ∇_{a^0}L ≈ 0, where L is a loss function defined based on the output of the MLP and ∇_{a^0}L denotes the gradient with respect to the input vector a^0. By the chain rule, this implies that the gradients of the weights in the first layer vanish; applying the same argument with l = 1 and then l = 2, under the assumption that the first-layer weight gradients are almost zero, the gradients of the weights in the second layer also vanish, and the argument propagates to deeper layers. If we use rectified linear activation instead of sigmoid activation, the gradients of weights are less likely to vanish, because ∇_{a^0}L will seldom be exactly zero. However, the rate of each neuron being active is biased, because the distribution of the preactivation z_i^l is biased. If a neuron is always active, it behaves as an identity mapping. If a neuron is always inactive, it is worthless because its output is always zero. Such a phenomenon is observed in deep layers in Figure 4. As discussed in BID1, the efficiency of the network decreases in this case. In this sense, angle bias may reduce the efficiency of a network with rectified linear activation. There are two approaches to reduce angle bias in a neural network. The first one is to somehow make the expected value of the activation of each neuron near zero, because angle bias does not occur if ‖μ̂‖ = 0 from Proposition 2. The second one is to somehow regularize the angle between the weight vector and the mean of the activation vector of the previous layer. In this section, we propose a method to reduce angle bias in a neural network by using the latter approach. We introduce the set W_LC := {w ∈ R^m : w · 1_m = 0}, that is, the set of weight vectors whose elements sum to zero. The following holds for w ∈ W_LC: Proposition 3. Let a be an m-dimensional random variable that follows P_γ. Given w ∈ W_LC such that ‖w‖ > 0, the expected value of w · a is zero. If the weight vectors of the neurons in a layer are taken from W_LC, the preactivations in that layer are free of angle bias, and the distributions of the activations a_i^l (i = 1, ..., m) will likely be more similar to each other. The activation vector in layer l is then expected to follow P_γ. Therefore, if the input vector a^0 follows P_γ, we can inductively reduce the angle bias in each layer of an MLP by using weight vectors that are included in W_LC. We call a weight vector w ∈ W_LC a linearly constrained weight (LCW).
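A quick numerical check of Proposition 3, using mean-centering of each row as one simple way to obtain weights in W_LC (our own illustration, not the training procedure proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
W = rng.uniform(-1.0, 1.0, size=(m, m))
A = rng.uniform(0.0, 1.0, size=(m, m))    # columns follow P_gamma with gamma = 0.5

W_lc = W - W.mean(axis=1, keepdims=True)  # rows now sum to zero, i.e. w . 1_m = 0

print(np.abs((W @ A).mean(axis=1)).mean())     # row means strongly biased by the angle
print(np.abs((W_lc @ A).mean(axis=1)).mean())  # close to zero, as Proposition 3 predicts
```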
We built an MLP with sigmoid activation functions of the same size as that used in Section 2.2.1, but whose weight vectors are replaced with LCWs. We applied the minibatch-based initialization described in Section 3.3. FIG8 shows the activation pattern in layers 1, 3, 5, 7, and 9 of the MLP with LCW on the randomly selected samples that are used in FIG2. When compared with FIG2, we see no stripe pattern in FIG8. The neurons in Layer 9 respond differently to each input sample; this means that a change in the input leads to a different output. Therefore, the network output changes if we adjust the weight vectors in Layer 1, that is, the gradients of the weights in Layer 1 do not vanish in FIG8. FIG9 shows boxplot summaries of θ_i^l on the first ten neurons in layers 1, 3, 5, 7, and 9 of the MLP with LCW. We see that the angle distributes around 90° for each neuron in each layer. This indicates that the angle bias is resolved in the calculation of z_i^l by using LCW. FIG10 shows the activation pattern in the layers of the MLP with LCW after 10 epochs of training. A slight stripe pattern is visible in FIG10, but neurons in each layer react differently to each input. FIG11 shows boxplot summaries of θ_i^l of the MLP after 10 epochs of training. We see that the mean of θ_i^l is slightly biased depending on the neuron. However, the variance of θ_i^l does not shrink even in deeper layers. We built an MLP with ReLU activation functions of the same size as that used in Section 2.2.2, whose weight vectors are replaced with LCWs. We applied the minibatch-based initialization described in Section 3.3. FIG1 shows the activation pattern in layers 1, 10, 20, 30, and 40 of the MLP with LCW. When compared with Figure 4, we see no stripe pattern in FIG1. FIG1 shows boxplot summaries of θ_i^l on the first ten neurons in layers 1, 10, 20, 30, and 40 of the MLP with LCW. We can observe that the angle bias is resolved by using LCW in the MLP with ReLU activation functions. A straightforward way to train a neural network with LCW is to solve a constrained optimization problem, in which a loss function is minimized under the condition that each weight vector is included in W_LC. Although several methods are available to solve such constrained problems, for example, the gradient projection method BID11, it might be less efficient to solve a constrained optimization problem than to solve an unconstrained one. We propose a reparameterization technique that enables us to train a neural network with LCW by using a solver for unconstrained optimization. We can embed the constraints on the weight vectors into the structure of the neural network by reparameterization: each weight vector w ∈ W_LC is expressed as w = B_m v, where v ∈ R^(m−1) is an unconstrained parameter vector and B_m is an m × (m − 1) matrix whose columns form a basis of W_LC; such a basis can be obtained, for example, by stacking I_{m−1} on top of a row of −1s, where I_{m−1} is the identity matrix of order (m − 1) × (m − 1). In the experiments in Section 5, we used the B_m constructed from I_{m−1} as above. We also tried an orthonormal basis of W_LC as B_m; however, there was little difference in accuracy. It is worth noting that the proposed reparameterization can be implemented easily and efficiently by using modern frameworks for deep learning on GPUs. By introducing LCW, we can reduce the angle bias in z_i^l, which mainly affects the expected value of z_i^l. It is also important to regularize the variance of z_i^l, especially when the sigmoid activation is used, because the output of the activation will likely saturate when the variance of z_i^l is too large. We apply an initialization method by which the variance of z_i^l is regularized based on a minibatch of samples. This type of initialization has also been used in previous studies BID12 and BID16.
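The reparameterization can be sketched as a drop-in linear layer. The particular basis below (identity stacked on a row of −1s) is one valid choice consistent with the description above, not necessarily the exact matrix used in the paper, and the initialization scale is arbitrary:

```python
import torch
import torch.nn as nn

class LCWLinear(nn.Module):
    """Sketch of the LCW reparameterization: each weight row w = B_m v lies in
    W_LC (its elements sum to zero) while v is optimized without constraints."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_features, in_features - 1) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # B_m: (m, m-1) matrix whose columns each sum to zero (e_i - e_m).
        B = torch.cat([torch.eye(in_features - 1),
                       -torch.ones(1, in_features - 1)], dim=0)
        self.register_buffer("B", B)

    def forward(self, x):
        W = self.v @ self.B.t()            # (out, m); every row sums to zero
        return nn.functional.linear(x, W, self.bias)
```

Because the constraint is baked into the forward pass, any unconstrained optimizer (e.g. SGD) keeps the weights in W_LC throughout training.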
We conducted preliminary experiments using the CIFAR-10 dataset, the CIFAR-100 dataset BID10, and the SVHN dataset BID14. These experiments are aimed not at achieving state-of-the-art results but at investigating whether we can train a deep model by reducing the angle bias, and at empirically evaluating the performance of LCW in comparison to that of BN and WN. Network structure: We used MLPs with the cross-entropy loss function. Each network has 32 × 32 × 3 = 3072 input neurons and 10 output neurons, and it is followed by a softmax layer. We refer to an MLP that has L hidden layers and M neurons in each hidden layer as MLP(L, M). Either a sigmoid activation function or a rectified linear activation function was used. MLP LCW denotes an MLP in which each weight vector is replaced by LCW. MLP BN denotes an MLP in which the preactivation of each neuron is normalized by BN. MLP WN denotes an MLP whose weight vectors are reparametrized by WN. Initialization: Plain MLP and MLP BN were initialized using the method proposed in BID3. MLP LCW was initialized using the minibatch-based method described in Section 3.3 with σ_z = 0.5. MLP WN was initialized according to BID16. Optimization: MLPs were trained using stochastic gradient descent with a minibatch size of 128 for 100 epochs. The learning rate starts from 0.1 and is multiplied by 0.95 after every two epochs. The experiments were performed on a system running Ubuntu 16.04 LTS with NVIDIA Tesla K80 GPUs. We implemented LCW using PyTorch version 0.1.12. We implemented BN using the torch.nn.BatchNorm1d module in PyTorch. We implemented WN ourselves using PyTorch. We first consider MLPs with sigmoid activation functions. FIG1 shows the convergence and computation time for training MLPs with the CIFAR-10 dataset. FIG1 (a) shows that the training accuracy of the plain MLP is 10% throughout the training, because the MLP output is insensitive to the input due to the angle bias, as mentioned in Section 2.2. By contrast, MLP LCW and MLP BN are successfully trained, as shown in FIG1 (a), indicating that the angle bias is a crucial obstacle to training deep MLPs with sigmoid activation functions. MLP LCW achieves a higher rate of increase in the training accuracy compared to MLP BN in FIG1 (a), (d), and (g). As described in Section 4, WN itself cannot reduce the angle bias, but the bias is reduced immediately after the initialization of WN. From FIG1 (a) and (d), we see that deep MLPs with WN are not trainable. These results suggest that starting with weight vectors that do not incur angle bias is not sufficient to train deep nets. It is important to incorporate a mechanism that reduces the angle bias during training, such as LCW or BN. The computational overhead of training MLP LCW is approximately 55% compared to the plain MLP, as shown in FIG1 (b); this is much lower than that of MLP BN. The overhead of MLP WN is large compared to that of MLP BN, although this contradicts the claim of BID16. We think this is due to the implementation of these methods. The BN module we used in the experiments consists of a specialized function developed by GPU vendors, whereas the WN module was developed by ourselves. In this sense, the overhead of LCW may be improved by a more sophisticated implementation. In terms of the test accuracy, MLP LCW has peaks around 20 epochs, as shown in FIG1 (c), (f), and (i). We have no clear explanation for this finding, and further studies are needed to investigate the generalizability of neural networks. Experimental results with the SVHN and CIFAR-100 datasets are reported in Section B in the appendix. We have experimented with MLPs with rectified linear activation functions.
In our experiments, we observed that the plain MLP with 20 layers and 256 neurons per layer was successfully trained. However, the training of MLP LCW of the same size did not proceed at all, regardless of the dataset used in our experiment; in fact, the output values of the network exploded after a few minibatch updates. We have investigated the weight gradients of the plain MLP and MLP LCW. FIG1 shows boxplot summaries of the weight gradients in each layer of both models, in which the gradients are evaluated by using a minibatch of CIFAR-10 immediately after the initialization. By comparing FIG1 (a) and FIG1 (b), we find an exponential increase in the distributions of the weight gradients of MLP LCW in contrast to the plain MLP. Because the learning rate was the same for every layer in our experiments, this exponential increase of the gradients might hamper the learning of MLP LCW. The gradients in a rectifier network are sums of path-weights over active paths BID1. The exponential increase of the gradients therefore implies an exponential increase of active paths. As discussed in Section 2.3, we can prevent neurons from being always inactive by reducing the angle bias, which we think caused the exponential increase in active paths. We need further studies to make MLP LCW with rectified linear activation functions trainable. Possible directions are to apply layer-wise learning rates or to somehow regularize the distribution of the weight gradients in each layer of MLP LCW, which we leave as future work. In this paper, we have first identified the angle bias that arises in the dot product of a nonzero vector and a random vector. The mean of the dot product depends on the angle between the nonzero vector and the mean vector of the random vector. In a neural network, the preactivation value of a neuron is biased depending on the angle between the weight vector of the neuron and the mean of the activation vector in the previous layer. We have shown that such biases cause a vanishing gradient in a neural network with sigmoid activation functions. To overcome this problem, we have proposed linearly constrained weights to reduce the angle bias in a neural network; these can be learned efficiently by the reparameterization technique. Preliminary experiments suggest that reducing the angle bias is essential to train deep MLPs with sigmoid activation functions.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HylgYB3pZ
We identify angle bias that causes the vanishing gradient problem in deep nets and propose an efficient method to reduce the bias.
Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning. Knowledge graphs collect and organize relations and attributes about entities, which are playing an increasingly important role in many applications, including question answering and information retrieval. Since knowledge graphs may contain incorrect, incomplete or duplicated records, additional processing such as link prediction, attribute classification, and record de-duplication is typically needed to improve the quality of knowledge graphs and derive new facts. Markov Logic Networks (MLNs) were proposed to combine hard logic rules and probabilistic graphical models, which can be applied to various tasks on knowledge graphs . The logic rules incorporate prior knowledge and allow MLNs to generalize in tasks with small amount of labeled data, while the graphical model formalism provides a principled framework for dealing with uncertainty in data. However, inference in MLN is computationally intensive, typically exponential in the number of entities, limiting the real-world application of MLN. Graph neural networks (GNNs) have recently gained increasing popularity for addressing many graph related problems effectively (; ; ;). However, the design and training procedure of GNNs do not explicitly take into account the prior knowledge in the form of logic rules. To achieve good performance, these models typically require sufficient labeled instances on specific end tasks . In this paper, we explore the combination of the best of both worlds, aiming for a method which is data-driven yet can exploit the prior knowledge encoded in logic rules. To this end, we design a simple variant of graph neural networks, named ExpressGNN, which can be efficiently trained in the variational EM framework for MLN. An overview of our method is illustrated in Fig. 1. ExpressGNN and the corresponding reasoning framework lead to the following desiderata: • Efficient inference and learning: ExpressGNN can be viewed as the inference network for MLN, which scales up MLN inference to much larger knowledge graph problems. • Combining logic rules and data supervision: ExpressGNN can leverage the prior knowledge encoded in logic rules, as well as the supervision from labeled data. • Compact and expressive model: ExpressGNN may have small number of parameters, yet it is sufficient to represent mean-field distributions in MLN. Statistical relational learning. There is an extensive literature relating the topic of logic reasoning, and here we only focus on the approaches that are most relevant to statistical relational learning on knowledge graphs. 
Logic rules can compactly encode the domain knowledge and complex dependencies. Thus, hard logic rules are widely used for reasoning in earlier attempts, such as expert systems and inductive logic programming . However, hard logic is very brittle and has difficulty in coping with uncertainty in both the logic rules and the facts in knowledge graphs. Later studies have explored to introduce probabilistic graphical model in the field of logic reasoning, seeking to combine the advantages of relational and probabilistic approaches. Representative works including Relational Markov Networks (RMNs;) and Markov Logic Networks (MLNs;) were proposed in this . Markov Logic Networks. MLNs have been widely studied due to the principled probabilistic model and effectiveness in a variety of reasoning tasks, including entity resolution (a), social networks , information extraction , etc. MLNs elegantly handle the noise in both logic rules and knowledge graphs. However, the inference and learning in MLNs is computationally expensive due to the exponential cost of constructing the ground Markov network and the NP-complete optimization problem. This hinders MLNs to be applied to industry-scale applications. Many works appear in the literature to improve the original MLNs in both accuracy and efficiency (b; ; ;). Nevertheless, to date, MLNs still struggle to handle large-scale knowledge bases in practice. Our framework ExpressGNN overcomes the scalability challenge of MLNs by efficient stochastic training algorithm and compact posterior parameterization with graph neural networks. Graph neural networks. Graph neural networks (GNNs; ;) can learn effective representations of nodes by encoding local graph structures and node attributes. Due to the compactness of model and the capability of inductive learning, GNNs are widely used in modeling relational data . Recently, proposed Graph Markov Neural Networks (GMNNs), which employs GNNs together with conditional random fields to learn object representations. These existing works are simply data-driven, and not able to leverage the domain knowledge or human prior encoded in logic rules. To the best of our knowledge, ExpressGNN is the first work that connects GNNs with first-order logic rules to combine the advantages of both worlds. Knowledge graph embedding. Another line of research for knowledge graph reasoning is in the family of knowledge graph embedding methods, such as TransE , NTN , DistMult , ComplEx , and RotatE . These methods design various scoring functions to model relational patterns for knowledge graph reasoning, which are very effective in learning the transductive embeddings of both entities and relations. However, these methods are not able to leverage logic rules, which can be crucial in some relational learning tasks, and have no consistent probabilistic model. Compared to these methods, ExpressGNN has consistent probabilistic model built in the framework, and can incorporate knowledge from logic rules. A recent concurrent work has proposed probabilistic Logic Neural Network (pLogicNet), which integrates knowledge graph embedding methods with MLNs with EM framework. Compared to pLogicNet which uses a flattened embedding table as the entity representation, our work can better capture the structure knowledge encoded in the knowledge graph with GNNs, and supplement the knowledge from logic formulae for the prediction task. 
Our method is a general framework that can trade off model compactness and expressiveness by tuning the dimensionality of the GNN part and the embedding part. Thus, pLogicNet can be viewed as a special case of our work with the embedding part only. 3 PRELIMINARY Knowledge Graph. A knowledge graph is a tuple K = (C, R, O), consisting of a set of entities C = {c_1, ..., c_M}, a set of relations R, and a set of observed facts O. In the language of first-order logic, entities are also called constants. For instance, a constant can be a person or an object. Relations are also called predicates. Each predicate is a logic function defined over C, i.e., r(·): C × ... × C → {0, 1}. In general, the arguments of predicates are asymmetric. For instance, for the predicate r(c, c′) := L(c, c′) (L for Like), which checks whether c likes c′, the arguments c and c′ are not exchangeable. With a particular set of entities assigned to the arguments, the predicate is called a ground predicate, and each ground predicate corresponds to a binary random variable, which will be used to define the MLN. For a d-ary predicate, there are M^d ways to ground it. We denote an assignment as a_r. For instance, with a_r = (c, c′), we can simply write a ground predicate r(c, c′) as r(a_r). Each observed fact in the knowledge base is a truth value {0, 1} assigned to a ground predicate. For instance, a fact o can be [L(c, c′) = 1]. The number of observed facts is typically much smaller than that of unobserved facts. We adopt the open-world paradigm and treat these unobserved facts as latent variables. As a clearer representation, we express a knowledge base K by a bipartite graph G_K = (C, O, E), where nodes on one side of the graph correspond to constants C and nodes on the other side correspond to observed facts O, which are called factors in this case. The set of T edges, E = {e_1, ..., e_T}, connects constants and the observed facts. More specifically, an edge e = (c, o, i) between node c and o exists if the ground predicate associated with o uses c as an argument in its i-th argument position (Fig. 2). Markov Logic Networks. MLNs use logic formulae to define potential functions in undirected graphical models. A logic formula f(·): C × ... × C → {0, 1} is a binary function defined via the composition of a few predicates. For instance, a logic formula f(c, c′) can be S(c) ∧ F(c, c′) ⇒ S(c′), which can be rewritten as ¬S(c) ∨ ¬F(c, c′) ∨ S(c′), where ¬ is negation and the equivalence is established by De Morgan's law. Similar to predicates, we denote an assignment of constants to the arguments of a formula f as a_f, and the entire collection of consistent assignments of constants as A_f = {a_f^1, a_f^2, ...}. A formula with constants assigned to all of its arguments is called a ground formula. Given these logic representations, the MLN can be defined as a joint distribution over all observed facts O and unobserved facts H: P_w(O, H) = (1 / Z(w)) exp( Σ_f w_f Σ_{a_f ∈ A_f} φ_f(a_f) ), where Z(w) is the partition function summing over all ground predicates and φ_f(·) is the potential function defined by a formula f, as illustrated in Fig. 2. One form of φ_f(·) can simply be the truth value of the logic formula f. For instance, if the formula is f(c, c′) := ¬S(c) ∨ ¬F(c, c′) ∨ S(c′), then φ_f(c, c′) can simply take value 1 when f(c, c′) is true and 0 otherwise. Other more sophisticated φ_f can also be designed, which have the potential to take into account complex entities, such as images or texts, but they will not be the focus of this paper. The weight w_f can be viewed as the confidence score of the formula f: the higher the weight, the more accurate the formula is.
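As a toy illustration of grounding and of the potential φ_f, the snippet below scores the smoke–friend formula on a hypothetical three-entity knowledge base; the constants, facts, and weight value are made up for illustration:

```python
from itertools import product

# Hypothetical toy knowledge base for f(c, c') = ¬S(c) ∨ ¬F(c, c') ∨ S(c')
constants = ["A", "B", "C"]
S = {"A": 1, "B": 0, "C": 1}                 # Smoke(c)
F = {("A", "B"): 1, ("B", "C"): 1}           # Friend(c, c'); missing pairs treated as 0

def phi_f(c, c2):
    """Potential of one ground formula = its truth value (0 or 1)."""
    return int((not S[c]) or (not F.get((c, c2), 0)) or S[c2])

w_f = 1.5   # formula weight (assumed value)
# Unnormalized log-potential contributed by this formula: w_f * sum over all groundings
score = w_f * sum(phi_f(c, c2) for c, c2 in product(constants, repeat=2))
print(score)
```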
Difference between KG and MLN. We note that the graph topologies of knowledge graphs and MLN can be very different, although the MLN is defined on top of the knowledge graph. Knowledge graphs are typically very sparse, where the number of edges (observed relations) is typically linear in the number of entities. However, the graphs associated with MLN are much denser, where the number of nodes can be quadratic or more in the number of entities, and the number of edges (dependencies between variables) is a high-order polynomial in the number of entities. In this section, we introduce the variational EM framework for MLN inference and learning, where we will use ExpressGNN as a key component (detailed in Sec. 5). Markov Logic Networks model the joint probabilistic distribution of all observed and latent variables, as defined in Eq. 1. This model can be trained by maximizing the log-likelihood of all the observed facts, log P_w(O). However, it is intractable to directly maximize this objective, since it requires computing the partition function Z(w) and integrating over all variables O and H. We instead optimize the variational evidence lower bound (ELBO) of the data log-likelihood: log P_w(O) ≥ L_ELBO(Q_θ, P_w) := E_{Q_θ(H|O)}[log P_w(O, H)] − E_{Q_θ(H|O)}[log Q_θ(H|O)], where Q_θ(H|O) is a variational posterior distribution of the latent variables given the observed ones. The equality in Eq. 2 holds if the variational posterior Q_θ(H|O) equals the true posterior P_w(H|O). We then use the variational EM algorithm to effectively optimize the ELBO. The variational EM algorithm consists of an expectation step (E-step) and a maximization step (M-step), which are called in an alternating fashion to train the model: 1) In the E-step (Sec. 4.1), we infer the posterior distribution of the latent variables, where P_w is fixed and Q_θ is optimized to minimize the KL divergence between Q_θ(H|O) and P_w(H|O); 2) In the M-step (Sec. 4.2), we learn the weights of the logic formulae in MLN, where Q_θ is fixed and P_w is optimized to maximize the data log-likelihood. In the E-step, which is also known as the inference step, we minimize the KL divergence between the variational posterior distribution Q_θ(H|O) and the true posterior distribution P_w(H|O). Exact inference in MLN is computationally intractable and proven to be NP-complete. Therefore, we choose to approximate the true posterior with a mean-field distribution, since the mean-field approximation has been demonstrated to scale up large graphical models, such as latent Dirichlet allocation for modeling topics from large text corpora. In the mean-field variational distribution, each unobserved ground predicate r(a_r) ∈ H is inferred independently: Q_θ(H|O) = Π_{r(a_r) ∈ H} Q_θ(r(a_r)), where each factorized distribution Q_θ(r(a_r)) follows a Bernoulli distribution. We parameterize the variational posterior Q_θ with deep learning models as our neural inference network. The design of the inference network is very important and involves many considerations, since we need a compact yet expressive model to accurately approximate the true posterior distribution. We employ graph neural networks with tunable embeddings as our inference network (detailed in Sec. 5), which can trade off between model compactness and expressiveness. With the mean-field approximation, L_ELBO(Q_θ, P_w) defined in Eq. 2 can be reorganized as L_ELBO(Q_θ, P_w) = E_{Q_θ(H|O)}[Σ_f w_f Σ_{a_f ∈ A_f} φ_f(a_f)] − E_{Q_θ(H|O)}[log Q_θ(H|O)] − log Z(w), where w_f is fixed in the E-step and thus the partition function Z(w) can be treated as a constant.
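A small sketch of how the mean-field posterior turns this objective into local terms over a sampled batch of ground formulae (the sampling scheme is elaborated next). The formula encoding and the potentials here are toy stand-ins, and a real implementation would batch this far more efficiently:

```python
import torch
from itertools import product

logits = torch.zeros(5, requires_grad=True)   # one logit per latent ground predicate
q = torch.sigmoid(logits)                     # Q_theta(r(a_r) = 1), Bernoulli parameters

# A sampled batch of ground formulae: the latent predicates each touches, its weight,
# and its potential phi as a function of their truth values (observed facts folded in).
batch = [
    {"latent": [0, 1], "w": 1.5, "phi": lambda x: max(1 - x[0], x[1])},  # x0 => x1
    {"latent": [2],    "w": 0.8, "phi": lambda x: x[0]},
]

elbo = 0.0
for f in batch:
    probs = q[f["latent"]]
    # Local expectation of the potential under the factorized posterior,
    # enumerating the 2^n assignments of the n latent predicates in this formula.
    exp_phi = 0.0
    for bits in product([0.0, 1.0], repeat=len(f["latent"])):
        p = 1.0
        for b, pr in zip(bits, probs):
            p = p * (pr if b == 1.0 else 1 - pr)
        exp_phi = exp_phi + p * f["phi"](bits)
    elbo = elbo + f["w"] * exp_phi

# Local entropy term over the latent predicates appearing in the batch.
idx = sorted({i for f in batch for i in f["latent"]})
qb = q[idx]
entropy = -(qb * qb.log() + (1 - qb) * (1 - qb).log()).sum()

loss = -(elbo + entropy)   # maximize the (local) ELBO
loss.backward()            # an optimizer step on theta would follow
```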
We notice that the first term E_{Q_θ(H|O)}[log P_w(O, H)] involves a summation over all formulae and all possible assignments to each formula. Thus this double summation may involve a large number of terms. The second term E_{Q_θ(H|O)}[log Q_θ(H|O)] is the sum of the (negative) entropies of the variational posterior distributions Q_θ(r(a_r)), which also involves a large number of terms since the summation ranges over all possible latent variables. Typically, the number of latent facts in the database is much larger than the number of observed facts. Thus, both terms in the objective function pose the challenge of intractable computational cost. To address this challenge, we sample mini-batches of ground formulae to break down the exponential summations, approximating them with a sequence of summations with a controllable number of terms. More specifically, in each optimization iteration, we first sample a batch of ground formulae. For each ground formula in the sampled batch, we compute the first term in Eq. 4 by taking the expectation of the corresponding potential function with respect to the posterior of the involved latent variables. The mean-field approximation enables us to decompose the global expectation over the entire MLN into local expectations over ground formulae. Similarly, for the second term in Eq. 4, we use the posterior of the latent variables in the sampled batch to compute a local sum of entropies. For tasks that have sufficient labeled data as supervision, we can add a supervised learning objective L_sup(θ), the log-likelihood of the labeled facts under Q_θ, to enhance the inference network. This objective is complementary to the ELBO on predicates that are not well covered by logic rules but have sufficient observed facts. Therefore, the overall E-step objective function becomes L_θ = L_ELBO + λ · L_sup, where λ is a hyperparameter to control the weight. This overall objective essentially combines the knowledge in logic rules and the supervision from labeled data. In the M-step, which is also known as the learning step, we learn the weights of the logic formulae in Markov Logic Networks with the variational posterior Q_θ(H|O) fixed. The partition function Z(w) in Eq. 4 is not a constant anymore, since we need to optimize those weights in the M-step. There are exponentially many terms in the partition function Z(w), which makes it intractable to directly optimize the ELBO. To tackle this problem, we adopt the widely used pseudo-log-likelihood as an alternative objective for optimization, defined as log P*_w(O, H) := Σ_{r(a_r)} log P_w(r(a_r) | MB_{r(a_r)}), where MB_{r(a_r)} is the Markov blanket of the ground predicate r(a_r), i.e., the set of ground predicates that appear in some grounding of a formula with r(a_r). For each formula i that connects r(a_r) to its Markov blanket, we optimize the formula weight w_i by gradient descent on this objective; in the gradient, y_{r(a_r)} = 0 or 1 if r(a_r) is an observed fact, and y_{r(a_r)} = Q_θ(r(a_r)) otherwise. With the independence property of Markov Logic Networks, the gradients of the logic formula weights can be efficiently computed on the Markov blanket of each variable. For the M-step, we design a different sampling scheme to make it computationally efficient. For each variable in the Markov blanket, we take the truth value if it is observed and draw a sample from the variational posterior Q_θ if it is latent. In the M-step, the ELBO of a fully observed ground formula depends on the formula weight, thus we need to consider all the fully observed ground formulae. It is computationally intractable to use all possible ground predicates to compute the gradients in Eq. 8.
To tackle this challenge, we simply consider all the ground formulae with at most one latent predicate, and pick a ground predicate if its truth value determines the formula's truth value. Therefore, we keep a small subset of ground predicates, each of which can directly determine the truth value of a ground formula. Intuitively, this small subset contains all representative ground predicates and yields a good estimate of the gradients at much cheaper computational cost. Algorithm 1 (GNN): initialize the entity node embeddings, then iteratively update them over the knowledge graph. In the neural variational EM framework, the key component is the posterior model, or the inference network. We need to design an inference network that is both expressive and efficient to approximate the true posterior distribution. A recent concurrent work uses a flattened embedding table as the entity representation to model the posterior. However, such a simple posterior model is not able to capture the structure knowledge encoded in the knowledge graph. We employ graph neural networks with tunable embeddings to design our inference network. We also investigate the expressive power of GNN from a theoretical perspective, which justifies our design. Our inference network, named ExpressGNN, consists of three parts: the first part is a vanilla graph neural network (GNN), the second part uses tunable embeddings, and the third part uses the embeddings to define the variational posterior. For simplicity, we assume that each predicate has two arguments (i.e., we consider only r(c, c′)). We design each part as follows: • We build a GNN on the knowledge graph G_K, which is much smaller than the ground graph of MLN (see comparison in Fig. 2). The computational graph of the GNN is given in Algorithm 1. The GNN parameters θ_1 and θ_2 are shared across the entire graph and are independent of the number of entities. Therefore, the GNN is a compact model with O(d²) parameters, where d is the embedding dimension. • For each entity in the knowledge graph, we augment its GNN embedding with a tunable embedding of dimension k. The tunable embeddings increase the expressiveness of the model. As there are M entities, the number of parameters in the tunable embeddings is O(kM). • We use the augmented embeddings of c_1 and c_2 to define the variational posterior: Q_θ(r(c_1, c_2)) is a Bernoulli distribution whose parameter is produced by a neural scoring function over the two augmented embeddings (a sketch is given below). In summary, ExpressGNN can be viewed as a two-level encoding of the entities: the compact GNN assigns similar embeddings to similar entities in the knowledge graph, while the expressive tunable embeddings provide additional model capacity to encode entity-specific information beyond graph structures. The overall number of trainable parameters in ExpressGNN is O(d² + kM). By tuning the embedding sizes d and k, ExpressGNN can trade off between model compactness and expressiveness. For large-scale problems with a large number of entities (M is large), ExpressGNN can save a lot of parameters by reducing k.
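A minimal sketch of the two-level encoding and of the third part of the design referenced above. The exact scoring architecture, and in particular how the predicate identity enters, is an assumption; here each predicate simply gets its own small MLP, and the GNN embeddings are taken as a precomputed input:

```python
import torch
import torch.nn as nn

class ExpressGNNPosterior(nn.Module):
    """Sketch: per-entity tunable embeddings (dimension k) concatenated with
    shared GNN embeddings (dimension d), scored to give a Bernoulli parameter."""
    def __init__(self, num_entities, num_predicates, d=64, k=4):
        super().__init__()
        self.tunable = nn.Embedding(num_entities, k)          # O(kM) parameters
        self.score = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * (d + k), 64), nn.ReLU(), nn.Linear(64, 1))
             for _ in range(num_predicates)])

    def forward(self, gnn_emb, r, c1, c2):
        # gnn_emb: (num_entities, d), produced by message passing on the knowledge graph
        h1 = torch.cat([gnn_emb[c1], self.tunable(c1)], dim=-1)
        h2 = torch.cat([gnn_emb[c2], self.tunable(c2)], dim=-1)
        return torch.sigmoid(self.score[r](torch.cat([h1, h2], dim=-1)))  # Q_theta(r(c1,c2)=1)
```

Setting k = 0 recovers a purely GNN-based posterior, while dropping the GNN part recovers a flattened embedding table; the trade-off between the two is exactly what the ablation study below examines.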
The combination of GNN and tunable embeddings makes the model sufficiently expressive to approximate the true posterior distributions. Here we provide a theoretical analysis of the expressive power of GNN in the mean-field inference problem, and discuss the benefit of combining GNN and tunable embeddings in ExpressGNN. Recent studies show that the vanilla GNN embeddings can represent the results of graph coloring, but fail to represent the results of the more strict graph isomorphism check, i.e., GNN produces the same embedding for some nodes that should be distinguished. We first demonstrate this problem by a simple example: Example. • Entity A and B have opposite relations with E, i.e., F(A, E) = 1 versus F(B, E) = 0 in the knowledge graph, but running GNN on the knowledge graph will always produce the same embeddings for A and B, i.e., μ_A = μ_B. • L(A, E) and L(B, E) apparently have different posteriors. However, using GNN embeddings alone, the two would be assigned identical posteriors, since μ_A = μ_B. We can formally prove that solving the problem in the above example requires the graph embeddings to distinguish any non-isomorphic nodes in the knowledge graph. A formal statement is provided below (see Appendix E for the proof). Definition 5.1. Two ordered sequences of nodes (c_1, ..., c_n) and (c′_1, ..., c′_n) are isomorphic in a graph G_K if there exists an isomorphism π of G_K onto itself such that π(c_i) = c′_i for i = 1, ..., n. Implied by the theorem, to obtain an expressive enough representation for the posterior, we need a more powerful GNN variant. A recent work has proposed a powerful GNN variant, which can handle small graphs such as chemical compounds and protein structures, but it is computationally expensive due to the usage of high-dimensional tensors. As a simple yet effective solution, ExpressGNN augments the vanilla GNN with additional tunable embeddings, which is a trade-off between the compactness and expressiveness of the model. In summary, ExpressGNN has the following nice properties: • Efficiency: ExpressGNN directly works on the knowledge graph, instead of the huge MLN grounding graph, making it much more efficient than the existing MLN inference methods. • Compactness: The compact GNN model with shared parameters can be very memory efficient, making it possible for ExpressGNN to handle industry-scale problems. • Expressiveness: The GNN model can capture structure knowledge encoded in the knowledge graph. Meanwhile, the tunable embeddings can encode entity-specific information, which compensates for GNN's deficiency in distinguishing non-isomorphic nodes. • Generalizability: With the GNN embeddings, ExpressGNN may generalize to new entities or even different but related knowledge graphs unseen during training time, without the need for retraining. Benchmark datasets. We evaluate ExpressGNN and other baseline methods on four benchmark datasets: UW-CSE, Cora, synthetic Kinship datasets, and FB15K-237 constructed from Freebase. Details and full statistics of the benchmark datasets are provided in Appendix B. General settings. We conduct all the experiments on a GPU-enabled (Nvidia RTX 2080 Ti) Linux machine powered by Intel Xeon Silver 4116 processors at 2.10GHz with 256GB RAM. We implement ExpressGNN using PyTorch and train it with the Adam optimizer. To ensure a fair comparison, we allocate the same computational resources (CPU, GPU and memory) for all the experiments. We use the default tuned hyperparameters for competitor methods, which can reproduce the experimental results reported in their original works. Model hyperparameters. For ExpressGNN, we use 0.0005 as the initial learning rate, and decay it by half for every 10 epochs without improvement of the validation loss. For Kinship, UW-CSE and Cora, we run ExpressGNN with a fixed number of iterations, and use the smallest subset from the original split for hyperparameter tuning. For FB15K-237, we use the original validation set to tune the hyperparameters. We use a two-layer MLP with ReLU activation function as the nonlinear transformation for each embedding update step in the GNN model. For each dataset, we search the configuration of ExpressGNN on either the validation set or the smallest subset.
The configuration we search includes the embedding size, the split point between tunable embeddings and GNN embeddings, the number of embedding update steps, and the sampling batch size. For the inference experiments, the weights for all the logic formulae are fixed as 1. For the learning experiments, the weights are initialized as 1. For the choice of λ in the combined objective L_θ in Eq. 6, we set λ = 0 for the inference experiments, since the query predicates are never seen in the training data and no supervision is available. For the learning experiments, we set λ = 1. We first evaluate the inference accuracy and efficiency of ExpressGNN. We compare our method with several strong MLN inference methods on the UW-CSE, Cora and Kinship datasets. We also conduct an ablation study to explore the trade-off between GNN and tunable embeddings. Experiment settings. For the inference experiments, we fix the weights of all logic rules as 1. A key advantage of MLN is that it can handle the open-world setting in a consistent probabilistic framework. Therefore, we adopt the open-world setting for all the experiments, as opposed to the closed-world setting, where unobserved facts (except the query predicates) are assumed to be false. We also report the performance under the closed-world setting in Appendix C. Prediction tasks. The deductive logic inference task is to answer queries that typically involve a single predicate. For example, in UW-CSE, the task is to predict the AdvisedBy(c, c′) relation for all persons in the set. In Cora, the task is to de-duplicate entities, and one of the query predicates is SameAuthor(c, c′). As for Kinship, the task is to predict whether a person is male or female, i.e., Male(c). For each possible substitution of the query predicate with different entities, the model is tasked to predict whether it is true or not. Inference accuracy. The results for inference accuracy on the three benchmark datasets are reported in Table 1. A hyphen in the entry indicates that it is either out of memory or exceeds the time limit (24 hours). We denote our method as ExpressGNN-E since only the E-step is needed for the inference experiments. Note that since lifted BP is guaranteed to produce results identical to BP, the results of these two methods are merged into one row. For these experiments, ExpressGNN-E uses 64-dim GNN embeddings and 64-dim tunable embeddings. On Cora, all the baseline methods fail to handle the data scale under the open-world setting, while ExpressGNN-E achieves good inference accuracy. On UW-CSE, ExpressGNN-E consistently outperforms all baselines. The Kinship dataset is synthesized and noise-free, and the number of entities increases linearly across the five sets S1-S5. HL-MRF achieves perfect accuracy for S1-S4, but is infeasible on the largest set S5. ExpressGNN-E yields similar but not perfect results, which is presumably caused by the stochastic nature of our sampling and optimization procedure. Inference efficiency. The inference time corresponding to the experiments in Table 1 is summarized in Fig. 4. On UW-CSE (left table), ExpressGNN-E needs a much shorter time for inference compared to all the baseline methods, and meanwhile ExpressGNN-E achieves the best inference performance. On Kinship (right figure), as the data size grows linearly from S1 to S5, the inference time of most baseline methods grows exponentially, while ExpressGNN-E maintains a nearly constant time cost, demonstrating its nice scalability. Some baseline methods such as MCMC and MC-SAT become infeasible for larger sets.
HL-MRF maintains a comparatively short inference time; however, it incurs a huge increase in memory cost and is not able to handle the largest set S5. Ablation study. ExpressGNN can trade off the compactness and expressiveness of the model by tuning the dimensionality of the GNN and tunable embeddings. We perform an ablation study on the Cora dataset to investigate how this trade-off affects the inference accuracy. Results of different configurations of ExpressGNN-E are shown in Table 2. It is observed that GNN64+Tune4 has comparable performance with Tune64, but is consistently better than GNN64. Note that the number of parameters in GNN64+Tune4 is O(64² + 4|C|), while that in Tune64 is O(64|C|). When the number of entities is large, GNN64+Tune4 has much fewer parameters to train. This is consistent with our theoretical analysis: as a compact model, GNN saves a lot of parameters, but GNN alone is not expressive enough. A similar result is observed for GNN64+Tune64 and Tune128. Therefore, ExpressGNN seeks a combination of the two types of embeddings to possess the advantages of both: having a compact model and being expressive. The best configuration of the embedding sizes can vary across tasks, and is determined by the goal: getting a portable model or better performance. We evaluate ExpressGNN in the knowledge base completion task on the FB15K-237 dataset, and compare it with state-of-the-art knowledge base completion methods. Experiment settings. To generate logic rules, we use Neural LP on the training set and pick the candidates with top confidence scores. See Appendix D for examples of selected logic rules. We evaluate both the inference-only and the inference-and-learning versions of ExpressGNN, denoted as ExpressGNN-E and ExpressGNN-EM, respectively. Prediction task. For each test query r(c, c′) with respect to relation r, the model is tasked to generate a ranked list over all possible instantiations of r, sorted according to the model's confidence in how likely each instantiation is true. Evaluation metrics. Following existing studies, we use filtered ranking, where the test triples are ranked against all the candidate triples not appearing in the dataset. Candidate triples are generated by corrupting the subject or object of a query r(c, c′). For evaluation, we compute the Mean Reciprocal Rank (MRR), which is the average of the reciprocal ranks of all the true queries, and Hits@10, which is the percentage of true queries that are ranked among the top 10. Competitor methods. Since none of the aforementioned MLN inference methods can scale up to this dataset, we compare ExpressGNN with a number of state-of-the-art methods for knowledge base completion, including Neural Tensor Network (NTN), Neural LP, DistMult, ComplEx, TransE, RotatE and pLogicNet. The results of MLN and pLogicNet are directly taken from the paper. For all the other baseline methods, we use publicly available code with the provided best hyperparameters to run the experiments.
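For reference, the two metrics just described can be computed from the filtered rank of each true triple as follows (a straightforward helper, not code from the paper):

```python
import numpy as np

def mrr_hits_at_10(ranks):
    """MRR and Hits@10 from the filtered ranks of the true triples."""
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), (ranks <= 10).mean()

# e.g. ranks of the ground-truth entity among all filtered candidate triples
print(mrr_hits_at_10([1, 3, 25, 7, 2]))
```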
Performance analysis. The experimental results on the full training data are reported in Table 3 (100% columns). Both ExpressGNN-E and ExpressGNN-EM significantly outperform all the baseline methods. With learning of the logic rule weights, ExpressGNN-EM achieves the best performance. Compared to MLN, ExpressGNN achieves much better performance since MLN only relies on the logic rules, while ExpressGNN can also leverage the labeled data as additional supervision. Compared to knowledge graph embedding methods such as TransE and RotatE, ExpressGNN can leverage the prior knowledge in logic rules and outperform these purely data-driven methods. Data efficiency. We investigate the data efficiency of ExpressGNN and compare it with baseline methods. Following prior work, we split the knowledge base into facts / training / validation / testing sets, and vary the size of the training set from 0% to 100%, feeding the model with the complete facts set for training. From Table 3, we see that ExpressGNN performs significantly better than the baselines on smaller training data. With more training data as supervision, data-driven baseline methods start to close the gap with ExpressGNN. This clearly shows the benefit of leveraging the knowledge encoded in logic rules when the data is insufficient for supervised learning. Zero-shot relational learning. In practical scenarios, a large portion of the relations in the knowledge base are long-tail, i.e., most relations may have only a few facts. Therefore, it is important to investigate the model performance on relations with insufficient training data. We construct a zero-shot learning dataset based on FB15K-237 by forcing the training and testing data to have disjoint sets of relations. Table 4 shows the results. As expected, the performance of all the supervised relational learning methods drops to almost zero. This shows the limitation of such methods when coping with sparse long-tail relations. Neural LP is designed to handle new entities in the test set, but still struggles to perform well in zero-shot learning. In contrast, ExpressGNN leverages both the prior knowledge in logic rules and the neural relational embeddings for reasoning, and is thus much less affected by the scarcity of data on long-tail relations. Both variants of our framework (ExpressGNN-E and ExpressGNN-EM) achieve significantly better performance. This paper studies the probabilistic logic reasoning problem, and proposes ExpressGNN to combine the advantages of Markov Logic Networks in logic reasoning and graph neural networks in graph representation learning. ExpressGNN addresses the scalability issue of Markov Logic Networks with efficient stochastic training in the variational EM framework. ExpressGNN employs GNNs to capture the structure knowledge that is implicitly encoded in the knowledge graph, which serves as a supplement to the knowledge from logic formulae. ExpressGNN is a general framework that can trade off model compactness and expressiveness by tuning the dimensionality of the GNN part and the embedding part. Extensive experiments on multiple benchmark datasets demonstrate the effectiveness and efficiency of ExpressGNN. We provide more examples in this section to show that it is more than a rare case that GNN embeddings alone are not expressive enough. A.1 EXAMPLE 1 Now, we use another example in Fig. 7 to show that even when the local structures are the same, the posteriors can still be different, which is caused by the formulae. B DATASET DETAILS For our experiments, we use the following benchmark datasets: • The social network dataset UW-CSE contains publicly available information of students and professors in the CSE department of UW. The dataset is split into five sets according to the home department of the entities. • The entity resolution dataset Cora consists of a collection of citations to computer science research papers. The dataset is also split into five subsets according to the field of research.
• We introduce a synthetic dataset that resembles the popular Kinship dataset. The original dataset contains kinship relationships (e.g., Father, Brother) among family members in the Alyawarra tribe from Central Australia. The synthetic dataset closely resembles the original Kinship dataset but with a controllable number of entities. To generate a dataset with n entities, we randomly split the n entities into two groups, which represent the first and second generation respectively. Within each group, entities are grouped into a few sub-groups representing sister- and brother-hood. Finally, entities from different sub-groups in the first generation are randomly coupled, and a sub-group in the second generation is assigned to them as their children. To generate the knowledge base, we traverse this family tree and record all kinship relations for each entity. We generate five kinship datasets (Kinship S1-S5) by linearly increasing the number of entities. • The knowledge base completion benchmark FB15K-237 is a generic knowledge base constructed from Freebase, designed to be a more challenging variant of FB15K. More specifically, FB15K-237 is constructed by removing near-duplicate and inverse relations from FB15K. The dataset is split into training / validation / testing sets, and we use the same split of facts from the training set as in prior work. The complete statistics of these datasets are shown in Table 5. Examples of logic formulae used in the four benchmark datasets are listed in Table 7. In Sec. 6.1 we compare ExpressGNN with five probabilistic inference methods under open-world semantics. This is different from the original works, which generally adopt the closed-world setting due to scalability issues. More specifically, the original works assume that the set of predicates observed in the knowledge base (except the ones in the query) is closed, meaning that any instantiation not appearing in the knowledge base is assumed to be false. For sanity checking, we also conduct these experiments with a closed-world setting. We found that the results summarized in Table 6 are close to those reported in the original works. This shows that we have a fair setup (including memory size, hyperparameters, etc.) for those competitor methods. Additionally, one can see that the AUC-PR scores are actually better than those obtained under the open-world setting (Table 1). This is because the way the datasets were originally collected and evaluated generally complies with the closed-world assumption. But this is very unlikely to be true for real-world and large-scale knowledge bases such as Freebase and WordNet, where many true facts between entities are not observed. Therefore, in general, the open-world setting is much more reasonable, and we follow it throughout this paper. We list some examples of the logic formulae used in the four benchmark datasets in Table 7. The full list of logic formulae is available in our source code repository. Note that these formulae are not guaranteed to always hold, but are typically true. For UW-CSE and Cora, we use the logic formulae provided in the original dataset. UW-CSE provides 94 hand-coded logic formulae, and Cora provides 46 hand-coded rules. For Kinship, we hand-code 22 first-order logic formulae. For FB15K-237, we first use Neural LP on the full data to generate candidate rules. Then we select the ones that have confidence scores higher than 90% of the highest-scored formulae sharing the same target predicate. We also de-duplicate redundant rules that can be reduced to other rules by switching the logic variables. Finally, we have generated 509 logic formulae for FB15K-237.
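A small sketch of this rule-selection step is given below; the candidate-rule representation and the canonicalization helper are assumptions rather than the repository's actual data structures.

```python
from collections import defaultdict

# Sketch: keep Neural LP candidate rules whose confidence is at least 90% of the
# best-scoring rule for the same target predicate, and drop rules that only differ
# by a renaming of logic variables. `candidates` is assumed to be a list of
# (target_predicate, rule_body, confidence) tuples, where rule_body is a sequence
# of (relation, argument_variables) pairs.
def select_rules(candidates, ratio=0.9):
    best = defaultdict(float)
    for predicate, _, confidence in candidates:
        best[predicate] = max(best[predicate], confidence)
    selected, seen = [], set()
    for predicate, body, confidence in candidates:
        if confidence >= ratio * best[predicate]:
            key = (predicate, canonicalize(body))
            if key not in seen:
                seen.add(key)
                selected.append((predicate, body, confidence))
    return selected

def canonicalize(body):
    # Hypothetical helper: rename variables in order of first appearance (X0, X1, ...),
    # so that variable-renamed duplicates map to the same key.
    mapping = {}
    return tuple((rel, tuple(mapping.setdefault(v, f"X{len(mapping)}") for v in args))
                 for rel, args in body)
```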
Theorem. Consider a knowledge base K = (C, R, O) and any r ∈ R. Two latent random variables X := r(c_1, . . ., c_n) and X′ := r(c′_1, . . ., c′_n) have the same posterior distribution in any MLN if and only if (c_1, . . ., c_n) and (c′_1, . . ., c′_n) satisfy (Condition). We give the proof as follows. A logic formula f can be represented as a factor graph, G_f = (C_f, R_f, E_f), where the nodes on one side of the graph are the set of distinct constants C_f needed in the formula, while the nodes on the other side are the set of predicates R_f used to define the formula. The set of edges, E_f, connects constants to predicates or predicate negations. That is, an edge e = (c, r, i) between node c and predicate r exists if the predicate r uses constant c as its i-th argument. We note that the distinct constants used in the definition of a logic formula are templates, for which actual constants can be instantiated from C. An illustration of a logic formula factor graph can be found in Fig. 8. Similar to the factor graph for the knowledge base, we also differentiate the type of edges by the position of the argument. Therefore, every single formula can be represented by a factor graph. We will construct a factor graph representation to define a particular formula, and show that the MLN induced by this formula will result in different posteriors for r(c_1, . . ., c_n) and r(c′_1, . . ., c′_n). The factor graph for the formula is constructed in the following way (see Fig. 7 as an example of the resulting formula constructed using the following steps): The proof of this claim is given at the end of this proof. (ii) Next, we use G*_{c_{1:n}} to define a formula f. We first initialize the definition of the formula value as f(c_1, . . ., c_n, c̃_1, . . ., c̃_n) = (∧_{r̃(ã_r): r̃(ã_r) ∈ G*_{c_{1:n}}} r̃(ã_r)) ⇒ r(c_1, . . ., c_n). Then, we change r̃(ã_r) in this formula to its negation ¬r̃(ã_r) if the observed value of r̃(ã_r) is 0 in G*_{c_{1:n}}. We have defined a formula f using the above two steps. Suppose the MLN only contains this formula f. Then the two nodes r(c_1, . . ., c_n) and r(c′_1, . . ., c′_n) in this MLN must be distinguishable. The reason is that, in the MLN, r(c_1, . . ., c_n) is connected to a ground formula f(c_1, . . ., c_n, c̃_1, . . ., c̃_n), whose factor graph representation is G*_{c_{1:n}} ∪ r(c_1, . . ., c_n). In this formula, all variables are observed in the knowledge base K except for r(c_1, . . ., c_n), and the observation set is O*_c. The formula value is f(c_1, . . ., c_n, c̃_1, . . ., c̃_n) = (1 ⇒ r(c_1, . . ., c_n)). Clarification: Eq. 10 is used to define a formula, and c_i in this equation can be replaced by other constants, while Eq. 11 represents a ground formula whose arguments are exactly c_1, . . ., c_n, c̃_1, . . ., c̃_n. Based on (Condition), there is no such ground formula f(c′_1, . . ., c′_n, c̃_1, . . ., c̃_n) that contains r(c′_1, . . ., c′_n).
rJg76kStwH
We employ graph neural networks in the variational EM framework for efficient inference and learning of Markov Logic Networks.
Reinforcement learning (RL) methods achieved major advances in multiple tasks surpassing human performance. However, most of RL strategies show a certain degree of weakness and may become computationally intractable when dealing with high-dimensional and non-stationary environments. In this paper, we build a meta-reinforcement learning (MRL) method embedding an adaptive neural network (NN) controller for efficient policy iteration in changing task conditions. Our main goal is to extend RL application to the challenging task of urban autonomous driving in CARLA simulator. " Every living organism interacts with its environment and uses those interactions to improve its own actions in order to survive and increase" BID13. Inspired from animal behaviorist psychology, reinforcement learning (RL) is widely used in artificial intelligence research and refers to goal-oriented optimization driven by an impact response or signal BID30. Properly formalized and converted into practical approaches BID9, RL algorithms have recently achieved major progress in many fields as games BID18 BID28 and advanced robotic manipulations BID12 BID17 beating human performance. However, and despite several years of research and evolution, most of RL strategies show a certain degree of weakness and may become computationally intractable when dealing with high-dimensional and non-stationary environments BID34. More specifically, the industrial application of autonomous driving in which we are interested in this work, remains a highly challenging "unsolved problem" more than one decade after the promising 2007 DARPA Urban Challenge BID2 ). The origin of its complexity lies in the large variability inherent to driving task arising from the uncertainty of human behavior, diversity of driving styles and complexity of scene perception. An interpretation of the observed vulnerability due to learning environment changes has been provided in contextaware (dependence) research assuming that "concepts in the real world are not eternally fixed entities or structures, but can have a different appearance or definition or meaning in different contexts" BID36. There are several tasks that require context-aware adaptation like weather forecast with season or geography, speech recognition with speaker origins and control processes of industrial installations with climate conditions. One solution to cope with this variability is to imitate the behavior of human who are more comfortable with learning from little experience and adapting to unexpected perturbations. These natural differences compared to machine learning and specifically RL methods are shaping the current research intending to eschew the problem of data inefficiency and improve artificial agents generalization capabilities BID10. Tackling this issue as a multi-task learning problem BID3, meta-learning has shown promising and stands as one of the preferred frames to design fast adapting strategies BID25 BID23. It refers to learn-to-learn approaches that aim at training a model on a set of different but linked tasks and subsequently generalize to new cases using few additional examples BID7.In this paper we aim at extending RL application to the challenging task of urban autonomous driving in CARLA simulator. We build a meta-reinforcement learning (MRL) method where agent policies behave efficiently and flexibly in changing task conditions. 
We consolidate the approach robustness by integrating a neural network (NN) controller that performs a continuous iteration of policy evaluation and improvement. The latter allows reducing the variance of the policy-based RL and accelerating its convergence. Before embarking with a theoretical modeling of the proposed approach in section 3, we introduce in the next section metalearning and related work in order to better understand the current issues accompanying its application to RL settings. In the last section, we evaluate our method using CARLA simulator and discuss experimental . Generally, in order to acquire new skills, it is more useful to rely on previous experience than starting from scratch. Indeed, we learn how to learn across tasks requiring, each time, less data and trial-and-error effort to conquer further skills BID10. The term meta-learning that refers to learning awareness on the basis of prior experience was first cited by BID0 in the field of educational psychology. It consists in taking control of a learning process and guiding it in accordance with the context of a specific task. In machine learning research, meta-learning is not a new concept and displays many similarities with the above definition BID31 BID26 BID19. It assumes that rather than building a learning strategy on the basis of a single task, it will be more effective to train over a series of tasks sharing a set of similarities then generalize to new situations. By acquiring prior biases, meta-learning addresses models inaccuracies achieving fast adaptation from few additional data BID4. At an architectural level, the learning is operated at two scales: a base-level system is assigned to rapid learning within each task, and a meta (higher) level system uses previous one feedback for gradual learning across tasks BID35.One of the first contribution to meta-learning is the classical Algorithm Selection Problem (ASP) proposed by BID24 considering the relationship between problem characteristics and the algorithm suitable to solve it. Then based on the concept of ASP, the No Free Lunch (NFL) theorem BID38 demonstrated that the generalization performance of any learner across all tasks is equal to 0. The universal learner is consequently a myth and each algorithm performs well only on a set of tasks delimiting its area of expertise. ASP and NFL theorem triggered a large amount of research assigned to parameter and algorithm recommendation BID8 BID1 BID29 BID22. In this type of meta-learning, a meta-learner apprehend the relationship between data characteristics called meta-features and base-learners performance in order to predict the best model to solve a specific task. Various meta-learners have been used and generally consist of shallow algorithms like decision trees, k-Nearest Neighbors and Support Vector Machines BID32. Regarding meta-features, the most commonly used ones included statistical and information-theoretic parameters as well as land-marking and model-based extractors BID33 ).The recent regain of interest in neural network models and more specifically deep learning ing from the advent of large training datasets and computational resources allowed the resurgence of neural network Meta-learning BID14. Instead of requiring explicit task characteristics, the meta-level learns from the structure of base-models themselves. 
Neural networks are particularly suitable to this kind of transfer learning given their inner capabilities of data features abstraction and rule inductions reflected in their connection weights and biases. The typology of meta-learners developed so far includes recurrent models, metrics and optimizers with several areas of application in classification, regression and RL BID15.Meta-learning algorithms extended recently to the context of RL can be classified in two broad categories. A first set of methods implement a recurrent neural network (RNN) or its memory-augmented variant (LSTM) as the meta-learner. BID6 study RL optimization in the frame of a reinforcement learning problem (RL2) where policies are represented with RNNs that receive past rewards and actions, in addition to the usual inputs. The approach is evaluated on multi-armed bandits (MAB) and tabular Markov Decision Processes (MDPs). In BID35, Advantage Actor-Critic (A2C) algorithms with recurrence are trained using different architectures of LSTM (simple, convolutional and stacked). The experiments are conducted on bandits problems with increasing level of complexity (dependent/independent arms and restless).In the second category, the learner gradients are used for meta-learning. Such methods are task-agnostic and adaptable to any model trained with gradient-descent. The gradient-based strategy has been originally introduced by BID7 with their Model-Agnostic Meta-Learning (MAML) algorithm. It has been demonstrated efficient for different problem settings including gradient RL with neural network policies. MAML mainly aims at generating a model initialization sensitive to changes and reaching optimal on a new scenario after just few gradient updates. Meta-SGD BID15 uses stochastic gradient descent to meta-learn, besides a model initialization, the inner loop learning rate and the direction of weights update. In Reptile BID20, the authors design a first order approximation of MAML computationally less expensive than the original method which includes second order derivative of gradient. BID27 propose a probabilistic view of MAML for continuous adaptation in RL settings. A competitive multi-agent environment (RoboSumo) was designed to run iterated adaptation games for the approach testing. A major part of MRL papers have been evaluated either at a preliminary level of experimentation or on elementary tasks (2D navigation, simulated muJoCo robots and bandit problems). In this work we consider an application of gradient-based MRL in a more challenging dynamic environment involving realistic and complex sides of real world tasks, which is CARLA simulator for autonomous driving BID5. The proposed model consists of a MRL framework embedding an adaptive NN controller to tackle both the nonstationarity and high dimensionality issues inherent to autonomous driving environments in CARLA simulator. The RL task considered in this work is a Markov Decision Process (MDP) defined according to the tuple (S, A, p, r, γ, ρ 0, H) where S is the set of states, A is the set of actions, p(s t+1 |s t, a t) is the state transition distribution predicting the probability to reach a state s t+1 in the next time step given current state and action, r is a reward function, γ is the discount factor, ρ 0 is the initial state distribution and H the horizon. Consider the sum of expected rewards (return) from a trajectory τ (0,H−1) = (s 0, a 0, ..., s H−1, a H−1, s H). 
A RL setting aims at learning a policy π with parameters θ (either deterministic or stochastic) that maps each state s to an optimal action a maximizing the return of the trajectory, R(τ_{0:H−1}) = ∑_{t=0}^{H−1} γ^t r(s_t, a_t). Following the discounted return expressed above, we can define a state value function V(s): S → R to measure the current state return estimated under policy π: V^π(s) = E_π[ ∑_{k≥0} γ^k r_{t+k} | s_t = s ]. In order to optimize the parameterized policy π_θ, we use gradient descent as in the family of REINFORCE algorithms BID37, updating the policy parameters θ in the direction ∇_θ J(θ) = E_{π_θ}[ ∇_θ log π_θ(a_t | s_t) R_t ]. We build an approach of NN meta-learning compatible with the RL setting. Our contribution consists in combining a gradient-based meta-learner, as in MAML BID7, to learn a generalizable model initialization, with a NN controller for more robust and continuous adaptation. The agent policy π_θ, approximated by a convolutional neural network (CNN), is trained to quickly adapt to a new task through a few standard gradient descents. Explicitly, this consists in finding an optimal initialization of parameters θ* allowing a few-shot generalization of the learned model. Given a batch of tasks T_i sampled from p(T), the meta-objective is formulated as follows: max_θ ∑_{T_i ∼ p(T)} R_{T_i}(θ_i), where each θ_i is obtained from the initialization θ by a few inner-loop gradient updates on task T_i. The MRL approach includes two levels of processing: the inner and the outer loops, associated respectively with base and meta-learning. In the inner loop, we start by reducing the disturbances that characterize policy-based methods and are induced by the score function R_t. Indeed, complex domains with conflicting dynamics and high-dimensional observations, like autonomous driving, yield a large amount of uncertainty. One flexible solution to reduce disturbances and accelerate learning convergence is policy iteration. Subsequently, we modify the RL scheme by integrating a step of policy evaluation and improvement that generates added bonuses to guide the agent towards new states. The policy evaluation is performed with temporal difference (TD) learning, combining the Monte Carlo method and dynamic programming BID30, to learn, with step size ω, the value function approximated by a CNN with parameters w: w ← w + ω δ_t ∇_w V_w(s_t), where δ_t is the multi-step TD error that consists in bootstrapping the sampled returns from the value function estimate: δ_t = ∑_{k=0}^{n−1} γ^k r_{t+k} + γ^n V(s_{t+n}) − V(s_t). Multi-step returns allow the agent to gather more information on the environment before calculating the error in the value function estimates. Subsequently, the improvement of the policy is performed by replacing the score function R_t with the TD error δ_t in the policy gradient: ∇_θ J(θ) = E_{π_θ}[ ∇_θ log π_θ(a_t | s_t) δ_t ]. For each sampled task T_i, the policy parameters θ_i are computed using the updated gradient descent: θ_i = θ + α ∇_θ J_{T_i}(θ). Once the models and related evaluations are generated for all batch tasks, the outer loop is activated. It consists in operating a meta-gradient update of the initial model parameters with a meta-step size β on the basis of the previous-level rewards R_{T_i}(θ_i): θ ← θ + β ∇_θ ∑_{T_i ∼ p(T)} R_{T_i}(θ_i). The steps detailed above are iterated until an acceptable performance is reached. The resulting model initialization θ* should be able to achieve fast driving adaptation after only a few gradient steps.
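A minimal sketch of this inner/outer loop is shown below. The network class, rollout helpers, and hyperparameter values are placeholders rather than the authors' CARLA implementation, and for simplicity the outer update uses a first-order (Reptile-style) meta-gradient instead of the full second-order MAML gradient.

```python
import copy
import torch

# Schematic sketch of the inner TD-based policy iteration and the outer meta-update.
# `meta_net` maps states to (action logits, state values); `sample_tasks` and
# `collect_rollout` are hypothetical helpers wrapping the simulator.
def meta_train(meta_net, sample_tasks, collect_rollout, alpha=1e-3, beta=1e-4,
               meta_iterations=1000, tasks_per_batch=4):
    meta_opt = torch.optim.SGD(meta_net.parameters(), lr=beta)
    for _ in range(meta_iterations):
        meta_opt.zero_grad()
        for task in sample_tasks(tasks_per_batch):
            # Inner loop: adapt a copy of the shared initialization on this task.
            net = copy.deepcopy(meta_net)
            opt = torch.optim.SGD(net.parameters(), lr=alpha)
            states, actions, returns = collect_rollout(net, task)   # n-step returns
            logits, values = net(states)
            values = values.squeeze(-1)
            td_error = (returns - values).detach()                   # multi-step TD error as advantage
            logp = torch.distributions.Categorical(logits=logits).log_prob(actions)
            policy_loss = -(logp * td_error).mean()                  # policy improvement with delta_t
            value_loss = (returns - values).pow(2).mean()            # policy evaluation (value fit)
            opt.zero_grad()
            (policy_loss + 0.5 * value_loss).backward()
            opt.step()
            # Outer loop: accumulate a first-order meta-gradient toward the adapted weights.
            for p_meta, p_task in zip(meta_net.parameters(), net.parameters()):
                if p_meta.grad is None:
                    p_meta.grad = torch.zeros_like(p_meta)
                p_meta.grad.add_(p_meta.data - p_task.data)
        meta_opt.step()   # theta <- theta + beta * mean(theta_i - theta) over the task batch
```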
In this section we evaluate the performance of the continuous-adapting MRL model on the challenging task of urban autonomous driving. The goal of our experiment is to demonstrate the effectiveness of meta-level learning combined with a NN controller to optimize the RL policy and achieve more robust learning in high-dimensional and complex environments. At this stage of work, we present the preliminary results of our study, assessing two basic assumptions: the MRL agent adapts faster at training time and displays better generalization capabilities in unseen environments. Environment settings. We conduct our experiments using the CARLA simulator for autonomous driving BID5 BID21, designed as a server-client system. The CARLA 3D environment consists of static objects and dynamic characters. As we consider the problem of autonomous driving in changing conditions, we induce non-stationary environments across training episodes by varying several server settings. (1) The task complexity: select one of the available towns as well as different start and end positions for the vehicle tasks (straight or with-turn driving). (2) The traffic density: control the number of dynamic objects such as pedestrians and vehicles. (3) Weather and lighting: select a combination of weather and illumination conditions to diversify visual effects, controlling sun position, radiation intensity, cloudiness and precipitation. Hence we can exclusively use a subset of environments for meta-training ("seen") and a second subset for test-time adaptation ("unseen"). The reward is shaped as a weighted sum of the distance traveled towards the target, the speed in km/h, collision damage, and overlaps with the sidewalk and the opposite lane. Results. Given the preliminary level of experiments and the absence of extensive state-of-the-art work on the recent CARLA simulator, we adopt the methodology of BID7, comparing the continuous-adapting MRL initialization with conventionally pre-trained and randomly initialized RL algorithms. In all experiments the average episodic reward is used to describe the methods' global performance. An episode is terminated when the target destination is reached or after a collision with a dynamic character. FIG1 depicts the test-time adaptation performance of the 3 models. During this phase, the RL agent initialized with meta-learning still uses the NN controller for continuous adaptation. The results confirm that our approach generates models that adapt faster in "unseen" environments compared to the standard RL strategies. Zooming in on the initial driving steps (figure 1), we notice that our method distinctly surpasses the standard RL versions only after 10000 steps (500 gradient descents). Subsequently we should run further tests to identify a specific threshold for few-shot learning when evolving from low- to high-dimensional settings like the autonomous driving task. In order to evaluate the generalization assumption, we compare the models' behavior on "seen" and "unseen" environments. FIG2 does not reveal a significant "shortfall" of our approach's performance between the two scenarios, reflecting its robustness in non-stationary conditions. On the contrary, the performance of the pre-trained standard RL decreased notably in "unseen" environments due to the lack of generalization capabilities. Although all results indicate a certain robustness of the continuous-adapting MRL, it is too early to draw firm conclusions at this preliminary stage of evaluation. First, the episodic reward indicator should be complemented with the percentage of successfully ended episodes in order to demonstrate the effective learning of the agent and allow comparison with state-of-the-art work BID5 BID16.
Second, further consideration should be given to the pertinence of few-shot learning regimes in very complex and high-dimensional environments like autonomous driving, since the meta-learned strategy may acquire a particular bias at training time "that allows it to perform better from limited experience but also limits its capacity of utilizing more data" BID27. In this paper we addressed the limits of RL algorithms in solving high-dimensional and complex tasks. Built on gradient-based meta-learning, the proposed approach implements a continuous process of policy assessment and improvement using a NN controller. Evaluated on the challenging problem of autonomous driving using the CARLA simulator, our approach showed higher performance and faster learning capabilities than conventionally pre-trained and randomly initialized RL algorithms. Considering this paper as a preliminary attempt to scale up RL approaches to high-dimensional real-world applications like autonomous driving, we plan in future work to focus more deeply on several aspects of the approach, such as the reward function, the CNN architecture, and the inclusion of vehicle characteristics in the task complexity setup.
S1eoN9rsnN
A meta-reinforcement learning approach embedding a neural network controller applied to autonomous driving with Carla simulator.
The information bottleneck principle is an elegant and useful approach to representation learning. In this paper, we investigate the problem of representation learning in the context of reinforcement learning using the information bottleneck framework, aiming at improving the sample efficiency of the learning algorithms. We analytically derive the optimal conditional distribution of the representation, and provide a variational lower bound. Then, we maximize this lower bound with the Stein variational (SV) gradient method. We incorporate this framework in the advantage actor-critic algorithm (A2C) and the proximal policy optimization algorithm (PPO). Our experimental results show that our framework can improve the sample efficiency of vanilla A2C and PPO significantly. Finally, we study the information-bottleneck (IB) perspective in deep RL with the algorithm called mutual information neural estimation (MINE). We experimentally verify that the information extraction-compression process also exists in deep RL, and that our framework is capable of accelerating this process. We also analyze the relationship between MINE and our method; through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound. In training a reinforcement learning algorithm, an agent interacts with the environment, explores the (possibly unknown) state space, and learns a policy from the exploration sample data. In many cases, such samples are quite expensive to obtain (e.g., they require interactions with the physical environment). Hence, improving the sample efficiency of the learning algorithm is a key problem in RL and has been studied extensively in the literature. Popular techniques include experience reuse/replay, which leads to powerful off-policy algorithms, and model-based algorithms. Moreover, it is known that effective representations can greatly reduce the sample complexity in RL. This can be seen from the following motivating example: in the environment of a classical Atari game, Seaquest, it may take dozens of millions of samples to converge to an optimal policy when the input states are raw images (more than 28,000 dimensions), while it takes far fewer samples when the inputs are 128-dimensional pre-defined RAM data. Clearly, the RAM data contain much less redundant information irrelevant to the learning process than the raw images. Thus, we argue that an efficient representation is extremely crucial to sample efficiency. In this paper, we try to improve the sample efficiency in RL from the perspective of representation learning using the celebrated information bottleneck framework. In standard deep learning, experiments in prior work show that during the training process, the neural network first "remembers" the inputs by increasing the mutual information between the inputs and the representation variables, then compresses the inputs into an efficient representation related to the learning task by discarding redundant information from the inputs (decreasing the mutual information between inputs and representation variables). We call this phenomenon the "information extraction-compression process" (information E-C process). Our experiments show that, similar to those earlier results, we first (to the best of our knowledge) observe the information extraction-compression phenomenon in the context of deep RL (using MINE to estimate the mutual information).
This observation motivates us to adopt the information bottleneck (IB) framework in reinforcement learning, in order to accelerate the extraction-compression process. The IB framework is intended to explicitly enforce RL agents to learn an efficient representation, hence improving the sample efficiency, by discarding irrelevant information from raw input data. Our technical contributions can be summarized as follows: 1. We observe that the "information extraction-compression process" also exists in the context of deep RL (using MINE to estimate the mutual information). 2. We derive the optimization problem of our information bottleneck framework in RL. In order to solve the optimization problem, we construct a lower bound and use the Stein variational gradient method developed in to optimize the lower bound. 3. We show that our framework can accelerate the information extraction-compression process. Our experimental also show that combining actor-critic algorithms (such as A2C, PPO) with our framework is more sample-efficient than their original versions. 4. We analyze the relationship between our framework and MINE, through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound. Finally, we note that our IB method is orthogonal to other methods for improving the sample efficiency, and it is an interesting future work to incorporate it in other off-policy and model-based algorithms. Information bottleneck framework was first introduced in . They solve the framework by iterative Blahut Arimoto algorithm, which is infeasible to apply to deep neural networks. tries to open the black box of deep learning from the perspective of information bottleneck, though the method they use to compute the mutual information is not precise. derives a variational information bottleneck framework, yet apart from adding prior target distribution of the representation distribution P (Z|X), they also assume that P (Z|X) itself must be a Gaussian distribution, which limits the capabilities of the representation function. extends this framework to variational discriminator bottleneck to improve GANs , imitation learning and inverse RL. As for improving sample-efficiency, (; ; a) mainly utilize the experience-reuse. Besides experience-reuse, tries to learn a deterministic policy, seeks to mitigate the delay of off-policy. learn the environment model. Some other powerful techniques can be found in . State representation learning has been studied extensively, readers can find some classic works in the overview . Apart from this overview, (b) shows a theoretical foundation of maintaining the optimality of representation space. proposes a new perspective on representation learning in RL based on geometric properties of the space of value function. learns representation via information bottleneck(IB) in imitation/apprenticeship learning. To the best of our knowledge, there is no work that intends to directly use IB in basic RL algorithms. A Markov decision process(MDP) is a tuple, (X, A, R, P, µ), where X is the set of states, A is the set of actions, R: X × A × X → R is the reward function, P: is the transition probability function(where P (X ′ |X, a) is the probability of transitioning to state X ′ given that the previous state is X and the agent took action a in X), and µ: X → is the starting state distribution. 
A policy π: X → P(A) is a map from states to probability distributions over actions, with π(a|X) denoting the probability of choosing action a in state X. In reinforcement learning, we aim to select a policy π which maximizes is the expected return by policy π after taking action a in state X. Actor-critic algorithms take the advantage of both policy gradient methods and valuefunction-based methods such as the well-known A2C . Specifically, in the case that policy π(a|X; θ) is parameterized by θ, A2C uses the following equation to approximate the real policy gradient where R t = ∑ ∞ i=0 γ i r t+i is the accumulated return from time step t, H(p) is the entropy of distribution p and b(X t) is a baseline function, which is commonly replaced by V π (X t). A2C also includes the minimization of the mean square error between R t and value function V π (X t). Thus in practice, the total objective function in A2C can be written as: where α 1, α 2 are two coefficients. In the context of representation learning in RL, The information bottleneck framework is an information theoretical framework for extracting relevant information, or yielding a representation, that an input X ∈ X contains about an output Y ∈ Y. An optimal representation of X would capture the relevant factors and compress X by diminishing the irrelevant parts which do not contribute to the prediction of Y. In a Markovian structure X → Z → Y where X is the input, Z is representation of X and Y is the label of X, IB seeks an embedding distribution P ⋆ (Z|X) such that: Under review as a conference paper at ICLR 2020 for every X ∈ X, which appears as the standard cross-entropy loss 1 in supervised learning with a MI-regularizer, β is a coefficient that controls the magnitude of the regularizer. Next we derive an information bottleneck framework in reinforcement learning. Just like the label Y in the context of supervised learning as showed in, we assume the supervising signal Y in RL to be the accurate value R t of a specific state X t for a fixed policy π, which can be approximated by an n-step bootstrapping function be the following distribution:.This assumption is heuristic but reasonable: If we have an input X t and its relative label Y t = R t, we now have X t's representation Z t, naturally we want to train our decision function V π (Z t) to approximate the true label Y t. If we set our target distribution to be For simplicity, we just write P (R|Z) instead of P (Y t |Z t) in the following context. With this assumption, equation can be written as: The first term looks familiar with classic mean squared error in supervisd learning. In a network with representation parameter ϕ and policy-value parameter θ, policy lossĴ(Z; θ) in equation and IB loss in can be jointly written as: where I(X, Z; ϕ) denotes the MI between X and Z ∼ P ϕ (·|X). Notice that J(Z; θ) itself is a standard loss function in RL as showed in. Finally we get the ultimate formalization of IB framework in reinforcement learning: The following theorem shows that if the mutual information I(X, Z) of our framework and common RL framework are close, then our framework is near-optimality. Theorem Theorem Theorem 1 (Near-optimality theorem). Policy π r = π θ r, parameter ϕ r, optimal policy π ⋆ = π θ ⋆ and its relevant representation parameter ϕ ⋆ are defined as following: Define J Assume that for any In this section we first derive the target distribution in and then seek to optimize it by constructing a variational lower bound. 
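For concreteness, the A2C objective and the IB-regularized joint objective referred to above take roughly the following standard form; the entropy and value-loss coefficients α₁, α₂ and the use of an n-step bootstrapped return are assumptions rather than the paper's exact choices.

```latex
% Sketch of the A2C surrogate objective (policy gradient with entropy bonus and value loss)
% and the IB-regularized joint objective; alpha_1, alpha_2 and beta are assumed coefficients.
\begin{align}
J_{\text{pg}}(\theta) &= \mathbb{E}\!\left[\log \pi(a_t \mid X_t;\theta)\,\big(R_t - b(X_t)\big)
  \;+\; \alpha_1\, H\!\big(\pi(\cdot \mid X_t;\theta)\big)\right],\\
J_{\text{A2C}}(\theta) &= J_{\text{pg}}(\theta)
  \;-\; \alpha_2\, \mathbb{E}\big[\big(R_t - V^{\pi}(X_t)\big)^2\big],\\
L(\theta,\phi) &= \mathbb{E}_{X}\,\mathbb{E}_{Z \sim P_\phi(\cdot\mid X)}\big[J(Z;\theta)\big]
  \;-\; \beta\, I(X, Z; \phi).
\end{align}
```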
We would like to solve the optimization problem in: Combining the derivative of L 1 and L 2 and setting their summation to 0, we can get that We provide a rigorous derivation of in the appendix(A.2). We note that though our derivation is over the representation space instead of the whole network parameter space, the optimization problem and the ing distribution are quite similar to the one studied in in the context of Bayesian inference. However, we stress that our formulation follows from the information bottleneck framework, and is mathematically different from that in . In particular, the difference lies in the term L 2, which depends on the the distribution P ϕ (Z | X) we want to optimize (while in , the corresponding term is a fixed prior). The following theorem shows that the distribution in is an optimal target distribution (with respect to the IB objective L). The proof can be found in the appendix(A.3). Theorem Theorem Theorem 2. (Representation Improvement Theorem) Consider the objective function, given a fixed policy-value parameter θ, representation distribution P ϕ (Z|X) and state distribution P (X). Define a new representation distribution: Though we have derived the optimal target distribution, it is still difficult to compute P ϕ (Z). In order to resolve this problem, we construct a variational lower bound with a distribution U (Z) which is independent of ϕ. Notice that. Now, we can derive a lower bound of L(θ, ϕ) in as follows: Naturally the target distribution of maximizing the lower bound is: Next we utilize the method in to optimize the lower bound. Stein variational gradient descent(SVGD) is a non-parametric variational inference algorithm that leverages efficient deterministic dynamics to transport a set of particles to approximate given target distributions Q(Z). We choose SVGD to optimize the lower bound because of its ability to handle unnormalized target distributions such as. Briefly, SVGD iteratively updates the "particles" via a direction function Φ ⋆ (·) in the unit ball of a reproducing kernel Hilbert space (RKHS) H: where Φ * (·) is chosen as a direction to maximally decrease 2 the KL divergence between the particles' distribution P (Z) and the target distribution Q(Z) =Q In fact, Φ * is chosen to maximize the directional derivative of F (P) = −DKL(P ||Q), which appears to be the "gradient" of F distribution, C is normalized coefficient) in the sense that where P [ϵΦ] is the distribution of Z + ϵΦ(Z) and P is the distribution of Z. showed a closed form of this direction: where K is a kernel function(typically an RBF kernel function). Notice that C has been omitted. In our case, we seek to minimize, which is equivalent to maximizeL(θ, ϕ), the greedy direction yields: In practice we replace log U (Ẑ) with ζ log U (Ẑ) where ζ is a coefficient that controls the magnitude of ∇Ẑ log U (Ẑ). Notice that Φ(Z i) is the greedy direction that Z i moves towardŝ L(θ, ϕ)'s target distribution as showed in(distribution that maximizesL(θ, ϕ)). This means Φ(Z i) is the gradient ofL(Z i, θ, ϕ): Since our ultimate purpose is to update ϕ, by the chain rule, Φ(Z i) is given in equation. In practice we update the policy-value parameter θ by common policy gradient algorithm since: and update representation parameter ϕ by. This section we verify that the information E-C process exists in deep RL with MINE and our framework accelerates this process. 
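A minimal PyTorch-style sketch of the Stein variational direction described above is given below; the per-particle gradient of (1/β)·J(Z;θ) + ζ·log U(Z) is assumed to be supplied by the surrounding model, and the RBF bandwidth follows the median heuristic used in the experiments.

```python
import torch

# Sketch of one SVGD direction for a set of representation particles Z (shape [n, d]).
# `grad_logp` (shape [n, d]) holds the gradient of the (unnormalized) log target at each particle.
def svgd_direction(Z, grad_logp):
    n = Z.shape[0]
    pairwise = torch.cdist(Z, Z) ** 2                                   # squared pairwise distances
    h = (pairwise.median() / (2.0 * torch.log(torch.tensor(n + 1.0)))).clamp(min=1e-8)
    K = torch.exp(-pairwise / h)                                        # RBF kernel matrix [n, n]
    # Repulsive term: sum_i grad_{Z_j} K(Z_i, Z_j) pushes particles apart.
    grad_K = -(2.0 / h) * (K.unsqueeze(-1) * (Z.unsqueeze(1) - Z.unsqueeze(0))).sum(dim=0)
    phi = (K @ grad_logp + grad_K) / n                                  # kernel-smoothed gradient + repulsion
    return phi   # phi is then backpropagated into the embedding network's parameters
```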
Mutual information neural estimation(MINE) is an algorithm that can compute mutual information(MI) between two high dimensional random variables more accurately and efficiently. Specifically, for random variables X and Z, assume T to be a function of X and Z, the calculation of I(X, Z) can be transformed to the following optimization problem: The optimal function T ⋆ (X, Z) can be approximated by updating a neural network T (X, Z; η). With the aid of this powerful tool, we would like to visualize the mutual information between input state X and its relative representation Z: Every a few update steps, we sample a batch of inputs and their relevant representations and compute their MI with MINE, every time we train MINE(update η) we just shuffle {Z i} n i=1 and roughly assume the shuffled representations {Z Figure is the tensorboard graph of mutual information estimation between X and Z in Atari game Pong, x-axis is update steps and y-axis is MI estimation. More details and can be found in appendix(A.6) and (A.7). As we can see, in both A2C with our framework and common A2C, the MI first increases to encode more information from inputs("remember" the inputs), then decreases to drop irrelevant information from inputs("forget" the useless information). And clearly, our framework extracts faster and compresses faster than common A2C as showed in figure(b). (a) MI in A2C (b) MI in A2C with our framework After completing the visualization of MI with MINE, we analyze the relationship between our framework and MINE. According to , the optimal function T * in goes as follows: Combining the with Theorem, we get: Through this relationship, we theoretically derive an algorithm that can directly optimize our framework without constructing the lower bound, we put this derivation in the appendix(A.5). In the experiments we show that our framework can improve the sample efficiency of basic RL algorithms(typically A2C and PPO). Our anonymous code can be found in https://github. com/AnonymousSubmittedCode/SVIB. Other can be found in last two appendices. In A2C with our framework, we sample Z by a network ϕ(X, ϵ) where ϵ ∼ N (·; 0, 0.1) and the number of samples from each state X is 32, readers are encouraged to take more samples if the computation resources are sufficient. We set the IB coefficient as β = 0.001. We choose two prior distributions U (Z) of our framework, the first one is uniform distribution, apparently when U (Z) is the uniform distribution, ∇Ẑ log U (Ẑ) |Ẑ =Z can be omitted. The second one is a Gaussian distribution, which is defined as follows: for a given state X i, sample a batch of {Z We also set ζ as 0.005∥∇Ẑ 1 β J(Ẑ; θ)/∇Ẑ log U (Ẑ)∥ |Ẑ =Z to control the magnitude of ∇Ẑ log U (Ẑ) |Ẑ =Z. Following , the kernel function in we used is the Gaussian RBF kernel K(Z i, Z j) = exp(−∥Z i − Z j ∥ 2 /h) where h = med 2 /2 log(n + 1), med denotes the median of pairwise distances between the particles {Z . As for the hyper-parameters in RL, we simply choose the default parameters in A2C of Openaibaselines(https://github.com/openai/baselines/tree/master/baselines/a2c). In summary, we implement the following four algorithms: A2C with uniform SVIB A2C with uniform SVIB A2C with uniform SVIB: Use ϕ(X, ϵ) as the embedding function, optimize by our framework(algorithm(A.4)) with U (Z) being uniform distribution. A2C with Gaussian SVIB A2C with Gaussian SVIB A2C with Gaussian SVIB: Use ϕ(X, ϵ) as the embedding function, optimize by our framework(algorithm(A.4)) with U (Z) being Gaussian distribution. 
A2C A2C A2C:Regular A2C in Openai-baselines with ϕ(X) as the embedding function. A2C with noise A2C with noise A2C with noise(For fairness):A2C with the same embedding function ϕ(X, ϵ) as A2C with our framework. Figure(a)-(e) show the performance of four A2C-based algorithms in 5 gym Atari games. We can see that A2C with our framework is more sample-efficient than both A2C and A2C with noise in nearly all 5 games. Figure 2: (a)-(e) show the performance of four A2C-based algorithms, x-axis is time steps(2000 update steps for each time step) and y-axis is the average reward over 10 episodes, (f)-(h) show the performance of four PPO-based algorithms, x-axis is time steps(300 update steps for each time step). We make exponential moving average of each game to smooth the curve(In PPO-Pong, we add 21 to all four curves in order to make exponential moving average). We can see that our framework improves sample efficiency of basic A2C and PPO. Notice that in SpaceInvaders, A2C with Gaussian SVIB is worse. We suspect that this is because the agent excessively drops information from inputs that it misses some information related to the learning process. There is a more detailed experimental discussion about this phenomena in appendix(A.7). We also implement four PPO-based algorithms whose experimental settings are same as A2C except that we set the number of samples as 26 for the sake of computation efficiency. Results can be found in the in figure(f)-(h). We study the information bottleneck principle in RL: We propose an optimization problem for learning the representation in RL based on the information-bottleneck framework and derive the optimal form of the target distribution. We construct a lower bound and utilize Stein Variational gradient method to optimize it. Finally, we verify that the information extraction and compression process also exists in deep RL, and our framework can accelerate this process. We also theoretically derive an algorithm based on MINE that can directly optimize our framework and we plan to study it experimentally in the future work. According to the assumption, naturally we have: Notice that if we use our IB framework in value-based algorithm, then the objective function J π can be defined as: where and d π is the discounted future state distribution, readers can find detailed definition of d π in the appendix of . We can get: We show the rigorous derivation of the target distribution in. Denote P as the distribution of X, P Z ϕ (Z) = P ϕ (Z) as the distribution of Z. We use P ϕ as the short hand notation for the conditional distribution P ϕ (Z|X). Moreover, we write Take the functional derivative with respect to P ϕ of the first term L 1: Hence, we can see that Then we consider the second term. By the chain rule of functional derivative, we have that Combining the derivative of L 1 and L 2 and setting their summation to 0, we can get that A.3 Proof of Theorem 2 (X, Z; ϕ), given a fixed policy-value parameter θ, representation distribution P ϕ (Z|X) and state distribution P (X), define a new representation distribution: Proof Proof Proof. Define I(X) as: According to the positivity of the KL-divergence, we have L(θ,φ) ≥ L(θ, ϕ). 
Algorithm 1 Information-bottleneck-based state abstraction in RL θ, ϕ ← initialize network parameters β, ζ ← initialize hyper-parameters in ϵ ← learning rate M ← number of samples from A.5 Integrate MINE to our framework MINE can also be applied to the problem of minimizing the MI between Z and X where Z is generated by a neural network P ϕ (·|X): A.6 Study the information-bottleneck perspective in RL Now we introduce the experimental settings of MI visualization. And we show that the agent in RL usually tends to follow the information E-C process. We compare the MI(I(X, Z)) between A2C and A2C with our framework. Every 2000 update steps(2560 frames each step), we re-initialize the parameter η, then sample a batch of inputs and their relevant representations, n = 64, and compute the MI with MINE. The learning rate of updating η is same as openai-baselines' A2C: 0.0007, training steps is 256 and the network architecture can be found in our code file "policy.py". Figure is the MI visualization in game Qbert. Note that there is a certain degree of fluctuations in the curve. This is because that unlike supervised learning, the distribution of datasets and learning signals R π (X) keep changing in reinforcement learning: R π (X) changes with policy π and when π gets better, the agent might find new states, in this case, I(X, Z) might increase again because the agent needs to encode information from new states in order to learn a better policy. Yet finally, the MI always tends to decrease. Thus we can say that the agent in RL usually tends to follow the information E-C process. (a) MI in A2C (b) MI in A2C with our framework Figure 3: Mutual information visualization in Qbert. As policy π gets better, the agent might find new states, in this case, I(X, Z) might increase again because the agent needs to encode information from new states in order to learn a better policy. Yet finally, the MI always tends to decrease. Thus it follows the information E-C process. We argue that it's unnecessary to compute I(Z, Y) like : According to, if the training loss continually decreases in supervised learning(Reward continually increases as showed in figure(a) in reinforcement learning), I(Z, Y) must increase gradually. We also add some additional experimental of MI visualization in the appendix(A.7). This section we add some additional experimental about our framework. Notice that in game MsPacman, performance of A2C with our framework is worse than regular A2C. According to the MI visualization of MsPacman in figure(b), we suspect that this is because A2C with our framework drops the information from inputs so excessively that it misses some information relative to the learning process. To see it accurately, in figure(b), the orange curve, which denotes A2C with our framework, from step(x-axis) 80 to 100, suddenly drops plenty of information. Meanwhile, in figure(b), from step(xaxis) 80 to 100, the rewards of orange curve start to decrease. As showed in figure, unlike Pong, Breakout, Qbert and some other shooting games, the frame of MsPacman contains much more information related to the reward: The walls, the ghosts and the tiny beans everywhere. Thus if the agent drops information too fast, it may hurt the performance.
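The MI visualization in A.6 above relies on MINE's Donsker-Varadhan lower bound; a minimal sketch of such an estimator is given below. The statistics-network architecture and training settings are placeholders rather than the exact ones used in the paper, and x, z are assumed to be detached batches of states and representations.

```python
import math
import torch
import torch.nn as nn

# Sketch of MINE: a statistics network T is trained to maximize
#   E_joint[T(x, z)] - log E_marginal[exp(T(x, z'))],
# where marginal samples z' are obtained by shuffling the representations within the batch.
class StatisticsNet(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def estimate_mi(x, z, steps=256, lr=7e-4):
    n = x.shape[0]
    T = StatisticsNet(x.shape[-1], z.shape[-1])
    opt = torch.optim.Adam(T.parameters(), lr=lr)
    for _ in range(steps):
        z_marginal = z[torch.randperm(n)]                # break the (x, z) pairing
        lower_bound = T(x, z).mean() - (torch.logsumexp(T(x, z_marginal), dim=0) - math.log(n))
        opt.zero_grad()
        (-lower_bound).backward()                        # maximize the DV lower bound
        opt.step()
    return lower_bound.item()
```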
Syl-xpNtwS
Derive an information bottleneck framework in reinforcement learning and some simple relevant theories and tools.
A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency. In the course of everyday functioning, people are constantly faced with real-world environments in which they are required to shift unpredictably between multiple, sometimes unfamiliar, tasks BID2. They are nonetheless able to flexibly adapt existing decision schemas or build new ones in response to these challenges BID1. How humans support such flexible learning and task switching is largely unknown, both neuroscientifically and algorithmically BID28 BID5.We investigate solving this problem with a neural module approach in which simple, task-specialized decision modules are dynamically allocated on top of a largely-fixed underlying sensory system BID0 BID14. The sensory system computes a general-purpose visual representation from which the decision modules read. While this sensory backbone can be large, complex, and learned comparatively slowly with significant amounts of training data, the task modules that deploy information from the base representation must, in contrast, be lightweight, quick to be learned, and easy to switch between. In the case of visually-driven tasks, from neuroscience and computer vision suggest the role of the fixed general purpose visual representation may be played by the ventral visual stream, modeled as a deep convolutional neural network (; BID23 . However, the algorithmic basis for how to efficiently learn and dynamically deploy visual decision modules remains far from obvious. The TouchStream environment is a touchscreen-like GUI for continual learning agents, in which a spectrum of visual reasoning tasks can be posed in a large but unified action space. On each timestep, the environment (cyan box) emits a visual image (xt) and a reward (rt). The agent recieves xt and rt as input and emits an action at. The action represents a "touch" at some location on a two-dimensional screen e.g. at ∈ {0, . . ., H − 1} × {0, . . ., W − 1}, where H and W are the screen height and width. The environment's policy is a program computing xt and rt as a function of the agent's action history. The agent's goal is to learn how to choose optimal actions to maximize the amount of reward it recieves over time. The agent consists of several component neural networks including a fixed visual backbone (yellow inset), a set of learned neural modules (grey inset), and a meta-controller (red inset) which mediates the deployment of these learned modules for task solving. 
The modules use the ReMaP algorithm § 2 to learn how to estimate reward as a function of action (heatmap), conditional on the agent's recent history. Using a sampling policy on this reward map, the agent chooses an optimal action to maximize its aggregate reward. In standard supervised learning, it is often assumed that the output space of a problem is prespecified in a manner that just happens to fit the task at hand -e.g. for a classification task, a discrete output with a fixed number of classes might be determined ahead of time, while for a continuous estimation problem, a one-dimensional real-valued target might be chosen instead. This is a very convenient simplification in supervised learning or single-task reinforcement learning contexts, but if one is interested in the learning and deployment of decision structures in a rich environment defining tasks with many different natural output types, this simplification becomes cumbersome. To go beyond this limitation, we build a unified environment in which many different tasks are naturally embodied. Specifically, we model an agent interacting with a two-dimensional touchscreenlike GUI that we call the TouchStream, in which all tasks (discrete categorization tasks, continuous estimation problems, and many other combinations and variants thereof) can be encoded using a single common and intuitive -albeit large -output space. This choice frees us from having to hand-design or programmatically choose between different output domain spaces, but forces us to confront the core challenge of how a naive agent can quickly and emergently learn the implicit "interfaces" required to solve different tasks. We then introduce Reward Map Prediction (ReMaP) networks, an algorithm for continual reinforcement learning that is able to discover implicit task-specific interfaces in large action spaces like those of the TouchStream environment. We address two major algorithmic challenges associated with learning ReMaP modules. First, what module architectural motifs allow for efficient task interface learning? We compare several candidate architectures and show that those incorporating certain intuitive design principles (e.g. early visual bottlenecks, low-order polynomial nonlinearities and symmetry-inducing concatenations) significantly outperform more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Second, what system architectures are effective for switching between tasks? We present a meta-controller architecture based on a dynamic neural voting scheme, allowing new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency. In § 1 we formalize the TouchStream environment. In § 2, we introduce the ReMaP algorithm. In § 3, we describe and evaluate comparative performance of multiple ReMaP module architectures on a variety of TouchStream tasks. In § 4, we describe the Dynamic Neural Voting meta-controller, and evaluate its ability to efficiently transfer knowledge between ReMaP modules on task switches. Modern deep convolutional neural networks have had significant impact on computer vision and artificial intelligence BID17, as well as in the computational neuroscience of vision . There is a recent but growing literature on convnet-based neural modules, where they have been used for solving compositional visual reasoning tasks BID0 BID14. 
In this work we apply the idea of modules to solving visual learning challenges in a continual learning context. Existing works rely on choosing between a menu of pre-specified module primitives, using different module types to solve subproblems involving specific input-output datatypes, without addressing how these modules' forms are to be discovered in the first place. In this paper, we show a single generic module architecture is capable of automatically learning to solve a wide variety of different tasks in a unified action/state space, and a simple controller scheme is able to switch between such modules. Our are also closely connected with the literature on lifelong (or continual) learning. A part of this literature is concerned with learning to solve new tasks without catastrophically forgetting how to solve old ones (; . The use of modules obviates this problem, but instead shifts the hard question to one of how newly-allocated modules can be learned effectively. The continual learning literature also directly addresses knowlege transfer to newly allocated structures BID3 BID10, but largely addresses how transfer learning can lead to higher performance, rather than addressing how it can improve learning speed. Aside from reward performance, we focus on issues of speed in learning and task switching, motivated by the remarkably efficient adaptability of humans in new task contexts. Existing work in continual learning also largely does not address which specific architecture types learn tasks efficiently, independent of transfer. By focusing first on identifying architectures that achieve high performance quickly on individual tasks ( § 3), our transfer-learning investigation then naturally focuses more on how to efficiently identify when and how to re-use components of these architectures (§ 4). Most of these works also make explicit a priori assumptions about the structure of the tasks to be encoded into the models (e.g. output type, number of classes), rather than address the more general question of emergence of solutions in an embodied case, as we do. Meta-reinforcement learning approaches such as BID29; BID8, as well as the schema learning ideas of e.g. BID1 BID19 typically seek to address the issue of continual learning by having a complex meta-learner extract correlations between tasks over a long timescale. In our context most of the burden of environment learning is placed on the individual modules, so our meta-controller can thus be comparatively light-weight compared to typical meta-reinforcement approaches. Unlike our case, meta-learning has mostly been limited to small state or action spaces. Some recent work in general reinforcement learning (e.g. BID21 BID9) has addressed the issue of large action spaces, but has not sought to address multitask transfer learning in these large action spaces. Agents in a real-world environment are exposed to many different implicit tasks, arising without predefined decision structures, and must learn on the fly what the appropriate decision interfaces are for each situation. Because we are interested in modeling how agents can do this on-the-fly learning, our task environment should mimic the unconstrained nature of the real world. Here, we describe the TouchStream environment, which attempts to do this in a simplified two-dimensional domain. Our problem setup consists of two components, an "environment" and an "agent," interacting over an extended temporal sequence FIG0. 
At each timestep t, the environment emits an RGB image x t of height H and width W, and a scalar reward r t. Conversely, the agent accepts images and rewards as input and chooses an action a t in response. The action space A available to the agent consists of a two-dimensional pixel grid {0, . . ., H − 1} × {0, . . ., W − 1} ⊂ Z 2, of the same height and width as its input image. The environment is equipped with a policy (unknown to the agent) that on each time step computes image x t and reward r t as a function of the history of agent actions {a 0, . . ., a t−1}, images {x 0, . . ., x t−1} and rewards {r 0, . . ., r t−1}.In this work, the agent is a neural network, composed of a visual backbone with fixed weights, together with a meta-controller module whose parameters are learned by interaction with the environment. The agent's goal is to learn to enact a policy that maximizes its reward obtained over time. Unlike an episodic reinforcement learning context, the TouchStream environment is continuous: throughout the course of learning the agent is never signaled when it should reset to some "initial" internal state. However, unlike the traditional continuous learning context of e.g. BID27, a TouchStream may implicitly define many different tasks, each of which is associated with its own characteristic reward schedule. The agent experiences a continual stream of tasks, and any implicit association between reward schedule and state reset must be discovered by the agent. By framing the action space A of the agent as all possible pixel locations and the state space as any arbitrary image, a very wide range of possible tasks are unified in this single framework, at the cost of requiring the agents' action space to be congruent to its input state space, and thus be quite large. This presents two core efficiency challenges for the agent: on any given task, it must be able to both quickly recognize what the "interface" for the task is, and transfer such knowledge across tasks in a smart way. Both of these goals are complicated by the fact that both the large size of agent's state and action spaces. Although we work with modern large-scale computer vision-style datasets and tasks in this work, e.g. ImageNet BID6 ) and MS-COCO BID18 ), we are also inspired by visual psychology and neuroscience, which have pioneered techniques for how controlled visual tasks can be embodied in real reinforcement learning paradigms BID13 BID22. Especially useful are three classes of task paradigms that span a range of the ways discrete and continuous estimation tasks can be formulated -including Stimulus-Response, Match-To-Sample, and Localization tasks FIG1. The Stimulus-Response (SR) paradigm is a common approach to physically embodying discrete categorization tasks BID11. For example, in the simple two-way SR discrimination task shown in FIG1, the agent is rewarded if it touches the left half of the screen after being shown an image of a dog, and the right half after being shown a butterfly. SR tasks can be made more difficult by increasing the number of image classes or the complexity of the reward boundary regions. In our SR experiments, we use images and classes from the ImageNet dataset BID6 ).Match-To-Sample Tasks: The Match-to-Sample (MTS) paradigm is another common approach to assessing visual categorization abilities BID20 ). 
In the MTS task shown in FIG1, trials consist of a sequence of two image frames - the "sample" screen followed by the "match" screen - in which the agent is expected to remember the object category seen on the sample frame, and then select an onscreen "button" (really, a patch of pixels) on the match screen corresponding to the sample screen category. Unlike SR tasks, MTS tasks require some working memory and more localized spatial control. More complex MTS tasks involve more sophisticated relationships between the sample and match screen. In FIG1, using the MS-COCO object detection challenge dataset (BID18), the sample screen shows an isolated template image indicating one of the 80 MS-COCO classes, while the match screen shows a randomly-drawn scene from the dataset containing at least one instance of the sample-image class. The agent is rewarded if its chosen action is located inside the boundary of an instance (e.g. the agent "pokes inside") of the correct class. This MS-COCO MTS task is a "hybrid" of categorical and continuous elements, meaning that if phrased as a standard supervised learning problem, both a categorical readout (i.e. class identity) and a continuous readout (i.e. object location) would be required. Localization: FIG1 shows a two-step continuous localization task in which the agent is supposed to mark out the bounding box of an object by touching opposite corners on two successive timesteps, with reward proportional to the Intersection over Union (IoU) value of the predicted bounding box relative to the ground-truth bounding box, IoU = Area(B_GT ∩ B̂) / Area(B_GT ∪ B̂). In localization, unlike the SR and MTS paradigms, the choice made at one timestep constrains the agent's optimal choice on a future timestep (e.g. picking the upper left corner of the bounding box on the first step constrains the lower right opposite corner to be chosen on the second). Although these tasks can become arbitrarily complex along certain axes, the tasks presented here require only fixed-length memory and future prediction. That is, each task requires only knowledge of the past k_b timesteps, and a perfect solution always exists within k_f timesteps from any point. The minimal required values of k_b and k_f are different across the various tasks in this work. However, in the investigations below, we set these to the maximum required values across tasks, i.e. k_b = 1 and k_f = 2. Thus, the agent is required to learn for itself when it is safe to ignore information from the past and when it is irrelevant to predict past a certain point in the future. We will begin by considering a restricted case where the environment runs one semantic task indefinitely, showing how different architectures learn to solve such individual tasks with dramatically different levels of efficiency (§ 2-3). We will then expand to the case where the environment's policy consists of a sequence of tasks with unpredictable transitions between tasks, and exhibit a meta-controller that can cope effectively with this expanded domain (§ 4). The TouchStream environment necessarily involves working with large action and state spaces. Methods for handling this situation often focus on reducing the effective size of action/state spaces, either via estimating pseudo-counts of state-action pairs, or by clustering actions (BID21; BID9).
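Returning briefly to the localization reward described above, the following is a minimal sketch of how the two-step IoU reward could be computed from the agent's two corner touches; the helper names and the (x1, y1, x2, y2) box convention are assumptions for illustration, not the authors' implementation.

```python
def box_from_touches(p1, p2):
    """Build an axis-aligned box from two opposite-corner touches (x, y)."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def iou(box_a, box_b):
    """IoU = Area(A ∩ B) / Area(A ∪ B); boxes are (x1, y1, x2, y2) tuples."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Reward for the two-step localization task: IoU of the box implied by the
# touches on timesteps t and t+1 against the ground-truth box.
def localization_reward(touch_t, touch_t1, ground_truth_box):
    return iou(box_from_touches(touch_t, touch_t1), ground_truth_box)
```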
Here we take another approach, using a neural network to directly approximate the (image-state modulated) mapping between the action space and reward space, allowing learnable regularities in the state-action interaction to implicitly reduce the large spaces into something manageable by simple choice policies. We introduce an off-policy algorithm for efficient multitask reinforcement learning in large action and state spaces: Reward Map Prediction, or ReMaP. As with any standard reinforcement learning situation, the agent seeks to learn an optimal policy π = p(a t | x t) defining the probability density p over actions given image state x t. The ReMaP algorithm is off-policy, in that π is calculated as a simple fixed function of the estimated reward. A ReMaP network M Θ is a neural network with parameters Θ, whose inputs are a history over previous timesteps of (i) the agent's own actions, and (ii) an activation encoding of the agent's state space; and which explicitly approximates the expected reward map across its action space for some number of future timesteps. Mathematically: DISPLAYFORM0 where k b is the number of previous timesteps considered; k f is the length of future horizon to be considered; Ψ t−k b:t is the history [ψ(x t−k b),..., ψ(x t)] of state space encodings produced by fixed backbone network ψ(·), h t−k b:t−1 is the history [a t−k b . . ., a t−1] of previously chosen actions, and each m i ∈ map(A, R) -that is, a map from action space to reward space. The predicted reward maps are constructed by computing the expected reward obtained for a subsample of actions drawn randomly from A: DISPLAYFORM1 where r t+j is the predicted reward j steps into the future horizon. Having produced k f reward prediction maps, one for each timestep of its future horizon, the agent needs to determine what it believes will be the single best action over all the expected reward maps m DISPLAYFORM2. The ReMaP algorithm formulates doing so by normalizing the predictions across each of these k f maps into separate probability distributions, and sampling an action from the distribution which has maximum variance. That is, the agent computes its policy π as follows: DISPLAYFORM3 where DISPLAYFORM4 is a normalization that removes the minimum of the map, DISPLAYFORM5 ensures it is a probability distribution parameterized by functional family f (·), and VarArgmax is an operator which chooses the input with largest variance. The sampling procedure described in equation FORMULA3 uses two complementary ideas to exploit spatial and temporal structure to efficiently explore a large action space. Since rewards in real physical tasks are spatially correlated, the distribution-based sampler in Equation FORMULA5 allows for more effective exploration of potentially informative actions than would the single-point estimate of an apparent optimum (e.g. an -greedy policy). Further, in order to reduce uncertainty, the ReMaP algorithm explores timesteps with greatest reward map variance. The VarArgmax function nonlinearly upweights the timeframe with highest variance to exploit the fact that some points in time carry disproportianate relevance for reward outcome, somewhat analagously to how max-pooling operates in convolutional networks. Although any standard action selection strategy can be used in place of the one in (e.g. pseudo -greedy over all k f maps), we have empirically found that this policy is effective at efficiently exploring our large action space. 
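The action-selection rule just described (remove each map's minimum, normalize it into a probability distribution through the family f, select the map with the largest variance, and sample an action from it) can be sketched in plain numpy as follows; this is an illustration of the described policy rather than the authors' code, and the array shapes are assumptions.

```python
import numpy as np

def remap_policy(reward_maps, candidate_actions, f=lambda x: x):
    """reward_maps: array of shape (k_f, n_actions) -- predicted reward for each
    subsampled action at each step of the future horizon.
    candidate_actions: list of n_actions (row, col) pixel actions.
    Returns one action sampled from the maximum-variance normalized map."""
    k_f, n = reward_maps.shape
    probs = []
    for j in range(k_f):
        m = reward_maps[j] - reward_maps[j].min()   # remove the map minimum
        m = f(m)                                    # functional family f (identity by default;
                                                    # the paper's low-temperature Boltzmann
                                                    # form can be passed in here instead)
        total = m.sum()
        probs.append(m / total if total > 0 else np.full(n, 1.0 / n))
    probs = np.stack(probs)
    j_star = int(np.argmax(probs.var(axis=1)))      # VarArgmax: highest-variance distribution
    idx = np.random.choice(n, p=probs[j_star])
    return candidate_actions[idx]
```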
The parameters Θ of a ReMaP network are learned by gradient descent on the reward prediction error, Θ* = argmin_Θ L[m_t(a_t), r_t; Θ], with each map m_t^j compared to the true reward r_{t+j}. Only the reward prediction in m_t corresponding to the action chosen at timestep t participates in loss calculation and backpropagation of error signals. A minibatch of maps, rewards, and actions is collected over several consecutive inference passes before performing a parameter update. The ReMaP algorithm is summarized in Algorithm 1:

Algorithm 1: ReMaP
  Initialize ReMaP network M
  Initialize state and action memory buffers Ψ_{t−k_b:t} and h_{t−k_b:t−1}
  for timestep t = 1, ..., T do
    Observe x_t, encode with state space network ψ(·), and append to state buffer
    Subsample a set of potential action choices a_t uniformly from A
    Produce k_f expected reward maps for a_t using the reward-map equation
    Select action according to policy π as in FORMULA3
    Execute action a_t in the environment, store it in the action buffer, and receive reward r_t
    Calculate the loss for this and the previous k_f − 1 timesteps
    if t ≡ 0 mod batch size then perform a parameter update

Throughout this work, we take our fixed backbone state space encoder to be the VGG-16 convnet, pretrained on ImageNet (BID26). Because the resolution of the input to this network is 224x224 pixels, our action space A = {0, . . ., 223} × {0, . . ., 223}. By default, the functional family f used in the action selection scheme in Eq. FORMULA5 is the identity, although on tasks benefiting from high action precision (e.g. Localization or MS-COCO MTS), it is often optimal to sample a low-temperature Boltzmann distribution with f(x) = e^{−x/T}. Reward prediction errors are calculated using the cross-entropy loss (where logits are smooth approximations to the Heaviside function in analogy to eq. FORMULA6). The main question we seek to address in this section is: what specific neural network structure(s) should be used in ReMaP modules? The key considerations are that such modules (i) should be easy to learn, requiring comparatively few training examples to discover optimal parameters Θ*, and (ii) easy to learn from, meaning that an agent can quickly build a new module by reusing components of old ones. Intuitive Example: As an intuition-building example, consider the case of a simple binary Stimulus-Response task, as in FIG1 ("if you see a dog touch on the right, if a butterfly touch on the left"). One decision module that is a "perfect" reward predictor on this task can be expressed analytically as m(a) = H(W Ψ_t) · H(a_x) + H(−W Ψ_t) · H(−a_x), where H is the Heaviside function, a_x and a_y are the x and y components of the action a ∈ A relative to the center of the screen, and W is a 1 × |Ψ_t| matrix expressing the class boundary (bias term omitted for clarity). If W Ψ_t is positive (i.e. the image is of a dog) then a_x must also be positive (i.e. the touch is on the right) to predict positive reward; conversely, if W Ψ_t is negative (i.e. butterfly), a_x must be negative (i.e. left touch) to predict reward. If neither of these conditions holds, both terms are equal to zero, so the formula predicts no reward. Since the vertical location of the action does not affect reward, a_y is not involved in reward calculation on this task.
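The "perfect" predictor just described can be written out directly. The sketch below implements the two-term Heaviside form reconstructed from the description (the original equation placeholder is missing from the extracted text), so treat the exact form as an inference; the function names are illustrative.

```python
import numpy as np

def heaviside(x):
    """H(x) = 1 if x > 0 else 0 (scalar form)."""
    return 1.0 if x > 0 else 0.0

def perfect_binary_sr_reward(psi, a_x, W):
    """Two-term predictor for the binary dog/butterfly SR task:
    H(W.psi) * H(a_x) + H(-W.psi) * H(-a_x).
    psi: visual feature vector (e.g. VGG FC6 features); a_x: horizontal action
    coordinate relative to screen center; W: weight vector defining the class boundary."""
    s = float(np.dot(np.ravel(W), psi))   # early bottleneck: features -> single scalar
    return heaviside(s) * heaviside(a_x) + heaviside(-s) * heaviside(-a_x)
```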
Equation FORMULA6 has three basic ideas embedded in its structure:• there is an early visual bottleneck, in which the high-dimensional general purpose feature representation Ψ t is greatly reduced in dimension (in this case, from the 4096 features of VGG's FC6 layer, to 1) prior to combination with action space, • there is a multiplicative interaction between the action vector and (bottlenecked) visual features, and • there is symmetry, e.g. the first term of the formula is the sign-antisymmetric partner of the second term, reflecting something about the spatial structure of the task. In the next sections, we show these three principles can be generalized into a parameterized family of networks from which the visual bottleneck (the W parameters), and decision structure (the form of equation FORMULA6) can emerge naturally and efficienty via learning for any given task of interest. In this section we define a generic ReMaP module which is lightweight, encodes all three generic design principles from the "perfect" formula, and uses only a small number of learnable parameters. Define the concatenated square nonlinearity as DISPLAYFORM0 and the concatenated ReLU nonlinearity BID25 ) as DISPLAYFORM1 where ⊕ denotes vector concatenation. The CReS nonlinearity is then defined as the composition of CReLU and Sq, e.g. DISPLAYFORM2 The CReS nonlinearity introduces multiplicative interactions between its arguments via its Sq component and symmetry via its use of CReLU. Definition. The (n 0, n 1, . . ., n k)-Early Bottleneck-Multiplicative-Symmetric (EMS) module is the ReMaP module given by Figure 3: Decision interfaces emerge naturally over the course of training. The ReMaP modules allow the agent to discover the implicit interfaces for each task. We observe that learning generally first captures the emergence of natural physical constructs before learning task-specific decision rules. Examples of this include: a. onscreen "buttons" appearing on the match screen of an MTS task before the specific semantic meaning of each button is learned (arrows indicate random motion), and b. the general discovery of objects and their boundaries before the task-specific category rule is applied. This image is best viewed in color. DISPLAYFORM3 The EMS structure builds in each of the three principles described above. The B stage represents the early bottleneck in which visual encoding inputs are bottlenecked to size n 0 before being combined with actions, and then performs k CReS stages, introducing multiplicative symmetric interactions between visual features and actions. From this, the "perfect" module definition for the binary SR task in eq. FORMULA6 then becomes a special case of a two-layer EMS module. Note that the visual features to be bottlenecked can be from any encoder; in practice, we work with both fully connected and convolutional features of the VGG-16 backbone. In the experiments that follow, we compare the EMS module to a wide variety of alternative control motifs, in which the early bottleneck, multiplicative, and symmetric features are ablated. Multiplicative nonlinearity and bottleneck ablations use a spectrum of more standard activation functions, including ReLU, tanh, sigmoid, elu BID4, and CReLU forms. In late bottleneck (fully-ablated) architectures -which are, effectively, "standard" multi-layer perceptrons (MLPs) -action vectors are concatenated directly to the output of the visual encoder before being passed through subsequent stages. In all, we test 24 distinct architectures. 
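A compact PyTorch sketch of the EMS motif may help make the definition concrete: the visual features are bottlenecked, concatenated with the action vector, and passed through k linear+CReS stages before a scalar reward readout. Layer sizes, the exact placement of the linear maps relative to the CReS nonlinearity, and the final readout are assumptions for illustration (the module's defining equation is missing from the extracted text).

```python
import torch
import torch.nn as nn

def crelu(x):
    """Concatenated ReLU: [ReLU(x), ReLU(-x)] (doubles the feature dimension)."""
    return torch.cat([torch.relu(x), torch.relu(-x)], dim=-1)

def cres(x):
    """CReS: CReLU(x) concatenated with its elementwise square, giving symmetry
    and low-order multiplicative interactions (quadruples the dimension)."""
    c = crelu(x)
    return torch.cat([c, c * c], dim=-1)

class EMSModule(nn.Module):
    """(n0, n1, ..., nk)-EMS sketch: bottleneck the visual features to n0,
    concatenate the action vector, then k CReS stages and a scalar reward readout."""
    def __init__(self, visual_dim=4096, action_dim=2, n0=8, hidden=(16, 16)):
        super().__init__()
        self.bottleneck = nn.Linear(visual_dim, n0)       # early visual bottleneck B
        dims = [n0 + action_dim] + list(hidden)
        self.stages = nn.ModuleList(
            [nn.Linear(4 * dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )
        self.readout = nn.Linear(4 * dims[-1], 1)         # scalar reward prediction

    def forward(self, visual_features, action):
        z = torch.cat([self.bottleneck(visual_features), action], dim=-1)
        for stage in self.stages:
            z = stage(cres(z))    # CReS stage: symmetric, multiplicative feature-action mixing
        return self.readout(cres(z))
```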
Detailed information on each can be found in the Supplement. We compared each architecture across 12 variants of visual SR, MTS, and localization tasks, using fixed visual encoding features from layer FC6 of VGG-16. Task variants ranged in complexity from simple (e.g. a binary SR task with ImageNet categories) to more challenging (e.g. a many-way ImageNet MTS task with buttons appearing in varying positions on each trial). The most complex tasks are two variants of localization, either with a single main salient object placed on a complex (similar to images used in Yamins & DiCarlo FORMULA3), or complex scenes from MS-COCO (see Fig. 3b). Details of the tasks used in these experiments can be found in the Supplement. Module weights were initialized using a normal distribution with µ = 0.0, σ = 0.01, and optimized using the ADAM algorithm (Kingma & Ba FORMULA3) with parameters β 1 = 0.9, β 2 = 0.999 and = 1e−8. Learning rates were optimized on a per-task, per-architecture basis in a cross-validated fashion. For each architecture and task, we ran optimizations from five different initialization seeds to obtain mean and standard error due to initial condition variability. For fully-ablated "late-bottleneck" modules, we measured the performance of modules of three different sizes (small, medium, and large), where the smallest version is equivalent in size to the EMS module, and the medium and large versions are much larger TAB2 ). A key feature of ReMaP modules is that they are able to discover de novo the underlying output domain spaces for a variety of qualitatively distinct tasks (Fig. 3 ; more examples in FIG1). The emergent decision structures are highly interpretable and reflect the true interfaces that the environment implicitly defines. The spatiotemporal patterns of learning are robust across tasks and replicable across initial seedings, and thus might serve as a candidate model of interface use and learning in humans. In general, we observe that the modules typically discover the underlying "physical structures" needed to operate the task interface before learning the specific decision rules needed to solve the task. For example, in the case of a discrete MTS categorization task (Fig. 3a), this involves the quick discovery of onscreen "buttons" corresponding to discrete action choices before these buttons are mapped to their semantic meaning. In the case of in the MS-COCO MTS task (Fig. 3b), we observe the initial discovery of high salience object boundaries, and followed by category-specific refinement. It is important to note that the visual backbone was trained on a categorization task, quite distinct from the localization task in MS-COCO MTS. Thus, the module had to learn this very different decision structure, as well as the class boundaries of MS-COCO, from scratch during training. The efficiency of learning was measured by computing the taskaveraged, normalized area under the learning curve (TA-N-AUC) for each of the 24 modules tested, across all 12 task variants. FIG2 shows characteristic learning curves for several tasks, summarized in the table in FIG2. Results for all architectures for all tasks are shown in Supplement FIG0. We find that the EMS module is the most efficient across tasks (0.997 TA-N-AUC). Moreover, the EMS architecture always achieves the highest final reward level on each task. Increasing ablations of the EMS structure lead to increasingly poor performance, both in terms of learning efficiency and final performance. 
Ablating the low-order polynomial interaction (replacing Sq with CReLU) had the largest negative effect on performance (0.818 TA-N-AUC), followed in importance by the symmetric structure (0.944 TA-N-AUC). Large fully-ablated models (no bottleneck, using only ReLU activations) performed significantly worse than the smaller EMS module and the single ablations (0.717 TA-N-AUC), but better than the module with neither symmetry nor multiplicative interactions (0.566 TA-N-AUC). Small fully-ablated modules with the same number of parameters as EMS were by far the least efficient (0.403 TA-N-AUC) and oftentimes achieved much lower final reward. In summary, the main conceptual features by which the special-case architecture in eq. solves the binary SR task are both individually helpful, combine usefully, and can be parameterized and efficiently learned for a variety of visual tasks. These properties are critical to achieving effective task learning compared to standard MLP structures. In a second experiment focusing on localization tasks, we tested an EMS module using convolutional features from the fixed VGG-16 feature encoder, reasoning that localization tasks could benefit from finer spatial feature resolution. We find that using visual features with explicit spatial information substantially improves task performance and learning efficiency on these tasks FIG3. To our knowledge, our on MS-COCO are the first demonstrated use of reinforcement learning to achieve instance-level object segmentations. Reward curves (measuring bounding box IoU) in FIG3 show little difference between any of the late bottleneck modules at any size. The only models to consistently achieve an IoU above 0.4 are the EMS-like variants, especially with convolutional features. For context, a baseline SVR trained using supervised methods to directly regress bounding boxes using the same VGG features in an IoU of 0.369. So far, we've considered the case where the TouchStream consists of only one task. However, agents in real environments are often faced with having to switch between tasks, many of which they may be encountering for the first time. Ideally, such agents would repurpose knowledge from previously learned tasks when it is relevant to a new task. Formally, we now consider environment policies consisting of sequences of tasks T = {τ 1, τ 2, ..., τ Ω}, each of which may last for an indeterminate period of time. Consider also a set of modules M, where each module corresponds to a task-specific policy π ω (a | x) = p (a t | x t, τ ω). When a new task begins, we cue the agent to allocate a new module M Ω+1 which is added to the set of modules M. In the learning that follows allocation, the weights in old modules are held fixed while the parameters in the new module M Ω+1 are trained. However, the output of the system is not merely the output of the new module, but instead is a dynamically allocated mixture of pathways through the computation graphs of the old and new modules. This mixture is determined by a meta-controller (Fig. 6). The meta-controller is itself a neural network which learns a dynamic distribution over (parts of) modules to be used in building the composite execution graph. Intuitively, this composite graph is composed of a small number of relevant pathways that mix and match parts of existing modules to solve the new task, potentially in combination with new module components that need to be learned. We define a meta-controller that assigns weights to each layer in each module in M. 
Let p i ω be the weight associated with the ith layer in module ω. These weights are probabilistic on a per layer basis, e.g. p i ω ≥ 0 and ω p i ω = 1 and can be interpreted as the probability of the controller selecting the ith layer l i ω for use in the execution graph, with distribution π i = {p i ω}. For such an assignment of weights, the composite execution graph defined by the meta-controller is generated by computing the sum of the activations of all the components at layer i weighted by the probabilities p i ω. These values are then passed on to the next layer where this process repeats. Mathematically, the composite layer at stage i can be expressed as DISPLAYFORM0 DISPLAYFORM1 = fixed parameters Figure 6: The Dynamic Neural Voting Controller. Dynamic Neural Voting solves new tasks by computing a composite execution graph through previously learned and newly allocated modules. Shown here is an agent with two existing modules (yellow and green), as one newly allocated module is being learned (blue). For each layer, the controller takes as input the activations of all three modules and outputs a set of "voting " -probabilistic weights to be used to scale activations within the corresponding components. Voting can be done on either a per-layer basis or a per-unit basis, for clarity only the layer voting method is depicted. The weighted sum of these three scaled outputs is used as input to the next stage in the computation graph. If the new task can be solved through a combination of existing module components, these will be weighted highly, while the new module will be effectively unused e.g. is assigned low weights. If however the task is quite different than previously solved tasks, the new module will play a larger role in the execution graph as it learns to solve the task.where M i ω (·) is the operator that computes the ith layer of module ω, andl 0:= ψ(x t) is the original encoded input state. The question now is, where do these probabilistic weights come from? The core of our procedure is a dynamic neural voting process in which the controller network learns a Boltzmann distribution over module activations to maximize reward prediction accuracy. This process is performed at each module layer, where the module weightings for a given layer are conditioned on the of voting at the previous layer. That is, DISPLAYFORM2 where p p p i = (p This voting procedure operates in an online fashion, such that the controller is continously learning its meta-policy while the agent is taking actions. As defined, the meta-controller constitutes a fully-differentiable neural network and is learned by gradient descent online. A useful refinement of the above mechanism involves voting across the units of M. Specifically, the meta-controller now assigns probabilistic weights p i,j ω to neuron n i,j ω (the jth unit in layer i of module ω). In contrast to the layer-voting scheme, the dynamically generated execution graph computed by the meta controller now becomes composite neurons with activations: DISPLAYFORM3 which are concatenated to form the composite layerl i M. The generalization of equation FORMULA13 to the single-unit voting scheme then becomes: DISPLAYFORM4 where DISPLAYFORM5 Ω ) are the unit-level weights across modules, and DISPLAYFORM6 Empirically, we find that the initialization schemes of the learnable controller parameters are an important consideration in the design, and that two specialized transformations also contribute slightly to its overall efficiency. 
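A sketch of the layer-voting composite forward pass follows: at each layer i a distribution p^i over modules is produced, and the composite activation is the p-weighted sum of each module's layer-i output. For brevity the conditioning of each layer's vote on the previous layer's result is simplified to independent per-layer logits, and module internals are stand-ins, so this illustrates the weighting scheme rather than the authors' controller.

```python
import torch
import torch.nn as nn

class LayerVoting(nn.Module):
    """Composite execution graph over modules with matching layer structure.
    `modules` is a list; each element is a list of per-layer operators [M_w^1, ..., M_w^L]."""
    def __init__(self, modules):
        super().__init__()
        self.module_layers = nn.ModuleList([nn.ModuleList(m) for m in modules])
        n_mod, n_layers = len(modules), len(modules[0])
        # Learnable voting logits per layer; the paper additionally conditions layer i's
        # vote on the result of voting at layer i-1 (omitted here).
        self.logits = nn.Parameter(torch.zeros(n_layers, n_mod))

    def forward(self, x):
        h = x
        for i in range(self.logits.shape[0]):
            p = torch.softmax(self.logits[i], dim=0)                # p^i over modules
            outs = torch.stack([m[i](h) for m in self.module_layers])
            # composite layer: weighted sum over modules of their layer-i activations
            h = (p.view(-1, *([1] * (outs.dim() - 1))) * outs).sum(0)
        return h
```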
For details on these initialization schemes and transformations, please refer to the Supplement. The dynamic neural voting mechanism achieves meta-control through a neural network optimized online via gradient descent while the modules are solving tasks, rather than a genetic algorithm that operates over a longer timescale as in the work of BID10. Moreover, in contrast to prior work, the voting mechanism eliminates the need for fully-connected adaptation layers between modules, thus substantially reducing the number of parameters required for transfer. Figure: Dynamic Neural Voting quickly corrects for "no-switch" switches. Although a new module is allocated for each task transition, if the new task is identical to the original task, the controller quickly learns to reuse the old module components. Top: post-switching learning curve for the EMS module on a binary stimulus-response task, after being trained on the same task. For clarity, only the Layer Voting method is compared against a baseline module trained from scratch. Bottom: fraction of the original module reused over the course of post-switch learning, calculated by averaging the voting weights of each layer in the original module. "No-switch" switches: Our first experiments tested how the dynamic neural voting mechanism would respond to "no-switch" switches, i.e. ones in which, although a switch cue was given and a new module allocated, the environment policy's task did not actually change (FIG7). We find that in such cases, performance almost instantly approaches pre-switch levels (i.e. there is very little penalty in attempting an unnecessary switch). Moreover, we find that the weightings the controller applies to the new module are low: in other words, the system recognizes that no new module is needed and acts accordingly by concentrating its weights on the existing module. These results show that, while we formally assume that the agent is cued as to when task switches occur, in theory it could implement a completely autonomous monitoring policy, in which the agent simply runs the allocation procedure if a performance "anomaly" occurs (e.g. a sustained drop in reward). If the system determines that the new module was unneeded, it could simply reallocate the new module for a later task switch. In future work, we plan to implement this policy explicitly. "Real" switches: We next tested how the dynamic voting controller handled switches in which the environment policy substantially changed after the switching cue. Using both the EMS module and (for control) the large fully-ablated module as described in § 3.2, the dynamic neural voting controller was evaluated on 15 switching experiments using multiple variants of SR and MTS tasks. Specifically, these 15 switches cover a variety of distinct (but not mutually exclusive) switching types, including:
• addition of new classes to the dataset (switch indexes 2, 7, 11 in the table of FIG10)
• replacing the current class set entirely with a new non-overlapping class set (switch ids. 1, 3)
• addition of visual variability to a previously less variable task (switch id. 6)
• addition of visual interface elements, e.g. new buttons (switch id. 8)
• transformation of interface elements, e.g. screen rotation (switch ids. 12, 13, 14, 15)
• transitions between different task paradigms, e.g. SR to MTS tasks and vice-versa (switch ids. 4, 5, 9, 10).
Controller hyperparameters were optimized in a cross-validated fashion (see Appendix G.1), and optimizations for three different initialization seeds were run to obtain mean and standard error.
Figures 8a and b show characteristic post-switch learning curves for the EMS module for both the Layer Voting and Single-Unit Voting methods. Additional switching curves can be found in the Supplement. Cumulative reward gains relative to learning from scratch were quantified by Relative Gain in AUC: DISPLAYFORM0, where M is the module trained from scratch on 2-way vert-motion horiz-flip MTS 4-way 2-shown vert-motion MTS 15. • map rotation 8. Task switching with Dynamic Neural Voting. Post-Switching learning curves for the EMS module on the 4-way Quadrant SR task after learning a. 2-way SR task and b. a 4-way MTS task with 4 match screen class templates. Both the Layer Voting method and Single-Unit Voting method are compared against a baseline module trained on the second task from scratch. Across all twelve task switches, we evaluate the Relative Gain in AUC over baseline (RGain) using both voting methods for c. the EMS module and d. the large-sized fully-ablated late bottleneck MLP. the second task, and M switch is the module transferred from an initial task using the dynamic voting controller. We find that the dynamic voting controller allows for rapid positive transfer of both module types across all 15 task switches, and the general Single-Unit voting method is a somewhat better transfer mechanism than the Layer Voting method FIG10. Both the EMS module and the large fully-ablated module, which was shown to be inefficient on single-task performance in § 3.2, benefit from dynamic neural voting FIG10.EMS modules are more "switchable": To quantify how fast switching gains are realized, we use Transfer Gain: T Gain = ∆max T∆ max, where T ∆max = argmax(∆ t) is the time where the maximum amount of reward difference between M switch and M occurs, and ∆ max is the reward difference at that time. Qualitatively, a high score on the Transfer Gain metric indicates that a large amount of relative reward improvement has been achieved in a short amount of time (see FIG7 for a graphical illustration of the relationship between the RGain and T Gain metrics). While both the EMS and large fully-ablated modules have positive Transfer Gain, EMS scores significantly higher on this metric, i.e. is significantly more "switchable" than the large fully-ablated module FIG10. We hypothesize that this is due to the EMS module being able to achieve high task performance with significantly fewer units than the larger fully-ablated module, making the former easier for the dynamic neural voting controller to operate on. In this work, we introduce the TouchStream environment, a continual reinforcement learning framework that unifies a wide variety of spatial decision-making tasks within a single context. We describe a general algorithm (ReMaP) for learning light-weight neural modules that discover implicit task interfaces within this large-action/state-space environment. We show that a particular module architecture (EMS) is able to remain compact while retaining high task performance, and thus is especially suitable for flexible task learning and switching. We also describe a simple but general dynamic task-switching architecture that shows substantial ability to transfer knowledge when modules for new tasks are learned. A crucial future direction will be to expand insights from the current work into a more complete continual-learning agent. We will need to show that our approach scales to handle dozens or hundreds of task switches in sequence. 
We will also need to address issues of how the agent determines when to build a new module and how to consolidate modules when appropriate (e.g. when a series of tasks previously understood as separate can be solved by a single smaller structure). It will also be critical to extend our approach to handle visual tasks with longer horizons, such as navigation or game play with extended strategic planning, which will likely require the use of recurrent memory stores as part of the feature encoder. From an application point of view, we are particularly interested in using techniques like those described here to produce agents that can autonomously discover and operate the interfaces present in many important real-world two-dimensional problem domains, such as on smartphones or the internet BID12. We also expect many of the same spatially-informed techniques that enable our ReMaP/EMS modules to perform well in the 2-D TouchStream environment will also transfer naturally to a three-dimensional context, where autonomous robotics applications BID7 Friedemann Zenke, Ben Poole, and Surya Ganguli. Improved multitask learning through synaptic intelligence. arXiv preprint arXiv:1703.04200, 2017. The EMS module and all ablation controls were evaluated on a suite of 13 stimulus-response, match-to-sample, localization, and MS-COCO MTS variants:1. 2-way SR -standard binary SR task 2. 4-way double binary SR -four class variant of SR, where each class is assigned either to the right or left half of the action space 3. 4-way quadrant SR -four class variant of SR, where each class is assigned to only a quadrant of the action space 4. 2-way stationary MTS -standard binary MTS task with stereotyped and non-moving match screens 5. 2-way stationary horiz-flip MTS -two class variant MTS task where the match templates' horizontal placement is randomly chosen, but confined within the same vertical plane 6. 2-way stationary vert-motion MTS -two class variant MTS task where the match templates' vertical position is randomly chosen, but each class is confined to a specific side 7. 2-way stationary vert-motion horiz-flip MTS -two class variant MTS task where the match templates' positions are completely random 8. 4-way 2-shown MTS -four class variant MTS task where only two class templates are shown on the match screen (appearing with random horizontal location as well) 9. 4-way 2-shown vert-motion MTS -same as above, but with random vertical motion for the templates 10. 4-way 4-shown stationary MTS -four class variant MTS task where all four class templates are shown on the match screen, but with fixed positions. 11. 4-way 4-shown permuted MTS -same as above, but with randomly permuted locations of all match templates 12. Localization -Localization task 13. MS-COCO MTS -80-way MTS task using the MS-COCO detection challenge dataset, where match screens are randomly samples scenes from the dataset Match-To-Sample Experiment Details: Sample screen images drawn from the same Image-Net class set as the Stimulus-Response tasks. One face-centered, unobstructed class instance is also drawn from the Image-Net classification challenge set and used as a match screen template image for that class. Class template images for the match screen were held fixed at 100x100 pixels. For all variants of the MTS task, we keep a six pixel buffer between the edges of the screen and the match images, and a twelve pixel buffer between the adjascent edges of the match images themselves. 
Variants without vertical motion have the match images vertically centered on the screen. The Localization task uses synthetic images containing a single main salient object placed on a complex background (similar to images used in ; Yamins et al. FORMULA3). There are a total of 59 unique classes in this dataset. In contrast to other single-class localization datasets (e.g. Image-Net), which are designed to have one large, face-centered, and centrally-focused object instance and for which a trivial policy of "always poke in image corners" could be learned, this synthetic image set offers larger variance in instance scale, position, and rotation, so the agent is forced into learning non-trivial policies requiring larger precision in action selection. This task uses the entire MS-COCO detection challenge dataset (BID18). On every timestep, a sample screen is chosen from one of the 80 MS-COCO classes. These are constructed to be large, unobstructed, face-centered representations of the class. For the match screen, we sample a random scene from MS-COCO containing any number of objects, but containing at least a single instance of the sample class. The agent is rewarded if its action is located inside any instance of the correct class. Both modules sample actions from a low-temperature Boltzmann policy from eq., which was empirically found to result in more precise reward map prediction. TAB2 aggregates the number of units per layer for the EMS and ablated modules which were used when conducting single-task and task-switching experiments. Only fully-connected modules' layer sizes are shown here. For details on the convolutional bottleneck EMS module, please refer to C.2. This is a "Convolutional Bottleneck" extension of the EMS module shown in the paper, where skip connections link the conv5 and the FC6 representations of the visual backbone. Here, the "scene-level" representation stored in the FC6 ReMaP memory buffer is tiled spatially to match the present convolution dimensions (here 14x14), and concatenated onto its channel dimension. A series of 1x1 convolutions plays the role of a shallow visual bottleneck, before the activations are vectorized and concatenated with A as input to the CReS layers of the standard EMS module. The results in the paper are shown for a bottleneck consisting of a single tanh and two CReS convolutions, with 128 units each. The downstream layers use 128 units each as well. The motivation for the convolutional bottleneck is that lower-level features are useful for complex spatial tasks such as Localization and Object Detection, and hence may result in a more precise policy. By tiling the entire scene-level representation along the convolution layer's channel dimension, a form of multiplicative template-matching is possible between objects that must be memorized (e.g. MS-COCO MTS templates) and what is inside the present scene. In all, we investigated 23 distinct ablations of the EMS module, across all twelve task variants outlined in sec. A (FIG0). Symmetry ablations replace CReS with the activation x → ReLU(x) ⊕ x². Multiplicative ablations are denoted by specifying the nonlinearity used in place of CReS (where this is one of ReLU, tanh, sigmoid, elu (BID4), or CReLU (Shang et al.)). This additionally includes one partial symmetry ablation (denoted "partial symm") where only the visual bottleneck is symmetric, and one which ablates the ReLU from the "no symm" module (denoted "no symm/partial-mult"). TAB3. Five additional reward map examples for the MS-COCO MTS task are provided in FIG1.
Examples are plotted over the course of learning. Learning trajectories for seven additional tasks are provided in FIG15. Modules capable of convergence on a task were run until this was acheived, but AUC values for a given task are calculated at the point in time when the majority of models converge. Additional trajectories for ten unshown switching curves are provided in FIG2. Here we describe the weight initialization scheme that was found to be optimal for use with the dynamic voting controller. For simplicity, consider the layer-voting mechanism, with learnable weight matricies W i and biases b i. The intended biasing scheme is achieved through initializing the elements of these parameters to: DISPLAYFORM0 This initialization technique was also generalized for use with the single-unit voting mechanism. Two additional switching mechanisms were added to the controller to augment its ability to switch between taks which are remappings of the action space or reward policy of a preexisting module. we note that efficient modules are those which can effectively produce a minimal representation of the interaction between action space A and observation x t. If the agent's optimal action space shifts to A while the remainder of the task context remains fixed, the controller should allow for rapid targeted remapping A → A. Since we formulate the modules as ReMaP Networks, and A is an input feature basis, we can achieve remappings of this form through a fully-connected transformation: DISPLAYFORM0 where a τ = [h t−kb:t−1, a t] is the vector of action histories, and W a and b embed a τ into new action space A using only a small number of learnable parameters. Pseudo Identity-Preserving Transformation In practice, we initialize the parameters in eq. FORMULA3 such that the transformation is pseudo identity-preseving, meaning that the representation learned at this level in the original module is not destroyed prior to transfer. This is done by initializing W a to be an identity matrix I |aτ | with a small amount of Gaussian noise ∼ N (0.0, σ 2) added to break symmetry. b is initialized to be a vector of ones of size |a τ |. Each of the k f maps m t (x) reflects the agent's uncertainty in the environment's reward policy. If the task context remains stationary, but the environment transitions to new reward schedule R that no longer aligns with the module's policy π, the controller could to this transition by e.g. containing a mechanism allowing for targeted transformation of m(x) and hence also π. One complication that arises under ReMaP is that since each task-module learns its optimal action space internally, m(x) are in the basis of R rather than A. Therefore, transformations on the map distribution must also re-encode A before mapping to R.In this work, we investigate a shallow "adapter" neural network that lives on top of the existing module and maps R → R. Its first and second layers are defined by DISPLAYFORM0 where g(a τ) is a similar transformation on A as above, denotes elementwise multiplication, W 1 is a learnable matrix embedding into a hidden state, and W 2 ∈ R |l1|×|R | is a learnable matrix embedding into R Pseudo Identity-Preserving Transformation Similar to the transformation on the action space, we modify the reward-map transformation to be pseudo identity-preserving as well. This is done by modifying eq. FORMULA4 The intended map-preserving transformation is accomplished via initializing W 1 and W 2 as: DISPLAYFORM1 Both of the targeted transformations have several hyperparameters. 
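The pseudo identity-preserving initialization of the action-space transform described above is straightforward to write down. The sketch below assumes the transform is a single linear layer over the action-history vector, per the text; the function name is illustrative.

```python
import torch
import torch.nn as nn

def identity_preserving_action_transform(action_history_dim, sigma=0.01):
    """Linear remapping a' = W_a * a_tau + b, initialized so that it starts as
    approximately the identity: W_a = I + N(0, sigma^2), b = vector of ones,
    following the description in the text."""
    layer = nn.Linear(action_history_dim, action_history_dim)
    with torch.no_grad():
        layer.weight.copy_(
            torch.eye(action_history_dim)
            + sigma * torch.randn(action_history_dim, action_history_dim)
        )
        layer.bias.fill_(1.0)
    return layer
```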
We conducted a grid search to optimize these in a cross-validated fashion, on a set of test task switches designed to be solved by one of the targeted transformations FIG3. Each was conducted independently of the dynamic voting controller, and independently of the other transformation. Optimal hyperparameters found in these experiments were fixed for use in the integrated dynamic voting controller, and were not further optimized afterwards. Action Transformation Hyperparameters We conducted three tests using the stimulus-response paradigm: class reversal (in which the left class becomes the right class and vice-versa), a horizontal rotation of the reward boundaries (such that right becomes up and left becomes down), and a "switch" to the original task (intended to test the identity-preserving component).In this work, we find that a single, non-activated linear transformation (f in FORMULA3) is optimal for this new state-space embedding, using k b * 2 units, and initialized such that the idendity-preserving transformation weights have σ = 0.01. The learning rate for this transformation was found to be optimal at 0.1. We conducted two tests using the stimulusresponse paradigm: a "squeezing" task (where there is no longer any reward dispensed on the lower half of the screen), and a "switch" to the original task (intended to test the identity-preserving component).In this work, we find the optimal activations in eq. to be f (·) = CReS and g(·) = ReLU, with 4 units in the hidden layer. in the weight initialization scheme was found optimal at 0.001, and an initial bias of 0.01. The optimal learning rate for this transformation was found to be 0.01. A study was conducted to determine the relative benefit of the targeted transformations (Fig. S6), where it was determined that the primary contribution of the dynamic neural controller was in fact the voting mechanism (although the transformations did supplement this as well).G.5 DEPLOYMENT SCHEME OF TASK MODULES When cued into task transition, the controller freezes the learnable parameters of the old task-module, and deploys a new unitialized task-module. The controller then initializes the action and reward map transformation networks as described in G.2 on top of the old module. These transformations are also voted on inside the dynamic neural controller at every timestep. H SWITCHING METRICS FIG7 graphically illustrates the metrics used inside the paper to quantify switching performance: RGain and T Gain. FIG7: Illustration of switching performance metrics. We quantify the switching performance of the dynamic neural controller and task-modules by two metrics: "relative gain in AUC" (ratio of green to purple shaded regions), and "transfer" gain (difference of reward at T∆max). Relative AUC measures the overall gain relative to scratch, and the transfer gain measures the speed of transfer. Curve shown is the EMS module with Single-Unit voting method evaluated on a switch from a 4-way MTS task with two randomly moving class templates to a 4-way MTS task with four randomly moving templates.
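For concreteness, the two switching metrics can be computed from the post-switch and from-scratch learning curves roughly as below. The exact formulas are partially garbled in the extracted text, so this sketch assumes RGain is the ratio of the areas under the two curves (as the figure caption suggests) and TGain is the maximum reward difference divided by the time at which it occurs; treat both as reconstructions rather than exact definitions.

```python
import numpy as np

def relative_gain_auc(reward_switch, reward_scratch):
    """Relative Gain in AUC: area under the transferred module's post-switch
    learning curve divided by the area under the from-scratch curve (assumed ratio form)."""
    return np.trapz(reward_switch) / np.trapz(reward_scratch)

def transfer_gain(reward_switch, reward_scratch):
    """Transfer Gain: largest reward difference between the transferred and
    from-scratch curves, divided by the timestep at which that maximum occurs
    (the caption describes only the difference; the normalization is assumed)."""
    delta = np.asarray(reward_switch) - np.asarray(reward_scratch)
    t_max = int(np.argmax(delta))
    return delta[t_max] / max(t_max, 1)
```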
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkPLzgZAZ
We propose a neural module approach to continual learning using a unified visual environment with a large action space.
Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine. In this paper, we present a semi-supervised technique that addresses both these issues simultaneously. We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labelled sets and generate visual rationales explaining the predictions. Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on a separate set from a different institution. Our method identifies heart failure and other thoracic diseases. For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space. Decoding the resultant latent representation produces an image without apparent disease. The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction. Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art. Deep learning as applied to medicine has attracted much interest in recent years as a potential solution to many difficult problems in medicine, such as the recognition of diseases on pathology slides or radiology images. However, adoption of machine learning algorithms in fields such as medicine relies on the end user being able to understand and trust the algorithm, as incorrect implementation and errors may have significant consequences. Hence, there has recently been much interest in interpretability in machine learning, as this is a key aspect of implementing machine learning algorithms in practice. We propose a novel method of creating visual rationales to help explain individual predictions and explore a specific application to classifying chest radiographs. There are several well-known techniques in the literature for generating visual heatmaps. Gradient-based methods were first proposed in 2013, described as a saliency map in BID11, where the derivative of the final class predictions is computed with respect to the input pixels, generating a map of which pixels are considered important. However, these saliency maps are often unintelligible, as convolutional neural networks tend to be sensitive to almost imperceptible changes in pixel intensities, as demonstrated by recent work in adversarial examples. In fact, obtaining the saliency map is often the first step in generating adversarial examples, as in BID3. Other recent developments in gradient-based methods, such as Integrated Gradients from BID12, have introduced fundamental axioms, including the idea of sensitivity, which helps focus gradients on relevant features. Occlusion sensitivity is another method, which covers parts of the image with a grey box, mapping the resultant change in prediction. This produces a heatmap where features important to the final prediction are highlighted as they are occluded. Another well-known method of generating visual heatmaps is global average pooling.
Using fully convolutional neural networks with a global average pooling layer as described in BID15, we can examine the class activation map for the final convolutional output prior to pooling, providing a low resolution heatmap for activations pertinent to that class. A novel analysis method by BID10 known as locally interpretable model-agnostic explanations (LIME) attempts to explain individual predictions by simulating model predictions in the local neighbourhood around this example. Gradient based methods and occlusion sensitivity can also be viewed in this light -attempting to explain each classification by changing individual input pixels or occluding square areas. However, sampling the neighbourhood surrounding an example in raw feature space can often be tricky, especially for image data. Image data is extremely complex and high-dimensional -hence real examples are sparsely distributed in pixel space. Sampling randomly in all directions around pixel space is likely to produce non-realistic images. LIME's solution to this is to use superpixel based algorithms to oversegment images, and to perturb the image by replacing each superpixel by its average value, or a fixed pre-determined value. While this produces more plausible looking images as opposed to occlusion or changing individual pixels, it is still sensitive to the parameters and the type of oversegmentation used -as features larger than a superpixel and differences in global statistics may not be represented in the set of perturbed images. This difficulty in producing high resolution visual rationales using existing techniques motivates our current research. We introduce a novel method utilizing recent developments in generative adversarial networks (GANs) to generate high resolution visual rationales. We demonstrate the use of this method on a large dataset of frontal chest radiographs by training a classifier to recognize heart failure on chest radiographs, a common task for doctors. Our method comprises of three main steps -we first use generative adversarial networks to train a generator on an unlabelled dataset. Secondly, we use the trained generator as the decoder section of an autoencoder. This enables us to encode and decode, to and from the latent space while still producing high resolution images. Lastly, we train simple supervised classifiers on the encoded representations of a smaller, labelled dataset. We optimize over the latent space surrounding each encoded instance with the objective of changing the instance's predicted class while penalizing differences in the ant decoded image and the original reconstructed image. This enables us to visualize what that instance would appear as if it belonged in a different class. Firstly, we use the Wasserstein GAN formulation by and find that the addition of the gradient penalty term helps to stabilize training as introduced by BID4. Our unlabelled dataset comprises of a set of 98,900 chest radiograph images, which are scaled to 128 by 128 pixels while maintaining their original aspect ratio through letterboxing, and then randomly translated by up to 8 pixels. We use a 100 dimensional latent space. Our discriminator and generator both use the DCGAN architecture while excluding the batch normalization layers and using Scaled Exponential Linear Units described in BID7 as activations except for the final layer of the generator which utilized a Tanh layer. We train the critic for 4 steps for each generator training step. 
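Since the critic is trained with the gradient-penalty formulation cited here, a compact sketch of that loss may be useful for reference. The penalty coefficient of 10 and the `critic` interface follow the usual conventions from the cited formulation, not details taken from this paper.

```python
import torch

def critic_loss_wgan_gp(critic, real, fake, gp_weight=10.0):
    """Wasserstein critic loss with gradient penalty.
    real, fake: image batches of identical shape; critic returns one score per image."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_interp = critic(interp)
    grads = torch.autograd.grad(outputs=d_interp, inputs=interp,
                                grad_outputs=torch.ones_like(d_interp),
                                create_graph=True)[0]
    # penalize deviation of the interpolated-sample gradient norm from 1
    gp = ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
    return critic(fake).mean() - critic(real).mean() + gp_weight * gp
```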
The GAN training process was run for 200k generator iterations before visually acceptable generated images were produced. ADAM was used as the optimizer with the generator and discriminator learning rates both set to 5 x 10 -5.In the next step, we use the trained generator as the decoder for the autoencoder. We fix the weights of the decoder during training and train our autoencoder to reproduce each of the images from the unlabelled dataset. The unlabelled dataset was split by patient in a 15 to 1 ratio into a training and validation set. We minimize the Laplacian loss between the input and the output, inspired by BID1. Minimal overfitting was observed during the training process even when the autoencoder was trained for over 1000 epochs, as demonstrated in 2.We then train a classifier on a smaller labelled dataset consisting of 7,391 chest radiograph images paired with a B-type natriuretic peptide (BNP) blood test that is correlated with heart failure. This test is measured in nanograms per litre, and higher readings indicate heart failure. This is a task of real-world medical interest as BNP test readings are not often available immediately and offered at all laboratories. Furthermore, the reading of chest radiograph images can be complex, as suggested by the widely varying levels of accuracy amongst doctors of different seniority levels reported by BID6. We perform a natural logarithm on the actual BNP value and divide the ant number by 10 to scale these readings to between 0 and 1. This task can be viewed as either a regression or classification task, as a cut-off value is often chosen as a diagnostic threshold. In this paper, we train our network to predict the actual BNP value but evaluate its AUC at the threshold of 100ng/L. We choose AUC at this threshold as this is the cut-off suggested by BID9, and AUC is a widely used metric of comparison in the medical literature. We augment each labelled image and encode it into the latent space using our previously trained autoencoder. To prevent contamination, we separate our images by patient into a training and testing set with a ratio of 4 to 1 prior to augmentation and encoding. We demonstrate the success of simple classifiers upon this latent representation, including a 2 layer multilayer perceptron with 256 hidden units as well as a linear regressor. To obtain image specific rationales, we optimize over the latent space starting with the latent representation of the given example. We fix the weights of the entire model and apply the ADAM optimizer on a composite objective comprising of the output value of the original predicted class and a linearly weighted mean squared error term between the decoded latent representation and the decoded original representation. We cap the maximum number of iterations at 5000 and set our learning rate at 0.1. We stop the iteration process early if the cutoff value for that class is achieved. The full algorithm is described in Algorithm 1. This generates a latent representation with a different prediction from the initial representation. The difference between the decoded generated representation and the decoded original representation is scaled and overlaid over the original image to create the visual rationale for that image. We use gradient descent to optimize the following objective: DISPLAYFORM0 DISPLAYFORM1 Where X is the reconstructed input image (having been passed through the autoencoder); X target and z target are the output image and its latent representation. 
G is our trained generator neural network. γ is a coefficient that trades off the classification and reconstruction objectives. L target is a target objective which can be a class probability or a regression target. The critical difference between our objective and the one used for adversarial example generation is that optimization is performed in the latent space, not the image space. Algorithm 1 Visual rationale generation. Require: α, the learning rate; γ, the image similarity penalty; ρ, the cutoff value; x, the initial input; f: x → z, a function approximating the mapping between image and latent space. DISPLAYFORM2 We also apply our method to external datasets and demonstrate good cross-dataset generalization, in particular the National Institutes of Health (NIH) ChestX-ray8 dataset comprising 108,948 frontal chest radiographs, recently released by BID13. We downsize the provided images to work with our autoencoder and split this by patient into a training, validation and testing set in the 7:1:2 ratio used by the dataset's authors. We encode these images into the latent space and apply a 6-layer fully connected neural network with 100 hidden units in each layer utilizing residual connections. We train this with a batch size of 2048. This architecture is fully described in Figure 1. To evaluate the usefulness of the generated visual rationales, we conduct an experiment where we compare visual rationales generated by a classifier to one which is contaminated. We train the contaminated classifier directly on the testing examples and overtrain until almost perfect accuracy on this set is achieved. We reason that the contaminated classifier will simply memorize the testing examples and hence will not be able to produce useful rationales. We also apply our method to the well-known MNIST dataset and apply a linear classifier with a 10-way softmax. In order to generate our visual rationales we select an initial class and a target class -we have chosen to transform the digit 9 to the digit 4 as these bear physical resemblance. We alter our optimization objective by adding a negatively weighted term for the predicted probability of the target class as described in Algorithm 2. Algorithm 2 Visual rationale generation for multiclass predictors. Require: α, the learning rate; γ, the image similarity penalty; ρ, the cutoff value; β, the target class weighting; t, the target class; x, the initial input; f: x → z, a function approximating the mapping between image and latent space; g: z → x, the generator; h c (z) → P (c|z), a classifier predicting class probabilities from the latent representation. DISPLAYFORM3 To illustrate the fidelity of our autoencoder we reconstruct each image in a smaller labelled set which has not been seen during training. The reconstructed images are shown in Fig. 3. In FIG4 we demonstrate an example of the algorithm's reconstruction of a chest radiograph from a patient with heart failure, as well as the visualization of the same patient's chest radiograph without heart failure. We subtract the visualization of the radiograph without heart failure from the original reconstructed image and superimpose this as a heatmap on the original image to demonstrate the visual rationale for this prediction. For the same image, we apply the saliency map method, integrated gradients, the occlusion sensitivity method with a window size of 8, as well as LIME to obtain Fig. 6 for comparison. All of these methods yield noisy and potentially irrelevant features as compared to our method of generating visual rationales.
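To make the rationale-generation objective concrete, the following is a minimal sketch of the latent-space optimization of Algorithm 1, written in PyTorch. The encoder f, the frozen generator g and the classifier h are assumptions standing in for the trained models described above; the learning rate and iteration cap mirror the values given in the text, while the cutoff and the similarity weight gamma are placeholders. This is an illustrative reconstruction, not the authors' released code.

# Illustrative sketch of visual rationale generation (Algorithm 1), not the authors' code.
# f: image -> latent code, g: latent code -> image (frozen generator),
# h: latent code -> predicted disease score. All are assumed torch.nn.Module instances.
import torch

def visual_rationale(x, f, g, h, lr=0.1, gamma=1.0, cutoff=0.5, max_iters=5000):
    with torch.no_grad():
        z0 = f(x)               # latent representation of the original image
        x0 = g(z0)              # reconstruction of the original image
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(max_iters):
        # classifier output for the originally predicted class, plus a penalty keeping
        # the decoded image close to the original reconstruction
        loss = h(z) + gamma * torch.mean((g(z) - x0) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            if h(z).item() < cutoff:     # stop once the prediction crosses the cutoff
                break
    x_target = g(z).detach()
    return x_target, (x0 - x_target)     # altered image and the difference heatmap

For the multiclass variant of Algorithm 2, the loss would additionally subtract a weighted term for the predicted probability of the target class, as described above.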
Figure 6: Comparison of other methods -top left to bottom right: saliency map, saliency map overlaid on original image, heatmap generated via the occlusion sensitivity method, integrated gradients, integrated gradients overlaid on original image, LIME output.
We apply our classifier as described above to the chest radiograph dataset released by the NIH recently and achieve results similar to or exceeding those of the baseline reported in the original dataset. ROC curves are demonstrated in FIG5. Comparison AUCs are reported in TAB0. We show that even without repeating the autoencoder or GAN training process on the new dataset, we are able to classify encoded representations of these chest radiographs with an accuracy comparable to or exceeding the performance of the published baseline network, which utilizes various state-of-the-art network architectures as well as higher resolution images. We apply our method to the MNIST dataset and demonstrate class switching between digits from 9 to 4 and 3 to 2. FIG6 demonstrates the visual rationales for why each digit has been classified as a 9 rather than a 4, as well as the transformed versions of each digit. As expected, the top horizontal line in the digit 9 is removed to make each digit appear as a 4. Interestingly, the algorithm failed to convert several digits into a 4 and instead converts them into other digits which are presumably more similar to that instance, despite the addition of the weighted term encouraging the latent representation to prefer the target class. This type of failure is observed more in digits that are less similar to each other, such as when converting from the digit 3 to 2, as simply removing the lower curve of the digit may not always result in a centered "two" digit. This precludes the simple interpretation that we are able to attribute to the 9 to 4 task. This behaviour is not noted in our chest radiograph dataset, as we are able to convert every image from the predicted class to the converse, which is presumably due to the smaller differences between chest X-rays with and without heart failure. Similarly, the time taken to generate a visual rationale depends on the confidence of the classifier in its prediction, as the algorithm runs until the input has been altered sufficiently or a maximum number of steps (in our case 500) have been completed. In the case of converting digit 9s to 4s, we were able to generate 1000 visual rationales in 1 minute and 58 seconds. Lastly, we contaminate our heart failure classifier as described in the methods section and compare visual rationales generated by the contaminated classifier with those generated previously. FIG0 demonstrates images where both classifiers predict the presence of heart failure. The rationales from the contaminated classifier focus on small unique aspects of the image and largely do not correspond to our notion of what makes a chest radiograph more likely to represent heart failure, namely enlarged hearts and congested lung fields. To demonstrate this we present 100 images classified as having a BNP level of over 100ng/L to two expert reviewers, equally split between a contaminated and a normally trained classifier. Each image and the associated visual rationale was presented to the reviewers, who were blinded to the origin of the classifier. Reviewers were tasked with selecting features from a provided list which they felt corresponded with the visual rationales. Each reviewer rated each image twice. Aggregated results from the expert reviewers are presented in TAB2.
This clearly shows that the contaminated classifier indeed produces less interpretable visual rationales. (b) Contaminated classifier. In Reviewer A's first round, 22 out of 50 visual rationales generated were identified as having features of cardiomegaly, and 3 had features of effusions. Each visual rationale can contain multiple features. We show in this work that using the generator of a GAN as the decoder of an autoencoder is viable and produces high quality autoencoders. The constraints of adversarial training force the generator to produce realistic radiographs for a given latent space, in this case a 100-dimensional space normally distributed around 0 with a standard deviation of 1.This method bears resemblance to previous work done on inverting GANS done by BID2, although we are not as concerned with recovering the exact latent representation but rather the ability to recreate images from our dataset. It is suggested in previous work in BID8 that directly training a encoder to reverse the mapping learnt by the generator in a decoupled fashion does not yield good as the encoder never sees any real images during training. By training upon the loss between the real input and generated output images we overcome this. We further establish the utility of this encoder by using encoded latent representations to predict outcomes on unseen datasets, including one not from our institution. We achieve this without retraining our encoder on these unseen datasets, suggesting that the encoder has learnt useful features about chest radiographs in general. Our primary contribution in this paper however is not the inversion of the generator but rather the ability to generate useful visual rationales. For each prediction of the model we generate a corresponding visual rationale with a target class different to the original prediction. We display some examples of the rationales this method produces and inspect these manually to check if these are similar to our understanding of how to interpret these images. The ability to autoencode inputs is essential to our rationale generation although we have not explored in-depth in this paper the effect of different autoencoding algorithms (for instance variational autoencoders) upon the quality of the generated rationales, as our initial experiments with variational and vanilla autoencoders were not able to reconstruct the level of detail required. For chest radiographs, common signs of heart failure are an enlarged heart or congested lung fields, which appear as increased opacities in the parts of the image corresponding to the lungs. The rationales generated by the normally trained classifier in FIG0 to be consistent with features described in the medical literature while the contaminated classifier is unable to generate these rationales. We also demonstrate the generation of rationales with the MNIST dataset where the digit 9 is transformed into 4 while retaining the appearance of the original digit. We can see that the transformation generally removes the upper horizontal line of the 9 to convert this into a 4. Interestingly, some digits are not successfully converted. Even with different permutations of delta and gamma weights in Algorithm 2 some digits remain resistant to conversion. We hypothesize that this may be due to the relative difficulty of the chest radiograph dataset compared to MNIST -leading to the extreme confidence of the MNIST model that some digits are not the target class. 
This may cause vanishingly small gradients in the target class prediction, preventing gradient descent from achieving the target class. We compare the visual rationale generated by our method to various other methods including integrated gradients, saliency maps, occlusion sensitivity as well as LIME in Fig. 6.All of these methods share similarities in that they attempt to perturb the original image to examine the impact of changes in the image on the final prediction, thereby identifying the most salient elements. In the saliency map approach, each individual pixel is perturbed, while in the occlusion sensitivity method, squares of the image are perturbed. LIME changes individual superpixels in an image by changing all the pixels in a given superpixel to the average value. This approach fails on images where the superpixel classification is too coarse, or where the classification is not dependent on high resolution details within the superpixel. To paraphrase BID12, attribution or explanation for humans relies upon counterfactual intuition -or altering the image to remove the cause of the predicted outcome. Model agnostic methods such as gradient based methods, while fulfilling the sensitivity and implementation invariance axioms, do not acknowledge the natural structure of the inputs. For instance, this often leads to noisy pixel-wise attribution as seen in Fig. 6. This does not fit well with our human intuition as for many images, large continuous objects dominate our perception and we often do not expect attributions to differ drastically between neighbouring pixels. Fundamentally these other approaches suffer from their inability to perturb the image in a realistic fashion, whereas our approach perturbs the image's latent representation, enabling each perturbed image to look realistic as enforced by the GAN's constraints. Under the manifold hypothesis, natural images lie on a low dimensional manifold embedded in pixel space. Our learned latent space serves as a approximate but useful coordinate system for the manifold of natural images. More specifically the image (pardon the pun) of the generator G [R d] is approximately the set of'natural images' (in this case radiographs) and small displacements in latent space around a point z closely map into the tangent space of natural images around G(z). Performing optimization in latent space is implicitly constraining the solutions to lie on the manifold of natural images, which is why our output images remain realistic while being modified under almost the same objective used for adversarial image generation. Hence, our method differs from these previously described methods as it generates high resolution rationales by switching the predicted class of an input image while observing the constraints of the input structure. This can be targeted at particular classes, enabling us answer the question posed to our trained model -'Why does this image represent Class A rather than Class B?'There are obvious limitations in this paper in that we do not have a rigorous definition of what interpretability entails, as pointed out by BID12. An intuitive understanding of the meaning of interpretability can be obtained from its colloquial usage -as when a teacher attempts to teach by example, an interpretation or explanation for each image helps the student to learn faster and generalize broadly without needing specific examples. 
Future work could focus on the measurement of interpretability by judging how much data a second model requires when learning from the predictions and interpretations provided by another pretrained model. Maximizing the interpretability of a model may be related to the ability of models to transfer information between each other, facilitating learning without resorting to the use of large-scale datasets. Such an approach could help evaluate non-image based explanations such as sentences, as described in BID5. Other technical limitations include the difficulty of training a GAN capable of generating realistic images larger than 128 by 128 pixels. This limits the performance of subsequent classifiers in identifying small features. This can be seen in the poor performance of our model in detecting nodules, a relatively small feature, compared to the baseline implementation on the NIH dataset. In conclusion, we describe a method of semi-supervised learning and apply this to chest radiographs, using local data as well as recent datasets. We show that this method can be leveraged to generate visual rationales and demonstrate these qualitatively on chest radiographs as well as the well-known MNIST set.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B13EC5u6W
We propose a method of using GANs to generate high quality visual rationales to help explain model predictions.
Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions. This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contrast to standard ones (typically Gaussians). We model such distributions with Generative Neural Networks (GNNs) and introduce a new ES algorithm that leverages their expressiveness to accelerate the stochastic search. Because it acts as a plug-in, our approach allows us to augment virtually any standard ES algorithm with flexible search distributions. We demonstrate the empirical advantages of this method on a diversity of objective functions. We are interested in the global minimization of a black-box objective function, only accessible through a zeroth-order oracle. In many instances of this problem the objective is expensive to evaluate, which excludes brute force methods as a reasonable means of optimization. Also, as the objective is potentially non-convex and multi-modal, its global optimization cannot be done greedily but requires a careful balance between exploitation and exploration of the optimization landscape (the surface defined by the objective). The family of algorithms used to tackle such a problem is usually dictated by the cost of one evaluation of the objective function (or equivalently, by the maximum number of function evaluations that are reasonable to make) and by a precision requirement. For instance, Bayesian Optimization targets problems of very high evaluation cost, where the global minimum must be approximately discovered after a few hundreds of function evaluations. When aiming for a higher precision and hence having a larger budget (e.g. thousands of function evaluations), a popular class of algorithms is Evolutionary Strategies (ES), a family of heuristic search procedures. ES algorithms rely on a search distribution, whose role is to propose queries of potentially small value of the objective function. This search distribution is almost always chosen to be a multivariate Gaussian. This is notably the case for Covariance Matrix Adaptation Evolution Strategies (CMA-ES), a state-of-the-art ES algorithm made popular in the machine learning community by its good results on hyper-parameter tuning. It is also the case for Natural Evolution Strategies (NES) algorithms, which were recently used for direct policy search in Reinforcement Learning (RL) and shown to compete with state-of-the-art MDP-based RL techniques. Occasionally, other distributions have been used; e.g. fat-tailed distributions like the Cauchy were shown to outperform the Gaussian for highly multi-modal objectives. We argue in this paper that in ES algorithms, the choice of a standard parametric search distribution (Gaussian, Cauchy, ...) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum. To overcome the limitations of classical parametric search distributions, we propose using flexible distributions generated by bijective Generative Neural Networks (GNNs), with computable and differentiable log-probabilities. We discuss why common existing optimization methods in ES algorithms cannot be directly used to train such models and design a tailored algorithm that efficiently trains GNNs for an ES objective.
We show how this new algorithm can readily incorporate existing ES algorithms that operate on simple search distributions, like the Gaussian. On a variety of objective functions, we show that this extension can significantly accelerate ES algorithms.
Algorithm 1: Generic ES procedure. Input: zeroth-order oracle on f, distribution π 0, population size λ. Repeat: (Sampling) sample x 1,..., x λ i.i.d. ∼ π t; (Evaluation) evaluate f(x 1),..., f(x λ); (Update) update π t to produce samples x of potentially smaller objective values; until convergence.
We formally introduce the problem and provide background on Evolutionary Strategies in Section 2. We discuss the role of GNNs in generating flexible search distributions in Section 3. We explain why usual algorithms fail to train GNNs for an ES objective and introduce a new algorithm in Section 4. Finally we report experimental results in Section 5. In what follows, the real-valued objective function f is defined over a compact X and π will generically denote a probability density function over X. We consider the global optimization of f: min x∈X f(x). The generic procedure followed by ES algorithms is presented in Algorithm 1. To make the update step tractable, the search distribution is tied to a family of distributions and parametrized by a real-valued parameter vector θ (e.g. the mean and covariance matrix of a Gaussian), and is referred to as π θ. This update step constitutes the main difference between ES algorithms. One principled way to perform that update is to minimize the expected objective value J(θ) = E x∼π θ [f(x)] over samples x drawn from π θ. Indeed, when the search distribution is parametric and tied to a parameter θ, this objective can be differentiated with respect to θ thanks to the log-trick: ∇ θ J(θ) = E x∼π θ [f(x) ∇ θ log π θ (x)]. This quantity can be approximated from samples -it is known as the score-function or REINFORCE estimator, and provides a direction of update for θ. Unfortunately, naively following a stochastic version of the gradient -a procedure called Plain Gradient Evolutionary Strategies (PGES) -is known to be highly ineffective. PGES's main limitation resides in its instability when the search distribution is concentrating, making it unable to precisely locate any local minimum. To improve over the PGES algorithm, it was proposed to descend J(θ) along its natural gradient. More precisely, a trust-region optimization scheme is introduced to limit the instability of PGES, minimizing a linear approximation of J(θ) under a Kullback-Leibler (KL) divergence constraint: min δθ ∇ θ J(θ) T δθ subject to KL(π θ+δθ || π θ) ≤ ε. To avoid solving the trust-region problem analytically, its solution can be approximated by δθ ∝ F θ −1 ∇ θ J(θ), where F θ = E x∼π θ [∇ θ log π θ (x) ∇ θ log π θ (x) T] is the Fisher Information Matrix (FIM) of π θ. The parameter θ is therefore not updated along the negative gradient of J but rather along F θ −1 ∇ θ J(θ), a quantity known as the natural gradient. The FIM F θ is known analytically when π θ is a multivariate Gaussian, and the resulting algorithm, Exponential Natural Evolutionary Strategies (xNES), has been shown to reach state-of-the-art performances on a large ES benchmark.
Figure 2: Example of an undesirable behavior of a Gaussian search distribution. The dashed lines represent density level lines of the search distribution. Because the latter cannot have a curved profile, it is forced to drastically reduce its entropy until it reaches the straight part of the valley.
CMA-ES. Naturally, there exist other strategies to update the search distribution π θ.
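Before turning to these alternative update rules, a minimal sketch of the plain score-function (PGES-style) update just described is given below. It is purely illustrative: an isotropic Gaussian search distribution is assumed, only the mean is adapted, and none of the stabilizing machinery of NES/xNES (natural-gradient preconditioning, fitness shaping, step-size adaptation) is included.

# Illustrative NumPy sketch of the score-function (REINFORCE) ES update for a Gaussian
# search distribution N(mu, sigma^2 I); not an implementation of xNES or CMA-ES.
import numpy as np

def pges_step(f, mu, sigma, lam, lr):
    eps = np.random.randn(lam, mu.size)
    xs = mu + sigma * eps                          # population of lam candidates
    fs = np.array([f(x) for x in xs])
    # grad_mu log N(x; mu, sigma^2 I) = (x - mu) / sigma^2, so the sample estimate of
    # grad_mu J(mu) = E[f(x) grad_mu log pi(x)] is the following average:
    grad_mu = (fs[:, None] * eps).mean(axis=0) / sigma
    return mu - lr * grad_mu                       # descend the expected objective

# Toy usage: minimize a quadratic in dimension 5.
f = lambda x: float(np.sum(x ** 2))
mu = np.random.randn(5)
for _ in range(2000):
    mu = pges_step(f, mu, sigma=0.3, lam=20, lr=0.05)

As noted above, such a plain update becomes unstable once the search distribution concentrates, which is precisely what motivates the natural-gradient update and the alternative strategies discussed next.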
For instance, CMA-ES relies on a variety of heuristic mechanisms like covariance matrix adaptation and evolution paths, but is only defined when π θ is a multivariate Gaussian. Explaining such mechanisms would be out of the scope of this paper, but the interested reader is referred to the work of for a detailed tutorial on CMA-ES. ES implicitly balance the need for exploration and exploitation of the optimization landscape. The exploitation phase consists in updating the search distribution, and exploration happens when samples are drawn from the search distribution's tails. The key role of the search distribution is therefore to produce a support adapted to the landscape's structure, so that new points are likely to improve over previous samples. We argue here that the choice of a given parametric distribution (the multivariate Gaussian distribution being overwhelmingly represented in state-of-the-art ES algorithms) constitutes a potentially harmful implicit constraint for the stochastic search of a global minimum. For instance, a Gaussian distribution is not adapted to navigate a curved valley because of its inability to continuously curve its density. This lack of flexibility will lead it to drastically reduce its entropy, until the curved valley looks locally straight. At this point, the ES algorithm resembles a hill-climber and barely takes advantage of the exploration abilities of the search distribution. An illustration of this phenomenon is presented in Figure 2 on the Rosenbrock function. Another limitation of classical search distribution is their inability to follow multiple hypothesis, that is to explore at the same time different local minima. Even if mixture models can show such flexibility, hyper-parameters like the number of mixtures have optimal values that are impossible to guess a priori. We want to introduce flexible search distributions to overcome these limitations. Such distributions should, despite their expressiveness, be easily trainable. We should also be concerned when designing them with their role in the exploration/exploitation trade off: a search distribution with too much capacity could over-fit some seemingly good samples, leading to premature convergence. To sum-up, we want to design search-distributions that are: • more flexible than classical distributions • yet easily trainable • while keeping control over the exploration / exploitation trade-off In the following section, we carefully investigate the class of Generative Neural Networks (GNNs) to find a parametric class of distributions satisfying such properties. Generative Neural Networks have been studied in the context of density estimation and shown to be able to model complex and highly multimodal distributions . We propose here to leverage their expressiveness for ES, and train them in a principled way thanks to the ES objective: As discussed in Section 2, optimizing J(π) with gradient-based methods is possible through the score-function estimator, which requires to be able to compute and efficiently differentiate the logprobabilities of π. The core idea behind a GNN is to map a latent variable z ∈ Z drawn from a known distribution ν ω to an output variable x = g η (z) where g η is the forward-pass of a neural network. The parameter η represents the weights of this neural network while ω describe the degrees of freedom of the latent space distribution ν ω. We denote θ = (ω, η) and π θ (x) the density of the output variable x. 
For general neural network architectures, it is impossible to compute π θ (x) for samples x drawn from the GNN. This is notably why they are often trained with adversarial methods for sample generation purposes, bypassing the need to compute densities, but at the expense of good density estimation (mode-dropping). An alternative to adversarial methods was proposed with variational auto-encoders, however at the cost of learning two neural networks (an encoder and a decoder). A less computationally expensive method consists in restricting the possible architectures to build bijective GNNs, also known as Normalizing Flows (NF), which allows the exact computation of the distribution's density. Indeed, if g η is a bijection from Z to X with inverse h η = g η −1, the change of variable formula provides a way to compute π θ (x): π θ (x) = ν ω (h η (x)) |det ∂h η (x)/∂x|. To have a tractable density one therefore needs to ensure that the determinant of the Jacobian |∂h η (x)/∂x| is easily computable. Several models satisfying these two properties (i.e. bijectivity and a computable Jacobian) have been proposed for density estimation, and proved their expressiveness despite their relatively simple structure. NFs therefore answer two of our needs when building our new search distribution: flexibility and ease of training. In this work, we will focus on one NF model: the Non-Linear Independent Component Estimation (NICE) model, for its numerical stability and volume-preserving properties. The authors of NICE proposed to build complex yet invertible transformations through the use of additive coupling layers. An additive coupling layer leaves half of its input unchanged, and adds a non-linear transformation of the first half to the second half. More formally, by noting v = [v 1, v 2] the output of a coupling layer and u = [u 1, u 2] its input, one has v 1 = u 1 and v 2 = u 2 + t(u 1), where t is an arbitrarily complex transformation -modelled by a Multi-Layer Perceptron (MLP) with learnable weights and biases. This transformation has unit Jacobian determinant and is easily invertible: u 1 = v 1 and u 2 = v 2 − t(v 1), which only requires a feed-forward pass on the MLP t. The choice of the decomposition u = [u 1, u 2] can be arbitrary, and is performed by applying a binary filter to the input. By stacking additive coupling layers, one can create complex distributions, and the inversion of the resulting mapping is independent of the complexity of the neural networks t. The density of the resulting distribution is readily computable thanks to the inverse transform theorem. The transformation induced by NICE is volume preserving (it has a unitary Jacobian determinant). This is quite desirable in an ES context, as the role of concentrating the distribution on a minimum can be left to the latent space distribution ν ω. The role of the additive coupling layers is therefore only to introduce non-linearities in the inverse transform h η so that the distribution is better adapted to the optimization landscape. The fact that this fit is volume-preserving (every subset of the latent space has an image in the data space with the same probability mass) encourages the search distribution to align its tails with regions of small value of the optimization landscape, which is likely to improve the quality of future exploration steps. The NICE model therefore perfectly fits our needs for a flexible search distribution that is easy to train, and that provides enough control on the exploration / exploitation trade-off.
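For concreteness, the additive coupling layer described above can be sketched in a few lines of NumPy. The particular MLP weights below are arbitrary placeholders (any function t works), and the 4-dimensional input and half/half split are illustrative assumptions, not the configuration used in the paper.

# Illustrative NumPy sketch of a single NICE additive coupling layer (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def t(u1):
    # small one-hidden-layer MLP with a leaky-ReLU-like nonlinearity
    h = W1 @ u1 + b1
    h = np.where(h > 0, h, 0.01 * h)
    return W2 @ h + b2

def coupling_forward(u):
    u1, u2 = u[:2], u[2:]
    return np.concatenate([u1, u2 + t(u1)])   # v1 = u1, v2 = u2 + t(u1)

def coupling_inverse(v):
    v1, v2 = v[:2], v[2:]
    return np.concatenate([v1, v2 - t(v1)])   # u1 = v1, u2 = v2 - t(v1)

u = rng.normal(size=4)
v = coupling_forward(u)
assert np.allclose(coupling_inverse(v), u)    # exact invertibility, unit Jacobian

Because the inverse subtracts exactly what the forward pass added, invertibility holds no matter how complicated t is, and the Jacobian of the whole layer is triangular with unit diagonal, hence volume preserving.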
Other bijective GNN models like the Real-NVP introduce non-volume preserving transformations, which cannot provide such a control. In practice, we observed that using such transformations for ES led to early concentration and premature convergence. We are now equipped with enough tools to use GNNs for ES: an adapted model (NICE) for our search distribution π θ, and an objective to train it with: Here, θ describes jointly the free parameters of the latent distribution ν ω and η, the weights and biases of the MLPs forming the additive coupling layers. We start this section by explaining why existing training strategies based on the objective are not sufficient to truly leverage the flexibility of GNNs for ES, before introducing a new algorithm tailored for this task. We found that the PGES algorithm (naive stochastic gradient descent of with the score-function estimator) applied to the NICE distribution suffers from the same limitations as when applied to the Gaussian; it is inable to precisely locate any local minimum. As for the Gaussian, training the NICE distribution for ES requires employing more sophisticated algorithms -such as NES. However, using the natural gradient for the GNNs distributions is not trivial. First the Fischer Information Matrix F θ is not known analytically and must be estimated via Monte-Carlo sampling, thereby introducing approximation errors. Also, we found that the approximations justifying to follow the descent direction provided by the natural gradient are not adapted to the NICE distribution. Indeed, the assumption behind the NES update is that the loss J(θ) can be (locally) well approximated by the quadratic objective: where γ is a given non-negative Lagrange multiplier. For NICE, given the highly non-linear nature of π θ this approximation is bound to fail even close to the current parameter θ and will lead to spurious updates. A classical technique to avoid such updates is to artificially increase the curvature of the quadratic term, and is known as damping. Practically, this implies using F θ + βI instead of F θ as the local curvature metric, with β a non-negative damping parameter. We found that to ensure continuous decrease of J(θ), and because of its highly non-linear nature when using the GNNs, the damping parameter β has to be set to such high values that the modifications of the search distribution are too small to quickly make progress and by no means reaches state-of-the-art performances. We observed that even if the training of the additive coupling layers is performed correctly (i.e the distribution has the correct shape), high damping of the latent space parameters prevents the distribution from quickly concentrating when a minimum is found. It is unclear how the damping parameter should be adapted to avoid spurious update, while still allowing the distribution to make large step in the latent space and ensure fast concentration when needed. In the following, we present an alternated minimization scheme to bypass the issues raised by natural gradient training for GNN distributions in a ES context. So far, we used the parameter θ to describe both ω and η (respectively, the free parameters of the latent space distribution ν ω and the degrees of freedom of the non-linear mapping g η), and the optimization over all these parameters was performed jointly. 
Separating the roles of ω and η, the initial objective can be rewritten as follows: Therefore, the initial objective can be rewritten as the expected value of samples drawn from the latent distribution, under the objective f • g η -that is, the representation of the objective function f in the latent space. If ν ω is a standard distribution (i.e efficiently trainable with the natural gradient) and f • g η is a well structured function (i.e one for which ν ω is an efficient search distribution), then the single optimization of ω by classical methods (such as the natural gradient) should avoid the limitations discussed in 2.2. This new representation motivates the design of a new training algorithm that optimizes the parameters ω and η separately. Alternating Minimization In the following, we will replace the notation π θ with π ω,η to refer to the NICE distribution with parameter θ = (ω, η). We want to optimize ω and η in an alternate fashion, which means performing the following updates at every step of the ES procedure: This means that at iteration t, samples are drawn from π ωt,ηt and serve to first optimize the latent space distribution parameters ω, and then the additive coupling layers parameters η. For the following iteration, the population is sampled under π ωt+1,ηt+1. The update (11a) of the latent space parameters is naturally derived from the new representation of the initial objective. Indeed, ω can be updated via natural gradient ascent of J(ω, η t) -that is with keeping η = η t fixed. Practically, this therefore reduces to applying a NES algorithm to the latent distribution ν ω on the modified objective function f • g ηt. Once the latent space parameters updated, the coupling layers parameters should be optimized with respect to: At this stage, the only available samples are drawn under π ωt,ηt. To estimate, based on these samples, expectations under π ωt+1,ηt one must use importance propensity scores: The straightforward minimization of this off-line objective is known to lead to degeneracies (, Section 4), and must therefore be regularized. For our application, it is also desirable to make sure that the update η does not undo the progress made in the latent space -in other words, we want to regularize the change in f • g η. To that extent, we adopt a technique proposed in and minimize a modification on the initial objective with clipped propensity weights: clip ε (x) clips the value of x between 1 − and 1 +. The parameter ε is an hyper-parameter that controls the change in distribution, and the program can be efficiently solved via a gradient descent type algorithm, such as Adam . To sum up, we propose optimizing the latent distribution and the coupling layers separately. The latent space is optimized by natural gradient descent, and the coupling layers via an off-policy objective with clipped propensity weights. We call this algorithm GNN-ES for Generative Neural Networks Evolutionary Strategies. Latent space optimization It turns out the GNN-ES can be readily modified to incorporate virtually any existing ES algorithms that operates on the simple distribution ν ω. For instance, if ν ω is set to be a multivariate Gaussian with learnable mean and covariance matrix, the latent space optimization (11a) can be performed by either xNES or CMA-ES. This holds for any standard distribution ν ω and any ES algorithm operating on that distribution. 
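Before returning to the plug-in view just mentioned, here is a rough sketch of how the clipped, importance-weighted update of the coupling-layer parameters could look. It is an assumption-laden illustration rather than the paper's implementation: log_prob is assumed to be a differentiable function returning log π ω,η (x) under the current NICE model (the latent density evaluated at h η (x), since the Jacobian is 1), while samples, scores and old_log_prob are assumed to come from the population(s) that were actually drawn.

# Hypothetical PyTorch sketch of the clipped off-policy GNN update (coupling layers only).
import torch

def gnn_update(log_prob, eta_params, samples, scores, old_log_prob,
               epsilon=0.05, lr=1e-3, n_steps=100):
    # log_prob(samples): log-densities under the current coupling-layer parameters eta;
    # old_log_prob: log-densities under the parameters that generated the samples;
    # eta_params: list of the coupling-layer parameters to be optimized.
    opt = torch.optim.Adam(eta_params, lr=lr)
    for _ in range(n_steps):
        ratio = torch.exp(log_prob(samples) - old_log_prob)         # propensity weights
        clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon)  # limit the change
        loss = (scores * clipped).mean()   # importance-weighted expected objective value
        opt.zero_grad(); loss.backward(); opt.step()

Clipping the weights to [1 − ε, 1 + ε] plays the regularizing role described above: it bounds how far the updated distribution can move from the one that generated the samples, so the coupling layers cannot undo the progress made in the latent space.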
This remark allows us to place GNN-ES in a more general framework and to understand it as a way to improve existing ES algorithm, by providing a principled way to learn complex, non-linear transformations on top of rather standard search distributions (like the Gaussian). In what follows, we will use the GNN prefix in front of existing ES algorithm to describe its augmented version with our algorithm, working as a plug-in. Pseudo-code for this general algorithm can be found in Appendix B. Using historic data ES algorithms typically use small populations of samples to estimate expectations. Such small sample sizes don't allow for enough data exposure for the GNN to build a meaningful transformation g η. To circumvent this problem, we augment the off-line program with samples for past generations thanks to the fused importance sampling estimator . This technique is classical in similar settings like MDP-based reinforcement learning and counterfactual reasoning and proves to be essential for our problem. Formally, for a given horizon T that controls how far we look in the past, this amounts to storing the samples x drawn from π θ t−T +1,..., π θt (as well as their respective scores) in a buffer H T. The objective can then be rewritten as: This technique allows to increase the data exposure of the GNN by using past samples (and therefore does not require additional function evaluations) and to reduce the variance of the off-line estimator of the original expectation . To control the change in distribution, the fused propensity weights can then be clipped in a similar fashion than in the program. Mode preserving properties To achieve improved exploration, the search distribution should align its tails with the level sets of the objective function. This is not guaranteed when performing the update step since the GNN's update could simply move the mean of the search distribution without shaping the tails. One way to encourage the GNN's capacity to be allocated to the tails is to impose a mode-preserving property. If µ denotes the location of a mode of the latent distribution, then the mode of the distribution π θ generated by the NICE model is located in g η (µ) (see Appendix A for the proof). It is therefore easy to build a map f η based on the initial g η that is mode-preserving: where µ t denotes the mode of the latent distribution ν ω at iteration t. Defined as such, f η preserves the mode of the previous search distribution (since f ηt+1 (µ) = f ηt (µ)), is trivially still a bijection and remains volume preserving. Using the push-forward map f η instead of g η, we explicitly push the flexibility brought by the GNN to impact only the tails of the search distribution. As detailed in an ablation study presented in Appendix F, this additional tool turns out to be essential in order to use GNNs for ES. In all that follows, we build the NICE model with three coupling layers. Each coupling layer's nonlinear mapping t is built with a one hidden layer MLP, with 128 neurons and leaky ReLU activation functions. This architecture is kept constant in all our experiments. We present here two-dimensional visualizations of the behavior of a GNN distribution trained with GNN-xNES -the latent distribution is therefore Gaussian. Figure 3a Figure 3b displays the density level lines of the latent distribution, as well as the learned representation of the objective in the latent space. The search distribution is able to have curved density isolines, enabling better exploration. 
In the latent space, the global minimum can be reached without navigating a curved valley. Figures 4a and 4b provide similar visualizations on the Rastrigin function, a highly multimodal but symmetric objective. The GNN lowers the barriers between local minima, making it easier to escape a local minimum to the global minimum. Experimental set-up We present experiments on both unimodal and multimodal objectives for xNES and GNN-xNES. We use the official implementation of xNES 1 with default hyper-parameters (such as the population size λ), both as a baseline and as an inner optimization method for GNNxNES. All experiments are run on the COmparing Continous Optimizers (COCO) platform, a popular framework for comparing black-box optimization algorithms. It namely allows to benchmark different algorithms on translated and rotated versions of the same objectives, in order to evaluate multiple configurations with different global minimum positions. We compare xNES and GNN-xNES on functions from the 2018 Black-Box Optimization Benchmark (BBOB) suite. When comparing these two algorithms, we impose that their initial search distributions are close in order to ensure fair comparison. We insist on the fact that the xNES algorithm has the exact same configuration whether it is used by itself or as an inner-optimization algorithm for GNN-xNES. Further experimental details, including additional hyper-parameters value for GNN-xNES are provided in Appendix C. We run the different algorithms on two unimodal landscapes where we expect GNN search distributions to bring a significant improvement over the Gaussian -as discussed in 2.2. These objectives functions are the Rotated Rosenbrock function (a curved valley with high conditioning) and the Bent Cigar (an asymmetric and curved Cigar function). Extensive details on these objective functions can be found in the BBOB documentation . Results on additional unimodal functions can be found in Appendix E. Performance is measured through Empirical Cumulative Distribution Functions (ECDFs) of the runtime, also known as data profiles (Moré &). Such curves report the fraction of problems solved as a function of the number of objective evaluations. For a given precision ∆, a problem is said to be solved if the best function evaluation made so far is smaller than f (x *) + ∆. We create 200 problems, equally spaced on a log-scale from ∆ = 10 2 to ∆ = 10 −5 and, as in the COCO framework, aggregate them over 15 function instances. Results are presented in Figure 5 for the two benchmark functions and in dimensions d = 2, 5, 10. We now compare the performances of the different algorithms on a collection of three multimodal objectives: the Rastrigin function, the Griewank-Rosenbrock function and the Schwefel function. Extensive details about these objectives can be found in. When using ES algorithms to optimize multimodal functions, it is usual to augment them with restart strategies . When convergence is detected, the search distribution is re-initialized in order to search another part of the landscape, and often the population size is increased. This allows to fairly compared algorithms that converge fast to potentially bad local minima, and algorithms that converges slower to better minima. Their exist a large variety of restart strategies ; as the official implementation of xNES is not equipped with a default one, we trigger a restart whenever the algorithm makes no progress for more than 30 × d iterations. 
The standard deviation of the search distribution is set back to 1, and its mean sampled uniformly within the compact X of interest (defined by the COCO framework). At each restart, the population size of the algorithm is multiplied by 2, as in. This restart strategy is used for both xNES and GNN-xNES. We measure performance as the number of functions evaluations to find an objective value smaller than f (x *) + 10 −5 within a budget of d × 10 5 function evaluations, averaged over 15 function instances. When an algorithm is not able to discover the global minimum within the given budget, we use the maximum number of evaluations as its performance. For visualization purposes, this measure of performance is divided by d 2. Results are reported in Figure 6. On all objectives, and for all dimensions, GNN-xNES discovers (in average) the global minimum faster than xNES. Additional on others multimodal functions are presented in Appendix E. The goal of this section is to present additional comparison between xNES and GNN-xNES on RL-based objective functions -less synthetic than the previously considered BBOB functions. ES algorithms have recently been used for direct policy search in Reinforcement Learning (RL) and shown to reach performances comparable with state-of-the-art MDP-based techniques . Direct Policy Search ignores the MDP structure of the RL environment and rather considers it as a black-box. The search for the optimal policy is performed directly in parameter space to maximize the average reward per trajectory: where p x is the distribution of trajectories induced by the policy (the state-conditional distribution over actions) parametrized by x, and r the rewards generated by the environment. The objective can readily be approximated from samples by simply rolling out M trajectories, and optimized using ES. In our experiments 2, we set M = 10 and optimize deterministic linear policies (as in). In Figures 7a and 7b we report of the GNN-xNES algorithm compared to xNES, when run on the Mujoco locomotion tasks Swimmer and InvertedDoublePendulum, both from the OpenAI Gym . Performance is measured by the average reward per trajectory as a function of the number of evaluations of the objective f. Results are averaged over 5 random seeds (ruling the initialization of the environment and the initial distribution over the policy parameters x). In all three environments, GNN-xNES discovers behaviors of high rewards faster than xNES. In this work, we motivate the use of GNNs for improving Evolutionary Strategies by pinpointing the limitations of classical search distributions, commonly used by standard ES algorithms. We propose a new algorithm that leverages the high flexibility of distributions generated by bijective GNNs with an ES objective. We highlight that this algorithm can be seen as a plug-in extension to existing ES algorithms, and therefore can virtually incorporate any of them. Finally, we show its empirical advantages across a diversity of synthetic objective functions, as well as from objectives coming from Reinforcement Learning. Beyond the proposal of this algorithm, we believe that our work highlights the role of expressiveness in exploration for optimization tasks. This idea could be leverage in other settings where exploration is crucial, such a MDP-based policy search methods. An interesting line of future work could focus on optimizing GNN-based conditional distribution for RL tasks -an idea already developed in;. 
Other possible extensions to our work could focus on investigating first-order and mixed oracles, such as in; Faury et al. (2018 We prove here the fact that if µ denotes the location of the mode of the latent distribution ν ω, then g η (µ) is a mode for π ω,η. Indeed, under reasonable smoothness assumptions, one has that y is a mode for π ω,η if and only if: Since π ω,η (x) = ν ω (h η (x)), this is therefore equivalent to: In the NICE model, we have that ∂hη(x) ∂x = 1 for all x hence the matrix ∂hη(x) ∂x x=y is invertible and its kernel is reduced to the null vector. Therefore: and therefore µ = h η (y) by definition of µ (the only critical point of ν ω). Hence since h −1 η = g η, we have that y = g η (µ) which concludes the proof. We provide below the pseudo-code for the generic algorithm GNN-A-ES, where A is a generic ES algorithm operating on a parametric distribution ν ω. The additional hyper-parameters are the horizon T as well as the clipping constant ε. The function clip (x, lb, ub) clips the input x between a lower-bound lb and an upper-bound ub.: objective function f, distribution ν ω and its related ES algorithm A hyper-parameters: clipping constant ε, NICE model architecture, initial parameters ω 0, initial weights η 0, horizon T, population size λ (Initialization) Initialize NICE MLPs weights and biases with η 0. Let H be a circular buffer of length T × λ while not terminate do (Sampling) //One step ES-based optimization of the latent space Apply A to the latent distribution (GNN update) * //Many-steps gradient based optimization of the GNN η t+1 η x,f ∈H f · clip Algorithm 2 does not detail the mode-preserving addition for the sake of readability and clarity. We provide additional details on this procedure here. Let µ t be the mode of the latent distribution ν ωt. At the (Initialization) step, set α 0 = g η0 (µ 0) where g η (·) is the push-forward map on the NICE model described in Section 3.2. For all round t ≥ 1, let f η (z) = g η (z) − g η (µ t) + α t. The variable α t represent the push forward mapping of the latent distribution's mean under the current model. Every time the latent space is updated -the (ES update) step, let α t+1 = f ηt (µ t+1). Then, for the (GNN update), optimize the forward-map f η (z) = g η (z) − g η (µ t+1) + α t+1. After this update, we have f ηt+1 (µ t+1) = α t+1 = f ηt (µ t+1), which means that the mode of the search distribution (which is the image of the latent distribution mode) has not been impacted by the GNN update. C EXPERIMENTAL DETAILS Baselines We use xNES with its default (adapted) hyper-parameters (described in) for both its baselines versions and its inner optimization parts in GNN-xNES. The population size λ is one such hyper-parameters, and is therefore set to λ = 4 + 3 log(d). Also, as it is classically done in ES algorithms, we use a rank-based fitness shaping, designed to make the algorithm invariant with respect to order-preserving cost transformations. We use the same fitnessshaping function as in. Across all experiments, we use the same hyper-parameters for GNN-xNES without fine tuning for each tasks. We use three coupling layers, each with a single hidden layer MLP with 128 hidden neurons and Leaky ReLU activations. The MLPs are initialized via Glorot initialization, and the clipping constant is set to ε = 0.05. The history size T was determined experimentally, and set to T = 3 * (1 + log(d)). When restarts are used, this history size is divided by the numbers of restart so far (as the population size grows larger). 
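The mode-preserving bookkeeping described in Appendix B above can also be made concrete with a short sketch. The generator g_eta and the latent mode are assumed given as callables/arrays; this is an illustration of the wrapper f η (z) = g η (z) − g η (µ t) + α t, not the authors' code.

# Illustrative sketch of the mode-preserving push-forward map wrapped around the generator.
import numpy as np

class ModePreservingMap:
    def __init__(self, g_eta, mu0):
        self.g = g_eta                      # current push-forward map (NICE generator)
        self.mu = np.asarray(mu0)           # mode of the latent distribution
        self.alpha = self.g(self.mu)        # image of the latent mode, alpha_0 = g(mu_0)

    def __call__(self, z):
        # f(z) = g(z) - g(mu) + alpha: the latent mode always maps to the stored alpha,
        # so GNN updates only reshape the tails, never the mode of the search distribution.
        return self.g(z) - self.g(self.mu) + self.alpha

    def on_latent_update(self, new_mu):
        # (ES update) step: record where the new latent mode currently maps to.
        self.alpha = self(np.asarray(new_mu))
        self.mu = np.asarray(new_mu)

    def on_gnn_update(self, new_g_eta):
        # (GNN update) step: swap in the new coupling layers; f stays anchored at (mu, alpha).
        self.g = new_g_eta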
Every synthetic objective we used in this work was taken from the BBOB2019 benchmark dataset. Their expressions as well as additional details on the framework can be found in Hansen et al. (2010; 2016). At the beginning of each experiment, we set the Gaussian search distribution (for xNES) and the Gaussian latent distribution (for GNN-xNES) to a standard normal, with a mean uniformly sampled within the compact X of interest (defined by the COCO framework). C.3 RL ENVIRONMENTS. Table 1 provides details on the RL environments used to compare GNN-xNES and xNES, such as the dimensions of the state space S and action space A, the number d of the policy's degrees of freedom and the maximum number of steps m per trajectory. At the beginning of each experiment, we set the Gaussian search distribution (for xNES) and the Gaussian latent distribution (for GNN-xNES) to a standard normal with zero mean. In this particular case, where the function evaluations are noisy, we kept the default population size of the xNES algorithm.
Table 2: Mean number of restarts needed to discover the global minimum on the Rastrigin function.
Algorithm     mean(# restarts), d=2     mean(# restarts), d=5     mean(# restarts), d=10
xNES          2.3                       2.7                       3.4
GNN-xNES      1.3                       2.5                       2.9
The GNN search distribution is able to efficiently fit each optimization landscape, without having to reduce its entropy like a multivariate normal would. We present here some additional results on unimodal and multimodal synthetic functions. Figure 9 presents ECDF curves obtained on the Attractive Sector function, a highly asymmetrical function around its global minimum. On such a function, GNN-xNES seems to accelerate xNES in small dimensions, however this speed-up disappears in higher dimensions. Figure 10 presents results on the Rosenbrock function (without random rotations). Again, GNN-xNES accelerates the xNES algorithm. Figure 11 presents results on the multimodal functions Gallagher's Gaussian 101 Peaks and Gallagher's Gaussian 21 Peaks. Again, GNN-xNES discovers the global minimum faster (on average) than xNES. In our multimodal experiments, we used simulated restarts as a fair means of comparing the different algorithms (this is common practice in order to fairly compare algorithms that converge fast to potentially bad local minima with algorithms that converge slowly to the global minimum). Even if the empirical results show that GNN-xNES accelerates xNES in the discovery of the global minimum, this does not prove that GNN-xNES leverages the flexibility of the GNN to detect the global minimum when xNES misses it. In an attempt to show that this is indeed the case, we report in Table 2 the number of restarts needed by both GNN-xNES and xNES to discover the global minimum on the Rastrigin function (averaged over the 15 randomly initialized runs). For this instance, GNN-xNES consistently discovers the global minimum with fewer restarts than xNES.
One can however observe that the performance boost brought by the GNN extension is milder for GNN-CMA-ES than for GNN-xNES. We suspect that this is due to the use of cumulation via an evolution path in CMA-ES, which essentially introduces a momentum-like update when optimizing the latent distribution. While using an evolution path makes a lot of sense when optimizing a stationary objective, it can be quite harmful for non-stationary ones. We therefore believe that the cumulation step in CMA-ES (for the latent distribution) and the GNN optimization (which makes the objective optimized by CMA-ES in the latent space non-stationary) can lead to conflicting updates and might hinder the benefits brought by the GNN's additional flexibility. Designing a GNN update strategy that complies with the use of evolution paths could therefore be a way of further improving GNN-CMA-ES, and is left for future work.

We present here an ablation study for two additional tools introduced after the alternating optimization view: the mode-preserving extension and the history augmentation. Figure 13 presents ECDF curves on the Rosenbrock, Rotated Rosenbrock and Bent Cigar functions in 2D for a version of GNN-xNES that doesn't use history but only the current population. Using history, and therefore exposing the GNN to larger datasets, improves the procedure. Figure 14 presents similar results for a version of GNN-xNES without the mode-preserving property. Again, one can notice that ensuring that the GNN training is mode-preserving is crucial to improving the experimental results.

Figure 14: ECDF curves for xNES, GNN-xNES and GNN-xNES-nmp, which is not mode preserving. Ensuring that the training of the GNN doesn't impact the mode of the search distribution improves the optimization.
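For reference, ECDF curves of the kind reported above are typically computed, in the COCO convention, as the fraction of (run, target) pairs solved within a given evaluation budget. The snippet below is a schematic version of that computation, not the exact COCO post-processing, and the sample hitting times in the usage example are purely illustrative.

```python
import numpy as np

def ecdf_curve(evals_to_target, budgets):
    """Fraction of (run, target) pairs whose target precision was reached
    within each evaluation budget; np.inf marks runs that never hit it.
    Schematic sketch only."""
    hits = np.asarray(evals_to_target, dtype=float)
    return np.array([(hits <= b).mean() for b in budgets])

# Hypothetical usage on two algorithms evaluated against the same targets:
budgets = np.logspace(0, 5, num=50)
curve_xnes = ecdf_curve([120, 340, np.inf, 90, 2100], budgets)
curve_gnn_xnes = ecdf_curve([80, 260, 1500, 75, 1900], budgets)
```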
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJlDDnVKwS
We propose a new algorithm leveraging the expressiveness of Generative Neural Networks to improve Evolutionary Strategies algorithms.
We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, ing in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers. For embedded applications of machine learning, we seek models that are as accurate as possible given constraints on size and on latency at inference time. For many neural networks, the parameters and computation are concentrated in two basic building blocks:1. Convolutions. These tend to dominate in, for example, image processing applications.2. Dense matrix multiplications (GEMMs) as found, for example, inside fully connected layers or recurrent layers such as GRU and LSTM. These are common in speech and natural language processing applications. These two building blocks are the natural targets for efforts to reduce parameters and speed up models for embedded applications. Much work on this topic already exists in the literature. For a brief overview, see Section 2.In this paper, we focus only on dense matrix multiplications and not on convolutions. Our two main contributions are:1. Trace norm regularization: We describe a trace norm regularization technique and an accompanying training methodology that enables the practical training of models with competitive accuracy versus number of parameter trade-offs. It automatically selects the rank and eliminates the need for any prior knowledge on suitable matrix rank. We explore the importance of optimizing for low batch sizes in on-device inference, and we introduce kernels 1 for ARM processors that vastly outperform publicly available kernels in the low batch size regime. These two topics are discussed in Sections 3 and 4, respectively. Although we conducted our experiments and report in the context of large vocabulary continuous speech recognition (LVCSR) on embedded devices, the ideas and techniques are broadly applicable to other deep learning networks. Work on compressing any neural network for which large GEMMs dominate the parameters or computation time could benefit from the insights presented in this paper. Our work is most closely related to that of, where low rank factored acoustic speech models are similarly trained by initializing weights from a truncated singular value decomposition (SVD) of pretrained weight matrices. This technique was also applied to speech recognition on mobile devices BID26. We build on this method by adding a variational form of trace norm regularization that was first proposed for collaborative prediction BID24 and also applied to recommender systems BID13 ). The use of this technique with gradient descent was recently justified theoretically BID6. argue that trace norm regularization could provide a sensible inductive bias for neural networks. 
To the best of our knowledge, we are the first to combine the training technique of with variational trace norm regularization. Low rank factorization of neural network weights in general has been the subject of many other works BID7 BID22 BID2 BID14. Some other approaches for dense matrix compression include sparsity BID15 BID19, hash-based parameter sharing BID3, and other parameter-sharing schemes such as circulant, Toeplitz, or more generally low-displacement-rank matrices BID23 BID17. BID14 explore splitting activations into independent groups. Doing so is akin to using block-diagonal matrices. The techniques for compressing convolutional models are different and beyond the scope of this paper. We refer the interested reader to, e.g., BID8; BID10 and references therein. Low rank factorization is a well studied and effective technique for compressing large matrices. In, low rank models are trained by first training a model with unfactored weight matrices (we refer to this as stage 1), and then initializing a model with factored weight matrices from the truncated SVD of the unfactored model (we refer to this as warmstarting a stage 2 model from a stage 1 model). The truncation is done by retaining only as many singular values as required to explain a specified percentage of the variance. If the weight matrices from stage 1 had only a few nonzero singular values, then the truncated SVD used for warmstarting stage 2 would yield a much better or even error-free approximation of the stage 1 matrix. This suggests applying a sparsity-inducing 1 penalty on the vector of singular values during stage 1 training. This is known as trace norm regularization in the literature. Unfortunately, there is no known way of directly computing the trace norm and its gradients that would be computationally feasible in the context of large deep learning models. Instead, we propose to combine the two-stage training method of with an indirect variational trace norm regularization technique BID24 BID6. We describe this technique in more detail in Section 3.1 and report experimental in Section 3.2. First we introduce some notation. Let us denote by || · || T the trace norm of a matrix, that is, the sum of the singular values of the matrix. The trace norm is also referred to as the nuclear norm or the Schatten 1-norm in the literature. Furthermore, let us denote by || · || F the Frobenius norm of a matrix, defined as DISPLAYFORM0 The Frobenius norm is identical to the Schatten 2-norm of a matrix, i.e. the 2 norm of the singular value vector of the matrix. The following lemma provides a variational characterization of the trace norm in terms of the Frobenius norm. Lemma 1 BID12; BID6 ). Let W be an m × n matrix and denote by σ its vector of singular values. Then DISPLAYFORM1 where the minimum is taken over all U: m×min(m, n) and V: min(m, n)×n such that W = U V. Furthermore, if W =Ũ ΣṼ * is a singular value decomposition of W, then equality holds in for the choice U =Ũ √ Σ and V = √ ΣṼ *.The procedure to take advantage of this characterization is as follows. First, for each large GEMM in the model, replace the m × n weight matrix W by the product W = U V where U: m × min(m, n) and V: min(m, n) × n. Second, replace the original loss function (W) by DISPLAYFORM2 where λ is a hyperparameter controlling the strength of the approximate trace norm regularization. 
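As an illustration of how the variational penalty in the modified loss can be implemented, below is a short PyTorch sketch of a factored linear layer with the (1/2)(||U||_F^2 + ||V||_F^2) term. The layer name, initialization, and method names are ours; this is not taken from any released code.

```python
import torch
import torch.nn as nn

class FactoredLinear(nn.Module):
    """Linear layer with W = U V and a variational trace norm penalty.

    During stage 1 the inner rank r is kept at min(m, n); names and
    initialization are illustrative.
    """

    def __init__(self, n_in, n_out):
        super().__init__()
        r = min(n_in, n_out)
        self.U = nn.Parameter(0.01 * torch.randn(n_out, r))
        self.V = nn.Parameter(0.01 * torch.randn(r, n_in))

    def forward(self, x):
        return x @ (self.U @ self.V).t()

    def trace_norm_penalty(self):
        # 0.5 * (||U||_F^2 + ||V||_F^2) upper-bounds ||U V||_T by Lemma 1.
        return 0.5 * (self.U.pow(2).sum() + self.V.pow(2).sum())

# Stage-1 objective, schematically:
#   loss = task_loss + lam * layer.trace_norm_penalty()
```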
Proposition 1 in BID6 guarantees that minimizing the modified loss equation FORMULA2 is equivalent to minimizing the actual trace norm regularized loss: DISPLAYFORM3 In Section 3.2.1 we show empirically that use of the modified loss is indeed highly effective at reducing the trace norm of the weight matrices. To summarize, we propose the following basic training scheme:• Stage 1:-For each large GEMM in the model, replace the m×n weight matrix W by the product W = U V where U: m × r, V: r × n, and r = min(m, n). -Replace the original loss function (W) by DISPLAYFORM4 where λ is a hyperparameter controlling the strength of the trace norm regularization.-Train the model to convergence. DISPLAYFORM5 -For the trained model from stage 1, recover W = U V by multiplying the two trained matrices U and V.-Train low rank models warmstarted from the truncated SVD of W. By varying the number of singular values retained, we can control the parameter versus accuracy trade-off. One modification to this is described in Section 3.2.3, where we show that it is actually not necessary to train the stage 1 model to convergence before switching to stage 2. By making the transition earlier, training time can be substantially reduced. We report here the of our experiments related to trace norm regularization. Our baseline model is a forward-only Deep Speech 2 model, and we train and evaluate on the widely used Wall Street Journal (WSJ) speech corpus. Except for a few minor modifications described in Appendix B, we follow closely the original paper describing this architecture BID1, and we refer the reader to that paper for details on the inputs, outputs, exact layers used, training methodology, and so on. For the purposes of this paper, suffice it to say that the parameters and computation are dominated by three GRU layers and a fully connected layer. It is these four layers that we compress through low-rank factorization. As described in Appendix B.2, in our factorization scheme, each Figure 1: CER dependence on λ rec and λ nonrec for trace norm regularization (left) and DISPLAYFORM0 GRU layer involves two matrix multiplications: a recurrent and a non-recurrent one. For a simple recurrent layer, we would write DISPLAYFORM1 For a GRU layer, there are also weights for reset and update gates, which we group with the recurrent matrix. See Appendix B.2 for details and the motivation for this split. Since our work focuses only on compressing acoustic models and not language models, the error metric we report is the character error rate (CER) rather than word error rate (WER). As the size and latency constraints vary widely across devices, whenever possible we compare techniques by comparing their accuracy versus number of parameter trade-off curves. All CERs reported here are computed on a validation set separate from the training set. In this section, we investigate the effects of training with the modified loss function in. For simplicity, we refer to this as trace norm regularization. As the WSJ corpus is relatively small at around 80 hours of speech, models tend to benefit substantially from regularization. To make comparisons more fair, we also trained unfactored models with an 2 regularization term and searched the hyperparameter space just as exhaustively. For both trace norm and 2 regularization, we found it beneficial to introduce separate λ rec and λ nonrec parameters for determining the strength of regularization for the recurrent and non-recurrent weight matrices, respectively. 
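The stage-1 to stage-2 warmstart summarized above can be sketched in a few lines of numpy: recover W = UV from the stage-1 factors, truncate its SVD at a variance-explained threshold, and split the retained factors. Measuring variance as the cumulative share of squared singular values is our assumption, and all names are illustrative.

```python
import numpy as np

def truncated_svd_warmstart(U, V, variance_explained=0.9):
    """Warmstart stage-2 factors from the truncated SVD of W = U V."""
    W = U @ V
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    share = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(share, variance_explained)) + 1
    root = np.sqrt(s[:r])
    U2 = u[:, :r] * root            # shape (m, r)
    V2 = root[:, None] * vt[:r, :]  # shape (r, n)
    return U2, V2
```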
In addition to λ rec and λ nonrec in initial experiments, we also roughly tuned the learning rate. Since the same learning rate was found to be optimal for nearly all experiments, we just used that for all the experiments reported in this section. The dependence of final CER on λ rec and λ nonrec is shown in Figure 1. Separate λ rec and λ nonrec values are seen to help for both trace norm and 2 regularization. However, for trace norm regularization, it appears better to fix λ rec as a multiple of λ nonrec rather than tuning the two parameters independently. The first question we are interested in is whether our modified loss is really effective at reducing the trace norm. As we are interested in the relative concentration of singular values rather than their absolute magnitudes, we introduce the following nondimensional metric. Definition 1. Let W be a nonzero m × n matrix with d = min(m, n) ≥ 2. Denote by σ the d-dimensional vector of singular values of W. Then we define the nondimensional trace norm coefficient of W as follows: We show in Appendix A that ν is scale-invariant and ranges from 0 for rank 1 matrices to 1 for maximal-rank matrices with all singular values equal. Intuitively, the smaller ν(W), the better W can be approximated by a low rank matrix. DISPLAYFORM0 As shown in Figure 2, trace norm regularization is indeed highly effective at reducing the nondimensional trace norm coefficient compared to 2 regularization. At very high regularization strengths, 2 regularization also leads to small ν values. However, from Figure 1 it is apparent that this comes at the expense of relatively high CERs. As shown in Figure 3, this translates into requiring a much lower rank for the truncated SVD to explain, say, 90 % of the variance of the weight matrix for a given CER. Although a few 2 -regularized models occasionally achieve low rank, we observe this only at relatively high CER's and only for some of the weights. Note also that some form of regularization is very important on this dataset. The unregularized baseline model (the green points in Figure 3) achieves relatively low CER. In this section, we report the of stage 2 experiments warmstarted from either trace norm or L 2 regularized stage 1 models. For each regularization type, we took the three best stage 1 models (in terms of final CER: all were below 6.8) and used the truncated SVD of their weights to initialize the weights of stage 2 models. By varying the threshold of variance explained for the SVD truncation, each stage 1 model ed into multiple stage 2 models. The stage 2 models were trained without regularization (i.e., λ rec = λ nonrec = 0) and with the initial learning rate set to three times the final learning rate of the stage 1 model. FIG1, the best models from either trace norm or L 2 regularization exhibit similar accuracy versus number of parameter trade-offs. For comparison, we also warmstarted some stage 2 models from an unregularized stage 1 model. These models are seen to have significantly lower accuracies, accentuating the need for regularization on the WSJ corpus. In the previous sections, we trained the stage 1 models for 40 epochs to full convergence and then trained the stage 2 models for another 40 epochs, again to full convergence. Since the stage 2 models are drastically smaller than the stage 1 models, it takes less time to train them. Hence, shifting the stage 1 to stage 2 transition point to an earlier epoch could substantially reduce training time. 
In this section, we show that it is indeed possible to do so without hurting final accuracy. Specifically, we took the stage 1 trace norm and 2 models from Section 3.2.1 that ed in the best stage 2 models in Section 3.2.2. In that section, we were interested in the parameters vs accuracy trade-off and used each stage 1 model to warmstart a number of stage 2 models of different sizes. In this section, we instead set a fixed target of 3 M parameters and a fixed overall training budget of 80 epochs but vary the stage 1 to stage 2 transition epoch. For each of the stage 2 runs, we initialize the learning rate with the learning rate of the stage 1 model at the transition epoch. So the learning rate follows the same schedule as if we had trained a single model for 80 epochs. As before, we disable all regularization for stage 2. The are shown in FIG3. Based on the left panel, it is evident that we can lower the transition epoch number without hurting the final CER. In some cases, we even see marginal CER improvements. For transition epochs of at least 15, we also see slightly better for trace norm than 2. In the right panel, we plot the convergence of CER when the transition epoch is 15. We find that the trace norm model's CER is barely impacted by the transition whereas the huge jump in CER at the transition epoch. Furthermore, the plot suggests that a total of 60 epochs may have sufficed. However, the savings from reducing stage 2 epochs are negligible compared to the savings from reducing the transition epoch. With low rank factorization techniques similar 2 to those described in Section 3, we were able to train large vocabulary continuous speech recognition (LVCSR) models with acceptable numbers of parameters and acceptable loss of accuracy compared to a production server model (baseline). TAB1 shows the baseline along with three different compressed models with much lower number of parameters. The tier-3 model employs the techniques of Sections B.4 and B.3. Consequently, it runs significantly faster than the tier-1 model, even though they have a similar number of parameters. Unfortunately, this comes at the expense of some loss in accuracy. Although low rank factorization significantly reduces the overall computational complexity of our LVCSR system, we still require further optimization to achieve real-time inference on mobile or embedded devices. One approach to speeding up the network is to use low-precision 8-bit integer representations for weight matrices and matrix multiplications (the GEMM operation in BLAS terminology). This type of quantization after training reduces both memory as well as computation requirements of the network while only introducing 2% to 4% relative increase in WER. Quantization for embedded speech recognition has also been previously studied in BID25, and it may be possible to reduce the relative WER increase by quantizing the forward passes during training. As the relative WER losses from compressing the acoustic and language models were much larger for us, we did not pursue this further for the present study. where A is a random matrix with dimension 6144 × 320, and x is a random matrix with dimension 320× batch size. All matrices are in unsigned 8-bit integer format. To perform low precision matrix multiplications, we originally used the gemmlowp library, which provides state-of-the-art low precision GEMMs using unsigned 8-bit integer values (. However, gemmlowp's approach is not efficient for small batch sizes. 
Our application, LVCSR on embedded devices with single user, is dominated by low batch size GEMMs due to the sequential nature of recurrent layers and latency constraints. This can be demonstrated by looking at a simple RNN cell which has the form: DISPLAYFORM0 This cell contains two main GEMMs: The first, U h t−1, is sequential and requires a GEMM with batch size 1. The second, W x t, can in principle be performed at higher batch sizes by batching across time. However, choosing a too large batch sizes can significantly delay the output, as the system needs to wait for more future context. In practice, we found that batch sizes higher than around 4 ed in too high latencies, negatively impacting user experience. This motivated us to implement custom assembly kernels for the 64-bit ARM architecture (AArch64, also known as ARMv8 or ARM64) to further improve the performance of the GEMMs operations. We do not go through the methodological details in this paper. Instead, we are making the kernels and implementation details available at https://github.com/paddlepaddle/farm. FIG4 compares the performance of our implementation (denoted by farm) with the gemmlowp library for matrix multiplication on iPhone 7, iPhone 6, and Raspberry Pi 3 Model B. The farm kernels are significantly faster than their gemmlowp counterparts for batch sizes 1 to 4. The peak single-core theoretical performance for iPhone 7, iPhone 6, and Raspberry Pi 3 are 56.16, 22.4 and 9.6 Giga Operations per Second, respectively. The gap between the theoretical and achieved values are mostly due to kernels being limited by memory bandwidth. For a more detailed analysis, we refer to the farm website. In addition to low precision representation and customized ARM kernels, we explored other approaches to speed up our LVCSR system. These techniques are described in Appendix B.Finally, by combining low rank factorization, some techniques from Appendix B, int8 quantization and the farm kernels, as well as using smaller language models, we could create a range of speech recognition models suitably tailored to various devices. These are shown in TAB2. We worked on compressing and reducing the inference latency of LVCSR speech recognition models. To better compress models, we introduced a trace norm regularization technique and demonstrated its potential for faster training of low rank models on the WSJ speech corpus. To reduce latency at inference time, we demonstrated the importance of optimizing for low batch sizes and released optimized kernels for the ARM64 platform. Finally, by combining the various techniques in this paper, we demonstrated an effective path towards production-grade on-device speech recognition on a range of embedded devices. Figure 7: Contours of ||σ|| 1 and ||σ|| 2. ||σ|| 2 is kept constant at σ. For this case, ||σ|| 1 can vary from σ to √ 2σ. In this section, we describe some of the properties of the non-dimensional trace norm coefficient defined in Section 3.1. DISPLAYFORM0 (iii) ν(W) = 0 if and only if W has rank 1.(iv) ν(W) = 1 if and only if W has maximal rank and all singular values are equal. Proof. Since we are assuming W is nonzero, at least one singular value is nonzero and hence ||σ|| 2 = 0. Property (i) is immediate from the scaling property ||cσ|| = |c| · ||σ|| satisfied by all norms. To establish the other properties, observe that we have DISPLAYFORM1 The first inequality holds since singular values are nonnegative, and the inequality is strict unless σ i or σ j vanishes. 
The second inequality comes from an application of Jensen's inequality and is strict unless σ i = σ j. Thus, replacing (σ i, σ j) by (σ i + σ j, 0) preserves ||σ|| 1 while increasing ||σ|| 2 unless one of σ i or σ j is zero. Similarly, replacing (σ i, σ j) by (DISPLAYFORM2 2 σ j) preserves ||σ|| 1 while decreasing ||σ|| 2 unless σ i = σ j. By a simple argument by contradiction, it follows that the minima occur for σ = (σ 1, 0, . . ., 0), in which case ν(W) = 0 and the maxima occur for σ = (σ 1, . . ., σ 1), in which case ν(W) = 1.We can also obtain a better intuition about the minimum and maximum of ν(W) by looking at the 2D case visualized in Figure 7. For a fixed ||σ|| 2 = σ, ||σ|| 1 can vary from σ to √ 2σ. The minimum ||σ|| 1 happens when either σ 1 or σ 2 are zero. For these values ||σ|| 2 = ||σ|| 1 and as a ν(W) = 0. Similarly, the maximum ||σ|| 1 happens for σ 1 = σ 2, ing in ν(W) = 1. We describe here a few preliminary insights that informed our choice of baseline model for the experiments reported in Sections 3 and 4.Since the target domain is on-device streaming speech recognition with low latency, we chose to focus on Deep Speech 2 like models with forward-only GRU layers BID1. Across several data sets and model architectures, we consistently found that the sizes of the recurrent layers closer to the input could be shrunk without affecting accuracy much. A related phenomenon was observed in: When doing low rank approximations of the acoustic model layers using SVD, the rank required to explain a fixed threshold of explained variance grows with distance from the input layer. To reduce the number of parameters of the baseline model and speed up experiments, we thus chose to adopt growing GRU dimensions. Since the hope is that the compression techniques studied in this paper will automatically reduce layers to a near-optimal size, we chose to not tune these dimensions, but simply picked a reasonable affine increasing scheme of 768, 1024, 1280 for the GRU dimensions, and dimension 1536 for the final fully connected layer. For the recurrent layers, we employ the Gated Recurrent Unit (GRU) architecture proposed in BID5, where the hidden state h t is computed as follows: DISPLAYFORM0 where σ is the sigmoid function, z and r are update and reset gates respectively, U z, U r, U h are the three recurrent weight matrices, and W z, W r, W h are the three non-recurrent weight matrices. We consider here three ways of performing weight sharing when doing low rank factorization of the 6 weight matrices.1. Completely joint factorization. Here we concatenate the 6 weight matrices along the first dimension and apply low rank factorization to this single combined matrix. 2. Partially joint factorization. Here we concatenate the 3 recurrent matrices into a single matrix U and likewise concatenate the 3 non-recurrent matrices into a single matrix W. We then apply low rank factorization to each of U and W separately. 3. Completely split factorization. Here we apply low rank factorization to each of the 6 weight matrices separately. In BID14, the authors opted for the LSTM analog of completely joint factorization, as this choice has the most parameter sharing and thus the highest potential for compression of the model. However, we decided to go with partially joint factorization instead, largely for two reasons. First, in pilot experiments, we found that the U and W matrices behave qualitatively quite differently during training. 
For example, on large data sets the W matrices may be trained from scratch in factored form, whereas factored U matrices need to be either warmstarted via SVD from a trained unfactored model or trained with a significantly lowered learning rate. Second, the U and W split is advantageous in terms of computational efficiency. For the non-recurrent W GEMM, there is no sequential time dependency and thus its inputs x may be batched across time. Finally, we compared the partially joint factorization to the completely split factorization and found that the former indeed led to better accuracy versus number of parameters trade-offs. Some from this experiment are shown in TAB3. Switching from 161-dimensional linear spectrograms to 80-dimensional mel spectrograms reduces the per-timestep feature dimension by roughly a factor of 2. Furthermore, and likely owing to this switch, we could reduce the frequency-dimension size of the convolution filters by a factor of 2. In combination, this means about a 4x reduction in compute for the first and second convolution layers, and a 2x reduction in compute for the first GRU layer. On the WSJ corpus as well as an internal dataset of around 1,000 hours of speech, we saw little impact on accuracy from making this change, and hence we adopted it for all experiments in Section 3. Gram-CTC is a recently proposed extension to CTC for training models that output variable-size grams as opposed to single characters BID16. Using Gram-CTC, we were able to increase the time stride in the second convolution layer by a factor of 2 with little to no loss in CER, though we did have to double the number of filters in that same convolution layer to compensate. The net effect is a roughly 2x speedup for the second and third GRU layers, which are the largest. This speed up more than makes up for the size increase in the softmax layer and the slightly more complex language model decoding when using Gram-CTC. However, for a given target accuracy, we found that Gram-CTC models could not be shrunk as much as CTC models by means of low rank factorization. That is, the net effect of this technique is to increase model size in exchange for reduced latency. Shown in FIG5 is the parameter reduction versus relative CER increase trade-off for various techniques on an internal data set of around 1,000 hours of speech. The baseline model is a Deep Speech 2 model with three forward-GRU layers of dimension 2560, as described in BID1. This is the same baseline model used in the experiments of BID19, from which paper we also obtained the sparse data points in the plot. Shown also are versions of the baseline model but with the GRU dimension scaled down to 1536 and 1024. Overall, models with low rank factorizations on all non-recurrent and recurrent weight matrices are seen to provide the best CER vs parameters trade-off. All the low rank models use growing GRU dimensions and the partially split form of low rank factorization, as discussed in Sections B.1 and B.2. The models labeled fast in addition use Gram-CTC as described in Section B.4 and mel features and reduced convolution filter sizes as described in Section B.3.As this was more of a preliminary comparison to some past experiments, the setup was not perfectly controlled and some models were, for example, trained for more epochs than others. We suspect that, given more effort and similar adjustments like growing GRU dimensions, the sparse models could be made competitive with the low rank models. 
Even so, given the computational advantage of the low rank approach over unstructured sparsity, we chose to focus only on the former going forward. This does not, of course, rule out the potential usefulness of other, more structured forms of sparsity in the embedded setting.
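As a recap of the partially joint factorization scheme compared in Appendix B.2 and TAB3 above, here is a small numpy sketch: the three recurrent GRU matrices are stacked into a single U and the three non-recurrent ones into a single W, and each stack is factored. The stacking axis and the use of a plain truncated SVD (rather than retrained factors) are illustrative choices of ours.

```python
import numpy as np

def low_rank_factor(M, r):
    """Truncated-SVD factorization M ~ A @ B with inner dimension r."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    root = np.sqrt(s[:r])
    return u[:, :r] * root, root[:, None] * vt[:r, :]

def partially_joint_factors(U_z, U_r, U_h, W_z, W_r, W_h, rank):
    """Partially joint scheme: factor the recurrent and non-recurrent
    stacks separately, sharing factors within each stack."""
    U = np.concatenate([U_z, U_r, U_h], axis=0)   # recurrent stack
    W = np.concatenate([W_z, W_r, W_h], axis=0)   # non-recurrent stack
    return low_rank_factor(U, rank), low_rank_factor(W, rank)
```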
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
B1tC-LT6W
We compress and speed up speech recognition models on embedded devices through a trace norm regularization technique and optimized kernels.
Training activation quantized neural networks involves minimizing a piecewise constant training loss whose gradient vanishes almost everywhere, which is undesirable for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of loss function, the following question arises: why searching in its negative direction minimizes the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual "gradient" given by the STE-modifed chain rule as coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss. We further show the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments. Deep neural networks (DNN) have achieved the remarkable success in many machine learning applications such as computer vision , natural language processing and reinforcement learning . However, the deployment of DNN typically require hundreds of megabytes of memory storage for the trainable full-precision floating-point parameters, and billions of floating-point operations to make a single inference. To achieve substantial memory savings and energy efficiency at inference time, many recent efforts have been made to the training of coarsely quantized DNN, meanwhile maintaining the performance of their float counterparts (; ; ; ; b).Training fully quantized DNN amounts to solving a very challenging optimization problem. It calls for minimizing a piecewise constant and highly nonconvex empirical risk function f (w) subject to a discrete set-constraint w ∈ Q that characterizes the quantized weights. In particular, weight quantization of DNN have been extensively studied in the literature; see for examples (; ; ; ; 2018a; ; ;). On the other hand, the gradient ∇f (w) in training activation quantized DNN is almost everywhere (a.e.) zero, which makes the standard back-propagation inapplicable. The arguably most effective way around this issue is nothing but to construct a non-trivial search direction by properly modifying the chain rule. Specifically, one can replace the a.e. zero derivative of quantized activation function composited in the chain rule with a related surrogate. This proxy derivative used in the backward pass only is referred as the straight-through estimator (STE) . In the same paper, proposed an alternative approach based on stochastic neurons. In addition, proposed the feasible target propagation algorithm for learning hard-threshold (or binary activated) networks via convex combinatorial optimization. The idea of STE originates to the celebrated perceptron algorithm (; 1962) in 1950s for learning single-layer perceptrons. 
The perceptron algorithm essentially does not calculate the "gradient" through the standard chain rule, but instead through a modified chain rule in which the derivative of identity function serves as the proxy of the original derivative of binary output function 1 {x>0}. Its convergence has been extensive discussed in the literature; see for examples, and the references therein. extended this idea to train multi-layer networks with binary activations (a.k.a. binary neuron), namely, to backpropagate as if the activation had been the identity function. proposed a STE variant which uses the derivative of the sigmoid function instead. In the training of DNN with weights and activations constrained to ±1, substituted the derivative of the signum activation function with 1 {|x|≤1} in the backward pass, known as the saturated STE. Later the idea of STE was readily employed to the training of DNN with general quantized ReLU activations (; ; ; ; b), where some other proxies took place including the derivatives of vanilla ReLU and clipped ReLU. Despite all the empirical success of STE, there is very limited theoretical understanding of it in training DNN with stair-case activations. considers leaky ReLU activation of a one-hidden-layer network. They showed the convergence of the so-called Convertron algorithm, which uses the identity STE in the backward pass through the leaky ReLU layer. Other similar scenarios, where certain layers are not desirable for back-propagation, have been brought up recently by and . The former proposed an implicit weighted nonlocal Laplacian layer as the classifier to improve the generalization accuracy of DNN. In the backward pass, the derivative of a pre-trained fullyconnected layer was used as a surrogate. To circumvent adversarial defense , introduced the backward pass differentiable approximation, which shares the same spirit as STE, and successfully broke defenses at ICLR 2018 that rely on obfuscated gradients. Throughout this paper, we shall refer to the "gradient" of loss function w.r.t. the weight variables through the STE-modified chain rule as coarse gradient. Since the backward and forward passes do not match, the coarse gradient is certainly not the gradient of loss function, and it is generally not the gradient of any function. Why searching in its negative direction minimizes the training loss, as this is not the standard gradient descent algorithm? Apparently, the choice of STE is non-unique, then what makes a good STE? From the optimization perspective, we take a step towards understanding STE in training quantized ReLU nets by attempting these questions. On the theoretical side, we consider three representative STEs for learning a two-linear-layer network with binary activation and Gaussian data: the derivatives of the identity function (; ;), vanilla ReLU and the clipped ReLUs . We adopt the model of population loss minimization (; ; ;). For the first time, we prove that proper choices of STE give rise to training algorithms that are descent. Specifically, the negative expected coarse gradients based on STEs of the vanilla and clipped ReLUs are provably descent directions for the minimizing the population loss, which yield monotonically decreasing energy in the training. In contrast, this is not true for the identity STE. We further prove that the corresponding training algorithm can be unstable near certain local minima, because the coarse gradient may simply not vanish there. 
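The STEs discussed above are easy to express as a custom autograd function: the forward pass applies the true binary activation, while the backward pass substitutes the chosen surrogate derivative. The PyTorch sketch below uses the clipped-ReLU derivative and notes how to obtain the identity and vanilla-ReLU variants; it is an illustration, not code from any of the cited works.

```python
import torch

class BinaryActSTE(torch.autograd.Function):
    """Binary activation 1_{x>0} trained with a straight-through estimator.

    Backward multiplies the incoming gradient by the surrogate derivative
    1_{0<x<1} (clipped ReLU). Replacing the mask with torch.ones_like(x)
    gives the identity STE, and with (x > 0) the vanilla ReLU STE.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        surrogate = ((x > 0) & (x < 1)).to(grad_output.dtype)
        return grad_output * surrogate

binary_act = BinaryActSTE.apply   # usable inside any nn.Module forward pass
```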
Complementary to the analysis, we examine the empirical performances of the three STEs on MNIST and CIFAR-10 classifications with general quantized ReLU. While both vanilla and clipped ReLUs work very well on the relatively shallow LeNet-5, clipped ReLU STE is arguably the best for the deeper VGG-11 and ResNet-20. In our CIFAR experiments in section 4.2, we observe that the training using identity or ReLU STE can be unstable at good minima and repelled to an inferior one with substantially higher training loss and decreased generalization accuracy. This is an implication that poor STEs generate coarse gradients incompatible with the energy landscape, which is consistent with our theoretical finding about the identity STE.To our knowledge, convergence guarantees of perceptron algorithm (; 1962) and Convertron algorithm were proved for the identity STE. It is worth noting that Convertron makes weaker assumptions than in this paper. These , however, do not generalize to the network with two trainable layers studied here. As aforementioned, the identity STE is actually a poor choice in our case. Moreover, it is not clear if their analyses can be extended to other STEs. Similar to Convertron with leaky ReLU, the monotonicity of quantized activation function plays a role in coarse gradient descent. Indeed, all three STEs considered here exploit this property. But this is not the whole story. A great STE like the clipped ReLU matches quantized ReLU at the extrema, otherwise the instability/incompatibility issue may arise. Organization. In section 2, we study the energy landscape of a two-linear-layer network with binary activation and Gaussian data. We present the main and sketch the mathematical analysis for STE in section 3. In section 4, we compare the empirical performances of different STEs in 2-bit and 4-bit activation quantization, and report the instability phenomena of the training algorithms associated with poor STEs observed in CIFAR experiments. Due to space limitation, all the technical proofs as well as some figures are deferred to the appendix. Notations. · denotes the Euclidean norm of a vector or the spectral norm of a matrix. 0 n ∈ R n represents the vector of all zeros, whereas 1 n ∈ R n the vector of all ones. I n is the identity matrix of order n. For any w, z ∈ R n, w z = w, z = i w i z i is their inner product. w z denotes the Hadamard product whose i th entry is given by (w z) i = w i z i. We consider a model similar to that outputs the prediction DISPLAYFORM0 for some input Z ∈ R m×n. Here w ∈ R n and v ∈ R m are the trainable weights in the first and second linear layer, respectively; Z i denotes the ith row vector of Z; the activation function σ acts component-wise on the vector Zw, i.e., σ(Zw) i = σ((Zw) i ) = σ(Z i w). The first layer serves as a convolutional layer, where each row Z i can be viewed as a patch sampled from Z and the weight filter w is shared among all patches, and the second linear layer is the classifier. The label is generated according to y * (Z) = (v *) σ(Zw *) for some true (non-zero) parameters v * and w *. Moreover, we use the following squared sample loss DISPLAYFORM1 Unlike in , the activation function σ here is not ReLU, but the binary function σ(x) = 1 {x>0}.We assume that the entries of Z ∈ R m×n are i.i.d. sampled from the Gaussian distribution N . 
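For concreteness, the model and sample loss just described can be written in a few lines of numpy. The 1/2 factor in front of the square is our assumption about the convention, and all names are illustrative.

```python
import numpy as np

def binary_act(x):
    return (x > 0).astype(float)                   # sigma(x) = 1_{x > 0}

def sample_loss(v, w, Z, v_star, w_star):
    """Squared sample loss for the two-linear-layer model above."""
    y_star = v_star @ binary_act(Z @ w_star)       # label from true weights
    y_pred = v @ binary_act(Z @ w)
    return 0.5 * (y_pred - y_star) ** 2

# Gaussian input as assumed above: entries of Z are i.i.d. standard normal.
m, n = 8, 16
Z = np.random.randn(m, n)
```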
Since (v, w; Z) = (v, w/c; Z) for any scalar c > 0, without loss of generality, we take w * = 1 and cast the learning task as the following population loss minimization problem: DISPLAYFORM2 where the sample loss (v, w; Z) is given by. With the Gaussian assumption on Z, as will be shown in section 2.2, it is possible to find the analytic expressions of f (v, w) and its gradient DISPLAYFORM0.The gradient of objective function, however, is not available for the network training. In fact, we can only access the expected sample gradient, namely, DISPLAYFORM1. By the standard back-propagation or chain rule, we readily check that DISPLAYFORM2 and DISPLAYFORM3 Note that σ is zero a.e., which makes inapplicable to the training. The idea of STE is to simply replace the a.e. zero component σ in with a related non-trivial function µ (; ; ;), which is the derivative of some (sub)differentiable function µ. More precisely, back-propagation using the STE µ gives the following non-trivial surrogate of ∂ ∂w (v, w; Z), to which we refer as the coarse (partial) gradient DISPLAYFORM4 Using the STE µ to train the two-linear-layer convolutional neural network (CNN) with binary activation gives rise to the (full-batch) coarse gradient descent described in Algorithm 1.Algorithm 1 Coarse gradient descent for learning two-linear-layer CNN with STE µ. DISPLAYFORM5 Let us present some preliminaries about the landscape of the population loss function f (v, w).To this end, we define the angle between w and w * as θ(w, w *):= arccos w w * w w * for any w = 0 n. Recall that the label is given by y * (Z) = (v *) Zw * from, we elaborate on the analytic expressions of f (v, w) and ∇f (v, w). Lemma 1. If w = 0 n, the population loss f (v, w) is given by DISPLAYFORM0 Lemma 2. If w = 0 n and θ(w, w *) ∈ (0, π), the partial gradients of f (v, w) w.r.t. v and w are DISPLAYFORM1 respectively. For any v ∈ R m, (v, 0 m) is impossible to be a local minimizer. The only possible (local) minimizers of the model are located at 1. Stationary points where the gradients given by and vanish simultaneously (which may not be possible), i.e., DISPLAYFORM2 2. Non-differentiable points where θ(w, w *) = 0 and v = v *, or θ(w, w *) = π and v = DISPLAYFORM3 are obviously the global minimizers of. We show that the stationary points, if exist, can only be saddle points, and DISPLAYFORM4 are the only potential spurious local minimizers. DISPLAYFORM5 give the saddle points obeying, and DISPLAYFORM6 are the spurious local minimizers. Otherwise, the model has no saddle points or spurious local minimizers. We further prove that the population gradient ∇f (v, w) given by and, is Lipschitz continuous when restricted to bounded domains. Lemma 3. For any differentiable points (v, w) and (ṽ,w) with min{w, w} = c w > 0 and max{v, ṽ} = C v, there exists a Lipschitz constant L > 0 depending on C v and c w, such that DISPLAYFORM7 We are most interested in the complex case where both the saddle points and spurious local minimizers are present. Our main are concerned with the behaviors of the coarse gradient descent summarized in Algorithm 1 when the derivatives of the vanilla and clipped ReLUs as well as the identity function serve as the STE, respectively. We shall prove that Algorithm 1 using the derivative of vanilla or clipped ReLU converges to a critical point, whereas that with the identity STE does not. Theorem 1 (Convergence). Let {(v t, w t)} be the sequence generated by Algorithm 1 with ReLU µ(x) = max{x, 0} or clipped ReLU µ(x) = min {max{x, 0}, 1}. 
Suppose w t ≥ c w for all t with some c w > 0. Then if the learning rate η > 0 is sufficiently small, for any initialization (v 0, w 0), the objective sequence {f (v t, w t)} is monotonically decreasing, and {(v t, w t)} converges to a saddle point or a (local) minimizer of the population loss minimization. In addition, if 1 m v * = 0 and m > 1, the descent and convergence properties do not hold for Algorithm 1 with the identity function µ(x) = x near the local minimizers satisfying θ(w, w *) = π and DISPLAYFORM0 The convergence guarantee for the coarse gradient descent is established under the assumption that there are infinite training samples. When there are only a few data, in a coarse scale, the empirical loss roughly descends along the direction of negative coarse gradient, as illustrated by Figure 1. As the sample size increases, the empirical loss gains monotonicity and smoothness. This explains why (proper) STE works so well with massive amounts of data as in deep learning. Remark 2. The same hold, if the Gaussian assumption on the input data is weakened to that their rows i.i.d. follow some rotation-invariant distribution. The proof will be substantially similar. In the rest of this section, we sketch the mathematical analysis for the main .sample size = 10 sample size = 50 sample size = 1000Figure 1: The plots of the empirical loss moving by one step in the direction of negative coarse gradient v.s. the learning rate (step size) η for different sample sizes. If we choose the derivative of ReLU µ(x) = max{x, 0} as the STE in FORMULA7, it is easy to see µ (x) = σ(x), and we have the following expressions of DISPLAYFORM0 Let µ(x) = max{x, 0} in. The expected coarse gradient w.r.t. w is DISPLAYFORM1 where DISPLAYFORM2 As stated in Lemma 5 below, the key observation is that the coarse partial gradient E Z g relu (v, w; Z) has non-negative correlation with the population partial gradient ∂f ∂w (v, w), and −E Z g relu (v, w; Z) together with −E Z ∂ ∂v (v, w; Z) form a descent direction for minimizing the population loss. Lemma 5. If w = 0 n and θ(w, w *) ∈ (0, π), then the inner product between the expected coarse and population gradients w.r.t. w is DISPLAYFORM3 Moreover, if further v ≤ C v and w ≥ c w, there exists a constant A relu > 0 depending on C v and c w, such that DISPLAYFORM4 Clearly, when DISPLAYFORM5 We redefine the second term as 0n in the case θ(w, w *) = π, or equivalently, DISPLAYFORM6 that the coarse gradient descent behaves like the gradient descent directly on f (v, w). Here we would like to highlight the significance of the estimate in guaranteeing the descent property of Algorithm 1. By the Lipschitz continuity of ∇f specified in Lemma 3, it holds that DISPLAYFORM7 where a) is due to. Therefore, if η is small enough, we have monotonically decreasing energy until convergence. Lemma 6. When Algorithm 1 converges, E Z ∂ ∂v (v, w; Z) and E Z g relu (v, w; Z) vanish simultaneously, which only occurs at the 1. Saddle points where is satisfied according to Proposition 1. Lemma 6 states that when Algorithm 1 using ReLU STE converges, it can only converge to a critical point of the population loss function. For the STE using clipped ReLU, µ(x) = min {max{x, 0}, 1} and µ (x) = 1 {0<x<1} (x). We have similar to Lemmas 5 and 6. That is, the coarse partial gradient using clipped ReLU STE E Z g crelu (v, w; Z) generally has positive correlation with the true partial gradient of the population loss ∂f ∂w (v, w) (Lemma 7)). 
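Before turning to the experiments, it is worth writing out what one step of Algorithm 1 looks like in code. The sketch below uses a sampled batch of inputs in place of the population expectation E_Z and groups the three STEs analysed above as surrogate-derivative functions; the names are illustrative.

```python
import numpy as np

def coarse_grads(v, w, Z, y_star, mu_prime):
    """Gradient w.r.t. v and the STE-based coarse gradient w.r.t. w for one
    sample, following the modified chain rule described above."""
    act = (Z @ w > 0).astype(float)                  # sigma(Zw)
    err = v @ act - y_star                           # prediction error
    grad_v = err * act
    grad_w = err * (Z.T @ (v * mu_prime(Z @ w)))     # STE in place of sigma'
    return grad_v, grad_w

# The three surrogate derivatives considered in this paper:
identity_ste = lambda s: np.ones_like(s)
relu_ste = lambda s: (s > 0).astype(float)
crelu_ste = lambda s: ((s > 0) & (s < 1)).astype(float)

# One step of Algorithm 1, with sampled data standing in for E_Z:
# grad_v, grad_w = coarse_grads(v, w, Z, y_star, crelu_ste)
# v, w = v - eta * grad_v, w - eta * grad_w
```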
Moreover, the coarse gradient vanishes and only vanishes at the critical points (Lemma 8). Lemma 7. If w = 0 n and θ(w, w *) ∈ (0, π), then DISPLAYFORM0 * same as in Lemma 5, and DISPLAYFORM1 2 )dr. The inner product between the expected coarse and true gradients w.r.t. w DISPLAYFORM2 Moreover, if further v ≤ C v and w ≥ c w, there exists a constant A crelu > 0 depending on C v and c w, such that DISPLAYFORM3 Lemma 8. When Algorithm 1 converges, E Z ∂ ∂v (v, w; Z) and E Z g crelu (v, w; Z) vanish simultaneously, which only occurs at the 1. Saddle points where is satisfied according to Proposition 1. Now we consider the derivative of identity function. Similar to Lemmas 5 and 6 are not valid anymore. It happens that the coarse gradient derived from the identity STE does not vanish at local minima, and Algorithm 1 may never converge there. Lemma 9. Let µ(x) = x in. Then the expected coarse partial gradient w.r.t. w is DISPLAYFORM0 If θ(w, w *) = π and DISPLAYFORM1 i.e., E Z g id (v, w; Z) does not vanish at the local minimizers if 1 m v * = 0 and m > 1.Lemma 10. If w = 0 n and θ(w, w *) ∈ (0, π), then the inner product between the expected coarse and true gradients w.r.t. w is DISPLAYFORM2 Lemma 9 suggests that if 1 m v * = 0, the coarse gradient descent will never converge near the spurious minimizers with θ(w, w *) = π and DISPLAYFORM3 does not vanish there. By the positive correlation implied by of Lemma 10, for some proper (v 0, w 0), the iterates {(v t, w t)} may move towards a local minimizer in the beginning. But when {(v t, w t)} approaches it, the descent property does not hold for E Z [g id (v, w; Z)] because of, hence the training loss begins to increase and instability arises. While our theory implies that both vanilla and clipped ReLUs learn a two-linear-layer CNN, their empirical performances on deeper nets are different. In this section, we compare the performances of the identity, ReLU and clipped ReLU STEs on MNIST and CIFAR-10 benchmarks for 2-bit or 4-bit quantized activations. As an illustration, we plot the 2-bit quantized ReLU and its associated clipped ReLU in Figure 3 in the appendix. Intuitively, the clipped ReLU should be the best performer, as it best approximates the original quantized ReLU. We also report the instability issue of the training algorithm when using an improper STE in section 4.2. In all experiments, the weights are kept float. The resolution α for the quantized ReLU needs to be carefully chosen to maintain the full-precision level accuracy. To this end, we follow and resort to a modified batch normalization layer without the scale and shift, whose output components approximately follow a unit Gaussian distribution. Then the α that fits the input of activation layer the best can be pre-computed by a variant of Lloyd's algorithm (; a) applied to a set of simulated 1-D half-Gaussian data. After determining the α, it will be fixed during the whole training process. Since the original LeNet-5 does not have batch normalization, we add one prior to each activation layer. We emphasize that we are not claiming the superiority of the quantization approach used here, as it is nothing but the HWGQ , except we consider the uniform quantization. The optimizer we use is the stochastic (coarse) gradient descent with momentum = 0.9 for all experiments. We train 50 epochs for LeNet-5 on MNIST, and 200 epochs for VGG-11 and ResNet-20 on CIFAR-10. The parameters/weights are initialized with those from their pre-trained full-precision counterparts. 
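The activation quantization just described, with a pre-computed resolution α, amounts to the following pair of functions: the 2-bit uniform quantized ReLU and the clipped ReLU whose derivative serves as its STE (the pair illustrated in Figure 3). This is a minimal sketch; the exact parametrization used in the experiments may differ.

```python
import torch

def quantized_relu(x, alpha, bits=2):
    """Uniform quantized ReLU with resolution alpha: output levels are
    {0, alpha, ..., (2**bits - 1) * alpha}."""
    n_pos = 2 ** bits - 1
    return alpha * torch.clamp(torch.round(x / alpha), 0, n_pos)

def clipped_relu(x, alpha, bits=2):
    """Clipped ReLU whose derivative is used as the matching STE."""
    return torch.clamp(x, min=0.0, max=alpha * (2 ** bits - 1))
```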
The schedule of the learning rate is specified in TAB2 in the appendix. The experimental are summarized in Table 1, where we record both the training losses and validation accuracies. Among the three STEs, the derivative of clipped ReLU gives the best overall performance, followed by vanilla ReLU and then by the identity function. For deeper networks, clipped ReLU is the best performer. But on the relatively shallow LeNet-5 network, vanilla ReLU exhibits comparable performance to the clipped ReLU, which is somewhat in line with our theoretical finding that ReLU is a great STE for learning the two-linear-layer (shallow) CNN. We report the phenomenon of being repelled from a good minimum on ResNet-20 with 4-bit activations when using the identity STE, to demonstrate the instability issue as predicted in Theorem 1. By Table 1, the coarse gradient descent algorithms using the vanilla and clipped ReLUs converge to the neighborhoods of the minima with validation accuracies (training losses) of 86.59% (0.25) and 91.24% (0.04), respectively, whereas that using the identity STE gives 54.16% (1.38). Note that the landscape of the empirical loss function does not depend on which STE is used in the training. Then we initialize training with the two improved minima and use the identity STE. To see if the algorithm is stable there, we start the training with a tiny learning rate of 10 −5. For both initializations, the training loss and validation error significantly increase within the first 20 epochs; see Figure 4.2. To speedup training, at epoch 20, we switch to the normal schedule of learning rate specified in TAB2 and run 200 additional epochs. The training using the identity STE ends up with a much worse minimum. This is because the coarse gradient with identity STE does not vanish at the good minima in this case (Lemma 9). Similarly, the poor performance of ReLU STE on 2-bit activated ResNet-20 is also due to the instability of the corresponding training algorithm at good minima, as illustrated by Figure 4 in Appendix C, although it diverges much slower. Figure 2: When initialized with weights (good minima) produced by the vanilla (orange) and clipped (blue) ReLUs on ResNet-20 with 4-bit activations, the coarse gradient descent using the identity STE ends up being repelled from there. The learning rate is set to 10 −5 until epoch 20. We provided the first theoretical justification for the concept of STE that it gives rise to descent training algorithm. We considered three STEs: the derivatives of the identity function, vanilla ReLU and clipped ReLU, for learning a two-linear-layer CNN with binary activation. We derived the explicit formulas of the expected coarse gradients corresponding to the STEs, and showed that the negative expected coarse gradients based on vanilla and clipped ReLUs are descent directions for minimizing the population loss, whereas the identity STE is not since it generates a coarse gradient incompatible with the energy landscape. The instability/incompatibility issue was confirmed in CIFAR experiments for improper choices of STE. In the future work, we aim further understanding of coarse gradient descent for large-scale optimization problems with intractable gradients. Figure 4: When initialized with the weights produced by the clipped ReLU STE on ResNet-20 with 2-bit activations (88.38% validation accuracy), the coarse gradient descent using the ReLU STE with 10 −5 learning rate is not stable there, and both classification and training errors begin to increase. Lemma 11. 
Let z ∈ R n be a Gaussian random vector with entries i.i.d. sampled from N. Given nonzero vectors w,w ∈ R n with the angle θ, we have DISPLAYFORM0 Proof of Lemma 11. The third identity was proved in Lemma A.1 of . To show the first one, without loss of generality we assume w = [w 1, 0 n−1] with w 1 > 0, then E 1 {z w>0} = P(z 1 > 0) = 1 2. We further assumew = [w 1,w 2, 0 n−2]. It is easy to see that DISPLAYFORM1 To prove the last identity, we use polar representation of two-dimensional Gaussian random variables, where r is the radius and φ is the angle with dP r = r exp(−r 2 /2)dr and dP φ = 1 2π dφ. Then E z i 1 {z w>0, z w>0} = 0 for i ≥ 3. Moreover, DISPLAYFORM2 Therefore, Lemma 12. Let z ∈ R n be a Gaussian random vector with entries i.i.d. sampled from N. Given nonzero vectors w,w ∈ R n with the angle θ, we have Moreover, E z i 1 {0<z w<1} = 0 for i ≥ 3. So the first identity holds. For the second one, we have DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 and similarly, E z 2 1 {0<z w<1,z w>0} = q(θ, w). Therefore, DISPLAYFORM6 DISPLAYFORM7 Proof of Lemma 13. DISPLAYFORM8 where the last inequality is due to the rearrangement inequality since both sin(φ) and ξ DISPLAYFORM9 2. Since cos(φ)ξ DISPLAYFORM10 is even, we have DISPLAYFORM11 The first inequality is due to part 1 which gives p(π/2, w) ≤ q(π/2, w), whereas the second one holds because sin(φ)ξ Proof of Lemma 14. 1. Since by Cauchy-Schwarz inequality, DISPLAYFORM12 we have DISPLAYFORM13 Therefore, DISPLAYFORM14 where we used the fact sin(x) ≥ 2. Since I n − ww w 2 w * is the projection of w * onto the complement space of w, and likewise for I n −ww w 2 w *, the angle between I n − ww w 2 w * and I n −ww w 2 w * is equal to the angle between w andw. Therefore, DISPLAYFORM15 Lemma 1. If w = 0 n, the population loss f (v, w) is given by DISPLAYFORM0 Proof of Lemma 1. We notice that DISPLAYFORM1 Let Z i be the i th row vector of Z. Since w = 0 n, using Lemma 11, we have DISPLAYFORM2 and for i = j, DISPLAYFORM3, DISPLAYFORM4 Then it is easy to validate the first claim. Moreover, if w = 0 n, then DISPLAYFORM5 Lemma 2. If w = 0 n and θ(w, w *) ∈ (0, π), the partial gradients of f (v, w) w.r.t. v and w are DISPLAYFORM6 Proof of Lemma 2. The first claim is trivial, and we only show the second one. Since θ(w, w DISPLAYFORM7 give the saddle points obeying FORMULA11 DISPLAYFORM8 From FORMULA1 it follows that DISPLAYFORM9 On the other hand, from it also follows that DISPLAYFORM10 where we used (I m + 1 m 1 m)1 m = (m + 1)1 m. Taking the difference of the two equalities above gives DISPLAYFORM11 By, we have θ(w, w DISPLAYFORM12 Furthermore, since ∂f ∂v (v, w) = 0, we have DISPLAYFORM13 Next, we check the local optimality of the stationary points. By ignoring the scaling and constant terms, we rewrite the objective function as DISPLAYFORM14 It is easy to check that its Hessian matrix DISPLAYFORM15 is indefinite. Therefore, the stationary points are saddle points. DISPLAYFORM16 where we used in the last identity. We consider an arbitrary point (v + ∆v, π + ∆θ) in the neighborhood of (v, π) with ∆θ ≤ 0. The perturbed objective value is DISPLAYFORM17 On the right hand side, since v = (I m + 1 m 1 m) −1 (1 m 1 m − I m)v * is the unique minimizer to the quadratic functionf (v, π), we have if ∆v = 0 m, DISPLAYFORM18 Moreover, for sufficiently small ∆v, it holds that ∆θ · (v + ∆v) v * > 0 for ∆θ < 0 because of. Therefore,f (v + ∆v, π + ∆θ) >f (v, π) whenever (∆v, ∆θ) is small and non-zero, and DISPLAYFORM19 To prove the second claim, suppose Lemma 3. 
For any differentiable points (v, w) and (ṽ,w) with min{w, w} = c w > 0 and max{v, ṽ} = C v, there exists a Lipschitz constant L > 0 depending on C v and c w, such that DISPLAYFORM20 DISPLAYFORM21 DISPLAYFORM22 where the last inequality is due to Lemma 14.1. DISPLAYFORM0 where the second last inequality is to due to Lemma 14.2. Combining the two inequalities above validates the claim. Lemma 4. The expected partial gradient of (v, w; Z) w.r.t. v is DISPLAYFORM1 Let µ(x) = max{x, 0} in. The expected coarse gradient w.r.t. w is DISPLAYFORM2 Proof of Lemma 4. The first claim is true because DISPLAYFORM3 Using the fact that µ = σ = 1 {x>0}, we have DISPLAYFORM4 Invoking Lemma 11, we have DISPLAYFORM5 and DISPLAYFORM6 3 We redefine the second term as 0n in the case θ(w, w *) = π, or equivalently, DISPLAYFORM7 Therefore, DISPLAYFORM8 and the follows. Lemma 5. If w = 0 n and θ(w, w *) ∈ (0, π), then the inner product between the expected coarse and true gradients w.r.t. w is DISPLAYFORM9 Moreover, if further v ≤ C v and w ≥ c w, there exists a constant A relu > 0 depending on C v and c w, such that DISPLAYFORM10 Proof of Lemma 5. By Lemmas 2 and 4, we have DISPLAYFORM11 Notice that I n − ww w 2 w = 0 n and w * = 1, if θ(w, w *) = 0, π, then we have DISPLAYFORM12 To show the second claim, without loss of generality, we assume w = 1. Denote θ:= θ(w, w *). By Lemma 1, we have DISPLAYFORM13 By Lemma 4, DISPLAYFORM14 where DISPLAYFORM15 and by the first claim, DISPLAYFORM16 Hence, for some A relu depending only on C v and c w, we have DISPLAYFORM17 Saddle points where is satisfied according to Proposition 1. Proof of Lemma 6. By Lemma 4, suppose we have DISPLAYFORM0 and DISPLAYFORM1 where DISPLAYFORM2 If θ(w, w *) = 0, then by, v = v *, and FORMULA2 If v v * = 0, then by, we have the expressions for v and θ(w, w *) from Proposition 1, and is satisfied. Lemma 7. If w = 0 n and θ(w, w *) ∈ (0, π), then DISPLAYFORM3 where DISPLAYFORM4 * same as in Lemma 5, and DISPLAYFORM5 2 )dr. The inner product between the expected coarse and true gradients w.r.t. w DISPLAYFORM6 Moreover, if further v ≤ C v and w ≥ c w, there exists a constant A crelu > 0 depending on C v and c w, such that DISPLAYFORM7 Proof of Lemma 7. Denote θ:= θ(w, w *). We first compute E Z g crelu (v, w; Z). By, DISPLAYFORM8 Since µ = 1 {0<x<1} and σ = 1 {x>0}, we have In the last equality above, we called Lemma 12. DISPLAYFORM9 Notice that I n − ww w 2 w = 0 n and w * = 1. If θ(w, w *) = 0, π, then the inner product between E Z g crelu (v, w; Z) and Combining the above estimate together with FORMULA2, FORMULA2 and FORMULA2, and using Cauchy-Schwarz inequality, we have DISPLAYFORM10 where p(0, w) and q(θ, w) are uniformly bounded. This completes the proof. Proof of Lemma 8. The proof of Lemma 8 is similar to that of Lemma 6, and we omit it here. The core part is that q(θ, w) defined in Lemma 12 is non-negative and equals 0 only at θ = 0, π, as well as p(0, w) ≥ p(θ, w) ≥ p(π, w) = 0.Lemma 9. Let µ(x) = x in. Then the expected coarse partial gradient w.r.t. w is Proof of Lemma 9. By, DISPLAYFORM11 Using the facts that µ = 1 and σ = 1 {x>0}, we have DISPLAYFORM12 In the last equality above, we called the third identity In the third equality, we used the identity (I m + 1 m 1 m)1 m = (m + 1)1 m twice. Lemma 10. If w = 0 n and θ(w, w *) ∈ (0, π), then the inner product between the expected coarse and true gradients w.r.t. w is
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Skh4jRcKQ
We provide a theoretical justification for the concept of the straight-through estimator.
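To make the three STEs discussed above concrete, here is a minimal PyTorch-style sketch (the class and argument names are ours, not the paper's code) of a binary activation whose backward pass substitutes the derivative of the identity, vanilla ReLU, or clipped ReLU surrogate. The coarse gradient studied above is exactly the gradient produced by backpropagating through such a layer.

```python
import torch

class BinaryActSTE(torch.autograd.Function):
    """Binary activation 1{x > 0} with a configurable straight-through estimator.

    The forward pass is the non-differentiable hard threshold; the backward pass
    replaces the a.e.-zero derivative with the derivative of a surrogate mu."""

    @staticmethod
    def forward(ctx, x, ste="clipped_relu"):
        ctx.save_for_backward(x)
        ctx.ste = ste
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        if ctx.ste == "identity":          # mu(x) = x            -> mu'(x) = 1
            surrogate_grad = torch.ones_like(x)
        elif ctx.ste == "relu":            # mu(x) = max(x, 0)    -> mu'(x) = 1{x > 0}
            surrogate_grad = (x > 0).float()
        else:                              # clipped ReLU: mu(x) = min(max(x, 0), 1)
            surrogate_grad = ((x > 0) & (x < 1)).float()
        return grad_out * surrogate_grad, None
```

A layer would use it as `y = BinaryActSTE.apply(pre_activation, "clipped_relu")`; swapping the string reproduces the three training variants compared in the experiments.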
This paper presents GumbelClip, a set of modifications to the actor-critic algorithm, for off-policy reinforcement learning. GumbelClip uses the concepts of truncated importance sampling along with additive noise to produce a loss function enabling the use of off-policy samples. The modified algorithm achieves an increase in convergence speed and sample efficiency compared to on-policy algorithms and is competitive with existing off-policy policy gradient methods while being significantly simpler to implement. The effectiveness of GumbelClip is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain. Recent advances in reinforcement learning (RL) have enabled the extension of long-standing methods to complex and large-scale tasks such as Atari , Go, and DOTA . The key driver has been the use of deep neural networks, a non-linear function approximator, with the combination usually referred to as Deep Reinforcement Learning (DRL) . However, deep learning-based methods are usually data-hungry, requiring millions of samples before the network converges to a stable solution. As such, DRL methods are usually trained in a simulated environment where an arbitrary amount of data can be generated. RL algorithms can be classified as either learning in an off-policy or on-policy setting. In the onpolicy setting, an agent learns directly from experience generated by its current policy. In contrast, the off-policy setting enables the agent to learn from experience generated by its current policy or/and other separate policies. An algorithm that learns in the off-policy setting has much greater sample efficiency as old experience from the current policy can be reused; it also enables off-policy algorithms to learn an optimal policy while executing an exploration-focused policy . The most famous off-policy method is Q-Learning which learns an actionvalue function, Q(s, a), that maps the value to a state s and action a pair. Deep Q-Learning (DQN), the marriage of Q-Learning with deep neural networks, was popularised by and used various modifications, such as experience replay, for stable convergence. Within DQN, experience replay is often motivated as a technique for reducing sample correlation. Unfortunately, all action-value methods, including Q-Learning, have two significant disadvantages. First, they learn deterministic policies, which cannot handle problems that require stochastic policies. Second, finding the greedy action with respect to the Q function is costly for large action spaces. To overcome these limitations, one could use policy gradient algorithms , such as actor-critic methods, which learn in an on-policy setting at the cost of sample efficiency. The ideal solution would be to combine the sample efficiency of off-policy algorithms with the desirable attributes of on-policy algorithms. Work along this line has been done by using importance sampling or by combining several techniques together, as in ACER . However, the ing methods are quite complex and require many modifications to existing algorithms. This paper, proposes a set of adjustments to A2C, a parallel on-policy actor-critic algorithm, enabling off-policy learning from stored trajectories. Therefore, our contributions are as follows: • GumbelClip, a fully off-policy actor-critic algorithm, the of a small set of simple adjustments, in under 10 lines of code (LOC) 1, to the A2C algorithm. 
• GumbelClip has increased sample efficiency and overall performance over on-policy actorcritic algorithms, such as A2C. • GumbelClip performs similarily to other off-policy actor-critic algorithms, such as ACER, while being significantly simpler to implement. The paper is organized as follows: Section 2 covers information, Section 3 describes the GumbelClip algorithm, Section 4 details the experiments along with , discussion, and ablations of our methodology. Section 5 discusses possible future work, and finally Section 6 provides concluding remarks. Consider an agent interacting with a Markov Decision Process (MDP) consisting of a set of states S, a set of actions A, a transition function P:, and a reward function r: S × A → R. Within this work, discrete actions and time steps are assumed. A policy is a probability distribution over actions conditioned on states, π: At each time step t, the agent observes the environment state s t ∈ S, chooses an action a t ∈ A from its policy π(a t |s t), and receives a reward r t from the environment. The goal of the agent is to maximize the discounted future return The discount factor γ ∈ trades off the importance of immediate and future rewards. Following from this, the value function of a policy π, in a discounted problem, is defined as the expected return To optimize the parameters θ of the stochastic policy π the policy gradient theorem is used, which provides an expression for the gradient of the discounted reward objectives with respect to parameter θ. Therefore, the parameters θ of the differentiable stochastic policy π θ (a t |s t) are updated as: where Ψ π (s t, a t), as shown by Schulman et al. (2015b), can be replaced with quantities such as: the total reward of the trajectory, the TD residual, or the state-action value function Q π (s t, a t). The choice of Ψ π affects the variance of the estimated gradient. This work uses the advantage function, which provides a relative measure of value for each action. The advantage function helps to reduce the variance of the gradient estimator while keeping the bias unchanged. The Gumbel-Softmax distribution (GSD) is a continuous distribution used to approximate samples from a categorical distribution. The GSD uses standard Gumbel noise to sample directly from the softmax distribution. On its own, the Gumbel distribution is typically used to model the maximum of a set of independent samples. Given categorical class probabilities π 1, π 2,..., π k the Gumbel-Softmax distribution is defined as: 1 Assuming code for replay memory is available. In practice, the policy gradient is estimated from a trajectory of samples generated by the on-policy stationary distribution π(a|s). This limits the efficiency of typical policy gradient methods, such as actor-critic, compared to methods like Deep Q-Learning which can learn off-policy. A common approach to using off-policy samples is a technique known as importance sampling (; ; ;). Given a trajectory of samples generated by some behaviour policy B(a|s), the policy gradient from Equation 1 is modified to be: where ρ is the known as the importance weight and is defined as a ratio between the current policy π(a|s) and the behaviour policy B(a|s): Unfortunately, the importance weighted gradient in Equation 3 suffers from high variance. To reduce variance, Wawrzyński proposed truncating each importance weight to the interval [0, c] where c is some constant. GumbelClip builds off the on-policy actor-critic algorithm A2C. 
To enable off-policy learning GumbelClip uses clipped importance sampling, policy forcing through Gumbel noise, and large batches sampled from a replay memory. Psuedo-code for GumbelClip is provided in Algorithm 1. We begin by defining a few quantities. In importance weighting, two policy classes exist: the current policy π(a|s; θ) and the behaviour policy B(a|s). Because replay memory is being used, the behaviour policy is simply the distribution over actions with an old parameter setting θ *: GumbelClip introduces a third policy, the forced policy F(a|s) using the Gumbel-Softmax distribution, which from adding noise (i) sampled from a standard Gumbel distribution to the normalized logits of the current policy: As the name implies, adding Gumbel noise has the effect of "forcing" the sampled policy distribution to be more categorical such that one action contains most of the probability mass. As Equation 6 is identical to the Gumbel-Softmax distribution, with temperature τ = 1, we can refer to Figure 1b) to understand the characteristics of the ing sampled distribution. Initialize parameters θ and θv. Initialize replay memory D with capacity N. repeat for i ∈ {0, · · ·, k} do Perform ai according to π(·|si; θ). Receive reward ri and new state si+1., 0, c end for end for Perform update of θ using dθ and θv using dθv. until Max iteration or time reached. Similarly to previous work, this study uses importance sampling ρ to weight the updates of the loss function. However, instead of using the current policy, π(·|s t), in the numerator, it is replaced with the forced policy F(a t |s t): The range of ρ t is clipped to [0, c] and this clipped importance weight is referred to asρ t. Clipping the upper bound prevents the product of many importance weights from exploding and reduces the variance (Wawrzyński, 2009). Putting this all together the update equation for GumbelClip is as follows: The gradient is estimated by uniformly sampling b trajectories of length k from a replay memory with size N. In addition, the network parameters are updated using the forced policy F(a|s). The advantage function A(s t, a t) = R is the bootstrapped k-step return for time t. As the forced policy in the numerator tends towards a categorical distribution, it became evident that the importance weights had the habit of clumping near the boundaries of the interval and often near 1. Following from Figure 2, we can see the distribution of the ratio between GumbelClip and a typical application of truncated importance sampling to A2C as might be suggested by Wawrzyński. The effect of clipping and Gumbel noise has an interesting effect on the way the policy is updated. From Figure 2 (b), we see that three modes exist, which roughly correspond to the cases of agreement and disagreement. The case of agreement corresponds to ratios and the mode near 1 while disagreements correspond to ratios and modes at 0 and c. More exactly, when the forced policy disagrees with the behaviour policy, say F(·|s) ≈ 1 and B(·|s) ≈ 0, the update the policy receives is at most clipped by the upper bound of our interval: c. On the other hand, when the situation is reversed, but still in disagreement, with F(·|s) ≈ 0 and B(·|s) ≈ 1, the policy has an importance weight of~0. Our experiments focus on the Atari domain as there exists a large amount of variety between environments and the states are represented as raw high-dimensional pixels. The gym software package by was used to conduct all the experiments. 
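As a rough illustration of the quantities defined in Section 3 — the forced policy obtained by adding standard Gumbel noise to the normalized logits, and the clipped importance weight — the following PyTorch-style sketch computes both for a sampled batch. The function and variable names are ours, and the exact form of the loss should be taken from Equation 8 rather than from this sketch.

```python
import torch
import torch.nn.functional as F

def gumbelclip_terms(logits, behaviour_probs, actions, advantages, c=4.0):
    """Sketch of the GumbelClip update terms.
    logits          -- current policy logits for a batch of states, [B, A]
    behaviour_probs -- pi(a|s; theta*) stored in the replay memory, [B, A]
    actions, advantages -- sampled actions [B] and k-step advantages [B]"""
    log_pi = F.log_softmax(logits, dim=-1)
    # Forced policy F(a|s): add standard Gumbel noise to the normalized logits (tau = 1).
    gumbel = -torch.log(-torch.log(torch.rand_like(log_pi) + 1e-20) + 1e-20)
    forced = F.softmax(log_pi + gumbel, dim=-1)

    f_a = forced.gather(1, actions.unsqueeze(1)).squeeze(1)
    b_a = behaviour_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    # Clipped importance weight rho_hat = clamp(F(a|s) / B(a|s), 0, c).
    rho_hat = (f_a / (b_a + 1e-8)).clamp(0.0, c)

    # One plausible policy-gradient term weighted by the clipped ratio
    # (advantages and the ratio treated as constants w.r.t. the parameters).
    policy_loss = -(rho_hat.detach() * torch.log(f_a + 1e-8) * advantages.detach()).mean()
    return rho_hat, policy_loss
```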
All experiments used the same algorithm, network architecture, and hyper-parameters to learn to play Atari games using only raw pixel observations. This study used the same input pre-processing and network architecture as. The network architecture consists of three convolutional layers as follows: 32 8 × 8 filters with stride 4, 64 4 × 4 filters with stride 2, and 32 3 × 3 filters with stride 1. The final convolutional layer feeds into a fully-connected layer with 512 units. All layers are followed by rectified non-linearity. Finally, the network outputs a softmax policy over actions and a state-value. The experimental set-up used 16 threads running on a GPU equipped machine. All experiments trained for 40 million frames. A replay memory of size N = 250000 was kept, an update was performed every k = 5 steps in the environment, and a clamping coefficient of c = 4 was used. The optimization procedure used RMSProp with a learning rate of 5e − 4, entropy regularization of 0.01, and a discount factor of γ = 0.99. We sample 64 trajectories of length 5 for each update. Learning begins after we have collected 10, 000 samples in the replay memory. We tuned the hyperparameters and developed GumbelClip on the FishingDerby environment only; the other environments can be considered "out of sample". All experiments used the same hyperparameter settings and network architecture. We use the best 3 of 4 seeds and report the mean value with 1 standard deviation as shaded areas on all graphs. As GumbelClip uses larger batchsizes we see an improvement in total run time per seed of 6-7 hours over A2C which takes 9-10 hours per seed. Due to limited computational resources, we were only able to evaluate GumbelClip on a subset of the environments and with a smaller replay memory size. Therefore, in this study, an effort was made to select environments that best showcase the performance of off-policy (ACER) and on-policy (A2C) Figure 3: Training performance across 8 Atari games. We see the performance of GumbelClip (shown in blue) against ACER (shown in green), an off-policy actor-critic algorithm, and the onpolicy algorithm A2C (shown in red). The graphs show the average performance over 3 seeds with 1 standard deviation shown as the shaded region. GumbelClip matches or exceeds the performance of the on-policy algorithm on all environments shown; while in all cases achieving improved convergence speed. It also shows a respectable performance when compared to the ACER algorithm on many of the environments. actor-critic methods. We note that the performance can be expected to improve with a larger replay memory, as seen with DQN and other methods using replay memory. Additionally, we focused the examination of our ablations on the Alien environment to reduce computational requirements. GumbelClip is based on a modified version of the open-source A2C implementation by. The present work was compared with an off-policy actor-critic algorithm, ACER , and an on-policy actor-critic algorithm, A2C, the synchronous version of A3C. Both baseline models, A2C and ACER, used the baselines package provided by OpenAI . ACER used a replay ratio of 4, trust region updates (a), a discount factor of γ = 0.99, entropy regularization of 0.01, updated every 20 steps, and a 50000 capacity replay memory per thread. To test the proposed methodology, the performance of GumbelClip on a subset of Atari environments was examined. 
In particular, the following environments were investigated: Alien, BeamRider, Boxing, FishingDerby, MsPacman, Qbert, Seaquest, and SpaceInvaders. We report the average reward every 1000 episodes over 40 million frames. As mentioned previously, environments were chosen where either the on-policy algorithms perform well or where there is a clear difference in performance between an off-policy and on-policy method. From Figure 3, we see the performance of GumbelClip, shown in blue, in comparison to the onpolicy ACER algorithm, shown in green, and the off-policy A2C algorithm. We see that the use of replay memory and learning with off-policy samples significantly improves the sample efficiency of GumbelClip over A2C. Across the board, we see that GumbelClip converges significantly faster than A2C while also exceeding A2C's performance. We achieve similar sample efficiency between the off-policy actor-critics, GumbelClip and ACER, across each environment. The additive noise and aggressive clamping seem to have both positive and negative effects. We see that GumbelClip sees a faster initial increase in performance across almost all environments, even outperforming ACER in some cases. However, clamping has been noted by to have the possibility of introducing bias. We hypothesize this might be the cause of poor performance on the Qbert environment, which while better than A2C, is much lower when compared to ACER. The benefit of GumbelClip comes from its simplicity, requiring a small number of easy changes to the A2C algorithm, while still providing better performance than other on-policy actor-critic algorithms. Figure 4: Variations in distributions used to sample additive noise. We compare the impact of noise drawn from the standard Gumbel distribution to the standard Normal distribution and Uniform distribution between. Here we examine the distribution used to draw the additive noise (i) for use in the forced policy F(a|s). We compare the performance of noise sampled from the standard Gumbel distribution to that of the standard Normal distribution and the Uniform distribution. The experiments only adjust the sampling distribution with no other parameter changes. We investigate the performance on the Alien Atari game over 40 million frames. From the in Figure 4, we see that the Gumbel and Normal distributions have the same initial rate of improvement but diverge roughly midway through training. The Normal distribution appears to degrade in performance before becoming stable at~1500. The uniform distribution, sampled between, has the same convergence characteristic as the other two distributions but instead of diverging away, similar to the Normal distribution, it continues upward before converging at~2000. From Figure 4, we see that the Gumbel distribution is the most performant. Figure 5: Stability of GumbelClip with extended training time. GumbelClip is trained for 150M frames,~4x longer, on the Boxing and FishingDerby environments. We see that GumbelClip experiences little to no oscillations in performance even with continued weight updates. It is natural to inquire on the stability of this method, as we rely on additive noise which could cause instability after an optimal policy has been reached. To this end, we evaluate the stability of GumbelClip by increasing the number of training iterations such that 150 million frames are seen. The Boxing and FishingDerby environments are used for this evaluation. 
The environments were chosen as the policy had achieved the highest score during training, a known ceiling, and any instability would cause a divergence to a sub-optimal policy. From the rather uninteresting graphs, shown in Figure 5, we see that GumbelClip can converge to and maintain a stable policy even with continued parameter updates. To overcome the noise added by the Gumbel distribution requires the network to output a near one-hot-encoded categorical distribution. In our final set of experiments, we performed ablations over the various components of GumbelClip to tease apart the cause of improvements in performance. The of the ablation are shown in Figure 6 (a) with the complete GumbelClip algorithm in blue and each stripped version of GumbelClip as a red curve. We start with a base version, which is the application of truncated importance sampling and replay memory to the off-policy algorithm A2C; this base version is shown in the leftmost panel in Figure 6 (a). From Figure 6: Ablations of GumbelClip. We gradually introduce each component of GumbelClip to a base version of an off-policy actor-critic A2C algorithm. a) The full GumbelClip algorithm is shown as the blue curve while the stripped versions are shown in red. From left to right we gradually add the components onto the base version until we arrive at the full GumbelClip algorithm, shown in the last pane. The lines in the last pane are identical with one graph is stylized. b) The table provides the percent deltas between either the stripped version to GumbelClip or the current model to the last. We measure the change between the last 100 episodes. lower than GumbelClip. The simple addition of a larger batchsize, from 16 sampled trajectories, to 64 as shown in the second panel in Figure 6 (a) causes an increase of +21.30% over the base version and narrows the difference to GumbelClip to −26.49%. Using an aggressive clamp of 4 instead of 10 on the importance sampling ratioρ improves performance by an additional +12.33%. Finally, the addition of noise sampled from the Gumbel distribution closes the gap with a final increase of +21.09%. The addition of noise changes Equation 8 from being in terms of π(a|s), in both the policy being optimized and the numerator inρ, to F(a|s). It is clearly shown that the large batchsize and additive noise contribute the most to the performance increase between stripped versions. Additionally, while difficult to quantify, we can see from the plots that the aggressive clamp and additive noise improve sample efficiency the most. Figure 7: Change in distributions overρ as components are added, shown as histograms. The xaxis corresponds to the magnitude ofρ and the y-axis is the number of occurrences. From left to right we see the distribution overρ when using the base version. As components are added we see three modes, at approximately 0, 1, and c, become more pronounced. The addition of Gumbel noise increases the smoothness between 0 → 1, creates a fatter tail from the mode 1 → c, and increases the density of all modes. From Figure 7, we see the effect that each addition to the base version has on the ratioρ and therefore the updates to the network. In the last two panes of Figure 7, we see three clear modes at 0, 1, and c. Addition of Gumbel noise increases the density across all modes and reduces the "noise" seen from 0 → 1. The modes, as discussed in Section 3, correspond to "agreement", at 1, and "disagreement" at {0, c}. 
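For reference, the three additive-noise choices compared in the ablation above can be sampled as follows; the helper name is ours, and the range of the uniform distribution is an assumption for this sketch since it is not stated explicitly in the text.

```python
import torch

def sample_additive_noise(shape, kind="gumbel"):
    """Noise distributions compared in the ablation: standard Gumbel,
    standard Normal, and Uniform (range assumed to be [0, 1))."""
    if kind == "gumbel":
        u = torch.rand(shape)
        return -torch.log(-torch.log(u + 1e-20) + 1e-20)
    if kind == "normal":
        return torch.randn(shape)
    return torch.rand(shape)  # uniform; the exact interval used is not stated above
```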
As GumbelClip relies heavily on the Gumbel distribution for sampling noise, it is best suited to environments with discrete actions. Therefore, an interesting avenue for future work would be to find a suitable noise distribution that works with continuous actions. Additionally, further investigation into possible annealing schedules for a temperature hyperparameter in Equation 6, or around the clamping constant c, could yield interesting results. In this paper we have presented GumbelClip, a set of adjustments to the on-policy A2C algorithm, enabling fully off-policy learning from stored trajectories in a replay memory. Our approach relies on aggressive clipping of the importance weight, large batch sizes, and additive noise sampled from the Gumbel distribution. We have empirically validated the use of each component in GumbelClip through ablations and shown the stability of the algorithm. Furthermore, we have shown that GumbelClip achieves superior performance and higher sample efficiency than A2C. GumbelClip nears the performance and sample efficiency of ACER on many of the tested environments. Our methodology requires minimal changes to the A2C algorithm, which, in contrast to ACER, makes the implementation of GumbelClip straightforward.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxngREtDB
With a set of modifications to A2C, in under 10 LOC, you get an off-policy actor-critic that outperforms A2C and performs similarly to ACER. The modifications are large batch sizes, aggressive clamping, and policy "forcing" with Gumbel noise.
In the past few years, various advancements have been made in generative models owing to the formulation of Generative Adversarial Networks (GANs). GANs have been shown to perform exceedingly well on a wide variety of tasks pertaining to image generation and style transfer. In the field of Natural Language Processing, word embeddings such as word2vec and GLoVe are state-of-the-art methods for applying neural network models to textual data. Attempts have been made at utilizing GANs with word embeddings for text generation. This work presents an approach to text generation using Skip-Thought sentence embeddings in conjunction with GANs based on gradient penalty functions and f-measures. The results of using sentence embeddings with GANs for generating text conditioned on input information are comparable to the approaches where word embeddings are used. Numerous efforts have been made in the field of natural language text generation for tasks such as sentiment analysis BID35 and machine translation BID7 BID24. Early techniques for generating text conditioned on some input information were template or rule-based engines, or probabilistic models such as n-grams. In recent times, state-of-the-art results on these tasks have been achieved by recurrent BID23 BID20 and convolutional neural network models trained for likelihood maximization. This work proposes an approach for text generation using Generative Adversarial Networks with Skip-Thought vectors (code available at: https://github.com/enigmaeth/skip-thought-gan). GANs BID9 are a class of neural networks that explicitly train a generator to produce high-quality samples by pitting it against an adversarial discriminative model. GANs output differentiable values, and hence the task of discrete text generation has to use vectors as differentiable inputs. This is achieved by training the GAN with sentence embedding vectors produced by Skip-Thought, a neural network model for learning fixed-length representations of sentences. Deep neural network architectures have demonstrated strong results on natural language generation tasks BID32. Recurrent neural networks using combinations of shared parameter matrices across time-steps BID30 BID20 BID3 with different gating mechanisms for easing optimization BID13 BID3 have found some success in modeling natural language. Another approach is to use convolutional neural networks that reuse kernels across time-steps with an attention mechanism to perform language generation tasks BID15. Supervised learning with deep neural networks in the framework of encoder-decoder models has become the state-of-the-art method for approaching NLP problems (Young et al.). Stacked denoising autoencoders have been used for domain adaptation in classifying sentiments BID8, and combinatory categorical autoencoders demonstrate learning the compositionality of sentences BID12. Recent text generation models use a wide variety of GANs, such as the gradient policy based sequence generation framework BID34 BID6, for performing natural language generation tasks. Other architectures, such as those proposed in with an RNN and variational auto-encoder generator with a CNN discriminator, and in BID11 with a leaky discriminator to guide the generator through high-level extracted features, have also shown great results. This section introduces the Skip-Thought Generative Adversarial Network with a background on the models it is based on. The Skip-Thought model induces embedding vectors for sentences present in the training corpus. These vectors constitute the real distribution for the discriminator network.
The generator network produces sentence vectors similar to those from the encoded real distribution. The generated vectors are sampled over training and decoded to produce sentences using a Skip-Thought decoder conditioned on the same text corpus. Skip-Thought is an encoder-decoder framework with an unsupervised approach to train a generic, distributed sentence encoder. The encoder maps sentences sharing semantic and syntactic properties to similar vector representations and the decoder reconstructs the surrounding sentences of an encoded passage. The sentence encoding approach draws inspiration from the skip-gram model in producing vector representations using previous and next sentences. The Skip-Thought model uses an RNN encoder with GRU activations BID4 and an RNN decoder with conditional GRU, the combination being identical to the RNN encoder-decoder of used in neural machine translation. For a given sentence tuple (s i−1, s i, s i+1), let w t i denote the t-th word for sentence s i, and let x t i denote its word embedding. The model has three components: Encoder. Encoded vectors for a sentence s i with N words w i, w i+1,...,w n are computed by iterating over the following sequence of equations: DISPLAYFORM0 Decoder. A neural language model conditioned on the encoder output h i serves as the decoder. Bias matrices C z, C r, C are introduced for the update gate, reset gate and hidden state computation by the encoder. Two decoders are used in parallel, one each for sentences s i + 1 and s i − 1. The following equations are iterated over for decoding: DISPLAYFORM1 Objective. For the same tuple of sentences, objective function is the sum of log-probabilities for the forward and backward sentences conditioned on the encoder representation: DISPLAYFORM2 Generative Adversarial Networks BID9 are deep neural net architectures comprised of two networks, contesting with each other in a zero-sum game framework. For a given data, GANs can mimic learning the underlying distribution and generate artificial data samples similar to those from the real distribution. Generative Adversarial Networks consists of two players -a Generator and a Discriminator. The generator G tries to produce data close to the real distribution P (x) from some stochastic distribution P (z) termed as noise. The discriminator D's objective is to differentiate between real and generated data G(z).The two networks -generator and discriminator compete against each other in a zero-sum game. The minimax strategy dictates that each network plays optimally with the assumption that the other network is optimal. This leads to Nash equilibrium which is the point of convergence for GAN model. Objective. BID9 have formulated the minimax game for a generator G, discriminator D adversarial network with value function V (G, D) as: DISPLAYFORM0 The STGAN model uses a deep convolutional generative adversarial network, similar to the one used in (Radford et al.). The generator network is updated twice for each discriminator network update to prevent fast convergence of the discriminator network. The Skip-Thought encoder for the model encodes sentences with length less than 30 words using 2400 GRU units BID4 with word vector dimensionality of 620 to produce 4800-dimensional combineskip vectors.. The combine-skip vectors, with the first 2400 dimensions being uni-skip model and the last 2400 bi-skip model, are used as they have been found to be the best performing in the experiments 1. 
The decoder uses greedy decoding taking argmax over softmax output distribution for given time-step which acts as input for next time-step. It reconstructs sentences conditioned on a sentence vector by randomly sampling from the predicted distributions with or without a preset beam width. Unknown tokens are not included in the vocabulary. A 620 dimensional RNN word embeddings is used with 1600 hidden GRU decoding units. Gradient clipping with Adam optimizer BID16 ) is used, with a batch size of 16 and maximum sentence length of 100 words for decoder. The training process of a GAN is notably difficult BID28 and several improvement techniques such as batch normalization, feature matching, historical averaging BID28 and unrolling GAN (Metz et al.) have been suggested for making the training more stable. Training the Skip-Thought GAN often in mode dropping (Arjovsky & Bottou; Srivastava et al.) with a parameter setting where it outputs a very narrow distribution of points. To overcome this, it uses minibatch discrimination by looking at an entire batch of samples and modeling the distance between a given sample and all the other samples present in that batch. The minimax formulation for an optimal discriminator in a vanilla GAN is Jensen-Shannon Distance between the generated distribution and the real distribution. used Wasserstein distance or earth mover's distance to demonstrate how replacing distance measures can improve training loss for GAN. BID10 have incorporated a gradient penalty regularizer in WGAN objective for discriminator's loss function. The experiments in this work use the above f-measures to improve performance of Skip-Thought GAN on text generation. GANs can be conditioned on data attributes to generate samples BID21 BID25. In this experiment, both the generator and discriminator are conditioned on Skip-Thought encoded vectors. The encoder converts 70000 sentences from the BookCorpus dataset with a training/test/validation split of 5/1/1 into vectors used as real samples for discriminator. The decoded sentences are used to evaluate model performance under corpus level BLEU-2, BLEU-3 and BLEU-4 metrics (Papineni et al.), once using only test set as reference and then entire corpus as reference. i can n't see some shopping happened. i had a police take watch out of my wallet.get him my camera found a person's my watch. here i collect my telephone card and telephone number delta airlines flight six zero two from six p.m. to miami, please? Table 2. Sample sentences generated from training on CMU-SE Dataset; mode collapse is overcome by using minibatch discrimination. Formation of sentences further improved by changing f-measure to Wasserstein distance along with gradient penalty regularizer. Language generation is done on a dataset comprising simple English sentences referred to as CMU-SE 2 in BID26. The CMU-SE dataset consists of 44,016 sentences with a vocabulary of 3,122 words. For encoding, the vectors are extracted in batches of sentences having the same length. The samples represent how mode collapse is manifested when using least-squares distance BID18 f-measure without minibatch discrimination. Table 2(a) contains sentences generated from STGAN using least-squares distance BID18 in which there was no mode collapse observed, while 2(b) contains examples wherein it is observed. Table 2(c) shows generated sentences using gradient penalty regularizer(GAN-GP). 
Table 2 (d) has samples generated from STGAN when using Wasserstein distance f-measure as WGAN ) and 2(e) contains samples when using a gradient penalty regularizer term as WGAN-GP BID10. Another performance metric that can be computed for this setup has been described in BID26 which is a parallel work to this. Simple CFG 3 and more complex ones like Penn Treebank CFG generate samples BID5 which are used as input to GAN and the model is evaluated by computing the diversity and accuracy of generated samples conforming to the given CFG.Skip-Thought sentence embeddings can be used to generate images with GANs conditioned on text vectors for text-to-image conversion tasks like those achieved in BID27 BID2. These embeddings have also been used to Models like neuralstoryteller 4 which use these sentence embeddings can be experimented with generative adversarial networks to generate unique samples.
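For reference, the gradient penalty objective used in the WGAN-GP variant above can be sketched as follows on batches of Skip-Thought sentence vectors. This is a generic sketch with our own function names, not the authors' released code; the critic D is assumed to map a batch of 4800-dimensional combine-skip vectors to one scalar score per sample.

```python
import torch

def wgan_gp_critic_loss(D, real_vecs, fake_vecs, gp_weight=10.0):
    """WGAN-GP critic loss on sentence vectors (real_vecs, fake_vecs: [B, 4800])."""
    d_real = D(real_vecs).mean()
    d_fake = D(fake_vecs).mean()

    # Gradient penalty on random interpolations between real and generated vectors.
    eps = torch.rand(real_vecs.size(0), 1, device=real_vecs.device)
    interp = (eps * real_vecs + (1.0 - eps) * fake_vecs).requires_grad_(True)
    grad = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grad.norm(2, dim=1) - 1.0) ** 2).mean()

    # The critic maximizes d_real - d_fake, i.e. minimizes its negation plus the penalty.
    return d_fake - d_real + gp_weight * penalty
```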
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylLud_moQ
Generating text using sentence embeddings from Skip-Thought Vectors with the help of Generative Adversarial Networks.
Autoregressive recurrent neural decoders that generate sequences of tokens one-by-one and left-to-right are the workhorse of modern machine translation. In this work, we propose a new decoder architecture that can generate natural language sequences in an arbitrary order. Along with generating tokens from a given vocabulary, our model additionally learns to select the optimal position for each produced token. The proposed decoder architecture is fully compatible with the seq2seq framework and can be used as a drop-in replacement for any classical decoder. We demonstrate the performance of our new decoder on the IWSLT machine translation task, and we inspect and interpret the learned decoding patterns by analyzing how the model selects new positions for each subsequent token.
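One way a decoder of this kind can be run at inference time is sketched below. The `model.step` interface, its outputs, and the greedy choice of token and position are our assumptions for illustration, not details taken from the paper.

```python
import torch

def decode_arbitrary_order(model, encoder_states, max_len=50, eos_id=2):
    """Greedy sketch of out-of-order decoding: at each step the model scores a
    token and an insertion position, and the token is inserted at that slot."""
    partial = []  # current partial output sequence (list of token ids)
    for _ in range(max_len):
        token_logits, pos_logits = model.step(encoder_states, partial)
        token = int(token_logits.argmax())
        if token == eos_id:
            break
        # A position p in 0..len(partial) means "insert before element p".
        pos = int(pos_logits.argmax())
        partial.insert(min(pos, len(partial)), token)
    return partial
```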
[ 0, 0, 0, 0, 1 ]
B1ejpNkhim
new out-of-order decoder for neural machine translation
In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms. Although the learning based approach is inexact, we are able to generalize to count large patterns and data graphs in polynomial time compared to the exponential time of the original NP-complete problem. Different from other traditional graph learning problems such as node classification and link prediction, subgraph isomorphism counting requires more global inference to oversee the whole graph. To tackle this problem, we propose a dynamic intermedium attention memory network (DIAMNet) which augments different representation learning architectures and iteratively attends pattern and target data graphs to memorize different subgraph isomorphisms for the global counting. We develop both small graphs (<= 1,024 subgraph isomorphisms in each) and large graphs (<= 4,096 subgraph isomorphisms in each) sets to evaluate different models. Experimental show that learning based subgraph isomorphism counting can help reduce the time complexity with acceptable accuracy. Our DIAMNet can further improve existing representation learning models for this more global problem. Graphs are general data structures widely used in many applications, including social network analysis, molecular structure analysis, natural language processing and knowledge graph modeling, etc. Learning with graphs has recently drawn much attention as neural network approaches to representation learning have been proven to be effective for complex data structures (; ; b; ; ;). Most of existing graph representation learning algorithms focus on problems such as node classification, linking prediction, community detection, etc. (a). These applications are of more local decisions for which a learning algorithm can usually make inferences by inspecting the local structure of a graph. For example, for the node classification problem, after several levels of neighborhood aggregation, the node representation may be able to incorporate sufficient higher-order neighborhood information to discriminate different classes . In this paper, we study a more global learning problem: learning to count subgraph isomorphisms (counting examples are shown as Figure 1). Although subgraph isomorphism is the key to solve graph representation learning based applications , tasks of identifying or counting subgraph isomorphisms themselves are also significant and may support broad applications, such as bioinformatics , chemoinformatics , and online social network analysis . For example, in a social network, we can solve search queries like "groups of people who like X and visited Y-city/state." In a knowledge graph, we can answer questions like "how many languages are there in Africa speaking by people living near the banks of the Nile River?" Many pattern mining algorithms or graph database indexing based approaches have been proposed to tackle subgraph isomorphism problems (; ; ; ;). However, these approaches cannot be applied to large-scale graphs because of the exponential time complexity. Thanks to the powerful graph representation learning models which can effectively capture local structural information, we can use a learning algorithm to learn how to count subgraph isomorphisms from a lot of examples. Then the algorithm can scan a large graph and memorize all necessary local information based on a query pattern graph. In this case, although learning based approaches can be inexact, we can roughly estimate the range of the number of subgraph isomorphism. 
This can already help many applications that do not require exact match or need a more efficient pre- processing step. To this end, in addition to trying different representation learning architectures, we develop a dynamic intermedium attention memory network (DIAMNet) to iteratively attend the query pattern and the target data graph to memorize different local subgraph isomorphisms for global counting. To evaluate the learning effectiveness and efficiency, we develop a small (≤ 1,024 subgraph isomorphisms in each graph) and a large (≤ 4,096 subgraph isomorphisms in each graph) dataset and evaluate different neural network architectures. Our main contributions are as follows. • To our best knowledge, this is the first work to model the subgraph isomorphism counting problem as a learning problem, for which both the training and prediction time complexities are polynomial. • We exploit the representation power of different deep neural network architectures in an end-toend learning framework. In particular, we provide universal encoding methods for both sequence models and graph models, and upon them we introduce a dynamic intermedium attention memory network to address the more global inference problem for counting. • We conduct extensive experiments on developed datasets which demonstrate that our framework can achieve good on both relatively large graphs and large patterns compared to existing studies. Subgraph Isomophism Problems. Given a pattern graph and a data graph, the subgraph isomorphism search aims to find all occurrences of the pattern in the data graph with bijection mapping functions. Subgraph isomorphism is an NP-complete problem among different types of graph matching problems (monomorphism, isomorphism, and subgraph isomorphism). Most subgraph isomorphism algorithms are based on backtracking. They first obtain a series of candidate vertices and update a mapping table, then recursively revoke their own subgraph searching functions to match one vertex or one edge at a time. Ullmann's algorithm , VF2 , and GraphQL belong to this type of algorithms. However, it is still hard to perform search when either the pattern or the data graph grows since the search space grows exponentially as well. Some other algorithms are designed based on graph-index, such as gIndex , which can be used as filters to prune out many unnecessary graphs. However, graph-index based algorithms have a problem that the time and space in indexing also increase exponentially with the growth of the graphs . TurboISO and VF3 add some weak rules to find candidate subregions and then call the recursive match procedure on subregions. These weak rules can significantly reduce the searching space in most cases. Graph Representation Learning. Graph (or network) representation learning can be directly learning an embedding vector of each graph node (; ;). This approach is not easy to generalize to unseen nodes. On the other hand, graph neural networks (GNNs) provide a solution to representation learning for nodes which can be generalized to new graphs and unseen nodes. Many graph neural networks have been proposed since 2005 but rapidly developed in recent years. Most of them focus on generalizing the idea of convolutional neural networks for general graph data structures (; ; b;) or relational graph structures with multiple types of relations . More recently, propose a graph isomorphism network (GIN) and show its discriminative power. 
Others use the idea of recurrent neural networks (RNNs) which are originally proposed to deal with sequence data to work with graph data . Interestingly, with external memory, sequence models can work well on complicated tasks such as language modeling and shortest path finding on graphs . There is another branch of research called graph kernels (; ; ; ;) which also convert graph isomorphism to a similarity learning problem. However, they usually work on small graphs and do not focus on subgraph isomorphism identification or counting problems. We begin by introducing the subgraph isomophism problems and then provide the general idea of our work by analyzing the time complexities of the problems. Traditionally, the subgraph isomorphism problem is defined between two simple graphs or two directed simple graphs, which is an NP-complete problem. We generalize the problem to a counting problem over directed heterogeneous multigraphs, whose decision problem is still NP-complete. A graph or a pattern is defined as G = (V, E, X, Y) where V is the set of vertices. E ⊆ V × V is the set of edges, X is a label function that maps a vertex to a vertex label, and Y is a label function that maps an edge to a set of edge labels. We use an edge with a set of edge labels to represent multiedges with the same source and the same target for clarity. That is, there are no two edges in a graph such that they have the same source, same target, and the same edge label. To simplify the statement, we assume In this paper, we discuss isomorphic mappings that preserve graph topology, vertex labels and edge labels, but not vertex ids. More precisely, a pattern Furthermore, G P being isomorphic to a graph G G is denoted as G P G G and the function f is named as an isomorphism. The subgraph isomorphism counting problem is defined as to find the number of all different subgraph isomorphisms between a pattern graph G P and a graph G G. Examples are shown in Figure 1. Intuitively, we need to compute O(P erm(|V G |, |V P |) · d |V P | ) to solve the subgraph isomorphism counting problem by enumeration, where P erm(n, k) = n! (n−k)!, |V G | is the number of graph nodes, |V P | is the number of pattern nodes, d is the maximum degree. The first subgraph isomorphism algorithm, Ullmann's algorithm , reduces the seaching time to. If the pattern and the graph are both small, the time is acceptable because both of two factors are not horrendously large. However, since the computational cost grows exponentially, it is impossible to count as either the graph size or the pattern size increases. If we use neural networks to learn distributed representations for V G and V P or E G and E P, we can reduce the complexity to O(via source attention and self-attention. Assuming that we can further learn a much higher level abstraction without loss of representation power for G P, then the computational cost can be further reduced to However, the complexity of the latter framework is still not acceptable when querying over large graphs. If we do not consider self-attention, the computational cost will be O(|V G |) or O(|E G |), but missing the self-attention will hurt the performance. In this work, we hope to use attention mechanism and additional memory networks to further reduce the complexity compared with while keeping the performance acceptable on the counting problem. A graph (or a pattern) can be represented as a sequence of edges or a series of adjacent matrices and vertex features. 
For sequence inputs we can use CNNs , RNNs such as GRU , or Transformer-XL to extract high-level features. While if the inputs are modeled as series of adjacent matrices and vertex features, we can use RGCN to learn vertex representations with message passing from neighborhoods. After obtaining the pattern representation and the graph representation, we feed them into an interaction module to extract the correlated features from each side. Then we feed the output context of the interaction module into a fully-connected layer to make predictions. A general framework is shown in Figure 2 and the difference between sequence encoding and graph encoding is shown in Figure 3. In sequence models, the minimal element of a graph (or a pattern) is an edge. By definition, at least three attributes are required to identify an edge e, which are the source vertex id u, the target vertex id v, and its edge label y ∈ Y(e). We further add two attributes of vertices' labels to form a 5-tuple (u, v, X (u), y, X (v)) to represent an edge e, where X (u) is the source vertex label and X (v) is the target vertex label. A list of 5-tuple is referred as a code. We follow the order defined in gSpan to compare pairs of code lexicographically; the detailed definition is given in Appendix A. The minimum code is the code with the minimum lexicographic order with the same elements. Finally, each graph can be represented by the corresponding minimum code, and vice versa. Given that a graph is represented as a minimum code, or a list of 5-tuples, the next encoding step is to encode each 5-tuple into a vector. Assuming that we know the max values of |V|, |X |, |Y| in a dataset in advance, we can encode each vertex id v, vertex label x, and edge label y into Bnary digits, where B is the base and each digit d ∈ {0, 1, · · ·, B − 1}. It is easy to replace each digit with a one-hot vector so that each 5-tuple can be vectorized as a multi-hot vector which is the concatenation of one-hot vectors. The length of the multi-hot vector of a 5-tuple is Then we can easily calculate the graph dimension d g and the pattern dimension d p. Furthermore, the minimum code can be encoded into a multi-hot matrix, G ∈ R |E G |×dg for a graph G G or P ∈ R |E P |×dp for a pattern G P according to this encoding method. This encoding method can be extended when we have larger values of |V|, |X |, |Y|. A larger value, e.g., |V|, only increases the length of one-hot vectors corresponding to its field. Therefore, we can regard new digits as the same number of zeros in previous data. As long as we process previous one-hot vectors carefully to keep these new dimensions from modifying the original distributed representations, we can also extend these multi-hot vectors without affecting previous models. A simple but effective way is to initialize additional new weights related to new dimensions as zeros. Given the encoding method in Section 4.1.1, we can simply embed graphs as multi-hot matrices. Then we can use general strategies of sequence modeling to learn dependencies among edges in graphs. Convolutional Neural Networks (CNNs) have been proved to be effective in sequence modeling . In our experiments, we apply multiple layers of the convolution operation to obtain a sequence of high-level features. Recurrent Neural Networks (RNNs), such as GRU , are widely used in many sequence modeling tasks. 
Transformer-XL (TXL) is a variant of the Transformer architecture and enables learning long dependencies beyond a fixed length without disrupting temporal coherence. Unlike the original autoregressive settings, in our model the Transformer-XL encoder works as a feature extractor, in which the attention mechanism has a full, unmasked scope over the whole sequence. However, its computational cost grows quadratically with the size of inputs, so the tradeoff between performance and efficiency would be considered. In graph models, each vertex has a feature vector and edges are used to pass information from its source to its sink. GNNs do not need vertex ids and edge ids explicitly because the adjacency information is included in a adjacent matrix. As explained in Section 4.1.1, we can vectorize vertex labels into multi-hot vectors as vertex features. In a simple graph or a simple directed graph, the adjacent information can be stored in a sparse matrix to reduce the memory usage and improve the computation speed. As for heterogeneous graphs, behaviors of edges should depend on edge labels. RGCNs have relation-specific transformations so that each edge label and topological information are mixed into the message to the sink. We follow this method and use basis-decomposition for parameter sharing . Relational Graph Convolutional Networks (RGCNs) are developed specifically to handle multi-relational data in realistic knowledge bases. Each relation corresponds to a transformation matrix to transform relation-specific information from a neighbor to the center vertex. Two decomposition methods are proposed to address the rapid growth in the number of parameters with the number of relations: basis-decomposition and block-diagonal-decomposition. We use the first method, which is equivalent to the MLPs in GIN . The original RGCN uses the mean aggregator, but find that the sum-based GNNs can capture graph structures better. We implement both and named them as RGCN and RGCN-SUM respectively. Figure 4: Illustration of dynamic intermedium attention memory network (DIAMNet). Φ 1 represents Eqs. and, Φ 2 represents Eqs. and, and two types of gates are Eqs. and. After obtaining a graph representationǦ and a pattern representationP from a sequence model or a graph model where their column vectors are d-dimensional, we feed them as inputs of interaction layers to extract the correlated context between the pattern and the graph. A naive idea is to use attention modules to model interactions between these two representations and interactions over the graph itself. However, this method is not practical due to its complexity, To address the problem of high computational cost in the attention mechanism, we propose the Dynamic Intermedium Attention Memory Network (DIAMNet), using an external memory as an intermedium to attend both the pattern and the graph in order. To make sure that the memory has the knowledge of the pattern while attending the graph and vice-versa, this dynamic memory is designed as a gated recurrent network as shown in Figure 4. Assuming that the memory size is M and we have T recurrent steps, the time complexity is decreased into, which means the method can be easily applied to large-scale graphs. The external memory is divided into M blocks {m 1, ..., m M}, where m j ∈ R d. At each time step t, {m j} is updated by the pattern and the graph in order via multi-head attention mechanism . 
Specifically, in the DIAMNet update equations, MultiHead denotes the multi-head attention mechanism, σ represents the logistic sigmoid function, s_j is the intermediate state of the j-th memory block that summarizes information from the pattern, and s'_j summarizes information from both the pattern and the graph. z_j and z'_j are two gates designed to control the updates of the states in the j-th block. U_P, V_P, U_G, V_G ∈ R^{d×d} are trainable parameters. In this section, we report our major experimental results. More results can be found in the Appendix. In order to train and evaluate our neural models for the subgraph isomorphism counting problem, we need to generate enough graph-pattern data. As there is no special constraint on the pattern, the pattern generator may produce any connected multigraph without identical edges, i.e., without parallel edges sharing the same label. In contrast, the ground-truth number of subgraph isomorphisms must be tractable in our synthetic graph data. Therefore, our graph generator first generates multiple disconnected components, possibly with some subgraph isomorphisms. We use the idea of the neighborhood equivalence class (NEC) in TurboISO to control the necessary conditions of a subgraph isomorphism during the graph generation process. The detailed algorithms are shown in Appendix B. The generator then merges these components into a larger graph and ensures that no new subgraph isomorphisms are generated during the merge. The subgraph isomorphism search can therefore be performed over these component subgraphs in parallel. Using the pattern generator and the graph generator above, we can generate many patterns and graphs for neural models. We are interested in the following research questions: whether sequence models and graph convolutional networks can perform well given limited data, whether their running time is acceptable, and whether memory can help models make better predictions even when faced with an NP-complete problem. To evaluate different neural architectures and different prediction networks, we generate two datasets at different graph scales; the statistics are reported in Table 1. There are 187 unique patterns over all pairs, where 75 patterns belong to the small dataset and 122 patterns belong to the large dataset. Target data graphs are not required to be similar, so they are generated randomly. The generation details are reported in Appendix C. Instead of directly feeding multi-hot encoding vectors into the representation modules, we use two separate linear layers to transform the graph multi-hot vectors and the pattern multi-hot vectors into lower-dimensional, distributed ones. To improve efficiency, we also add a filtering layer that filters out irrelevant parts before all representation modules. The details of this filter layer are given in Section D.1. In our experiments, we implemented five different representation models. CNN is a 3-layer convolutional network in which each convolutional layer is followed by a max-pooling layer; the convolutional kernel sizes are 2, 3, and 4, respectively, with stride 1, and the pooling kernel sizes are 2, 3, and 4, also with stride 1. RNN is a simple 3-layer GRU model. TXL is a 6-layer Transformer encoder with additional memory. RGCN is a 3-layer RGCN with the basis decomposition; we follow the setting of the original paper and use mean-pooling in the message propagation step. RGCN-SUM is a modification of RGCN that replaces the mean-pooling with sum-pooling.
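Before comparing DIAMNet against the pooling-based interaction layers described next, here is a rough sketch of one DIAMNet recurrent step under the reading above: the memory attends to the pattern first and then to the graph via multi-head attention, and each attention result is merged into the memory through a learned sigmoid gate. The module names are illustrative, and the gate parameterization only approximates the U_P, V_P, U_G, V_G formulation.

```python
import torch
import torch.nn as nn

class DIAMNetStep(nn.Module):
    def __init__(self, d, num_heads=4):
        super().__init__()
        self.attn_p = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.attn_g = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.gate_p = nn.Linear(2 * d, d)  # stands in for U_P, V_P
        self.gate_g = nn.Linear(2 * d, d)  # stands in for U_G, V_G

    def forward(self, memory, pattern, graph):
        # memory: (B, M, d); pattern: (B, |E_P|, d); graph: (B, |E_G|, d)
        s, _ = self.attn_p(memory, pattern, pattern)   # summarize the pattern
        z = torch.sigmoid(self.gate_p(torch.cat([memory, s], dim=-1)))
        memory = z * memory + (1 - z) * s
        s, _ = self.attn_g(memory, graph, graph)       # then summarize the graph
        z = torch.sigmoid(self.gate_g(torch.cat([memory, s], dim=-1)))
        return z * memory + (1 - z) * s                # memory after one step
```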
After obtaining a graph representation Ǧ and a pattern representation P̌ from the representation learning modules, we feed them into the following types of interaction layers for comparison. SumPool: simple sum-pooling is applied to Ǧ and P̌ to obtain ǧ and p̌, and the model feeds Concat(ǧ, p̌, ǧ − p̌, ǧ ⊙ p̌), together with the graph size and the pattern size, into the following fully connected (FC) layers. MeanPool: the same as SumPool, but with mean-pooling. MaxPool: the same as SumPool, but with max-pooling. AttnPool: we want to use attention modules without excessive computational cost, so full self-attention is not acceptable; we simplify the attention by first applying a pooling mechanism to the pattern and then using the pooled vector to attend over the data graph, rather than simply pooling over it. Other settings follow the pooling methods above; detailed information is provided in Appendix D.2. We only report results of the mean-pooling based attention, because it is the best of the three variants. We also compare the performance and efficiency of the DIAMNet proposed in Section 4.3 with the above interaction networks. The memory initialization strategy is described in Appendix D.3, and the whole memory, together with the size information, is fed into the following FC layers. For a fair comparison, the embedding dimensions, the dimensions of all representation models, and the numbers of filters are all set to 64. The segment size and memory size in TXL are also 64 due to the computational cost. In DIAMNet, the memory length is fixed to 4 and the number of recurrent steps to 3 for both the small and large datasets. We train models with the mean squared error (MSE) loss and select the best models by evaluating on the validation set. The optimizer is Adam with a learning rate of 0.001; an L2 penalty with coefficient 0.001 is added. To avoid gradient explosion and overfitting, we apply gradient clipping and dropout with a dropout rate of 0.2. We use Leaky ReLU as the activation function in all modules. Due to the limited number of patterns, the pattern representation module easily overfits. Therefore, we use the same module with shared parameters to produce representations for both the pattern and the graph. We also find that curriculum learning helps models converge better; hence, all models in Table 3 are fine-tuned from the best models on the small dataset under the same settings. Training and evaluation were performed on a single NVIDIA GTX 1080 Ti GPU using the PyTorch framework. As we model the subgraph isomorphism counting problem as a regression problem, we use common regression metrics, including the root mean square error (RMSE) and the mean absolute error (MAE). In this task, negative predictions are meaningless, so we evaluate ReLU(ŷ) as the final prediction. Considering that about 75% of the counts in our dataset are zeros, we also use binary-classification metrics to analyze the behavior of different models. We report F1 scores for both zero data (F1_zero) and nonzero data (F1_nonzero). Two trivial baselines, Zero, which always predicts 0, and Avg, which always predicts the average count of the training data, are also used in the comparison. We first report results for the small dataset in Table 2 and for the large dataset in Table 3.
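As a small illustration of the simplest of these interaction layers, the sketch below implements the SumPool head: pool both representations, build Concat(ǧ, p̌, ǧ − p̌, ǧ ⊙ p̌), append the two size features, and regress the count with FC layers. The class name and hidden size are illustrative.

```python
import torch
import torch.nn as nn

class SumPoolHead(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(4 * d + 2, hidden), nn.LeakyReLU(), nn.Linear(hidden, 1))

    def forward(self, G, P, g_size, p_size):
        # G: (B, |E_G|, d), P: (B, |E_P|, d); g_size, p_size: (B, 1)
        g, p = G.sum(dim=1), P.sum(dim=1)
        feats = torch.cat([g, p, g - p, g * p, g_size, p_size], dim=-1)
        return self.fc(feats)  # predicted subgraph isomorphism count
```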
In addition to the trivial all-zero and average baselines and the other neural-network-based baselines, we are also curious to what extent our neural models can be faster than traditional search algorithms, so we also compare running time. Considering the graph generation strategy we used, we compare with the VF2 algorithm to avoid unnecessary interference from a similar search strategy. From the experiments, we can draw the following observations and conclusions. Comparison of different representation architectures. As shown in Table 2, graph models generally outperform most of the sequence models but take more time for inference. CNN is the worst model for the subgraph isomorphism counting problem. The most likely reason is that the sequence encoding method is not suitable for CNN: the code order does not consider the connectivity of adjacent vertices and the relevant label information, so convolution and pooling operations cannot extract useful local information and may instead introduce noise. From the results on the large dataset, we can see that F1_nonzero = 0.180 is even worse than the other models; in fact, we find that CNN always predicts 0 for large graphs. RNN and TXL are widely used for sequence modeling in many applications, and both models with simple pooling can perform well. We note that RNN with sum-pooling is better than TXL with memory. RNN itself holds a memory, and TXL has an even longer memory; however, the memory in RNN can, to some extent, memorize all information seen previously, whereas the memory of TXL is only the representation of the previous segment. In our experiments, the segment size is 64, so TXL cannot learn the global information in one pass. Partial structural information misleads TXL, which is consistent with the behavior of CNN. A longer segment for TXL may lead to better results, but it would require much more GPU memory and much longer training time. RGCN-SUM is much better than RGCN and the sequence models, which shows that the sum aggregator is good at modeling vertex representations in this task. The mean aggregator can model the distribution of neighbors, but this distribution can also mislead the models. Effectiveness of the memory. Table 2 shows the effectiveness of our dynamic attention memory network as the prediction layer. It outperforms the other three pooling methods as well as the simple attention mechanism for all representation architectures. Sum, Mean, and Attention pooling are all comparable with each other, because they all gather the global information of the pattern and graph representations. The prediction layer based on max-pooling, however, performs the worst, and even worse when the representation layer is CNN or Transformer-XL. This observation indicates that every context of the pattern representation should be counted, and we need a better way to compute the weights among the different contexts. The dynamic attention memory, with global information of both the pattern and the graph, achieves the best results in most cases. Figure 5: Model behaviors of three models on the small dataset. The x-axis is the example id and the y-axis is the count value. Ground-truth values are marked as orange + and predictions as blue ×. Two green dashed lines separate patterns into three blocks based on the numbers of vertices (3, 4, and 8); in each block, examples are sorted by the size of the data graph.
One of the most interesting observations is that the memory can even help extract the context of the pattern and graph when the representation layer (such as CNN) does not perform very well, which demonstrates the power of the proposed DIAMNet. Performance on larger graphs. Table 3 shows that our models can be applied to larger-scale graphs. For the large dataset, we only report the best pooling method for each of the baselines. Most of the results are consistent with those on the small dataset, which means RGCN is the best representation method for our task and the dynamic memory is effective. In terms of running time, all learning-based models are much faster than the traditional VF2 algorithm for subgraph isomorphism counting. Model behaviors. As shown in Figure 5, we compare the model behaviors of the best model (RGCN-SUM) and the worst model (CNN), as well as the large improvement of CNN when memory is added. We find that CNN+SumPool tends to predict count values below 400 and behaves the same across the three patterns. This may come from the fact that CNN can only extract local information of a sequence and sum-pooling is not a good way to aggregate it. However, the memory can store local information in each memory cell, so it can improve the representational power of CNN and achieve better performance. RGCN, on the other hand, can better represent the graph structure, so it achieves better results than CNN, especially on the largest pattern (the third block of each figure). More results can be found in Appendix F. In this paper, we study the challenging subgraph isomorphism counting problem. With the help of deep graph representation learning, we are able to convert this NP-complete problem into a learning problem, and we can then use the learned model to predict subgraph isomorphism counts in polynomial time. The counting problem requires global inference rather than only learning node or edge representations. Therefore, we have developed a dynamic intermedium attention memory network to memorize local information and summarize it for the global output. We build two datasets to evaluate different representation learning models and global inference models. Results show that learning-based methods are a promising direction for subgraph isomorphism detection and counting, and that memory networks indeed help global inference. We also performed a detailed analysis of model behaviors for different pattern and graph sizes and labels. Results show that there is still much room for improvement when the vertex label size is large. Moreover, we see potential real-world applications of subgraph isomorphism counting, such as question answering and information retrieval. It would be very interesting to examine the domain adaptation power of our pretrained models in more real-world applications. The lexicographic order is a linear order defined as follows. If A = (a_0, a_1, ..., a_m) and B = (b_0, b_1, ..., b_n) are two codes, then A ≤ B iff either of the following is true: 1. there exists 0 ≤ k ≤ min(m, n) such that a_t = b_t for all 0 ≤ t < k and a_k < b_k; 2. ∀ 0 ≤ k ≤ m, a_k = b_k, and n ≥ m. In our setting, two 5-tuples are compared by applying the same rule field by field over (u, v, X(u), y, X(v)). B PATTERN GENERATOR AND GRAPH GENERATOR As proposed in Section 5.1, two generators are required to generate the datasets. The pattern generator is shown in Algorithm 1. The algorithm first uniformly generates a directed tree, then adds the remaining edges with random labels. Vertex labels and edge labels are also generated uniformly, but each label is required to appear at least once.
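A self-contained sketch of this pattern generation procedure (Algorithm 1) is given below, assuming the random directed tree is grown by attaching each new vertex to a random earlier vertex. It simplifies the original in two respects: it skips duplicate vertex pairs entirely rather than only rejecting parallel edges with identical labels, and it does not enforce that every label appears at least once.

```python
import random

def generate_pattern(n_v, n_e, l_v, l_e):
    # random directed tree on n_v vertices (n_v - 1 edges)
    edges = [(random.randrange(v), v) for v in range(1, n_v)]
    # add the remaining n_e - (n_v - 1) edges
    while len(edges) < n_e:
        u, v = random.randrange(n_v), random.randrange(n_v)
        if u != v and (u, v) not in edges:
            edges.append((u, v))
    vertex_labels = [random.randrange(l_v) for _ in range(n_v)]
    edge_labels = [random.randrange(l_e) for _ in edges]
    return edges, vertex_labels, edge_labels

pattern = generate_pattern(n_v=4, n_e=6, l_v=2, l_e=2)
```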
Algorithm 2 shows the process of graph generation. Two hyperparameters control the density of subisomorphisms: α ∈ [0, 1] decides the probability of adding subisomorphisms rather than random edges, and β ∈ N+ is the parameter of the Dirichlet distribution used to sample the sizes of components. After generating several directed trees and satisfying the vertex number requirement, the algorithm starts to add the remaining edges. It can add edges within one component and try to add subgraph isomorphisms, or it can randomly add edges between two components or within one component. The following merge subroutine merges these components into a large graph. Shuffling is also applied so that the datasets cannot be trivially exploited. Searching for subisomorphisms in the whole graph is then equivalent to searching in the components separately, because edges between any two components do not satisfy the necessary conditions. Algorithm 1 Pattern Generator. Input: the number of vertices N_v, the number of edges N_e, the number of vertex labels L_v, the number of edge labels L_e. 1: P := GenerateDirectedTree(N_v) 2: AssignNodesLabels(P, L_v) 3: AddRandomEdges(P, P, null, N_e − N_v + 1) 4: AssignEdgesLabels(P, L_e) Output: the generated pattern P. In Algorithm 1 and Algorithm 2, the function AddRandomEdges adds the required number of edges from one component to another without generating new subgraph isomorphisms. The two components can also be the same one, which means edges are added within one component. The NEC tree is utilized in TurboISO to explore candidate regions for further matching. It takes O(|V_P|^2) time to build but can significantly reduce the search space in the data graph. It records the equivalence classes and necessary conditions of the pattern. We make sure that edges between two components do not satisfy the necessary conditions in the NEC tree when adding random edges between them. This data structure and this idea help us generate data and search for subisomorphisms more efficiently than random generation and traditional subgraph isomorphism search. We can generate as many examples as needed using the two generators; however, we limit the numbers of training, dev, and test examples to test whether learning-based models can generalize. Intuitively, not all vertices and edges in a graph can match some subisomorphism, so we simply add a FilterNet to adjust the graph encoding, where Ǧ' is the graph representation projected into the pattern space, σ is the sigmoid function, f_j is the gate that decides whether to filter out the j-th vertex or the j-th edge, and W_G ∈ R^{d_p×d_g} and W_F ∈ R^{1×d_p} are trainable. Thanks to the multi-hot encoding, we can simply accumulate the label information of the pattern by summation. After this filter layer, only the relevant parts of the graph are passed to the next representation layer. The computation of AttnPool (Mean) first reduces P̌ to a single pooled vector p ∈ R^{1×d} and then uses it to attend over the graph; combined with this pooling strategy, the time complexity decreases to be linear in the graph size. We also implement three variants with sum-pooling, mean-pooling, and max-pooling, respectively; results of AttnPool (Sum) and AttnPool (Max) are shown in Appendix E. There are many ways to initialize the memory {m_j}; we initialize each block by pooling over a segment of the graph representation, where s is the stride and k is the kernel size of this pooling. In Table 5, we compare two additional pooling methods with the MeanPool defined above. Table 5 shows results of different representation models with different interaction networks on the small dataset. AttnPool (Mean) and DIAMNet (MeanInit) usually perform better than the other pooling methods. As shown in Figure 7, different interaction modules perform differently in different views.
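The FilterNet described above can be sketched as follows. Since its exact equations are not reproduced in the text, this is only one plausible reading, in which each graph row is projected into the pattern space, compared with the summed pattern label information, and gated with a sigmoid; the parameter and class names are illustrative.

```python
import torch
import torch.nn as nn

class FilterNet(nn.Module):
    def __init__(self, d_g, d_p):
        super().__init__()
        self.W_G = nn.Linear(d_g, d_p, bias=False)  # graph -> pattern space
        self.W_F = nn.Linear(d_p, 1, bias=False)    # gate projection

    def forward(self, G, P):
        # G: (B, |E_G|, d_g) graph multi-hot rows; P: (B, |E_P|, d_p) pattern rows
        p = P.sum(dim=1, keepdim=True)                    # accumulated pattern labels
        gate = torch.sigmoid(self.W_F(self.W_G(G) * p))   # (B, |E_G|, 1)
        return gate * G                                   # suppress irrelevant parts
```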
We find that MaxPool always predicts higher count values when the pattern is small and the graph is large, while AttnPool always predicts very small numbers except when the pattern vertex size is 8 and the graph vertex size is 64. The same pattern appears when we use edge sizes as the x-axis. This observation shows that AttnPool has difficulty predicting count values when either the pattern or the graph is small; it suggests that the attention focuses more on the zero vector we added than on the pooled pattern. Our DIAMNet, however, performs the best across all pattern/graph sizes. When the bins are ordered by vertex label sizes or edge label sizes, the performance of all three interaction modules across the distribution is similar. When bins are ordered by vertex label sizes, we again find that AttnPool prefers to predict zeros when the patterns are small. MaxPool fails when facing complex patterns with more vertex labels, and DIAMNet also does not perform well on these patterns. As for edge labels, results look good for MaxPool and DIAMNet, but AttnPool is not satisfactory. As shown in Figure 8, different representation modules also perform differently in different views. CNN performs badly when the graph size is large (shown in Figures 8a and 8d) and when patterns become complicated (shown in Figures 8g and 8j), which further indicates that CNN can only extract local information and suffers when global information is needed in larger graphs. RNN, on the other hand, performs worse when the graphs are large, especially when the patterns are small (shown in Figure 8e), which is intuitively consistent with its nature. On the contrary, RGCN-SUM with DIAMNet is not affected by the edge sizes, because it directly learns vertex representations rather than edge representations.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJx-akSKPS
In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms.
Domain adaptation is an open problem in deep reinforcement learning (RL). Often, agents are asked to perform in environments where data is difficult to obtain. In such settings, agents are trained in similar environments, such as simulators, and are then transferred to the original environment. The gap between visual observations of the source and target environments often causes the agent to fail in the target environment. We present a new RL agent, SADALA (Soft Attention DisentAngled representation Learning Agent). SADALA first learns a compressed state representation. It then jointly learns to ignore distracting features and solve the task presented. SADALA's separation of important and unimportant visual features leads to robust domain transfer. SADALA outperforms both prior disentangled-representation based RL and domain randomization approaches across RL environments (Visual Cartpole and DeepMind Lab). RL agents learn to maximize rewards within a task by taking actions based on observations. The advent of deep learning has enabled RL agents to learn in high-dimensional feature spaces and complex action spaces. Deep RL methods have beaten human performance in a variety of tasks, such as Go. However, deep RL has two crippling drawbacks: high sample complexity and specificity to the training task. There are many domains where data collection is expensive and time consuming, such as healthcare, autonomous vehicles, and robotics. Thus, agents are often trained in simulation and must transfer the resulting knowledge to reality. While this solves the issue of sample complexity, reality and simulated domains are sufficiently different that it is infeasible to naively train a deep RL agent in a simulation and transfer it. This is known as the reality gap. Crossing the reality gap is difficult for two orthogonal reasons. The first is that the dynamics of a simulation are an approximation of the dynamics of the real world. Prior work has shown success in transfer between domains with different dynamics. In this paper, we address the second difficulty: the difference in visual observations of states. Due to limitations in current photorealistic rendering, simulation and the real world are effectively two different visual domains. We present a method for robust transfer between visual RL domains, using attention and a β variational autoencoder to automatically learn a state representation sufficient to solve both source and target domains. By learning a disentangled and relevant state representation, our approach does not require target domain samples during training. The state representation enables the RL agent to attend to only the relevant state information and ignore all other, potentially distracting information. Domain randomization is currently the most popular transfer method between different visual domains in RL. By training on many source domains, an RL agent implicitly learns to ignore the factors of variation present. Sample complexity scales with the number of source domains and randomized factors, and the modes of variation must be manually selected; when faced with different variation, domain randomization will fail. Furthermore, there is evidence that domain randomization can destabilize the training of some RL algorithms, such as A3C and DDPG. Though domain randomization has shown success in transfer, it has three downsides: high sample complexity, manual selection of irrelevant features, and unstable training. Some visual domain adaptation work is based on image-to-image translation.
These methods learn a mapping from source domain inputs to target domain inputs, utilizing adversarial methods such as Generative Adversarial Networks (GANs) trained on data from both the source and target domains. During inference, the target domain input is translated to the source domain before being processed. While these methods have shown promising results, they have two downsides. First, translation incurs additional overhead at inference time. This overhead is especially large since the inputs are high-dimensional images. Second, these methods require samples from the target domain. In applications such as robotics, samples of the target domain are expensive and difficult to obtain. Further, the target domain may not be known at training time. Some recent work focuses on modifying the inputs from the source domain so that they are similar to the target domain, effectively matching their distributions. This approach uses a GAN to refine images from a (simulated) source domain to match images from the target domain: the real world. While this directly addresses domain shift and does not add any additional overhead at inference time in the target domain, it still assumes that samples from the target domain are present during training. It enables transfer between two differing visual domains, but it does not enable adaptation to previously unseen domains. There is a body of work that takes this idea further. Rather than translating from the target domain to the source domain, it learns to map all domains to a canonical domain in which the RL agent operates; the translation network is effectively trained with domain randomization, learning to translate from randomized domains to the canonical domain. While this approach has shown simulation-to-reality transfer, it incurs overhead through the use of the mapping network. Rather than learning to map to a canonical domain, it would be more efficient to map to a disentangled representation of the canonical environment. Other work learns to map image inputs to a latent space for use in RL. Specifically, it utilizes a β Variational Autoencoder (β-VAE) to learn a compressed, disentangled state representation. While this work focuses on domain adaptation in the RL setting, it utilizes a general-purpose generative model. Thus, the state representation is not tailored to the task posed; it preserves information needed to reconstruct the image that may not be needed to solve the given RL task. Additionally, unlike image translation, this method does not attempt to match the distribution of inputs from the source and target domains. Since the β-VAE is trained to reconstruct input states, the distributions of compressed and input states will match. Accordingly, the distribution shift between source and target domains will be preserved, even when using compressed states. To circumvent this issue, the RL agent is trained on multiple source domains, as in domain randomization. We formalize the concept of transfer between related MDPs. We denote the source domain as D_S and the target domain as D_T. Both domains are MDPs, defined as D = (S, A, T, R), where S are the states, A are the actions, T is the transition function, and R is the reward function. We assume that both D_S and D_T share a discount factor γ. In general transfer, states and actions may be quite different, while transition and reward functions are similar. We denote a family of related MDPs as M. Within M, transitions, rewards, and actions are the same across MDPs, while states are different.
These states are generated from a set of latent state factors S_z, each passed through a different MDP-specific renderer G_U for some MDP U ∈ M. In the case of Visual Cartpole, S_z contains the position and velocity of the cart, the angle and position of the pole, and the colors of the cart, pole, axle, and track. The observed states are then rendered by the renderer: specifically, S_U = G_U(S_z) and S_V = G_V(S_z), where U and V are any two MDPs within M. In this paper, we target transfers where only the observations differ: target dynamics are similar to source dynamics; rewards, actions, and true states are identical; but the observed states S_S and S_T differ due to differing renderers G_S and G_T. Deep RL agents learn a mapping from states s ∈ S to actions a ∈ A, either by directly maximizing reward or through a Q function, using a deep neural network. The deep network implicitly learns a mapping function F: S_U → Ŝ_z for some MDP U ∈ M and a policy function π: Ŝ_z → A that maps from latent state factors to actions. These functions are learned as a result of the feed-forward nature of deep neural networks. The policy π is a function of the last layer of the network, and the output of the last layer is a highly processed feature representation of the input that is optimized for use in generating π. While this implicit mapping is sufficient for a deep RL agent to solve a single MDP from M, it is not general enough to transfer to other MDPs within M. By optimizing for π, F overfits to the specific S_U of the source domain U. To circumvent this overfitting, DARLA proposed learning a feature extractor F that maps from any observed state S_X to the latent state factors S_z. The RL agent is then trained on the (approximate) extracted latent state factors Ŝ_z, separating the "vision" and "action" of the RL agent. DARLA uses a β variational autoencoder to learn the mapping F. The extracted latent state factors Ŝ_z are (mostly) disentangled in practice, leading to better transfer than a baseline RL agent; each factor correlates with an independent factor in the true state S_z of the task. However, DARLA preserves all latent state variables z ∈ S_z, incorrectly assuming that they are all relevant to solving the MDP. When attempting to locate an object such as an enemy in a video game, an object for a robot, or a road for an autonomous vehicle, the position and orientation of the object are the most important factors. Visual distractions such as the color of the enemy, the opacity of the object, or the texture of the road are entirely irrelevant to its localization. Dependence on these distracting factors causes the RL agent to fail to transfer robustly to target visual domains. SADALA is similar to DARLA, but has one key difference: it explicitly learns to ignore irrelevant extracted latent state variables. SADALA learns the same mapping function F: S_U → Ŝ_z as DARLA, also utilizing a β variational autoencoder. It then learns a policy π: Ŝ_z → A, but utilizes a soft attention mechanism to ignore irrelevant features. Using this mechanism, it implicitly learns two functions: A: Ŝ_z → S_a, which maps from extracted latent state factors to the latent state factors relevant to solving the MDP, and π_A: S_a → A, which maps from relevant latent state factors to actions. Soft attention mechanisms learn an importance weight w_i ∈ [0, 1] for each feature z_i in part of a neural network. These weights are then multiplied element-wise with the corresponding features.
Thus, the attention layer can be defined as f(Z) = W ⊙ Z, where W is the vector of importance weights w_i and ⊙ is element-wise multiplication. W is defined as W = σ(h(Z)), where h is a differentiable function parameterized by dense layers of a neural network. Thus, the values w_i ∈ W are constrained between 0 and 1, inclusive. This allows the RL agent to ignore a feature z_i by setting w_i to 0. It also allows the agent to down-weight the importance of a feature by setting w_i to a fractional value, or to pay full attention by setting w_i to 1. The SADALA framework is shown in Figure 1. At inference time, an input state s ∈ S is fed into a β variational autoencoder. The bottleneck representation of the β-VAE is then used as the extracted latent state factors Ŝ_z. Those features are fed into the attention mechanism, which outputs the weighted features S_a according to the formula f above. Finally, the weighted features are used as inputs to the deep reinforcement learning algorithm. Though the feature selection and policy learning stages are represented separately in Figure 1, they are contained within the same neural network and optimized jointly. Training SADALA requires a few steps. First, the β variational autoencoder, shown in Figure 2, must be trained. To do so, we use the loss term of a variational autoencoder, modified to fit the β-VAE: L = E_{q(z|x)}[log p(x|z)] − β D_KL(q(z|x) ‖ p(z)), where β is a weighting term that pressures the encoder q to output a latent space similar to an isotropic unit Gaussian. Increasing the value of β encourages disentanglement of the latent space while sacrificing minor reconstruction details. We sample states from the environment following a random policy. We then train the β-VAE to reconstruct these images and freeze the network weights. As shown by DARLA, freezing the network weights of the β-VAE is necessary. If the weights are allowed to train jointly with the policy network on the source domain, they will optimize for and overfit to the source domain. When using one source domain, this would make SADALA equivalent to a single-task learner; when using multiple source domains, SADALA would be equivalent to a domain randomization agent. The feature selection and policy learning stages are then jointly optimized. The SADALA framework can utilize any deep RL algorithm (A3C, REINFORCE, DQN, etc.) with minor modification. Specifically, we add a loss term to the loss function L_RL of the original algorithm: the new loss augments L_RL with ‖W‖_1, where W are the learned attention weights. This added term is an L1 regularization on the attention weights, which encourages sparsity in feature selection. We additionally test one other modification to the loss term. We make the assumption that the ordering of the extracted latent state variables Ŝ_z does not change between subsequent frames within an episode. Thus, the weight w_i for some latent state factor z_i should not change based on the frame. To enforce this on the feature selection stage, we further add the term Σ_j Var(W_j) to the loss, where W_j is the vector of weights corresponding to the j-th extracted latent state factor across the set of training data and Var is the variance. This term is added to enforce that the w_i corresponding to feature z_i is the same regardless of input. This is similar to learning a pseudo-attention weight vector, rather than an attention weight vector. See the appendix for additional training details. We test the SADALA framework on two transfer learning tasks, using A3C as the deep RL algorithm. The first task is Visual Cartpole.
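Before describing the two tasks in detail, the feature-selection stage and the regularized loss above can be sketched as follows. This is a minimal sketch rather than the authors' implementation: h is assumed to be a small two-layer dense network, and the regularization coefficients lambda_l1 and lambda_var are hypothetical hyperparameters whose values are not given in the text.

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Computes W = sigmoid(h(Z)) and returns the weighted features W * Z."""
    def __init__(self, n_latents, hidden=64):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(n_latents, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_latents))

    def forward(self, z):
        w = torch.sigmoid(self.h(z))   # importance weights in [0, 1]
        return w * z, w

def sadala_loss(rl_loss, w, lambda_l1=1e-3, lambda_var=0.0):
    # L1 term encourages sparse feature selection; the optional variance term
    # pushes each weight w_i to stay constant across inputs in the batch.
    l1 = w.abs().sum(dim=-1).mean()
    var = w.var(dim=0).sum() if lambda_var > 0 else w.new_zeros(())
    return rl_loss + lambda_l1 * l1 + lambda_var * var
```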
This domain is the same as Cartpole-v1 in OpenAI Gym with two key differences . The observed state is now the pixel rendering of the cartpole as well as the velocities of the cart and pole. Thus, the agent must learn to predict the position of the cart and pole from the rendering. Additionally, we modify this domain to include a transfer task. The agent must learn to transfer its knowledge of the game across different color configurations for the cart, pole, and track. Thus, Visual Cartpole is defined as a family of MDPs M where the true state S z is the positions and velocities of the cart and pole and the observed state S U = G U (S z) for an MDP U ∈ M is the pixel observations and velocity values. Optimally, the agent should learn to ignore the factors in extracted latent state factorsŜ z that correspond to color, as they do not aid the agent in balancing the pole. This task tests the agent's ability to ignore irrelevant latent state factors. The second task is the "Collect Good Objects" task from Deepmind Lab . The agent must learn to navigate in first person and pick up "good" objects while avoiding "bad" objects. This task is defined as a family of MDPs M where the true state S z contains the position of the agent, the position of good and bad objects, the type of good and bad objects, and the color of the walls and floor. In a single MDP U ∈ M, all good objects are hats or all good objects are balloons. Similarly, all bad objects are either cans or cakes. The walls and floor can either take a green and orange colorscheme or a red and blue colorscheme. The agent is trained on hats/cans with the green/orange colorscheme and balloons/cakes with both colorschemes. It is then tested on hats/cans with the red/blue colorscheme. Optimally, the agent should learn to ignore the color of the floor and walls. Additionally, it should use the type of object to determine if it is good or bad. This task tests the agent's ability to ignore distracting latent state factors (the color of the walls and floor) while attending to relevant factors (the positions and types of objects and its own position). To test the of the SADALA algorithm, we first test the reconstruction and disentanglement properties of the β-VAE used in the state representation stage. Note that this stage is identical to that of DARLA . As such, we expect the disentanglement properties to be similar. See figure 3 for reconstructions of the cartpole state. Based on the reconstructions, it is apparent that the β-VAE has learned to represent cart position and pole angle. Though the angle of the poles is slightly incorrect in the first set of images, the pole is tilted in the correct direction, yielding sufficiently correct extracted latent state factors. Additionally, the color of the cart, pole, and is incorrect in the third pair of images. While this demonstrates that the identification and reconstructions of colors is not infallible, the position of the cart and pole remains correct, yielding a set of extracted latent state parameters that is sufficient to solve the MDP. See figure 5 for a visualization of reconstruction with attention. In the original image, the pole is standing straight and the cart is centered. In the reconstruction, the cart is centered, and the pole is almost upright. However, the reconstruction does not include the colors of the cart or pole. Instead it fills the cart and pole with the mean color of the dataset. 
This shows that the attention weights are properly learning to ignore color and instead pay attention to the position of the cart and pole. Figures 6 and 7 gives a comparison of the performance of the algorithms across environments. Note that all of the algorithms are sufficient at solving the source task, with the single-task learner performing slightly better. This is due to the fact that the single-task learner can optimize its con- The single-task learner achieves better rewards on all source tasks than any of the transfer-specific agents. Domain randomization performs less well because of the complexity of the domain randomization task. Rather than optimizing reward for a single domain, the agent must optimize reward across a large set of domains. DARLA, SADALA, and SADALA with reduced variance also perform less well on the source task than the baseline agent. This is due to imperfections in the β-VAE. Though the β-VAE attempts to reconstruct the input image, it does not do so perfectly, as shown in figure 3. This shows that its extraction of latent state features is not perfect, leading to potentially confusing states given to the RL agent. Additionally, while the β-VAE's goal is to learn a disentangled representation, it does not do so perfectly. As such, (partially) entangled latent state factors may further confuse the RL agent. The single-task learner fails to transfer its policy to the target domain. DARLA transfers some of its knowledge to the target domain. Domain randomization also transfers some knowledge. Finally, SADALA transfers more of its knowledge. The single-task agent has no incentive to learn a factored state representation that enables transfer. Its convolutional filters will directly optimize to maximize reward in the source policy. Thus, if the filters are searching for a hat on a blue floor but tested on a hat on a red floor the convolutional filters will fail to transfer. DARLA learns a state representation that is mostly disentangled. This allows the RL agent to learn to ignore unimportant features such as cart color and utilize important features such as cart position. The factored state representation forces the RL agent to learn a more robust policy. However, the neural network parameterization of the RL policy must implicitly learn to ignore unimportant factors. Therefore, when presented with unseen information, the RL agent may not properly ignore unimportant factors. Domain randomization forces the neural network to implicitly learn a state representation sufficient to transfer between tasks. This requires large amounts of training data and is less robust than explicit modeling of latent state factors. SADALA builds on DARLA by adding an explicit attention mechanism, allowing it to more effectively ignore unimportant features. Due to the use of the sigmoid activation in the attention mechanism, the attention weights W are bounded between 0 and 1. In addition to providing a direct weight on the importance of a feature, this bound prevents high variance of attention weights across different inputs. SADALA with the variance reduction term performs worse than both DARLA and SADALA without variance reduction on the Deepmind lab task but better on the other two. In the scenario where the extracted latent state factors from the β-VAE are perfectly disentangled, static attention weights should be sufficient to solve the source task and should transfer better to the target task, as in the Visual Cartpole task. 
However, the β-VAE does not output perfectly disentangled factors, especially in more complex visual domains such as Deepmind Lab. Thus, the amount of attention paid to each feature from the β-VAE may differ across tasks, violating the assumption that the attention weights should have zero variance. In this paper we propose SADALA, a three-stage method for zero-shot domain transfer. First, SADALA learns a feature extractor that represents input states (images) as disentangled factors. It then filters these latent factors using an attention mechanism to select those most important to solving a source task. Jointly, it learns a policy for the source task that is robust to changes in states and is able to transfer to related target tasks. We validate the performance of SADALA on both a high-dimensional continuous-control problem (Visual Cartpole) and a 3D naturalistic first-person simulated environment (Deepmind Lab). We show that the introduced attention mechanism is able to differentiate between important and unimportant latent features, enabling robust transfer. In the Visual Cartpole task, the colors of the training environments are constrained to be outside of a hypersphere of radius 0.1 around those of the test environment. When evaluating source domain performance, the agents trained on multiple source domains are all evaluated on the same domain, randomly sampled from the set of source domains. The single-task learner is evaluated on the source domain it is trained on. The second task is the Collect Good Objects task from Deepmind Lab. This environment can be defined as a family of MDPs that share a transition function, reward function, and action space. The goal of the agent is to pick up as many "good" objects (+1 reward), while avoiding "bad" objects (−1 reward), as it can within one minute. These objects can be hats, cans, cakes, and balloons. Additionally, the color combinations of the walls and floors can vary between green walls with orange floors or red walls with blue floors. These variations in state space correspond to the different MDPs within the family N of Deepmind Lab MDPs. The β-VAE is trained on a set of MDPs covering all latent state factor combinations, while the RL network and attention mechanism are trained on a subset of N and tested on a held-out MDP. The neural network for the Visual Cartpole task consists of two parts: the β-VAE network and the policy network. First, the encoder for the β-VAE network is composed of three convolutional layers, each with kernel size 3 and stride 1; in order, they have 32, 32, and 64 filters. The output of the last convolutional layer is then flattened and passed through a dense layer with 256 neurons. This is then passed through another dense layer with 64 neurons, which outputs the latent space. The latent space is composed of 32 Gaussians, each parameterized by a mean and variance. The decoder is an inverse of the encoder, utilizing deconvolutional layers. The encoder from the β-VAE is then frozen and used as a feature extractor for the policy network. The policy network takes the encoder's outputs and passes them through one dense layer with 64 neurons, which outputs the attention weights. These weights are multiplied element-wise with the encoder's outputs and passed through two dense layers, each with 128 neurons. Finally, the output is passed through a dense layer with two neurons, outputting the policy given the input state. The output is also passed through a dense layer with one neuron, which outputs the value of the input state.
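A compact sketch of these two Visual Cartpole networks is given below, following the layer sizes just listed. It is one plausible reading rather than the exact implementation: the input image resolution is not stated, so the flattened size is left as a parameter, and the 64-unit latent layer is interpreted here as separate mean and log-variance heads over the 32 Gaussians.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, flat_dim, n_latents=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, 1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(flat_dim, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latents)       # means of the 32 Gaussians
        self.logvar = nn.Linear(64, n_latents)   # log-variances

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)

class PolicyHead(nn.Module):
    def __init__(self, n_latents=32):
        super().__init__()
        self.attn = nn.Linear(n_latents, n_latents)   # attention-weight layer
        self.body = nn.Sequential(nn.Linear(n_latents, 128), nn.ReLU(),
                                  nn.Linear(128, 128), nn.ReLU())
        self.policy = nn.Linear(128, 2)               # action logits
        self.value = nn.Linear(128, 1)                # state value

    def forward(self, z):
        w = torch.sigmoid(self.attn(z))               # soft attention weights
        h = self.body(w * z)
        return self.policy(h), self.value(h)
```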
The networks used for the Domain Randomization agent and the Single-Task learner have a similar structure. They first pass the input through three convolutional layers identical in architecture to the encoder. They then pass the convolutional outputs through two dense layers, each with 128 neurons. Finally, this output is passed through a dense layer with two neurons, which outputs the policy, and a dense layer with one neuron, which outputs the value. The neural networks used for the Deepmind Lab task are slight modifications of the networks described above. They have four convolutional layers instead of three, with 32, 32, 64, and 64 filters, respectively. Additionally, the hidden dense layers in the policy network have 256 rather than 128 neurons. As in DARLA, the reconstruction loss of the β-VAE for Deepmind Lab is computed in the latent space of a denoising autoencoder with 100 latent factors. ReLU activations are used throughout. All Visual Cartpole networks are trained with Adam with a learning rate of 5e-4, and all Deepmind Lab networks are trained with Adam with a learning rate of 1e-4. The β-VAE networks are trained on a set of 1,000,000 images sampled from the observed state space of source policies. The RL networks are trained on 1,000,000 and 16,000,000 states, actions, and rewards for the Visual Cartpole and Deepmind Lab tasks, respectively. The domain randomization agent is trained on 3,000,000 Visual Cartpole states, actions, and rewards. C DOMAIN RANDOMIZATION DETAILS Domain randomization was implemented by varying parameters
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HklPzxHFwB
We present an agent that uses a beta-vae to extract visual features and an attention mechanism to ignore irrelevant features from visual observations to enable robust transfer between visual domains.
Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis. Deep neural networks have been successfully applied to many domains. However, a major criticism is that these black box models are difficult to analyze and their behavior is not guaranteed. Moreover, it has been shown that the predictions of deep networks become unreliable and unstable when tested in unseen situations, e.g., in the presence of small and adversarial perturbation to the input (; ;). Therefore, neural network verification has become an important tool for analyzing and understanding the behavior of neural networks, with applications in safety-critical applications (; ;), model explanation and robustness analysis (; c; ; ; ; ;). Formally, a neural network verification algorithm aims to provably characterize the prediction of a network within some input space. For example, given a K-way classification model f: R d → R K, we can verify some linear specification (defined by a vector c) as below: where S is a predefined input space. For example, in the robustness verification problem that we are going to focus on in this paper, S = {x | x−x 0 p ≤} is defined as some small p -ball around the original example x 0, and setting up c = 1 y0 − 1 y can verify whether the logit output of class y 0 is always greater than another class y within S. This is a nonconvex optimization problem which makes computing the exact solution challenging, and thus algorithms are recently proposed to find lower bounds of Eq. in order to efficiently obtain a safety guarantee (; ; ;). Moreover, extension of these algorithms can be used for verifying some properties beyond robustness, such as rotation or shift invariant , conservation of energy and model correctness . However, most of existing verification methods focus on relatively simple neural network architectures, such as feed-forward and recurrent neural networks, and cannot handle complex structures. In this paper, we develop the first robustness verification algorithm for Transformers with self-attention layers. Transformers have been widely used in natural language processing (; ;) and many other domains (; ; b; ; a). For frames under perturbation in the input sequence, we aim to compute a lower bound such that when these frames are perturbed within p -balls centered at the original frames respectively and with a radius of, the model prediction is certified to be unchanged. To compute such a bound efficiently, we adopt the linear-relaxation framework -we recursively propagate and compute linear lower bound and upper bound for each neuron with respect to the input within perturbation set S. 
We resolve several particular challenges in verifying Transformers. First, Transformers with self-attention layers have a complicated architecture. Unlike simpler networks, they cannot be written as multiple layers of linear transformations or element-wise operations. Therefore, we need to propagate linear bounds differently for self-attention layers. Second, dot products, softmax, and weighted summation in self-attention layers involve multiplication or division of two variables under perturbation, namely cross-nonlinearity, which is not present in feed-forward networks. A gradient-descent-based approach has been proposed to find linear bounds, but it is inefficient and poses a computational challenge for Transformer verification, as self-attention is the core of Transformers. In contrast, we derive closed-form linear bounds that can be computed in O(1) complexity. Third, neurons in each position after a self-attention layer depend on all neurons in different positions before the self-attention (namely cross-position dependency), unlike the case in recurrent neural networks, where outputs depend only on the hidden features from the previous position and the current input. Previous works have to track all such dependencies and are thus costly in time and memory. To tackle this, we introduce an efficient forward bound propagation process specifically for self-attention layers, enabling the tighter backward bounding process for other layers to utilize the bounds computed by the forward process. In this way, we avoid cross-position dependency in the backward process, which is relatively slower but produces tighter bounds. Combined with the forward process, the complexity of the backward process is reduced by a factor of O(n) for input length n, while the computed bounds remain comparably tight. Our contributions are summarized below: • We propose an effective and efficient algorithm for verifying the robustness of Transformers with self-attention layers. To the best of our knowledge, this is the first method for verifying Transformers. • We resolve key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency. Our bounds are significantly tighter than those obtained by adapting Interval Bound Propagation (IBP). • We quantitatively and qualitatively show that the certified lower bounds consistently reflect the importance of input words in sentiment analysis, which justifies that the computed bounds are meaningful in practice. Robustness Verification for Neural Networks. Given an input x_0 and a small region B_p(x_0, ε) := {x | ‖x − x_0‖_p ≤ ε}, the goal of robustness verification is to verify whether the prediction of the neural network is unchanged within this region. This problem can be mathematically formulated as the optimization problem above. If it can be solved optimally, then we can derive the minimum adversarial perturbation of x by conducting binary search on ε. Equivalently, we obtain the maximum ε such that any perturbation within B_p(x_0, ε) cannot change the predicted label. Several works focus on solving this problem exactly and optimally, using mixed integer linear programming (MILP), branch and bound (BaB), and satisfiability modulo theories (SMT). Unfortunately, due to the nonconvexity of the model f, the problem is NP-hard even for a simple ReLU network. Therefore, we can only hope to compute a lower bound of the objective efficiently by using relaxations.
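To make the simplest such relaxation concrete, the sketch below shows naive Interval Bound Propagation (the baseline our bounds are compared against) pushing elementwise intervals through an affine layer and a ReLU; the example weights and radius are arbitrary.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds l <= x <= u through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# toy usage: two inputs perturbed within an l_inf ball of radius 0.1
x0, eps = np.array([1.0, -0.5]), 0.1
l, u = x0 - eps, x0 + eps
W, b = np.array([[0.5, -1.0], [2.0, 0.3]]), np.zeros(2)
l, u = ibp_relu(*ibp_affine(l, u, W, b))
```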
Many algorithms can be seen as using convex relaxations for non-linear activation functions , including using duality (;, abstract domains (; ; ;), layer-by-layer reachability analysis (b; ; ; and semi-definite relaxations (; . Additionally, robustness verification can rely on analysis on local Lipschitz constants . However, existing methods are mostly limited to verifying networks with relatively simple architectures, such as feed-forward networks and RNNs (a; ;), while none of them are able to handle Transformers. Transformers and Self-Attentive Models. Transformers based on the selfattention mechanism, further with pre-training on large-scale corpora, such as BERT , XLNet , RoBERTa , achieved state-of-the-art performance on many NLP tasks. Self-attentive models are also useful beyond NLP, including VisualBERT for extracting features from both text and images (b;), image transformer for image generation , acoustic models for speech recognition, sequential recommendation and graph embedding (a). The robustness of NLP models has been studied, especially many methods have been proposed to generate adversarial examples (; ; ; ;). In particular, showed that Transformers are more robust than LSTMs. However, there is not much work on robustness verification for NLP models. verified RNN/LSTM.; used Interval Bound Propagation (IBP) for certified robustness training of CNN and LSTM. In this paper, we propose the first verification method for Transformers. We aim to verify the robustness of a Transformer whose input is a sequence of frames We take binary text classification as an running example, where x (i) is a word embedding and the model outputs a score y c (X) for each class c (c ∈ {0, 1}). Nevertheless, our method for verifying Transformers is general and can also be applied in other applications. 0 ] correctly classified by the model, let P = {r 1, r 2, · · ·, r t}(1 ≤ r k ≤ n) be the set of perturbed positions, where t is the number of perturbed positions. Thus the perturbed input will belong to S: Assuming that c is the gold class, the goal of robustness verification is to compute {min X∈S y c (X) − y 1−c (X)}:= δ (X). If δ (X) > 0, the output score of the correct class will always be larger than the incorrect one within S. As mentioned previously, computing the exact values of δ (X) is NP-hard, and thus our goal is to efficiently compute a lower bound δ L (X) ≤ δ (X). We obtain δ L (X) by computing the bounds of each neuron when X is perturbed within S (δ L can be regarded as a final neuron). A Transformer layer can be decomposed into a number of sub-layers, where each sub-layer contains neurons after some operation. These operations can be categorized into three categories: 1) linear transformations, 2) unary nonlinear functions, and 3) operations in self-attention. Each sub-layer contains n positions in the sequence and each position contains a group of neurons. We assume that the Transformer we verify has m sub-layers in total, and the value of the j-th neuron at the i-th position in the l-th sub-layer is Φ vector for the specified sub-layer and position. Specially, Φ (0,i) = x (i) taking l = 0. We aim to compute a global lower bound f We compute bounds from the first sub-layer to the last sub-layer. For neurons in the l-th layer, we aim to represent their bounds as linear functions of neurons in a previous layer, the l -th layer: where,U are parameters of linear lower and upper bounds respectively. 
Using linear bounds enables us to efficiently compute bounds with a reasonable tightness. We initially have Generally, we use a backward process to propagate the bounds to previous sub-layers, by substituting Φ (l,i) with linear functions of previous neurons. It can be recursively conducted until the input layer l = 0. Since is constant, we can regard the bounds as linear functions of the perturbed embeddings, and take the global bounds for where 1/p + 1/q = 1 with p, q ≥ 1. These steps resemble to CROWN which is proposed to verify feed-forward networks. We further support verifying self-attentive Transformers which are more complex than feed-forward networks. Moreover, unlike CROWN that conducts a full backward process, we combine the backward process with a forward process (see Sec. 3.3) to reduce the computational complexity of verifying Transformers. Linear transformations and unary nonlinear functions are basic operations in neural networks. We show how bounds Eq. at the l -th sub-layer are propagated to the (l − 1)-th layer. Linear Transformations If the l -th sub-layer is connected with the (l − 1)-th sub-layer with a linear transformation are parameters of the linear transformation, we propagate the bounds to the (l −1)-th layer by substituting Φ (l,k) (X): where "L/U " means that the equations hold for both lower bounds and upper bounds respectively. If the l -th layer is obtained from the (l − 1)-th layer with an unary nonlinear function Φ, to propagate linear bounds over the nonlinear function, we first bound are parameters such that the inequation holds for all Φ (l −1,k) j (X) within its bounds computed previously. Such linear relaxations can be done for different functions, respectively. We provide detailed bounds for functions involved in Transformers in Appendix B. We then back propagate the bounds: mean to retain positive and negative elements in vector respectively and set other elements to 0. Self-attention layers are the most challenging parts for verifying Transformers. We assume that Φ (l−1,i) (X) is the input to a self-attention layer. We describe our method for computing bounds for one attention head, and bounds for different heads of the multi-head attention in Transformers can be easily concatenated., and values v (l,i) (X) with different linear projections, and their bounds can be obtained as described in Sec. 3.2. We also keep their linear bounds that are linear functions of the perturbed embeddings., where ⊕ indicates vector concatenation, and thereby we represent the linear bounds as linear functions of x (r): where q/k/v and q/k/v mean that the inequation holds for queries, keys and values respectively. We then bound the output of the self-attention layer starting from We bound multiplications and divisions in the selfattention mechanism with linear functions. We aim to bound bivariate function z = xy or z = We provide a proof in Appendix C. However, directly bounding z = x y is tricky; fortunately, we can bound it indirectly by first bounding a unary function y = 1 y and then bounding the multiplication z = xy. For the self-attention mechanism, instead of using the backward process like CROWN , we compute bounds with a forward process which we will show later that it can reduce the computational complexity. Attention scores are computed from q (l,i) (X) and, it is bounded by: We then obtain the bounds of S ), In this way, linear bounds of q (l,i) (X) and k (l,i) (X) are forward propagated to S (l) i,j. 
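As a sanity check on the cross-nonlinearity relaxation just described, the sketch below numerically verifies one standard valid pair of bounding planes for z = xy on a box (the McCormick relaxation). The closed-form planes we use are derived and proven optimal in Appendix C; they take the same general form, but treat the specific coefficients below as illustrative, under the assumption that x and y carry the interval bounds computed earlier.

```python
import numpy as np

def mul_bounds(lx, ux, ly, uy):
    """One valid pair of linear bounding planes for z = x * y on the box
    [lx, ux] x [ly, uy]; returns (aL, bL, cL), (aU, bU, cU) such that
    aL*x + bL*y + cL <= x*y <= aU*x + bU*y + cU everywhere on the box."""
    lower = (ly, lx, -lx * ly)   # z - (ly*x + lx*y - lx*ly) = (x-lx)(y-ly) >= 0
    upper = (uy, lx, -lx * uy)   # (uy*x + lx*y - lx*uy) - z = (x-lx)(uy-y) >= 0
    return lower, upper

rng = np.random.default_rng(0)
lx, ux, ly, uy = -1.0, 0.5, -0.3, 2.0
(aL, bL, cL), (aU, bU, cU) = mul_bounds(lx, ux, ly, uy)
xs, ys = rng.uniform(lx, ux, 10000), rng.uniform(ly, uy, 10000)
z = xs * ys
assert np.all(aL * xs + bL * ys + cL <= z + 1e-9)
assert np.all(z <= aU * xs + bU * ys + cU + 1e-9)
```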
Attention scores are normalized into attention probabilities with a softmax, i.e. S (l) is an unary nonlinear function and can be bounded by α, where: By summing up bounds of each exp(S i,k) ready, we forward propagate the bounds toS (l) i,j with a division similarly to bounding q, which can be regarded as a dot product ofS with a transposing. Therefore, bounds ofS similarly to bounding S (l) i,j. In this way, we obtain the output bounds of the self-attention: Recall that x (r) is a concatenation of into t vectors with equal dimensions, Ω, Ω, such that Eq. becomes Backward Process to Self-Attention Layers When computing bounds for a later sub-layer, the lth sub-layer, using the backward process, we directly propagate the bounds at the the closest previous self-attention layer assumed to be the l -th layer, to the input layer, and we skip other previous sublayers. The bounds propagated to the l -th layer are as Eq.. We substitute Φ (l,k) (X) with linear bounds in Eq.: We take global bounds as Eq. and Eq. to obtain the bounds of the l-th layer. Introducing a forward process can significantly reduce the complexity of verifying Transformers. With the backward process only, we need to compute, where the major cost is on Λ (l,i,l,k) and there are O(m 2 n 2) such matrices to compute. The O(n 2) factor is from the dependency between all pairs of positions in the input and output respectively, which makes the algorithm inefficient especially when the input sequence is long. In contrast, the forward process represents the bounds as linear functions of the perturbed positions only instead of all positions by computing Ω (l,i) and Θ (l,i). Imperceptible adversarial examples may not have many perturbed positions , and thus we may assume that the number of perturbed positions, t, is small. The major cost is on Ω (l,i) while there are only O(mn) such matrices and the sizes of Λ (l,i,l,k) and Ω (l,i) are relatively comparable for a small t. We combine the backward process and the forward process. The number of matrices Ω is O(mn) in the forward process, and for the backward process, since we do not propagate bounds over self-attention layers and there is no cross-position dependency in other sub-layers, we only compute Λ (l,i,l,k) such that i = k, and thus the number of matrices Λ is reduced to O(m 2 n). So the total number of matrices Λ and Ω we compute is O(m 2 n) and is O(n) times smaller than O(m 2 n 2) when only the backward process is used. Moreover, the backward process makes bounds tighter compared to solely the forward one, as we show in Appendix D. We conduct experiments on two sentiment analysis datasets: Yelp and SST-2 . Yelp consists of 560,000/38,000 examples in the training/test set and SST-2 consists of 67,349/872/1,821 examples in the training/development/test set. Each example is a sentence or a sentence segment (for the training data of SST-2 only) labeled with a sentiment polarity. We verify the robustness of Transformers trained from scratch. For the main experiments, we consider N -layer models (N ≤ 3), with 4 attention heads, hidden sizes of 256 and 512 for self-attention and feed-forward layers respectively, and we use ReLU activations for feed-forward layers. We remove the variance related terms in layer normalization, making Transformers verification bounds tighter while the clean accuracies remain comparable (see Appendix E for discussions). 
Although our method can in principle be applied to Transformers with any number of layers, we do not use large-scale pre-trained models such as BERT because they are too challenging to be tightly verified for now. Current state-of-the-art verification methods for feed-forward networks either produce loose bounds for large networks or do not scale due to computational limits. Transformers contain more nonlinear operations than feed-forward networks, so they are even more challenging for verification. Table 1: Clean accuracies and computed bounds for 1-position perturbation. Bounds include upper bounds (obtained by an enumeration-based method) and certified lower bounds by IBP and our method respectively. We also report the gap between upper bounds and our lower bounds (represented as the percentage of our lower bounds relative to the upper bounds). We compute bounds for each possible choice of perturbed positions and report the minimum ("Min") and average ("Avg") among them. Table 2: Bounds by IBP and our method for 2-position perturbation constrained by the ℓ2-norm. We compute certified lower bounds for different models on different datasets. We include 1-position perturbation constrained by ℓ1/ℓ2/ℓ∞-norms and 2-position perturbation constrained by the ℓ2-norm. We compare our lower bounds with those computed by the Interval Bound Propagation (IBP) baseline. For 1-position perturbation, we also compare with upper bounds computed by enumerating all the words in the vocabulary and finding the word closest to the original one such that the word substitution alters the predicted label. The cost of this enumeration grows exponentially with the number of perturbed positions, so it can hardly be extended to perturbations on 2 or more positions; thus we do not include upper bounds for 2-position perturbation. For each example, we enumerate the possible choices of perturbed positions (there are (n choose t) options), and we aggregate results from the different choices by taking the minimum or the average respectively. We report the average over 10 correctly classified random test examples with sentence lengths of no more than 32 for 1-position perturbation and 16 for 2-position perturbation. Table 1 and Table 2 present the results for 1-position and 2-position perturbation respectively. Our certified lower bounds are significantly larger, and thus tighter, than those by IBP. For 1-position perturbation, the lower bounds are consistently smaller than the upper bounds, and the gap between the upper bounds and our lower bounds is reasonable compared with that in previous work on verification of feed-forward networks, where, e.g., the upper bounds can be on the order of 10 times larger than the lower bounds. This demonstrates that our proposed method can compute robustness bounds for Transformers of a quality similar to the bounds obtained for simpler neural networks. Table 3: Comparison of certified lower bounds and computation time (sec) by different methods. In the following, we show the effectiveness of combining the backward process with a forward process. We compare our proposed method (Backward & Forward) to two variations: Fully-Forward propagates bounds in a forward manner for all sub-layers, not only the self-attention layers; Fully-Backward computes bounds for all sub-layers, including self-attention layers, using backward bound propagation and without the forward process. We compare the tightness of bounds and computation time of the three methods.
We use smaller models with the hidden sizes reduced by half, and we use 1-position perturbation only, to accommodate Fully-Backward with large computational cost. Experiments are conducted on an NVIDIA TITAN X GPU. Table 4: Average importance scores of the most/least important words identified from 100 examples respectively on SST by different methods. For the most important words identified, larger important scores are better, and vice versa. Additionally, we show most/least important words identified from 10 examples on the Yelp dataset. Boldfaced words are considered to have strong sentiment polarities, and they should appear as most important words rather than least important ones. The certified lower bounds can reflect how sensitive a model is to the perturbation of each input token. Intuitively, if a word is more important to the prediction, the model is more sensitive to its perturbation. Therefore, the certified lower bounds can be used to identify important words. In the following, we conduct an experiment to verify whether important words can be identified by our certified lower bounds. We use a 1-layer Transformer classifier under 1-position perturbation constrained by 2 -norm. We compare our method with two baselines that also estimate local vulnerability: Upper uses upper bounds; Gradient identifies the word whose embedding has the largest 2 -norm of gradients as the most important and vice versa. Quantitative Analysis on SST SST contains sentiment labels for all phrases on parse trees, where the labels range from very negative to very positive, and 2 for neutral. For each word, assuming its label is x, we take |x − 2|, i.e. the distance to the neutral label, as the importance score, since less neutral words tend to be more important for the sentiment polarity of the sentence. We evaluate on 100 random test input sentences and compute the average importance scores of the most or least important words identified from the examples. In Table 4, compared to the baselines ("Upper" and "Grad"), the average importance score of the most important words identified by our lower bounds are the largest, while the least important words identified by our method have the smallest average score. This demonstrates that our method identifies the most and least important words more accurately compared to baseline methods. We further analyze the on a larger dataset, Yelp. Since Yelp does not provide per-word sentiment labels, importance scores cannot be computed as on SST. Thus, we demonstrate a qualitative analysis. We use 10 random test examples and collect the words identified as the most and least important word in each example. In Table 4, most words identified as the most important by certified lower bounds are exactly the words reflecting sentiment polarities (boldfaced words), while those identified as the least important words are mostly stopwords. Baseline methods mistakenly identify more words containing no sentiment polarity as the most important. This again demonstrates that our certified lower bounds identify word importance better than baselines and our bounds provide meaningful interpretations in practice. While gradients evaluate the sensitivity of each input word, this evaluation only holds true within a very small neighborhood (where the classifier can be approximated by a first-order Taylor expansion) around the input sentence. 
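For reference, here is a minimal sketch of the Gradient baseline described above: each word is scored by the ℓ2-norm of the gradient of the predicted-class score with respect to its embedding. The model interface (a callable mapping an embedding sequence to class logits) is an assumption made purely for illustration.

```python
import torch

def gradient_word_importance(model, embeddings, label):
    """'Gradient' baseline: score each input word by the l2 norm of the
    gradient of the predicted-class logit w.r.t. its word embedding.
    `embeddings` has shape (seq_len, embed_dim); `model` is assumed to map a
    batch of embedding sequences to class logits."""
    emb = embeddings.clone().detach().requires_grad_(True)
    logits = model(emb.unsqueeze(0)).squeeze(0)   # assumed shape (num_classes,)
    logits[label].backward()
    return emb.grad.norm(dim=-1)                  # (seq_len,) importance scores
```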
Our certified method gives valid lower bounds that hold true within a large neighborhood specified by a perturbation set S, and thus it provides more accurate . We propose the first robustness verification method for Transformers, and tackle key challenges in verifying Transformers, including cross-nonlinearity and cross-position dependency, for efficient and effective verification. Our method computes certified lower bounds that are significantly tighter than those by IBP. Quantitative and qualitative analyses further show that our bounds are meaningful and can reflect the importance of different words in sentiment analysis. A ILLUSTRATION OF DIFFERENT BOUNDING PROCESSES Figure 1: Illustration of three different bounding processes: Fully-Forward (a), Fully-Backward (b), and Backward&Forward (c). We show an example of a 2-layer Transformer, where operations can be divided into two kinds of blocks, "Feed-forward" and "Self-attention". "Self-attention" contains operations in the self-attention mechanism starting from queries, keys, and values, and "Feed-forward" contains all the other operations including linear transformations and unary nonlinear functions. Arrows with solid lines indicate the propagation of linear bounds in a forward manner. Each backward arrow A k → B k with a dashed line for blocks A k, B k indicates that there is a backward bound propagation to block B k when computing bounds for block A k. Blocks with blue rectangles have forward processes inside the blocks, while those with green rounded rectangles have backward processes inside. Backward & Forward algorithm, we use backward processes for the feed-forward parts and forward processes for self-attention layers, and for layers after self-attention layers, they no longer need backward bound propagation to layers prior to self-attention layers. In this way, we resolve the cross-position dependency in verifying Transformers while still keeping bounds comparably tight as those by using fully backward processes. Empirical comparison of the three frameworks are presented in Sec. 4.3. We show in Sec. 3.2 that linear bounds can be propagated over unary nonlinear functions as long as the unary nonlinear functions can be bounded with linear functions. Such bounds are determined for each neuron respectively, according to the bounds of the input for the function. Specifically, for a unary nonlinear function σ(x), with the bounds of x obtained previously as x ∈ [l, u], we aim to derive a linear lower bound α L x + β L and a linear upper bound where parameters α L, β L, α U, β U are dependent on l, u and designed for different functions σ(x) respectively. We introduce how the parameters are determined for different unary nonlinear functions involved in Transformers such that the linear bounds are valid and as tight as possible. Bounds of ReLU and tanh has been discussed by , and we further derive bounds of e x, 1 x, x 2, √ x. x 2 and √ x is only used when the layer normalization is not modified for experiments to study the impact of our modification. For the following description, we define the endpoints of the function to be bounded within the range as (l, σ(l)) and (u, σ(u)). We describe how the lines corresponding to the linear bounds of different functions can be determined, and thereby parameters α L, β L, α U, β U can be determined accordingly. ReLU For ReLU activation, σ(x) = max(x, 0). 
ReLU σ(x) is inherently linear on segments (−∞, 0] and [0, ∞) respectively, so we make the linear bounds exactly σ(x) for u ≤ 0 or l ≥ 0; and for l < 0 < u, we take the line passing the two endpoints as the upper bound; and take σ L (x) = 0 when u < |l| and σ L (x) = 1 when u ≥ |l| as the lower bound, to minimize the gap between the lower bound and the original function. 1+e −2x. tanh is concave for l ≥ 0, and thus we take the line passing the two endpoints as the lower bound and take a tangent line passing ((l+u)/2, σ((l+u)/2) as the upper bound. For u ≤ 0, tanh is convex, and thus we take the line passing the two endpoints as the upper bound and take a tangent line passing ((l + u)/2, σ((l + u)/2) as the lower bound. For l < 0 < u, we take a tangent line passing the right endpoint and as the lower bound, and take a tangent line passing the left endpoint and L and d U can be found with a binary search. Exp σ(x) = exp(x) = e x is convex, and thus we take the line passing the two endpoints as the upper bound and take a tangent line passing (d, σ(d)) as the lower bound. Preferably, we take d = (l + u)/2. However, e x is always positive and used in the softmax for computing normalized attention probabilities in self-attention layers, i.e. exp(S (l) i,j ) and i,k ) appears in the denominator of the softmax, and to make reciprocal function 1 x finitely bounded the range of x should not pass 0. Therefore, we impose a constraint to force the lower bound function to be always positive, i.e. Reciprocal For the reciprocal function, σ(x) = 1 x. It is used in the softmax and layer normalization and its input is limited to have l > 0 by the lower bounds of exp(x), and √ x. With l > 0, σ(x) is convex. Therefore, we take the line passing the two endpoints as the upper bound. And we take the tangent line passing ((l + u)/2, σ((l + u)/2)) as the lower bound. Square For the square function, σ(x) = x 2. It is convex and we take the line passing the two endpoints as the upper bound. And we tan a tangent line passing (d, σ(d)) (d ∈ [l, u] ) as the lower bound. We still prefer to take d = (l+u)/2. x 2 appears in computing the variance in layer normalization and is later passed to a square root function to compute a standard derivation. To make the input to the square root function valid, i.e. non-negative, we impose a constraint σ 2 is the tangent line passing (d, σ(d) ). For u ≤ 0, x 2 is monotonously decreasing, the constraint we impose is equivalent to σ L (u) = 2du − d 2 ≥ 0, and with d ≤ 0, we have d ≥ 2u. So we take d = max((l + u)/2, 2u). For l ≥ 0, x 2 is monotonously increasing, the constraint we impose is equivalent to σ L (l) = 2dl − d 2 ≥ 0, and with 2 is negative for d = 0 and is zero for d = 0. And for l < 0 < u, the constraint we impose is equivalent to σ L = −d 2 ≥ 0, and thus we take d = 0. Square root For the square root function, σ(x) = √ x. It is used the to compute a standard derivation in the layer normalization and its input is limited to be positive by the lower bounds of x 2 and a smoothing constant, so l > 0. σ(x) is concave, and thus we take the line passing the two endpoints as the lower bound and take the tangent line passing ((l + u)/2, σ((l + u)/2)) as the upper bound. We provide a mathematical proof of optimal parameters for linear bounds of multiplications used in Sec. 3.3. We also show that linear bounds of division can be indirectly obtained from bounds of multiplications and the reciprocal function. 
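Before turning to the bivariate case, the following hedged sketch implements the unary relaxation just described for ReLU and numerically checks its validity on a crossing interval. The adaptive choice of the lower-bound slope follows the rule above; the helper name and tolerance are illustrative.

```python
import numpy as np

def relu_relaxation(l, u):
    """Linear relaxation a_L*x + b_L <= relu(x) <= a_U*x + b_U for x in [l, u],
    using the chord as the upper bound and an adaptive lower-bound slope."""
    if l >= 0:                      # relu is the identity on [l, u]
        return (1.0, 0.0), (1.0, 0.0)
    if u <= 0:                      # relu is identically zero on [l, u]
        return (0.0, 0.0), (0.0, 0.0)
    a_U = u / (u - l)               # chord through (l, 0) and (u, u)
    b_U = -a_U * l
    a_L = 1.0 if u >= -l else 0.0   # pick the tighter of the two tangent lines
    return (a_L, 0.0), (a_U, b_U)

# quick validity check on an interval that crosses zero
l, u = -1.5, 1.0
(aL, bL), (aU, bU) = relu_relaxation(l, u)
xs = np.linspace(l, u, 1001)
relu = np.maximum(xs, 0.0)
assert np.all(aL * xs + bL <= relu + 1e-9) and np.all(relu <= aU * xs + bU + 1e-9)
```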
For each multiplication, we aim to bound z = xy with two linear bounding planes z, where x and y are both variables and x ∈ [l x, u x], y ∈ [l y, u y] are concrete bounds of x, y obtained from previous layers, such that: Our goal is to determine optimal parameters of bounding planes U, such that the bounds are as tight as possible. We define a difference function F L (x, y) which is the difference between the original function z = xy and the lower bound z To make the bound as tight as possible, we aim to minimize the integral of the difference function, which is equivalent to maximizing For an optimal bounding plane, there must exist a point To ensure that F L (x, y) ≥ 0 within the concerned area, we need to ensure that the minimum value of F L (x, y) is be non-negative. We show that we only need to check cases when (x, y) is any of (l x, l y), (l x, u y), (u x, l y), (u x, u y), i.e. points at the corner of the considered area. The partial derivatives of F L are: y 1 ) and (x 1, y 1) cannot be the point with the minimum value of F L (x, y). On the other hand, if there is (x 1, y 1)(x 1 = l x, y 1 ∈ (l y, u y)), i.e. on one border of the concerned area but not on any corner, This property holds for the other three borders of the concerned area. Therefore, other points within the concerned area cannot have smaller function value F L (x, y), so we only need to check the corners, and the constraints on F L (x, y) become We substitute γ L in Eq. with Eq., yielding where We have shown that the minimum function value F L (x, y) within the concerned area cannot appear in (l x, u x) × (l y, u y), i.e. it can only appear at the border, while (x 0, y 0) is a point with a minimum function value F L (x 0, y 0) = 0, (x 0, y 0) can also only be chosen from the border of the concerned area. At least one of x 0 = l x and x 0 = u x holds. If we take x 0 = l x: And from Eq. we obtain For the other case if we take x 0 = u x: We take α L = u y similarly as in the case when x 0 = l x, and then, so we can simply adopt the first one. We also notice that V L 1, V L 2 are independent of y 0, so we may take any y 0 within [l y, u y] such as y 0 = l y. Thereby, we obtain the a group of optimal parameters of the lower bounding plane: We derive the upper bound similarly. We aim to minimize If we take x 0 = l x: To minimize V U 1, we take α U = u y, and then For the other case if we take x 0 = u x: (l x − u x)α U ≥ −u x y 0 + max(l x l y − β U (l y − y 0), l x u y − β U (u y − y 0)) = max(l x l y − u x l y, l x u y − u x u y) = (l x − u x) min(l y, u y) = (l x − u x)l y So α U ≤ l y To minimize V U 2, we take α U = l y, and then Since V We have shown that closed-form linear bounds of multiplications can be derived. However, we find that directly bounding z = x y is tricky. If we try to derive a lower bound z L = α L xβ L y + γ for z = x y as Appendix C.1, the difference function is The partial derivatives of F L are: If there is (x 1, y 1) ∈ (l x, u x) × (l y, u y) such that
BJxwPJHFwS
We propose the first algorithm for verifying the robustness of Transformers.
In the last few years, deep learning has been tremendously successful in many applications. However, our theoretical understanding of deep learning, and thus the ability of providing principled improvements, seems to lag behind. A theoretical puzzle concerns the ability of deep networks to predict well despite their intriguing apparent lack of generalization: their classification accuracy on the training set is not a proxy for their performance on a test set. How is it possible that training performance is independent of testing performance? Do indeed deep networks require a drastically new theory of generalization? Or are there measurements based on the training data that are predictive of the network performance on future data? Here we show that when performance is measured appropriately, the training performance is in fact predictive of expected performance, consistently with classical machine learning theory. Is it possible to decide the prediction performance of a deep network from its performance in training -as it is typically the case for shallower classifiers such as kernel machines and linear classifiers? Is there any relationship at all between training and test performances? Figure 1a shows that when the network has more parameters than the size of the training set -which is the standard regime for deep nets -the training classification error can be zero and is very different from the testing error. This intriguing lack of generalization was recently highlighted by the surprising and influential observation that the same network that predicts well on normally labeled data (CIFAR10), can fit randomly labeled images with zero classification error in training while its test classification error is of course at chance level, see Figure 1b. The riddle of large capacity and good predictive performance led to many papers, with a variety of claims ranging from "This situation poses a conceptual challenge to statistical learning theory as traditional measures of model complexity struggle to explain the generalization ability of large artificial neural networks... " , to various hypotheses about the role of flat minima;; , about; and to a number of other explanations (e.g. ;) for such unusual properties of deep networks. We start by defining some key concepts. We call "loss" the measure of performance of the network f on a training set S = x 1, y 1, · · ·, x N, y N. The most common loss optimized during training for binary classification is the logistic loss L(f) = 1 N N n=1 ln(1 + e −ynf (xn) ). We call classification "error" 1 N N n=1 H(−y n f (x n)), where y is binary and H is the Heaviside function with H(−yf (x)) = 1 if −yf > 0 which correspond to wrong classification. There is a close relation between the logistic loss and the classification error: the logistic loss is an upper bound for the classification error. Thus minimizing the logistic loss implies minimizing the classification error. The criticism in papers such as refers to the classification error. However, training minimizes the logistic loss. As a first step it seems therefore natural to look at whether logistic loss in training can be used as a proxy for the logistic loss at testing. The second step follows from the following observation. The logistic loss can always be made arbitrarily small for separable data (when f (x n)y n > 0, ∀n) by scaling up the value of f and in fact it can be shown that the norm of the weights of f grows monotonically with time during gradient descent. 
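A tiny numerical illustration of this point, using assumed margin values y_n f(x_n) > 0 for a separable training set: scaling the network output by a growing factor ρ drives the logistic loss toward zero while the classification error stays at zero.

```python
import numpy as np

# Toy example: on separable data (y_n * f(x_n) > 0 for all n), multiplying the
# output by rho shrinks the logistic loss without changing any predicted sign.
margins = np.array([0.2, 0.5, 1.3, 0.7])          # assumed values of y_n * f(x_n)
for rho in [1, 10, 100]:
    loss = np.mean(np.log1p(np.exp(-rho * margins)))
    err = np.mean(rho * margins <= 0)
    print(f"rho={rho:4d}  logistic loss={loss:.2e}  classification error={err:.0%}")
```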
Notice that multiplying f by any positive number does not affect its sign and thus does not change its behavior at classification. The second step is then to consider the logistic loss of the normalized networkf with f = ρf, with ρ = ρ 1...ρ K where ρ i is the Frobenius norm of the weight matrix of layer i. Now we show that by measuring the performance of a trained network on the training set in terms of the quantities described above, classical generalization holds: the performance of the network on the training set becomes a good qualitative proxy for the expected future performance of the network. The precise procedure involves normalizing the output of the trained network by dividing it by a single number -the product of the Frobenius norms of the weight matrices at each layer -and then measuring the cross-entropy loss. Figure 2a) shows the cross entropy test loss plotted versus the training loss for networks with the same architecture trained with different initializations (this is a way to get networks with different test performance). There is no clear relation between training and test cross-entropy loss in the same way that there is not between classification error in training and testing. Normalizing each network separately yields Figure 2b ): now the test loss is tightly predicted by the training loss. We emphasize that normalization does not change the classification performance of the networks. In fact, since (binary) classification depends only on the sign of f (x), it is not changed by normalization to any ball. The figure suggests that the empirical cross-entropy loss off on the training set predicts rather well how networks will perform on a test set. Impressively, the same normalization process correctly predicts the test loss when the training is on randomly labeled data. This apparent lack of predictivity of the training performance on randomly labeled data was at the core of Zhang et al. criticism of machine learning theory . Figure 2 shows that it is in fact possible to gauge just from the training performance that the network trained on randomly labeled examples will have chance performance on a test set even if its classification error in training is always zero. As shown in Figure 2 the data point corresponding to the randomly trained network still satisfies approximately the same linear relationship, as explained by the classical theory. Thus, while Figure 2 shows that the normalized train cross-entropy can predict the performance of the of the normalized test cross-entropy, Figure 4 We define a deep network with K layers with the usual elementwise scalar activation functions σ(z): R → R as the set of functions where the input is x ∈ R d, the weights are given by the matrices W k, one per layer, with matching dimensions. We use the symbol W as a shorthand for the set of W k matrices k = 1, · · ·, K. For simplicity we consider here the case of binary classification in which f takes scalar values, implying that the last layer matrix W K is W K ∈ R 1,K l and the labels are y n ∈ {−1, 1}. There are no biases apart form the input layer where the bias is instantiated by one of the input dimensions being a constant. The activation function in this paper is the ReLU activation. We denote the network as f = f (W 1, · · ·, W K ; x) where x is the input and the weight matrices W k are the parameters. 
We callf the network with normalized weights matrices, Consider different asymptotic minima of the empirical loss obtained with the same network architecture on the same training set by minimization of the cross-entropy loss with different initial conditions (see Appendix). We obtain different test losses, depending on initial conditions. The question is whether their test performance can be predicted from empirical properties measured only on the training set. 1 used in our experiments are multiclass problems. In this theory section we discuss for simplicity the simpler case of binary classification. Exactly the same steps in the theoretical arguments below apply to the cross-entropy loss (because Equations 1,3 apply, see Appendix). Consider the structure of the loss for a deep network. The "positive homogeneity" property of ReLU networks implies the following property: where W K = ρ kWK and ||W K || = 1 and ρ 1 · · · ρ K = ρ. 1 MNIST and CIFAR100 are in the appendix This property is valid for layer-wise normalization under any norm. We emphasize again that have the same classification performance on any data set. Furthermore, during gradient descent on separable data -most data sets are separable by overparametrized deep networks -it can be shown (see) that the ρ factor continue to increase with time towards infinity, driving an exponential type loss to zero without affecting the classification performance, which only depends on the sign of y n f (x n)∀n. As we discussed earlier, it seems that different networks corresponding to different empirical minimizers 2 could be evaluated in terms of their normalized formf (x). The intuition is similar to the linear case in which the classifier w T x depends on the unit vectorw = w |w| while |w| diverges to infinity;. 2 In general an overparametrized network may have a large number of global minima, see for instance. To assessf (x) we compute its logistic loss (which is the special case of cross-entropy in the binary case)L Of course for separable data (y n f (x n) > 0, ∀n) the loss off is larger than the loss of f since the negative exponent is smaller. Figure 2 uses L 2 for layer-wise normalization. We are now ready to explain the of Figure 2 in terms of classical machine learning theory. A typical generalization bound that holds with probability at least (1 − δ), ∀f ∈ F has the form: where is the expected loss, R N (F) is the empirical Rademacher average of the class of functions F measuring its complexity; c 1, c 2 are constants that reflect the Lipschitz constant of the loss function and the architecture of the network. We use the bound in Equation 3 and the key observation that the Rademacher complexity satisfies the property, because of homogeneity of the networks. Then, the bound on the cross-entropy loss for the unnormalized network gives sinceL(f) ≈ 0. Considering the corresponding bound for the cross-entropy loss of the normalized network scaled with any desired scaling R gives In our experiments we find that R N (F) is small for the value of N of our datasets and R N (F) is large. Equation 5 implies then that the unnormalized test loss will be quite different from zero and thus different from the unnormalized train loss. On the other hand, in Equation 6 the terms 2N are small, implying that the normalized test loss is very close to the normalized train loss. Thus Equation 6 with R = 1 shows that L(f) is bounded byL(f), predicting a bound in terms of a linear relationship with slope one and a small offset between L(f) andL(f). 
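As a practical note, the normalized loss used in these comparisons can be computed without retraining: by homogeneity, dividing the output of a bias-free ReLU network by the product of the per-layer Frobenius norms is equivalent to evaluating the network with layer-wise normalized weights. The sketch below assumes a standard PyTorch module whose weight tensors are exposed through .parameters(); it is an illustration rather than the exact evaluation code used for the figures.

```python
import torch
import torch.nn.functional as F

def normalized_cross_entropy(model, inputs, labels):
    """Cross-entropy of the 'normalized' network: by homogeneity of bias-free
    ReLU networks, dividing the output by the product of per-layer Frobenius
    norms equals evaluating the network with unit-norm weight matrices."""
    rho = 1.0
    for w in model.parameters():
        if w.dim() > 1:                      # weight matrices / conv kernels only
            rho = rho * torch.norm(w)        # Frobenius norm of the layer
    logits = model(inputs)
    return F.cross_entropy(logits / rho, labels)
```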
The prediction is verified experimentally (see Figure 2). Notice that both homogeneity of the ReLU network and separability, that is very small cross-entropy loss, are key assumptions in our argument. The first applies to networks with a linear or ReLU activation but not to networks with other activation functions. The second is usually satisfied by overparametrized networks. Thus Equation 6 shows that the training cross-entropy loss is a very good proxy of the cross-entropy loss at testing, implying generalization for a relatively small number of examples N. Notice that for all the networks in Figure 2, classification performance at training is perfect and that scaling does not change the sign of the networks outputs and therefore their behaviour in terms of classification. In particular, Figure 2 shows that performance on the randomly labeled training set when measured for the normalized network is bad (despite classification being at zero error) predicting correctly bad performance on the test set. As we mentioned, Figure 4 b shows that there is a good correlation between crossentropy loss and classification performance: empirically the ranking between the different classifiers is mostly the same for crossentropy vs classification loss. and of the upper bounds Equations 3 and 6. Lower bounds, however, are not available. As a consequence, the theory does not guarantee that among two (normalized) networks, the one with lower cross-entropy loss in training will always have a lower classification error at test. This difficulty is not specific to deep networks. It is common to approaches using a surrogate loss function. The empirical evidence however supports the claim that there is a roughly monotonic relationship between training (and testing) loss of the normalized network and its expected classification error: Figure 4b shows an approximately monotonic relation between normalized test cross-entropy loss and test classification error. The linear relationship we found means that the generalization error of Equation 3 is small once the complexity of the space of deep networks is "dialed-down" by normalization. It also means that, as expected from the theory of uniform convergence, the generalization gap decreases to zero for increasing size of the training set (see Figure 1). Thus there is indeed asymptotic generalization -defined as training loss converging to test loss when the number of training examples grows to infinity -in deep neural networks, when appropriately measured. The title in "Understanding deep learning requires rethinking generalization" seems to suggest that deep networks are so "magical" to be beyond the reach of existing machine learning theory. This paper shows that this is not the case. On the other hand, the generalization gap for the classification error and for the unnormalized cross-entropy is expected to be small only for much larger N (N must be significantly larger than the number of parameters). However, consistently with classical learning theory, the cross-entropy loss at training predicts well the cross-entropy loss at test when the complexity of the function space is reduced by appropriate normalization. For the normalized case with R = 1 this happens in our data sets for a relatively "small" number N of training examples as shown by the linear relationship of Figure 2. The classical analysis of ERM algorithms studies their asymptotic behavior for the number of data N going to infinity. 
In this limiting regime, N > W where W is the fixed number of weights; consistency (informally the expected error of the empirical minimizer converges to the best in the class) and generalization (the empirical error of the minimizer converges to the expected error of the minimizer) are equivalent. This note implies that there is indeed asymptotic generalization and consistency in deep networks. However, it has been shown that in the case of linear regression, for instance with kernels, there are situations -depending on the kernel and the data -in which there is simultaneously interpolation of the training data and good expected error. This is typically when W > N and corresponds to the limit for λ = 0 of regularization, that is the pseudoinverse. It is likely that deep nets may have a similar regime, in which case the implicit regularization described here, with its asymptotic generalization effect, is just an important prerequisite for a full explanation for W > N -as it is the case for kernel machines under the square loss. The of this paper strongly suggested that the complexity of the normalized network is controlled by the optimization process. In fact a satisfactory theory of the precise underlying implicit regularization mechanism has now been proposed As expected, the linear relationship we found holds in a robust way for networks with different architectures, different data sets and different initializations. Our observations, which are mostly relevant for theory, yield a recommendation for practitioners: it is better to monitor during training the empirical "normalized" cross-entropy loss instead of the unnormalized cross-entropy loss actually minimized. The former matters in terms of stopping time and predicts test performance in terms of cross-entropy and ranking of classification error. More significantly for the theory of Deep Learning, this paper confirms that classical machine learning theory can describe how training performance is a proxy for testing performance of deep networks. While the theoretical explanation in the main text applies to the case of binary classification, the extension to multi-class case follows straightforwardly. Recall some definitions for neural networks with multiple outputs. Let C be the number of classes -the neural network is then a vector f (W ; x) ∈ R C. The component fj(W ; x) denotes the j-th output. The dataset is again composed of examples xn ∈ R d and labels are now yn ∈ [C]. Note that nothing here changes in regards to homogeneity, and we again can define a normalized network The main theoretical arguments depend on the generalization bounds of the form As the right hand side depends on the neural networks, which do not change in any substantial way, all that remains is understanding the multi-class loss. To transform the outputs of the network into probabilities, the Softmax function is used The cross-entropy loss is then defined simply aŝ It's very easy to see that this reduces to the logistic loss in the binary case. Classification now depends only on the margin ηn = fy n (W ; x) − max j =yn {fj(W ; x)} -if ηn > 0 then the example is correctly classified. This means that, again, classification only cares about the sign of the margin and not the normalization of the neural network. One final property of note is that for separable data, the loss monotonically decreases with increasing ρ. To see this, let us write αnj = fy n (W ; x) − fj(W ; x), which is a positive quantity in the separable case. 
Additionally define gn = − log j =yn e −α nj = − log j =yn e −ρα nj, which is clearly a monotonic function of ρ if all αnj > 0. We can now rewrite the Cross-entropy loss aŝ which implies that we can drive the loss to 0 by increasing ρ → ∞. Top The top left graph shows testing vs training cross-entropy loss for networks trained on the same data sets but with different initializations. The top right graph shows the testing vs training loss for the same networks, normalized by dividing each weight by the Frobenius norm of its layer. Notice that all points have zero classification error at training. The red point on the top right refers to a network trained on the same CIFAR data set but with randomized labels. It shows zero classification error at training and test error at chance level. The top line is a square-loss regression of slope 1 with positive intercept. The bottom line is the diagonal at which training and test loss are equal. The networks are 3-layer networks; the first layer is convolutional, 64 filters of size 5x5, stride 2, padding 2, no bias, ReLU activation; the second layer is also convolutional, 64 filters of size 5x5, stride 2, padding 2, no bias, ReLU activation; the third layer is fully connected, input size 64*8*8, output size 10, no bias, softmax activation. The training is on the CIFAR-10 dataset with 50k training examples, 10k testing examples. The network used for the point in red was trained similarly except the testing set and training set labels were randomized. No data augmentation was performed, but data were normalized to mean (The bottom graphs show similar experiments for ResNet-56 with 56 layers, demonstrating that our observations hold for state-of-the-art networks (testing errors ranging from 7% to 9%; on this 10-classes classification problem chance performance is 90% error). We again emphasize that the performance in classification of normalized and unnormalized networks is exactly the same. The normalization in the case of ResNet was performed by using the norm of the output of each network on one example from CIFAR-10, because we found it computationally difficult to directly evaluate the effective norm of the residual layers. The networks were trained for 200 epochs with learning rate = 0.01 for the first 100 epochs and learning rate = 0.001 for the second 100 epochs. SGD was used with batch size of 128 and shuffled training data with random crop and horizontal flip data augmentation. First we start with a common observation: even when two networks have the same architecture, same optimization meta parameters and same training loss, they usually have different test performances (i.e. error and loss), because the stochastic nature of the minimization process leads to convergence to different minima among the many existing in the loss landscape;. With standard settings the differences are usually small (though significant, as shown later). We use therefore two approaches to magnify the effect: • Initialize networks with different levels of "random pretraining": the network is pretrained for a specified number of epochs on "corrupted" training data -the labels of a portion of the examples are swapped with each other in a random fashion. • Initialize the weights of the networks with different standard deviations of a diagonal Gaussian distribution. As it turns out, different standard deviations yield different test performance. Similar techniques have been used previously;. 
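A hedged sketch of the second technique, initializing weights from Gaussians with different standard deviations to obtain minimizers with different test performance; the module types checked and the sweep values are assumptions made for illustration.

```python
import torch.nn as nn

def gaussian_init(model, std):
    """Re-initialize all weight matrices of `model` from N(0, std^2); sweeping
    std produces networks that reach ~zero training error but land in
    different minima with different test performance."""
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            nn.init.normal_(m.weight, mean=0.0, std=std)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

# e.g. sweep over standard deviations before training each copy of the network
# for std in [0.01, 0.05, 0.1, 0.5]:
#     gaussian_init(model, std); train(model)
```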
We show the of "random pretraining" with networks on CIFAR-10 (Figure 5) and CIFAR-100 (Figure 7) and initialization with different standard deviations on CIFAR-10 (Figure 6) and CIFAR-100 (Figure 8). Our observations are that the linear relationship between train loss and test loss for normalized networks hold in a robust way under several conditions: • Independence from Initialization: The linear relationship is independent of whether the initialization is via pretraining on randomly labeled natural images or whether it is via larger initialization, as shown by Figures 11 and 12. • Independence from Network Architecture: The linear relationship of the test loss and train loss does not depend on the network architectures we tried. • Independence from Data Set: Figures 10, 11, 12, 9 show the linear relationship on CIFAR10 while Figures 17 and 18 show the linear relationship on CIFAR100. • • Normalization is independent of training loss: Figure 10 shows that networks with different cross-entropy training losses (which are sufficiently small to guarantee zero classification error), once normalized, show the same linear relationship between train loss and test loss. One way to formalize the upper bound on classification error by the logistic loss is to consider the excess classification risk R(f) − R *, where R(f) is the classification error associated with f and R * is the Bayes error. Let us call as before L(f) the expected cross-entropy loss and L * the optimal expected cross entropy loss. Then the following bound holds for binary classification in terms of the so-called ψ-transform of the logistic loss: where the ψ function for the logistic is similar to the ψ for the exponential loss which is ψ(x) = 1 − √ 1 − x 2. The key point here is that ψ −1 is monotonic: minimizing the logistic or cross-entropy loss implies minimizing the classification error. Our arguments imply that among two unnormalized minimizers of the exponential loss that achieve the same given small empirical lossL =, the minimizer with higher product of the norms ρ1, · · ·, ρK has the higher capacity and thus the highest expected loss L. Experiments support this claim, see Figure 13. Notice the linear relationship of test loss with increasing capacity on the top right panels of Figure 11, 12, 17, 18. Product of L2 Norms of Weights from All Layers (Unormalized Net) Figure 13 shows that when the capacity of a network (as measured by the product norm of the layers) increases, so does the test error.; This section shows figures replicating the main on the MNIST data set. Figures 15 and 16 show that the linear relationship holds after normalization on the MNIST data set. Figure 16 shows the linear relationship holds after adding the point trained only on random labels. This section is about replicating the main on CIFAR-100. Figure 7 shows how different test performance can be obtained with pretraining on random labels while Figure 8 shows that different increasing initializations are also effective. Figures 17 and 18 show that the linear relationship holds after normalization, regardless of whether the training was done with pretraining on random labels or with large initialization. This section is about replicating the main with ResNets (see main text figures). 
To avoid using the product of layerwise norms -whose form is not obvious for ResNets -we normalized the different architectures using their output on the same single image, exploting the fact that all the networks only differ in their initialization (for this reason we did not plot the RL point because this shortcut to normalization does not apply in the case of a different training set with random labels). While in this article we use the term generalization when the offset in the difference between training and test losses is small, the technical definition of "generalization" just requires that the offset decreases with increasing N. This means that for all f ∈ F, limN→∞ |L(f) −L(f)| → 0, since in general we expect the Rademacher complexity to decrease as N increases. As Figure 19 shows, |L(f) −L(f)| does in fact decrease with increasing N (notice that L(f) ≈L(f) around N = 50000 for normalization to the unit L2 ball of our network). These arguments do not depend on the specific form of the Rademacher complexity. A typical margin bound for classification is where η is the margin, L binary (f) is the expected classification error, Lsurr(f) is the empirical loss of a surrogate loss such as the logistic or the exponential. For a point x, the margin is η ∼ yρf (x). Since RN (F) ∼ ρ, the margin bound says that the classification error is bounded by 1 η that is by the value off on the "support vectors" -that is on the xi, yi s.t arg minn ynf (xn). We have looked at data showing the test classification error versus the inverse of the margin. The data are consistent with the margin bound in the sense that the error increases with the inverse of the margin and can be bounded by a straight line with appropriate slope and intercept. Since the bound does not directly provide slope and intercept, the data do not represent a very convincing evidence. Why are all the cross-entropy loss values close to chance (e.g. ln 10 ≈ 2.3 for a 10 class data set) in the plots for convnets -bit not for ResNets -showing the linear relationship? This is because most of the (correct) outputs of the normalized neural networks are close to zero as shown by Figure 20. We would expect the norm of the network to be appromimately bounded by |f (W ; x)| |W ||x| = |x|; the data x is usually pre-processed to have mean 0 and a standard deviation of 1. In fact, for the MNIST experiments, the average value f (x) of the most likely class according to the normalized neural network is 0.026683 with a standard deviation 0.007144. This means that significant differences, directly reflecting the predicted class of each point, are between 0.019539 and 0.033827. This in turn implies that the exponentials in the cross-entropy loss are all very close to 1. Before normalization (which of course does not affect the classification performance), the average value f (x) of the most likely class was 60.564373 with standard deviation 16.214078. The model is a 3-layer convolutional ReLU network with the first two layers containing 24 filters of size 5 by 5; the final layer is fully connected; only the first layer has biases. There is no pooling. The network is overparametrized: it has 154, 464 parameters (compared to 50, 000 training examples). The model is the same 3-layer convolutional ReLU network as in section L.1.1 except it had 34 units. The network was still overparametrized: it has 165, 784 parameters (compared to 50, 000 training examples). The model is a 5-layer convolutional ReLU network with (with no pooling). 
Its convolutional layers have 32, 64, 64, and 128 filters of size 3 by 3; the final layer is fully connected; batch normalization is used during training. The network is overparametrized, with about 188,810 parameters (compared to 50,000 training examples).
BkelnhNFwB
Contrary to previous beliefs, the training performance of deep networks, when measured appropriately, is predictive of test performance, consistent with classical machine learning theory.
We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world. We consider a setting where an agent with limited memory and computational resources is dropped into a world. The agent has to simultaneously act in the world and learn to become proficient in the tasks it encounters. Let us further consider a setting where the agent has some prior knowledge about the world in the form of a nominal dynamics model. However, the state space of the world could be very large and complex, and the set of possible tasks very diverse. This complexity and diversity, combined with limited computational capability, rules out the possibility of an omniscient agent that has experienced all situations and knows how to act optimally in all states, even if the agent knows the dynamics. Thus, the agent has to act in the world while learning to become competent. Based on the knowledge of dynamics and its computational resources, the agent is imbued with a local search procedure in the form of trajectory optimization. While the agent would certainly benefit from the most powerful of trajectory optimization algorithms, it is plausible that very complex procedures are still insufficient or inadmissible due to the complexity or inherent unpredictability of the environment. Limited computational resources may also prevent these powerful methods from real-time operation. While the trajectory optimizer may be insufficient by itself, we show that it provides a powerful vehicle for the agent to explore and learn about the world. Due to the limited capabilities of the agent, a natural expectation is for the agent to be moderately competent for new tasks that occur infrequently and skillful in situations that it encounters repeatedly by learning from experience. Based on this intuition, we propose the plan online and learn offline (POLO) framework for continual acting and learning. POLO is based on the tight synergistic coupling between local trajectory optimization, global value function learning, and exploration. We will first provide intuitions for why there may be substantial performance degradation when acting greedily using an approximate value function. We also show that value function learning can be accelerated and stabilized by utilizing trajectory optimization integrally in the learning process, and that a trajectory optimization procedure in conjunction with an approximate value function can compute near optimal actions. 
In addition, exploration is critical to propagate global information in value function learning, and for trajectory optimization to escape local solutions and saddle points.
FIG4: Examples of tasks solved with POLO. A 2D point agent navigating a maze without any directed reward signal, a complex 3D humanoid standing up from the floor, pushing a box, and in-hand re-positioning of a cube to various orientations with a five-fingered hand. A video demonstration of our results can be found at: https://sites.google.com/view/polo-mpc.
In POLO, the agent forms hypotheses on potential reward regions, and executes temporally coordinated action sequences through trajectory optimization. This is in contrast to strategies like ε-greedy and Boltzmann exploration that explore at the granularity of individual timesteps. The use of trajectory optimization enables the agent to perform directed and efficient exploration, which in turn helps to find better global solutions. The setting studied in the paper models many problems of interest in robotics and artificial intelligence. Local trajectory optimization becomes readily feasible when a nominal model and computational resources are available to an agent, and can accelerate learning of novel task instances. In this work, we study the case where the internal nominal dynamics model used by the agent is accurate. Nominal dynamics models based on knowledge of physics, or obtained through learning, complement a growing body of work on successful simulation-to-reality transfer and system identification (BID34; BID31; BID23). Combining the benefits of local trajectory optimization for fast improvement with the generalization enabled by learning is critical for robotic agents that live in our physical world to continually learn and acquire a large repertoire of skills. The POLO framework combines three components: local trajectory optimization, global value function approximation, and an uncertainty- and reward-aware exploration strategy. We first present the motivation for each component, followed by the full POLO procedure. We model the world as an infinite horizon discounted Markov Decision Process (MDP), which is characterized by the tuple M = {S, A, R, T, γ}. S ⊆ R^n and A ⊆ R^m represent the continuous (real-valued) state and action spaces respectively. R: S × A → R represents the reward function. T: S × A × S → R+ represents the dynamics model, which in general could be stochastic, and γ ∈ (0, 1) is the discount factor. A policy π: S × A → R+ describes a mapping from states to actions. The value of a policy at a state is the average discounted reward accumulated by following the policy from that state: V^π(s) = E[ Σ_{t=0}^∞ γ^t r(s_t, a_t) | s_0 = s, a_t ∼ π(·|s_t), s_{t+1} ∼ T(·|s_t, a_t) ]. The overall performance of the policy over some start state distribution β is given by: J(π, β) = E_{s∼β}[ V^π(s) ]. For notational simplicity, we use s′ to denote the next state visited after (from) s. As described earlier, we consider the setting where an agent is dropped into a complex world. The agent has access to an internal model of the world. However, the world can be complex and diverse, ruling out the possibility of an omniscient agent. To improve its behavior, the agent has to explore and understand relevant parts of the state space while it continues to act in the world. Due to the availability of the internal model, the agent can revisit states it experienced in the world and reason about alternate potential actions and their consequences to learn more efficiently. The optimal value function describes the long term discounted reward the agent receives under the optimal policy.
Defining the Bellman operator at state s as

BV(s) = max_a E[ r(s, a) + γ V(s') ],

the optimal value function V* corresponds to the fixed point V*(s) = BV*(s) ∀s ∈ S. For small, tabular MDPs, classical dynamic programming algorithms like value iteration can be used to obtain the optimal value function. The optimal policy can be recovered from the value function as:

π*(s) = arg max_a E[ r(s, a) + γ V*(s') ].

For more complex MDPs, computing the optimal value function exactly is not tractable except in a few well known cases like the LQR (Åström & Murray) and LMDPs (BID46; BID12). Thus, various approximate techniques have been considered in prior works. One popular approach is fitted value iteration (BID9; Munos & Szepesvári, 2008), where a function approximator (e.g. a neural network) is used to approximate the optimal value function. The core structure of fitted value iteration considers a collection of states (or a sampling distribution ν), and a parametric value function approximator V̂_θ. Inspired by value iteration, fitted value iteration updates the parameters as:

θ_{i+1} = arg min_θ E_{s∼ν}[ ( V̂_θ(s) − BV̂_{θ_i}(s) )² ],

where the BV̂_{θ_i}(s) are targets for the regression problem, computed at the specific state s according to the Bellman operator above. After sufficient iterations of this procedure to get a good approximation, the policy is recovered as π̂(s) = arg max_a E[ r(s, a) + γ V̂_θ(s') ]. The success and convergence of this overall procedure depend critically on at least two components: the capacity and structure of the function approximator (θ), and the sampling distribution (ν).

Lemma 1 (BID9). Let V̂ be an approximate value function with ℓ∞ error ε := max_s |V̂(s) − V*(s)|. Let π̂(s) = arg max_a E[ r(s, a) + γ V̂(s') ] be the induced greedy policy. For all MDPs and β, the following bound holds:

J(π*, β) − J(π̂, β) ≤ 2 γ ε / (1 − γ).

Furthermore, for any size of the state space, there exist MDPs and V̂ for which the bound is tight (holds with equality).

Intuitively, this suggests that the performance of π̂ degrades with a dependence on the effective problem horizon determined by γ. This can be understood as the policy paying a price of ε at every timestep. Due to the use of function approximation, errors may be inevitable. In practice, we are often interested in temporally extended tasks where γ ≈ 1, and hence this possibility is concerning. Furthermore, the arg max operation in π̂ could inadvertently exploit approximation errors to produce a poor policy. The performance of fitted value iteration based methods also relies critically on the sampling distribution to propagate global information (Munos & Szepesvári, 2008), especially in sparse reward settings. For some applications, it may be possible to specify good sampling distributions using a priori knowledge of where the optimal policy should visit (e.g. based on demonstration data). However, automatically generating such sampling distributions when faced with a new task may be difficult, and is analogous to the problem of exploration.

Trajectory optimization and model predictive control (MPC) have a long history in robotics and control systems (BID15; BID44). In MPC, starting from state s_t and using the knowledge of the dynamics model, a locally optimal sequence of actions (or policies) up to a moving horizon of H is computed by solving the following optimization problem:

maximize over π̃_{t:t+H−1}:  E[ ∑_{k=t}^{t+H−1} γ^{k−t} r(x_k, u_k) + γ^H r_f(x_{t+H}) ],  subject to  x_{k+1} ∼ T(x_k, u_k),  u_k ∼ π̃_k(·|x_k),  x_t = s_t.   (Eq. 4)

Here, we use x, u, π̃ as dummy variables for states, actions, and policy to distinguish the "imagined" evolution of the MDP used for the trajectory optimization from the actual states (s) observed in the true evolution of the MDP.
Here, r(x, u) represents the running reward, which is the same as the MDP reward function, and r_f(x_{t+H}) represents a terminal reward function. Let {π̃*_k} be the local time-indexed policies obtained as the solution to the optimization problem in Eq. 4. After solving the optimization problem, the first local time-indexed policy is used as π̂_MPC(·|s_t) := π̃*_t(·|x_t). The entire procedure is repeated again in the next time step (t + 1). Note that we have defined the optimization problem over a sequence of feedback policies. However, if the dynamics is deterministic, a sequence of actions {u_k}_{k=t}^{t+H} can be optimized and used instead without any loss in performance. See Appendix C for further discussion. This approach has led to tremendous success in a variety of control systems such as power grids, chemical process control (BID29), and more recently in robotics (BID49). Since MPC looks forward only H steps, it is ultimately a local method unless coupled with a value function that propagates global information. In addition, we also provide intuitions for why MPC may help accelerate the learning of value functions. This synergistic effect between MPC and the global value function forms a primary motivation for POLO.

Lemma 2. Let V̂ be an approximate value function with ℓ∞ error ε := max_s |V̂(s) − V*(s)|. Suppose the terminal reward in Eq. 4 is chosen as r_f(x_{t+H}) = V̂(x_{t+H}), and let the MPC policy be π̂_MPC(·|s_t) := π̃*_t(·|x_t) (from Eq. 4). Then, for all MDPs and β, the performance of the MPC policy can be bounded as:

J(π*, β) − J(π̂_MPC, β) ≤ 2 γ^H ε / (1 − γ).

Proof. The proof is provided in Appendix C.

This suggests that MPC (with H > 1) is less susceptible to approximation errors than greedy action selection. Also, without a terminal value function, we have ε = O(r_max / (1 − γ)) in the worst case, which adds an undesirable scaling with the problem horizon.

Accelerating convergence of the value function: Furthermore, MPC can also enable faster convergence of the value function approximation. To motivate this, consider the H-step Bellman operator:

B_H V(s) = max_{a_0,...,a_{H−1}} E[ ∑_{k=0}^{H−1} γ^k r(s_k, a_k) + γ^H V(s_H) | s_0 = s ].

In the tabular setting, for any V_1 and V_2, it is easy to verify that ||B_H V_1 − B_H V_2||_∞ ≤ γ^H ||V_1 − V_2||_∞. Thus, B_H allows for propagation of global information for H steps, thereby accelerating the convergence due to faster mixing. Note that one way to realize B_H is to simply apply B H times, with each step providing a contraction by γ. In the general setting, it is unknown if there exist alternate, cheaper ways to realize B_H. However, for problems in continuous control, MPC based on local dynamic programming methods (BID18; BID47) provides an efficient way to approximately realize B_H, which can be used to accelerate and stabilize value function learning.

The ability of an agent to explore the relevant parts of the state space is critical for the convergence of many RL algorithms. Typical exploration strategies like ε-greedy and Boltzmann take exploratory actions with some probability on a per time-step basis. Instead, by using MPC, the agent can explore in the space of trajectories. The agent can consider a hypothesis of potential reward regions in the state space, and then execute the optimal trajectory conditioned on this belief, resulting in a temporally coordinated sequence of actions. By executing such coordinated actions, the agent can cover the state space more rapidly and intentionally, and avoid the back and forth wandering that can slow down the learning. We demonstrate this effect empirically in Section 3.1.

To generate the hypothesis of potentially rewarding regions, we take a Bayesian view and approximately track a posterior over value functions. Consider a motivating setting of regression, where we have a parametric function approximator f_θ with prior P(θ). The dataset consists of input-output pairs D = {(x_i, y_i)}_{i=1}^{N}, and we wish to approximate P(θ|D). In the Bayesian linear regression setting with Gaussian prior and noise models, the solution to the following problem generates samples from the posterior (BID25; BID4; BID26):

θ_post = arg min_θ ∑_{i=1}^{N} || ỹ_i − f_θ(x_i) − f_{θ̃}(x_i) ||²,

where ỹ_i = y_i + ε_i, with ε_i ∼ N(0, σ²), is a noisy version of the target and θ̃ ∼ P(θ) is a sample from the prior. Based on this, BID26 demonstrate the benefits of uncertainty estimation for exploration. Similarly, we use this procedure to obtain samples from the posterior for value function approximation, and utilize them for temporally coordinated action selection using MPC. We consider K value function approximators V̂_θ with parameters θ_1, θ_2, ..., θ_K independently trained based on the regression problem above. We consider the softmax of the different samples as the value at a state:

V̂(s) = (1/κ) log( (1/K) ∑_{k=1}^{K} exp( κ V̂_{θ_k}(s) ) ).

Since the above scheme approximates mean + variance for small κ > 0, this procedure encourages the agent to additionally explore parts of the state space where the disagreement between the function approximators is large. This corresponds to the broad notion of optimism in the face of uncertainty (BID3), which has been successful in a number of applications (BID38).

To summarize, POLO utilizes a global value function approximation scheme, a local trajectory optimization subroutine, and an optimistic exploration scheme. POLO operates as follows: when acting in the world, the agent uses the internal model and always picks the optimal action suggested by MPC. Exploration is implicitly handled by tracking the value function uncertainties and the optimistic ensemble softmax evaluation above. All the experience (visited states) from the world is stored into a replay buffer D, with old experiences discarded if the buffer becomes full. After every Z steps of acting in the world and collecting experience, the value functions are updated by: (a) constructing the targets via N-step trajectory optimization as below; (b) performing regression using the randomized prior scheme above, where f_θ corresponds to the value function approximator. For a state s in the buffer and value network k with parameters θ_k, the targets are constructed as:

y_k(s) = max E[ ∑_{t=0}^{N−1} γ^t r(s_t, a_t) + γ^N V̂_{θ_k}(s_N) | s_0 = s ],

which corresponds to solving an N-step trajectory optimization problem starting from state s. As described earlier, using trajectory optimization to generate the targets for fitting the value approximation accelerates the convergence and makes the learning more stable, as verified experimentally in Section 3.3. The overall procedure is summarized in Algorithm 1.

Algorithm 1: Plan Online and Learn Offline (POLO)
1: Inputs: planning horizon H, value function parameters θ_1, θ_2, ..., θ_K, mini-batch size n, number of gradient steps G, update frequency Z
2: for t = 1 to ∞ do
3:   Select action a_t according to MPC (Eq. 4) with terminal reward r_f(s) ≡ V̂(s), the ensemble softmax value above
4:   Add the state experience s_t to replay buffer D
5:   if mod(t, Z) == 0 then
6:     for G times do
7:       Sample n states from the replay buffer, and compute targets using the N-step optimization above
8:       Update the value functions using the randomized prior regression (see Section 2.5 for details)
9:     end for
10:   end if
11: end for
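Below is a minimal sketch of this loop in Python. It assumes a generic `env` the agent acts in, an internal `model` exposing `step(s, a) -> (s', r)`, `gamma`, and `act_dim`, a list of value networks `nets`, and a `fit` routine implementing the randomized-prior regression; the random-shooting planner and the simplified batch targets are illustrative stand-ins (the paper uses MPPI and per-network targets), not the authors' implementation.

```python
import numpy as np

# Illustrative stand-ins; interfaces for `env`, `model`, `nets`, and `fit` are assumptions.

def rollout_return(model, state, actions, value_fn):
    """Score an action sequence: discounted running reward plus terminal value."""
    ret, s = 0.0, state
    for t, a in enumerate(actions):
        s, r = model.step(s, a)                    # nominal internal model
        ret += (model.gamma ** t) * r
    return ret + (model.gamma ** len(actions)) * value_fn(s)

def plan(model, state, value_fn, horizon, n_candidates=32):
    """Crude random-shooting trajectory optimizer (the paper uses MPPI)."""
    cands = [np.random.uniform(-1, 1, (horizon, model.act_dim))
             for _ in range(n_candidates)]
    scores = [rollout_return(model, state, c, value_fn) for c in cands]
    best = int(np.argmax(scores))
    return cands[best][0], scores[best]            # first action, trajectory score

def ensemble_value(nets, s, kappa=0.1):
    """Log-sum-exp over the ensemble; approximately mean + variance for small kappa."""
    vals = np.array([net(s) for net in nets])
    return np.log(np.mean(np.exp(kappa * vals))) / kappa

def polo(env, model, nets, fit, H=32, N=32, Z=16, G=64, n=32, T=12000):
    """Act with MPC under the optimistic terminal value; periodically refit values."""
    buffer, s = [], env.reset()
    vf = lambda x: ensemble_value(nets, x)
    for t in range(1, T + 1):
        a, _ = plan(model, s, vf, H)               # step 3: MPC action selection
        s = env.step(a)
        buffer.append(s)                           # step 4: store visited state
        if t % Z == 0:                             # steps 5-10: offline value update
            for _ in range(G):
                idx = np.random.randint(len(buffer), size=min(n, len(buffer)))
                batch = [buffer[i] for i in idx]
                # N-step trajectory optimization from each state gives its target
                # (simplified: the paper forms per-network targets with each V_k).
                targets = [plan(model, x, vf, N)[1] for x in batch]
                fit(nets, batch, targets)          # e.g. randomized-prior regression
```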
Through empirical evaluation, we wish to answer the following questions: 1. Does trajectory optimization in conjunction with uncertainty estimation in value function approximation result in temporally coordinated exploration strategies? 2. Can the use of an approximate value function help reduce the planning horizon for MPC? 3. Does trajectory optimization enable faster and more stable value function learning?

Before answering the questions in detail, we first point out that POLO can scale up to complex high-dimensional agents like the 3D humanoid and the dexterous anthropomorphic hand (BID23), which are among the most complex control tasks studied in robot learning. Video demonstrations can be found at: https://sites.google.com/view/polo-mpc

Exploration is critical in tasks where immediate rewards are not well aligned with long-term objectives. As a representative problem, we consider a point mass agent in different 2D worlds illustrated in Figure 2: a simple finite-size box with no obstacles, and a maze. This domain serves to provide an intuitive understanding of the interaction between trajectory optimization and exploration, while also enabling visualization of results. In the extreme case of no rewards in the world, an agent with only local information would need to continuously explore. We wish to understand how POLO, with its ensemble of value functions tracking uncertainties, uses MPC to perform temporally coordinated actions. Our baseline is an agent that employs random exploration on a per-time-step basis; MPC without a value function would not move due to the lack of local extrinsic rewards. Second, we consider an agent that performs uncertainty estimation similar to POLO but selects actions greedily (i.e. POLO with a planning horizon of 1). Finally, we consider the POLO agent, which tracks value uncertainties and selects actions using a 32-step MPC procedure. We observe that POLO achieves more region coverage in both point mass worlds compared to the alternatives, as quantitatively illustrated in FIG1. The ensemble value function in POLO allows the agent to recognize the true, low value of visited states, while preserving an optimistic value elsewhere. Temporally coordinated action is necessary in the maze world; POLO is able to navigate down all corridors. (FIG2: humanoid getup and in-hand manipulation tasks; POLO was trained for 12000 and 2500 environment timesteps, respectively. We test POLO with the learned terminal value function against pure MPC and compare the average reward obtained over 3 trials in the getup task and 1000 steps in the manipulation task. On the right, a value function trained with POLO is used by MPC without per-time-step rewards; the agent's height increases, indicating a task-relevant value function. For comparison, we also include the trace of POLO with dense rewards and multiple trials (dashed vertical lines).)

Next, we study if value learning helps to reduce the planning horizon for MPC. To this end, we consider two high-dimensional tasks: humanoid getup, where a 3D humanoid needs to learn to stand up from the ground, and in-hand manipulation, where a five-fingered hand needs to re-orient a cube to a desired configuration that is randomized every 75 timesteps. For simplicity, we use the MPPI algorithm (BID49) for trajectory optimization. In FIG2, we consider MPC and the full POLO algorithm with the same horizon, and compare their performance after T steps of learning in the world. We find that POLO uniformly dominates MPC, indicating that the agent is consolidating experience from the world into the value function. Even with the longest planning horizon, the humanoid getup task has a local solution where the agent can quickly sit up, but cannot discover the chain of actions required to stand upright.
POLO's exploration allows the agent to escape the local solution, and consolidate the experiences to consistently stand up. To further test if the learned value function is task aligned, we take the value function trained with POLO, and use it with MPC without any intermediate rewards. Thus, the MPC is optimizing a trajectory of length H = 64 purely using the value function of the state after 64 steps. We observe, in FIG2, that even in this case, the humanoid is able to consistently increase its height from the floor indicating that the value function has captured task relevant details. We note that a greedy optimization procedure with this value function does not yield good , indicating that the learned value function is only approximate and not good everywhere. While the humanoid getup task presents temporal complexity requiring a large planning horizon, the in-hand manipulation task presents spatial complexity. A large number of time steps are not needed to manipulate the object, and a strong signal about progress is readily received. However, since the targets can change rapidly, the variance in gradient estimates can be very high for function approximation methods BID16. Trajectory optimization is particularly well suited for such types of problems, since it can efficiently compute near-optimal actions conditioned on the instance, facilitating function approximation. Note that the trajectory optimizer is unaware that the targets can change, and attempts to optimize a trajectory for a fixed instance of the task. The value function consolidates experience over multiple target changes, and learns to give high values to states that are not just immediately good but provide a large space of affordances for the possible upcoming tasks. Finally, we study if trajectory optimization can aid in accelerating and stabilizing value function learning. To do so, we again consider the humanoid getup task and study different variants of POLO.In particular, we vary the horizon (N) used for computing the value function targets in Eq.. We observe that as we increase N, the agent learns the value function with fewer interactions with the world, as indicated in FIG3 (a). The benefit of using N −step returns for stable value function learning and actor-critic methods have been observed in numerous works (; BID8, and our experiments reinforce these observations. The use of N −step returns help to traverse the bias-variance trade-off. Furthermore, due to the discounting, the contribution of V (s N) is made weaker and thus the targets are more stable. This mirrors ideas such as target networks commonly used to stabilize training. As discussed earlier, longer horizons make trajectory optimization more tolerant to errors in the value function. To illustrate this, we take the value function trained with POLO on a nominal humanoid model, and perturb the model by changing the size of the head to model value function degradation. FIG3 b) shows that a longer planning horizon can mitigate this degradation. This presents intriguing future possibility of using MPC to improve transfer learning between tasks or robot platforms. Planning and learning: Combining elements of planning and search with approximate value functions has been explored in discrete game domains BID39 BID0 where an MCTS planner is informed by the value function. Alternatively, using prior data to guide the search process in continuous MCTS without explicitly learning a value function has also been explored BID30. 
Related to this, BID2 uses an offline trajectory library for action selection in real-time, but do not explicitly consider learning parametric value functions. RTDP BID7 considers learning value functions based on states visited by the agent, but does not explicitly employ the use of planning. BID50 consider the setting of learning a value function to help MPC, and found the contribution of value functions to be weak for the relatively simple tasks considered in their work. Approaches such as cost shaping BID22 can also be interpreted as hand specifying an approximate value function, and has been successfully employed with MPC. However, this often require careful human design and task specific expertise. An alternative set of approaches BID35 BID21; BID42 use local trajectory optimization to generate a dataset for training a global policy through imitation learning. These approaches do not use MPC at runtime, and hence may often require retraining for changes in tasks or environment. Furthermore, from this line of work have been demonstrated primarily in settings where trajectory optimization alone can solve the task, or use human demonstration data. In contrast, through our exploration schemes, we are able to improve over the capabilities of MPC and solve tasks where MPC is unsuccessful. Planning and exploration: Exploration is a well-studied and important problem in RL. The importance of having a wide and relevant state distribution has been pointed out in numerous prior works (Munos & Szepesvári, 2008; BID6 BID32 . Strategies such as -greedy or Gaussian exploration have recently been used to successfully solve a large number of dense reward problems. As the reward becomes sparse or heavily delayed, such strategies be-come intractable in high-dimensional settings. Critically, these approaches perform exploration on a per time-step basis, which can lead to back and forth wandering preventing efficient exploration. Parameter-space exploration BID28 BID14 methods do not explore at each time step, but rather generate correlated behaviors based on explored parameters at the start. However, such approaches do not consider exploration as an intentional act, but is rather a deviation from a well defined objective for the agent. Deep exploration strategies BID24 sample a value function from the posterior and use it for greedy action selection. Approaches based on notions of intrinsic motivation and information gain BID11 BID40 BID17 BID27 BID8) also explicitly introduce exploration bonuses into the agent's reward system. However, such approaches critically do not have the element of planning to explore; thus the agent may not actually reach regions of high predicted reward because it does not know how to get there. Our work is perhaps closest to the E3 framework of BID19, which considers altered MDPs with different reward functions, and executes the optimal action under that MDP. However solving these altered MDPs is expensive and their solution is quickly discarded. MPC on the other hand can quickly solve for local instance-specific solutions in these MDPs. Model-free RL: Our work investigates how much training times can be reduced over model-free methods when the internal model is an accurate representation of the world model. As a representative number, BID36 report approximately 5 days of agent experience and 128 CPU core hours for solving tasks such as getting up from the ground. In contrast, POLO requires only 12 CPU core hours and 96 seconds of agent experience. 
Recently, policy gradient methods were also used for in-hand manipulation tasks BID23, where 3 years of simulated experience and 500 CPU hours were used for object reorientation tasks. For a similar task, POLO only required 1 CPU hour. Of course, model-free methods do not require an accurate internal model, but our suggest that much less experience may be required for the control aspect of the problem. Our work can be viewed as a strong model-based baseline that model-free RL can strive to compete with, as well as a directly useful method for researchers studying simulation to reality transfer. In an alternate line of work, internal models have been used for variance reduction purposes in model-free RL BID13 BID10, in contrast to our use of MPC. Related to this, BID5 consider learning an internal model for discrete action domains like ALE and use short horizon MCTS for planning. learn a dynamics model in simple continuous control tasks and use a random shooting MPC method for action selection. These lines of work consider the interplay between learning dynamics models and planning procedures, and try to improve the quality of internal models. As a consequence, they focus on domains where simple action selection procedures with accurate models obtain near-optimal performance. In our work, we show that we can learn value functions to help real-time action selection with MPC on some of the most high-dimensional continuous control tasks studied recently. Thus, the two lines of work are complementary, and combining POLO with model learning would make for an interesting line of future work. In this work we presented POLO, which combines the strengths of trajectory optimization and value function learning. In addition, we studied the benefits of planning for exploration in settings where we track uncertainties in the value function. Together, these components enabled control of complex agents like 3D humanoid and five-fingered hand. In this work, we assumed access to an accurate internal dynamics model. A natural next step is to study the influence of approximation errors in the internal model and improving it over time using the real world interaction data. The model used for the humanoid experiments was originally distributed with the MuJoCo software package and modified for our use. The model nominally has 27 degrees of freedom, including the floating base. It utilizes direct torque actuation for control, necessitating a small timestep of 0.008 seconds. The actuation input is limited to ±1.0, but the original gear ratios are left unchanged. For POLO, the choice of inputs for the value function involves a few design decisions. We take inspiration from robotics by using only easily observed values. For value function approximation in POLO for the humanoid tasks, we use an ensemble of 6 neural networks, each of which has 2 layers with 16 hidden parameters each; tanh is used for non-linearity. Training is performed with 64 gradient steps on minibatches of size 32, using ADAM with default parameters, every 16 timesteps the agent experiences. In scenarios where the agent resets, we consider a horizon of 600 timesteps with 20 episodes, giving a total agent lifetime of 12000 timesteps or 96 seconds. When we consider no resets, we use the same total timesteps. A control cost is shared for each scenario, where we penalize an actuator's applied force scaled by the inverse of the mass matrix. Task specific rewards are as follows. 
In the getup scenario, the agent is initialized in a supine position, and is required to bring its root height to a target of 1.1 meters. The reward functions used are as follows. In the non-sparse case, the difficulty in this task is eschewing the immediate reward for sitting in favor of the delayed reward of standing; this sequence is non-trivial to discover. In the walking scenario, the agent is initialized in an upright configuration. We specify a reward function that either penalizes deviation from a target height of 1.1 meters, or penalizes the deviation from both a target speed of 1.0 meters/second and the distance from the world's x-axis to encourage the agent to walk in a straight line. We choose a target speed as opposed to rewarding maximum speed to encourage stable walking gaits. For the box environment, we place a 0.9 meter wide cube in front of the humanoid, which needs to be pushed to a specific point. The friction between the box and ground is very low, however, and most pushes cause the box to slide out of reach; POLO learns to better limit the initial push to
We propose a framework that incorporates planning for efficient exploration and learning in complex environments.
Modern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures have been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties? In this work, we investigate the skip connection's effect on network's generalization features. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with'skip connections'. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data is decreased. Deep models have achieved significant success in many applications. However, deep models are hard to train and require longer times to converge. A solution by construction is copying the learned layers from the shallower model and setting additional layers to identity mapping. Skip connection proposed in the Residual Network BID0, shows the new insight of innovation in network structure for computer vision. In the following years, more new and multi-layer-skipping structures have been proposed and proved to have better performance, among which one typical example is DenseNet BID1. ResNet BID0, HighwayNet (Rupesh Kumar BID3 and FractalNets BID2 have all succeeded by passing the deep information directly to the shallow layers via shortcut connection. Densenet further maximize the benefit of shortcut connections to the extreme. In DenseNet (more accurately in one dense block) every two layers has been linked, making each layer be able to use the information from all its previous layers. In doing this, DenseNet is able to effectively mitigate the problem of gradient vanishing or degradation, making the input features of each layer various and diverse and the calculation more efficient. Concatenation in Dense Block: the output of each layer will concatenate with its own input and then being passed forward to the next layer together. This makes the input characteristics of the next layer diversified and effectively improves the computation and helps the network to integrate shallow layer features to learn discriminative feature. Meanwhile, the neurons in the same Dense block are interconnected to achieve the effect of feature reused. This is why DenseNet does not need to be very wide and can achieve very good . Therefore, shortcut connections form the multi-channel model, making the flow of information from input to output unimpeded. Gradient information can also be fed backward directly from the loss function to the the various nodes. In this paper we make the following contributions:• We design experiments to illustrate that on many occasions it is worth adding some skip connections while sacrificing some of the network width. Every single skip connection replacing some of width is able to benefit the whole network's learning ability. 
Our'connection-by-connection' adding experiment can indicate this well.• We perform experiments to show that networks that reuse low-level features in subsequent layers perform better than a simple feed-forward model. We degrade both the quantity and the quality of the training data in different settings and compare the validation performances of these models. Our suggest that while all models are able to achieve perfect training accuracy, both DenseNet and ResNet are able to exhibit better generalization performance given similar model complexities.• We investigate solutions learned by the three types of networks in both a regression and classification involving task in low dimensions and compare the effects of both the dense connections and the skip connections. We show that the contribution of the feature maps reintroduced to deeper layers via the connections allow for more representational power. Skip connections are the main features of DenseNet and ResNet, which is convenient to make gradient flow easily and overcome the overfitting. For this reason, it is interesting and necessary to dive more in the effects of skip connections on the performance. In the original DenseNet paper BID1, three dense blocks are connected sequentially and in each dense block all of the convolutional layers have direct paths connected with each other. In our implementation, there are 8 layers which leads to a total of 28 skip connections within each dense block. We increase the number of skip connections from 0 to 28 and test the validation accuracy on CIFAR100 trained with 10k and 50k samples. The total numbers of parameters in these models are set to the same (92k) by controlling the depths of convolutional filters in each dense block. Experiments are conducted to test the effect of skip connections in the next section. We will change the number of skip connections in network smoothly. The increasing process of the number of skip connections can be described as follows: For each layer, first of all the connections to their previous layers which is 2 layers away is added. Then the connections linking layers farther away from each other is added. In this section the generalization performances of three different network structures, Cascade Network(simple layer-wise CNN), ResNet and DenseNet, using MNIST and CIFAR100 dataset are measured. The experiments are done by modifying these datasets either by decreasing the number of training samples or by adding noise to groundtruth labels so as to test which network structure shows better performance under these'harsh training conditions'. Neural networks is prone to overfitting if the training data fed into the network is insufficient. In order to compare generalization performances of the three networks with different connectivity patterns, the appropriate depth and parameters for these networks are selected carefully for better comparisons. The depths of the networks range from 44 to 46 layers due to the fact that it has a fair learning capacity. We haven't chosen the standard DenseNet or ResNet with over a hundred layers as described in the original papers because the plain Cascade Network will suffer from severer gradient vanishing problems as the number of layers increases. The numbers of channels of each layer are carefully picked so that sizes of the three model's parameters are nearly the same (560k to 580k in our experiment) to ensure fairness of our comparisons. 
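As a concrete illustration of this schedule, the short sketch below enumerates, for an 8-layer dense block, the order in which the 28 possible skip connections would be switched on: all distance-2 connections first, then progressively longer-range ones. Counting the block input as a node (which makes the distance-ordered total come out to 28) and the function name are my reading of the procedure described above, not the authors' code.

```python
def skip_connection_schedule(num_layers=8):
    """Order in which skip connections are enabled in one dense block.

    Nodes 0..num_layers index the block input (0) and its layers (1..num_layers).
    Adjacent connections (distance 1) always exist in a feed-forward stack; a
    'skip' connects two nodes that are at least 2 apart.  Connections are added
    shortest-distance first, matching the connection-by-connection procedure.
    """
    schedule = []
    for dist in range(2, num_layers + 1):          # distance-2 skips first, then farther
        for src in range(0, num_layers + 1 - dist):
            schedule.append((src, src + dist))     # output of `src` is concatenated into `src + dist`
    return schedule

sched = skip_connection_schedule(8)
print(len(sched))   # 28 possible skip connections for an 8-layer block
print(sched[:7])    # the first ones added: all distance-2 skips
```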
In FIG2 (c), the width of each network is specially designed so that when the entire training set is used for training, the three networks reach similar performance, around 50% for all of them. From the figure it can be noticed that while the Cascade Net (Plain Net) has many more parameters, it is the least robust to a decreasing number of training samples. On the contrary, the DenseNet, with more skip connections, has the fewest parameters but the highest robustness. With a decreasing number of training samples, the performance of DenseNet drops more moderately than the others, as shown in FIG2. The architecture can benefit the network's stability and its generalization. As a consequence, dense connections can be a good method of generalization. Similar results have been achieved on MNIST, where the only difference is that the networks contain only fully connected layers without any convolution layers. Considering the potential overfitting problem of a deep linear network and the simplicity of the dataset, we construct just 6 layers for the Cascade Net and DenseNet. As with the CIFAR dataset, we vary the widths in order to achieve similar numbers of parameters. We can conclude from the above experiments that for deep neural networks both with and without convolution layers, skip connections can effectively act as an implicit regularizer in the network training process, making networks generalize better. To test the skip connections' 'adapting skills', we have tried to add some noise to the training dataset. The noise is added by directly setting some pixels in some channels to 0. The result is that as the noise grows bigger, the decrease in DenseNet's performance is smaller than that of the others. As shown in Figure 4, the network with dense connections is more robust to noise.

In this section we visualize the generalization effects of network structures using a simple 1-dimensional curve fitting problem. The network has to learn a function f that maps a scalar value x to a scalar value f(x). A network with better regularization capability should have smoother output in the hidden layers and may thus avoid the over-fitting problem, producing greater generalization ability. We use simple networks which contain only fully-connected layers and nonlinear layers to fit the sinc function, defined as

f(x) = sin(x) / x,

in the range (−15, 15). Three kinds of networks are implemented using their main characteristics: the Cascade Net, which uses a smooth linear model; the ResNet, in which several skip connections are added; and the DenseNet, which has only one dense block with a direct connection between every two layers. In order to analyze the results in more detail, we both plot the network's final learned curve and extract the output from each layer. The parameters of all 3 kinds of networks of the same depth are controlled to be as near as possible, by adjusting the 'width' ('growth rate'). For example, when the growth rate is 2 for DenseNet and 4 for the Plain Net, their parameter counts are 98 and 113 respectively. Though the Plain Net has more parameters, its results are unsatisfactory. The top sequence of subplots in FIG4 shows the learning process of a 7-layer dense net with a training sample of 30 points. Each subplot is the output of each layer (as the growth rate is 2, there are 2 curves in each plot). Also, while there are only 30 training data points, it can still learn well. In the 7th subplot, the blue curve is the standard sinc curve and the green one is its learned version, where it can be seen that the two are almost identical. The last plot is the training loss changing with the epochs.
As for the ordinary Plain Net, the with 30 training data points or even 200 training data points is always very bad without fitting the small waves of both sides. The output 4 curves from each layer is also as simple but the final are bad. It couldn't learn the trivial wave. The thing that is also not satisfactory is in its training loss which drops quickly at the beginning but cannot be smaller later. When more training data are fed to the Plain Net, sometimes it can do a good job. However, most times it still cannot learn more information, like in bottom sequence of subplots in FIG4 with 400 training data, it is still quite hard for this Plain Net to compete with the dense one. • The ResNet is added to them to compare together. At least residual net is of most competitiveness to DenseNet.• Some noise is added to the training data (the training data points are with a small fluctuation from its original sinc curve). With the noise, it is more interesting to see the networks' generalization. The are similar as the circumstance with no noise.• Same experiments are conducted many a time which enables us to plot some statistical . In Figure 6, x-axis is the depth and y-axis is the final loss. The middle point is the mean loss in 10 experiments, the line segments represent the deviation in these same experiments. Still, in these experiments it is designed to make sure that in each depth the parameter number of two kinds of nets are similar through adjusting the growth rate. On one hand, it can be seen that DenseNet is better when deeper. On the other, the Plain Net learns nothing when it is too deep. It is due to the gradient vanishing as will be discussed later. No matter in the mean loss or the deviation, the dense net is smaller than the other two. And It can generalize even much better than residual net relatively when the nets are shallow. We can have a closer look at 15-layer nets of each type in Figure 7. In order to make the parameters numbers of 3 types nearly identical to each other, the Plain Net and the ResNet each has 8 lines of output of previous layer while the DenseNet has only 3 lines (growth rate of each layer). In the figure, only 3 of them are chosen to be displayed to have a more equitable comparison with DenseNet. Instead of showing every layer's output we exhibit the 3 th, 5 th, 7 th, 9 th, 11 th, 13 th and the final layer's output. Also, the training data plotted on the final output is 60 points with trivial deviation from the standard sinc curve, together with the network's learnt curve.• Plain Net:The worst one as can be seen on the top row in Figure 7: gradient vanishing is severe and finally it learns just a little profile of the curve which makes no sense at all.• Net with residual connection: This is much better than the Plain Net (the middle row of Figure 7 . It can fit the curve well in subplot 20. However, the middle output seems a bit complicated and its final fitting is not perfect either. • Net with dense connection: To make number of parameters similar to the two nets above, its growth rate is only 3. It can also fit the curve even better and the output of previous layers is the simplest as the bottom row of Figure 7 .For better investigate the effects of 'skip connection', a more detailed experiment has been conducted. In previous experiments, we only compare no more than 3 kinds of different network architectures, which may not be concrete enough. Thus, we exploit more forms of networks by split the numbers of skip connections. 
Their 'density' is between the two extreme. The skip connections are added one by one, where each defines a different network architecture. Network widths are chosen accordingly to make the numbers of parameters not differ much. The depth is 7 for all architectures, which means there will be at most 21 'skip connections'. As can be seen in FIG6 (a), the fitting effect is becoming better as the network becomes'denser'. The corresponding loss is displayed in FIG6 (b). From these two figures, the representational power of skip connections is conspicuous. We move on to analyzing the three styles of networks above in a two-dimensional classification problem. The networks have to learn a decision boundary from a non-linearly separable data set. We again restrict our depth to eight layers with the number parameters across the Dense, Residual, and the Plain networks being 614, 712, and 712, respectively. The parameters for the Dense network is controlled by adjusting the growth rate of each layer. Using the same hyper-parameters and number of training epochs, we attain the intermediate decision boundaries across each layer of each network to see the progression of complexity with increasing depth. Similar to the onedimensional experiments, networks that generalize better tend to learn smooth decision boundaries in the presence of noise rather than over-fitting to all data points. To visualize the intermediate of the networks, we feed a grid of evenly separated data points in the vicinity of our original two-dimensional data points and record the raw outputs after being activated by the activation function in each of the layers. Since only the last layer, i.e., the output layer, has two-dimensional outputs, we choose one of the dimensions in each layer to visualize. The top row in Figure 9 shows the progression of a densely connected network in the style of Figure 9: Intermediate decision boundaries for three different models. DenseNet BID1. Notice that in the Dense network, which achieves the lowest lost in the test set, every layer receives inputs from all preceding layers, and is, therefore, able to make use of low-level features even at the last layer stage. The first row of the figure shows the intermediate features received by the eighth layer, which includes the linear features like those from the first layer, all the way to higher-level features from the third and the fifth. We decide to show this last layer for this network since it encompasses learning from all previous stages. The benefits of dense connections, however, is not present in the Residual and the Plain networks. The second and third rows of Figure 9 show the features learned in the first, third, and the fifth layers. By introducing skip connections, modern neural network has proved better performance in computer vision area. This paper investigates how skip connections works in vision task and how they effect the learning power of networks. For this reason, we have design some experiments and verify that networks with skip connections can do the regression best among tested network architectures. It indicates that we can get the insights of this interesting architecture and its tremendous learning power.
Our paper analyses the representational power of networks with skip connections, which can act as an implicit regularizer and be used as a method for better generalization.
One of the challenges in training generative models such as the variational autoencoder (VAE) is avoiding posterior collapse. When the generator has too much capacity, it is prone to ignoring the latent code. This problem is exacerbated when the dataset is small and the latent dimension is high. The root of the problem is the ELBO objective, specifically the Kullback–Leibler (KL) divergence term in the objective function. This paper proposes a new objective function that replaces the KL term with one that emulates the maximum mean discrepancy (MMD) objective. It also introduces a new technique, named latent clipping, that is used to control the distance between samples in latent space. A probabilistic autoencoder model, named $\mu$-VAE, is designed and trained on the MNIST and MNIST Fashion datasets using the new objective function, and is shown to outperform models trained with the ELBO and $\beta$-VAE objectives. The $\mu$-VAE is less prone to posterior collapse, and can generate reconstructions and new samples of good quality. Latent representations learned by $\mu$-VAE are shown to be good and can be used for downstream tasks such as classification.

Autoencoders (AEs) are used to learn low-dimensional representations of data. They can be turned into generative models by using adversarial or variational training. In the adversarial approach, one can directly shape the posterior distribution over the latent variables by either using an additional network called a discriminator, or using the encoder itself as a discriminator. AEs trained with variational methods are called Variational Autoencoders (VAEs). Their objective maximizes the variational lower bound (or evidence lower bound, ELBO) of p_θ(x). Similar to AEs, VAEs contain two networks:

Encoder - approximate inference network: In the context of VAEs, the encoder is a recognition model q_φ(z|x), which is an approximation to the true posterior distribution over the latent variables, p_θ(z|x). The encoder tries to map high-level representations of the input x onto latent variables such that the salient features of x are encoded on z.

Decoder - generative network: The decoder learns a conditional distribution p_θ(x|z) and has two tasks: i) for the task of reconstructing the input, it solves an inverse problem by taking the mapped latent z computed from the encoder output and predicting the original input (i.e. reconstruction x' ≈ x); ii) for the generation of new data, it samples new data x' given latent variables z. During training, the encoder learns to map the data distribution p_d(x) to a simple distribution such as a Gaussian, while the decoder learns to map it back to the data distribution p(x). The VAE objective function has two terms: a log-likelihood term (the reconstruction term of the AE objective function) and a prior regularization term. Hence, VAEs add an extra term to the AE objective function, and approximately maximize the log-likelihood of the data, log p(x), by maximizing the evidence lower bound (ELBO):

ELBO(θ, φ; x) = E_{q_φ(z|x)}[ log p_θ(x|z) ] − D_KL( q_φ(z|x) || p(z) ) ≤ log p_θ(x).

Maximizing the ELBO does two things: • Increases the probability of generating each observed data point x. • Decreases the distance between the estimated posterior q(z|x) and the prior distribution p(z), pushing the KL term to zero. A smaller KL term leads to a less informative latent variable. Pushing the KL term to zero encourages the model to ignore the latent variable. This is especially true when the decoder has a high capacity. This leads to a phenomenon called posterior collapse in the literature (e.g., Sønderby et al., 2016).
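For concreteness, the snippet below computes the two ELBO terms for a Gaussian encoder in the usual way (closed-form KL against N(0, I)). It is a generic sketch of the standard objective discussed above, with illustrative tensor names, not code from the paper.

```python
import numpy as np

def elbo_terms(x, x_recon, mu, logvar):
    """Per-sample ELBO pieces for a Gaussian encoder q(z|x) = N(mu, diag(exp(logvar)))
    and a standard normal prior p(z) = N(0, I).

    x, x_recon : (batch, data_dim)   original inputs and reconstructions
    mu, logvar : (batch, latent_dim) encoder outputs
    """
    # Reconstruction term (Gaussian decoder negative log-likelihood, up to a constant).
    recon = np.sum((x - x_recon) ** 2, axis=1)
    # Closed-form KL( q(z|x) || N(0, I) ): pushes mu -> 0 and logvar -> 0 (variance -> 1).
    kl = 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=1)
    return recon, kl   # total loss = mean(recon + kl); driving kl to zero collapses the code
```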
This work proposes a new method to mitigate posterior collapse. The main idea is to modify the KL term of the ELBO such that it emulates the MMD objective . In ELBO objective, minimizing KL divergence term pushes mean and variance parameters of each sample at the output of encoder towards zero and one respectively. This, in turn, brings samples closer, making them indistinguishable. The proposed method replaces the KL term in the ELBO in order to encourage samples from latent variable to spread out while keeping the aggregate mean of samples close to zero. This enables the model to learn a latent representation that is amenable to clustering samples which are similar. As shown in later sections, the proposed method enables learning good generative models as well as good representations of data. The details of the proposal are discussed in Section 4. In the last few years, there have been multiple proposals on how to mitigate posterior collapse. These proposals are concentrated around i) modifying the ELBO objective, ii) imposing a constraint on the VAE architecture, iii) using complex priors, iv) changing distributions used for the prior and the posterior v) or some combinations of these approaches. Modifications of the ELBO objective can be done through annealing the KL term (Sønderby et al., 2016;), lower-bounding the KL term to prevent it from getting pushed to zero , controlling KL capacity by upper bounding it to a pre-determined value or lower-bounding the mutual information by adding skip connections between the latent layer and the layers of the decoder . Proposals that constrain the structure of the model do so by reducing the capacity of the decoder (; ;), by adding skip connections to each layer of the decoder , or by imposing constraints on encoder structure . Taking a different approach, and van den replace simple Gaussian priors with more complex ones such as a mixture of Gaussians. The most recent of these proposals are δ-VAE and SKIP-VAE . δ-VAE imposes a lower bound on KL term to prevent it from getting pushed to zero. One of the drawbacks of this approach is the fact that it introduces yet another hyper-parameter to tune carefully. Also, the model uses dropout to regularize the decoder, reducing the effective capacity of the decoder during training. It is not clear how effective the proposed method is when training more powerful decoders without such regularization. Moreover, the proposal includes an additional constraint on encoder structure, named the anti-causal encoder. SKIP-VAE, on the other hand, proposes to lower bound mutual information by adding skip connections from latent layers to each layer of decoder. One drawback of this approach is that it introduces additional non-linear layer per each hidden layer, ing in more parameters to optimize. Moreover, its advantage is not clear in cases, where one can increase capacity of decoder by increasing number of units in each layer (or number of channels in CNN-based decoders) rather than adding more layers. When we train a VAE model, we ideally want to end up with a model that can reconstruct a given input well and can generate new samples in high quality. Good reconstruction requires extracting the most salient features of data and storing them on latent variable ('Encoder + Latent layer' part of the model). Generating good samples requires a generative model ('Latent layer + Decoder' part) with a model distribution that is a good approximation to actual data distribution. 
However, there tends to be a trade-off between the reconstruction quality of a given input and the quality of new samples. To understand why we have such a trade-off, we can start by looking at the ELBO objective function:

ELBO(θ, φ; x) = E_{q_φ(z|x)}[ log p_θ(x|z) ] − D_KL( q_φ(z|x) || p(z) ).

Maximizing this objective function increases p_θ(x), the probability of generating each observed data point x, while decreasing the distance between q(z|x) and the prior p(z). Pushing q(z|x) closer to p(z) makes the latent code less informative, i.e. z is influenced less by the input data x. The reason why the KL term can be problematic becomes clearer when we look at the KL loss term, typically modelled with the log of the variance during optimization:

KL_i = (1/2) ∑_{d=1}^{D} ( μ_{i,d}² + exp(log σ_{i,d}²) − log σ_{i,d}² − 1 ),

where D is the dimension of the latent variable, and i refers to the i-th sample. Noting that the mean enters through an L2 norm, minimizing the KL term pushes each dimension of the mean, μ_d, towards zero while pushing σ² towards 1. This makes the estimated posterior less informative and less dependent on the input data. The problem gets worse when the dimension of the latent variable, D, increases, or when the KL term is multiplied with a coefficient β > 1. Ideally, we want to be able to distinguish between different input samples. This can be achieved by having distinctive means and variances for clusters of samples. This is where MMD might have an advantage over the KL divergence: matching distributions using MMD can match their sample means although their variances might still differ. We can emulate the behaviour of MMD by modifying the KL term. We do so by replacing the per-sample L2 norm of the mean with the absolute value of the aggregate mean. Re-writing it for B samples, we have:

∑_{i=1}^{B} ||μ_i||²₂  →  ∑_{d=1}^{D} | ∑_{i=1}^{B} μ_{i,d} |.

It is important to note that we are taking the absolute value of the sum of sample means. This new formulation results in the aggregate mean of samples being zero (i.e. the same mean as that of the prior distribution), while allowing samples to spread out and enabling the model to encode information about the input data onto z. It should be noted that this new L1 norm of μ can push individual mean estimates to very high values if it is not constrained. To avoid that, the L2 norm of the mean for each sample is clipped by a pre-determined value during optimization. Based on experiments, it is found that clipping the L2 norm of sample means by three times the square root of the latent dimension works well in general, although bigger values might help improve results in tasks such as classification:

||μ_i||₂ ≤ 3 √D.

This method will be referred to as latent clipping for the rest of this work. In addition, the remaining terms in the KL loss can be kept as is, i.e. exp(log σ²) − log σ² − 1, or we can just optimize a subset of them by using either the "log σ²" or the "exp(log σ²) − 1" term, since each method will push log σ² towards zero (i.e. variance towards one). log σ² is chosen in this work since it is simpler. Finally, the µ-VAE objective function can be formulated as:

L_{µ-VAE} = ∑_{i=1}^{B} ∑_{j=1}^{J} (x_{i,j} − x'_{i,j})² + ∑_{d=1}^{D} | (1/B) ∑_{i=1}^{B} μ_{i,d} | + ∑_{i=1}^{B} ∑_{d=1}^{D} | log σ_{i,d}² |,

where the first term is the reconstruction loss, B refers to the batch size since the aggregated mean is computed over batch samples, J refers to the dimension of the data, D refers to the dimension of the latent variable, x is the original input, and x' is the reconstruction. A minimal sketch of this objective is given below.
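The sketch follows the formulation above; the exact placement of the 1/B factor, the treatment of the variance term, and the function and variable names are my reading of the text rather than a verified reference implementation.

```python
import numpy as np

def latent_clip(mu, max_norm):
    """Clip the L2 norm of each sample's mean vector to max_norm (e.g. 3 * sqrt(D))."""
    norms = np.linalg.norm(mu, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-8))
    return mu * scale

def mu_vae_loss(x, x_recon, mu, logvar):
    """mu-VAE objective: reconstruction + |aggregate mean| + |log variance|.

    The per-sample L2 penalty on mu from the standard KL is replaced by the
    absolute value of the batch-aggregated mean, so individual samples may
    spread out while their aggregate stays centred at the prior mean.
    """
    B, D = mu.shape
    recon = np.sum((x - x_recon) ** 2)                   # summed over batch and data dims
    mean_term = np.sum(np.abs(np.sum(mu, axis=0) / B))   # | (1/B) sum_i mu_{i,d} |, summed over d
    var_term = np.sum(np.abs(logvar))                    # pushes log(sigma^2) towards 0
    return recon + mean_term + var_term

# usage sketch: mu = latent_clip(encoder_mu, 3.0 * np.sqrt(latent_dim)) before sampling z
```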
To visualize the implications of latent clipping, a toy VAE model shown in Table 2 in Appendix A is used. Figure 1 compares three cases, in which a VAE model is trained on the MNIST dataset using ReLU, Tanh, and Leaky ReLU activation functions, and the latent layer is visualized to observe the clustering of digits. The three cases are: i) the standard VAE objective with the KL divergence, ii) the standard VAE objective with the KL divergence + latent clipping, and iii) the µ-VAE objective function + latent clipping. Two observations can be made: 1. Latent clipping might help improve the smoothness of the latent space, even in the case of the standard VAE objective, ELBO. 2. The µ-VAE objective function seems to work well.

To test the effectiveness of the µ-VAE objective, a CNN-based VAE model is designed and trained on MNIST and MNIST Fashion using the same hyper-parameters for both datasets. A centered isotropic Gaussian prior, p(z) ∼ N(0, 1.0), is used and the true posterior is approximated as Gaussian with an approximately diagonal covariance. No regularization methods such as dropout, or techniques such as batch normalization, are used, to avoid having any extra influence on the performance and to show the advantages of the new objective function. The model is trained with four different objective functions: i) VAE (ELBO objective), ii) β-VAE with β = 4, iii) µ-VAE#1 s.t. ||μ_sample||₂ ≤ 3√(z_dim), and iv) µ-VAE#2 s.t. ||μ_sample||₂ ≤ 6√(z_dim), where z_dim = 10. Details of the architecture, objective functions, hyper-parameters, and training are described in Appendix B. During training of the models, a simple three-layer fully connected classifier is also trained over the 10-dimensional latent variable to learn to classify data using the features encoded on the latent variable. Classifier parameters are updated while encoder and decoder parameters are frozen and vice versa, so that the classifier has no impact on how information is encoded on the latent variable.

Evaluation of the generative model is done qualitatively, in the form of inspecting the quality and diversity of samples. Posterior collapse is assessed by comparing reconstructions of input data to observe whether the decoder ignores the latent code encoded by the input data, and by comparing the KL divergences obtained for each model. For all three objective functions, the KL divergence is measured using the standard KL formula given above. Moreover, the accuracy of the classifier trained on the latent variable is used as a measure of how well the latent variable represents the data. Higher classification accuracy reflects a better representation of the data and opens the door to using the latent representation for downstream tasks such as classification.

Figure 2 shows training curves for the MNIST Fashion dataset (MNIST can be seen in Appendix C). The new objective function results in lower reconstruction loss, higher KL divergence, and higher classification accuracy. Higher KL divergence and classification accuracy can be interpreted as a sign of learning a more informative latent code. β-VAE performs the worst across all metrics as expected: the β factor encourages the latent code to be less informative, and is known to result in worse reconstruction quality. Inspecting the latent representations of samples obtained using test data for both datasets, VAE seems to be able to distinguish all ten digits, but performs worse on MNIST Fashion. β-VAE pushes samples closer together as expected, which explains why its performance is low in the classification task. µ-VAE, on the other hand, is able to cluster similar samples together in both datasets. Moreover, when the upper bound on μ_sample is increased, it spreads out the clusters of samples, making them easier to distinguish. Hence, the upper bound used in latent clipping can be a knob to control the distance between samples. Table 1 lists the classification accuracy obtained using the test datasets. µ-VAE performs the best as expected, since it is able to push the clusters apart. A higher upper bound on μ_sample results in a higher classification accuracy.
Also, it should be noted that reported accuracy numbers can be improved, but the purpose of this test was to show that new objective function can reliably be used in downstream tasks such as classification. Figure 4 compares sample distributions obtained at each dimension of latent variable using test dataset of MNIST Fashion for each objective function. β-VAE samples follow N prior very closely, and hence ing in the smallest KL divergence term. Sample distributions from the most dimensions of VAE are also close to prior distribution although some of them show a multi-modal behavior. Sample distributions from both µ-VAE#1 & #2 in zero mean, but they are more spread out as expected. Spread is controlled by upper-bound on µ sample. Similar to VAE, some sample distributions show a multi-modal behavior. Figure 5 shows reconstruction of input using test dataset. β-VAE reconstructions are either blurrier, or wrong, the latter of which is a sign of posterior collapse. VAE performs better, and both versions of µ-VAE gives the best reconstruction quality. Figure 6 shows images generated using random samples drawn from multivariate Gaussian, N(0, σ), where σ = 1 is for VAE and β-VAE while it is 3 for µ-VAE since their samples are more spread out (MNIST can be seen in Appendix C). We can observe that some samples generated from µ-VAE models have dark spots. This is because the model is trying to generate texture on these samples. This can also be observed in samples of VAE model, but it is less pronounced. However, samples from β-VAE do not show any such phenomena since the model perhaps learns global structure of shapes while ignoring local features. Failing to capture local structures is a known problem in latent variable models . N(0, σ).From left to right, model (σ): VAE (σ=1), β-VAE (σ=1), µ-VAE#1 (σ=3), and µ-VAE#2 (σ=3). Higger σ is used for µ-VAE models since their samples are more spread out. that most dimensions of latent variable are not very informative. VAE is slightly better. However, both µ-VAE models learn diverse classes of objects across different dimensions. Moreover, they learn different classes on opposite sides of same dimension. This is encouraging since it shows its power to learn rich representations. In this work, a new objective function is proposed to mitigate posterior collapse observed in VAEs. It is shown to give better reconstruction quality and to learn good representations of data in the form of more informative latent codes. A method, named latent clipping, is introduced as a knob to control distance between samples. Samples can be pushed apart for tasks such as classification, or brought closer for smoother transition between different clusters of samples. Unlike prior work, the proposed method is robust to parameter tuning, and does not constraint encoder, or decoder structure. It can be used as a direct replacement for ELBO objective. Moreover, the proposed method is demonstrated to learn representations of data that can work well in downstream tasks such as classification. Applications of µ-VAE objective with more powerful decoders in various settings can be considered as a future work. Optimization: In all experiments, learning rate of 1e-4 and batch size of 64 are used. Adam algorithm with high momentum (β1 = 0.9, β2 = 0.999) is used as optimizer. High momentum is chosen mainly to let most of previous training samples influence the current update step. For reconstruction loss, mean square error, x−x 2, is used for all cases. 
As for initialization, since the model consists of convolutional layers with Leaky ReLU in both the encoder and decoder, Xavier initialization is used. Thus, initial weights are drawn from a Gaussian distribution with standard deviation (stdev) of √(2/N), where N is the number of nodes from the previous layer. For example, for a 3x3 kernel with 32 channels, N = 288, which results in a stdev of 0.083. Objective functions are shown in Table 3, where the µ-VAE objective is written out explicitly to avoid any ambiguity in how batch statistics are computed. Table 4 shows the model architecture as well as the classifier used for all experiments. The model consists of a CNN-based encoder and decoder, while the classifier is a three-layer fully connected neural network. They all use Leaky ReLU activations and a learning rate of 1e-4.
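A small sketch of the initialization rule described above; the helper below simply applies the stated Gaussian rule to convolutional and linear layers and is not taken from the authors' code.

```python
import math
import torch.nn as nn

def init_weights(module):
    # Gaussian init with stdev sqrt(2 / N), N = fan-in of the layer.
    # E.g. a 3x3 kernel over 32 input channels gives N = 288, stdev ~= 0.083.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        fan_in = module.weight[0].numel()   # in_channels * kH * kW (or in_features)
        nn.init.normal_(module.weight, mean=0.0, std=math.sqrt(2.0 / fan_in))
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model.apply(init_weights)  # `model` is whatever encoder/decoder is being built
```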
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgWiaNtwH
This paper proposes a new objective function that replaces the KL term with one emulating the maximum mean discrepancy (MMD) objective.
Likelihood-based generative models are a promising resource to detect out-of-distribution (OOD) inputs which could compromise the robustness or reliability of a machine learning system. However, likelihoods derived from such models have been shown to be problematic for detecting certain types of inputs that significantly differ from training data. In this paper, we pose that this problem is due to the excessive influence that input complexity has in generative models' likelihoods. We report a set of experiments supporting this hypothesis, and use an estimate of input complexity to derive an efficient and parameter-free OOD score, which can be seen as a likelihood-ratio, akin to Bayesian model comparison. We find such score to perform comparably to, or even better than, existing OOD detection approaches under a wide range of data sets, models, model sizes, and complexity estimates. Assessing whether input data is novel or significantly different than the one used in training is critical for real-world machine learning applications. Such data are known as out-of-distribution (OOD) inputs, and detecting them should facilitate safe and reliable model operation. This is particularly necessary for deep neural network classifiers, which can be easily fooled by OOD data . Several approaches have been proposed for OOD detection on top of or within a neural network classifier (; ; ;). Nonetheless, OOD detection is not limited to classification tasks nor to labeled data sets. Two examples of that are novelty detection from an unlabeled data set and next-frame prediction from video sequences. A rather obvious strategy to perform OOD detection in the absence of labels (and even in the presence of them) is to learn a density model M that approximates the true distribution p * (X) of training inputs x ∈ X . Then, if such approximation is good enough, that is, p(x|M) ≈ p * (x), OOD inputs should yield a low likelihood under model M. With complex data like audio or images, this strategy was long thought to be unattainable due to the difficulty of learning a sufficiently good model. However, with current approaches, we start having generative models that are able to learn good approximations of the density conveyed by those complex data. Autoregressive and invertible models such as PixelCNN++ and Glow perform well in this regard and, in addition, can approximate p(x|M) with arbitrary accuracy. Figure 1: Likelihoods from a Glow model trained on CIFAR10. Qualitatively similar are obtained for other generative models and data sets (see also in ; a). trained on CIFAR10, generative models report higher likelihoods for SVHN than for CIFAR10 itself (Fig. 1 ; data descriptions are available in Appendix A). Intriguingly, this behavior is not consistent across data sets, as other ones correctly tend to produce likelihoods lower than the ones of the training data (see the example of TrafficSign in Fig. 1). A number of explanations have been suggested for the root cause of this behavior (; a;) but, to date, a full understanding of the phenomenon remains elusive. In this paper, we shed light to the above phenomenon, showing that likelihoods computed from generative models exhibit a strong bias towards the complexity of the corresponding inputs. We find that qualitatively complex images tend to produce the lowest likelihoods, and that simple images always yield the highest ones. In fact, we show a clear negative correlation between quantitative estimates of complexity and the likelihood of generative models. 
In the second part of the paper, we propose to leverage such estimates of complexity to detect OOD inputs. To do so, we introduce a widely-applicable OOD score for individual inputs that corresponds, conceptually, to a likelihoodratio test statistic. We show that such score turns likelihood-based generative models into practical and effective OOD detectors, with performances comparable to, or even better than the state-of-theart. We base our experiments on an extensive collection of alternatives, including a pool of 12 data sets, two conceptually-different generative models, increasing model sizes, and three variants of complexity estimates. From now on, we shall consider the log-likelihood of an input x given a model M: M (x) = log 2 p(x|M). Following common practice in evaluating generative models, negative log-likelihoods − M will be expressed in bits per dimension , where dimension corresponds to the total size of x (we resize all images to 3×32×32 pixels). Note that the qualitative behavior of log-likelihoods is the same as likelihoods: ideally, OOD inputs should have a low M, while in-distribution data should have a larger M. Most literature compares likelihoods of a given model for a few data sets. However, if we consider several different data sets at once and study their likelihoods, we can get some insight. In Fig. 2, we show the log-likelihood distributions for the considered data sets (Appendix A), computed with a Glow model trained on CIFAR10. We observe that the data set with a higher log-likelihood is Constant, a data set of constant-color images, followed by Omniglot, MNIST, and FashionMNIST; all of those featuring gray-scale images with a large presence of empty black . On the other side of the spectrum, we observe that the data set with a lower log-likelihood is Noise, a data set of uniform random images, followed by TrafficSign and TinyImageNet; both featuring colorful images with non-trivial . Such ordering is perhaps more clear by looking at the average log-likelihood of each data set (Appendix D). If we think about the visual complexity of the images in those data sets, it would seem that log-likelihoods tend to grow when images become simpler and with less information or content. To further confirm the previous observation, we design a controlled experiment where we can set different decreasing levels of image complexity. We train a generative model with some data set, as before, but now compute likelihoods of progressively simpler inputs. Such inputs are obtained by average-pooling the uniform random Noise images by factors of 1, 2, 4, 8, 16, and 32, and re-scaling back the images to the original size by nearest-neighbor up-sampling. Intuitively, a noise image with a pooling size of 1 (no pooling) has the highest complexity, while a noise image with a pooling of 32 (constant-color image) has the lowest complexity. Pooling factors from 2 to 16 then account for intermediate, decreasing levels of complexity. The of the experiment is a progressive growing of the log-likelihood M (Fig. 3). Given that the only difference between data is the pooling factor, we can infer that image complexity plays a major role in generative models' likelihoods. Until now, we have consciously avoided a quantitative definition of complexity. However, to further study the observed phenomenon, and despite the difficulty in quantifying the multiple aspects that affect the complexity of an input (cf.), we have to adopt one. 
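For reference, the bits-per-dimension convention used throughout can be obtained from a model's negative log-likelihood in nats (what most implementations return) as in the following short sketch.

```python
import math

def nll_bits_per_dim(nll_nats, dims=3 * 32 * 32):
    # Convert a negative log-likelihood in nats to bits per dimension,
    # i.e. -log2 p(x|M) / d, with d the total input size (3x32x32 here).
    return nll_nats / (dims * math.log(2.0))
```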
A sensible choice would be to exploit the notion of Kolmogorov complexity which, unfortunately, is noncomputable. In such cases, we have to deal with it by calculating an upper bound using a lossless compression algorithm . Given a set of inputs x coded with the same bit depth, the normalized size of their compressed versions, L(x) (in bits per dimension), can be considered a reasonable estimate of their complexity. That is, given the same coding depth, a highly complex input will require more bits per dimension, while a less complex one will be compressed with fewer bits per dimension. For images, we can use PNG, JPEG2000, or FLIF compressors (Appendix C). For other data types such as audio or text, other lossless compressors should be available to produce a similar estimate. If we study the relation between generative models' likelihoods and our complexity estimates, we observe that there is a clear negative correlation (Fig. 4). Considering all data sets, we find Pearson's correlation coefficients below −0.75 for models trained on FashionMNIST, and below −0.9 for models trained on CIFAR10, independently of the compressor used (Appendix D). Such significant correlations, all of them with infinitesimal p-values, indicate that likelihood-based measures are highly influenced by the complexity of the input image, and that this concept accounts for most of their variance. In fact, such strong correlations suggest that one may replace the computed likelihood values for the negative of the complexity estimate and obtain almost the same (Appendix D). This implies that, in terms of detecting OOD inputs, a complexity estimate would perform as well (or bad) as the likelihoods computed from our generative models. As complexity seems to account for most of the variability in generative models' likelihoods, we propose to compensate for it when testing for possible OOD inputs. Given that both negative loglikelihoods − M (x) and the complexity estimate L(x) are expressed in bits per dimension (Sec. 2), we can express our OOD score as a subtraction between the two: Notice that, since we use negative log-likelihoods, the higher the S, the more OOD the input x will be (see below). Interestingly, S can be interpreted as a likelihood-ratio test statistic. For that, we take the point of view of Bayesian model comparison or minimum description length principle . We can think of a compressor M 0 as a universal model, adjusted for all possible inputs and general enough so that it is not biased towards a particular type of data semantics. Considering the probabilistic model associated with the size of the output produced by the lossless compressor, we and, correspondingly, In Bayesian model comparison, we are interested in comparing the posterior probabilities of different models in light of data X. In our setting, the trained generative model M is a'simpler' version of the universal model M 0, targeted to a specific semantics or data type. With it, one aims to approximate the marginal likelihood (or model evidence) for x ∈ X, which integrates out all model parameters: This integral is intractable, but current generative models can approximate p(x|M) with arbitrary accuracy . Choosing between one or another model is then reduced to a simple likelihood ratio: For uniform priors p(M 0) = p(M) = 1/2, this ratio is reduced to which, using Eq. 2 for the last term, becomes Eq. 1. The ratio S accommodates the Occam's razor principle. 
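As an illustration of the complexity estimate just described, the following sketch computes L(x) with PNG as the lossless compressor (JPEG2000 or FLIF could be substituted); unlike the paper's measurements, this simple version also counts the PNG header bytes.

```python
import cv2
import numpy as np

def complexity_bits_per_dim(img_uint8: np.ndarray) -> float:
    # L(x): length of the losslessly compressed input, in bits, normalized by
    # the number of dimensions d of x. PNG at maximum compression is used here.
    ok, buf = cv2.imencode(".png", img_uint8, [cv2.IMWRITE_PNG_COMPRESSION, 9])
    assert ok, "PNG encoding failed"
    return 8 * len(buf) / img_uint8.size
```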
Consider simple inputs that can be easily compressed by M 0 using a few bits, and that are not present in the training of M. These cases have a high probability under M 0, effectively correcting the abnormal high likelihood given by the learned model M. The same effect will occur with complex inputs that are not present in the training data. In these cases, both likelihoods will be low, but the universal lossless compressor M 0 will predict those better than the learned model M. The two situations will lead to large values of S. In contrast, inputs that belong to the data used to train the generative model M will always be better predicted by M than by M 0, ing in lower values of S. Given a training set X of in-distribution samples and the corresponding scores S(x) for each x ∈ X, we foresee a number of strategies to perform OOD detection in practice for new instances z. The first and more straightforward one is to use S(z) as it is, just as a score, to perform OOD ranking. This can be useful to monitor the top-k, potentially more problematic instances z in a new set of unlabeled data Z. The second strategy is to interpret S(z) as the corresponding Bayes factor in Eq. 3, and directly assign z to be OOD for S(z) > 0, or in-distribution otherwise (cf.). The decision is then taken with stronger evidence for higher absolute values of S(z). A third strategy is to consider the empirical or null distribution of S for the full training set, S(X). We could then choose an appropriate quantile as threshold, adopting the notion of frequentist hypothesis testing (see for instance b). Finally, if ground truth OOD data Y is available, a fourth strategy is to optimize a threshold value for S(z). Using X and Y, we can choose a threshold that targets a desired percentage of false positives or negatives. The choice of a specific strategy will depend on the characteristics of the particular application under consideration. In this work, we prefer to keep our evaluation generic and to not adopt any specific thresholding strategy (that is, we use S directly, as a score). This also allows us to compare with the majority of reported values from the existing literature, which use the AUROC measure (see below). 4 have recently proposed the use of likelihood-ratio tests for OOD detection. They posit that " statistics" (for instance, the number of zeros in the of MNISTlike images) are the source of abnormal likelihoods, and propose to exploit them by learning a model which is trained on random surrogates of input data. Such surrogates are generated according to a Bernoulli distribution, and an L2 regularization term is added to the model, which implies that the approach has two hyper-parameters. Moreover, both the model and the model trained using in-distribution data need to capture the information equally well. In contrast to their method, our method does not require additional training nor extra conditions on a specific model for every type of training data. and Nalisnick et al. (2019b) suggest that typicality is the culprit for likelihoodbased generative models not being able to detect OOD inputs. do not explicitly address typicality, their estimate of the Watanabe-Akaike information criterion using ensembles of generative models performs well in practice. Nalisnick et al. (2019b) propose an explicit test for typicality employing a Monte Carlo estimate of the empirical entropy, which limits their approach to batches of inputs of the same type. 
The works of Høst- and combine the concepts of typicality and minimum description length to perform novelty detection. Although concepts are similar to the ones employed here, their focus is mainly on bit sequences. They consider atypical sequences those that can be described (coded) with fewer bits in itself rather than using the (optimum) code for typical sequences. We find their implementation to rely on strong parametric assumptions, which makes it difficult to generalize to generative or other machine learning models. A number of methods have been proposed to perform OOD detection under a classification-based framework (; ; ; ;). Although achieving promising , these methods do not generally apply to the more general case of non-labeled or self-supervised data. The method of extends to such cases by leveraging generative models, but nonetheless makes use of auxiliary, outlier data to learn to distinguish OOD inputs. We now study how S performs on the OOD detection task. For that, we train a generative model M on the train partition of a given data set and compute scores for such partition and the test partition of a different data set. With both sets of scores, we then calculate the area under the receiver operating characteristic curve (AUROC), which is a common evaluation measure for the OOD detection task and for classification tasks in general. Note that AUROC represents a good performance summary across different score thresholds . First of all, we want to assess the improvement of S over log-likelihoods alone (− M). When considering likelihoods from generative models trained on CIFAR10, the problematic reported by previous works become clearly apparent (Table 1). The unintuitive higher likelihoods for SVHN observed in Sec. 1 now translate into a poor AUROC below 0.1. This not only happens for SVHN, but also for Constant, Omniglot, MNIST, and FashionMNIST data sets, for which we observed consistently higher likelihoods than CIFAR10 in Sec. 2. Likelihoods for the other data sets yield AUROCs above the random baseline of 0.5, but none above 0.67. The only exception is the Noise data set, which is perfectly distinguishable from CIFAR10 using likelihood alone. For completeness, we include the AUROC values when trying to perform OOD with the test partition of CIFAR10. We see those are close to the ideal value of 0.5, showing that, as expected, the reported measures do not generally consider those samples to be OOD. We now look at the AUROCs obtained with S (Table 1). We see that, not only are reversed for less complex datasets like MNIST or SVHN, but also that all AUROCs for the rest of the data sets improve as well. The only exception to the last assertion among all studied combinations is the combination of TinyImageNet with PixelCNN++ and FLIF (see Appendix D for other combinations). In general, we obtain AUROCs above 0.7, with many of them approaching 0.9 or 1. Thus, we can conclude that S clearly improves over likelihoods alone in the OOD detection task, and that S is able to revert the situation with intuitively less complex data sets that were previously yielding a low AUROC. We also study how the training set, the choice of compressor/generative model, or the size of the model affects the performance of S (Appendix D). In terms of models and compressors, we do not observe a large difference between the considered combinations, except for a few isolated cases whose investigation we defer for future work. 
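For concreteness, the score and the AUROC evaluation used in these comparisons can be sketched as follows, assuming per-image negative log-likelihoods and complexity estimates already expressed in bits per dimension (e.g., from the helpers sketched earlier); scikit-learn is used here purely for convenience.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_score(nll_bpd, complexity_bpd):
    # S(x) = negative log-likelihood (bits/dim) minus complexity L(x) (bits/dim).
    # Larger S -> more likely out-of-distribution.
    return np.asarray(nll_bpd) - np.asarray(complexity_bpd)

def auroc(scores_in, scores_out):
    # AUROC with OOD as the positive class: scores for the in-distribution
    # test split vs. scores for the OOD data set.
    labels = np.concatenate([np.zeros(len(scores_in)), np.ones(len(scores_out))])
    scores = np.concatenate([scores_in, scores_out])
    return roc_auc_score(labels, scores)
```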
In terms of model size, we do observe a tendency to provide better discrimination with increasing size. In terms of data sets, we find the OOD detection task to be easier with FashionMNIST than with CIFAR10. We assume that this is due to how easily the generative model can learn and approximate the density conveyed by the data. A similar but less marked trend is also observed for compressors, with better compressors yielding slightly improved AUROCs compared to other, in principle, less powerful ones. A takeaway from all these observations would be that using larger generative models and better compressors will yield a more reliable S and a better AUROC. The conducted experiments support that, but a more in-depth analysis should be carried out to further confirm this hypothesis. Finally, we want to assess how S compares to previous approaches in the literature. For that, we compile a number of reported AUROCs for both classifier- and generative-based approaches and compare them with S. Note that classifier-based approaches, as mentioned in Sec. 1, are less applicable than generative-based ones. In addition, as they exploit label information, they might have an advantage over generative-based approaches in terms of performance (some also exploit external or outlier data; Sec. 4). We observe that S is competitive with both classifier-based and existing generative-based approaches (Table 2). When training with FashionMNIST, S achieves the best scores among all considered approaches. The results with further test sets are also encouraging, with almost all AUROCs approaching 1 (Appendix D). When training with CIFAR10, S achieves similar or better performance than existing approaches. Noticeably, within generative-based approaches, S is only outperformed on two occasions by the same approach, WAIC, which uses ensembles of generative models (Sec. 4). On the one hand, it would be interesting to see how S could perform when using ensembles of models and compressors to produce better estimates of the negative log-likelihood and L, respectively. On the other hand, however, the use of a single generative model together with a single fast compression library makes S an efficient alternative compared to WAIC and some other existing approaches. It is also worth noting that many existing approaches have a number of hyper-parameters that need to be tuned, sometimes with the help of outlier or additional data. In contrast, S is a parameter-free measure, which makes it easy to use and deploy.

Table 2: AUROC values for S compared with values reported in the literature for classifier-based and generative-based approaches. Results for the Typicality test correspond to using batches of 2 samples of the same type.

Trained on:                            FashionMNIST          CIFAR10
OOD data:                              MNIST    Omniglot     SVHN    CelebA   CIFAR100
Classifier-based approaches
  ODIN                                 0.766    0.796        1.000   0.997    -
  Outlier exposure                     -        -            0.758   -        0.685
Generative-based approaches
  Typicality test                      0.140    -            0.420   -        -
  Likelihood-ratio                     0.997    -            0.912   -        -
  S using Glow and FLIF (ours)         0.998    1.000        0.950   0.863    0.736
  S using PixelCNN++ and FLIF (ours)   0.967    1.000        0.929   0.776    0.535

We illustrate a fundamental insight with regard to the use of generative models' likelihoods for the task of detecting OOD data. We show that input complexity has a strong effect on those likelihoods, and posit that it is the main culprit for the puzzling results obtained when using generative models' likelihoods for OOD detection. In addition, we show that an estimate of input complexity can be used to compensate standard negative log-likelihoods in order to produce an efficient and reliable OOD score. We also offer an interpretation of our score as a likelihood-ratio akin to Bayesian model comparison.
Such score performs comparably to, or even better than several state-of-the-art approaches, with that are consistent across a range of data sets, models, model sizes, and compression algorithms. The proposed score has no hyper-parameters besides the definition of a generative model and a compression algorithm, which makes it easy to employ in a variety of practical problems and situations. In our experiments, we employ well-known, publicly-available data sets. In addition to those, and to facilitate a better understanding of the problem, we develop another two self-created sets of synthetic images: Noise and Constant images. The Noise data set is created by uniformly randomly sampling a tensor of 3×32×32 and quantizing the to 8 bits. The Constant data set is created similarly, but using a tensor of 3×1×1 and repeating the values along the last two dimensions to obtain a size of 3×32×32. The complete list of data sets is available in Table 3. In the case of data sets with different variations, such as CelebA or FaceScrub, which have both plain and aligned versions of the faces, we select the aligned versions. Note that, for models trained on CIFAR10, it is important to notice the overlap of certain classes between that and other sets, namely TinyImageNet and CIFAR100 (they overlap, for instance, in classes of certain animals or vehicles). Therefore, strictly speaking, such data sets are not entirely OOD, at least semantically. In order to split the data between train, validation, and test, we follow two simple rules: if the data set contains some predefined train and test splits, we respect them and create a validation split using a random 10% of the training data; if no predefined splits are available, we create them by randomly assigning 80% of the data to the train split and 10% to both validation and test splits. In order to create consistent input sizes for the generative models, we work with 3-channel images of size 32×32. For those data sets which do not match this configuration, we follow a classic bi-linear resizing strategy and, to simulate the three color components from a gray-scale image, we triplicate the channel dimension. The of this paper are obtained using two generative models of different nature: one autoregressive model and one invertible model. As autoregressive model we choose PixelCNN++ , which has been shown to obtain very good in terms of likelihood for image data. As invertible model we choose Glow , which is also capable of inferring exact log-likelihoods using large stacks of bijective transformations. We implement the Glow model using the default configuration of the original implementation 1, except that we zeropad and do not use ActNorm inside the coupling network. The model has 3 blocks of 32 flows, using an affine coupling with an squeezing factor of 2. As for PixelCNN++, we set 5 residual blocks per stage, with 80 filters and 10 logistic components in the mixture. The non-linearity of the residual layers corresponds to an exponential linear unit 2. We train both Glow and PixelCNN++ using the Adam optimizer with an initial learning rate of 10 −4. We reduce this initial value by a factor of 1/5 every time that the validation loss does not decrease during 5 consecutive epochs. The training finishes when the learning rate is reduced by factor of 1/100. The batch size of both models is set to 50. The final model weights are the ones yielding the best validation loss. 
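The training procedure just described can be sketched as follows; `train_one_epoch` and `evaluate` are placeholders for the usual per-epoch routines, and the stopping rule is an approximation of the stated 1/100 criterion.

```python
import copy
import torch

def fit(model, train_one_epoch, evaluate, max_epochs=1000, base_lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
    # Cut the lr to 1/5 whenever validation loss has not improved for 5 epochs.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.2, patience=5)
    best_loss, best_state = float("inf"), None
    for _ in range(max_epochs):
        train_one_epoch(model, optimizer)
        val_loss = evaluate(model)
        scheduler.step(val_loss)
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
        if optimizer.param_groups[0]["lr"] <= base_lr / 100:  # lr cut ~100x
            break
    model.load_state_dict(best_state)  # keep the weights with best val loss
    return model
```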
The likelihoods obtained in validation with both Glow and PixelCNN++ match the ones reported in the literature for CIFAR10. We also make sure that the generated images are of comparable quality to the ones shown in those references. We use PyTorch version 1.2.0. All models have been trained with a single NVIDIA GeForce GTX 1080Ti GPU. Training takes some hours under that setting. We explore three different options to compress input images. As a mandatory condition, they need to provide lossless compression. The first format that we consider is PNG, an old classic format which is globally used and well-known. We use OpenCV 3 to compress from raw NumPy matrices, with compression set to the maximum possible level. The second format that we consider is JPEG2000. Although not as widely known as the previous one, it is a more modern format with several new-generation features such as progressive decoding. Again, we use the default OpenCV implementation to obtain the size of an image under this compression algorithm. The third format that we consider is FLIF, the most modern algorithm of the list. According to its website, it promises to generate up to 53% smaller files than JPEG2000. We use the publicly-available compressor implementation from their website. We do not include header sizes in the measurement of the resulting bits per dimension. To compute our complexity estimate L(x), we compress the input x with one of the compressors C above. With that, we obtain a string of bits C(x). Its length, |C(x)|, is normalized by the size or dimensionality of x, which we denote by d, to obtain the complexity estimate: L(x) = |C(x)|/d. We also experimented with an improved version of L that takes the minimum over several compression schemes, L̂(x) = min_i L_i(x), where each L_i corresponds to a different compressor. This forces S to always work with the best compressor for every x. In our case, as FLIF was almost always the best compressor, we did not observe a clear difference between using L or L̂. However, in cases where it is not clear which compressor to use, or where there is no clear best compressor, L̂ could be of use. The additional results mentioned in the main paper are the following:
• In Table 4, we report the average log-likelihood for every data set. We sort data sets from highest to lowest log-likelihood.
• In Table 5, we report the global Pearson's correlation coefficient for different models, train sets, and compressors. Due to the large sample size, Scipy version 1.2.1 reports a p-value of 0 in all cases.
• In Table 6, we report the AUROC values obtained from log-likelihoods, complexity estimates L, a simple two-tail test T that takes into account both lower and higher log-likelihoods, T = |ℓ_M(x) − ℓ̄_M| where ℓ̄_M denotes the average training log-likelihood, and the proposed score S.
• In Table 7, we report the AUROC values obtained from S across different Glow model sizes, using a PNG compressor.
• In Table 8, we report the AUROC values obtained from S across different data sets, models, and compressors.
Table 6: AUROC values using the negative log-likelihood, the complexity measure L, a simple two-tail test T (see text), and our score S for Glow and PixelCNN++ models trained on CIFAR10 and using a PNG compressor. Qualitatively similar results were obtained for FashionMNIST and other compressors.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SyxIWpVYvr
We posit that generative models' likelihoods are excessively influenced by the input's complexity, and propose a way to compensate for it when detecting out-of-distribution inputs.
Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with ``long-term memory'' of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance. Stochastic gradient descent (SGD) is the dominant method to train deep networks today. This method iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the loss evaluated on a minibatch. In particular, variants of SGD that scale coordinates of the gradient by square roots of some form of averaging of the squared coordinates in the past gradients have been particularly successful, because they automatically adjust the learning rate on a per-feature basis. The first popular algorithm in this line of research is ADAGRAD BID2 BID5, which can achieve significantly better performance compared to vanilla SGD when the gradients are sparse, or in general small. Although ADAGRAD works well for sparse settings, its performance has been observed to deteriorate in settings where the loss functions are nonconvex and gradients are dense due to rapid decay of the learning rate in these settings since it uses all the past gradients in the update. This problem is especially exacerbated in high dimensional problems arising in deep learning. To tackle this issue, several variants of ADAGRAD, such as RMSPROP BID7, ADAM BID3, ADADELTA , NADAM BID1, etc, have been proposed which mitigate the rapid decay of the learning rate using the exponential moving averages of squared past gradients, essentially limiting the reliance of the update to only the past few gradients. While these algorithms have been successfully employed in several practical applications, they have also been observed to not converge in some other settings. It has been typically observed that in these settings some minibatches provide large gradients but only quite rarely, and while these large gradients are quite informative, their influence dies out rather quickly due to the exponential averaging, thus leading to poor convergence. In this paper, we analyze this situation in detail. We rigorously prove that the intuition conveyed in the above paragraph is indeed correct; that limiting the reliance of the update on essentially only the past few gradients can indeed cause significant convergence issues. In particular, we make the following key contributions:• We elucidate how the exponential moving average in the RMSPROP and ADAM algorithms can cause non-convergence by providing an example of simple convex optimization prob-lem where RMSPROP and ADAM provably do not converge to an optimal solution. 
Our analysis easily extends to other algorithms using exponential moving averages such as ADADELTA and NADAM as well, but we omit this for the sake of clarity. In fact, the analysis is flexible enough to extend to other algorithms that employ averaging squared gradients over essentially a fixed size window (for exponential moving averages, the influences of gradients beyond a fixed window size becomes negligibly small) in the immediate past. We omit the general analysis in this paper for the sake of clarity.• The above indicates that in order to have guaranteed convergence the optimization algorithm must have "long-term memory" of past gradients. Specifically, we point out a problem with the proof of convergence of the ADAM algorithm given by BID3. To resolve this issue, we propose new variants of ADAM which rely on long-term memory of past gradients, but can be implemented in the same time and space requirements as the original ADAM algorithm. We provide a convergence analysis for the new variants in the convex setting, based on the analysis of BID3, and show a datadependent regret bound similar to the one in ADAGRAD.• We provide a preliminary empirical study of one of the variants we proposed and show that it either performs similarly, or better, on some commonly used problems in machine learning. Notation. We use S is defined as arg min x∈F A 1/2 (x − y) for y ∈ R d. Finally, we say F has bounded diameter D ∞ if x − y ∞ ≤ D ∞ for all x, y ∈ F. A flexible framework to analyze iterative optimization methods is the online optimization problem in the full information feedback setting. In this online setup, at each time step t, the optimization algorithm picks a point (i.e. the parameters of the model to be learned) x t ∈ F, where F ∈ R d is the feasible set of points. A loss function f t (to be interpreted as the loss of the model with the chosen parameters in the next minibatch) is then revealed, and the algorithm incurs loss f t (x t). The algorithm's regret at the end of T rounds of this process is given by DISPLAYFORM0. Throughout this paper, we assume that the feasible set F has bounded diameter and ∇f t (x) ∞ is bounded for all t ∈ [T] and x ∈ F.Our aim to is to devise an algorithm that ensures R T = o(T), which implies that on average, the model's performance converges to the optimal one. The simplest algorithm for this setting is the standard online gradient descent algorithm , which moves the point x t in the opposite direction of the gradient g t = ∇f t (x t) while maintaining the feasibility by projecting onto the set F via the update rule x t+1 = Π F (x t − α t g t), where Π F (y) denotes the projection of y ∈ R d onto the set F i.e., Π F (y) = min x∈F x − y, and α t is typically set to α/ √ t for some constant α. The aforementioned online learning problem is closely related to the stochastic optimization problem: min x∈F E z [f (x, z)], popularly referred to as empirical risk minimization (ERM), where z is a training example drawn training sample over which a model with parameters x is to be learned, and f (x, z) is the loss of the model with parameters x on the sample z. In particular, an online optimization algorithm with vanishing average regret yields a stochastic optimization algorithm for the ERM problem . Thus, we use online gradient descent and stochastic gradient descent (SGD) synonymously. Generic adaptive methods setup. 
We now provide a framework of adaptive methods that gives us insights into the differences between different adaptive methods and is useful for understanding the flaws in a few popular adaptive methods. Algorithm 1 provides a generic adaptive framework that encapsulates many popular adaptive methods. Note the algorithm is still abstract because the Input: x1 ∈ F, step size {αt > 0} T t=1, sequence of functions {φt, ψt} T t=1for t = 1 to T do gt = ∇ft(xt) mt = φt(g1, . . ., gt) and Vt = ψt(g1, . . ., gt) DISPLAYFORM0 "averaging" functions φ t and ψ t have not been specified. Here φ t: DISPLAYFORM1 For ease of exposition, we refer to α t as step size and α t V −1/2 t as learning rate of the algorithm and furthermore, restrict ourselves to diagonal variants of adaptive methods encapsulated by Algorithm 1 where V t = diag(v t). We first observe that standard stochastic gradient algorithm falls in this framework by using: DISPLAYFORM2 and DISPLAYFORM3. While the decreasing step size is required for convergence, such an aggressive decay of learning rate typically translates into poor empirical performance. The key idea of adaptive methods is to choose averaging functions appropriately so as to entail good convergence. For instance, the first adaptive method ADAGRAD BID2, which propelled the research on adaptive methods, uses the following averaging functions: DISPLAYFORM4 and step size α t = α/ √ t for all t ∈ [T]. In contrast to a learning rate of α/ √ t in SGD, such a setting effectively implies a modest learning rate decay of DISPLAYFORM5 When the gradients are sparse, this can potentially lead to huge gains in terms of convergence (see BID2). These gains have also been observed in practice for even few non-sparse settings. Adaptive methods based on Exponential Moving Averages. Exponential moving average variants of ADAGRAD are popular in the deep learning community. RMSPROP, ADAM, NADAM, and ADADELTA are some prominent algorithms that fall in this category. The key difference is to use an exponential moving average as function ψ t instead of the simple average function used in ADAGRAD. ADAM 1, a particularly popular variant, uses the following averaging functions: DISPLAYFORM6 for some β 1, β 2 ∈. This update can alternatively be stated by the following simple recursion: DISPLAYFORM7 and m 0,i = 0 and v 0,i = 0 for all i ∈ [d]. and t ∈ [T]. A value of β 1 = 0.9 and β 2 = 0.999 is typically recommended in practice. We note the additional projection operation in Algorithm 1 in comparison to ADAM. When F = R d, the projection operation is an identity operation and this corresponds to the algorithm in BID3. For theoretical analysis, one requires α t = 1/ √ t for t ∈ [T], although, a more aggressive choice of constant step size seems to work well in practice. RMSPROP, which appeared in an earlier unpublished work BID7 is essentially a variant of ADAM with β 1 = 0. In practice, especially in deep learning applications, the momentum term arising due to non-zero β 1 appears to significantly boost the performance. We will mainly focus on ADAM algorithm due to this generality but our arguments also apply to RMSPROP and other algorithms such as ADADELTA, NADAM. With the problem setup in the previous section, we discuss fundamental flaw in the current exponential moving average methods like ADAM. We show that ADAM can fail to converge to an optimal solution even in simple one-dimensional convex settings. 
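For concreteness, the recursion above corresponds to the following one-dimensional sketch of a single projected ADAM step (without the bias correction and the epsilon term used in practice, matching the variant analyzed here).

```python
import numpy as np

def adam_step(x, g, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, lo=-1.0, hi=1.0):
    # Exponential moving averages of the gradient and the squared gradient,
    # followed by the scaled update and projection onto F = [lo, hi].
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    x = x - (alpha / np.sqrt(t)) * m / np.sqrt(v)
    return float(np.clip(x, lo, hi)), m, v
```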
These examples of non-convergence contradict the claim of convergence in BID3, and the main issue lies in the following quantity of interest: DISPLAYFORM0 This quantity essentially measures the change in the inverse of learning rate of the adaptive method with respect to time. One key observation is that for SGD and ADAGRAD, Γ t 0 for all t ∈ [T]. This simply follows from update rules of SGD and ADAGRAD in the previous section. In particular, update rules for these algorithms lead to "non-increasing" learning rates. However, this is not necessarily the case for exponential moving average variants like ADAM and RMSPROP i.e., Γ t can potentially be indefinite for t ∈ [T]. We show that this violation of positive definiteness can lead to undesirable convergence behavior for ADAM and RMSPROP. Consider the following simple sequence of linear functions for F = [−1, 1]: DISPLAYFORM1 where C > 2. For this function sequence, it is easy to see that the point x = −1 provides the minimum regret. Suppose β 1 = 0 and β 2 = 1/(1 + C 2). We show that ADAM converges to a highly suboptimal solution of x = +1 for this setting. Intuitively, the reasoning is as follows. The algorithm obtains the large gradient C once every 3 steps, and while the other 2 steps it observes the gradient −1, which moves the algorithm in the wrong direction. The large gradient C is unable to counteract this effect since it is scaled down by a factor of almost C for the given value of β 2, and hence the algorithm converges to 1 rather than −1. We formalize this intuition in the below. Theorem 1. There is an online convex optimization problem where ADAM has non-zero average regret i.e., R T /T 0 as T → ∞.We relegate all proofs to the appendix. A few remarks are in order. One might wonder if adding a small constant in the denominator of the update helps in circumventing this problem i.e., the update for ADAM in Algorithm 1 ofx t+1 is modified as follows: DISPLAYFORM2 The algorithm in BID3 uses such an update in practice, although their analysis does not. In practice, selection of the parameter appears to be critical for the performance of the algorithm. However, we show that for any constant > 0, there exists an online optimization setting where, again, ADAM has non-zero average regret asymptotically (see Theorem 6 in Section F of the appendix).The above examples of non-convergence are catastrophic insofar that ADAM and RMSPROP converge to a point that is worst amongst all points in the set [−1, 1]. Note that above example also holds for constant step size α t = α. Also note that classic SGD and ADAGRAD do not suffer from this problem and for these algorithms, average regret asymptotically goes to 0. This problem is especially aggravated in high dimensional settings and when the variance of the gradients with respect to time is large. This example also provides intuition for why large β 2 is advisable while using ADAM algorithm, and indeed in practice using large β 2 helps. However the following shows that for any constant β 1 and β 2 with β 1 < √ β 2, we can design an example where ADAM has non-zero average rate asymptotically. Theorem 2. For any constant β 1, β 2 ∈ such that β 1 < √ β 2, there is an online convex optimization problem where ADAM has non-zero average regret i.e., R T /T 0 as T → ∞.The above show that with constant β 1 and β 2, momentum or regularization via will not help in convergence of the algorithm to the optimal solution. 
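The construction of Theorem 1 is easy to check numerically; the following sketch runs the β1 = 0 variant on the sequence above, with C and the step size chosen as illustrative values satisfying the stated conditions.

```python
import numpy as np

def simulate_theorem1(C=10.0, steps=300_000, alpha=0.1):
    # Adam with beta1 = 0 and beta2 = 1/(1 + C^2) on f_t(x) = C*x if t % 3 == 1,
    # else -x, over F = [-1, 1]. The optimum is x = -1, yet the iterate keeps
    # returning to +1, so the average regret does not vanish.
    beta2 = 1.0 / (1.0 + C * C)
    x, v = 1.0, 0.0
    for t in range(1, steps + 1):
        g = C if t % 3 == 1 else -1.0
        v = beta2 * v + (1.0 - beta2) * g * g            # beta1 = 0, so m_t = g_t
        x = float(np.clip(x - (alpha / np.sqrt(t)) * g / np.sqrt(v), -1.0, 1.0))
    return x                                             # ends at (or near) +1

# print(simulate_theorem1())   # ~1.0 rather than the optimal -1.0
```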
Note that the condition β 1 < √ β 2 is benign and is typically satisfied in the parameter settings used in practice. Furthermore, such condition is assumed in convergence proof of BID3. We can strengthen this by providing a similar example of non-convergence even in the easier stochastic optimization setting: DISPLAYFORM3 end for Theorem 3. For any constant β 1, β 2 ∈ such that β 1 < √ β 2, there is a stochastic convex optimization problem for which ADAM does not converge to the optimal solution. These have important consequences insofar that one has to use "problem-dependent", β 1 and β 2 in order to avoid bad convergence behavior. In high-dimensional problems, this typically amounts to using, unlike the update in Equation, a different, β 1 and β 2 for each dimension. However, this defeats the purpose of adaptive methods since it requires tuning a large set of parameters. We would also like to emphasize that while the example of non-convergence is carefully constructed to demonstrate the problems in ADAM, it is not unrealistic to imagine scenarios where such an issue can at the very least slow down convergence. We end this section with the following important remark. While the stated above use constant β 1 and β 2, the analysis of ADAM in BID3 actually relies on decreasing β 1 over time. It is quite easy to extend our examples to the case where β 1 is decreased over time, since the critical parameter is β 2 rather than β 1, and as long as β 2 is bounded away from 1, our analysis goes through. Thus for the sake of clarity, in this paper we only prove non-convergence of ADAM in the setting where β 1 is held constant. In this section, we develop a new principled exponential moving average variant and provide its convergence analysis. Our aim is to devise a new strategy with guaranteed convergence while preserving the practical benefits of ADAM and RMSPROP. To understand the design of our algorithms, let us revisit the quantity Γ t in. For ADAM and RMSPROP, this quantity can potentially be negative. The proof in the original paper of ADAM erroneously assumes that Γ t is positive semi-definite and is hence, incorrect (refer to Appendix D for more details). For the first part, we modify these algorithms to satisfy this additional constraint. Later on, we also explore an alternative approach where Γ t can be made positive semi-definite by using values of β 1 and β 2 that change with t. AMSGRAD uses a smaller learning rate in comparison to ADAM and yet incorporates the intuition of slowly decaying the effect of past gradients on the learning rate as long as Γ t is positive semidefinite. Algorithm 2 presents the pseudocode for the algorithm. The key difference of AMSGRAD with ADAM is that it maintains the maximum of all v t until the present time step and uses this maximum value for normalizing the running average of the gradient instead of v t in ADAM. By doing this, AMSGRAD in a non-increasing step size and avoids the pitfalls of ADAM and RMSPROP i.e., Γ t 0 for all t ∈ [T] even with constant β 2. Also, in Algorithm 2, one typically uses a constant β 1t in practice (although, the proof requires a decreasing schedule for proving convergence of the algorithm).To gain more intuition for the updates of AMSGRAD, it is instructive to compare its update with ADAM and ADAGRAD. 
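As a concrete reference for that comparison, one AMSGRAD step in the scalar case can be sketched as follows (with constant β1, as typically used in practice).

```python
import numpy as np

def amsgrad_step(x, g, m, v, v_hat, t, alpha=0.1, beta1=0.9, beta2=0.999,
                 lo=-1.0, hi=1.0):
    # Identical to the Adam step except that the running maximum v_hat of v_t
    # scales the update, so the effective learning rate never increases.
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    v_hat = max(v_hat, v)                       # the key difference from Adam
    x = x - (alpha / np.sqrt(t)) * m / np.sqrt(v_hat)
    return float(np.clip(x, lo, hi)), m, v, v_hat
```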
Suppose at particular time step t and coordinate i ∈ [d], we have v t−1,i > g 2 t,i > 0, then ADAM aggressively increases the learning rate, however, as we have seen in the previous section, this can be detrimental to the overall performance of the algorithm. On the other hand, ADAGRAD slightly decreases the learning rate, which often leads to poor performance in practice since such an accumulation of gradients over a large time period can significantly decrease the learning rate. In contrast, AMSGRAD neither increases nor decreases the learning rate and furthermore, decreases v t which can potentially lead to non-decreasing learning rate even if gradient is large in the future iterations. For rest of the paper, we use g 1:t = [g 1 . . . g t] to denote the matrix obtained by concatenating the gradient sequence. We prove the following key for AMSGRAD.Theorem 4. Let {x t} and {v t} be the sequences obtained from Algorithm 2, DISPLAYFORM0 and x ∈ F. For x t generated using the AMSGRAD (Algorithm 2), we have the following bound on the regret DISPLAYFORM1 The following falls as an immediate corollary of the above . Corollary 1. Suppose β 1t = β 1 λ t−1 in Theorem 4, then we have DISPLAYFORM2 The above bound can be considerably better than O(BID2 . Furthermore, in Theorem 4, one can use a much more modest momentum decay of β 1t = β 1 /t and still ensure a regret of O( √ T). We would also like to point out that one could consider taking a simple average of all the previous values of v t instead of their maximum. The ing algorithm is very similar to ADAGRAD except for normalization with smoothed gradients rather than actual gradients and can be shown to have similar convergence as ADAGRAD. DISPLAYFORM3 In this section, we present empirical on both synthetic and real-world datasets. For our experiments, we study the problem of multiclass classification using logistic regression and neural networks, representing convex and nonconvex settings, respectively. Synthetic Experiments: To demonstrate the convergence issue of ADAM, we first consider the following simple convex setting inspired from our examples of non-convergence: DISPLAYFORM0 with the constraint set F = [−1, 1]. We first observe that, similar to the examples of nonconvergence we have considered, the optimal solution is x = −1; thus, for convergence, we expect the algorithms to converge to x = −1. For this sequence of functions, we investigate the regret and the value of the iterate x t for ADAM and AMSGRAD. To enable fair comparison, we set β 1 = 0.9 and β 2 = 0.99 for ADAM and AMSGRAD algorithm, which are typically the parameters settings used for ADAM in practice. FIG1 shows the average regret (R t /t) and value of the iterate (x t) for this problem. We first note that the average regret of ADAM does not converge to 0 with increasing t. Furthermore, its iterates x t converge to x = 1, which unfortunately has the largest regret amongst all points in the domain. On the other hand, the average regret of AMSGRAD converges to 0 and its iterate converges to the optimal solution. FIG1 also shows the stochastic optimization setting: DISPLAYFORM1, with probability 0.01 −10x, otherwise. Similar to the aforementioned online setting, the optimal solution for this problem is x = −1. Again, we see that the iterate x t of ADAM converges to the highly suboptimal solution x = 1.Logistic Regression: To investigate the performance of the algorithm on convex problems, we compare AMSGRAD with ADAM on logistic regression problem. 
We use MNIST dataset for this experiment, the classification is based on 784 dimensional image vector to one of the 10 class labels. The step size parameter α t is set to α/ √ t for both ADAM and AMSGRAD in for our experiments, consistent with the theory. We use a minibatch version of these algorithms with minibatch size set to 128. We set β 1 = 0.9 and β 2 is chosen from the set {0.99, 0.999}, but they are fixed throughout the experiment. The parameters α and β 2 are chosen by grid search. We report the train and test loss with respect to iterations in FIG2. We can see that AMSGRAD performs better than ADAM with respect to both train and test loss. We also observed that AMSGRAD is relatively more robust to parameter changes in comparison to ADAM.Neural Networks: For our first experiment, we trained a simple 1-hidden fully connected layer neural network for the multiclass classification problem on MNIST. Similar to the previous experiment, we use β 1 = 0.9 and β 2 is chosen from {0.99, 0.999}. We use a fully connected 100 rectified linear units (ReLU) as the hidden layer for this experiment. Furthermore, we use constant α t = α throughout all our experiments on neural networks. Such a parameter setting choice of ADAM is consistent with the ones typically used in the deep learning community for training neural networks. A grid search is used to determine parameters that provides the best performance for the algorithm. Finally, we consider the multiclass classification problem on the standard CIFAR-10 dataset, which consists of 60,000 labeled examples of 32 × 32 images. We use CIFARNET, a convolutional neural network (CNN) with several layers of convolution, pooling and non-linear units, for training a multiclass classifer for this problem. In particular, this architecture has 2 convolutional layers with 64 channels and kernel size of 6 × 6 followed by 2 fully connected layers of size 384 and 192. The network uses 2 × 2 max pooling and layer response normalization between the convolutional layers BID4. A dropout layer with keep probability of 0.5 is applied in between the fully connected layers BID6. The minibatch size is also set to 128 similar to previous experiments. The for this problem are reported in FIG2. The parameters for ADAM and AMSGRAD are selected in a way similar to the previous experiments. We can see that AMSGRAD performs considerably better than ADAM on train loss and accuracy. Furthermore, this performance gain also translates into good performance on test loss. An alternative approach is to use an increasing schedule of β 2 in ADAM. This approach, unlike Algorithm 2 does not require changing the structure of ADAM but rather uses a non-constant β 1 and β 2. The pseudocode for the algorithm, ADAMNC, is provided in the appendix (Algorithm 3). We show that by appropriate selection of β 1t and β 2t, we can achieve good convergence rates. Theorem 5. Let {x t} and {v t} be the sequences obtained from Algorithm 3, α t = α/ √ t, β 1 = β 11 and β 1t ≤ β 1 for all t ∈ [T]. Assume that F has bounded diameter D ∞ and ∇f t (x) ∞ ≤ G ∞ for all t ∈ [T] and x ∈ F. Furthermore, let {β 2t} be such that the following conditions are satisfied: DISPLAYFORM0 Then for x t generated using the ADAMNC (Algorithm 3), we have the following bound on the regret DISPLAYFORM1 The above assumes selection of {(α t, β 2t)} such that Γ t 0 for all t ∈ {2, · · ·, T}. However, one can generalize the to deal with the case where this constraint is violated as long as the violation is not too large or frequent. 
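For intuition about ADAMNC, the schedule β2t = 1 − 1/t analyzed in the corollary that follows makes v_t a plain running average of past squared gradients, i.e. a momentum variant of ADAGRAD; a one-dimensional sketch is given below, with the β1t decay omitted for simplicity.

```python
import numpy as np

def adamnc_step(x, g, m, sum_sq, t, alpha=0.1, beta1=0.9, lo=-1.0, hi=1.0):
    # With beta2_t = 1 - 1/t, the second-moment estimate is the plain average
    # of all past squared gradients: v_t = (1/t) * sum_{j<=t} g_j^2.
    m = beta1 * m + (1.0 - beta1) * g
    sum_sq = sum_sq + g * g
    v = sum_sq / t
    x = x - (alpha / np.sqrt(t)) * m / np.sqrt(v)
    return float(np.clip(x, lo, hi)), m, sum_sq
```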
Following is an immediate consequence of the above . Corollary 2. Suppose β 1t = β 1 λ t−1 and β 2t = 1 − 1/t in Theorem 5, then we have DISPLAYFORM2 The above corollary follows from a trivial fact that v t,i = t j=1 g 2 j,i /t for all i ∈ [d] when β 2t = 1 − 1/t. This corollary is interesting insofar that such a parameter setting effectively yields a momentum based variant of ADAGRAD. Similar to ADAGRAD, the regret is data-dependent and can be considerably better than O(BID2 . It is easy to generalize this for setting similar settings of β 2t . Similar to Corollary 1, one can use a more modest decay of β 1t = β 1 /t and still ensure a data-dependent regret of O( √ T). DISPLAYFORM3 In this paper, we study exponential moving variants of ADAGRAD and identify an important flaw in these algorithms which can lead to undesirable convergence behavior. We demonstrate these problems through carefully constructed examples where RMSPROP and ADAM converge to highly suboptimal solutions. In general, any algorithm that relies on an essentially fixed sized window of past gradients to scale the gradient updates will suffer from this problem. We proposed fixes to this problem by slightly modifying the algorithms, essentially endowing the algorithms with a long-term memory of past gradients. These fixes retain the good practical performance of the original algorithms, and in some cases actually show improvements. The primary goal of this paper is to highlight the problems with popular exponential moving average variants of ADAGRAD from a theoretical perspective. RMSPROP and ADAM have been immensely successful in development of several state-of-the-art solutions for a wide range of problems. Thus, it is important to understand their behavior in a rigorous manner and be aware of potential pitfalls while using them in practice. We believe this paper is a first step in this direction and suggests good design principles for faster and better stochastic optimization. A PROOF OF THEOREM 1Proof. We consider the setting where f t are linear functions and F = [−1, 1]. In particular, we define the following function sequence: DISPLAYFORM0 where C ≥ 2. For this function sequence, it is easy to see that the point x = −1 provides the minimum regret. Without loss of generality, assume that the initial point is x 1 = 1. This can be assumed without any loss of generality because for any choice of initial point, we can always translate the coordinate system such that the initial point is x 1 = 1 in the new coordinate system and then choose the sequence of functions as above in the new coordinate system. Also, since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. Consider the execution of ADAM algorithm for this sequence of functions with DISPLAYFORM1 Note that since gradients of these functions are bounded, F has bounded L ∞ diameter and β 2 1 / √ β 2 < 1. Hence, the conditions on the parameters required for ADAM are satisfied (refer to BID3 for more details).Our main claim is that for iterates {x t} ∞ t=1 arising from the updates of ADAM, we have x t > 0 for all t ∈ N and furthermore, x 3t+1 = 1 for all t ∈ N ∪ {0}. For proving this, we resort to the principle of mathematical induction. Since x 1 = 1, both the aforementioned conditions hold for the base case. Suppose for some t ∈ N ∪ {0}, we have x i > 0 for all i ∈ [3t + 1] and x 3t+1 = 1. Our aim is to prove that x 3t+2 and x 3t+3 are positive and x 3t+4 = 1. 
We first observe that the gradients have the following form: DISPLAYFORM2 th update of ADAM in Equation, we obtain DISPLAYFORM3 The equality follows from the induction hypothesis. We observe the following: αC DISPLAYFORM4 The second inequality follows from the step size choice that α < √ 1 − β 2. Therefore, we have 0 <x 3t+2 < 1 and hence x 3t+2 =x 3t+2 > 0. Furthermore, after the (3t + 2) th and (3t + 3) th updates of ADAM in Equation FORMULA8, we have the following: DISPLAYFORM5 Since x 3t+2 > 0, it is easy to see that x 3t+3 > 0. To complete the proof, we need to show that x 3t+4 = 1. In order to prove this claim, we show thatx 3t+4 ≥ 1, which readily translates to x 3t+4 = 1 because x 3t+4 = Π F (x 3t+4) and F = [−1, 1] here Π F is the simple Euclidean projection (note that in one-dimension, Π F, √ Vt = Π F). We observe the following: DISPLAYFORM6 The above equality is due to the fact thatx 3t+3 > 0 and property of projection operation onto the set F = [−1, 1]. We consider the following two cases:1. Supposex 3t+3 ≥ 1, then it is easy to see from the above equality thatx 3t+4 > 1.2. Supposex 3t+3 < 1, then we have the following: DISPLAYFORM7 The third equality is due to the fact that x 3t+2 =x 3t+2. Thus, to provex 3t+4 > 1, it is enough to the prove: DISPLAYFORM8 We have the following bound on term T 1 from Equation FORMULA27: DISPLAYFORM9 Furthermore, we lower bound T 2 in the following manner: DISPLAYFORM10 The first inequality is due to the fact that v t ≤ C 2 for all t ∈ N. The last inequality follows from inequality in Equation. The last equality is due to following fact: DISPLAYFORM11 for the choice of β 2 = 1/(1 + C 2). Therefore, we have T 2 ≥ T 1 and hence,x 3t+4 ≥ 1.Therefore, from both the cases, we see that x 3t+4 = 1. Therefore, by the principle of mathematical induction it holds for all t ∈ N ∪ {0}. Thus, we have DISPLAYFORM12 Therefore, for every 3 steps, ADAM suffers a regret of at least 2C − 4. More specifically, R T ≥ (2C − 4)T /3. Since C ≥ 2, this regret can be very large and furthermore, R T /T 0 as T → ∞, which completes the proof. Proof. The proof generalizes the optimization setting used in Theorem 1. Throughout the proof, we assume β 1 < √ β 2, which is also a condition assume in their paper. In this proof, we consider the setting where f t are linear functions and F = [−1, 1]. In particular, we define the following function sequence: DISPLAYFORM0 where C ∈ N, C mod 2 = 0 satisfies the following: DISPLAYFORM1 where γ = β 1 / √ β 2 < 1. It is not hard to see that these conditions hold for large constant C that depends on β 1 and β 2. Since the problem is one-dimensional, we drop indices representing coordinates from all quantities in Algorithm 1. For this function sequence, it is easy to see that the point x = −1 provides the minimum regret since C ≥ 2. Furthermore, the gradients have the following form: DISPLAYFORM2 for t mod C = 1 −1, otherwise Our first observation is that m kC ≤ 0 for all k ∈ N ∪ {0}. For k = 0, this holds trivially due to our initialization. For the general case, observe the following: DISPLAYFORM3 If m kC ≤ 0, it can be easily shown that m kC+C ≤ 0 for our selection of C in Equation by using the principle of mathematical induction. With this observation we continue to the main part of the proof. Let T be such that t + C ≤ τ 2 t for all t ≥ T where τ ≤ 3/2. All our analysis focuses on iterations t ≥ T. Note that any regret before T is just a constant because T is independent of T and thus, the average regret is negligible as T → ∞. 
Consider an iterate at time step t of the form kC after T. Our claim is that DISPLAYFORM4 for some c t > 0. To see this, consider the updates of ADAM for the particular sequence of functions we considered are: DISPLAYFORM5 For i ∈ {2, · · ·, C}, we use the following notation: DISPLAYFORM6 Note that if δ t+j ≥ 0 for some j ∈ {1, · · ·, C − 1} then δ t+l ≥ 0 for all l ∈ {j, · · ·, C − 1}. This follows from the fact that the gradient is negative for all time steps i ∈ {2, · · ·, C}. Using Lemma 6 for {x t+1, · · ·, x t+C} and {δ t, · · ·, δ t+C−1}, we have the following: DISPLAYFORM7 Let i = C/2. In order to prove our claim in Equation FORMULA8, we need to prove the following: DISPLAYFORM8 To this end, we observe the following: DISPLAYFORM9 The first equality follows from the definition of m t+i+1. The first inequality follows from the fact that m t ≤ 0 when t mod C = 0 (see Equation FORMULA39 and arguments based on it). The second inequality follows from the definition of τ that t + C ≤ τ 2 t for all t ≥ T. The third inequality is due to the fact that v t+i−1 ≥ (1 − β 2)β i−2 2 C 2. The last inequality follows from our choice of C. The fourth inequality is due to the following upper bound that applies for all i ≤ i ≤ C: DISPLAYFORM10 The first inequality follows from online problem setting for the counter-example i.e., gradient is C once every C iterations and −1 for the rest. The last inequality follows from the fact that β i −1 2 C 2 ≤ 1 and β C 2 ≤ β 2. Furthermore, from the above inequality, we have DISPLAYFORM11 Note that from our choice of C, it is easy to see that λ ≥ 0. Also, observe that λ is independent of t. Thus, x t+C ≥ min{1, x t + λ/ √ t}. From this fact, we also see the following:1. If x t = 1, then x t+C = 1 for all t ≥ T such that t mod C = 0.2. There exists constant T 1 ≥ T such that x T 1 = 1 where T 1 mod C = 0.The first point simply follows from the relation x t+C ≥ min{1, x t + λ/ √ t}. The second point is due to divergent nature of the sum DISPLAYFORM12 where kC ≥ T 1. Thus, when t ≥ T 1, for every C steps, ADAM suffers a regret of at least 2. More specifically, R T ≥ 2(T − T 1)/C. Thus, R T /T 0 as T → ∞, which completes the proof. Proof. Let δ be an arbitrary small positive constant, and C be a large enough constant chosen as a function of β 1, β 2, δ that will be determined in the proof. Consider the following one dimensional stochastic optimization setting over the domain [−1, 1]. At each time step t, the function f t (x) is chosen i.i.d. as follows: DISPLAYFORM0 Cx with probability p:= 1+δ C+1 −x with probability 1 − pThe expected function is F (x) = δx; thus the optimum point over [−1, 1] is x = −1. At each time step t the gradient g t equals C with probability p and −1 with probability 1 − p. Thus, the step taken by ADAM is DISPLAYFORM1 We now show that for a large enough constant C, E[∆ t] ≥ 0, which implies that the ADAM's steps keep drifting away from the optimal solution x = −1.Lemma 1. For a large enough constant C (as a function of β 1, β 2, δ), DISPLAYFORM2 denote expectation conditioned on all randomness up to and including time t − 1. Taking conditional expectation of the step, we have DISPLAYFORM3 We will bound the expectation of the terms T 1, T 2 and T 3 above separately. First, for T 1, we have DISPLAYFORM4 Next, we bound DISPLAYFORM5 log(1/β1). 
This choice of k ensures that β DISPLAYFORM6 Let E denote the event that for every DISPLAYFORM7 Assuming E happens, we can bound m t−1 as follows: DISPLAYFORM8 and so T 2 ≥ 0.With probability at most kp, the event E doesn't happen. In this case, we bound T 2 as follows. We first bound m t−1 in terms of v t−1 using the Cauchy-Schwarz inequality as follows: DISPLAYFORM9 Thus, v t−1 ≥ m 2 t−1 /A 2. Thus, we have DISPLAYFORM10.Hence, we have DISPLAYFORM11 Finally, we lower bound E[T 3] using Jensen's inequality applied to the convex function DISPLAYFORM12 The last inequality follows by using the facts DISPLAYFORM13, and the random variables g DISPLAYFORM14 Combining the bounds in,, and in the expression for ADAM's step,, and plugging in the values of the parameters k and p we get the following lower bound on E[∆ t]: DISPLAYFORM15 It is evident that for C large enough (as a function of δ, β 1, β 2), the above expression can be made non-negative. For the sake of simplicity, let us assume, as is routinely done in practice, that we are using a version of ADAM that doesn't perform any projection steps 2. Then the lemma implies that DISPLAYFORM16. Via a simple induction, we conclude that E[x t] ≥ x 1 for all t. Thus, if we assume that the starting point x 1 ≥ 0, then E[x t] ≥ 0. Since F is a monotonically increasing function, we have E[F (x t)] ≥ F = 0, whereas F (−1) = −δ. Thus the expected suboptimality gap is always δ > 0, which implies that ADAM doesn't converge to the optimal solution. The proof of Theorem 4 presented below is along the lines of the Theorem 4.1 in BID3 which provides a claim of convergence for ADAM. As our examples showing nonconvergence of ADAM indicate, the proof in BID3 has problems. The main issue in their proof is the incorrect assumption that Γ t defined in their equation FORMULA11 is positive semidefinite, and we also identified problems in lemmas 10.3 and 10.4 in their paper. The following proof fixes these issues and provides a proof of convergence for AMSGRAD.Proof. We begin with the following observation: DISPLAYFORM0 In this proof, we will use x * i to denote the i th coordinate of x *. Using Lemma 4 with u 1 = x t+1 and u 2 = x *, we have the following: DISPLAYFORM1 Rearranging the above inequality, we have DISPLAYFORM2 The second inequality follows from simple application of Cauchy-Schwarz and Young's inequality. We now use the standard approach of bounding the regret at each step using convexity of the function f t in the following manner: DISPLAYFORM3 The first inequality is due to convexity of function f t. The second inequality follows from the bound in Equation FORMULA8. For further bounding this inequality, we need the following intermediate . Lemma 2. For the parameter settings and conditions assumed in Theorem 4, we have DISPLAYFORM4 Proof. We start with the following: DISPLAYFORM5 The first inequality follows from the definition ofv T,i, which is maximum of all v T,i until the current time step. The second inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner: DISPLAYFORM6 The first inequality follows from Cauchy-Schwarz inequality. The second inequality is due to the fact that β 1k ≤ β 1 for all k ∈ [T]. The third inequality follows from the inequality DISPLAYFORM7. 
By using similar upper bounds for all time steps, the quantity in Equation FORMULA8 can further be bounded as follows: DISPLAYFORM8 The third inequality follows from the fact that DISPLAYFORM9 The fourth inequality is due to simple application of Cauchy-Schwarz inequality. The final inequality is due to the following bound on harmonic sum: T t=1 1/t ≤ (1 + log T). This completes the proof of the lemma. We now return to the proof of Theorem 4. Using the above lemma in Equation FORMULA8, we have: DISPLAYFORM10 The first inequality and second inequality use the fact that β 1t ≤ β 1. In order to further simplify the bound in Equation FORMULA8, we need to use telescopic sum. We observe that, by definition ofv t,i, we havev DISPLAYFORM11 Using the L ∞ bound on the feasible region and making use of the above property in Equation, we have: DISPLAYFORM12 Set m0 = 0 and v0 = 0 DISPLAYFORM13 The equality follows from simple telescopic sum, which yields the desired . One important point to note here is that the regret of AMSGRAD can be bounded by O(G ∞ √ T). This can be easily seen from the proof of the aforementioned lemma where in the analysis the term DISPLAYFORM14 Thus, the regret of AMSGRAD is upper bounded by minimum of O(G ∞ √ T) and the bound in the Theorem 4 and therefore, the worst case dependence of regret on T in our case is O(√ T). Proof. Using similar argument to proof of Theorem 4 until Equation FORMULA8, we have the following DISPLAYFORM0 The second inequality follows from simple application of Cauchy-Schwarz and Young's inequality. We now use the standard approach of bounding the regret at each step using convexity of the function f t in the following manner: DISPLAYFORM1 The inequalities follow due to convexity of function f t and Equation. For further bounding this inequality, we need the following intermediate . Lemma 3. For the parameter settings and conditions assumed in Theorem 5, we have DISPLAYFORM2 Proof. We start with the following: DISPLAYFORM3 The first inequality follows from the update rule of Algorithm 2. We further bound the above inequality in the following manner: DISPLAYFORM4 The first inequality and second inequality use the fact that β 1t ≤ β 1. Furthermore, from the theorem statement, we know that that {(α t .β 2t)} are selected such that the following holds: DISPLAYFORM5 Using the L ∞ bound on the feasible region and making use of the above property in Equation FORMULA9, we have: DISPLAYFORM6 The equality follows from simple telescopic sum, which yields the desired . Theorem 6. For any > 0, ADAM with the modified update in Equation and with parameter setting such that all the conditions in BID3 are satisfied can have non-zero average regret i.e., R T /T 0 as T → ∞ for convex DISPLAYFORM0 with bounded gradients on a feasible set F having bounded D ∞ diameter. Proof. Let us first consider the case where = 1 (in fact, the same setting works for any ≤ 1). The general case can be proved by simply rescaling the sequence of functions by a factor of √. We show that the same optimization setting in Theorem 1 where f t are linear functions and F = [−1, 1], hence, we only discuss the details that differ from the proof of Theorem 1. In particular, we define the following function sequence: DISPLAYFORM1 Cx, for t mod 3 = 1 −x, otherwise, where C ≥ 2. Similar to the proof of Theorem 1, we assume that the initial point is x 1 = 1 and the parameters are:β 1 = 0, β 2 = 2 (1 + C 2)C 2 and α t = α √ t where α < √ 1 − β 2. 
The proof essentially follows along the lines of that of Theorem 1 and is through principle of mathematical induction. Our aim is to prove that x 3t+2 and x 3t+3 are positive and x 3t+4 = 1. The base case holds trivially. Suppose for some t ∈ N ∪ {0}, we have x i > 0 for all i ∈ [3t + 1] and x 3t+1 = 1. For (3t + 1) th update, the only change from the update of in Equation FORMULA8 is the additional in the denominator i.e., we havê x 3t+2 = x 3t+1 − αC (3t + 1)(β 2 v 3t + (1 − β 2)C 2 + ) DISPLAYFORM2 The last inequality follows by simply dropping v 3t term and using the relation that α < √ 1 − β 2. Therefore, we have 0 <x 3t+2 < 1 and hence x 3t+2 =x 3t+2 > 0. Furthermore, after the (3t + 2) th and (3t + 3) th updates of ADAM in Equation FORMULA8, we have the following:x 3t+3 = x 3t+2 + α (3t + 2)(β 2 v 3t+1 + (1 − β 2) + ), x 3t+4 = x 3t+3 + α (3t + 3)(β 2 v 3t+2 + (1 − β 2) + ).Since x 3t+2 > 0, it is easy to see that x 3t+3 > 0. To complete the proof, we need to show that x 3t+4 = 1. The only change here from the proof of Theorem 1 is that we need to show the The first inequality is due to the fact that v t ≤ C 2 for all t ∈ N. The last equality is due to following fact: DISPLAYFORM3 for the choice of β 2 = 2/[(1 + C 2)C 2 ] and = 1. Therefore, we see that x 3t+4 = 1. Therefore, by the principle of mathematical induction it holds for all t ∈ N ∪ {0}. Thus, we have f 3t+1 (x 3t+1) + f 3t+2 (x 3t+2) + f 3t+2 (x 3t+2) − f 3t+1 (−1) − f 3t+2 (−1) − f 3t+3 (−1) ≥ 2C − 4.Therefore, for every 3 steps, ADAM suffers a regret of at least 2C − 4. More specifically, R T ≥ (2C − 4)T /3. Since C ≥ 2, this regret can be very large and furthermore, R T /T 0 as T → ∞, which completes the proof of the case where = 1. For the general case, we consider the following sequence of functions: DISPLAYFORM4 The functions are essentially rescaled in a manner so that the ant updates of ADAM correspond to the one in the optimization setting described above. Using essentially the same argument as above, it is easy to show that the regret R T ≥ (2C − 4) √ T /3 and thus, the average regret is non-zero asymptotically, which completes the proof. G AUXILIARY LEMMA Lemma 4 ( ). For any Q ∈ S d + and convex feasible set F ⊂ R d, suppose u 1 = min x∈F Q 1/2 (x−z 1) and u 2 = min x∈F Q 1/2 (x−z 2) then we have Q 1/2 (u 1 − u 2) ≤ Q 1/2 (z 1 − z 2).Proof. We provide the proof here for completeness. Since u 1 = min x∈F Q 1/2 (x − z 1) and u 2 = min x∈F Q 1/2 (x − z 2) and from the property of projection operator we have the following: z 1 − u 1, Q(z 2 − z 1) ≥ 0 and z 2 − u 2, Q(z 1 − z 2) ≥ 0.Combining the above inequalities, we have u 2 − u 1, Q(z 2 − z 1) ≥ z 2 − z 1, Q(z 2 − z 1).Also, observe the following: DISPLAYFORM5 The above inequality can be obtained from the fact that (u 2 − u 1) − (z 2 − z 1), Q((u 2 − u 1) − (z 2 − z 1)) ≥ 0 as Q ∈ S d + and rearranging the terms. Combining the above inequality with Equation FORMULA9, we have the required . Lemma 5 (BID0). For any non-negative real numbers y 1, · · ·, y t, the following holds: for all the t ∈ [T], y 1 ∈ F and furthermore, there exists i ∈ [T] such that δ j ≤ 0 for all j ≤ i and δ j > 0 for all j > i. Then we have, DISPLAYFORM6 Proof. It is first easy to see that y i+1 ≥ y 1 + i j=1 δ j since δ j ≤ 0 for all j ≤ i. Furthermore, also observe that y T +1 ≥ min{b, y i+1 + T j=i+1 δ j} since δ j ≥ 0 for all j > i. Combining the above two inequalities gives us the desired .
ryQu7f-RZ
We investigate the convergence of popular optimization algorithms like Adam and RMSProp, and propose new variants of these methods which provably converge to an optimal solution in convex settings.
Targeted clean-label poisoning is a type of adversarial attack on machine learning systems where the adversary injects a few correctly-labeled, minimally-perturbed samples into the training data thus causing the deployed model to misclassify a particular test sample during inference. Although defenses have been proposed for general poisoning attacks (those which aim to reduce overall test accuracy), no reliable defense for clean-label attacks has been demonstrated, despite the attacks' effectiveness and their realistic use cases. In this work, we propose a set of simple, yet highly-effective defenses against these attacks. We test our proposed approach against two recently published clean-label poisoning attacks, both of which use the CIFAR-10 dataset. After reproducing their experiments, we demonstrate that our defenses are able to detect over 99% of poisoning examples in both attacks and remove them without any compromise on model performance. Our simple defenses show that current clean-label poisoning attack strategies can be annulled, and serve as strong but simple-to-implement baseline defense for which to test future clean-label poisoning attacks. Machine-learning-based systems are increasingly deployed in settings with high societal impact, such as biometric applications and hate speech detection on social networks , as well as settings with high cost of failure, such as autonomous driving (a) and malware detection . In such settings, robustness to not just noise but also adversarial manipulation of system behavior is paramount. Complicating matters is the increasing reliance of machine-learning-based systems on training data sourced from public and semi-public places such as social networks, collaboratively-edited forums, and multimedia posting services. Sourcing data from uncontrolled environments begets a simple attack vector: an adversary can strategically inject data that can manipulate or degrade system performance. Data poisoning attacks on neural networks occur at training time, wherein an adversary places specially-constructed poison instances into the training data with the intention of manipulating the performance of a classifier at test time. Most work on data poisoning has focused on either (i) an attacker generating a small fraction of new training inputs to degrade overall model performance, or (ii) a defender aiming to detect or otherwise mitigate the impact of that attack; for a recent overview, see. In this paper, we focus on clean-label data poisoning , where an attacker injects a few correctly-labeled, minimally-perturbed samples into the training data. In contrast to traditional data poisoning, these samples are crafted to cause the model to misclassify a particular target test sample during inference. These attacks are plausible in a wide range of applications, as they do not require the attacker to have control over the labeling process. The attacker merely inserts apparently benign data into the training process, for example by posting images online which are scraped and (correctly) labeled by human labelers. Our contribution: In this paper, we initiate the study of defending against clean-label poisoning attacks on neural networks. We begin with a defense that exploits the fact that though the raw poisoned examples are not easily detected by human labelers, the feature representations of poisons are anomalous among the feature representations for data points with their (common) label. 
This intuition lends itself to a defense based on k nearest neighbors (k-NN) in the feature space; furthermore, the parameter k yields a natural lever for trading off between the power of the attack against which it can defend and the impact of running the defense on overall (unpoisoned) model accuracy. Next, we adapt a recent traditional data poisoning defense to the clean-label case, and show that-while still simple to implement-its performance in both precision and recall of identifying poison instances is worse than our proposed defense. We include a portfolio of additional baselines as well. For each defense, we test against state-of-the-art clean-label data poisoning attacks, using a slate of architectures, and show that our initial defense detects nearly all (99%+) of the poison instances without degrading overall performance. We briefly describe the intuition behind our primary (k-NN-based) clean-label defense. The right side of Figure 1 shows a schematic of how a clean label poison attack typically works, specifically for a targeted misclassification of a plane as a frog. The target image, chosen to be a plane, is represented as a dark gray triangle. In a successful attack, base images of frogs are optimized to lie near the target image of the plane in feature space. Though the optimization causes a qualitative change in the feature representations of the images, in input space the images are perturbed under some small 2 or ∞ constraint so that they still appear to be frogs to a human observer. Under the feature collision attack (as in), the perturbations are optimized so as to minimize the poison images' distance to the target image in feature space, while under the convex polytope attack (as in), the points are optimized to form a convex polytope around the target. Either way, when the model is trained on the dataset with the poisoned inputs, the decision boundary will shift to classify the poison images as frogs, and inadvertently change the target image (which isn't in the training set) from the plane class into the frog class. However, as seen in the illustrative example with two poisons, the poisons are likely to be surrounded by points of the target class rather than the base class. The inset in Figure 1 illustrates this: when k = 3, two poisons will always have 2/3 or more of their neighbors as non-poisons. As illustrated, since the label of the plurality of a poisons neighbors does not match the label of the poison, the poison will be removed from the dataset. More generally, for p poisons, if k = 2p + 1, then we would expect the poisons to be outvoted by members of the target class and be removed. We briefly overview related work in the space of defenses to adversarial attacks , which are roughly split into evasion attacks (which occur at test time) and data poisoning attacks (which occur at training time). Most adversarial defenses have focused on evasion attacks, where inference-time inputs are manipulated to cause misclassification. In neural networks, evasion adversarial examples are perturbed in such a way that the loss on the victim network increases. The search for an optimal perturbation is facilitated efficiently by use of the local gradient ∇ x L obtained via backpropagation on either a white box (victim) network or a surrogate network if the victim network is unknown . 
Many defenses to evasion attacks have leveraged this reliance on gradients by finding ways to obfuscate the gradient, either though non-differentiable layers or reshaping the loss surface such that the gradients vanish or are highly uncorrelated. showed that obfuscated gradient defenses are insufficient. Using various strategies to circumvent loss of gradient information, such as replacing non-differentiable layers with differentiable approximations during the backward pass, they showed that stronger attacks can reduce accuracy to near zero on most gradient-based defenses. The defenses that withstand strong attacks are those whose loss surface with respect to the input tends to be "smooth" everywhere in the data manifold. To that end, variants of adversarial training (; ;) and linearity or curvature regularizers have maintained modest accuracies in the midst of strong multi-iteration PGD attacks . In evasion attacks, A deep k-NN-based methodology has been used across multiple layers of a neural network to generate confidence estimates of network predictions to create identify adversarial examples . Our k-NN-based defense differs in that it identifies data at training time rather than at test time, so that it uses true labels rather than predicted labels. Further, a soft nearest neighbor regularizer has been used during training time to improve robustness to evasion examples , but its resistance to clean-label poisoning examples has yet to be explored. Backdoor attacks have recently gained interest as a realistic threat to machine learning models. Backdooring, proposed by , can be seen as a subset of data poisoning. In their simplest form, backdoor attacks modify a small number of training examples with a specific pattern, the trigger, which accompanies examples with a specific target label. Leveraging the fact that neural networks tend to memorize training patterns, the attacker then puts the trigger onto examples during inference time that she wants classified-or misclassified-as the target. The trigger need not change the ground truth label of the training data, making such attacks clean-label attacks . Crucially however, these attacks rely upon the attacker being able to modify data at inference time-an assumption that may not always be realistic, and one we do not make in this paper. A number of defenses to backdoor attacks and poisons have been proposed. Many defenses seek to sanitize training data-remove poisons from the training data to neutralize the poisons' effects. Often, these defenses rely upon the heuristic that backdoor attacks create "shortcuts" in the network to induce target misclassification. An early example of such a defense employed two variants of an 2 centroid defense , which we adapt to our setting in this paper. In one variant, data would be removed from training if it fell outside of an acceptable radius in feature space. The other variant first projected the feature representations onto the line connecting class centroids and removed data based on its position upon this line. Along this vein, another defense proposed using feature clustering for data sanitization. This defense relies upon the assumption that naive backdoor triggers will cause identifiable clusters of poisons to form in feature space. The intuition of this defense also fails when exposed to stronger poisoning methods which do not use uniform triggers. 
In these stronger poisoning attacks, it has been shown that the poisoned data causes misclassification by more subtle means like surrounding a target image in feature space with a convex polytope of poisons . Such an attack will not always in an easily identifiable clusters of poisons. Still other defenses seek to identify and reconstruct the trigger that causes misclassification. In this defense, called NeuralCleanse, inputs would be removed depending upon whether the input's activation was similar to the activations induced by the reconstructed trigger. This defense was able to detect certain uniform 0 triggers inserted in training data using methods such as neuron activation clustering. However, this tactic does not apply to recent insidious poisoning attacks which use variable, learned perturbations to input data that cause misclassification via feature collisions . Here we introduce our set of baseline defenses, which will then be compared in a series of controlled experiments in Section 5. We use x t to denote the input space representation of the target image that a data poisoner tries to misclassify; the target has true label l t but the attacker seeks to misclassify it as having label l b. We use x b to denote a base image that is used to build a poison after optimization with label l b. We use x w to denote a base image watermarked with a target image, that is γ · x t + (1 − γ) · x b -to a human observer this image or slight perturbations of the image will have label l b for sufficiently low values of γ. We use φ(x) to denote the activation of the penultimate layer (before the softmax layer) of a neural network used for classification with multiple labels. We will refer to this as the feature layer (or feature space) and φ(x) as features of x. The k-nearest neighbors (k-NN) defense takes the plurality vote amongst the labels of a point's k nearest neighbors in feature space. If the point's assigned label is not the mode amongst labels of the k nearest neighbors, the point is discarded as being anomalous, and is not used in training the model. A more formal description is in Algorithm 1. ) denote a set of k points such that for all points x (j) inside the set and points Note that if there are p poisons, then by setting k = 2p + 1, the poisoned label is unlikely to be the majority, but may still be in the mode of the k-NN set, if the nearest neighbors of x (i) are in multiple classes. However, empirically we do not observe this to be an issue, as seen in Section 5. The selection of k inherently introduces a tradeoff between removing more poisons (and reducing the probability of a successful attack) and a loss of accuracy as more unpoisoned sample points are removed from the training set. While the most näive implementation of the k-NN defense may be memory intensive on larger datasets, it can be adapted, e.g. by sampling to require less memory. There is also an entire literature on fast k-NN methods, such as the algorithm, which aims to bring down the time complexity of the nearest neighbor search. The defense "L2" removes an > 0 fraction of points that are farthest in feature space from the centroids of their classes. For each class of label l ∈ L, with size s l = |x (j) s.t. l(j) = l|, we compute the centroid c l as and remove the s l points maximizing |φ(The L2 defense relies on the calculation of the position of the centroid to filter outliers; this calculation itself is prone to data poisoning if the class size is small. 
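As a concrete illustration of the two filtering defenses defined above, the sketch below applies them to precomputed penultimate-layer features. The function names, the use of scikit-learn's `NearestNeighbors`, and the assumption of integer class labels are implementation choices of this sketch, not prescriptions from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_filter(feats, labels, k):
    """k-NN defense (Algorithm 1): keep a training point only if its label is
    among the plurality labels of its k nearest neighbours in feature space.
    feats: (n, d) feature-layer activations; labels: (n,) integer class ids."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(feats)
    _, idx = nn.kneighbors(feats)                 # column 0 is the point itself
    keep = np.zeros(len(feats), dtype=bool)
    for i in range(len(feats)):
        counts = np.bincount(labels[idx[i, 1:]], minlength=labels.max() + 1)
        keep[i] = counts[labels[i]] == counts.max()   # label is a plurality label
    return keep

def l2_filter(feats, labels, eps):
    """L2 defense: per class, remove the eps fraction of points farthest
    (in feature space) from the class centroid."""
    keep = np.ones(len(feats), dtype=bool)
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = feats[members].mean(axis=0)
        dists = np.linalg.norm(feats[members] - centroid, axis=1)
        n_drop = int(eps * len(members))
        if n_drop > 0:
            keep[members[np.argsort(dists)[-n_drop:]]] = False
    return keep
```

Both filters return a boolean mask over the training set; points with `keep == False` are discarded before (re)training the classifier.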
The defense is adapted from traditional poison defenses not specific to neural networks . We test the use of robust feature extractors, that is neural networks trained with adversarial training as a defense. Specifically, we train with adversarial examples constructed with SGD with an 8-step attack bounded by an ∞ ball of radius 8/255. We would expect the mechanism of this defense to be different from the other defenses, as it does not attempt to filter the poisons out. Instead, before retraining for the feature collision attack the deep features of the poisons would not be near the target. For the convex polytope attack, we would expect that the deep features would fail to form a convex polytope around the target datapoint. Slower training and reduced classification accuracy for non-adversarial inputs are potential drawbacks of a defense using a robust feature extractor. The one-class SVM defense examines the deep features of each class in isolation. That is, it applies the one-class SVM algorithm (Schölkopf et al., 2001) to identify outliers in feature space for each label in the training set. It utilizes a radial basis kernel and is calibrated to use a value ν = 0.01 to identify outliers to be filtered out. The random defense is a simple experimental control. It filters out a random subset of all training data, calibrated to be 1% on the feature collision attack and 10% on the convex polytope attack. If the poisoning attack is sensitive to poisons being removed, the random defense may be successful, at the cost of losing a proportionate amount of the unpoisoned training data. The robustness of image classification models with the k-NN defense and other anomaly detection defenses are tested on the CIFAR-10 dataset . For poisoning examples, we reproduced the attack experiments from the original feature collision attack and convex polytope attack. We verified that generated poisons did indeed cause misclassification of the target after network retraining at the same success rate as reported in the orginal papers. All architectures and training setups from those experiments used in the attacks are used identically in our experiments. The first attack, as in , generates 50 poisons as follows: We randomly select 50 images in the base class. For each base image with input representation x b, we compute the watermark base x w ← γ · x t + (1 − γ) · x b, then optimize p with initial value w as in Algorithm 1 in , that is using a forward-backward splitting procedure to solve The hyperparameter β is tuned to be 0.1. The ing poisons p are close to the target image in feature space, but close to the watermarked image in input space. For the purposes of evaluation we only consider successful attacks, so that the undefended attack success rate is 100%. As in the original paper , the network used is a modified Alexnet. The network is trained with the poisons from a warm start over 10 epochs with a batch size of 128. We evaluate the performance of the defenses described in Section 4 against collections of 50 poisons that successfully create a targeted misclassification, so that the undefended attack success rate is 100%. The are shown in Table 1. Successful attacks must in a targeted misclassification. The k-NN defense with k=5000 successfully identifies all but one poison across multiple attacks, while filtering just 0.6% of non-poison datapoints from the input data set. The k-NN defense reduces the attack success rate to 0%. 
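For readers reproducing the feature-collision poisons described above, the following is a minimal PyTorch sketch of the watermark-then-optimize procedure: a gradient (forward) step on the feature-collision term followed by a proximal (backward) step that keeps the poison close to the watermarked base in input space. The step size, iteration count, pixel-range clamp, and variable names (`feat_net` for the feature extractor φ) are assumptions of this sketch rather than the exact settings of the original attack.

```python
import torch

def craft_poison(feat_net, x_target, x_base, gamma=0.3, beta=0.1,
                 lr=0.01, n_iters=1000):
    """Craft one clean-label poison from a base image: start at the watermark
    x_w = gamma * x_target + (1 - gamma) * x_base, then alternate a gradient
    step on || phi(p) - phi(x_t) ||^2 with a proximal step on beta*||p - x_w||^2."""
    feat_net.eval()
    with torch.no_grad():
        phi_t = feat_net(x_target)                     # target's deep features
        x_w = gamma * x_target + (1.0 - gamma) * x_base
    p = x_w.clone().requires_grad_(True)
    for _ in range(n_iters):
        loss = (feat_net(p) - phi_t).pow(2).sum()      # feature-collision term
        grad, = torch.autograd.grad(loss, p)
        with torch.no_grad():
            p_forward = p - lr * grad                  # forward (gradient) step
            # backward (proximal) step for the input-space proximity term
            p.copy_((p_forward + lr * beta * x_w) / (1.0 + lr * beta))
            p.clamp_(0.0, 1.0)                         # assumed valid pixel range
    return p.detach()
```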
Because classification accuracy surprisingly does not change significantly as k increases it is appropriate to set k as the class size of the training data, as shown in Figure 2. For k = 3 the percentage of filtered non-poisons was 0.004%, while for k = 10000 it remained just 1.3%. The relatively low filtering rate of nonpoisons for higher k may account for the low variation in test set classification accuracy for different selections of k. The L2 defense also identifies roughly half of the poisons, with = 0.01, the proportion of the training data that are poisons. For small k, the poisons may be all filtered out, while for large k, classification accuracy on the test set may suffer due to the reduced training set size. Surprisingly we find that large k does not noticeably reduce the classification accuracy. The convex polytope attack, as in in generates 5 poisons that are crafted to attack multiple architectures. The experiments are on the transfer learning attack. There are eight architectures: two of which were not used in crafting the poisons (black box setting), and six architectures which use different random numbers/seeds (grey box setting). The grey-box architectures are DPN92 (b), GoogLeNet , MobileNetV2 , ResNet50 , ResNeXT29-2x64d , and SENet18 , while the black-box architectures are DenseNet121 and ResNet18 . The aggregate of a defense on all 8 architectures are shown in Table 2. Both the k-NN defense and the L2 defense filter out virtually all the poisons, with modest filtering of 4.3% and 9.1%, respectively of non-poisons. The attack success rates for each model architecture (undefended is 100%) are shown in Figure 3. The attack success rate is so low for the k-NN defense and the L2 defense that the bars are not visible, except for GoogLeNet for the L2 defense. Surprisingly, on the black-box architectures of DenseNet121 and ResNet18, the 1-class SVM defense does not perform better. Indeed, this is despite filtering a higher percentage of poisons-31% for DenseNet and 24% for ResNet18, compared with 22% overall. A feature-space visualization of the k-NN defense, showing the filtered poisons and non-poisons is shown in Figure 4. Specifically, Figure 4 shows a projected visualization in feature space of the fine tuning data points in the target (blue) and base (green) classes. Following the projection scheme of , where the x-axis is the direction of the difference between the centroids of the points in the two classes and the y-axis is component of the parameter vector (i.e. decision boundary) orthogonal to the between-centroids vector, the deep features of the DPN92 network are projected into a two-dimensional plane. The "x" markers denote points that are filtered out by the defense. All the poisons around the target are filtered, as are outlying points in the target class. Interestingly, no points in the base class are filtered in this case. The success rate of the attack when using the k-NN defense is so low that the bar is barely visible. Figure 5 shows a feature space visualization of the robust feature extractor defense on ResNet18 causing a failed attack as well as a feature space visualization of a standard ResNet18. As expected, the attack fails because the poisons fail to approach the target in feature space, and thus do not form a convex polytope around the target. The defense does hurt test set accuracy, which drops to 79.8%, compared with > 92% with the same architecture for the other defenses. 
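The two-dimensional feature-space projection used for these plots can be reproduced with a few lines of NumPy, as sketched below. One point is an assumption of this sketch: the "parameter vector" of the two-class decision boundary is taken as the difference of the two classes' rows of the final linear layer, which is one reasonable instantiation of the projection scheme described above; variable names are likewise mine.

```python
import numpy as np

def project_2d(feats, w_target, w_base, feats_target_class, feats_base_class):
    """Project deep features onto (i) the between-centroids direction and
    (ii) the component of the decision-boundary normal orthogonal to it."""
    c_t = feats_target_class.mean(axis=0)
    c_b = feats_base_class.mean(axis=0)
    u = c_t - c_b
    u = u / np.linalg.norm(u)                      # x-axis: centroid difference
    w = w_target - w_base                          # assumed boundary normal
    w_perp = w - (w @ u) * u
    w_perp = w_perp / np.linalg.norm(w_perp)       # y-axis: orthogonal component
    return np.stack([feats @ u, feats @ w_perp], axis=1)
```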
The normalized distance from the poisons to the target for a robust and a standard ResNet (each layer's distance is normalized by the average norm of its feature vectors) is shown in Figure 6. In summary, we have demonstrated that the simple k-NN baseline approach provides an effective defense against clean-label poisoning attacks with minimal degradation in model performance. The k-NN defense identifies virtually all poisons from two state-of-the-art clean-label data poisoning attacks while filtering out only a small percentage of non-poisons. The k-NN defense outperforms the other simple baselines against the existing attacks; these defenses provide benchmarks that can be used to measure the efficacy of future defense-aware clean-label attacks. (Appendix figure caption: in the bottom two rows, filtered and non-filtered non-poisons are shown; again, there are no visually distinctive differences between images of the same class that are filtered and those that are not.)

B1xgv0NtwH
We present effective defenses to clean-label poisoning attacks.
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin. Abstract reasoning has long been thought of as a key part of human intelligence, and a necessary component towards Artificial General Intelligence. When presented in complex scenes, humans can quickly identify elements across different scenes and infer relations between them. For example, when you are using a pile of different types of LEGO bricks to assemble a spaceship, you are actively inferring relations between each LEGO brick, such as in what ways they can fit together. This type of abstract reasoning, particularly in the visual domain, is a crucial key to human ability to build complex things. Many tests have been proposed to measure human ability for abstract reasoning. The most popular test in the visual domain is the Raven Progressive Matrices (RPM) test . In the RPM test, the participants are asked to view a sequence of contextual diagrams, usually given as a 3 × 3 matrices of diagrams with the bottom-right diagram left blank. Participants should infer abstract relationships in rows or columns of the diagram, and pick from a set of candidate answers the correct one to fill in the blank. Figures 1 (a) shows an example of RPM tasks containing XOR relations across diagrams in rows. More examples can be found in Appendix C. Another widely used test for measuring reasoning in psychology is Diagram Syllogism task , where participants need to infer based on 2 given premises. Figure 1c shows an example of Euler Diagram Syllogism task. recently published a large and comprehensive RPM-style dataset named Procedurally Generated Matrices'PGM', and proposed Wild Relation Network (WReN), a state-of-the-art neural net for RPM-style tasks. While WReN outperforms other state-of-the-art vision models such as , the performance is still far from deep neural nets' performance on other vision or natural language processing tasks. Recently, there has been a focus on object-level representations (; ; ; ; ;) for visual reasoning tasks, which enable the use of inductive-biased architectures such as symbolic programs and scene graphs to directly capture relations between objects. For RPM-style tasks, symbolic programs are less suitable as these programs are generated from given questions in the Visual-Question Answering setting. In RPM-style tasks there are no explicit questions. Encoding RPM tasks into graphs is a more natural choice. 
However, previous works on scene graphs model a single image as graphs, which is not suitable for RPM tasks as there are many different layers of relations across different subsets of diagrams in a single task. In this paper we introduce MXGNet, a multi-layer multiplex graph neural net architecture for abstract diagram reasoning. Here'Multi-layer' means the graphs are built across different diagram panels, where each diagram is a layer.' Multiplex' means that edges of the graphs encode multiple relations between different element attributes, such as colour, shape and position. Multiplex networks are discussed in detail by. We first tested the application of multiplex graph on a Diagram Syllogism dataset (Wang et al. (2018a) ), and confirmed that multiplex graph improves performance on the original model. For RPM task, MXGNet encodes subsets of diagram panels into multi-layer multiplex graphs, and combines summarisation of several graphs to predict the correct candidate answer. With a hierarchical summarisation scheme, each graph is summarised into feature embeddings representing relationships in the subset. These relation embeddings are then combined to predict the correct answer. For PGM dataset (, MXGNet, without any auxiliary training with additional labels, achieves 83.91% test accuracy, outperforming 59.56% accuracy by the best model with auxiliary training for the RAVEN dataset. We also show that MXGNet is robust to variations in forms of object-level representations. Both variants of MXGNet achieve higher test accuracies than existing best models for the two datasets. Raven Progressive Matrices: proposed a neural network model on Raven-style reasoning tasks that are a subset of complete RPM problems. Their model is based on Convolutional Network, and is demonstrated to be ineffective in complete RPM tasks . Mandziuk & Zychowski also experimented with an auto-encoder based neural net on simple single-shape RPM tasks. built PGM, a complete RPM dataset, and proposed WReN, a neural network architecture based on Relation Network . replace CNN part of WReN with a pre-trained Variational Auto Encoder and slightly improved performance. built RAVEN, a RPM-style dataset with structured labels of elements in the diagrams in the form of parsing trees, and proposed Dynamic Residual Trees, a simple tree neural network for learning with these additional structures. applies Multi-head attention , originally developed for Language model, on RPM tasks. Visual Reasoning: RPM test falls in the broader category of visual reasoning. One widely explored task of visual reasoning is Visual Question Answering(VQA). built CLEVR dataset, a VQA dataset that focuses on visual reasoning instead of information retrieval in traditional VQA datasets. Current leading approaches on CLEVR dataset generate synthetic programs using questions in the VQA setting, and use these programs to process object-level representations extracted with objection detection models . This approach is not applicable to RPM-style problems as there is no explicit question present for program synthesis. Recently there has been a surge of interest in applying Graph Neural Networks (GNN) for datasets that are inherently structured as graphs, such as social networks. Many variants of GNNs (; ; ; Veličković et al. ) have been proposed, which are all based on the same principle of learning feature representations of nodes by recursively aggregating information from neighbour nodes and edges. 
Recent methods extract graph structures from visual scenes for visual question answering. These methods build scene graphs in which nodes represent parts of the scene, and edges capture relations between these parts. Such methods are only applied to scenes of a single image. For multi-image tasks such as video classification, Wang et al. (2018b) proposed non-local neural networks, which extract dense graphs where pixels in feature maps are connected to all other feature map pixels in the space-time dimensions. 3 REASONING TASKS 3.1 DIAGRAM SYLLOGISM Syllogism is a reasoning task where is drawn from two given assumed propositions (premises). One well-known example is'Socrates is a man, all man will die, therefore Socrates will die'. Syllogism can be conveniently represented using many types of diagrams such as Euler diagrams and Venn diagrams. Figure 1 (c) shows an example of Euler diagram syllogism. Wang et al. (2018a) developed Euler-Net, a neural net architecture that tackles Euler diagram syllogism tasks. However Euler-Net is just a simple Siamese Conv-Net, which does not guarantee scalability to more entities in diagrams. We show that the addition of multiplex graph both improves performance and scalability to more entities. In this section we briefly describe Raven Progressive Matrices (RPM) in the context of the PGM dataset and the RAVEN dataset (Figure 1 (a), there is XOR relation of positions of objects in rows of diagrams. With the correct answer filled in, the third row and column must satisfy all relations present in the first 2 rows and columns (in the RAVEN dataset, relations are only present in rows). In addition to labels of correct candidate choice, both datasets also provide labels of meta-targets for auxiliary training. The meta-target of a task is a multi-hot vector encoding tuples of (r, o, a) where r is the type of a relation present, o is the object type and a is the attribute. For example, the meta-target for Figure 1 (a) encodes (XOR, Shape, P osition). The RAVEN dataset also provides additional structured labels of relations in the diagram. However, we found that structured labels do not improve , and therefore did not use them in our implementation. MXGNet is comprised of three main components: an object-level representation module, a graph processing module and a reasoning module. Figure 1a shows an overview of the MXGNet architecture. The object-level representation module F ρ, as the name suggests, extracts representations of objects in the diagrams as nodes in a graph. For each diagram d i ⊂ C ∪ A, a set of nodes v i,j; i = 1... L, j = 1... N is extracted where L is the number of layers and N is the number of nodes per layer. We experimented with both fixed and dynamically learnt N values. We also experimented with an additional'' encoder that encodes lines (See Appendix C for an example containing lines) into a single vector, which can be considered as a single node. The multiplex graph module G φ, for a subset of diagrams, learns the multiplex edges capturing multiple parallel relations between nodes in a multi-layer graph where each layer corresponds to one diagram in the subset, as illustrated in Figure 1 (c). In MXGNet, we consider a subset of cardinality 3 for 3 × 3 diagram matrices. While prior knowledge of RPM rules allows us to naturally treat rows and columns in RPM as subsets, this prior does not generalise to other types of visual reasoning problems. Considering all possible diagram combinations as subsets is computationally expensive. 
To tackle this, we developed a relatively quick pre-training method to greatly reduce the search space of subsets, as described below. We can consider each diagram as node v d i in a graph, where relations between adjacent diagrams are embedded as edges e d ij. Note here we are considering the graph of'diagrams', which is different from the graph of'objects' in the graph processing modules. Each subset of 3 diagrams in this case can be considered as subset of 2 edges. We here make weak assumptions that edges exist between adjacent diagrams (including vertical, horizontal and diagonal direction) and edges in the same subset must be adjacent (defined as two edges linking the same node), which are often used in other visual reasoning problems. We denote the subset of edges as {e We use 3 neural nets to embed nodes, edges and subsets. We use CNNs to embed diagram nodes into feature vectors, and MLPs to embed edges based on node embeddings and subsets based on edge embeddings. While it is possible to include graph architectures for better accuracy, we found that simple combinations of CNNs and MLPs train faster while still achieving the search space reduction . This architecture first embeds nodes, then embeds edges based on node embedding, and finally embed subsets based on edge embedding. The subset embeddings are summed and passed through a reasoning network to predict answer probability, similar to WReN . For the exact configuration of the architecture used please refer to Appendix A. For each subset{e we define a gating variable G ijk, controlling how much does each subset contributes to the final . In practice we use tanh function, which allows a subset to contribute both positively and negatively to the final summed embeddings. In training we put L1 regularization constraint on the gating variables to suppress G ijk of non-contributing subsets close to zero. This architecture can quickly discover rows and columns as contributing subsets while leaving gating variables of other subsets not activated. We describe the experiment in section 5.1. While this method is developed for discovering reasoning rules for RPM task, it can be readily applied to any other multi-frame reasoning task for search space reduction. In the rest of the paper, we hard-gate subsets by rounding the gating variables, thereby reducing subset space to only treat rows and columns as valid subsets. We treat the first 2 rows and columns as contextual subsets c i,j where i and j are row and column indices. For the last row and column, where the answers should be filled in, we fill in each of the 8 answer candidates, and make 8 row subsets The graph module then summarises the graph of objects in a subset into embeddings representing relations present in the subset. The reasoning module R θ takes embeddings from context rows/columns and last rows/columns with different candidate answers filled in, and produce normalised probability of each answer being true. It also predicts meta-target for auxiliary training using context rows/columns. Next, we describe each module in detail. In the PGM dataset there are two types of objects, namely 'shapes' and 'lines'. While it is a natural choice to use object-level representation on shapes as they are varying in many attributes such as position and size, it is less efficient on lines as they only vary in colour intensity. 
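The search-space-reduction gating described above can be summarised in a few lines of PyTorch, as sketched below: each candidate subset (a pair of adjacent diagram-level edges) receives a tanh gate, the gated subset embeddings are summed before the reasoning network, and an L1 penalty on the gates drives non-contributing subsets toward zero. The module name and sizes are assumptions of this sketch; only the tanh gating, the summation, and the L1 term come from the description above.

```python
import torch
import torch.nn as nn

class SubsetGating(nn.Module):
    """Gate candidate subsets of adjacent diagram edges with tanh gates; an L1
    penalty on the gates prunes non-contributing subsets during pre-training."""
    def __init__(self, n_subsets):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(n_subsets))   # one gate per subset

    def forward(self, subset_embeds):
        # subset_embeds: (batch, n_subsets, embed_dim)
        g = torch.tanh(self.gates)                     # in (-1, 1): signed contribution
        pooled = (g.view(1, -1, 1) * subset_embeds).sum(dim=1)
        l1_penalty = g.abs().sum()                     # add to the training loss
        return pooled, l1_penalty
```

After pre-training, subsets whose gate magnitude exceeds a threshold (0.5 in our runs) are retained; in practice only the row and column subsets survive this pruning.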
In this section we first describe object-level representation applied to 'shapes' objects, and then discuss object-level representation on 'lines' and an alternative encoder which performs better. In MXGNet we experiment with two types of object-level representations for 'shapes', namely CNN grid features and representation obtained with spatial attention. For CNN grid features, we use each spatial location in the final CNN feature map as the object feature vector. Thus for each feature maps of width W and height H, N = W × H object representations are extracted. This type of representation is used widely, such as in Relation Network and VQ-VAE (van den). For representation obtained with attention, we use spatial attention to attend to locations of objects, and extract representations for each object attended. This is similar to objection detection models such as faster R-CNN , which use a Region Proposal Network to propose bounding boxes of objects in the input image. For each attended location a presence variable z pres is predicted by attention module indicating whether an object exists in the location. Thus the total number of objects N can vary depending on the sum of z pres variables. As object-level representation is not the main innovation of this paper, we leave exact details for Appendix A.1. For 'lines' objects, which are not varying in position and size, spatial attention is not needed. We experimented with a recurrent encoder with Long-Short Term Memory on the output feature map of CNN, outputting M number of feature vectors. However, in the experiment we found that this performs less well than just feature map embeddings produced by feed-forward conv-net encoder. Multiplex Edge Embedding:The object-level representation module outputs a set of representations where L is the number of layers (cardinality of subset of diagrams) and N is the number of nodes per layer. MXGNet uses an multiplex edge-embedding network E γ to generate edge embeddings encoding multiple parallel relation embeddings: Here P t is a projection layer projecting concatenated node embeddings to T different embeddings. E t is a small neural net processing t th projections to produce the t th sub-layer of edge embeddings. Here, we restricted the edges to be inter-layer only, as we found using intra-layer edges does not improve performance but increases computational costs. Figure 2 illustrates these multiplex edge embeddings between nodes of different layers. We hypothesise that different layers of the edge embeddings encode similarities/differences in different feature spaces. Such embeddings of similarities/differences are useful in comparing nodes for subsequent reasoning tasks. For example,for P rogessive relation of object sizes, part of embeddings encoding size differences can be utilized to check if nodes in later layers are larger in size. This is similar to Mixture of Experts layers introduced in Neural Machine Translation tasks. However, in this work we developed a new cross-multiplexing gating function at the node message aggregation stage, which is described below. Graph Summarisation: After edge embeddings are generated, the graph module then summarises the graph into a feature embedding representing relations present in the subset of diagrams. We aggregate information in the graph to nodes of the last layer corresponding to the third diagram in a row or column, because in RPM tasks the relations are in the form Diagram3 = F unction(Diagram1, Diagram2). 
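A minimal sketch of the multiplex edge embedding E_γ described above: the embeddings of two nodes from different layers are concatenated, projected into T parallel sub-spaces by P_t, and each projection is processed by a small net E_t to produce one sub-layer of the edge embedding. The layer count and widths (T = 6, 32 hidden, 8 output units, as reported later for PGM) come from the appendix; all other choices are illustrative assumptions.

import torch
import torch.nn as nn

class MultiplexEdge(nn.Module):
    """Edge embedding with T parallel sub-layers (multiplex edges), sketch."""
    def __init__(self, node_dim, num_sublayers=6, hidden=32, out=8):
        super().__init__()
        # P_t: projections of the concatenated node pair into T sub-spaces
        self.proj = nn.ModuleList(
            [nn.Linear(2 * node_dim, hidden) for _ in range(num_sublayers)])
        # E_t: small nets producing the t-th sub-layer of the edge embedding
        self.sub = nn.ModuleList(
            [nn.Sequential(nn.ReLU(), nn.Linear(hidden, out))
             for _ in range(num_sublayers)])

    def forward(self, v_a, v_b):
        # v_a, v_b: (batch, node_dim) embeddings of two nodes in different layers
        pair = torch.cat([v_a, v_b], dim=-1)
        sublayers = [net(p(pair)) for p, net in zip(self.proj, self.sub)]
        # concatenated sub-layers form the multiplex edge embedding
        return torch.cat(sublayers, dim=-1)   # (batch, num_sublayers * out)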
All edges connecting nodes v_{i,j} in a particular layer i (i ≠ L) to a node v_{L,k} in the last layer L are aggregated by a function F_ag composed of four different types of set operations, namely max, min, sum and mean: f_{v_{L,k}} = [max(E_k); min(E_k); sum(E_k); mean(E_k)], where E_k denotes the set of edge embeddings incident on v_{L,k}. We use multiple aggregation functions together because different sub-tasks in reasoning may require different types of summarisation. For example, counting the number of objects is better suited to sum, while checking whether there is an object with the same size is better suited to max. The aggregated node information from each layer is then combined with a cross-multiplexing gating function. It is named 'cross-multiplexing' because each embedding in the set 'multiplexes' the other embeddings in the set with gating variables that regulate which streams of information pass through. This gating function accepts a set of summarised node embeddings {f_{v_{1,k}}, ..., f_{v_{N,k}}} as input, and outputs gating variables for each layer of node embeddings in the set: g_{1,k}, ..., g_{N,k} = G({f_{v_{1,k}}, ..., f_{v_{N,k}}}). In practice G is implemented as an MLP with multi-head outputs for the different embeddings, and a Sigmoid activation which constrains each gating variable g to the range 0 to 1. The node embeddings of the different layers are then multiplied with the gating variables, concatenated and passed through a small MLP to produce the final node embeddings. Node embeddings and background embeddings are then concatenated and processed by a residual neural block to produce the final relation feature embeddings r of the diagram subset. The reasoning network takes relation feature embeddings r from all graphs, and infers the correct answer based on these relation embeddings. We denote the relation embeddings for context rows as r (and analogously for context columns). For meta-target prediction, all relation information is contained in the context rows and columns of the RPM task. Therefore, we apply a meta-predicting network R_meta with Sigmoid output activation to all context rows and columns to obtain the probabilities of each meta-target category. The full pipeline of MXGNet is end-to-end trainable with any gradient descent optimiser. In practice, we used the RAdam optimiser for its fast convergence and robustness to learning rate differences. The loss function for the PGM dataset is the same as used in WReN: L = L_ans + β L_meta-target, where β balances the training between answer prediction and meta-target prediction. For the RAVEN dataset, while the loss function can include auxiliary meta-target and structured labels as L = L_ans + α L_struct + β L_meta-target, we found that both auxiliary targets do not improve performance, and thus set α and β to 0. The Search Space Reduction model is applied on both the PGM and RAVEN datasets to reduce the subset space. After 10 epochs, only the gating variables of the row and column subsets for PGM, and of the row subsets for RAVEN, have values larger than 0.5. The gating variables for the three rows are 0.884, 0.812 and 0.832. The gating variables for the three columns are 0.901, 0.845 and 0.854. All other gating variables are below the threshold value of 0.5. Interestingly, all activated (absolute value > 0.5) gating variables are positive. This is possibly because it is easier for the neural net to learn an aggregation function than a comparator function. Exact experiment statistics can be found in Appendix D. We first test how well the multiplex graph network can capture relations for the simple Diagram Syllogism task. We simply add the multiplex graph to the original Conv-Net used in Wang et al. (2018a).
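A sketch of the graph-summarisation step just described: edge embeddings incident on a last-layer node are aggregated with max/min/sum/mean, the per-layer summaries are gated by a sigmoid MLP (cross-multiplexing), and the gated summaries are concatenated and mixed by a small MLP. Dimensions and names are assumptions for illustration only.

import torch
import torch.nn as nn

def aggregate_edges(edge_emb):
    # edge_emb: (batch, num_edges, dim) -- edges incident on one last-layer node
    # F_ag: concatenation of four set operations over the incident edges
    return torch.cat([edge_emb.max(1).values, edge_emb.min(1).values,
                      edge_emb.sum(1), edge_emb.mean(1)], dim=-1)

class CrossMultiplexGate(nn.Module):
    """Gate each layer's node summary with variables computed from all layers."""
    def __init__(self, num_layers, dim, out_dim=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(num_layers * dim, num_layers * dim),
                                  nn.Sigmoid())      # multi-head sigmoid gates in (0, 1)
        self.mix = nn.Linear(num_layers * dim, out_dim)

    def forward(self, summaries):
        # summaries: list of (batch, dim) tensors, one summary per layer for node k
        flat = torch.cat(summaries, dim=-1)
        g = self.gate(flat)                  # gating variables
        return self.mix(flat * g)            # gated, concatenated, mixed by a small MLP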
MXGNet achieved 99.8% accuracy on both 2-contour and 3-contour tasks, higher than the original paper's 99.5% and 99.4% accuracies. The same performance on 2-contour and 3-contour tasks also show that MXGNet scales better for more entities in the diagram. For more details please refer to Appendix E. In this section we compare all variants of MXGNet against the state-of-the-art models for the PGM and the RAVEN datasets. For the PGM dataset, we tested against of WReN in the auxiliary training setting with β value of 10. In addition, we also compared MXGNet with VAE-WReN 's without auxiliary training. For the RAVEN dataset, we compared with WReN and ResNet model's performance as reported in the original paper . We evaluated MXGNet with different object-level representations (Section 4.1) on the test data in the'neutral' split of the PGM dataset. Table 1 (a) shows test accuracies of model variants compared with WReN and VAE-WReN for the case without auxiliary training (β = 0) and with auxiliary training (β = 10) for the PGM dataset. Both model variants of MXGNet outperform other models by a considerable margin, showing that the multi-layer graph is indeed a more suitable way to capture relations in the reasoning task. Model variants using grid features from the CNN feature maps slightly outperform model using spatial-attention-based object representations for both with and without auxiliary training settings. This is possibly because the increased number of parameters for the spatial attention variant leads to over-fitting, as the training losses of both model variants are very close. In our following experiments for PGM we will use model variants using CNN features to report performances.. We include of the ResNet model with or without Dynamic Residual Trees (DRT) which utilise additional structure labels of relations. We found that for the RAVEN dataset, auxiliary training of MXGNet with meta-target or structure labels does not improve performance. Therefore, we report test accuracies of models trained only with the target-prediction objective. Both variants of MXGNet significantly outperform the ResNet models. Models with spatial attention object-level representations under-perform simpler CNN features slightly, most probably due to overfitting, as the observed training losses of spatial attention models are in fact lower than CNN feature models. In the PGM dataset, other than the neutral data regime in which test dataset's sampling space is the same as the training dataset, there are also other data regimes which restrict the sampling space of training or test data to evaluate the generalisation capability of a neural network. In the main paper, due to space limitations, we selected 2 representative regimes, the'interpolation' regime and the'extrapolation' regime to report . For of other data splits of PGM, please refer to Appendix G. For'interpolation' regime, in the training dataset, when attribute a = color and a = size, the values of a are restricted to even-indexed values in the spectrum of a values. This tests how well can a model'interpolate' for missing values. For'Extrapolation' regime, in the training dataset, the value of a is restricted to be the lower half of the value spectrum. This tests how well can a model'extrapolate' outside of the value range in the training dataset. We presented MXGNet, a new graph-based approach to diagrammatic reasoning problems in the style of Raven Progressive Matrices (RPM). 
MXGNet combines three powerful ideas, namely, object-level representation, graph neural networks and multiplex graphs, to capture relations present in the reasoning task. Through experiments we showed that MXGNet performs better than previous models on two RPM datasets. We also showed that MXGNet has better generalisation performance. One important direction for future work is to make MXGNet interpretable, and thereby extract logic rules from MXGNet. Currently, the learnt representations in MXGNet are still entangled, providing little in the way of understanding its mechanism of reasoning. Rule extraction can provide people with better understanding of the reasoning problem, and may allow neural networks to work seamlessly with more programmable traditional logic engines. While the multi-layer multiplex graph neural network is designed for RPM style reasoning task, it can be readily extended to other diagrammatic reasoning tasks where relations are present between multiple elements across different diagrams. One example of a real-world application scenario is robots assembling parts of an object into a whole, such as building a LEGO model from a room of LEGO blocks. MXGNet provides a suitable way of capturing relations between parts, such as ways of piecing and locking two parts together. In this section we present exact configurations of all model variants of MXGNet. Due to the complexity of architectures, we will describe each modules in sequence. The object-level representation has two variations which are (o1) CNN features and (o2) Spatial Attention features. Also the models for PGM and RAVEN dataset differ in details. Unless otherwise stated, in all layers we apply and use Rectified Linear Unit as activation function. CNN features: The first approach applies a CNN on the input image and use each spatial location in the final CNN feature map as the object feature vector. This type of representation is used widely, such as in and VQ-VAE van den. Formally, the output of a CNN is a feature map tensor of dimension H × W × D where H, W and D are respectively height, width and depth of the feature map. At each H and W location, an object vector is extracted. This type of object representation is simple and fast, but does not guarantee that the receptive field at each feature map location fully bounds objects in the image. We use a residual module with two residual blocks to extract CNN features, as shown in figure 4.This is because Residual connections show better performance in experiments. The structure of a single Residual Convolution Block is shown in figure 3.Unless otherwise stated, convolutional layer in residual blocks has kernel size of 3 × 3. The output feature map processed by another residual block is treated as encoding because we found that convolutional encoding gives better than feature vectors. Spatial Attention Object-level representation: The second approach is to use spatial attention to attend to locations of objects, and extract representations for each object attended. This is similar to object detection models such as faster , which use a Region Proposal Network to propose bounding boxes of objects in the input image. In practice, we use as our spatial attention module. Figure 5 shows the architecture used for extracting object-level representation using spatial attention. A CNN composed of 1 conv layr and 2 residual blocks is first applied to the input image, and the last layer feature map is extracted. This part is the same as CNN grid feature module. 
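Before moving on to the attention-based variant, the snippet below sketches the CNN-grid-feature pathway described above: a small residual CNN produces an H × W × D feature map and every spatial location is read out as one object vector (N = H·W objects). Channel counts and strides are illustrative assumptions; only the overall structure (a conv layer followed by residual blocks) follows the text.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3 convolutional residual block (sketch of the block in Figure 3)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class GridObjects(nn.Module):
    """Each location of the final feature map becomes one object vector."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
                                 ResBlock(ch), ResBlock(ch))
    def forward(self, img):
        fmap = self.cnn(img)                       # (B, D, H, W)
        return fmap.flatten(2).transpose(1, 2)     # (B, N = H*W, D) object vectors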
A spatial attention network composed of 2 conv layer then processes information at each spatial location on the feature map, and outputs k numbers of z = (z pres, z where), corresponding to k possible objects at each location. Here, z pres is a binary value indicating if an object exists in this location, and z where is an affine transformation matrix specifying a sampling region on the feature maps. z pres, the binary variable, is sampled from Gumbel-Sigmoid distribution; , which approximates the Bernoulli distribution. We set Gumbel temperature to 0.7 throughout the experiments. For the PGM dataset we restricted k to be 1 and z where to be a translation and scaling matrix as'shapes' objects do not overlap and do not have affine transformation attributes other than scaling and translation. For all is 1, an object encoder network samples a patch from location specified by z where i using a grid sampler with a fixed window size of 4 × 4 pixels. More details of the grid sampler can be found in. The sampled patches are then processed by a conv-layer to generate object embeddings. Multiplex Edge Embeddings: Figure 2 in the main paper shows an overview of the multiplex graph architecture. While motivation and overview of architecture is explained in section 4.2 of the main paper, in this section we provide exact configurations for each part of the model. Each sub-layer of the multiplex edge is embedded by a small MLP. For PGM dataset, we use 6 parallel layers for each multiplex edge embeddings, with each layer having 32 hidden units and 8 output units. For RAVEN dataset we use 4 layers with 16 hidden units and 8 output units because RAVEN dataset contains fewer relations types than PGM dataset. Gating function is implemented as one Sigmoid fully connected layer with hidden size equal to the length of concatenated aggregated embeddings. Gating variables are element-wise multiplied with concatenated embeddings for gating effects. Gated embeddings are then processed with a final fully connected layer with hidden size 64. Graph Summarization: This module summarizes all node summary embeddings and embeddings to produce a diagram subset embedding representing relations present in the set of diagrams. We experimented with various approaches and found that keeping embeddings as feature maps and processing them with residual blocks yields the best . Background feature map embeddings are generated with one additional residual block of 48 on top of lower layer feature-extracting resnet. For object representations obtained from CNN-grid features, we can simply reshape node embeddings into a feature map, and process it with additional conv-nets to generate a feature map embeddings of the same dimension to feature map embeddings. For object representations with spatial attention, we can use another Spatial Transformer to write node summary embeddings to its corresponding locations on a canvas feature map. Finally we concatenate node summary embeddings and embeddings and process it with 2 residual blocks of size 64 to produce the relation embeddings. A.3 REASONING NETWORK Figure 6 shows the reasoning network configuration for RPM tasks. We experimented with the approach introduced in , which compute scores for each answer candidates and finally normalize the scores. We found this approach leads to severe overfitting on the RAVEN dataset, and therefore used a simpler approach to just concatenate all relation embeddings and process them with a neural net. 
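As a side note on the spatial-attention variant described earlier in this appendix, the presence variable z_pres can be sampled with a Gumbel-Sigmoid relaxation of the Bernoulli distribution so that the attention module stays differentiable. The sketch below shows one common parameterisation using the temperature of 0.7 quoted above; the exact formulation used in the paper may differ.

import torch

def gumbel_sigmoid(logits, temperature=0.7, hard=True, eps=1e-10):
    """Relaxed Bernoulli sample for the presence variable z_pres (sketch)."""
    # the difference of two Gumbel noises is logistic noise, giving a binary
    # Gumbel-softmax ("Gumbel-sigmoid") sample when pushed through a sigmoid
    u1, u2 = torch.rand_like(logits), torch.rand_like(logits)
    g1 = -torch.log(-torch.log(u1 + eps) + eps)
    g2 = -torch.log(-torch.log(u2 + eps) + eps)
    y = torch.sigmoid((logits + g1 - g2) / temperature)   # soft sample in (0, 1)
    if hard:
        # straight-through estimator: binary forward pass, soft gradient backward
        y = (y > 0.5).float() + y - y.detach()
    return y

# usage: logits come from the spatial attention net, one per location and slot
logits = torch.randn(8, 1, 20, 20)          # (batch, k = 1, H, W), illustrative shape
z_pres = gumbel_sigmoid(logits)             # 1 where an object is predicted to exist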
In practice we used two residual blocks of size 128 and 256, and a final fully connected layer with 8 units corresponding to 8 answer candidates. The output is normalized with softmax layer. For Meta-target prediction, all context relation embeddings (context rows and columns for PGM while only rows for RAVEN dataset) are summed and fed into a fully connected prediction layer with Sigmoid activation. For PGM there are 12 different meta-targets while for RAVEN there are 9. The architecture is implemented in Pytorch framework. During training, we used RAdam optimizer with learning rate 0.0001, β 1 = 0.9,β 2 = 0.999. We used batch size of 64, and distributed the training across 2 Nvidia Geforce Titan X GPUs. We early-stop training when validation accuracy stops increasing. In PGM dataset there are two types of elements present in the diagram, namely shapes and lines. These elements have different attributes such as colour and size. In the PGM dataset, five types of relations can be present in the task: {P rogression, AN D, OR, XOR, ConsistentU nion}. The RAVEN dataset, compared to PGM, does not have logic relations AN D, OR, XOR, but has additional relations Arithmetic, Constant. In addition RAVEN dataset only allow relations to be present in rows. Figure 7a and 7b show two examples from the PGM dataset(Image courtesy). The first example contains a'Progression' relation of the number of objects across diagrams in columns. The second examples contains a'XOR' relation of position of objects across diagrams in rows. In addition to shape objects, diagrams in the PGM dataset can also contain line objects that appear at fixed locations. Figure 8a and 8b show two examples of PGM tasks containing line objects. In this section we provide detailed architecture used for Search Space reduction, and present additional experimental . The node embeddings are generated by applying a Conv-Net of 4 convolutional layer (32 filters in each layer) of kernel size 3, and a fully connected layer mapping flattened final-layer feature maps to a feature vector of size 256. Edge embeddings are generated by a 3-layer MLP of 512 − 512 − 256 hidden units. Subset embeddings are generated by a fully connected layer of 512 units. The subset embeddings are gated with the gating variables and summed into a feature vector, which is then feed into the reasoning net, a 3-layer MLP with 256 − 256 − 13. The output layer contains 13 units. The first unit gives probability of currently combined answer choice being true. The rest 12 units give meta-target prediction probabilities. This is the same as. The training loss function is: In our experiment we have tested various values of λ, and found 0.01 to be the best. This model is trained with RAdam optimizer with learning rate of 0.0001 and batch size of 64. After 10 epochs of training, only gating variables of subsets that are rows and columns are above the 0.5 threshold. The Gating variables for three rows are 0.884, 0.812 and 0.832. The gating variables for three columns are 0.901, 0.845 and 0.854. All other gating variables are below 0.5. Among these, the one with highest absolute value is 0.411. Table 3 shows the top-16 ranked subsets, with each subset indexed by 2 connecting edges in the subset. Figure 9 illustrates this way of indexing the subset. For example, the first column with red inter-connecting arrows is indexed as 0-3-6. This indicates that there two edges, one connecting diagram 0 and 3, and the other connecting diagram 3-6. 
Similarly the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 is different because the 1-2-5 contains edge 1-2 and 2-5 while 2-1-5 contains edges 1-2 and 1-5. The original model in Wang et al. (2018a) uses a Siamese Conv-Net model to process two input premise diagrams and output all consistent . Convolutional layers with shared weights are first applied to two input diagrams. The top layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use a multiplex edge embeddings of 4 layers, with each layer of dimension 32. The cross-multiplexing here becomes self-multiplexing as there are only 2 diagrams (Only 1 embedding of node summary for edges from first diagram to second diagram). Final node embeddings are processed by a convolutional layer to produce the final embedding, which is also fed into the reasoning network along with the conv-net embeddings. We performed ablation study experiments to test how much does the multiplex edges affects performance. We have tested two model variants, one without any graph modules, and the other model graphs using vanilla edge embeddings produced by MLPs, on PGM dataset. We found that without graph modules, the model only achieved 83.2% test accuracy. While this is lower than MXGNet's 89.6%, it is still higher than WReN's 76.9%. This is possibly because the search space reduction, by trimming away non-contributing subsets, allow the model to learn more efficiently. The graph model with vanilla edge embeddings achieves 88.3% accuracy, only slightly lower than MXGNet with multiplex edge embeddings. This shows that while general graph neural network is a suitable model for capturing relations between objects, the multiplex edge embedding does so more efficiently by allowing parallel relation multiplexing. G ADDITIONAL GENERALIZATION PERFORMANCE ON PGM DATASET Table 3: All subsets ranked by the absolute value of their corresponding gating variables. here we provide the analysis according to Sec 4.2 and Sec 4.6 in. unfortunately sec 4.3 of this paper, namely the analysis of distractors, cannot be performed as the publicly available dataset does not include any ground truth labels about distractors, nor any labels of present objects that can be used to synthesize distractor labels. For Meta-target prediction, MXG-Net achieves 84.1% accuracy. When Metatarget is correctly predicted, the model's target prediction accuracy increases to 92.4%. When Meta-target is incorrectly predicted, the model only has 75.6% accuracy. For three logical relations the model performs best for OR relation (95.3%), and worst for XOR relation(92.6%
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByxQB1BKwH
MXGNet is a multilayer, multiplex graph based architecture which achieves good performance on various diagrammatic reasoning tasks.
Semantic structure extraction for spreadsheets includes detecting table regions, recognizing structural components and classifying cell types. Automatic semantic structure extraction is key to automatic data transformation from various table structures into canonical schema so as to enable data analysis and knowledge discovery. However, they are challenged by the diverse table structures and the spatial-correlated semantics on cell grids. To learn spatial correlations and capture semantics on spreadsheets, we have developed a novel learning-based framework for spreadsheet semantic structure extraction. First, we propose a multi-task framework that learns table region, structural components and cell types jointly; second, we leverage the advances of the recent language model to capture semantics in each cell value; third, we build a large human-labeled dataset with broad coverage of table structures. Our evaluation shows that our proposed multi-task framework is highly effective that outperforms the of training each task separately. Spreadsheets are the most popular end-user development tool for data management and analysis. Unlike programming languages or databases, no syntax, data models or even vague standards are enforced for spreadsheets. Figure1(a) shows a real-world spreadsheet. To enable intelligent data analysis and knowledge discovery for the data in range B4:H24, one needs to manually transform the data to a standard form as shown in Figure1(e). It would be highly desirable to develop techniques to extract the semantic structure information for automated spreadsheet data transformation. Semantic structure extraction entails three chained tasks to: We also show the transformed data in Figure1(e), where different cell types are highlighted using the same coloring scheme as in Figure1(d). Learning the semantic structure for spreadsheets is challenging. While table detection is confounded by the diverse multi-table layouts, component recognition is confounded by the various structures of table components, and cell type classification requires semantic-level understanding of cell values. Moreover, the tasks are chained in the sense that latter tasks need to leverage the outcomes of prior tasks. This poses challenges on preventing error propagation, but also provides opportunities for utilizing additional cues from other tasks to improve the current task. For example, header extraction may help table detection since headers need to be inside the table region and vice versa. In this paper, we present a multi-task learning framework to solve spreadsheet table detection, component recognition, and cell type classification jointly. Our contributions are as follows: 1. We formulate spreadsheet table structure extraction as a coarse-to-fine process including table detection, component recognition, and cell type classification. We also build a large labeled dataset. 2. To capture the rich information in spreadsheet cells for model training, we devise a featurization scheme containing both hand-crafted features and model-based semantic representations. 3. We propose a multi-task framework that can be trained to simultaneously locate table ranges, recognize table components and extract cell types. Our evaluation shows that the proposed multi-task framework is highly effective that outperforms the of training each task separately. Cell type classification is the task of classifying each cell into a certain type such as value, value name, index, and index name. 
A value is a basic unit in the value region. A value name is a summary term that describes values. As shown in Figure1(a), "Cost" at E6 is a value name to describe the values in E8:H24. After the data extraction, as shown in Figure1(e), "Cost" at D1 is the label of Column D. An index refers to individual values that can be used for indexing data records. In Figure1(a), "January" -"October" at E5:H5 are indexes of columns E -H respectively. A group of indexes is used to breakdown the dataset into subsets. After data transformation, it will form a single data field as Column C shows in Figure1(e). An index name is a summary term that describes the indexes. In the previous example, " Month" is the index name of indexes "January" -"October". After data transformation, the " Month" in Figure1(a) corresponds to the column label at C1 in Figure1(e). The web-crawled WebSheet dataset contains 4,290,022 sheets with broad coverage of diverse table structures. We propose a human-labeled dataset, SemanticSheet, for semantic table structure extraction. It includes: 22,176 tables with annotated bounding boxes, which are sampled from WebSheet using an active learning method; 9,053 tables with our structural component annotations, which are randomly sampled from the annotated bounding boxes; 3,503 tables with our cell type annotations, which are sampled using heuristics to balance different table structures based on the structural component annotations. To control labeling quality, all labeled tables have to be verified by an experienced human labeler and be revised until the overall agreement achieves 95%. To capture effective cell features, we leverage both hand-crafted features and learning-based semantic representations. In general, there are four major information sources of a cell, i.e., value string, data format, cell format, and formula. proposed 20 hand-crafted features and proved its high effectiveness in table detection. But this featurization schema lacks high-level semantic features, which are important for table semantic structure extraction, especially the task of cell type classification. The recently proposed language models in the natural language processing domain enable to learn embeddings of token sequences with remarkable generalizability. This motivates us to leverage the recent advances in language modeling to derive such high-level semantics in the spreadsheet domain. We incorporate BERT to extract semantic embeddings for cell values. To control the complexity in our model, a point-wise CNN is adopted to reduce the embedding size for each cell value to 32. This CNN model can be jointly trained with the other modules in our framework. Figure2 shows our framework, including a shared backbone and three branches for multi-task learning. Convolutional neural network backbone: Since our featurization method only captures cell-level information, we adopt a Fully Convolutional Neural Network (FCNN) to learn spatial correlations between cells. The FCNN backbone enables our framework to learn shared feature representations, which are able to generalize across multiple different tasks. Moreover, to leverage the relationships in our three coarse-to-fine tasks, the component recognition branch takes the of the table detection as an additional feature channel, and the cell type classification branch takes the of the component recognition as it's additional feature channel. We take ResNets as the backbone in our framework and exclude all pooling layers to avoid losing cell-level precision. 
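To make the featurisation concrete, the sketch below embeds each cell value with a pre-trained BERT encoder and reduces the 768-dimensional embedding to 32 channels with a point-wise (1×1) convolution, which can then be concatenated with the 20 hand-crafted feature channels per cell. The checkpoint name, the use of the [CLS] token, and freezing BERT are assumptions for illustration; only the 20 + 32 channel split comes from the text.

import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def cell_text_embedding(value: str) -> torch.Tensor:
    """768-d BERT embedding of one cell value (using the [CLS] token)."""
    toks = tokenizer(value or "", return_tensors="pt", truncation=True, max_length=32)
    return bert(**toks).last_hidden_state[:, 0, :].squeeze(0)   # (768,)

class PointwiseReducer(nn.Module):
    """1x1 convolution reducing per-cell BERT embeddings to 32 channels."""
    def __init__(self, in_ch=768, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
    def forward(self, bert_grid):
        # bert_grid: (B, 768, rows, cols) -- one embedding per spreadsheet cell
        return self.conv(bert_grid)                              # (B, 32, rows, cols)

# per-cell input to the shared FCNN backbone: 20 hand-crafted + 32 semantic channels
# features = torch.cat([handcrafted_grid, PointwiseReducer()(bert_grid)], dim=1)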
The CNNs for structural component recognition, and cell type classification consist of five convolutional layers. The whole dataset is randomly divided into 80% training set and 20% test set. The loss functions for different tasks are added together for joint training. For table detection, we adopt the Error-of-Boundary (EoB) metric to measure how precisely the detection is aligned to the ground truth bounding box. For structural component recognition, we calculated the accuracy for top and left header separation lines. For cell type classification, we report the average F1 for the index, index name, and value name predictions. For the comparison study, we adapt Mask RCNN, the state-of-the-art multi-task method for image object detection and segmentation. To evaluate the effectiveness of multi-task joint training, we conduct a comparison between single-task and multi-task training of our proposed method. For table detection evaluation, as shown in the left part of Table 1, our method achieves improvements over all baselines. Moreover, the multi-task version of our method performs better than its single-task version in both EoB-0 and EoB-2, indicating that other tasks help with table detection. We attribute such improvements to the learning of intrinsic relationships between these tasks by training a joint model. For the component recognition evaluation, compared with single-task training, our multi-task framework also achieves 0.8% and 2.4% accuracy gains. And the right side of Table 1 shows the of cell type classification, our method still achieves a large margin of improvement over Mask R-CNN. Compared with single-task training, the joint framework improves the value name, index, and index name predictions by 1.6%, 0.7%, and 1.5% respectively. We design a comparison experiment to evaluate the effectiveness of language models. For the baseline (multi-task w/o BERT), we only use the 20-dimensional hand-crafted features; while for the treatment (multi-task with BERT), we use the full feature set including the semantic features extracted by BERT. The comparison are shown in Table 1, indicating that by incorporating the semantic features, our proposed model achieves higher (or on-par) accuracy in all these three tasks. Specifically, since cell type classification heavily depends on semantics, there is 6.4% F1 gain for the index name prediction and 5.7% F1 gain for the value name prediction.
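As a final note on the training objective, the joint loss mentioned above (the losses of the three tasks added together) can be sketched as follows. The exact prediction formats of the three branches are not fully specified in this section, so the per-cell targets and loss choices below are assumptions.

import torch.nn as nn
import torch.nn.functional as F

def multitask_loss(det_logits, comp_logits, cell_logits, targets, w=(1.0, 1.0, 1.0)):
    """Sum of the three task losses for joint training (sketch)."""
    # table detection as a per-cell binary mask (assumed format)
    l_det = F.binary_cross_entropy_with_logits(det_logits, targets["table_mask"])
    # component recognition and cell type classification as per-cell classes
    l_comp = F.cross_entropy(comp_logits, targets["component"])
    l_cell = F.cross_entropy(cell_logits, targets["cell_type"])
    return w[0] * l_det + w[1] * l_comp + w[2] * l_cell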
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1x3GTq5IB
We propose a novel multi-task framework that learns table detection, semantic component recognition and cell type classification for spreadsheet tables with promising results.
Open-domain dialogue generation has gained increasing attention in Natural Language Processing. Comparing these methods requires a holistic means of dialogue evaluation. Human ratings are deemed as the gold standard. As human evaluation is inefficient and costly, an automated substitute is desirable. In this paper, we propose holistic evaluation metrics which capture both the quality and diversity of dialogues. Our metrics consists of GPT-2 based context coherence between sentences in a dialogue, GPT-2 based fluency in phrasing, and, $n$-gram based diversity in responses to augmented queries. The empirical validity of our metrics is demonstrated by strong correlation with human judgments. We provide the associated code, datasets and human ratings. Learning to communicate is a key capacity of intelligent agents. Research on enabling a machine to have meaningful and natural conversation with humans plays a fundamental role in developing artificial general intelligence, as can be seen in the formulation of Turing test . Recently open-domain or non-task-oriented dialogue systems have attracted a surge of research interest (; ; ; ; ; . Moreover, dialogue generation has a wide range of industrial applications such as Microsoft's Xiaoice and Baidu's Dumi. Evaluating models of dialogue generation in an efficient manner poses a significant challenge in developing dialogue systems. The prevalent method of dialogue evaluation is human-based rating under a given rubric. This method of evaluation is deemed impracticable, when various variations in the model and sets of hyperparameters are needed. These drawbacks may hinder the research progress and render the human evaluation approach not scalable. Previous automatic evaluation metrics generally focus on the quality of the dialogue generation . In this work, we propose holistic metrics which considers both the quality and diversity of generated dialogues. Specifically, we consider context coherence of a dialogue (i.e., the meaningfulness of a response within the prior context of the dialogue), language fluency of generated responses (i.e., the quality of phrasing relative to a human native speaker), and, response diversity of a set of generated responses (i.e., the variety in meaning and word choice of responses). A strong language model such as GPT-2 naturally captures and. Therefore, we propose to recruit and fine-tune GPT-2 as a measure of quality. Moreover, we utilize n-gram based entropy to capture. Specifically, we propose to measure response diversity under augmented queries with controlled diversity. Two such augmentation strategies are considered. Finally, extensive human evaluations are conducted to substantiate the validity of our proposed metrics. Evaluation metrics based on heuristics have been shown to align well with human judgments and widely applied in various language generation tasks. For machine translation, BLUE computes n-gram precision, whereas METEOR takes into account both precision and recall. For summarization, ROUGE also considers both precision and recall by calculating F-measure. These n-gram based metrics are well-suited for the generation tasks that are more source-determined or low conditional entropy such as translation, image captioning, and summarization. Some dialogue studies adopted these metrics to evaluate the quality of generated conversation responses (; ;). 
They nevertheless are not suitable for open-ended generations or high conditional entropy task like dialogue generation where a diverse range of generations are acceptable conditional on a query. conduct extensive empirical studies on these metrics (e.g., BLEU, METEOR, and ROUGE) to test their effectiveness on evaluating dialogue generation and find limited relation between these automatic metrics and human judgments. Table 1: An example of low BLEU score and low semantic similarity between model response and reference response while the generated response appears reasonable within the dialogue. The word-overlap metrics (e.g., BLUE) fail to capture the semantic similarity between model and reference responses. The following works leverage the distributed representation learned in neural network models to capture semantic similarity among context, model response, and reference response. collect a dataset of human scores and train a hierarchical recurrent neural network (RNN) to predict human-like scores to input responses given the context, ing in an automatic metric that has a medium level correlation with human judgments. Obtaining this metric however requires a large dataset of human-annotated scores, thus rendering this approach less flexible and extensible. proposes a referenced metric and unreferenced metric blended evaluation routine (RUBER) for open-domain dialogue systems. This blended metric is a combination of two metrics. A referenced metric measures the similarity between model-generated and reference responses using word-embeddings. An unreferenced metric captures the relevance between the query and response. It is obtained by training a neural network classifier to determine whether a response is appropriate. The positive examples are the references, while the negative examples are the reference responses randomly chosen from the dataset, hence avoiding the need of human-annotated data. After training, the softmax score is utlized to measure whether the generated response is coherent with the query. Attempting to improve explores to use contextualized embeddings from BERT. The BERT-based unreferenced metric improves over the word-embedding-based RUBER unreferenced metric. Interestingly, they show that the combined metric has a reduced correlation with human judgments than the unreferenced metric alone. Although this finding is counterintuitive, it is consistent with the characteristics of opendomain dialogue that a range of diverse responses are reasonable given a query. Hence a response can be acceptable even if it does not align well with the reference either in terms of word-overlap or semantic embedding. See Table 1 for an example. Prior art on automatic metrics focuses on the quality, mostly the relevance to the query, of the generated responses. A good evaluation metric should not only measure the quality of generation, but also the diversity of generation, which is especially important for open-ended tasks like dialogue or story generation . The current work proposes metrics to holistically evaluate the quality and diversity of open-domain dialogue generation. One key component of dialogue response generation is its coherence to the query as explored in and. Prior work measures the coherence based on the Softmax score of a trained binary classifier. Here we explore an alternative approach based on language modeling . A language model can naturally capture the coherence of the response to the query without resorting to an ad-hoc classifier. 
In particular, the query coherence metric is computed as the conditional probability of the response given the query, which reflects whether the response appropriately follows the query under a language model. We adopt a transfer learning approach to obtain a powerful language model. Besides coherence, a good response should be fluent. Fluency is often measured by a language model . We define the response fluency score as negative perplexity of generated responses. While the aforementioned metrics attempt to measure the quality of text generation, some n-gram based metric has also been utilized to measure diversity. and compute unigram entropy across all generated utterances to measure the diversity. This metric might be an improper metric for diversity since the generated utterances given various queries are generally diverse. In our experiments, we observe constantly high diversity in terms of human ratings and n-gram based entropy. Instead we approach diversity evaluation of a dialogue model with controlled queries, whereby we control the diversity of the queries while evaluating the diversity of the responses. Controlling query diversity involves minimizing diversity in both meaning and word use and avoiding feeding the dialogue models identical inputs. A dialogue model with poor diversity always generates responses with the same phrases and words, whereas an ideal model produces varying words and sentence structures. The controlled queries are generated by augmenting the original query with sentences close in meaning and slightly different in word use. For the purpose of generality, we propose WordNet substitution and Conditional Text Generator to generate controlled queries. The n-gram entropy across the responses given the controlled queries is deemed as a diversity measure. In this work, we propose a metric to holistically evaluate open-dialogue models by taking into consideration both quality and diversity of generated dialogues. Our contributions are summarized below. • Both context coherence and response fluency (quality metrics) are naturally captured by metrics based on a strong language model. Empirically, we demonstrate that the language model based metrics clearly outperform previous relevant metrics. • In view of the complexity of diversity evaluation, we propose two effective approaches to generate augmented utterances with controlled diversity: word substitution and text generator with k-best decoder. Our experiments show that the diversity metric strongly correlates with human judgments on the response diversity. Moreover, our proposed datasets significantly improve the agreement between human evaluation, leading to a more accurate and straightforward human annotation. • We release the datasets, human ratings and implementation of the metric as open-source contribution to pave the way towards further research. 2.1 CONTEXT COHERENCE Language models, which predict the next token given previous tokens, naturally capture the coherence between sentences and particularly the dialogue query and response in our case. GPT-2 is a large-scale pre-trained language model based on the transformer architecture . It is trained on a vast amount of diverse data and demonstrates impressive text generation capabilities. In order to better capture the dependence between the queries and responses, GPT-2 can be fine-tuned on the dialogue dataset of interest. Suppose a query q contains tokens {q t : t = 1, ..., T q} and a response r has tokens {r t : t = 1, ..., T r}. 
Let P denote the fine-tuned GPT-2; the raw context coherence is then defined as the log-likelihood of the response conditional on the query, normalised by the length of the response: c_raw(r|q) = (1/T_r) Σ_{t=1}^{T_r} log P(r_t | r_<t, q). Note that c_raw(r|q) is some negative number and unbounded from below. A single value is then hard to interpret absolutely and can only be interpreted relative to other values. Also, the unboundedness renders it prone to extreme values. Hence, a normalised score is proposed instead. Since the score distribution varies as a function of the dataset, the lower bound is defined as the 5th percentile, denoted c_5th, instead of some arbitrary value. The normalised score c(r|q) is then obtained by rescaling c_raw(r|q) relative to this lower bound (values below c_5th are clipped), so that it ranges from 0 to 1. To capture the fluency of responses, we also adopt the pretrained language model GPT-2. In particular, the raw response fluency score f_raw(r) is defined as the negative perplexity of the response under the language model. Due to the negativeness and unboundedness of the raw score, a normalised version f(r), analogous to the normalised context coherence score, is proposed. We measure response diversity utilising augmented queries with controlled diversity. Controlling query diversity involves minimising diversity in both meaning and word use while avoiding feeding the dialogue models identical inputs. We thus aim to augment the original query with sentences close in meaning and slightly different in word use. To achieve this, two augmentation approaches are proposed: WordNet Substitution (WS) and Conditional Text Generator (CTG). WordNet Substitution (WS) is a word-level manipulation method suitable for both single-turn and multi-turn datasets. It is achieved by first using a Part-Of-Speech (POS) tagger to tag tokens in a query. Then four augmented inputs are generated by substituting verbs, nouns, adjectives & adverbs, or all of the above with synonyms from WordNet. Different from WS, Conditional Text Generator (CTG) is an approach to testing language diversity using multi-turn datasets. It requires a sequence-to-sequence or a transformer model to produce augmented queries conditioned on the context, which is defined as the utterance history prior to the selected query. For instance, suppose {u_1, ..., u_{t-1}} denotes the utterance history and u_t indicates the query to be augmented; then the top-5 beams u_t^(1), ..., u_t^(5) from the CTG model conditioned on the concatenated utterance history [u_1; ...; u_{t-1}] are input into the model to be evaluated. Given a set of augmented queries for the i-th query with controlled diversity, the responses R_i are generated by the model under test. The n-gram entropy for the i-th sample is then computed as H_n(R_i) = − Σ_g p(g) log p(g), where p(g) is the probability of n-gram g in R_i. The diversity metric is defined as this entropy averaged over all queries in the dataset. 3 EXPERIMENTS To facilitate comparison with prior work, the DailyDialog dataset is adopted for the empirical analysis of our proposed metrics. This dataset contains 13,118 high-quality multi-turn dialogues. The dialogues are split into query-response pairs with a 42,000 / 3,700 / 3,900 train-test-validation split. A sequence-to-sequence (seq2seq) model with attention was trained on the train and validation partitions to generate dialogue responses. The implementation in OpenNMT was used to train the model. The seq2seq consists of a 2-layer LSTM with 500 hidden units on both the encoder and decoder. The model was trained with SGD and a learning rate of 1. To obtain responses on a wide spectrum of quality and diversity, we sample responses with top-k sampling where k ∈ {1, 50, 500}.
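For concreteness, the sketch below computes the raw coherence score (length-normalised log P(r|q), with query tokens masked out of the loss) and the n-gram entropy used for diversity, using the public GPT-2 checkpoint from the transformers library. The percentile normalisation, fine-tuning and the fluency score are omitted, and the checkpoint choice is an assumption about tooling rather than the paper's exact setup.

import math
from collections import Counter
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def raw_coherence(query: str, response: str) -> float:
    """Per-token log P(response | query) under the language model."""
    q_ids = tok(query, return_tensors="pt").input_ids
    r_ids = tok(" " + response, return_tensors="pt").input_ids
    ids = torch.cat([q_ids, r_ids], dim=1)
    labels = ids.clone()
    labels[:, : q_ids.size(1)] = -100        # ignore query tokens in the loss
    out = lm(ids, labels=labels)              # out.loss = mean NLL over response tokens
    return -out.loss.item()

def ngram_entropy(responses, n=1):
    """Entropy of the n-gram distribution over a set of responses R_i."""
    counts = Counter()
    for r in responses:
        toks = r.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())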
The base GPT-2 model with 12 layers was used to compute our metrics. We also experimented with the medium GPT-2 with 24 layers and found that the were generally the same. And larger models (the 36-and 48-layers GPT-2) might pose computational difficulty for some researchers and thus were not considered. The GPT-2 model was fine-tuned on the training and validation data. In fine-tuning, the query and response were concatenated together as a single sentence to feed into GPT-2. The perplexity of the fine-tuned language model on the test dataset was 16.5. WordNet substitution and conditional text generator were used to augment diversity-controlled queries. The Stanford POS tagger and the WordNet by were utilized to do WordNet substitution. As for conditional text generator, we trained an Open-NMT Transformer on the training and validation splits for query augmentation, which was applied to the testing dataset to augment the query with the top-4 beams. To assess validity of our proposed metrics, we utilize Amazon Turk to collect high quality human ratings from 10 subjects. For each metric, we select a set of generated query-response pairs (or responses only) to be presented to humans and each datapoint is to be rated from 1 to 5, with 1 being the worst and 5 being the best in generation quality corresponding to that metric. On both Context Coherence and Fluency metrics, we select 200 datapoints with diverse range of generation quality. There are 200 query-response pairs to be rated for Context Coherence and 200 responses to be rated for Fluency. For Diversity metric, we select 100 datapoints, totaling 500 responses, to be rated in groups of 5 all of which are conditioned on the controlled inputs generated by a CTG given the same context. After Amazon Turk are collected, we then compute Pearson Correlation between our evaluation metrics and human ratings to assess the validity of our metric and selected datasets. We normalize the human rating scores from 0 to 1. Query Generated Reply Human Score RUBER Ours Of course. A two-week paid vacation a year, a five-day workweek. So, if I get a margin card, I could take a margin card for you to travel to a company as soon as possible. 0.20 0.97 0.19 Table 2: Case study. Both our coherence metric and the human evaluation agreed that the generated response is not coherent with the given query, while RUBER indicated this reply is coherent. 4.1 CONTEXT COHERENCE Table 3 demonstrates the Pearson and Spearman correlations between the proposed context coherence metric and human judgments. Also, the were compared to the previous best-performing automatic metric, RUBER with BERT embeddings . Clearly both our language model based coherence metric show higher correlation with human judgments than the classifier-based metric, RUBER. In addition, we compared the proposed metric with a similar metric based on a GPT-2 language model without fine-tuning on the target dataset. The fine-tuned version improved the , indicating that fine-tuning on the dialogue dataset enables the language model better capture the dependency between the queries and replies. Interestingly, even the metric based on the language model without fine-tuning correlated with human ratings stronger than RUBER. We also examined the inter-rater reliability. It is computed by holding out the ratings of one rater at a time, calculating its correlation with the average of other rater's judgments, and finally averaging across and taking the maximum all held-out correlation scores. 
The inter-rater reliability also support the strong performance our proposed context coherence metric since the correlation between the automatic metric and human evaluation was close to the inter-rater correlations. Table 2 displays a case study. Both our coherence metric and the human evaluation agreed that the generated response is not coherent with the given query, while RUBER indicated this reply is coherent. This might be because RUBER simply compares the embeddings of the query and response and business travel related words in the query such as vacation, workweek and in the reply such as travel, company make RUBER judge that they are similar. Table 3: Correlation between RUBER+BERT and context coherence metric c(r|q) with human ratings (without and with fine-tuning of GPT-2). Our findings show that the proposed fluency metric f (r) is highly correlated with human judgments. Table 4 summarizes the relation between our proposed fluency metric and the human-ratings in terms of Pearson and Spearman correlation. The importance of fine-tuning GPT-2 (as outlined in Section 3.3) is evident. We observe an increase from 0.43 to 0.82 in Pearson correlation. In addition, Figure 2 details the effect of fine-tuning. Notably, a correction of outliers occurs. Moreover, the consistency of human ratings is demonstrated by high mean pair-wise correlations between pairs of ratings. Table 4: Correlation between fluency metric f (r) and human ratings without and with fine-tuning of GPT-2. Pairwise mean and max correlations of human ratings. Table 5 shows the evaluation of our generated datasets using WS and CTG. Unigram, bigram, and trigram entropy are used to calculate responses' diversity and are compared to human ratings in Pearson and Spearman Correlation. Note that automatic evaluations on our datasets consistently achieve higher correlation compared to the baseline dataset. We also show our datasets evaluated using three different diversity metrics in Figure 3. The figures show correlations between normalized human ratings and corresponding n-gram entropy. A line of best-fit is drawn to indicate their correlations, and for plotting purpose, each datapoint after normalization is added a random noise sampled from N (0, 0.05 2). Clearly, WS and CTG Dataset show more clustered datapoints and slopes closer to 1 than our baseline dataset, a consistent with the reported correlations. Table 6 shows inter-rater Pearson Correlation, Spearman correlations, and variance in human ratings. Interestingly, both WS Dataset and CTG Dataset display similarly high correlations, indicating that raters generally agree with each other. WS Dataset is also lowest in Human Variance, suggesting human raters are more certain about their ratings. Baseline Dataset, on the other hand, has poor inter-rater correlations. This is most likely due to the uncontrolled nature of input sentences such that outputs of evaluated models are generally diverse, making it difficult for humans to judge diversity performance of the model. Furthermore, both of our datasets achieve scores close to that of their corresponding mean inter-rater correlations, indicating that the evaluation metric on our datasets can reveal diversity of a dialog system consistent with humans. This paper provides a holistic and automatic evaluation method of open-domain dialogue models. In contrast to prior art, our means of evaluation captures not only the quality of generation, but also the diversity of responses. 
We recruit GPT-2 as a strong language model to evaluate the fluency and context-coherency of a dialogue. For diversity evaluation, the diversity of queries is controlled while the diversity of responses is evaluated by n-gram entropy. Two methods for controlled diversity are proposed, WordNet Substitution and Conditional Text Generator. The proposed metrics show strong correlation with human judgments. We are providing the implementations of our proposed metrics, associated fine-tuned models and datasets to accelerate the research on open-domain dialogue systems. It is our hope the proposed holistic metrics may pave the way towards comparability of open-domain dialogue methods.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJg_FgBtPH
We propose automatic metrics to holistically evaluate open-dialogue generation and they strongly correlate with human evaluation.
Convolution Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features. Recently, there has been an increasing interest in extending CNNs to the general spatial domain. Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood. In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation. Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains. Convolution Neural Network (CNN) BID14 has been proven to be an efficient model family in extracting hierarchical local patterns from grid-structured data, which has significantly advanced the state-of-the-art performance of a wide range of machine learning tasks, including image classification, object detection and audio recognition BID15. Recently, growing attention has been paid to dealing with data with an underlying graph/non-Euclidean structure, such as prediction tasks in sensor networks BID23, transportation systems, and 3D shape correspondence application in the computation graphics BID2. How to replicate the success of CNNs for manifold-structured data remains an open challenge. Many graph convolution and geometric convolution methods have been proposed recently. The spectral convolution methods BID3 BID5 BID11 are the mainstream algorithm developed as the graph convolution methods. Because their theory is based on the graph Fourier analysis BID20 ), one of their major limitations is that in this model the knowledge learned from different graphs is not transferrable BID19. Other group of approaches is geometric convolution methods, which focuses on various ways to leverage spatial information about nodes BID17 BID19. Existing models mentioned above are either not capable of capturing spatial-wise local information as in the standard convolution, or tend to have very large parameter space and hence, are prone to overfitting. As a , both the spectral and the geometric convolution methods have not produced the comparable to CNNs on related tasks. Such a misalignment makes it harder to leverage the rapidly developing 2D-convolution techniques in the generic spatial domain. We note graph convolution methods are also widely used in the pure graph structure data, like citation networks and social networks BID11. Our paper will only focus on the data with the spatial information. In this paper, we provide a unified view of the graph convolution and traditional 2D-convolution methods with the label propagation process BID24. It helps us better understand and compare the difference between them. Based on it, we propose a novel Depthwise Separable Graph Convolution (DSGC), which inherits the strength of depthwise separable convolution that has been extensively used in different state-of-the-art image classification frameworks including Inception Network BID22, Xception Network BID4 and MobileNet . 
Compared with previous graph and geometric methods, the DSGC is more expressive and aligns closer to the depthwise separable convolution network, and shares the desirable characteristic of small parameter size as in the depthwise separable convolution. In experiments section, we evaluate the DSGC and baselines in three different machine learning tasks. The experiment show that the performance of the proposed method is close to the standard convolution network in the image classification task on CIFAR dataset. And it outperforms previous graph convolution and geometric convolution methods in all tasks. Furthermore, we demonstrate that the proposed method can easily leverage the advanced technique developed for the standard convolution network to enhance the model performance, such as the Inception module BID22, the DenseNet architecture BID8 and the Squeeze-and-Excitation block BID7.The main contribution of this paper is threefold:• A unified view of traditional 2D-convolution and graph convolution methods by introducing depthwise separable convolution.• A novel Depthwise Separable Graph Convolution (DSGC) for spatial domain data.• We demonstrate the efficiency of the DSGC with extensive experiments and show that it can facilitate the advanced technique of the standard convolution network to improve the model performance. We provide a unified view of label propagation and graph convolution by showing that they are different ways to aggregate local information over the graphs or data manifolds. We then discuss connections between graph convolution and depthwise separable convolution over the 2D-grid graph, which motivates us to propose a new formulation that subsumes both methods as special cases. Unless otherwise specified, we denote a matrix by X, the i-th row in the matrix by x i, and (i, j)-th element in the matrix by x ij. Superscripts are used to distinguish different matrices when necessary. All the operations being discussed below can be viewed as a function that transforms input feature maps X ∈ R N ×P to output feature maps Y ∈ R N ×Q, where N is the number of nodes in the graph and P, Q are the number of input and features (channels) associated with each node respectively. We use N (i) to denote the set of neighbors for i-th node. Label propagation (LP) BID24 ) is a classic approach to aggregate local information over a graph. The basic version of LP can be written as DISPLAYFORM0 where W is a normalized adjacency matrix that summarizes the graph structure. The intuition is that the value of node i is updated via a weighted combination of its neighbors. Graph convolution BID11 ) (GC) is a recently proposed graph convolution operator that can be viewed as an extension of LP, formulated as DISPLAYFORM0 where W is a symmetrically normalized adjacency matrix with a ridge on its diagonal, which is a deterministic matrix given the input data, and U ∈ R P ×Q represents a linear transformation. Following the BID4, W is named as the spatial filter and U is named as the channel filter. The original form of graph convolution, such as the Spectral Network BID3, is derived from graph signal processing BID20 as a generalization of Fourier analysis to the domain of graphs. Several limitations of the Spectral Network, such as its high computation To Compare LP with GC, the former only utilizes the graphical information, while the latter has an additional linear transformation of x j to into the intermediate representation z j via matrix U. 
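The contrast between label propagation (spatial mixing only) and graph convolution (spatial mixing followed by a channel transformation) can be written in a few lines of numpy. This is a sketch with random data; the symmetric normalization with self-loops follows the usual GCN recipe and is an assumption about the exact normalization used.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops, D^{-1/2}(A+I)D^{-1/2}, as in GCN."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

rng = np.random.default_rng(0)
N, P, Q = 5, 8, 4
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)                     # undirected graph
X = rng.standard_normal((N, P))            # input features, one row per node

W = normalized_adjacency(A)
Y_lp = W @ X                               # label propagation: spatial mixing only
U = rng.standard_normal((P, Q))
Y_gc = W @ X @ U                           # graph convolution: spatial mixing + channel mixing
print(Y_lp.shape, Y_gc.shape)              # (5, 8) and (5, 4)
```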
This additional step makes GC capable of capturing the dependencies among features (channels), which yields performance improvement. For a full 2d-convolution layer, the convolution filters encode channel correlation and spatial correlation simultaneously BID4. Then depthwise separable convolution (DSC) is proposed under the intuition that the channel correlation and spatial correlation could be decoupled, and has been found successful in several modern architectures for image classification BID4. We choose to focus on DSC (instead of full convolution) because of its strong empirical performance with a small number of parameters, and its intimate connections to GC which will be revealed in the following. And we discuss the full convolution formulation with the label propagation process in Section 5.By viewing each pixel in the image as a node, DSC can be formulated in a graph-based fashion DISPLAYFORM0 where ∆ ij denotes the relative position of pixel i and pixel j on the image, and w (q) can be viewed as a lookup table with the pixel-pixel offset ∆ ij as the key, according to the stationarity (weightsharing) assumption of convolution. In the context of images, N (i) denotes the index set of surrounding pixels for i-th pixel, which is equivalent to the k-nearest neighbor set under the Euclidean distant metric. For example, the size of N (i), or k, is 9 for a 3 × 3 convolution filter (considering self-loop). We notice that the formulation of GC and DSC is similar except that 1. Spatial filters in DSC are channel-specific, while GC uses a global spatial filter.2. Spatial filters in DSC are learned from the data (under the stationarity constraints), while the filter in GC is a constant matrix with the given input. On the one hand, DSC does not apply to the domain of generic spatial data lying on the manifold where the space of ∆ ij (defined as the difference of the spatial coordinates between node i and node j) can be infinite. On the other hand, GC suffers from the restriction that all channels have to share the same given spatial filter. This heavily constrains the model capacity, which would be more severe when the deeper network structure is used. In the context of graphs, it would be desirable to have multiple spatial filters-to capture a diverse set of diffusion patterns over the graph or data manifold, which is the same as the convolution filters in the image domain. To address these limitations, we propose Depthwise Separable Graph Convolution (DSGC) which naturally generalizes both GC and DSC DISPLAYFORM0 where we slightly abuse the notation by overloading w (q) (·) as a function, which maps ∆ ij to a real number, and N (i) still represents the k-nearest neighbor sets. To understand the proposed formulation, notice 1. Different from DSC, the stationarity requirement is implemented in a "soft" manner by defining a function instead of by the set of equality constraints. In our experiment, each w (q) (·) is a function parameterized by a two-layer MLP.2. Different from GC, channel-specific convolution is enabled by learning multiple spatial convolution filters. This amounts to simultaneously constructing multiple graphs under the different node-node similarity metrices, where the metrices are implicitly defined by neural networks and hence, are jointly optimized during the training. Overfitting is a common issue in graph-based applications, due to limited data available. 
To alleviate this issue, we propose an option to group the channels into C groups, where D = Q/C channels in the same group would share the same filter. DISPLAYFORM1 The context of each node in any given generic graph, namely its connection pattern with neighbors, can be non-stationary over different parts of the graph, while it is constant in the 2d-grid graphs. It is, therefore, a common practice to normalize the adjacency matrix in order to make the nodes adaptive to their own contexts (Eq.1). A natural way to carry out normalization for DSGC is to apply a softmax function over the predicted spatial filter weights at each node, which can be written asw i = sof tmax(w i), where w i stands for the i-th row of spatial filter W learned by a neural network. We empirically find normalization leads to better performance and significantly speeds up the convergence. In the following experiments, we use the proposed depthwise separable graph convolution with a linear highway bypass as the basic convolution component and imitate the rest setting of the standard convolution neural network to solve different machine learning tasks. We evaluate the proposed Depthwise Separable Graph Convolution (DSGC) method with representative baselines in the prediction tasks of image classification, time series forecasting, and document categorization. The algorithms are implemented in PyTorch; all the data and the code are made publicly accessible 1. For controlled experiments, all the graph convolution methods share the same empirical settings unless otherwise specified, including network structures, the dimension of latent factors, and so on. The optimization algorithm is applied to all models. The neural network used to model the spatial convolution filter (w (q) (·)) in Eq.4 is a two-layers MLP with 256 hidden dimension and tanh activation function. We have conducted ablation tests with the two-layer MLP by changing the number of layers and activation function of each hidden layer, and by trying several weight sharing strategies. The are very similar; the two-layer MLP provides a reasonable performance with the shortest running time. Appendix A contains more details, such as the network architecture and model hyper-parameters. We conduct experiments on CIFAR10 and CIFAR100 BID12, which are popular benchmark datasets in image classification. Both sets contain 60000 images with 32 × 32 pixels but CIFAR10 has 10 category labels and CIFAR100 has 100 category labels. Each image is typically treated as a 32 × 32 grid structure for standard image-based convolution. To enable the comparison on generic graphs, we create the modified versions of CIFAR10 and CIFAR100, respectively, by subsampling only 25% of the pixels from each graph. As illustrated in FIG1, the subsampling in irregularly scattered nodes for each image. For comparison we include the traditional 2d convolution and graph convolution networks as baselines, including standard CNN; Xception network BID4 which uses the depthwise separable convolution; DCNN BID0, the method using multi-hops random walk as the graph filters; ChebyNet BID5, the method using Chebyshev polynomial to approximate the Fourier transformation of (irregular) graphs; GCN which is described in Section 2; MoNet BID19, the method using Gaussian function to define the propagation weights over (irregular) graphs. For a fair comparison, we use the VGG13 architecture BID21 in all the methods above as the basic platform, and replace the convolution layers according to the methods. 
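The following PyTorch sketch is one reading of the DSGC layer (Eq. 4) with the channel grouping and per-node softmax normalization described above: an MLP maps the node-pair offset features to one spatial weight per group, the weights are normalized over each node's neighbors, and a pointwise linear layer plays the role of the channel filter. It is an illustration under these assumptions, not the authors' released implementation, and the linear highway bypass is omitted.

```python
import torch
import torch.nn as nn

class DSGCLayer(nn.Module):
    """Sketch of a depthwise separable graph convolution layer, assuming precomputed
    k-nearest-neighbor indices and node-pair offset features Delta_ij."""
    def __init__(self, in_ch, out_ch, delta_dim, groups=4, hidden=256):
        super().__init__()
        assert out_ch % groups == 0
        self.groups = groups
        self.pointwise = nn.Linear(in_ch, out_ch)              # channel filter (1x1 conv analogue)
        self.filter_mlp = nn.Sequential(                       # spatial filter w^{(q)}(.), shared within a group
            nn.Linear(delta_dim, hidden), nn.Tanh(), nn.Linear(hidden, groups))

    def forward(self, x, nbr_idx, delta):
        # x: (N, in_ch), nbr_idx: (N, k) long, delta: (N, k, delta_dim)
        N, k = nbr_idx.shape
        w = self.filter_mlp(delta)                              # (N, k, groups)
        w = torch.softmax(w, dim=1)                             # per-node normalization over neighbors
        z = self.pointwise(x)                                   # (N, out_ch)
        z_nbr = z[nbr_idx]                                      # (N, k, out_ch) neighbor features
        z_nbr = z_nbr.view(N, k, self.groups, -1)               # split channels into groups
        y = (w.unsqueeze(-1) * z_nbr).sum(dim=1)                # weighted aggregation per group
        return y.reshape(N, -1)                                 # (N, out_ch)

# Tiny smoke test with random data (k = 9 neighbors, 2-D offsets).
layer = DSGCLayer(in_ch=16, out_ch=32, delta_dim=2, groups=4)
x = torch.randn(100, 16)
nbr_idx = torch.randint(0, 100, (100, 9))
delta = torch.randn(100, 9, 2)
print(layer(x, nbr_idx, delta).shape)  # torch.Size([100, 32])
```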
The pooling layer is performed by the kmean clustering. The centroid of each clusters is regarded as the new node after pooling, and its hidden vector is the mean or max over the nodes in the cluster, based on the pooling method. Notice that, we only normalize the input signals to and do not have other preprocessing or data augmentation. The experiment are summarized in Table 1. Firstly, we observe that Xception and CNN have the best ; this is not surprising because both methods use grid-based convolution which is naturally suitable for image recognition. Secondly, DSGC outperforms all the other graph-based convolution methods, and its performance is very close to that of the grid-based convolution methods. Furthermore, contributed by the depthwise separable convolution and sharing graph technique, our model can achieve the competitive performance without increasing the number of parameters as GCN, the one with the smallest number of parameters among the graph convolution approaches. In appendix A.4, we further report the variance of DSGC model, which shows the improvement is significant and stable. As another important application domain, here we are interested in how to effectively utilize the locality information about sensor networks in time series forecasting. For example, how to incorporate the longitudes/latitudes of sensors w.r.t. temporal cloud movement is an important question in spatiotemporal modeling for predicting the output of solar energy farms in the United States. Appendix A provides the formal definition of this task. Original Graphs Dataset CIFAR10 CIFAR100 P CIFAR10 CIFAR100 P DCNN BID0 43.68% 76.65% 12M 55.56% 84.16% 50M ChebyNet BID5 25 BID21 18.03% 43.42% 18M 6.86% 26.86% 18M Xception BID4 17.07% 41.54% 3.1M 7.08% 26.84% 3.1M Table 1: Test-set error rates: P is the number of parametersWe choose three publicly available benchmark datasets for this task:• The U.S Historical Climatology Network (USHCN) 2 dataset contains daily climatological data from 1,218 meteorology sensors over the years from 1915 to 2000. The sequence length is 32,507. It includes five subsets, and each has a climate variable: maximum temperature, minimum temperature, precipitation, snowfall and snow depth. We use the daily maximum temperature data and precipitation data, and refer them as the USHCN-TMAX and USHCN-PRCP sets, respectively.• The solar power production records in the year of 2006 3 has the data with the production rate of every 10 minutes from 1,082 solar power stations in the west of the U.S. The sequence length is 52,560. We refer this set of data as Solar. All the datasets have been split into the training set (60%), the validation set (20%) and the test set (20%) in chronological order. All the graph convolution methods (DCNN, ChebyNet, GCN and MoNet) in the previous section (Section 4.2) are included to form the baselines for comparison. We also add traditional methods for time series forecasting, such as Autoregressive model (AR) which predicts future signal using a window of historical data based on a linear assumption about temporal dependencies, Vector autoregressive model (VAR) which extends AR to the multivariate version, namely, the input is the signals from all sensors in the history window, and the LSTNet deep neural network model BID13 which combines the strengths of CNN, RNN and AR. None of those methods is capable of leveraging locational dependencies via graph convolution. 
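The k-means-based graph pooling described at the start of this passage can be sketched as follows, assuming scikit-learn is available: nodes are clustered by their spatial coordinates, cluster centroids become the new nodes, and each new node's feature is the mean or max over its cluster members. Shapes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_graph_pool(coords, feats, n_clusters, mode="max"):
    """Coarsen a point set: cluster nodes by coordinates, then pool features per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coords)
    new_coords = km.cluster_centers_
    pooled = np.zeros((n_clusters, feats.shape[1]))
    for c in range(n_clusters):
        members = feats[km.labels_ == c]
        pooled[c] = members.max(axis=0) if mode == "max" else members.mean(axis=0)
    return new_coords, pooled

coords = np.random.rand(256, 2)      # e.g. subsampled pixel locations
feats = np.random.rand(256, 64)      # node features from the previous conv layer
new_coords, new_feats = kmeans_graph_pool(coords, feats, n_clusters=64)
print(new_coords.shape, new_feats.shape)   # (64, 2) (64, 64)
```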
We exclude the CNN and Xception methods, the 2D-grid based convolution, which could not be generalized to irregular graphs which we focus here. TAB3 summarizes the evaluation of all the methods, where the performance is measured using the Root Square Mean Error (RMSE). The best on each dataset is highlighted in boldface. The first chunk of three methods does not leverage the spatial or locational information in data. The second chuck consists of the neural network models which leverage the spatial information about sensor networks. The graph convolution methods in the second chunk clearly outperforms the methods in the first chunk, which does not explicitly model the spacial correlation within sensor networks. Overall, our proposed method (DSGC) has the best performance on all the datasets, demonstrating its strength in capturing informative local propagation patterns temporally and specially. For the application to text categorization we use the 20NEWS dataset BID9 ) for our experiments. It consists of 18,845 text documents associated with 20 topic labels. Individual words in the document vocabulary are the nodes in the graph for convolution. Each node also has its word embedding vector which is learned by running the Word2Vec algorithm BID18 ) on this corpus. Following the experiment settings in BID5 we select the top 1000 most frequent words as the nodes. BID13 10.1973 29.0624 0.02865 DCNN BID0 6.5188 29.0424 0.02652 ChebyNet BID5 5.5823 27.1298 0.02531 GCN BID11 5.4671 27.1172 0.02512 MoNet BID19 5 BID0 70.35% ChebyNet BID5 70.92% GCN BID11 71.01% MoNet BID19 70.60% DSGC 71.88% TAB1: Accuracy on the validation set. The with † come from BID5. The proposed convolution method (DSGC) can be considered as an equivalent component to the depthwise separable convolution method. Naturally, we can leverage the technique developed for the standard convolution network to improve the DSGC framework. Hence we examine DSGC with the following techniques which are popular in recent years for standard convolution over images: Inception module BID22, DenseNet framework BID8 and FORMULA2 Squeeze-and-Excitation block BID7. The details of those architectures are included in the Appendix A. The are presented in Table 4. Clearly, combined with the advantageous techniques/architectures, the performance of DSGC in image classificationcan can be further improved. It demonstrates that the DSGC can easily enjoy the benefit of the traditional 2d-convolution network development. Table 4: Summary of error rate on the test set in different settings. In table 5, we report the mean training time per epoch for DSGC and GCN, the fastest graph convolution baseline. In DSGC, our model computes the convolution weight for each edge of the graph, which requires more computation resources. However, we always perform the graph convolution on the sparse graph, which the number of edges grows only linearly in the graph size. Therefore the training is fairly efficient. Notably, learning the convolution filters as in DSGC leads to consistently better performance over all previous methods, with around 0.5x-3x running time. Dataset CIFAR USHCN-TMAX 20news GCN 1.75 0.465 0.207 DSGC 3.81 1.73 0.280 Table 5: Training time per epoch for GCN and DSGC methods. The unit is minute. In this section, we will summarize the graph convolution methods proposed in recent years with the label propagation process, which reveals the difference between traditional 2D-convolution and them. 
Firstly, we provide the formulation of the full convolution BID14, DISPLAYFORM0 different from the depthwise separable convolution, it captures the channel correlation and spatial correlation simultaneously by W (pq), which leads to the larger number of parameters. In Spectral Network BID3, the authors try to leverage the graph Fourier transformation as the basic convolution operation in the graph domain, which can be written as, DISPLAYFORM1 where Φ ∈ R n×n contains the eigenvectors of Laplacian matrix of the graph, and Λ is a diagonal matrix and learned by the supervision data. The Spectral Network can be matched with the full convolution, but with the different filter subspace, in other words, with different basic filters. However, it suffers from several limitations. It needs to conduct eigenvector decomposition over the Laplacian Matrix, which is a very expensive operation. The filters are not localized in the spatial domain. The number of parameters grows linearly with the number of nodes in the graph. In order to address the previous problems, researchers try to use the Chebyshev polynomial to approximate the non-parameter filter Λ, which is referred to as ChebyNet BID5. It can be written as, DISPLAYFORM2 where T k (L) is the k-th order Chebyshev polynomial term. The ChebyNet can be considered as the integration of K depthwise separable convolution components in a layer. But still, it suffers from the similar limitation as the GCN, which is using one graph filter over all channels and the graph filter is constant given the input. So its model capacity still cannot compare with depthwise separable convolution. With larger K, the ChebyNet can approximate the non-parameter filers in the Spectral Network. However, it would require large number of parameters and face the similar limitation as the Spectral Network. Besides the graph convolution methods, researchers propose another type of models, geometric convolution methods BID17 BID19, to deal with data in the general spatial domain. Here, we introduce the most advanced one, MoNet BID19 framework, which is also the most related one to our paper. The updating formula of MoNet in the label propagation process is, DISPLAYFORM3 where DISPLAYFORM4, and v(i, j) is a mapping from a node pair to a embedding vector, similar to ∆ ij in our model. µ k, Σ k are both model parameters, and Σ k is constrained as the diagonal matrix. MoNet can be viewed as an extension of the ChebyNet by letting the graph filters learn from the data. But it still has two limitations compared with the depthwise separable convolution and proposed method: It uses a simple Gaussian function, which is weaker than non-parametric filter in the depthwise separable convolution, and neural network function in the proposed method. It uses a graph filter for all channels. In order to capture complex propagation patterns in a layer, the model requires a larger K, which leads to much larger number of parameters. And finally the experiment show that the proposed method (DSGC) consistently outperforms the MoNet with less parameters in multiple tasks. DISPLAYFORM5 Pooling 2 × 2 max-pooling 4 max-pooling Classifier 512D fully-connected, softmax A EXPERIMENT DETAIL In section 4.2 and 4.5, we conduct the experiment on the CIFAR10 and CIFAR100 datasets. We will introduce the architecture settings for the DSGC and baseline models. TAB5 illustrates the basic architecture used in the experiment. 
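As a concrete reference for the ChebyNet formulation reviewed above, the standard Chebyshev recurrence T_0 = I, T_1 = L_scaled, T_k = 2 L_scaled T_{k-1} - T_{k-2} can be applied to node features as in the numpy sketch below. The rescaling of the Laplacian by its largest eigenvalue is the usual detail from the ChebyNet paper and is assumed here; data and parameter shapes are illustrative.

```python
import numpy as np

def chebyshev_filter(L, X, thetas):
    """ChebyNet-style filtering: sum_k T_k(L_scaled) @ X @ Theta_k."""
    lmax = np.linalg.eigvalsh(L).max()
    L_scaled = 2.0 * L / lmax - np.eye(L.shape[0])   # rescale eigenvalues into [-1, 1]
    Tx_prev, Tx_curr = X, L_scaled @ X
    out = Tx_prev @ thetas[0] + Tx_curr @ thetas[1]
    for k in range(2, len(thetas)):
        Tx_prev, Tx_curr = Tx_curr, 2.0 * L_scaled @ Tx_curr - Tx_prev
        out = out + Tx_curr @ thetas[k]
    return out

rng = np.random.default_rng(0)
N, P, Q, K = 6, 8, 4, 3
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
L = np.diag(A.sum(1)) - A                       # combinatorial graph Laplacian
X = rng.standard_normal((N, P))
thetas = [rng.standard_normal((P, Q)) for _ in range(K)]
print(chebyshev_filter(L, X, thetas).shape)     # (6, 4)
```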
In the DSGC-VGG13 and DSGC-DenseNet models, the k-conv refers to the spatial convolution (Eq.4) with k-nearest neighbors as the neighbor setting. So the 1-conv is the same as the 1 × 1 conv, which is doing linear transformation on channels. The hidden dimensions of VGG13 and DSGC-VGG13 are set as {256, 512, 512, 512} and {256, 512, 512, 1024}. The growth rate of DSGC-DenseNet is 32. And the baseline graph and geometric convolution methods use the identical architecture as DSGC-VGG13. For the subsampled CIFAR experiment, We eliminate the first convolution, transition and pooling layer, and change the spatial convolution from 9-conv to {16-conv, 12-conv, 8-conv, 4-conv}. For the DSGC-SE, we follow the method described in BID7 to add the SE block to DSGC-VGG13 architecture. We use the dropout scheme described in BID8 for the DSGC-DenseNet model, and add the dropout layer after the pooling layer for VGG13 and DSGC-VGG13 models. For the DSGCInception model, we imitate the design of the Inception Network BID22 ). The key idea is letting a convolution layer have different size of convolution filters. We use a simple example as our Inception module, which is illustrated in FIG2.For the CNN model, we still format the input signal in the matrix shape. The signals in invalid points are set as 0. Furthermore, to perform the fair comparison with standard CNN in the subsampled situation, we append a mask matrix as an additional channel for input signals to indicate whether the pixel is valid or not. For the MoNet, we also apply the softmax trick described in Section 3, which accelerates its training process and improves its final . For the ChebyNet, we set the polynomial order as K = 3.For the ij used in DSGC and MoNet, we use a 5 dimension feature vector. We denote the coordinate of i-th node as (x i, y i), and DISPLAYFORM0 The same learning schedule is applied to all models. We use SGD to train the model for 400 epochs. The initial learning rate is 0.1, and is divided by 10 at 50% and 75% of the total number of training epochs. Firstly, we will give the formal definition of the time series forecasting, that is, spatiotemporal regression problem. We formulate the the spatiotemporal regression problem as a multivariate time series forecasting task with the sensors' location as the input. More formally, given a series of time series signals observed from sensors Y = {y 1, y 2, · · ·, y T} where y t ∈ R n and n are the number of sensors, and the locations of sensors L = {l 1, l 2, · · ·, l n} where l i ∈ R 2 and indicates the coordinate of the sensor, the task is to predict a series of future signals in a rolling forecasting fashion. That being said, to predict y T +h where h is the desirable horizon ahead of the current time stamp T, we assume {y 1, y 2, · · ·, y T} are available. Likewise, to predict the signal of the next time stamp y T +h+1, we assume {y 1, y 2, · · ·, y T, y T +1} are available. In this paper, we follow the setting of the autoregressive model. Define a window size p which is a hyper-parameter firstly. The model input at time stamp T is X T = {y T −p+1, · · ·, y T} ∈ R n×p. In the experiments of this paper, the horizon is always set as 1.Intuitively, different sensors may have node-level hidden features to influence its propagation patterns and final outputs. Then for each node, the model learns a node embedding vector and concatenate it with the input signals. By using this trick, each node has limited freedom to interface with its propagation patterns. 
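The autoregressive windowing used in the spatiotemporal regression setup above (input X_T built from the previous p frames, target the frame one horizon ahead) can be sketched as follows; the series shape and window size are placeholders.

```python
import numpy as np

def make_windows(Y, p, horizon=1):
    """Build autoregressive training pairs from a multivariate series Y of shape (T, n):
    inputs are the previous p frames, targets the frame `horizon` steps ahead."""
    inputs, targets = [], []
    for t in range(p - 1, Y.shape[0] - horizon):
        inputs.append(Y[t - p + 1 : t + 1].T)   # (n, p) window ending at time t
        targets.append(Y[t + horizon])          # (n,) signal to predict
    return np.stack(inputs), np.stack(targets)

Y = np.random.rand(1000, 50)                    # e.g. 1000 time steps from 50 sensors
X, y = make_windows(Y, p=6, horizon=1)
print(X.shape, y.shape)                         # (994, 50, 6) (994, 50)
```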
This trick proves useful in this task, for USHCN-PRCP and Solar specifically. We set the embedding size to 10 for these two datasets. Note that about 10% of the data in the USHCN dataset are missing; to handle this, we add an additional feature channel indicating which points are missing. For the time series models, we tune the historical window p on the validation set. For the remaining models, we set the window size to p = 18 for the Solar dataset and p = 6 for the USHCN datasets. The network architecture used in this task is 7 convolution layers followed by a regression layer. The ∆ij setting is the same as before. We use the Adam optimizer BID10 for this task and train each model for 200 epochs with learning rate 0.001. For the text categorization task, the data preprocessing follows the experimental details in BID5, and the network architecture for all models is 5 convolution layers followed by two MLP layers as the classifier. After each convolution layer, a dropout layer is applied with dropout rate 0.5. The nodes' coordinates are the word embeddings, and the method to calculate ∆ij is similar to the previous ones. The optimizer used in this task is the same as in the CIFAR experiment. Finally, we report the variance of the DSGC method on all 3 tasks. We run the DSGC model 10 times and report the mean ± std: CIFAR 7.39 ± 0.136, USHCN-TMAX 5.211 ± 0.0498, 20news 71.70 ± 0.285. The variance is significantly smaller than the performance gap between the DSGC model and the best baseline (CIFAR 8.34, 20news 71.01).
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H139Q_gAW
We devise a novel Depthwise Separable Graph Convolution (DSGC) for the generic spatial domain data, which is highly compatible with depthwise separable convolution.
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales. Fortunately, most music is also highly structured and can be represented as discrete note events played on musical instruments. Herein, we show that by using notes as an intermediate representation, we can train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure on timescales spanning six orders of magnitude (~0.1 ms to ~100 s), a process we call Wave2Midi2Wave. This large advance in the state of the art is enabled by our release of the new MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) dataset, composed of over 172 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms. The networks and the dataset together present a promising approach toward creating new expressive and interpretable neural models of music. Since the beginning of the recent wave of deep learning research, there have been many attempts to create generative models of expressive musical audio de novo. These models would ideally generate audio that is both musically and sonically realistic to the point of being indistinguishable to a listener from music composed and performed by humans. However, modeling music has proven extremely difficult due to dependencies across the wide range of timescales that give rise to the characteristics of pitch and timbre (short-term) as well as those of rhythm (medium-term) and song structure (long-term). On the other hand, much of music has a large hierarchy of discrete structure embedded in its generative process: a composer creates songs, sections, and notes, and a performer realizes those notes with discrete events on their instrument, creating sound. The division between notes and sound is in many ways analogous to the division between symbolic language and utterances in speech. The WaveNet model by BID18 may be the first breakthrough in generating musical audio directly with a neural network. Using an autoregressive architecture, the authors trained a model on audio from piano performances that could then generate new piano audio sample-bysample. However, as opposed to their highly convincing speech examples, which were conditioned on linguistic features, the authors lacked a conditioning signal for their piano model. The was audio that sounded very realistic at very short time scales (1 or 2 seconds), but that veered off into chaos beyond that. BID4 made great strides towards providing longer term structure to WaveNet synthesis by implicitly modeling the discrete musical structure described above. This was achieved by training a hierarchy of VQ-VAE models at multiple time-scales, ending with a WaveNet decoder to generate piano audio as waveforms. While the are impressive in their ability to capture long-term structure directly from audio waveforms, the ing sound suffers from various artifacts at the fine-scale not present in the unconditional WaveNet, clearly distinguishing it from real musical audio. 
Also, while the model learns a version of discrete structure from the audio, it is not Transcription: onsets & frames (section 4) Synthesis: conditional WaveNet (section 6) Piano roll (MIDI) Symbolic modelling: transformer (section 5) Event predictionFigure 1: Wave2Midi2Wave system architecture for our suite of piano music models, consisting of (a) a conditional WaveNet model that generates audio from MIDI, (b) a Music Transformer language model that generates piano performance MIDI autoregressively, and (c) a piano transcription model that "encodes" piano performance audio as MIDI.directly reflective of the underlying generative process and thus not interpretable or manipulable by a musician or user. BID9 propose a model that uses a WaveNet to generate solo cello music conditioned on MIDI notation. This overcomes the inability to manipulate the generated sequence. However, their model requires a large training corpus of labeled audio because they do not train a transcription model, and it is limited to monophonic sequences. In this work, we seek to explicitly factorize the problem informed by our prior understanding of the generative process of performer and instrument: DISPLAYFORM0 which can be thought of as a generative model with a discrete latent code of musical notes. Since the latent representation is discrete, and the scale of the problem is too large to jointly train, we split the model into three separately trained modules that are each state-of-the-art in their respective domains:1. Encoder, P (notes|audio): An Onsets and Frames transcription model to produce a symbolic representation (MIDI) from raw audio.2. Prior, P (notes): A self-attention-based music language model BID7 to generate new performances in MIDI format based on those transcribed in.3. Decoder, P (audio|notes): A WaveNet (van den BID18 synthesis model to generate audio of the performances conditioned on MIDI generated in.We call this process Wave2Midi2Wave. One hindrance to training such a stack of models is the lack of large-scale annotated datasets like those that exist for images. We overcome this barrier by curating and publicly releasing alongside this work a piano performance dataset containing well-aligned audio and symbolic performances an order of magnitude larger than the previous benchmarks. In addition to the high quality of the samples our method produces (see https://goo.gl/ magenta/maestro-examples), training a suite of models according to the natural musician/instrument division has a number of other advantages. First, the intermediate representation used is more suitable for human interpretation and manipulation. Similarly, factorizing the model in this way provides better modularity: it is easy to independently swap out different performance and instrument models. Using an explicit performance representation with modern language models also allows us to model structure at much larger time scales, up to a minute or so of music. Finally, we can take advantage of the large amount of prior work in the areas of symbolic music generation and conditional audio generation. And by using a state-of-the-art music transcription model, we can make use of the same wealth of unlabeled audio recordings previously only usable for training end-to-end models by transcribing unlabeled audio recordings and feeding them into the rest of our model. Our contributions are as follows:1. 
We combine a transcription model, a language model, and a MIDI-conditioned WaveNet model to produce a factorized approach to musical audio modeling capable of generating about one minute of coherent piano music.2. We provide a new dataset of piano performance recordings and aligned MIDI, an order of magnitude larger than previous datasets.3. Using an existing transcription model architecture trained on our new dataset, we achieve state-of-the-art on a piano transcription benchmark. We partnered with organizers of the International Piano-e-Competition 1 for the raw data used in this dataset. During each installment of the competition, virtuoso pianists perform on Yamaha Disklaviers which, in addition to being concert-quality acoustic grand pianos, utilize an integrated high-precision MIDI capture and playback system. Recorded MIDI data is of sufficient fidelity to allow the audition stage of the competition to be judged remotely by listening to contestant performances reproduced over the wire on another Disklavier instrument. The dataset introduced in this paper, which we name MAESTRO ("MIDI and Audio Edited for Synchronous TRacks and Organization"), contains over a week of paired audio and MIDI recordings from nine years of International Piano-e-Competition events.2 The MIDI data includes key strike velocities and sustain pedal positions. Audio and MIDI files are aligned with ≈3 ms accuracy and sliced to individual musical pieces, which are annotated with composer, title, and year of performance. Uncompressed audio is of CD quality or higher (44.1-48 kHz 16-bit PCM stereo). A train/validation/test split configuration is also proposed, so that the same composition, even if performed by multiple contestants, does not appear in multiple subsets. Repertoire is mostly classical, including composers from the 17 th to early 20 th century. MusicNet BID16 contains recordings of human performances, but separatelysourced scores. As discussed in, the alignment between audio and score is not fully accurate. One advantage of MusicNet is that it contains instruments other than piano (not counted in table 2) and a wider variety of recording environments. MAPS BID5 contains Disklavier recordings and synthesized audio created from MIDI files that were originally entered via sequencer. As such, the "performances" are not as natural as the MAESTRO performances captured from live performances. In addition, synthesized audio makes up a large fraction of the MAPS dataset. MAPS also contains syntheses and recordings of individual notes and chords, not counted in Our goal in processing the data from International Piano-e-Competition was to produce pairs of audio and MIDI files time-aligned to represent the same musical events. The data we received from the organizers was a combination of MIDI files recorded by Disklaviers themselves and WAV audio captured with conventional recording equipment. However, because the recording streams were independent, they differed widely in start times and durations, and they were also subject to jitter. Due to the large volume of content, we developed an automated process for aligning, slicing, and time-warping provided audio and MIDI to ensure a precise match between the two. Our approach is based on globally minimizing the distance between CQT frames from the real audio and synthesized MIDI (using FluidSynth 3). Obtaining a highly accurate alignment is non-trivial, and we provide full details in the appendix. 
For all experiments in this paper, we use a single train/validation/test split designed to satisfy the following criteria:• No composition should appear in more than one split.• Train/validation/test should make up roughly 80/10/10 percent of the dataset (in time), respectively. These proportions should be true globally and also within each composer. Maintaining these proportions is not always possible because some composers have too few compositions in the dataset.• The validation and test splits should contain a variety of compositions. Extremely popular compositions performed by many performers should be placed in the training split. For comparison with our , we recommend using the splits which we have provided. We do not necessarily expect these splits to be suitable for all purposes; future researchers are free to use alternate experimental methodologies. The large MAESTRO dataset enables training an automatic piano music transcription model that achieves a new state of the art. We base our model on Onsets and Frames, with several modifications informed by a coarse hyperparameter search using the validation split. For full details of the base model architecture and training procedure, refer to.One important modification was adding an offset detection head, inspired by BID8. The offset head feeds into the frame detector but is not directly used during decoding. The offset labels are defined to be the 32ms following the end of each note. We also increased the size of the bidirectional LSTM layers from 128 to 256 units, changed the number of filters in the convolutional layers from 32/32/64 to 48/48/96, and increased the units in the fully connected layer from 512 to 768. We also stopped gradient propagation into the onset subnetwork from the frame network, disabled weighted frame loss, and switched to HTK frequency spacing BID21 for the mel-frequency spectrogram input. In general, we found that the best ways to get higher performance with the larger dataset were to make the model larger and simpler. The final important change we made was to start using audio augmentation during training using an approach similar to the one described in BID10. During training, every input sample was modified using random parameters for the SoX 4 audio tool using pysox BID2. The parameters, ranges, and random sampling methods are described in table 3. We found that this was particularly important when evaluating on the MAPS dataset, likely because the audio augmentation made the model more robust to differences in recording environment and piano qualities. The differences in training are summarized in Table 4: Transcription Precision, Recall, and F1 Results on MAPS configuration 2 test dataset (ENSTDkCl and ENSTDkAm full-length .wav files). Training was done on the MAESTRO trianing set with audio augmentation. Scores are calculated using the same method as in. Note-based scores calculated by the mir eval library, frame-based scores as defined in BID1. Final metric is the mean of scores calculated per piece. In sections 5 and 6, we demonstrate how using this transcription model enables training language and synthesis models on a large set of unlabeled piano data. To do this, we transcribe the audio in the MAESTRO training set, although in theory any large set of unlabeled piano music would work. We For our generative language model, we use the decoder portion of a Transformer BID20 with relative self-attention, which has previously shown compelling in generating music with longer-term coherence BID7. 
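The composition-disjoint split criteria described above can be approximated with a simple greedy procedure, sketched below. This is an illustration of the criteria, not the released split-generation code; the field names ('composition', 'duration') and the greedy fill rule are assumptions.

```python
from collections import defaultdict

def composition_disjoint_split(performances, ratios=(0.8, 0.1, 0.1)):
    """Greedy sketch of a composition-disjoint train/validation/test split: every performance
    of a composition lands in the same subset, and subsets are filled toward the target time
    ratios. `performances` is a list of dicts with 'composition' and 'duration' keys."""
    by_comp = defaultdict(list)
    for perf in performances:
        by_comp[perf["composition"]].append(perf)
    # Largest compositions first, so popular pieces tend to land in the (large) training split.
    comps = sorted(by_comp, key=lambda c: -sum(p["duration"] for p in by_comp[c]))
    total = sum(p["duration"] for p in performances)
    targets = [r * total for r in ratios]
    splits, filled = ([], [], []), [0.0, 0.0, 0.0]
    for comp in comps:
        dur = sum(p["duration"] for p in by_comp[comp])
        idx = max(range(3), key=lambda i: targets[i] - filled[i])  # most under-filled subset
        splits[idx].extend(by_comp[comp])
        filled[idx] += dur
    return splits  # (train, validation, test)
```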
We trained two models, one on MIDI data from the MAESTRO dataset and another on MIDI transcriptions inferred by Onsets and Frames from audio in MAESTRO, referred to as MAESTRO-T in section 4. For full details of the model architecture and training procedure, refer to BID7.We used the same training procedure for both datasets. We trained on random crops of 2048 events and employed transposition and time compression/stretching data augmentation. The transpositions were uniformly sampled in the range of a minor third below and above the original piece. The time stretches were at discrete amounts and uniformly sampled from the set {0.95, 0.975, 1.0, 1.025, 1.05}.We evaluated both of the models on their respective validation splits. Model variation NLL on their respective validation splits Music Transformer trained on MAESTRO 1.84 Music Transformer trained on MAESTRO-T 1.72 Table 7: Validation Negative Log-Likelihood, with event-based representation. Samples outputs from the Music Transformer model can be heard in the Online Supplement (https://goo.gl/magenta/maestro-examples). Most commercially available systems that are able to synthesize a MIDI sequence into a piano audio signal are concatenative: they stitch together snippets of audio from a large library of recordings of individual notes. While this stitching process can be quite ingenious, it does not optimally capture the various interactions between notes, whether they are played simultaneously or in sequence. An alternative but less popular strategy is to simulate a physical model of the instrument. Constructing an accurate model constitutes a considerable engineering effort and is a field of research by itself BID0 BID17.WaveNet (van den) is able to synthesize realistic instrument sounds directly in the waveform domain, but it is not as adept at capturing musical structure at timescales of seconds or longer. However, if we provide a MIDI sequence to a WaveNet model as conditioning information, we eliminate the need for capturing large scale structure, and the model can focus on local structure instead, i.e., instrument timbre and local interactions between notes. Conditional WaveNets are also used for text-to-speech (TTS), and have been shown to excel at generating realistic speech signals conditioned on linguistic features extracted from textual data. This indicates that the same setup could work well for music audio synthesis from MIDI sequences. Our WaveNet model uses a similar autoregressive architecture to van den Oord et al. FORMULA0, but with a larger receptive field: 6 (instead of 3) sequential stacks with 10 residual block layers each. We found that a deeper context stack, namely 2 stacks with 6 layers each arranged in a series, worked better for this task. We also updated the model to produce 16-bit output using a mixture of logistics as described in van den.The input to the context stack is an onset "piano roll" representation, a size-88 vector signaling the onset of any keys on the keyboard, with 4ms bins (250Hz). Each element of the vector is a float that represents the strike velocity of a piano key in the 4ms frame, scaled to the range. When there is no onset for a key at a given time, the value is 0.We initially trained three models:Unconditioned Trained only with the audio from the combined MAESTRO training/validation splits with no conditioning signal. Ground Trained with the ground truth audio/MIDI pairs from the combined MAESTRO training/validation splits. 
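The onset "piano roll" conditioning signal described above (one size-88 velocity vector per 4 ms frame, i.e. 250 frames per second) can be constructed as in the following sketch. The note-tuple format and the velocity scaling by 127 are assumptions for illustration, not the exact preprocessing code.

```python
import numpy as np

def onset_piano_roll(notes, total_seconds, frames_per_second=250):
    """Build the onset piano-roll conditioning matrix: a size-88 frame every 4 ms holding
    the scaled strike velocity at each note onset. `notes` is a list of
    (onset_time_seconds, midi_pitch, velocity) tuples with MIDI pitches 21-108."""
    n_frames = int(np.ceil(total_seconds * frames_per_second))
    roll = np.zeros((n_frames, 88), dtype=np.float32)
    for onset, pitch, velocity in notes:
        frame = int(onset * frames_per_second)
        if 0 <= frame < n_frames and 21 <= pitch <= 108:
            roll[frame, pitch - 21] = velocity / 127.0    # scale MIDI velocity to [0, 1]
    return roll

notes = [(0.00, 60, 80), (0.50, 64, 90), (1.00, 67, 100)]
print(onset_piano_roll(notes, total_seconds=2.0).shape)   # (500, 88)
```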
Transcribed Trained with ground truth audio and MIDI inferred from the audio using the Onsets and Frames method, referred to as MAESTRO-T in section 4.The ing losses after 1M training steps were 3.72, 3.70 and 3.84, respectively. Due to teacher forcing, these numbers do not reflect the quality of conditioning, so we rely on human judgment for evaluation, which we address in the following section. It is interesting to note that the WaveNet model recreates non-piano subtleties of the recording, including the response of the room, breathing of the player, and shuffling of listeners in their seats. These are encouraging and indicate that such methods could also capture the sound of more dynamic instruments (such as string and wind instruments) for which convincing synthesis/sampling methods lag behind piano. Due to the heterogeneity of the ground truth audio quality in terms of microphone placement, ambient noise, etc., we sometime notice "timbral shifts" during longer outputs from these models. We therefore additionally trained a model conditioned on a one-hot year vector at each timestep (similar to speaker conditioning in TTS), which succeeds in producing consistent timbres and ambient qualities during long outputs (see Online Supplement).A side effect of arbitrary windowing of the training data across note boundaries is a sonic crash that often occurs at the beginning of generated outputs. To sidestep this issue, we simply trim the first 2 seconds of all model outputs reported in this paper, and in the Online Supplement (https: //goo.gl/magenta/maestro-examples). Since our ultimate goal is to create realistic musical audio, we carried out a listening study to determine the perceived quality of our method. To separately assess the effects of transcription, language modeling, and synthesis on the listeners' responses, we presented users with two 20-second clips WaveNet Unconditioned Clips generated by the Unconditioned WaveNet model described in section 6. WaveNet Ground/Test Clips generated by the Ground WaveNet model described in section 6, conditioned on random 20-second MIDI subsequences from the MAESTRO test split. WaveNet Transcribed/Test Clips generated by the Transcribed WaveNet model described in section 6, conditioned on random 20-second subsequences from the MAESTRO test split. WaveNet Transcribed/Transformer Clips generated by the Transcribed WaveNet model described in section 6, conditioned on random 20-second subsequences from the Music Transformer model described in section 5 that was trained on MAESTRO-T.The final set of samples demonstrates the full end-to-end ability of taking unlabeled piano performances, inferring MIDI labels via transcription, generating new performances with a language model trained on the inferred MIDI, and rendering new audio as though it were played on a similar piano-all without any information other than raw audio recordings of piano performances. Participants were asked which clip they thought sounded more like a recording of somebody playing a musical piece on a real piano, on a Likert scale. 640 ratings were collected, with each source involved in 128 pair-wise comparisons. FIG0 shows the number of comparisons in which performances from each source were selected as more realistic. A Kruskal-Wallis H test of the ratings showed that there is at least one statistically significant difference between the models: χ 2 = 67.63, p < 0.001. 
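The omnibus test reported above, together with the post-hoc pairwise analysis described in the next passage, can be computed with scipy along the following lines. The rating arrays below are placeholder data rather than the study's actual responses, and the pairing of ratings for the signed-rank test is an assumption about how the comparisons were organized.

```python
import numpy as np
from scipy.stats import kruskal, wilcoxon

# Placeholder Likert ratings (1-5) for a few of the sources in the listening study.
ratings = {
    "real_recording": np.array([5, 5, 4, 5, 4, 5, 4, 4]),
    "wavenet_ground": np.array([4, 5, 4, 4, 5, 4, 4, 5]),
    "wavenet_unconditioned": np.array([2, 3, 2, 1, 2, 3, 2, 2]),
}

# Omnibus test: is there at least one difference between sources?
h_stat, p_value = kruskal(*ratings.values())
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

# Post-hoc pairwise Wilcoxon signed-rank tests with a Bonferroni-corrected threshold.
names = list(ratings)
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        stat, p = wilcoxon(ratings[names[i]], ratings[names[j]])
        print(names[i], "vs", names[j], "p =", p, "significant:", p < 0.05 / n_pairs)
```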
A post-hoc analysis using the Wilcoxon signed-rank test with Bonferroni correction showed that there was not a statistically significant difference in participant ratings between real recordings and samples from the WaveNet Ground/Test and WaveNet Transcribed/Test models with p > 0.01/10.Audio of some of the examples used in the listening tests is available in the Online Supplement (https://goo.gl/magenta/maestro-examples). We have demonstrated the Wave2Midi2Wave system of models for factorized piano music modeling, all enabled by the new MAESTRO dataset. In this paper we have demonstrated all capabilities on the same dataset, but thanks to the new state-of-the-art piano transcription capabilities, any large set of piano recordings could be used, 6 which we plan to do in future work. After transcribing the recordings, the transcriptions could be used to train a WaveNet and a Music Transformer model, and then new compositions could be generated with the Transformer and rendered with the WaveNet. These new compositions would have similar musical characteristics to the music in the original dataset, and the audio renderings would have similar acoustical characteristics to the source piano. The most promising future work would be to extend this approach to other instruments or even multiple simultaneous instruments. Finding a suitable training dataset and achieving sufficient transcription performance will likely be the limiting factors. The new dataset (MIDI, audio, metadata, and train/validation/test split configurations) is available at https://g.co/magenta/maestro-datasetunder a Creative Commons Attribution NonCommercial Share-Alike 4.0 license. The Online Supplement, including audio examples, is available at https://goo.gl/magenta/maestro-examples. We would like to thank Michael E. Jones and Stella Sick for their help in coordinating the release of the source data and Colin Raffel for his careful review and comments on this paper. In this appendix, we describe in detail how the MAESTRO dataset from section 3 was aligned and segmented. The key idea for the alignment process was that even an untrained human can recognize whether two performances are of the same score based on raw audio, disregarding differences in the instrument or recording equipment used. Hence, we synthesized the provided MIDI (using FluidSynth with a SoundFont sampled from recordings of a Disklavier 7) and sought to define an audio-based difference metric that could be minimized to find the best-alignment shift for every audio/MIDI pair. We wanted the metric to take harmonic features into account, so as a first step we used librosa BID10 to compute the Constant-Q Transform BID3 BID15 of both original and synthesized audio. For the initial alignment stage we picked a hop length of 4096 samples (∼90 ms) as a trade-off between speed and accuracy, which proved reasonable for most of the repertoire.8 Microphone setup varied between competition years and stages, ing in varying frequency response and overall amplitude levels in recordings, especially in the lower and higher ends of the piano range. To account for that, we limited the CQT to 48 buckets aligned with MIDI notes C2-B5, and also converted amplitude levels to dB scale with maximum absolute amplitude as a reference point and a hard cut-off at -80 dB. 
Original and synthesized audio also differed in sound decay rate, so we normalized the ing CQT arrays time-wise by dividing each hop column by its minimum value (averaged over a 5-hop window).A single MIDI file from a Disklavier typically covered several hours of material corresponding to a sequence of shorter audio files from several seconds up to an hour long. We slid the normalized CQT of each such original audio file against a window of synthesized MIDI CQT of the same length and used mean squared error (MSE) between the two as the difference metric.9 Minimum error determined best alignment, after which we attempted to align the next audio file in sequence with the remaining length of the corresponding MIDI file. Due to the length of MIDI files, it was impractical to calculate MSE at each possible shift, so instead we trimmed silence at the beginning of audio, and attempted to align it with the first "note on" event of the MIDI file, within ±12 minutes tolerance. If the minimum error was still high, we attempted alignment at the next "note on" event after a 30-second silence. This approach allowed us to skip over unusable sections of MIDI recordings that did not correspond to audio, e.g., instrument tuning and warm-ups, and also non-musical segments of audio such as applause and announcements. Non-piano sounds also considerably increased the MSE metric for very short audio files, so we had to either concatenate those with their longer neighbors if they had any musical material or exclude them completely. Events that were present at the beginning of audio files beyond the chosen shift tolerance which did not correspond to MIDI had to be cut off manually. In order to recover all musically useful data we also had to manually repair several MIDI files where the clock had erroneously jumped, causing the remainder of the file to be out of sync with corresponding audio. After tuning process parameters and addressing the misaligned audio/MIDI pairs detected by unusually high CQT MSE, we have reached the state where each competition year (i.e., different audio recording setup) has final metric values for all pairs within a close range. Spot-checking the pairs with the highest MSE values for each year confirmed proper alignment, which allowed us to proceed to the segmentation stage. Since certain compositions were performed by multiple contestants, 10 we needed to segment the aligned audio/MIDI pairs further into individual musical pieces, so as to enable splitting the data into train, validation, and test sets disjoint on compositions. While the organizers provided the list of composition metadata for each audio file, for some competition years timing information was missing. In such cases we greedily sliced audio/MIDI pairs at the longest silences between MIDI notes up to the expected number of musical pieces. Where expected piece duration data was available, we applied search with backtracking roughly as follows. As an invariant, the segmentation algorithm maintained a list of intervals as start-end time offsets along with a list of expected piece durations, so that the total length of the piece durations corresponding to each interval was less than the interval duration (within a certain tolerance). At each step we picked the next longest MIDI silence and determined which interval it belonged to. Then we split that interval in two at the silence and attempted to split the corresponding sequence of durations as well, satisfying the invariant. 
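The coarse alignment metric described above (48-bin CQT over C2-B5, ~90 ms hops, dB scale with an -80 dB floor, then sliding MSE between audio and synthesized-MIDI features) can be sketched with librosa as follows. The time-wise normalization step is omitted here, and the brute-force offset search is only an illustration of the procedure, not the production pipeline.

```python
import numpy as np
import librosa

def aligned_cqt(samples, sr):
    """CQT features for the coarse audio/MIDI alignment metric: 48 semitone bins (C2-B5),
    ~90 ms hops, log-amplitude with an -80 dB floor."""
    C = np.abs(librosa.cqt(samples, sr=sr, hop_length=4096, n_bins=48,
                           fmin=librosa.note_to_hz("C2")))
    return librosa.amplitude_to_db(C, ref=np.max, top_db=80.0)

def best_offset(audio_cqt, midi_cqt):
    """Slide the (shorter) audio CQT over the synthesized-MIDI CQT and return the hop offset
    minimizing mean squared error; a brute-force sketch of the search described above."""
    n = audio_cqt.shape[1]
    errors = [np.mean((audio_cqt - midi_cqt[:, s:s + n]) ** 2)
              for s in range(midi_cqt.shape[1] - n + 1)]
    return int(np.argmin(errors))
```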
For each suitable split the algorithm continued to the next longest silence. If multiple splits were possible, the algorithm preferred the ones that divided the piece durations more evenly according to a heuristic. If no such split was possible, the algorithm either skipped the current silence if it was short and attempted to split at the next one, or backtracked otherwise. It also backtracked if no more silences longer than 3 seconds were available. The algorithm succeeded as soon as each interval corresponded to exactly one expected piece duration. Once a suitable segmentation was found, we sliced each audio/MIDI pair at the resulting intervals, additionally trimming short clusters of notes at the beginning or end of each segment that appeared next to long MIDI silences in order to cut off additional non-music events (e.g., tuning or contestants testing the instrument during applause), and adding an extra 1 second of padding at both ends before making the final cut. After the initial alignment and segmentation, we applied Dynamic Time Warping (DTW) to account for any jitter in either the audio or MIDI recordings. DTW has seen wide use in audio-to-MIDI alignment; for an overview see BID11. We follow the align midi example from pretty midi BID13, except that we use a custom C++ DTW implementation for improved speed and memory efficiency to allow for aligning long sequences. First, in Python, we use librosa to load the audio and resample it to a 22,050 Hz mono signal. Next, we load the MIDI and synthesize it at the same sample rate, using the same FluidSynth process as above. Then, we pad the end of the shorter of the two sample arrays so they are the same length. We use the same procedure as align midi to extract CQTs from both sample arrays, except that we use a hop length of 64 to achieve a resolution of ∼3 ms. We then pass these CQTs to our C++ DTW implementation. To avoid calculating the full distance matrix and taking its mean to get a penalty value, we instead sample 100k random pairs and use the mean of their cosine distances. We use the same DTW algorithm as implemented in librosa except that we calculate cosine distances only within a Sakoe-Chiba band radius BID14 of 2.5 seconds instead of calculating distances for all pairs. Staying within this small band limits the number of calculations we need to make and the number of distances we have to store in memory. This is possible because we know from the previous alignment pass that the sequences are already mostly aligned and we just need to account for small constant offsets due to the lower resolution of the previous process and apply small sequence warps to recover from any occasional jitter.
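A compact NumPy stand-in for this fine-alignment step might look like the following: cosine distances are computed only inside a Sakoe-Chiba band (2.5 s at the ~3 ms hop is roughly 860 frames) and a standard cumulative-cost DTW recursion is run over that band. This is a simplified, memory-unoptimized sketch rather than the C++ implementation referenced above, and it omits the additive penalty estimated from 100k random pairs.

```python
import numpy as np

def banded_dtw(X, Y, band=860):
    """DTW over two frame sequences X, Y (shape: features x frames) using
    cosine distance, restricted to a Sakoe-Chiba band of `band` frames."""
    # Unit-normalize columns so cosine distance reduces to 1 - dot product.
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-8)
    Yn = Y / (np.linalg.norm(Y, axis=0, keepdims=True) + 1e-8)
    n, m = Xn.shape[1], Yn.shape[1]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            cost = 1.0 - float(Xn[:, i - 1] @ Yn[:, j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, (i, j) = [], (n, m)
    while i > 0 and j > 0:
        path.append((i, j))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: D[s])
    return path[::-1]
```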
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
r1lYRjC9F7
We train a suite of models capable of transcribing, composing, and synthesizing audio waveforms with coherent musical structure, enabled by the new MAESTRO dataset.
Variational inference based on chi-square divergence minimization (CHIVI) provides a way to approximate a model's posterior while obtaining an upper bound on the marginal likelihood. However, in practice CHIVI relies on Monte Carlo (MC) estimates of an upper bound objective that at modest sample sizes are not guaranteed to be true bounds on the marginal likelihood. This paper provides an empirical study of CHIVI performance on a series of synthetic inference tasks. We show that CHIVI is far more sensitive to initialization than classic VI based on KL minimization, often needs a very large number of samples (over a million), and may not be a reliable upper bound. We also suggest possible ways to detect and alleviate some of these pathologies, including diagnostic bounds and initialization strategies. Estimating the marginal likelihood in probabilistic models is the holy grail of Bayesian inference. Marginal likelihoods allow us to compute the posterior probability of model parameters or perform Bayesian model selection . While exact computation of the marginal is not tractable for most models, variational inference (VI) offers a promising and scalable approximation. VI suggests choosing a simple family of approximate distributions q and then optimizing the parameters of q to minimize its divergence from the true (intractable) posterior. The canonical choice is the KL divergence, where minimizing corresponds to tightening a lower bound on the marginal likelihood. Recently, (a) showed that minimizing a χ 2 divergence leads to a chi-divergence upper bound ("CUBO"). Practitioners often wish to combine upper and lower bound estimates to "sandwich" the model evidence in a narrow range for later decision making, so the CUBO's flexible applicability to all latent variable models is appealing. However, both the estimation of the upper bound and computing its gradient for minimization require Monte Carlo estimators to approximate tough integrals. These estimators may have large variance even at modest number of samples. A natural question is then how reliable CUBO minimization is in practice. In this paper, we provide empirical evidence that CUBO optimization is often tricky, and the bound itself ends up being too loose even Figure 1: Minimizing χ 2 divergence using MC gradient estimates via the reparametrization trick can be challenging even with simple univariate Gaussian distributions. Each column shows under a different number of MC samples. The last column compares ELBO and CUBO traces for S = 10 4; diamonds correspond to sanity-check estimator from Eq.. Top row: variational parameter traces with fixed true variance but changing starting mean locations. Bottom row: same, but with fixed true mean and changing start variance values. using hundreds of samples. Our contributions include: i) evaluation of the CUBO in two simple scenarios, and comparison to other bounds to gauge its utility; ii) empirical analysis of CUBO optimization in both scenarios, in terms of convergence rate and sensitivity to the number of samples; iii) review of alternative upper bounds and best practices for diagnosing and testing new bounds. Let p(x, z) be the joint distribution of observed variables x and latent variables z. Variational inference (VI) approximates the posterior distribution p(z|x) through optimization. The idea is to posit a family of variational distributions and find the member distribution q(z; λ) which is as close as possible to the true posterior. 
Standard VI minimizes the KL divergence D KL q(z; λ)||p(z|x). Minimizing the KL divergence is equivalent to maximizing the evidence lower bound (ELBO) on the model evidence log p(x). Alternatively, χ 2 variational inference (b) minimizes the χ 2 divergence D KL p(z|x)||q(z; λ). This is equivalent to minimizing the following upper bound (CUBO): The expectation in the CUBO is usually intractable, so we use Monte Carlo samples to construct a biased estimate where z,..., z (S) ∼ q(z; λ). In this paper, we consider two optimization strategies, both relying on the reparametrization trick (; ; Titsias and Lázaro-): i) optimizing the CUBO directly in Eq. using biased gradient estimators; ii) optimizing the exponentiated CUBO defined as L EXPCUBO (λ) = exp(2 L CUBO (λ)), whose gradients are unbiased but might suffer from higher variance. We consider a simple inference scenario: minimizing the divergence between two univariate Gaussian distributions. We assume no data x, such that the true posterior is just the prior fixed at p(z). = N. We consider two cases: a variational distribution q(z;μ,σ 2) with fixedσ = 1.0 and varying meanμ = {1, 2, 4, 10}, or the other way around, fixedμ = 0.0 and varyingσ = {0.1, 0.5, 2.0, 10.0}. All experiments were performed using stochastic gradient descent and grid-searching the learning rate for each different bound independently in a fine grid between 10 −4 and 1.0. Fig. 1 shows the evolution of the variational parameters over time when minimizing the χ 2 divergence (ChiSq) or maximizing the KL divergence (KL) from different initialization points. While the KL trajectories always converge to the true values, the ChiSq variational parameters fail to converge for 5 out of the 8 cases when the number of MC samples S = 100. If we increase the number of samples S to 1M, 3 out of 8 cases still fail to find the true values. Most alarming, in several cases, e.g., fixed mean and varyingσ initialized at 0.1, the CUBO MC estimator present values below 0 (the true marginal likelihood value), so it is not an upper bound anymore, even with 1M samples. Appendix A show similar pathological behaviors for the exponentiated CUBO case. To assess CUBO correctness, consider an alternative MC estimator that samples from the prior p, rather than from q: where z,..., z (S) ∼ p(z). In general, since CUBO optimization is sensitive to initialization, it is a good practice to do warm initializations, either with MAP estimation or by performing KL optimization first during a few iterations. We consider applying the CUBO training objective to the Latent Dirichlet Allocation (LDA) topic model . We focus on single-document inference, where the length of the document should directly impact posterior uncertainty about which topics are used. We assume that there are K = 3 topics and V = 3 vocabulary words. We are given a set of topic-word probabilities φ where φ kv is the probability of word v under topic k. Each document d is represented by counts of V discrete words or features, x d ∈ Z V +. These counts are generated via a document-specific mixture of K topics, variational inference. In particular, we explore two tasks: (i) estimating upper bounds on the marginal likelihood given a fixed q, and (ii) optimizing q to try to improve such bounds. 
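Returning to the univariate Gaussian example above, a minimal PyTorch sketch of the reparameterized CUBO estimator (with p(z) = N(0, 1) and no observations, so the joint is just the prior) could look as follows. The log-sum-exp form is used for numerical stability; the learning rate, sample size, and initialization are illustrative.

```python
import math
import torch

# Variational parameters of q(z) = N(mu, sigma^2)
mu = torch.tensor(4.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
p = torch.distributions.Normal(0.0, 1.0)   # the "posterior" is just the prior here

def cubo_estimate(S=100):
    """Biased MC estimate of L_CUBO = 0.5 * log E_q[(p(z)/q(z))^2],
    using the reparameterization z = mu + sigma * eps."""
    sigma = log_sigma.exp()
    eps = torch.randn(S)
    z = mu + sigma * eps
    q = torch.distributions.Normal(mu, sigma)
    log_w = p.log_prob(z) - q.log_prob(z)          # log importance weights
    return 0.5 * (torch.logsumexp(2.0 * log_w, dim=0) - math.log(S))

opt = torch.optim.SGD([mu, log_sigma], lr=0.01)
for step in range(2000):
    opt.zero_grad()
    loss = cubo_estimate(S=100)   # minimize the stochastic upper bound
    loss.backward()
    opt.step()
```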
To assess the reliability of upper bound estimation using approximate distributions, we fit four possible q: one Dirichlet via closed-form updates optimizing the ELBO, and 3 separate Logistic Normal (LN) distributions fit via Monte-Carlo gradient descent steps (see details for each q in the appendix). The 3 LNs are respectively a cold-started optimization of the ELBO, a warm-started optimization of the CUBO, and a cold-started optimization of the CUBO. Warm-starting here means that the mean of q is set to the maximum likelihood estimator of the document-topic vector π d, while cold-starting has random parameters not informed by the data. We hope that these detailed experiments tease apart the impact of initialization and optimization. In Tab. 1 and Tab. 2, for each q described above, we compare CUBO to an alternative upper bound KLpq, detailed in Appendix B. For each stochastic upper bound estimator, we compute 20 replicates using each 100 samples and 100,000 samples, then report the median of these samples as well as 5-th and 95-th percentile value intervals. Our are: CHIVI parameter estimation often diverges for cold initializations. We replicated this issue across many settings, as reported in Tab. 1. CUBO estimators are overconfident. Increasing sample size widens confidence intervals. KLpq estimators are better behaved. Consider Tab. 2's warm-init CUBO row (in Appendix A): At 100 samples the CUBO seems to be within (-1.03, 0.77), but at many more samples, the CUBO interval drops to (-0.86, -0.64), with a new median that is just barely contained in the previous interval. In contrast, the 100 sample KLpq bound has an interval that shrinks. ELBO optimization followed by CUBO computation may be enough. The Dirichlet q optimized for the ELBO but then fitted into a CUBO estimator produces competitive bounds. This suggests that it may not always be necessary to optimize the CUBO directly. Table 2: Topic model case study. Bounds on marginal likelihood for a "short" toy document under an LDA topic model. We infer an approximate posterior over doc-topic probabilities for a single document with just 1 word, using either closed-form coordinate ascent updates to fit a Dirichlet q (top row) or MC gradient updates to fit a LogisticNormal q (bottom rows) with 100 samples per gradient step. Using the final fitted q, we then compute 20 replicates of our stochastic upper bounds on marginal likelihood using either the CUBO or the KLpq estimator (see Appendix B, using S = 10 2 or 10 5 samples for each. We show the median value and the (5%, 95%) interval. Appendix B. The "KLpq" bound: reliable but expensive. Given any approximate posterior q(π d) parameterized byv d ∈ V, the following is an upper bound on the marginal likelihood: show that minimizing this bound is equivalent to minimizing KL(p||q), which computes the asymmetric KL divergence in the opposite direction of typical variational methods, which minimize KL(q||p). We suggest that this bound is a useful comparison point for the CUBO bound. The "KLpq" upper bound can be approximated using S samples from the posterior. For our LDA model, we compute S samples from a Hamiltonian Monte Carlo posterior using Stan . Because LN random variables are not very common, we write the log probability density function of the approximate posterior here, using from Aitchison and Shen (1980, Eq. 1.3 The entropy of the distribution is then: where we have used standard to simplify that last term: This expectation E π d ∼q [log π dk] unfortunately has no closed form. 
Reparameterization trick. We can write the random variable π d as a deterministic transform of a standard normal random variable u d. First, recall we can map any K − 1-length real vector u ∈ R K−1 to the K-dimensional simplex ∆ K via the softmax transformation: This transformation is one-to-one invertible, and also differentiable w.r.t. its input vector. Now, to generate π d ∈ ∆ K, we can draw π d in three steps: draw u d from a standard normal, scale it with the appropriate mean and standard deviation parameters, and apply the softmax transformation, C.3. LDA Optimization #3: Overcomplete-Logistic-Normal + MonteCarloGD Transformation between overcomplete simplex and the reals We now consider an overcomplete representation of the K-dimensional simplex. Rather than the minimal K − 1 parameters in the LN-Marg approximation above, let's look at transformations that use K free parameters. In this overcomplete space, we must augment our probability vector π d ∈ ∆ K (which has only K − 1 degrees of freedom) with an additional scalar real random variable w d ∈ R, so the combined vector [π d1 . . . π d,K−1 w d] has the required K linearlyindependent dimensions. Now, we can create an invertible transformation between two K-length vectors: a vector u of real values, and the augmented pair π, w: Because this is an invertible transformation, we can compute the Jacobian:...... Next, we wish to compute the determinant of this Jacobian, as a function of π and w. First, we perform row and column swaps until only the first column and first row have non-diagonal entries, like this: Here, we have defined the remaining mass beyond the K − 1 independent entries of the vector π as rem(π) = 1 − K−1 k=1 π k for simplicity. The number of swaps needed to create J from J is always an even number (there will be the some a swaps needed to fix the rows, and then the same number a swaps for the columns, so 2a swaps total). Each single row or column swap changes the sign of the determinant but not the value. An even number of swaps thus leaves the determinant unchanged: |J | = |J|. We can then apply the Schur determinant formula, which says, for any square matrix, we can compute its determinant by manipulating its subcomponent blocks: The simplification arises via algebra after plugging in the definition of rem(π). Armed with the Jacobian and its determinant, we have all the tools needed to perform variational inference in this representation. Approximate posterior: Overcomplete LN. Returning to our topic modeling task, we consider again the LDA generative model for a document as a given, and wish to compute an approximate posterior for the document-topic vector π d. We suggest an approximate posterior family based on the overcomplete logistic normal above. We can draw samples from this in two steps. This leads to the following log probability density function over the joint space of π, w ∈ ∆ K × R: log q(π, w) = log |det J(π, w)| + Our generative model does not include the log-scale variable w d, but we can easily just give it a N prior and keep it decoupled from the data.
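As a small illustration of the reparameterization trick described above for the minimal (K-1)-dimensional Logistic-Normal, drawing a document-topic vector amounts to scaling a standard normal draw and applying the softmax with the K-th logit fixed to zero (the usual additive-logistic convention, which we assume here). The density and Jacobian terms discussed above are omitted, and all names are illustrative.

```python
import numpy as np

def sample_pi(mu, log_sigma, rng):
    """Draw pi_d on the K-simplex from a Logistic-Normal with K-1 free dims.

    mu, log_sigma: arrays of shape (K-1,), the variational parameters.
    """
    u = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)  # reparameterized draw
    logits = np.append(u, 0.0)        # fix the K-th logit to 0 (additive-logistic convention)
    logits -= logits.max()            # numerical stability
    expu = np.exp(logits)
    return expu / expu.sum()

rng = np.random.default_rng(0)
pi = sample_pi(np.zeros(2), np.zeros(2), rng)   # K = 3 topics, as in the toy model
```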
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJxk51h4FS
An empirical study of variational inference based on chi-square divergence minimization, showing that minimizing the CUBO is trickier than maximizing the ELBO
It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly root from the locally non-linear behavior nearby input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces the globally linear behavior in-between training examples. However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited. Namely, since the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixups the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants. Deep neural networks (DNNs) have achieved state-of-the-art performance on various tasks . However, counter-intuitive adversarial examples generally exist in different domains, including computer vision , natural language processing , reinforcement learning , speech and graph data . As DNNs are being widely deployed, it is imperative to improve model robustness and defend adversarial attacks, especially in safety-critical cases. Previous work shows that adversarial examples mainly root from the locally unstable behavior of classifiers on the data manifolds (; ; 2018; b), where a small adversarial perturbation in the input space can lead to an unreasonable shift in the feature space. On the one hand, many previous methods try to solve this problem in the inference phase, by introducing transformations on the input images. These attempts include performing local linear transformation like adding Gaussian noise , where the processed inputs are kept nearby the original ones, such that the classifiers can maintain high performance on the clean inputs. However, as shown in Fig. 1(a), the equivalent perturbation, i.e., the crafted adversarial perturbation, is still δ and this strategy is easy to be adaptively evaded since the randomness of x 0 w.r.t x 0 is local . Another category of these attempts is to apply various non-linear transformations, e.g., different operations of image processing . They are usually off-the-shelf for different classifiers, and generally aim to disturb the adversarial perturbations, as shown in Fig. 1(b). Yet these methods are not quite reliable since there is no illustration or guarantee on to what extent they can work. On the other hand, many efforts have been devoted to improving adversarial robustness in the training phase. For examples, the adversarial training (AT) methods induce locally stable behavior via data augmentation on adversarial examples. However, AT methods are usually computationally expensive, and will often degenerate model performance on the clean inputs or under general-purpose transformations like rotation . 
In contrast, the mixup training method introduces globally linear behavior in-between the data manifolds, which can also improve adversarial robustness (Zhang Virtual inputs The processed inputs fed into classifiers (a) (b) (c) Figure 1: Intuitive mechanisms in the input space of different input-processing based defenses. x is the crafted adversarial example, x0 is the original clean example, which is virtual and unknown for the classifiers. δ is the adversarial perturbation. et al., 2018; a). Although this improvement is usually less significant than it ed by AT methods, mixup-trained models can keep state-of-the-art performance on the clean inputs; meanwhile, the mixup training is computationally more efficient than AT. The interpolated AT method also shows that the mixup mechanism can further benefit the AT methods. However, most of the previous work only focuses on embedding the mixup mechanism in the training phase, while the induced global linearity of the model predictions is not well exploited in the inference phase. Compared to passive defense by directly classifying the inputs, it would be more effective to actively defend adversarial attacks by breaking their locality via the globally linear behavior of the mixup-trained models. In this paper, we develop an inference principle for mixup-trained models, named mixup inference (MI). In each execution, MI performs a global linear transformation on the inputs, which mixups the input x with a sampled clean example x s, i.e.,x = λx + (1 − λ)x s (detailed in Alg. 1), and feedx into the classifier as the processed input. There are two basic mechanisms for robustness improving under the MI operation (detailed in Sec. 3.2.1), which can be illustrated by simple geometric intuition in Fig. 1(c). One is perturbation shrinkage: if the input is adversarial, i.e., x = x 0 + δ, the perturbation δ will shrink by a factor λ after performing MI, which is exactly the mixup ratio of MI according to the similarity between triangles. Another one is input transfer: after the MI operation, the reduced perturbation λδ acts on random x 0. Comparing to the spatially or semantically local randomness introduced by Gaussian noise or image processing,x 0 introduces spatially global and semantically diverse randomness w.r.t x 0. This makes it less effective to perform adaptive attacks against MI . Furthermore, the global linearity of the mixup-trained models ensures that the information of x 0 remained inx 0 is proportional to λ, such that the identity of x 0 can be recovered from the statistics ofx 0. In experiments, we evaluate MI on CIFAR-10 and CIFAR-100 under the oblivious attacks and the adaptive attacks . The demonstrate that our MI method is efficient in defending adversarial attacks in inference, and is also compatible with other variants of mixup, e.g., the interpolated AT method. Note that also propose to mixup the input points in the test phase, but they do not consider their method from the aspect of adversarial robustness. In this section, we first introduce the notations applied in this paper, then we provide the formula of mixup in training. We introduce the adversarial attacks and threat models in Appendix A.1. Given an input-label pair (x, y), a classifier F returns the softmax prediction vector F (x) and the, where L is the number of classes and [L] = {1, · · ·, L}. The classifier F makes a correct prediction on x if y =ŷ. 
In the adversarial setting, we augment the data pair (x, y) to a triplet (x, y, z) with an extra binary variable z, i.e., The variable z is usually considered as hidden in the inference phase, so an input x (either clean or adversarially corrupted) can be generally denoted as x = x 0 + δ · 1 z=1. Here x 0 is a clean sample from the data manifold p(x) with label y 0, 1 z=1 is the indicator function, and δ is a potential perturbation crafted by adversaries. It is worthy to note that the perturbation δ should not change the true label of the input, i.e., y = y 0. For p -norm adversarial attacks , we have δ p ≤, where is a preset threshold. Based on the assumption that adversarial examples are off the data manifolds, we formally have x 0 + δ / ∈ supp(p(x)) (a). In supervised learning, the most commonly used training mechanism is the empirical risk minimization (ERM) principle , which minimizes with the loss function L. While computationally efficient, ERM could lead to memorization of data and weak adversarial robustness . As an alternative, introduce the mixup training mechanism, which minimizes 1 m m j=1 L(F (x j),ỹ j ). Herex j = λx j0 + (1 − λ)x j1;ỹ j = λy j0 + (1 − λ)y j1, the input-label pairs (x j0, y j0) and (x j1, y j1) are randomly sampled from the training dataset, λ ∼ Beta(α, α) and α is a hyperparameter. Training by mixup will induce globally linear behavior of models in-between data manifolds, which can empirically improve generalization performance and adversarial robustness a; b; a; b). Compared to the adversarial training (AT) methods , trained by mixup requires much less computation and can keep state-of-the-art performance on the clean inputs. Although the mixup mechanism has been widely shown to be effective in different domains (; ; a; b), most of the previous work only focuses on embedding the mixup mechanism in the training phase, while in the inference phase the global linearity of the trained model is not well exploited. Compared to passively defending adversarial examples by directly classifying them, it would be more effective to actively utilize the globality of mixup-trained models in the inference phase to break the locality of adversarial perturbations. The above insight inspires us to propose the mixup inference (MI) method, which is a specialized inference principle for the mixup-trained models. In the following, we apply colored y,ŷ and y s to visually distinguish different notations. Consider an input triplet (x, y, z), where z is unknown in advance. When directly feeding x into the classifier F, we can obtain the predicted labelŷ. In the adversarial setting, we are only interested in the cases where x is correctly classified by F if it is clean, or wrongly classified if it is adversarial . This can be formally denoted as The general mechanism of MI works as follows. Every time we execute MI, we first sample a label y s ∼ p s (y), then we sample x s from p s (x|y s) and mixup it with x asx = λx + (1 − λ)x s. p s (x, y) denotes the sample distribution, which is constrained to be on the data manifold, i.e., supp(p s (x)) ⊂ supp(p(x)). In practice, we execute MI for N times and average the output predictions to obtain F MI (x), as described in Alg. 1. Here we fix the mixup ratio λ in MI as a hyperparameter, while similar properties hold if λ comes from certain distribution. 
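A minimal PyTorch sketch of the MI procedure in Alg. 1 might look like the following: each execution mixes the input with a clean sample drawn from p_s, and the softmax predictions are averaged over N executions. How the pool of clean samples is built (sharing the predicted label for MI-PL, or drawn from the other labels for MI-OL) is left outside the sketch, and the names are illustrative.

```python
import torch

@torch.no_grad()
def mixup_inference(model, x, pool_x, lam=0.6, n_exec=30):
    """Mixup Inference (MI): average predictions over mixups with clean samples.

    model:   a mixup-trained classifier returning logits
    x:       input batch of shape (B, C, H, W), possibly adversarial
    pool_x:  clean samples to mix with; for MI-PL these share the predicted
             label of x, for MI-OL they are drawn from the other labels
    lam:     mixup ratio applied to the input
    """
    probs = 0.0
    for _ in range(n_exec):
        idx = torch.randint(0, pool_x.size(0), (x.size(0),), device=x.device)
        x_s = pool_x[idx]
        x_tilde = lam * x + (1.0 - lam) * x_s        # mix the input with a clean sample
        probs = probs + torch.softmax(model(x_tilde), dim=1)
    return probs / n_exec                            # averaged prediction F_MI(x)
```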
Theoretically, with unlimited capability and sufficient clean samples, a well mixup-trained model F can be denoted as a linear function H on the convex combinations of clean examples , i.e., ∀x i, x j ∼ p(x) and λ ∈, there is Specially, we consider the case where the training objective L is the cross-entropy loss, then H(x i) should predict the one-hot vector of label y i, i.e., H y (Input: The mixup-trained classifier F ; the input x. Hyperparameters: The sample distribution p s ; the mixup ratio λ; the number of execution N . adversarial, then there should be an extra non-linear part G(δ; x 0) of F, since x is off the data manifolds. Thus for any input x, the prediction vector can be compactly denoted as According to Eq. and Eq., the output ofx in MI is given by: where ∞ −→ represents the limitation when the execution times N → ∞. Now we separately investigate the y-th andŷ-th (could be the same one) components of F (x) according to Eq., and see how these two components differ from those of F (x). These two components are critical because they decide whether we can correctly classify or detect adversarial examples . Note that there is H y (x 0) = 1 and H ys (x s) = 1, thus we have the y-th components as Furthermore, according to Eq., there is 1 y=ŷ = 1 z=0. We can represent theŷ-th components as From the above formulas we can find that, except for the hidden variable z, the sampling label y s is another variable which controls the MI output F (x) for each execution. Different distributions of sampling y s in different versions of MI. Here we consider two easy-to-implement cases: MI with predicted label (MI-PL): In this case, the sampling label y s is the same as the predicted labelŷ, i.e., p s (y) = 1 y=ŷ is a Dirac distribution onŷ. In this case, the label y s is uniformly sampled from the labels other thanŷ, i.e., p s (y) = Uŷ(y) is a discrete uniform distribution on the set {y ∈ [L]|y =ŷ}. We list the simplified formulas of Eq. and Eq. under different cases in Table 1 for clear representation. With the above formulas, we can evaluate how the model performance changes with and without MI by focusing on the formula of Specifically, in the general-purpose setting where we aim to correctly classify adversarial examples , we claim that the MI method improves the robustness if the prediction value on the true label y increases while it on the adversarial labelŷ decreases after performing MI when the input is adversarial (z = 1). This can be formally denoted as We refer to this condition in Eq. as robustness improving condition (RIC). Further, in the detection-purpose setting where we want to detect the hidden variable z and filter out adversarial inputs, we can take the gap of theŷ-th component of predictions before and after the MI operation, i.e., ∆Fŷ(x; p s) as the detection metric (a). To formally measure the detection ability on z, we use the detection gap (DG), denoted as A higher value of DG indicates that ∆Fŷ(x; p s) is better as a detection metric. In the following sections, we specifically analyze the properties of different versions of MI according to Table 1, and we will see that the MI methods can be used and benefit in different defense strategies. In the MI-PL case, when the input is clean (i.e., z = 0), there is F (x) = F (x), which means ideally the MI-PL operation does not influence the predictions on the clean inputs. 
When the input is adversarial (i.e., z = 1), MI-PL can be applied as a general-purpose defense or a detection-purpose defense, as we separately introduce below: General-purpose defense: If MI-PL can improve the general-purpose robustness, it should satisfy RIC in Eq.. By simple derivation and the of Table 1, this means that Since an adversarial perturbation usually suppress the predicted confidence on the true label and promote it on the target label , there should be Gŷ(δ;x 0) > 0 and G y (δ;x 0) < 0. Note that the left part of Eq. can be decomposed into Here Eq. indicates the two basic mechanisms of the MI operations defending adversarial attacks, as shown in Fig. 1(c). The first mechanism is input transfer, i.e., the clean input that the adversarial perturbation acts on transfers from the deterministic x 0 to stochasticx 0. Compared to the Gaussian noise or different image processing methods which introduce spatially or semantically local randomness, the stochasticx 0 induces spatially global and semantically diverse randomness. This will make it harder to perform an adaptive attack in the white-box setting . The second mechanism is perturbation shrinkage, where the original perturbation δ shrinks by a factor λ. This equivalently shrinks the perturbation threshold since λδ p = λ δ p ≤ λ, which means that MI generally imposes a tighter upper bound on the potential attack ability for a crafted perturbation. Besides, empirical in previous work also show that a smaller perturbation threshold largely weakens the effect of attacks . Therefore, if an adversarial attack defended by these two mechanisms leads to a prediction degradation as in Eq., then applying MI-PL would improve the robustness against this adversarial attack. Similar properties also hold for MI-OL as described in Sec. 3.2.2. In Fig. 2, we empirically demonstrate that most of the existing adversarial attacks, e.g., the PGD attack satisfies these properties. Detection-purpose defense: According to Eq., the formula of DG for MI-PL is By comparing Eq. and Eq., we can find that they are consistent with each other, which means that for a given adversarial attack, if MI-PL can better defend it in general-purpose, then ideally MI-PL can also better detect the crafted adversarial examples. As to MI-OL, when the input is clean (z = 0), there would be a degeneration on the optimal clean prediction as F y (x) = Fŷ(x) = λ, since the sampled x s does not come from the true label y. As compensation, MI-OL can better improve robustness compared to MI-PL when the input is adversarial (z = 1), since the sampled x s also does not come from the adversarial labelŷ in this case. General-purpose defense: Note that in the MI-OL formulas of Table 1, there is a term of 1 y=ys. Since we uniformly select y s from the set [L] \ {ŷ}, there is E(1 y=ys) = 1 L−1. According to the RIC, MI-OL can improve robustness against the adversarial attacks if there satisfies Note that the conditions in Eq. is strictly looser than Eq., which means MI-OL can defend broader range of attacks than MI-PL, as verified in Fig. 2. Detection-purpose defense: According to Eq. and Table 1, the DG for MI-OL is It is interesting to note that DG MI-PL = DG MI-OL, thus the two variants of MI have the same theoretical performance in the detection-purpose defenses. However, in practice we find that MI-PL performs better than MI-OL in detection, since empirically mixup-trained models cannot induce ideal global linearity (cf. Fig. 2 in). 
Besides, according to Eq., to statistically make sure that the clean inputs will be correctly classified after MI-OL, there should be ∀k ∈ [L] \ {y}, In this section, we provide the experimental on CIFAR-10 and CIFAR-100 to demonstrate the effectiveness of our MI methods on defending adversarial attacks. Our code is available at an anonymous link: http://bit.ly/2kpUZVR. In training, we use ResNet-50 and apply the momentum SGD optimizer on both CIFAR-10 and CIFAR-100. We run the training for 200 epochs with the batch size of 64. The initial learning rate is 0.01 for ERM, mixup and AT; 0.1 for interpolated AT. The learning rate decays with a factor of 0.1 at 100 and 150 epochs. The attack method for AT and interpolated AT is untargeted PGD-10 with = 8/255 and step size 2/255 , and the ratio of the clean examples and the adversarial ones in each mini-batch is 1: 1. The hyperparameter α for mixup and interpolated AT is 1.0. All defenses with randomness are executed 30 times to obtain the averaged predictions. To verify and illustrate our theoretical analyses in Sec. 3, we provide the empirical relationship between the output predictions of MI and the hyperparameter λ in Fig. 2. The notations and formulas annotated in Fig. 2 correspond to those introduced in Sec. 3. We can see that the follow our theoretical under the assumption of ideal global linearity. Besides, both MI-PL and MI-OL empirically satisfy RIC in this case, which indicates that they can improve robustness under the untargeted PGD-10 attack on CIFAR-10, as quantitatively demonstrated in the following sections. In this subsection, we evaluate the performance of our method under the oblivious-box attacks . The oblivious threat model assumes that the adversary is not aware of the existence of the defense mechanism, e.g., MI, and generate adversarial examples based on the unsecured classification model. We separately apply the model trained by mixup and interpolated AT as the classification model. The AUC scores for the detection-purpose defense are given in Fig. 3(a). The show that applying MI-PL in inference can better detect adversarial attacks, while directly detecting by the returned confidence without MI-PL performs even worse than a random guess. Table 3: Classification accuracy (%) on the oblivious adversarial examples crafted on 1,000 randomly sampled test points of CIFAR-100. Perturbation = 8/255 with step size 2/255. The subscripts indicate the number of iteration steps when performing attacks. The notation ≤ 1 represents accuracy less than 1%. The parameter settings for each method can be found in Table 5. Methods Cle. PGD10 PGD50 PGD200 PGD10 PGD50 PGD200 We also compare MI with previous general-purpose defenses applied in the inference phase, e.g., adding Gaussian noise or random rotation ; performing random padding or resizing after random cropping (; . The performance of our method and baselines on CIFAR-10 and CIFAR-100 are reported in Table 2 and Table 3, respectively. Since for each defense method, there is a trade-off between the accuracy on clean samples and adversarial samples depending on the hyperparameters, e.g., the standard deviation for Gaussian noise, we carefully select the hyperparameters to ensure both our method and baselines keep a similar performance on clean data for fair comparisons. The hyperparameters used in our method and baselines are reported in Table 4 and Table 5 . In Fig. 
3(b), we further explore this trade-off by grid searching the hyperparameter space for each defense to demonstrate the superiority of our method. As shown in these , our MI method can significantly improve the robustness for the trained models with induced global linearity, and is compatible with training-phase defenses like the interpolated AT method. As a practical strategy, we also evaluate a variant of MI, called MI-Combined, which applies MI-OL if the input is detected as adversarial by MI-PL with a default detection threshold; otherwise returns the prediction on the original input. We also perform ablation studies of ERM / AT + MI-OL in Table 2, where no global linearity is induced. The verify that our MI methods indeed exploit the global linearity of the mixup-trained models, rather than simply introduce randomness. , we test our method under the white-box adaptive attacks (detailed in Appendix B.2). Since we mainly adopt the PGD attack framework, which synthesizes adversarial examples iteratively, the adversarial noise will be clipped to make the input image stay within the valid range. It in the fact that with mixup on different training examples, the adversarial perturbation will be clipped differently. To address this issue, we average the generated perturbations over the adaptive samples as the final perturbation. The of the adversarial accuracy w.r.t the number of adaptive samples are shown in Fig. 4. We can see that even under a strong adaptive attack, equipped with MI can still improve the robustness for the classification models. In this section, we provide more s which are related to our work in the main text. Adversarial attacks. Although deep learning methods have achieved substantial success in different domains , human imperceptible adversarial perturbations can be easily crafted to fool high-performance models, e.g., deep neural networks (DNNs) . One of the most commonly studied adversarial attack is the projected gradient descent (PGD) method . Let r be the number of iteration steps, x 0 be the original clean example, then PGD iteratively crafts the adversarial example as where clip x, (·) is the clipping function. Here x * 0 is a randomly perturbed image in the neighborhood of x 0, i.e.,Ů (x 0,), and the finally returned adversarial example is x = x * r = x 0 + δ, following our notations in the main text. Threat models. Here we introduce different threat models in the adversarial setting. As suggested in, a threat model includes a set of assumptions about the adversarys goals, capabilities, and knowledge. Adversary's goals could be simply fooling the classifiers to misclassify, which is referred to as untargeted mode. Alternatively, the goals can be more specific to make the model misclassify certain examples from a source class into a target class, which is referred to as targeted mode. In our experiments, we evaluate under both modes, as shown in Table 2 and Table 3. Adversary's capabilities describe the constraints imposed on the attackers. Adversarial examples require the perturbation δ to be bounded by a small threshold under p -norm, i.e., δ p ≤. For example, in the PGD attack, we consider under the ∞ -norm. Adversary's knowledge describes what knowledge the adversary is assumed to have. Typically, there are three settings when evaluating a defense method: • Oblivious adversaries are not aware of the existence of the defense D and generate adversarial examples based on the unsecured classification model F . 
• White-box adversaries know the scheme and parameters of D, and can design adaptive methods to attack both the model F and the defense D simultaneously . • Black-box adversaries have no access to the parameters of the defense D or the model F with varying degrees of black-box access. In our experiments, we mainly test under the oblivious setting (Sec. 4.3) and white-box setting (Sec. 4.4), since previous work has already demonstrated that randomness itself is efficient on defending black-box attacks (; . To date, the most widely applied framework for adversarial training (AT) methods is the saddle point framework introduced in: Here θ represents the trainable parameters in the classifier F, and S is a set of allowed perturbations. In implementation, the inner maximization problem for each input-label pair (x, y) is approximately solved by, e.g., the PGD method with different random initialization . As a variant of the AT method, propose the interpolated AT method, which combines AT with mixup. Interpolated AT trains on interpolations of adversarial examples along with interpolations of unperturbed examples (cf. Alg. 1 in . Previous empirical demonstrate that interpolated AT can obtain higher accuracy on the clean inputs compared to the AT method without mixup, while keeping the similar performance of robustness. AT + MI-OL (ablation study) The λOL = 0.8 The λOL = 0.6 We provide more technical details about our method and the implementation of the experiments. Generality. According to Sec. 3, except for the mixup-trained models, the MI method is generally compatible with any trained model with induced global linearity. These models could be trained by other methods, e.g., manifold mixup (a; ; . Besides, to better defend white-box adaptive attacks, the mixup ratio λ in MI could also be sampled from certain distribution to put in additional randomness. Empirical gap. As demonstrated in Fig. 2, there is a gap between the empirical and the theoretical formulas in Table 1 . This is because that the mixup mechanism mainly acts as a regularization in training, which means the induced global linearity may not satisfy the expected behaviors. To improve the performance of MI, a stronger regularization can be imposed, e.g., training with mixup for more epochs, or applying matched λ both in training and inference. , we design the adaptive attacks for our MI method. Specifically, according to Eq., the expected model prediction returned by MI is: Note that generally the λ in MI comes from certain distribution. For simplicity, we fix λ as a hyperparameter in our implementation. Therefore, the gradients of the prediction w.r.t. the input x is: = E ps ∂F (u) ∂u u=λx+(1−λ)xs · ∂λx + (1 − λ)x s ∂x = λE ps ∂F (u) ∂u | u=λx+(1−λ)xs. Table 3. The number of execution for each random method is 30. In the implementation of adaptive PGD attacks, we first sample a series of examples {x s,k} N A k=1, where N A is the number of adaptive samples in Fig. 3. Then according to Eq., the sign of gradients used in adaptive PGD can be approximated by sign ∂F MI (x) ∂x ≈ sign N A k=1 ∂F (u) ∂u u=λx+(1−λ)x s,k. The hyperparameter settings of the experiments shown in Table 2 and Table 3 are provided in Table 4 and Table 5, respectively. Since the original methods in and are both designed for the models on ImageNet, we adapt them for CIFAR-10 and CIFAR-100. Most of our experiments are conducted on the NVIDIA DGX-1 server with eight Tesla P100 GPUs.
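To make the averaged-gradient approximation above concrete, a simplified single-step sketch of the adaptive PGD update against MI is given below; it averages cross-entropy gradients over N_A mixup samples before taking a signed step and projecting back into the epsilon-ball. The helper name and defaults are illustrative assumptions, not the exact attack implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_pgd_step(model, x_adv, y, pool_x, lam=0.6, n_adaptive=30,
                      step_size=2.0 / 255, eps=8.0 / 255, x_clean=None):
    """One PGD step against MI: average gradients over mixups with sampled
    clean images, then take a signed step and project back to the eps-ball."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_adaptive):
        idx = torch.randint(0, pool_x.size(0), (x_adv.size(0),), device=x_adv.device)
        x_tilde = lam * x_adv + (1.0 - lam) * pool_x[idx]   # mixup seen by the defense
        loss = loss + F.cross_entropy(model(x_tilde), y)
    grad = torch.autograd.grad(loss / n_adaptive, x_adv)[0]  # averaged gradient
    x_new = x_adv.detach() + step_size * grad.sign()
    if x_clean is not None:   # project into the eps-ball around the clean input
        x_new = torch.min(torch.max(x_new, x_clean - eps), x_clean + eps)
    return x_new.clamp(0.0, 1.0)
```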
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
ByxtC2VtPB
We exploit the global linearity of the mixup-trained models in inference to break the locality of the adversarial perturbations.
Fine-tuning language models, such as BERT, on domain specific corpora has proven to be valuable in domains like scientific papers and biomedical text. In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain. Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature. As such, we show that having access to large legal corpora is a competitive advantage for commercial applications, and academic research on analyzing contracts. Businesses rely on contracts to capture critical obligations with other parties, such as: scope of work, amounts owed, and cancellation policies. Various efforts have gone into automatically extracting and classifying these terms. These efforts have usually been modeled as: classification, entity and relation extraction tasks. In this paper we focus on classification, but in our application we have found that our findings apply equally and sometimes, more profoundly, on other tasks. Recently, numerous studies have shown the value of fine-tuning language models such as ELMo and BERT to achieve state-of-the-art on domain specific tasks. In this paper we investigate and quantify the impact of utilizing a large domain-specific corpus of legal agreements to improve the accuracy of classification models by fine-tuning BERT. Specifically, we assess: (i) the performance of a simple model that only uses the pre-trained BERT language model, (ii) the impact of further fine tuning BERT, and (iii) how this impact changes as we train on larger corpora. Ultimately, our investigations show marginal, but valuable, improvements that increase as we grow the size of the legal corpus used to fine-tine BERT -and allow us to confidently claim that not only is this approach valuable for increasing accuracy, but commercial enterprises seeking to create these models will have an edge if they can amass a corpus of legal documents. Lexion is commercial venture that is building an "intelligent repository" for legal agreements that automatically classifies documents and then, based on the document type, fills a schema of metadata values using entity extraction, classification, and relationship extraction. Our application then uses this metadata to perform a variety of tasks that are valuable to end users: automatically organizing documents; linking related documents; calculating date milestones; identifying outlier terms; and a host of features to run reports, receive alerts, share with permissions, and integrate with other systems. (See Fig 1, screenshot). To deliver this application, we have developed an extensive pipeline and user-interface to ingest raw documents, perform OCR with multiple error detection and cleanup steps, rapidly annotate thousands of documents in hours, and train and deploy several models. Delivering the most accurate models possible, while managing our annotation costs, is an important challenge for us. Furthermore, we wish to leverage the massive legal corpus that we have acquired, and turn it into a competitive advantage using unsupervised techniques. For these reasons, applying pre-trained language models, and fine-tuning them further on our legal corpus, is an attractive approach to maximize accuracy and provide a more beneficial solution than our competitors. To fine-tune BERT, we used a proprietary corpus that consists of hundreds of thousands of legal agreements. 
We extracted text from the agreements, tokenized it into sentences, and removed sentences without alphanumeric text. We selected the BERT-Base uncased pre-trained model for fine-tuning. To avoid including repetitive content found at the beginning of each agreement we selected the 31st to 50th sentence of each agreement. We ran unsupervised fine-tuning of BERT using sequence lengths of 128, 256 and 512. The loss function over epochs is shown in Figure 2. We used a proprietary dataset consisting of a few thousand legal agreements. These were hand annotated by our model-development team using our internal rapid-annotation tools. We annotate a few dozen attributes per document, but for this paper we hand picked a single common and high value class: the "Term" of an agreement. In practice, the term of an agreement can be one of about half a dozen possible classes, but we chose to focus on the two most common classes for this research: the "fixed" term, i.e. the term of an agreement that expires after a fixed amount of time; and the "auto-renewing" term, i.e. the term of an agreement that automatically renews. While this attribute might seem simple at a glance, there are many subtleties that make it challenging to extract with a high enough accuracy for practical applications. Our end-to-end system does a great deal of pre- and post-processing to achieve an impressive level of accuracy that makes our application viable for end users, the details of which are beyond the scope of this paper. We split our classification dataset into train (80%) and validation (20%) sets. For all architecture variations, we train for a variable number of epochs as long as the validation error is decreasing. We stop training when validation error starts increasing again and then report the final results on a held-out test set. In doing so we try to avoid over-fitting on the training set. For a baseline, we trained a simple neural network with the architecture shown in Figure 5. The input to the network was a Bag-of-Words representation of the text. The BERT classifier we used consisted of the BERT layers, followed by the last three layers of our baseline network shown in Figure 4. When training our BERT-based models, we also fine-tuned the BERT layers on the end task. In order to assess the delta from using the Language Model (LM) that was fine-tuned on our legal corpus, we performed another experiment where we froze the BERT layers and only trained the last portion of the network. While the final accuracy of this model was subpar, even compared to our baseline model, the gains from using a fine-tuned instead of a pre-trained LM are much more pronounced, providing further evidence for the value of domain-specific fine-tuning. These results are shown in Table 1. We use four metrics to compare performance across various experiments: Matthews Correlation Coefficient, as well as Precision, Recall and F1 score weighted by class size. In Table 2 we show the various results we got from different configurations. It is clear that using pre-trained BERT, we are able to achieve a significant performance lift compared to the baseline. It is also clear that fine-tuning BERT on a domain-specific corpus noticeably improves this lift, even when the corpus size is small and we train for a short time. In Figure 3 we also show the different rates of change in train loss across epochs between pre-trained BERT and fine-tuned BERT.
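As a rough illustration of the classification setup described above (BERT layers followed by a small feed-forward head, fine-tuned end-to-end), a PyTorch/HuggingFace sketch might look like the following. The head sizes, checkpoint name, and example text are illustrative assumptions rather than the exact proprietary architecture; in our setting the starting checkpoint would be the BERT model further fine-tuned on the legal corpus, and a recent transformers version exposing pooler_output is assumed.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TermClassifier(nn.Module):
    """BERT encoder plus a small feed-forward head for 2-way 'Term' classification."""
    def __init__(self, bert_path="bert-base-uncased", n_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_path)  # or a legal-domain fine-tuned checkpoint
        self.head = nn.Sequential(
            nn.Linear(self.bert.config.hidden_size, 128),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(128, n_classes),
        )

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output)   # [CLS]-based pooled representation

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TermClassifier()
batch = tokenizer(["This Agreement shall renew automatically ..."],
                  padding=True, truncation=True, max_length=256, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))   # backprop fine-tunes BERT end-to-end
```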
As shown, the model trained on the fine-tuned version learns faster, as evidenced by the faster drop in training loss (note the logarithmic y-axis). It is worth mentioning that our BERT-based architecture is deliberately simplistic for the sake of a fair comparison. In practice, having a deeper neural network on top of BERT that is specialized in the end task yields much more impressive results, and that is the architecture we use in our system. We show the results of using a slightly more advanced architecture with fine-tuned BERT in Table 3 to demonstrate what is possible without any sophisticated feature engineering or hyper-parameter tuning.
[ 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkeRMT9cLH
Fine-tuning BERT on legal corpora provides marginal, but valuable, improvements on NLP tasks in the legal domain.
We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art. Text sequence transduction systems convert a given text sequence from one domain to another. These techniques can be applied to a wide range of natural language processing applications such as machine translation , summarization , and dialogue response generation . In many cases, however, parallel corpora for the task at hand are scarce. Therefore, unsupervised sequence transduction methods that require only non-parallel data are appealing and have been receiving growing attention (; ; ; ; ; . This trend is most pronounced in the space of text style transfer tasks where parallel data is particularly challenging to obtain (; ;). Style transfer has historically referred to sequence transduction problems that modify superficial properties of texti.e. style rather than content. We focus on a standard suite of style transfer tasks, including formality transfer , author imitation , word decipherment , sentiment transfer , and related language translation . General unsupervised translation has not typically been considered style transfer, but for the purpose of comparison we also conduct evaluation on this task . Recent work on unsupervised text style transfer mostly employs non-generative or non-probabilistic modeling approaches. For example, and design adversarial discriminators to shape their unsupervised objective -an approach that can be effective, but often introduces training instability. Other work focuses on directly designing unsupervised training objectives by incorporating intuitive loss terms (e.g. backtranslation loss), and demonstrates state-ofthe-art performance on unsupervised machine translation and style transfer . However, the space of possible unsupervised objectives is extremely large and the underlying modeling assumptions defined by each objective can only be reasoned about indirectly. 
As a result, the process of designing such systems is often heuristic. In contrast, probabilistic models (e.g. the noisy channel model ) define assumptions about data more explicitly and allow us to reason about these assumptions during system design. Further, the corresponding objectives are determined naturally by principles of probabilistic inference, reducing the need for empirical search directly in the space of possible objectives. That said, classical probabilistic models for unsupervised sequence transduction (e.g. the HMM or semi-HMM) typically enforce overly strong independence assumptions about data to make exact inference tractable (; ;). This has restricted their development and caused their performance to lag behind unsupervised neural objectives on complex tasks. Luckily, in recent years, powerful variational approximation techniques have made it more practical to train probabilistic models without strong independence assumptions . Inspired by this, we take a new approach to unsupervised style transfer. We directly define a generative probabilistic model that treats a non-parallel corpus in two domains as a partially observed parallel corpus. Our model makes few independence assumptions and its true posterior is intractable. However, we show that by using amortized variational inference , a principled probabilistic technique, a natural unsupervised objective falls out of our modeling approach that has many connections with past work, yet is different from all past work in specific ways. In experiments across a suite of unsupervised text style transfer tasks, we find that the natural objective of our model actually outperforms all manually defined unsupervised objectives from past work, supporting the notion that probabilistic principles can be a useful guide even in deep neural systems. Further, in the case of unsupervised machine translation, our model matches the current state-of-the-art non-generative approach. We first overview text style transfer, which aims to transfer a text (typically a single sentence or a short paragraph; for simplicity we refer to simply "sentences" below) from one domain to another while preserving underlying content. For example, formality transfer is the task of transforming the tone of text from informal to formal without changing its content. Other examples include sentiment transfer , word decipherment , and author imitation . If parallel examples were available from each domain (i.e. the training data is a bitext consisting of pairs of sentences from each domain), supervised techniques could be used to perform style transfer (e.g. attentional Seq2Seq and Transformer ). However, for most style transfer problems, only non-parallel corpora (one corpus from each domain) can be easily collected. Thus, work on style transfer typically focuses on the more difficult unsupervised setting where systems must learn from non-parallel data alone. The model we propose treats an observed non-parallel text corpus as a partially observed parallel corpus. Thus, we introduce notation for both observed text inputs and those that we will treat as latent variables. Specifically, we let X = {x (1), x (2), · · ·, x (m)} denote the observed sentences from domain D 1 and Y = {y (m+1), y (m+2), · · ·, y (n)} the observed sentences from domain D 2. Corresponding indices represent parallel sentences. Thus, none of the observed sentences share indices. In our model, we introduce latent sentences to complete the parallel corpus. Specifically, X̄ = {x̄ (m+1), x̄ (m+2), · · ·, x̄ (n)} represents the set of latent parallel sentences in D 1, while Ȳ = {ȳ (1), ȳ (2), · · ·, ȳ (m)} represents the set of latent parallel sentences in D 2.
Then the goal of unsupervised text transduction is to infer these latent variables conditioned on the observed non-parallel corpora; that is, to learn p(ȳ|x) and p(x̄|y). First we present our generative model of bitext, which we refer to as a deep latent sequence model. We then describe unsupervised learning and inference techniques for this model class. Directly modeling p(ȳ|x) and p(x̄|y) in the unsupervised setting is difficult because we never directly observe parallel data. Instead, we propose a generative model of the complete data that defines a joint likelihood, p(X, X̄, Y, Ȳ). In order to perform text transduction, the unobserved halves can be treated as latent variables: they will be marginalized out during learning and inferred via posterior inference at test time. Our model assumes that each observed sentence is generated from an unobserved parallel sentence in the opposite domain, as depicted in Figure 1. Specifically, each observed sentence x (i) is generated from its latent counterpart ȳ (i) through a transduction distribution, with ȳ (i) drawn from the language-model prior p D2 (ȳ (i)); symmetrically, each observed y (j) is generated from x̄ (j), which is drawn from the prior p D1 (x̄ (j)). We let θ x|ȳ and θ y|x represent the parameters of the two transduction distributions respectively. We assume the prior distributions are pretrained on the observed data in their respective domains and therefore omit their parameters for simplicity of notation. Together, this gives a joint likelihood that factorizes into the transduction distributions and the priors (Eq. 1). The log marginal likelihood of the data, which we will approximate during training, is obtained by marginalizing out the latent sentences (Eq. 2). Note that if the two transduction models share no parameters, the training problems for each observed domain are independent. Critically, we introduce parameter sharing through our variational inference procedure, which we describe in more detail in Section 3.2. Architecture: Since we would like to be able to model a variety of transfer tasks, we choose a parameterization for our transduction distributions that makes no independence assumptions. Specifically, we employ an encoder-decoder architecture based on the standard attentional Seq2Seq model which has been shown to be successful across various tasks . Similarly, our prior distributions for each domain are parameterized as recurrent language models which, again, make no independence assumptions. In contrast, traditional unsupervised generative sequence models typically make strong independence assumptions to enable exact inference (e.g. the HMM makes a Markov assumption on the latent sequence and emissions are one-to-one). Our model is more flexible, but exact inference via dynamic programming will be intractable. We address this problem in the next section. Ideally, learning should directly optimize the log data likelihood, which is the marginal of our model shown in Eq. 2. However, due to our model's neural parameterization which does not factorize, computing the data likelihood cannot be accomplished using dynamic programming as can be done with simpler models like the HMM. To overcome the intractability of computing the true data likelihood, we adopt amortized variational inference in order to derive a surrogate objective for learning, the evidence lower bound (ELBO) on the log marginal likelihood of Eq. 2 (Eq. 3). The surrogate objective introduces q(ȳ|x (i); φ ȳ|x) and q(x̄|y (j); φ x̄|y), which represent two separate inference network distributions that approximate the model's true posteriors, p(ȳ|x (i); θ x|ȳ) and p(x̄|y (j); θ y|x), respectively. Learning operates by jointly optimizing the lower bound over both variational and model parameters. Once trained, the variational posterior distributions can be used directly for style transfer.
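Written out, the joint likelihood and the ELBO described above take roughly the following form (a reconstruction from the surrounding text rather than a verbatim copy of the paper's Eq. 1-3, so the exact index and parameter placement is an assumption):

% Joint likelihood of the completed parallel corpus (cf. Eq. 1)
p(X, \bar{X}, Y, \bar{Y}) = \prod_{i=1}^{m} p(x^{(i)} \mid \bar{y}^{(i)}; \theta_{x|\bar{y}})\, p_{D_2}(\bar{y}^{(i)}) \;\prod_{j=m+1}^{n} p(y^{(j)} \mid \bar{x}^{(j)}; \theta_{y|x})\, p_{D_1}(\bar{x}^{(j)})

% ELBO used as the training objective (cf. Eq. 3)
\log p(X, Y) \;\geq\; \sum_{i=1}^{m} \Big[ \mathbb{E}_{q(\bar{y} \mid x^{(i)}; \phi_{\bar{y}|x})} \log p(x^{(i)} \mid \bar{y}; \theta_{x|\bar{y}}) - \mathrm{KL}\big(q(\bar{y} \mid x^{(i)}; \phi_{\bar{y}|x}) \,\|\, p_{D_2}(\bar{y})\big) \Big] + \sum_{j=m+1}^{n} \Big[ \mathbb{E}_{q(\bar{x} \mid y^{(j)}; \phi_{\bar{x}|y})} \log p(y^{(j)} \mid \bar{x}; \theta_{y|x}) - \mathrm{KL}\big(q(\bar{x} \mid y^{(j)}; \phi_{\bar{x}|y}) \,\|\, p_{D_1}(\bar{x})\big) \Big]

Each reconstruction term corresponds to a back-translation-style reconstruction of an observed sentence from its inferred counterpart, and each KL term pulls the inferred counterpart towards the corresponding language-model prior, which is exactly the decomposition discussed next.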
The KL terms in Eq. 3, which appear naturally in the ELBO objective, can be intuitively viewed as regularizers that use the language model priors to bias the induced sentences towards the desired domains. Amortized variational techniques have been most commonly applied to continuous latent variables, as in the case of the variational autoencoder (VAE) . Here, we use this approach for inference over discrete sequences, which has been shown to be effective in related work on a semi-supervised task . Inference Network and Parameter Sharing: Note that the approximate posterior on one domain aims to learn the reverse style transfer distribution, which is exactly the goal of the generative distribution in the opposite domain. For example, the inference network q(ȳ|x (i); φ ȳ|x) and the generative distribution p(y|x (i); θ y|x) both aim to transform D 1 to D 2. Therefore, we use the same architecture for each inference network as used in the transduction models, and tie their parameters: φ x̄|y = θ x|ȳ, φ ȳ|x = θ y|x. This means we learn only two encoder-decoders overall, which are parameterized by θ x|ȳ and θ y|x respectively, to represent the two directions of transfer. In addition to reducing the number of learnable parameters, this parameter tying couples the learning problems for both domains and allows us to jointly learn from the full data. Moreover, inspired by recent work that builds a universal Seq2Seq model to translate between different language pairs , we introduce further parameter tying between the two directions of transduction: the same encoder is employed for both x and y, and a domain embedding c is provided to the same decoder to specify the transfer direction, as shown in Figure 2. Ablation analysis in Section 5.3 suggests that parameter sharing is important to achieve good performance. The reconstruction terms and the KL terms in Eq. 3 still involve intractable expectations due to the marginalization of the latent sequence, thus we need to approximate their gradients. We can approximate the expectations by sampling latent sequences, but gradients are difficult to back-propagate directly through discrete samples. Gumbel-softmax and REINFORCE are often used as stochastic gradient estimators in the discrete case. Since the latent text variables have an extremely large domain, we find that REINFORCE-based gradient estimates result in high variance. Thus, we use the Gumbel-softmax straight-through estimator to backpropagate gradients from the KL terms. However, we find that approximating gradients of the reconstruction loss is much more challenging: both the Gumbel-softmax estimator and REINFORCE are unable to outperform a simple stop-gradient baseline, which confirms a similar observation in previous work on unsupervised machine translation . Therefore, we simply stop computing the gradients for the inference network that would result from the reconstruction term, and perform greedy sampling of the latent sequences during training. Note that the inference networks still receive gradients from the prior through the KL term, and their parameters are shared with the decoders, which do receive gradients from reconstruction. We consider this to be the best empirical compromise at the moment. Initialization. Good initialization is often necessary for successful optimization of unsupervised learning objectives. In preliminary experiments, we find that the encoder-decoder structure has difficulty generating realistic sentences during the initial stages of training, which usually results in a disastrous local optimum.
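For concreteness, the straight-through Gumbel-softmax sampling mentioned above can be sketched as follows; this is a generic PyTorch illustration rather than the authors' code, and the temperature and epsilon values are placeholders:

import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits, tau=1.0, eps=1e-20):
    """Draw a one-hot token sample from `logits` over the vocabulary.

    Forward pass: a discrete one-hot sample (argmax of Gumbel-perturbed logits).
    Backward pass: gradients flow through the soft Gumbel-softmax probabilities,
    i.e. the straight-through estimator used here for the KL terms.
    """
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + eps) + eps)
    y_soft = F.softmax((logits + gumbel_noise) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    # Value of y_hard in the forward pass, gradient of y_soft in the backward pass.
    return y_hard - y_soft.detach() + y_soft

The sampled one-hot vector can then be multiplied into an embedding matrix (e.g. that of the prior language model), so the KL term is evaluated on a discrete sample while gradients still reach the inference network.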
The poor behavior in these early stages is mainly because the encoder-decoder is initialized randomly and there is no direct training signal to specify the desired latent sequence in the unsupervised setting. Therefore, we apply a self-reconstruction loss L rec at the initial epochs of training. We denote the output of the encoder as e(·) and the decoder distribution as p dec; the self-reconstruction loss is weighted by a coefficient α, and α decays from 1.0 to 0.0 linearly in the first k epochs. k is a tunable parameter and usually less than 3 in all our experiments. Our probabilistic formulation can be connected with recent advances in unsupervised text transduction methods. For example, back translation loss plays an important role in recent unsupervised machine translation (; ;) and unsupervised style transfer systems. In back translation loss, the source language x is translated to the target language y to form a pseudo-parallel corpus, then a translation model from y to x can be learned on this pseudo bitext just as in the supervised setting. While back translation was often explained as a data augmentation technique, in our probabilistic formulation it appears naturally with the ELBO objective as the reconstruction loss term. Some previous work has incorporated a pretrained language model into neural semi-supervised or unsupervised objectives. uses the log likelihood of a pretrained language model as the reward to update a supervised machine translation system with policy gradient. utilize a similar idea for unsupervised machine translation. employed a similar approach, but interpret the LM as an adversary, with the generator trained to fool the LM. We show how our ELBO objective is connected with these more heuristic LM regularizers by expanding the KL loss term (assume x is observed): KL(q(ȳ|x) || p D2 (ȳ)) = −H q − E q [log p D2 (ȳ)] (Eq. 5). Note that the loss used in previous work does not include the negative entropy term, −H q. Our objective results in this additional "regularizer", the negative entropy of the transduction distribution, −H q. Intuitively, −H q helps avoid a peaked transduction distribution, preventing the transduction from constantly generating similar sentences to satisfy the language model. In experiments we will show that this additional regularization is important and helps bypass bad local optima and improve performance. These important differences with past work suggest that a probabilistic view of unsupervised sequence transduction may provide helpful guidance in determining effective training objectives. We test our model on five style transfer tasks: sentiment transfer, word substitution decipherment, formality transfer, author imitation, and related language translation. For completeness, we also evaluate on the task of general unsupervised machine translation using standard benchmarks. We compare with the unsupervised machine translation model (UNMT) which recently demonstrated state-of-the-art performance on transfer tasks such as sentiment and gender transfer (the UNMT model used in that work differs slightly from the original model in certain details, e.g. the addition of a pooling layer after attention; we re-implement it in our codebase for fair comparison and verify that our re-implementation achieves performance competitive with the original paper). To validate the effect of the negative entropy term in the KL loss term of Eq. 5, we remove it and train the model with a back-translation loss plus a language model negative log likelihood loss (which we denote as BT+NLL) as an ablation baseline. For each task, we also include strong baseline numbers from related work if available. For our method we select the model with the best validation ELBO, and for UNMT or BT+NLL we select the model with the best back-translation loss. Complete model configurations and hyperparameters can be found in Appendix A.1. Word Substitution Decipherment.
Word decipherment aims to uncover the plain text behind a corpus that was enciphered via word substitution, where each word in the vocabulary is mapped to a unique type in a cipher dictionary (; ;). In our formulation, the model is presented with a non-parallel corpus of English plaintext and the ciphertext. We use the data in , which provides 200K sentences from each domain. While previous work controls the difficulty of this task by varying the percentage of words that are ciphered, we directly evaluate on the most difficult version of this task: 100% of the words are enciphered (i.e. no vocabulary sharing in the two domains). We select the model with the best unsupervised reconstruction loss, and evaluate with BLEU score on the test set, which contains 100K parallel sentences. Sentiment Transfer. Sentiment transfer is the task of paraphrasing a sentence with a different sentiment while preserving the original content. Evaluation of sentiment transfer is difficult and is still an open research problem . Evaluation focuses on three aspects: attribute control, content preservation, and fluency. A successful system needs to perform well with respect to all three aspects. We follow prior work by using three automatic metrics : classification accuracy, self-BLEU (BLEU of the output with the original sentence as the reference), and the perplexity (PPL) of each system's output under an external language model. We pretrain a convolutional classifier to assess classification accuracy, and use an LSTM language model pretrained on each domain to compute the PPL of system outputs. We use the Yelp reviews dataset collected by , which contains 250K negative sentences and 380K positive sentences. We also use a small test set that has 1000 human-annotated parallel sentences introduced in . We denote the positive sentiment as domain D 1 and the negative sentiment as domain D 2. We denote the self-BLEU score as BLEU s and the reference BLEU score on the 1000 sentences as BLEU r. Formality Transfer. Next, we consider the harder task of modifying the formality of a sequence. We use the GYAFC dataset , which contains formal and informal sentences from two different domains. In this paper, we use the Entertainment and Music domain, which has about 52K training sentences, 5K development sentences, and 2.5K test sentences. This dataset actually contains parallel data between formal and informal sentences, which we use only for evaluation. We follow the evaluation of the sentiment transfer task and test models on three axes. Since the test set is a parallel corpus, we only compute reference BLEU and ignore self-BLEU. We use D 1 to denote formal text, and D 2 to denote informal text. Author Imitation. Author imitation is the task of paraphrasing a sentence to match another author's style. The dataset we use is a collection of Shakespeare's plays translated line by line into modern English. It was collected by (https://github.com/tokestermw/tensorflow-shakespeare) and used in prior work on supervised style transfer . This is a parallel corpus and thus we follow the setting in the formality transfer task. We use D 1 to denote modern English, and D 2 to denote Shakespeare-style English. Related Language Translation.
Next, we test our method on a challenging related language translation task . This task is a natural test bed for unsupervised sequence transduction, because the goal is to preserve the meaning of the source sentence while rewriting it into the target language. For our experiments, we choose Bosnian (bs) and Serbian (sr) as the related language pairs. We follow to report BLEU-1 score on this task since BLEU-4 score is close to zero. Unsupervised MT. In order to draw connections with a related work on general unsupervised machine translation, we also evaluate on the WMT'16 German English translation task. This task is substantially more difficult than the style transfer tasks considered so far. We compare with the state-of-the-art UNMT system using the existing implementation from the XLM codebase, 6 and implement our approach in the same framework with XLM initialization for fair comparison. We train both systems on 5M non-parallel sentences from each language. Results for the full suite of tasks are collected in Tables 1 and 2. We list the PPL of the test set under the external LM for both the source and target domain in Table 1. PPL of system outputs should be compared to PPL of the test set itself because extremely low PPL often indicates that the generated sentences are short or trivial. Tables 1 and 2 demonstrate some general trends. First, UNMT is able to outperform other prior methods in unsupervised text style transfer, such as (; ;). The performance improvements of UNMT indicate that flexible and powerful architectures are crucial (prior methods generally do not have attention mechanism). Second, our model achieves comparable classification accuracy to UNMT but outperforms it in all style transfer tasks in terms of the reference-BLEU, which is probably the most important metric since it directly measures the quality of the final generations against gold parallel data. This indicates that our method is both effective and consistent across many different tasks. Finally, the BT+NLL baseline is sometimes quite competitive, which indicates that the addition of a language model alone can be beneficial. However, our method consistently outperforms the simple BT+NLL method, which indicates the effectiveness of the additional entropy regularizer in Eq. 5 that is the byproduct of our probabilistic formulation. Next, we examine the PPL of the system outputs under pretrained domain LMs, which should be evaluated in comparison with the PPL of the test set itself. For both the sentiment transfer and the formality transfer tasks in Table 1, BT+NLL achieves extremely low PPL, lower than the PPL of the test corpus in the target domain. After a close examination of the output, we find that it contains many repeated and overly simple outputs. For example, the system generates many examples of "I love this place" when transferring negative to positive sentiment (see Appendix A.3 for examples). It is not surprising that such a trivial output has low perplexity, high accuracy, and low BLEU score. On the other hand, our system obtains reasonably competitive PPL, and our approach achieves the highest accuracy and higher BLEU score than the UNMT baseline. Parameter Sharing. We also conducted an experiment on the word substitution decipherment task, where we remove parameter sharing (as explained in Section 3.2) between two directions of transduction distributions, and optimize two encoder-decoder instead. 
We found that the model only obtained an extremely low BLEU score and failed to generate any meaningful outputs. Performance vs. Domain Divergence. Figure 3 plots the relative improvement of our method over UNMT with respect to accuracy of a naive Bayes' classifier trained to predict the domain of test sentences. Tasks with high classification accuracy likely have more divergent domains. We can see that for decipherment and en-de translation, where the domains have different vocabularies and thus are easily distinguished, our method yields a smaller gain over UNMT. This likely indicates that the (discrimination) regularization effect of the LM priors is less important or necessary when the two domains are very different. Why do we do better than UNMT? Finally, we examine in detail the output of our model and UNMT for the author imitation task. We pick this task because the reference outputs for the test set are provided, aiding analysis. Examples shown in Table 3 demonstrate that UNMT tends to make overly large changes to the source so that the original meaning is lost, while our method is better at preserving the content of the source sentence. Next, we quantitatively examine the outputs from UNMT and our method by comparing the F1 measure of words bucketed by their syntactic tags. We use the open-sourced compare-mt tool , and the are shown in Figure 4. Our system has an advantage over UNMT in all word categories. In particular, our system is much better at generating nouns, which contribute to preserving the content of the sentences. Greedy vs. Sample-based Gradient Approximation. In our experiments, we use greedy decoding from the inference network to approximate the expectation required by ELBO, which is a biased estimator. The main purpose of this approach is to reduce the variance of the gradient estimator during training, especially in the early stages when the variance of sample-based approaches is quite high. As an ablation experiment on the sentiment transfer task we compare greedy and sample-based gradient approximations in terms of both train and test ELBO, as well as task performance corresponding to best test ELBO. After the model is fully trained, we find that the sample-based approximation has Table 4, where the sampled-based training underperforms on both ELBO and task evaluations. As noted above, to stabilize the training process, we stop gradients from propagating to the inference network from the reconstruction loss. Does this approach indeed better optimize the actual probabilistic objective (i.e. ELBO) or only indirectly lead to improved task evaluations? In this section we use sentiment transfer as an example task to compare different methods for propagating gradients and evaluate both ELBO and task evaluations. Specifically, we compare three different methods: • Stop Gradient: The gradients from reconstruction loss are not propagated to the inference network. This is the method we use in all previous experiments. • Gumbel Softmax : Gradients from the reconstruction loss are propagated to the inference network with the straight-through Gumbel estimator. • REINFORCE : Gradients from reconstruction loss are propagated to the inference network with ELBO as a reward function. This method has been used in previous work for semi-supervised sequence generation , but often suffers from instability issues. We report the train and test ELBO along with task evaluations in Table 5, and plot the learning curves on validation set in Figure 5. 
7 While being much simpler, we show that the stop-gradient trick produces superior ELBO over Gumbel Softmax and REINFORCE. This suggests that stopping gradient helps better optimize the likelihood objective under our probabilistic formulation in comparison with other optimization techniques that propagate gradients, which is counter-intuitive. A likely explanation is that as a gradient estimator, while clearly biased, stop-gradient has substantially reduced variance. In comparison with other techniques that offer reduced bias but extremely high variance when applied to our model class (which involves discrete sequences as latent variables), stop-gradient actually leads to better optimization of our objective because it achieves better balance of bias and variance overall. A.1 MODEL CONFIGURATIONS. We adopt the following attentional encoder-decoder architecture for UNMT, BT+NLL, and our method across all the experiments: • We use word embeddings of size 128. • We use 1 layer LSTM with hidden size of 512 as both the encoder and decoder. • We apply dropout to the readout states before softmax with a rate of 0.3. • , we add a max pooling operation over the encoder hidden states before feeding it to the decoder. Intuitively the pooling window size would control how much information is preserved during transduction. A window size of 1 is equivalent to standard attention mechanism, and a large window size corresponds to no attention. See Appendix A.2 for how to select the window size. • There is a noise function for UNMT baseline in its denoising autoencoder loss (; . We use the default noise function and noise hyperparameters in . For BT+NLL and our method we found that adding the extra noise into the self-reconstruction loss (Eq. 4) often hurts the performance, because we already have a language model to avoid the local optimum of direct-copy generation. A.2 HYPERPARAMETER TUNING. We vary pooling windows size as {1, 5}, the decaying patience hyperparameter k for selfreconstruction loss (Eq. 4) as {1, 2, 3}. For the baseliens UNMT and BT+NLL, we also try the option of not annealing the self-reconstruction loss at all as in the unsupervised machine translation task . We vary the weight λ for the NLL term (BT+NLL) or the KL term (our method) as {0.001, 0.01, 0.03, 0.05, 0.1}. We list some examples of the sentiment transfer task in Table 6. Notably, the BT+NLL method tends to produce extremely short and simple sentences. In Section 5 we mentioned that the baseline BT+NLL has a low perplexity for some tasks because it tends to generate overly simple and repetitive sentences. From Table 1 we see that two representative tasks are sentiment transfer and formatliy transfer. In Appendix A.3 we have demonstrated some examples for sentiment transfer, next we show some repetitive samples of BT+NLL in Table 7.
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
HJlA0C4tPS
We formulate a probabilistic latent sequence model to tackle unsupervised text style transfer, and show its effectiveness across a suite of unsupervised text style transfer tasks.
Current practice in machine learning is to employ deep nets in an overparametrized limit, with the nominal number of parameters typically exceeding the number of measurements. This resembles the situation in compressed sensing, or in sparse regression with $l_1$ penalty terms, and provides a theoretical avenue for understanding phenomena that arise in the context of deep nets. One such phenomenon is the success of deep nets in providing good generalization in an interpolating regime with zero training error. Traditional statistical practice calls for regularization or smoothing to prevent "overfitting" (poor generalization performance). However, recent work shows that there exist data interpolation procedures which are statistically consistent and provide good generalization performance\cite{belkin2018overfitting} ("perfect fitting"). In this context, it has been suggested that "classical" and "modern" regimes for machine learning are separated by a peak in the generalization error ("risk") curve, a phenomenon dubbed "double descent"\cite{belkin2019reconciling}. While such overfitting peaks do exist and arise from ill-conditioned design matrices, here we challenge the interpretation of the overfitting peak as demarcating the regime where good generalization occurs under overparametrization. We propose a model of Misparametrized Sparse Regression (MiSpaR) and analytically compute the GE curves for $l_2$ and $l_1$ penalties. We show that the overfitting peak arising in the interpolation limit is dissociated from the regime of good generalization. The analytical expressions are obtained in the so-called "thermodynamic" limit. We find an additional interesting phenomenon: increasing overparametrization in the fitting model increases sparsity, which should intuitively improve performance of $l_1$ penalized regression. However, at the same time, the relative number of measurements decreases compared to the number of fitting parameters, and eventually overparametrization does lead to poor generalization. Nevertheless, $l_1$ penalized regression can show good generalization performance under conditions of data interpolation even with a large amount of overparametrization. These results provide a theoretical avenue into studying inverse problems in the interpolating regime using overparametrized fitting functions such as deep nets. Modern machine learning has two salient characteristics: large numbers of measurements m, and non-linear parametric models with very many fitting parameters p, with both m and p in the range of 10^6 to 10^9 for many applications. Fitting data with such large numbers of parameters stands in contrast to the inductive scientific process where models with small numbers of parameters are normative. Nevertheless, these large-parameter models are successful in dealing with real life complexity, raising interesting theoretical questions about the generalization ability of models with large numbers of parameters, particularly in the overparametrized regime µ = p/m > 1. Classical statistical procedures trade training (TE) and generalization error (GE) by controlling the model complexity. Sending TE to zero (for noisy data) is expected to increase GE. However deep nets seem to over-parametrize and drive TE to zero (data interpolation) while maintaining good GE. Over-parametrization has the benefit that global minima of the empirical loss function proliferate and become easier to find. Figure 1 (caption): Note that for µα > 1, the GE values for the l 1 case are close to zero, whereas the values for the l 2 penalized case can be much larger.
Note also that the overfitting peak is much larger for α < 1 than for α > 1, and that the region of good generalization starts at µ = 1/α, which can be to the left or right of the overfitting peak depending on the value of the undersampling parameter α. For the simulations with λ → 0, in the l 2 case a pseudoinverse was used. For the l 1 case a numerically small value λ = 10^-5 was used, and it was checked that the results do not change on decreasing λ. These observations have led to recent theoretical activity. Regression and classification algorithms have been exhibited that interpolate data but also generalize optimally. An interesting related phenomenon has been noted: the existence of a peak in GE with increasing fitting model complexity. In it was suggested that this peak separates a classical regime from a modern (interpolating) regime where over-parametrization improves performance. While the presence of a peak in the GE curve is in stark contrast with the classical statistical folk wisdom where the GE curve is thought to be U-shaped, understanding the significance of such peaks is an open question, and motivates the current paper. Parenthetically, similar over-fitting peaks were reported almost twenty years ago (cf. the statistical physics approach to learning) and attributed to increased fitting model entropy near the peak (see in particular Figs 4.3 and 5.2 in ). 1. We introduce a model, Misparametrized (or Misspecified) Sparse Regression (MiSpaR), which separates the number of measurements m, the number of model parameters n (which can be controlled for sparsity by a parameter ρ), and the number of fitting degrees of freedom p. 2. We obtain analytical expressions for the GE and TE curves for l 2 penalized regression in the "high-dimensional" or "thermodynamic" asymptotic regime m, p, n → ∞, keeping the ratios µ = p/m and α = m/n fixed. We are also able to analytically compute GE for l 1 penalized regression, and exhibit explicit expressions for µ < 1 and µ >> 1 as λ → 0. 3. We show that for λ → 0 and σ > 0, the overfitting peak appears at the data interpolation point µ = 1 (p = m) for both l 2 and l 1 penalized interpolation (GE ∼ |1 − µ|^(-1) near µ = 1), but does not demarcate the point at which "good generalization" first occurs, which for small σ corresponds to the point p = n (µα = 1) (Figure 1). The region of good generalization can start before or after the overfitting peak. The overfitting peak is suppressed for finite λ. 4. For infinitely large overparametrization, generalization does not occur: GE(µ → ∞) = 1 for both l 2 and l 1 penalized interpolation. However, for small values of the sparsity parameter ρ and measurement noise variance σ², there is a large range of values of µ where l 1 regularized interpolation generalizes well, but l 2 penalized interpolation does not (Fig. 1). This range is given by 1 << log(µ) << 1/σ², ρ/α << 1. In this regime the sparsity penalty is effective, and suppresses noise-driven mis-estimation of parameters for the l 1 penalty. This shows how generalization properties of penalized interpolation depend strongly on the inductive bias, and are not properties of data interpolation per se. This has important implications for the usage of deep nets for solving inverse problems. 5. [...] ∝ σ² for small µ − µ c, and GE 1 (µ → ∞) = 1. 6. For σ = 0 and α > α c (ρ), GE 1 goes to zero linearly at µα = 1 (GE 1 ∝ [...]).
In this case GE 1 goes to zero with a nontrivial slope on the left, but rises quadratically on the right. Usually in linear regression the same (known) design matrix x ij is used both for data generation and for parameter inference. In MiSpaR the generative model has a fixed number n of parameters β j, which generate m measurements y i, but the number of parameters p in the inference model is allowed to vary freely, with p < n corresponding to the under-parametrized and p > n the over-parametrized case. For the under-parametrized case, a truncated version of the design matrix is used for inference, whereas for the over-parametrized case, the design matrix is augmented with extra rows. In addition, we assume that the parameters in the generative model are sparse, and consider the effect of sparsity-inducing regularization in the interpolation limit. Combining misparametrization with sparsity is important to our study for two reasons: • Dissociating data interpolation (which happens when µ = 1, λ → 0) from the regime where good generalization can occur (this is controlled by the undersampling α as well as by the model sparsity ρ). • We are able to study the effect of different regularization procedures on data interpolation in an analytically tractable manner and obtain analytical expressions for the generalization error. Generative Model ("Teacher"): We assume that the (known/measured) design variables x ij are i.i.d. Gaussian distributed from one realization of the generative model to another, with variance 1/n. This choice of variance is important to fix normalization. Other choices have also been employed in the literature (notably x ij ∼ N(0, 1/m)); this is important to keep in mind when comparing with literature formulae, where factors of α may need to be inserted appropriately to obtain a match. Undersampling: α = m/n. Sparsity: each generative parameter is zero with probability 1 − ρ and drawn from a distribution π with probability ρ. Here π is the distribution of the non-zero model parameters. We assume this distribution to be Gaussian as this permits closed form evaluation of integrals appearing in the l 1 case. Note that we term µ = p/m the overparametrization (referring to the case where µ > 1) and we term α = m/n the undersampling (referring to the case where α < 1). Inference Model ("Student"): The design matrix used for inference is mis-parametrized or mis-specified: under-specified (or partially observed) when µα < 1 ≡ p < n; over-specified, with extra, effect-free rows in the design matrix, when µα > 1 ≡ p > n. Parameter inference is carried out by minimizing a penalized mean squared error. Note that for p > n, the model parameters are augmented by p − n zero entries. We consider l 2 and l 1 penalties (with the penalty term V correspondingly the squared l 2 norm or the l 1 norm of the inferred parameters). Note that the expectation E is taken simultaneously over the parameter values, the design matrix and measurement noise. We obtain exact analytical expressions for the risk (generalization error) in the (thermodynamic) limit where n, p, m all tend to infinity, but the ratios α = m/n, µ = p/m are held finite. Similar "thermodynamic" or "high-dimensional" limiting procedures are used in statistical physics, e.g. in the study of random matrices and spin-glass models in large spatial dimensions. Such limits are also well-studied in modern statistics (for example to understand phase-transition phenomena in the LASSO algorithm ). While there is a large literature on the LASSO phase transition, we were unable to find any computations of the GE curves that span across the underparametrized and overparametrized regimes in a systematic model as presented here.
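To make the teacher-student setup above concrete, here is a small numerical sketch of MiSpaR (an illustration written for this text, not the paper's analytical computation; the particular parameter values, the use of scikit-learn's Lasso/Ridge solvers with a small penalty standing in for the interpolation limit, and the parameter-space error used as a GE proxy are all assumptions):

import numpy as np
from sklearn.linear_model import Lasso, Ridge

def mispar_ge(n=400, alpha=0.5, mu=2.0, rho=0.05, sigma=0.1,
              penalty="l1", lam=1e-4, seed=0):
    """Empirical generalization-error proxy for Misparametrized Sparse Regression.

    n     : number of generative (teacher) parameters
    alpha : undersampling ratio, m = alpha * n measurements
    mu    : overparametrization ratio, p = mu * m fitting parameters
    rho   : fraction of non-zero teacher parameters
    sigma : measurement noise standard deviation
    """
    rng = np.random.default_rng(seed)
    m, p = int(alpha * n), int(mu * alpha * n)
    width = max(n, p)
    beta = np.zeros(n)
    nonzero = rng.random(n) < rho
    beta[nonzero] = rng.normal(size=nonzero.sum())           # sparse teacher parameters
    X = rng.normal(scale=1.0 / np.sqrt(n), size=(m, width))  # i.i.d. Gaussian design
    y = X[:, :n] @ beta + sigma * rng.normal(size=m)         # noisy measurements
    if penalty == "l1":
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=50000)
    else:
        model = Ridge(alpha=lam, fit_intercept=False)
    model.fit(X[:, :p], y)                                   # student sees only p columns
    beta_hat = np.zeros(width)
    beta_hat[:p] = model.coef_
    # Squared parameter error; with isotropic Gaussian inputs this is proportional
    # to the excess prediction risk, so it serves as a generalization-error proxy.
    return np.sum((beta_hat[:n] - beta) ** 2) + np.sum(beta_hat[n:] ** 2)

# Sweep the overparametrization ratio mu across the interpolation point p = m (mu = 1).
for mu in [0.5, 0.9, 1.0, 1.1, 2.0, 4.0]:
    print(mu, mispar_ge(mu=mu, penalty="l1"), mispar_ge(mu=mu, penalty="l2"))

The intent is simply to visualize the qualitative behavior discussed in the text (the spike near µ = 1 for small λ, and the l 1 advantage once µα > 1 with sparse teachers), not to reproduce the exact analytical curves.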
We derive analytical formulae for TE and GE with l 2 or ridge regularization. For l 1 regularization, explicit formulae are given in some parameter regimes. More generally, for the l 1 case we obtain a pair of simultaneous nonlinear equations in two variables which implicitly define the MSE. These can be solved numerically to obtain the GE. The nonlinear equations are given in closed form without hidden parameters and do not require integration. Analytical Formulae: TE 2 and GE 2 are the training and generalization errors for the l 2 penalized case, and GE 1 the generalization error for the l 1 penalized case. Due to lack of space we do not present the analytical formulae for λ > 0 as these expressions are complex, but the corresponding analytical expressions were used to generate the theory curves in Fig. 1 for the case λ > 0. The derivations employ the cavity mean field theory approach. Here σ² eff = σ² + ρ(1 − µα). Note that the formulae for GE agree for the l 2 and l 1 cases in the underparametrized regime µ < 1, but diverge in the overparametrized regime: infinitesimal l 1 regularization provides no better generalization than the pseudoinverse based procedure unless there is overparametrization. Further note that "good generalization" (GE small) begins when µα > 1, not at the overfitting peak (µ = 1).
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BklhoQ258B
Proposes an analytically tractable model and inference procedure (misparametrized sparse regression, inferred using L_1 penalty and studied in the data-interpolation limit) to study deep-net related phenomena in the context of inverse problems.
Hashing-based collaborative filtering learns binary vector representations (hash codes) of users and items, such that recommendations can be computed very efficiently using the Hamming distance, which is simply the sum of differing bits between two hash codes. A problem with hashing-based collaborative filtering using the Hamming distance, is that each bit is equally weighted in the distance computation, but in practice some bits might encode more important properties than other bits, where the importance depends on the user. To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items (using the Boolean AND operation), such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent. This allows a binary user-level importance weighting of each item without the need to store additional weights for each user. We experimentally evaluate our approach against state-of-the-art baselines on 4 datasets, and obtain significant gains of up to 12% in NDCG. We also make available an efficient implementation of self-masking, which experimentally yields <4% runtime overhead compared to the standard Hamming distance. Collaborative filtering is an integral part of personalized recommender systems and works by modelling user preference on past item interactions to predict new items the user may like . Early work is based on matrix factorization approaches that learn a mapping to a shared m-dimensional real-valued space between users and items, such that user-item similarity can be estimated by the inner product. The purpose of hashing-based collaborative filtering is the same as traditional collaborative filtering, but allows for fast similarity searches to massively increase efficiency (e.g., realtime brute-force search in a billion items ). This is done by learning semantic hash functions that map users and items into binary vector representations (hash codes) and then using the Hamming distance (the sum of differing bits between two hash codes) to compute user-item similarity. This leads to both large storage reduction (floating point versus binary representations) and massively faster computation through the use of the Hamming distance. One problem with hashing-based collaborative filtering is that each bit is weighted equally when computing the Hamming distance. This is a problem because the importance of each bit in an item hash code might differ between users. The only step towards addressing this problem has been to associate a weight with k-bit blocks of each hash code . However, smaller values of k lead to increased storage cost, but also significantly slower computation due to the need of computing multiple weighted Hamming distances. To solve this problem, without using any additional storage and only a marginal increase in computation time, we present Variational Hashing-based collaborative filtering with Self-Masking (VaHSM-CF). VaHSM-CF is our novel variational deep learning approach for hashing-based collaborative filtering that learns hash codes optimized for selfmasking. Self-masking is a novel technique that we propose in this paper for user-level bit-weighting on all items. 
Self-masking modifies item hash codes by applying an AND operation between an item and user hash code, before computing the standard Hamming distance between the user and selfmasked item hash codes. Hash codes optimized with self-masking represent which bit-dimensions encode properties that are important for the user (rather than a bitwise -1/1 preference towards each property). In practice, when ranking a set of items for a specific user, self-masking ensures that only bit differences on bit-dimensions that are equal to 1 for the user hash code are considered, while ignoring the ones with a -1 value, thus providing a user-level bitwise binary weigthing. Since selfmasking is applied while having the user and item hash codes in the lowest levels of memory (i.e., register), it only leads to a very marginal efficiency decrease. We contribute (i) a new variational hashing-based collaborative filtering approach, which is optimized for (ii) a novel self-masking technique, that outperforms state-of-the-art baselines by up to 12% in NDCG across 4 different datasets, while experimentally yielding less than 4% runtime overhead compared to the standard Hamming distance. We publicly release the code for our model, as well as an efficient implementation of the Hamming distance with self-masking 1. We focus on collaborative filtering with explicit feedback, which assumes that users and items are related via a user-specified rating: the task is to rank a pool of pre-selected items. This is different from implicit feedback, where the task is to estimate the pool of items that are of interest to the user. Matrix factorization is one of the most successful collaborative filtering methods , but to reduce storage requirements and speed up computation, hashing-based collaborative filtering has been researched. For hashing-based methods the users and items are represented as binary hash codes (as opposed to real-valued vectors), such that the highly efficient Hamming distance (as opposed to the inner product) can be used for computing user-item similarities. Two-stage approaches. Early hashing-based collaborative filtering methods include two stages: First, real-valued user and item vectors are learned, and then the real-valued vectors are transformed into binary hash codes. employ matrix factorization initially, followed by a binary quantization of rounding the real-valued vectors, while ensuring that the hash code is preference preserving of the observed user-item ratings using their proposed Constant Feature Norm constraint. and both explore binary quantization strategies based on orthogonal rotations of the real-valued vectors, which share similarities with Spectral Clustering . However, the two-stage approaches often suffer from large quantization errors , because the hash codes are not learned directly, but rather based on different quantization procedures. Learned hashing approaches. propose Discrete Collaborative Filtering (DCF), which is a binary matrix factorization approach that directly learns the hash codes using relaxed integer optimization, while enforcing bit balancing and decorrelation constraints. Extensions of DCF have focused on incorporating side-information (e.g., reviews associated with a rating); ) and have been redesigned for implicit feedback signals . 
More recent work addresses the problem that hashing-based collaborative filtering methods have reduced representational power compared to real-valued vectors, but increasing the hash code dimensionality to match the amount of bits used in the real-valued case hurts model generalization . To address this, propose Compositional Coding for Collaborative Filtering (CCCF), which is broadly similar to learning compositional codes for (word) embedding compression . CCCF is a hybrid approach that combines hash codes and real-valued weights: each hash code is split into k blocks of r bits each, and each block is associated with a real-valued scalar indicating the weight of the block. The distance between two CCCF hash codes is then computed as a weighted sum of the Hamming distances of the individual blocks, where each weight is the product of each block's weight. The problem with this approach is that each block requires an individual Hamming distance computation, as well as floating point multiplications of the block weights. In fact, the CCCF block construction no longer allows for highly efficient Boolean operations because the distance computation is weighted by each block's weight. Another problem with CCCF is that it drastically increases storage requirements by needing to store the real-valued weights for all blocks in a hash code. In contrast to CCCF, our proposed variational hashing-based collaborative filtering with selfmasking solves the same problems, as it effectively allows to disable unimportant bits -corresponding to a 1-bit block size with 0/1 weights -without needing to store any additional weights or vectors. Additionally, after having applied the self-masking, user-item similarity can still be computed using only a single Hamming distance on the two hash codes. Hashing-based Collaborative filtering aims to learn binary user and item representations (called hash codes), such that the distance between the representations indicates how well user u likes item i. In practice, the Hamming distance is used due to its fast hardware-level implementation. Formally, we learn z u ∈ {−1, 1} m and z i ∈ {−1, 1} m, where m is the number of bits in the hash code, which is typically chosen to fit into a machine word. The preference of user u for item i is specified by the rating R u,i ∈ {1, 2, 3, ..., K}, where K is the maximum rating, such that the Hamming distance between z u and z i is low when R u,i is high. Computing the Hamming distance computation is extremely efficient due to fast hardware-level implementation of the Boolean operations, as where SUM is computed fast on hardware using the popcnt instruction. Given a user and set of items, the integer-valued Hamming distances can be linear-time sorted using e.g. radix sort (because Hamming distances are bounded in [0, m]) in ascending order to create a ranked list based on user preference . The Hamming distance assigns equal weights to all bits, but in reality bit importance might differ among users. For example, if we consider each bit an encoding of a specific property of an item (e.g., a movie being a thriller), then the weight of each property would be dependent on each user's preference. However, since the hash codes are binary, it is not possible to encode such preference weights without using more storage and computation time due to no longer operating on binary values. In fact, no existing method even allows disabling specific bits for certain users (corresponding to the case of 0 preference weight). 
We next present a solution for the latter problem, which encodes the importance of each bit directly into the user hash code, and therefore does not require any additional storage. We define the Hamming distance with self-masking: We first apply an AND operation between the user and item hash codes, and then compute the Hamming distance between that and the user hash code. This fundamentally changes the purpose of the user hash code: instead of encoding a positive or negative preference for a property, it encodes which properties are important to the user (-1's from the user hash code are copied to the item due to the AND operation). This allows the model to disable unimportant bits on a user-level, meaning that we enable the model to produce user-specific item representations while still only storing a single hash code for each user and item respectively. Self-masking requires an additional Boolean operation, but since this is applied once the hash codes are already placed in the lowest levels of memory (i.e., register), it only leads to a marginal decrease in efficiency (see Section 4.6 for an empirical analysis). To derive a variational setup for hashing-based collaborative filtering, we define the likelihood of a user, u, and the likelihood of an item i, as the product over the likelihoods of the observed user specified ratings: where θ are the parameters of the (neural network) model. This formulation enforces a dual symmetric effect of users being defined by all their rated items, and items being defined by the ratings provided by all the users. To maximize the likelihood of all observed items and users, we need to maximize the likelihood of the observed ratings p θ (R u,i). Note that instead of maximizing the raw likelihood, we consider the log likelihood to derive the objective below. We assume that the likelihood of a rating, p θ (R u,i), is conditioned on two latent vectors, a user hash code z u, and an item hash code z i. To obtain the hash codes of the user and item, we assume that z u and z i each are sampled by repeating m Bernoulli trials, which have equal probability of sampling -1 and 1. This gives us the following log likelihood, which we wish to maximize: The latent vectors z u and z i are a user and item hash code, and should therefore be conditioned on the user and item respectively. To do this we first multiply and divide with the approximate posterior distributions q φ (z i |i), q ψ (z u |u): where ψ and φ are the parameters of the approximate posteriors. We can now rewrite to an expectation and apply Jensen's inequality to obtain a lower bound on the log likelihood: Since z i and z u will be sampled independently, then q φ (z i |i) and q ψ (z u |u) are independent and we can rewrite to the variational lower bound: where KL(·, ·) is the Kullback-Leibler divergence. Thus, to maximize the expected log likelihood of the observed rating, we need to maximize the conditional log likelihood of the rating, while minimising the KL divergence between the approximate posterior and prior distribution of the two latent vectors. Maximizing the expected conditional log likelihood can be considered as a reconstruction term of the model, while the KL divergence can be considered as a regularizer. Next we present the computation of the approximate posterior distributions q i φ (z i |i) and q u ψ (z u |u) (Section 3.3) and the conditional log likelihood of the rating p θ (R u,i |z u, z i) (Section 3.4). 
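Both the plain Hamming distance and the self-masking variant described above come down to a couple of word-level Boolean operations. The following is a minimal Python sketch (an illustration, not the released implementation), assuming 64-bit integer hash codes in which a set bit stands for +1 and a cleared bit for -1:

def hamming(user_code: int, item_code: int) -> int:
    """Standard Hamming distance between two 64-bit hash codes."""
    return bin((user_code ^ item_code) & 0xFFFFFFFFFFFFFFFF).count("1")

def hamming_self_mask(user_code: int, item_code: int) -> int:
    """Hamming distance with self-masking: the item code is first AND-ed with
    the user code, so only dimensions the user marks as important (user bit = 1)
    can contribute to the distance."""
    masked_item = user_code & item_code
    return bin((user_code ^ masked_item) & 0xFFFFFFFFFFFFFFFF).count("1")

# Toy example with 8-bit codes: the user only "cares about" the low 4 bits.
user = 0b00001111
item = 0b10100101
print(hamming(user, item))            # 4: differences counted on all bits
print(hamming_self_mask(user, item))  # 2: differences counted only where the user bit is 1

Because the extra AND is applied to values that are already in registers, the cost over the plain distance is essentially one instruction per code pair, which is consistent with the small (<4%) runtime overhead reported for self-masking.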
The approximate posterior distributions can be seen as two encoder functions modelled through a neural network by considering the functions as embedding layers. Each encoder function maps either a user or an item to a hash code. Next, we focus on the derivation of the encoder function for the user, as they are both computed in the same way. The probability of the j'th bit is given by: where u is the j'th entry in a learned real-valued embedding E for user u, and σ is the sigmoid function. The j'th bit is then given by: where µ (j) is either chosen stochastically by sampling µ (j) from a uniform distribution in the interval, or chosen deterministically to be 0.5, which can be used for evaluation to obtain fixed hash codes. Note that µ is sampled for each bit. As the sampling is non-differentiable, a straight-through estimator is used for backpropagation. The conditional log likelihood can be considered a reconstruction of the rating, given the user and item hash codes. We model the observed ratings as a ground truth rating with additive standard normally distributed noise, which is then discretized to the observed categorical rating. The conditional log likelihood can then be computed as: where f (z u, z i) is a function that reconstructs the rating given the user and item hash codes. Maximising the log likelihood, log p θ (R u,i |z u, z i), corresponds to minimising the mean squared error (MSE) between R u,i and f (z u, z i), which is done for training the model. Existing work on hashingbased collaborative filtering also employs a MSE objective, and thus implicitly make the same normal distribution assumption as done in this work. We define the reconstruction function to be the self-masking Hamming distance from Eq. 2: where g is a fixed affine transformation that maps the interval of the Hamming distance to the interval of the ratings, such that the minimum and maximum of the Hamming distance correspond to the minimum and maximum of the ratings. The model is now fully differentiable and can be trained end-to-end using backpropagation, such that the network is able to optimize the hash codes directly for self-masking. A depiction of the model is provided in Figure 1. It should be noted that while variational autoencoders are generative models, we do not explicitly utilize this in our model, and are primarily concerned with the reconstruction of the observed ratings. We evaluate on 4 publicly available datasets commonly used in prior work (; ; ; ; and summarized in Table 1 . Specifically, we use: two movie rating datasets, Movielens 1M 2 (ML-1M) and Movielens 10M 3 (ML-10M); a Yelp dataset with ratings of e.g., restaurant and shopping malls 4; and a book rating dataset from Amazon 5 (50% of the ratings are used for testing, 42.5% are used for training, while the last 7.5% are used for validation. We evaluate our method, VaHSM-CF, and all baselines (see Section 4.2) using Normalised Discounted Cumulative Gain (NDCG) (Järvelin & Kekäläinen, 2000), which is often used to evaluate recommender systems with non-binary ratings (or relevance values). We use the average NDCG at cutoffs {2, 6, 10} over all users and report the average for each cutoff value. We use as baselines the state-of-the-art methods for hashing-based collaborative filtering (see Section 2), standard matrix factorisation, and two different strategies for binarising the output of the matrix factorisation 6. 
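The bit-sampling step just described can be sketched as follows; this is a generic PyTorch illustration of the stated scheme (sigmoid bit probabilities, a uniform or fixed 0.5 threshold, and a straight-through gradient), not the authors' TensorFlow code:

import torch

def sample_hash_code(embedding: torch.Tensor, deterministic: bool = False) -> torch.Tensor:
    """Sample an m-bit hash code in {-1, 1} from a real-valued user/item embedding.

    Each bit is set to 1 when sigmoid(embedding_j) exceeds the threshold mu_j;
    mu_j is 0.5 for deterministic (evaluation-time) codes and uniform in [0, 1]
    for stochastic (training-time) codes.  A straight-through estimator lets
    gradients reach the embedding despite the discrete thresholding.
    """
    probs = torch.sigmoid(embedding)
    mu = torch.full_like(probs, 0.5) if deterministic else torch.rand_like(probs)
    hard = torch.where(probs > mu, torch.ones_like(probs), -torch.ones_like(probs))
    soft = 2.0 * probs - 1.0
    # Forward pass uses the hard {-1, 1} code; backward pass uses d(soft)/d(embedding).
    return hard.detach() - soft.detach() + soft

At evaluation time the deterministic branch yields fixed hash codes, matching the "Det. Eval" configurations discussed later in the experiments.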
We include the standard matrix factorization as a reference to a traditional real-valued collaborative filtering baseline, in order to highlight the performance gap between the hashing-based approaches and a real-valued approach. DCF learns user and item hash codes through a binary matrix factorization solved as a relaxed integer problem. CCCF learns hash codes consisting of k blocks, where each block has r bits. A floating point weight is associated with each block for computing user-item similarities as a weighted sum of block-level Hamming distances. In the original paper, the floating point weights are not counted towards the number of bits used, thus leading to an unfair advantage. For a fair comparison, we count each floating point weight as 16 bits in the following experimental comparison. MF is the classical matrix factorization based collaborative filtering method, where latent real-valued vectors are learned for users and items. We set the latent dimension to be the same as the number of bits used in the hashing-based approaches. MF mean and MF median are based on MF, but use either each dimension's mean or median for doing the binary quantization to bits. We include these to highlight the large quantization loss that occurs when the hash codes are not learned directly. VaH-CF is our proposed method without self-masking. We use it to show the effect of a neural variational hashing-based collaborative filtering approach without self-masking. We train both VaHSM-CF and VaH-CF using the Adam optimizer, and tune the learning rate from the set {0.005, 0.001, 0.0005}, where 0.001 is always chosen across all datasets. The batch size is chosen from the set {100, 200, 400, 800}, where 400 is always chosen. To reduce over-fitting, Gaussian noise is added to the ratings during training, as noise injection has been found beneficial in multiple domains for variational neural models. For the noise injection, we initially set the variance of the Gaussian to 1 and reduce it by a factor of 1 − 10^−4 every iteration during training. Our model is implemented using the TensorFlow Python library, and all experiments are run on Titan X GPUs. All hyperparameters for the baselines are tuned using the same set of possible values as in the original papers. For the CCCF baseline, we consider block sizes of {8, 16, 32, 64} and each floating point weight counts for 16 bits. We try all possible combinations that fit within the bit budget, and if a single block is chosen, then the weight is not included in the bit calculation. [Table 2: NDCG@{2, 6, 10} of our method (VaHSM-CF) against baselines on ML-1M, ML-10M, Yelp, and Amazon, using hash codes of length 32 and 64 bits. The missing results for CCCF on Amazon are due to the model requiring more than 128 GB of RAM. Statistically significant improvements over the best existing hashing-based baseline (DCF), using a paired two-tailed t-test at the 0.05 level, are indicated by *.] The results are shown in Table 2 for hash code lengths of 32 and 64 bits, as these correspond to common machine word sizes. The highest NDCG per column is shown in bold (among hashing-based baselines), and results statistically significantly better than the best hashing-based baseline (DCF), using a paired two-tailed t-test at the 0.05 level, are indicated by an asterisk *. The Amazon results for CCCF are not included, as the released implementation requires excessive amounts of RAM (>128 GB) on this dataset due to the large number of items and users.
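Before turning to the results, a short note on the ranking metric: NDCG@K discounts the rating of each recommended item by the logarithm of its rank and normalizes by the ideal ranking. The sketch below uses the common linear-gain formulation; the exact gain and discount conventions in the paper's evaluation scripts may differ.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain of a ranked list of relevance (rating) values."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))  # ranks 1..k -> log2(rank + 1)
    return float(np.sum(rel / discounts))

def ndcg_at_k(ranked_relevances, k):
    """NDCG@K: DCG of the produced ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# ratings of the items ranked for one user, in the order the model ranked them
print(round(ndcg_at_k([5, 3, 4, 1, 2], k=6), 4))
```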
Our proposed VaHSM-CF significantly outperforms all hashing-based baselines across all datasets by up to 12%. Our VaH-CF without self-masking is second best (up to 2% better than the hashingbased baselines), although on ML-1M the VaH-CF are not statistically significantly better than DCF. This highlights the benefit of modelling hashing-based collaborative filtering with a variational deep learning based framework, which is notably different from existing hashing-based methods based on dicrete matrix factorization solved as relaxed integer problems. Most importantly however, it shows the significant improvement self-masking brings to hashing-based collaborative filtering. We observe that the top 3 baselines (including our VaH-CF) generally obtain similar scores, which highlights the difficulty of improving performance without changing how the hash codes are used (as done by self-masking). The real-valued MF baseline outperforms all the hashing-based approaches, which is to be expected as the representational power of of 32/64 floating point numbers are notably higher than 32/64 bits. However, our VaHSM-CF bridges a large part of the gap between existing hashing-based methods and MF, such that the NDCG difference in most cases is below 0.01. Additionally, the large performance decrease by both the mean or median rounding shows the large quantization error obtained if the hash codes are not learned directly, as done by our and the existing hashing-based approaches. How self-masking influences the convergence rate of the model. Figure 2a shows the convergence rate for the ML-1M dataset with and without self-masking. We see that training with self-masking significantly improves the convergence rate compared to the model without self-masking. Since the time for a single epoch is approximately the same with and without self-masking, we conclude that self-masking not only improves NDCG, but also reduces training time by a very large margin. Figure 2: 2a shows convergence using the validation NDCG@10 on ML-1M, where self-masking significantly speeds up convergence. We observe the same trend on the other datasets (see Appendix A.1). 2b Shows the test NDCG@10 when varying whether hash codes are sampled stochastically or deterministically while training and for evaluation. For example, Det. Eval + Stoc. Train corresponds to deterministic sampling of hash codes for evaluation, while sampling stochastically when training the model. Convergence plots for the remaining datasets are shown in the Appendix, where we observe the same trend. Stochastic or deterministic sampling. We investigate the effect of the sampling strategy for the hash codes (see Eq. 10) during training and evaluation. The sampling can either be deterministic (µ (j) = 0.5) or stochastic (µ (j) is sampled uniformly at random from), and does not have to be the same for training and evaluation. Figure 2b shows the performance for these 4 configurations across all datasets. We see that stochastic training with deterministic evaluation performs the best, while deterministic training and deterministic evaluation perform second best. As expected, stochastic sampling at evaluation performs significantly worse than deterministic sampling, as every item has a very small probability of being sampled such that it has a small Hamming distance to a user, even though it has a low rating (and vice versa for highly rated items). 
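For reference, the sampling strategies compared above can be sketched as follows. This is an illustrative PyTorch-style snippet rather than the released TensorFlow code; `logits` stands for the learned per-bit embedding entries of Eq. 10, and the straight-through trick is the standard surrogate-gradient formulation.

```python
import torch

def sample_hash_code(logits, deterministic=False):
    """Sample an m-bit hash code in {-1, +1} from per-bit logits (Eq. 10 in spirit).

    Stochastic mode draws the threshold mu uniformly from [0, 1] for every bit;
    deterministic mode fixes mu = 0.5 (used at evaluation time for fixed codes).
    A straight-through estimator lets gradients flow through the hard threshold.
    """
    p = torch.sigmoid(logits)                              # per-bit probability of +1
    mu = 0.5 if deterministic else torch.rand_like(p)      # per-bit threshold
    hard = torch.where(p > mu, torch.ones_like(p), -torch.ones_like(p))
    soft = 2.0 * p - 1.0                                   # differentiable surrogate
    return soft + (hard - soft).detach()                   # forward: hard, backward: soft

# toy usage: one user embedding with m = 8 learned logits
logits = torch.randn(8, requires_grad=True)
train_code = sample_hash_code(logits)                      # stochastic sampling for training
eval_code = sample_hash_code(logits, deterministic=True)   # fixed code for evaluation
train_code.sum().backward()                                # gradients reach the logits
```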
4.6 RUNTIME ANALYSIS Self-masking has an additional cost added to the standard Hamming distance, due to the additional AND operation between the user and item hash codes (see Eq. 1 and 2). We now investigate the actual runtime cost associated with this modification. We implement both the Hamming distance and Hamming distance with self-masking efficiently in C on a machine with a 64 bit instruction set. A test environment was made with 64 bit hash codes for 100,000 users and 100,000 items. For each user, the distances were computed to all items using both the Hamming distance and Hamming distance with self-masking. We measure the actual time taken for computing the distances to all items from each user, and report the average over 50 repeated runs. All experiments are run on a single thread 10, with all users and items loaded in RAM. The code was compiled with the highest optimization level, and utilizing all optimization flags applicable to the hardware. We verified the produced assembler code used the efficient popcnt instruction. The mean experiment time was 8.0358s when using the Hamming distance, and 8.3506s when using the Hamming distance with self-masking. Thus, self-masking only adds a runtime overhead of 3.91% compared to using the standard Hamming distance. As in this setup we are only computing the distances, this can be seen as an upper bound of the actual overhead in a complete system, as the remaining operations (e.g., sorting) would be the same with and without self-masking. Thus, this provides a good trade-off compared to the large performance gains it yields. Note that the measured times are for the total 10 10 distance computations, highlighting the scalability of hashing-based methods to datasets of massive scale. For comparison, if the experiment is repeated with computing the dot product of floating point vectors of size 64, then the computation time is 648.5632s, thus close to 80x slower. We proposed an end-to-end trainable variational hashing-based collaborative filtering method, which optimizes hash codes using a novel modification to the Hamming distance, which we call selfmasking. The Hamming distance with self-masking first creates a modified item hash code, by applying an AND operation between the user and item hash codes, before computing the Hamming distance. Intuitively, this can be seen as ignoring user-specified bits when computing the Hamming distance, corresponding to applying a binary importance weight to each bit, but without using more storage and only a very marginal runtime overhead. We verified experimentally that our model outperforms state-of-the-art baselines by up to 12% in NDCG at different cutoffs, across 4 widely used datasets. These gains come at a minimal cost in recommendation time (self-masking only increased computation time by less than 4%). A.1 CONVERGENCE PLOTS Convergence plots for Yelp, Amazon, and ML-10M are shown in Figure 3. We observe a similar trend to ML-1M in Figure 2a, where the self-masking leads to a notably faster rate of convergence.
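To relate the runtime numbers in Section 4.6 back to the two distance functions, the sketch below shows them on bit-packed 64-bit codes. The paper's benchmark is written in C; this Python version only illustrates that self-masking adds a single AND before the usual XOR-and-popcount.

```python
import random

M = 64
MASK = (1 << M) - 1  # keep everything in 64 bits

def hamming(u, i):
    """Plain Hamming distance between two packed 64-bit codes (XOR + popcount)."""
    return ((u ^ i) & MASK).bit_count()          # int.bit_count needs Python >= 3.10

def hamming_self_mask(u, i):
    """Self-masked Hamming distance: one extra AND before the same XOR + popcount.
    A set bit (1) stands for +1 / 'important', a clear bit (0) for -1."""
    return ((u ^ (u & i)) & MASK).bit_count()

user = random.getrandbits(M)
item = random.getrandbits(M)
print(hamming(user, item), hamming_self_mask(user, item))
```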
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rylDzTEKwr
We propose a new variational hashing-based collaborative filtering approach optimized for a novel self-mask variant of the Hamming distance, which outperforms state-of-the-art by up to 12% on NDCG.
Determining the appropriate batch size for mini-batch gradient descent is always time consuming as it often relies on grid search. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit that achieves performance equivalent to that of best fixed batch-size. At each epoch, the RMGD samples a batch size according to a certain probability distribution proportional to a batch being successful in reducing the loss function. Sampling from this probability provides a mechanism for exploring different batch size and exploiting batch sizes with history of success. After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. Experimental show that the RMGD achieves performance better than the best performing single batch size. It is surprising that the RMGD achieves better performance than grid search. Furthermore, it attains this performance in a shorter amount of time than grid search. Gradient descent (GD) is a common optimization algorithm for finding the minimum of the expected loss. It takes iterative steps proportional to the negative gradient of the loss function at each iteration. It is based on the observation that if the multi-variable loss functions f (w) is differentiable at point w, then f (w) decreases fastest in the direction of the negative gradient of f at w, i.e., −∇f (w). The model parameters are updated iteratively in GD as follows: DISPLAYFORM0 where w t, g t, and η t are the model parameters, gradients of f with respect to w, and learning rate at time t respectively. For small enough η t, f (w t) ≥ f (w t+1) and ultimately the sequence of w t will move down toward a local minimum. For a convex loss function, GD is guaranteed to converge to a global minimum with an appropriate learning rate. There are various issues to consider in gradient-based optimization. First, GD can be extremely slow and impractical for large dataset: gradients of all the data have to be evaluated for each iteration. With larger data size, the convergence rate, the computational cost and memory become critical, and special care is required to minimize these factors. Second, for non-convex function which is often encountered in deep learning, GD can get stuck in a local minimum without the hope of escaping. Third, stochastic gradient descent (SGD), which is based on the gradient of a single training sample, has large gradient variance, and it requires a large number of iterations. This ultimately translates to slow convergence. Mini-batch gradient descent (MGD), which is based on the gradient over a small batch of training data, trades off between the robustness of SGD and the stability of GD. There are three advantages for using MGD over GD and SGD: 1) The batching allows both the efficiency of memory usage and implementations; 2) The model update frequency is higher than GD which allows for a more robust convergence avoiding local minimum; 3) MGD requires less iteration per epoch and provides a more stable update than SGD. For these reasons, MGD has been a popular algorithm for machine learning. However, selecting an appropriate batch size is difficult. Various studies suggest that there is a close link between performance and batch size used in;;.There are various guidelines for selecting a batch size but have not been completely practical BID1. Grid search is a popular method but it comes at the expense of search time. 
There are a small number of adaptive MGD algorithms to replace grid search BID3; BID4. These algorithms increase the batch size gradually according to their own criterion. However, these algorithms are based on convex loss function and hard to be applied to deep learning. For non-convex optimization, it is difficult to determine the optimal batch size for best performance. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multi-armed bandit for achieving best performance in grid search by selecting an appropriate batch size at each epoch with a probability defined as a function of its previous success/failure. At each epoch, RMGD samples a batch size from its probability distribution, then uses the selected batch size for mini-batch gradient descent. After obtaining the validation loss at each epoch, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. The benefit of RMGD is that it avoids the need for cumbersome grid search to achieve best performance and that it is simple enough to apply to any optimization algorithm using MGD. The detailed algorithm of RMGD are described in Section 4, and experimental are presented in Section 5. There are only a few published on the topic of batch size. It was empirically shown that SGD converged faster than GD on a large speech recognition database. It was determined that the range of learning rate ing in low test errors was considerably getting smaller as the batch size increased on convolutional neural networks and that small batch size yielded the best test error, while large batch size could not yield comparable low error rate BID2. It was observed that larger batch size are more liable to converge to a sharp local minimum thus leading to poor generalization. It was found that the learning rate and the batch size controlled the trade-off between the depth and width of the minima in MGD Jastrzkebski et al.. A small number of adaptive MGD algorithms have been proposed. BID3 introduced a methodology for using varying sample size in MGD. A relatively small batch size is chosen at the start, then the algorithm chooses a larger batch size when the optimization step does not produce improvement in the target objective function. They assumed that using a small batch size allowed rapid progress in the early stages, while a larger batch size yielded high accuracy. However, this assumption did not corresponded with later researches that reported the degradation of performance with large batch size;;. Another similar adaptive algorithm, which increases the batch size gradually as the iteration proceeded, was done by. The algorithm uses relatively few samples to approximate the gradient, and gradually increase the number of samples with a constant learning rate. It was observed that increasing the batch size is more effective than decaying the learning rate for reducing the number of iterations. However, these increasing batch size algorithms lack flexibility since it is unidirectional. BID0 proposed a dynamic batch size adaptation algorithm. It estimates the variance of the stochastic gradients and adapts the batch size to decrease the variance. However, this algorithm needs to find the gradient variance and its computation depends on the number of model parameters. 
Batch size can also be considered as a hyperparameter, and there have been some proposals based on bandit-based hyperparameter (but not batch size) optimization which maybe applicable for determining the best fixed batch size. introduced a successive halving algorithm. This algorithm uniformly allocates a budget to a set of hyperparameter configurations, evaluates the performance of all configurations, and throws out the worst half until one configuration remains. introduced a novel bandit-based hyperparameter optimization algorithm referred as HYPERBAND. This algorithm considers the optimization problem as a resource allocation problem. The two algorithms mentioned above are not adaptive, and for searching a small hyperparameter space, the two algorithms will not be very effective. The experimental in this paper show that adaptive MGD tends to perform better than fixed MGD.Figure 1: An overall framework of considered resizable mini-batch gradient descent algorithm (RMGD). The RMGD samples a batch size from a probability distribution, and parameters are updated by mini-batch gradient using the selected batch size. Then the probability distribution is updated by checking the validation loss.3 SETUP DISPLAYFORM0 be the set of possible batch size and π = {π k} K k=1 be the probability distribution of batch size where b k, π k, and K are the k th batch size, the probability of b k to be selected, and number of batch sizes respectively. This paper considers algorithm for multi-armed bandit over B according to Algorithm 1. Let w τ ∈ W be the model parameters at epoch τ, andw t be the temporal parameters at sub iteration t. Let J: W → R be the training loss function and let g = ∇J(w) be the gradients of training loss function with respect to the model parameters. η τ is the learning rate at epoch τ. Let: W → R be the validation loss function, and y k ∈ {0, 1} be the cost of choosing the batch size b k. In here, y k = 0 if the validation loss decreases by the selected batch size b k (well-updating) and y k = 1 otherwise (misupdating). The aim of the algorithm is to have low misupdating. For the cost function y k, graduated losses such as hinge loss and percentage of nonnegative changes in validation loss can be variations of 0-1 loss. However, there are no differences in regret bound among them in this setting and it is experimentally confirmed that there are little performance gaps among them. Therefore, this paper introduces the 0-1 loss, which is simple and basic. The resizable mini-batch gradient descent (RMGD) sets the batch sizes as multi arms, and at each epoch it samples one of the batch sizes from probability distribution. Then, it suffers a cost of selecting this batch size. Using the cost, probability distribution is updated. The overall framework of the RMGD algorithm is shown in Figure 1. The RMGD consists of two components: batch size selector and parameter optimizer. The selector samples a batch size from probability distribution and updates the distribution. The optimizer is usual mini-batch gradient. Selector samples a batch size b kτ ∈ B from the probability distribution π τ at each epoch τ where k τ is selected index. Here b k is associated with probability π k. The selected batch size b kτ is applied to optimizer for MGD at each epoch, and the selector gets cost y kτ from optimizer. Then, the selector Input: DISPLAYFORM0: Set of batch sizes π 0 = {1/K, . . ., 1/K}: Prior probability distribution 1: Initialize model parameters w 0 2: for epoch τ = 0, 1, 2,... 
Select batch size b kτ ∈ B from π τ Set temporal parametersw 0 = w τ 5: DISPLAYFORM0 Compute gradient g t = ∇J(w t)7: DISPLAYFORM1 end for DISPLAYFORM2 Update w τ +1 =w T Observe validation loss (w τ +1): DISPLAYFORM0 Get cost y kτ = 0 13: DISPLAYFORM1 Get cost y kτ = 1 15: 16: DISPLAYFORM0 Set temporal probabilityπ i = π DISPLAYFORM1 updates probabilities by randomized weighted majority, DISPLAYFORM2 Optimizer updates the model parameters w. For each epoch, temporal parametersw 0 is set to w τ, and MGD iterates T = m/b kτ 1 times using the selected batch size b kτ where m is the total number of training samples:w DISPLAYFORM3 After T iterations at epoch τ, the model parameters is updated as w τ +1 =w T. Then, the optimizer obtains validation loss, and outputs cost as follows: DISPLAYFORM4 The RMGD samples an appropriate batch size from a probability distribution at each epoch. This probability distribution encourages exploration of different batch size and then later exploits batch size with history of success, which means decreasing validation loss. FIG1 shows an example of training progress of RMGD. The figure represents the probability distribution with respect to epoch. The white dot represents the selected batch size at each epoch. In the early stage of training, 1 x is the least integer that is greater than or equal to x commonly, all batch sizes tend to decrease validation loss: π is uniform. Thus, all batch size have equal probability of being sampled (exploration). In the later stages of training, the probability distribution varies based on success and failure. Thus, better performing batch size gets higher probability to be sampled (exploitation). In this case, 256 is the best performing batch size. The regret bound of the RMGD follows the regret bound derived in. The goal of this algorithm is to have low regret for not selecting the best performing batch size such that DISPLAYFORM0 where the expectation is over the algorithm's randomness of batch size selection and the second term on the right-hand side is the cumulative sum of the cost by the best fixed batch size which minimizes the cumulative sum of the cost. The regret of the RMGD is bounded, DISPLAYFORM1 In particular, setting β = log(K)/(KT), the regret is bounded by 2 K log(K)T, which is sublinear with T. The detailed derivation of regret bound is described in the appendix A. This section describes various experimental on MNIST, CIFAR10, and CIFAR100 dataset. In the experiments, simple convolutional neural networks (CNN) is used for MNIST and'All-CNN-C' Springenberg et al. FORMULA14 is used for CIFAR10 and CIFAR100. The details of the dataset and experimental settings are presented in the appendix B. The validity of the RMGD was assessed by performing image classification on the MNIST dataset using AdamOptimizer and AdagradOptimizer as optimizer. The experiments were repeated 100 times for each algorithm and each optimizer, then the were analyzed for significance. FIG2 shows the probability distribution and the selected batch size with respect to epoch during training for the RMGD. The white dot represents the batch size selected at each epoch. The top figure is the case that small batch size performs better. After epoch 50, batch size 32 gets high probability and is selected more than others. It means that batch size 32 has less misupdating in this case. The gradually increasing batch size algorithm may not perform well in this case. The middle figure is the case that large batch size performs better. 
After epoch 60, batch size 512 gets high probability and selected more than others. The bottom figure shows that the best performing batch size varies with epoch. During epoch from 40 to 55, batch size of 256 performs best, and best performing batch size switches to 128 during epoch from 60 to 70, then better performing batch size backs to 256 after epoch 80. In the , any batch size can be a successful batch size in the later stages without any particular order. The RMGD is more flexible for such situation than the MGD or directional adaptive MGD such as gradually increasing batch size algorithm. FIG3 shows the test accuracy of each algorithm. The error bar is standard error. The number in parenthesis next to MGD represents the batch size used in the MGD.' Basic','sub','super','hinge', and'ratio' in parenthesis next to RMGD represent RMGD settings'batch size set equal to grid search, 0-1 loss','subset of basic, 0-1 loss','superset of basic, 0-1 loss','basic set, hinge loss', and'basic set, percentage of non-negative changes in validation loss', respectively. The left figure is the test accuracy with AdamOptimizer. The right figure is the test accuracy with AdagradOptimizer. Among the MGD algorithms, relatively small batch sizes lead to higher performance than large batch sizes and batch size 64 achieves the best performance in grid search. These Most RMGD settings outperform all fixed MGD algorithms in both case. Although the performance of RMGD is not significantly increased compared to the best MGD, the purpose of this algorithm is not to improve performance, but to ensure that the best performance is achieved without performing a grid search on the batch size. Rather, the improved performance of the RMGD is a surprising . Therefore, the RMGD is said to be valid. There are little performance gap among RMGD settings. The'sub' setting outperforms the'basic' setting in left figure, but the opposite is shown in right figure. Therefore, there is no clear tendency of performance change depending on the size of the batch size set. TAB0 and 2 present iterations and real time for training, mean, maximum, and minimum of test accuracies for each algorithm with AdamOptimizer and AdagradOptimizer respectively. The MGD (total) is the summation of the iterations and real time of whole MGDs for grid search. The RMGD (basic) outperforms best performing MGD and is, also, faster than best performing MGD. Furthermore, it is 8 times faster than grid search in both cases. In the , the RMGD is effective regardless of the optimizer. The CIFAR10 and CIFAR100 dataset were, also, used to assess effectiveness of the RMGD. The experiments were repeated 25 times and 10 times, respectively. In these experiments, all images are whitened and contrast normalized before being input to the network. FIG4 shows the test accuracy for each algorithm. The left figure represents the test accuracy on CIFAR10. In contrast to the MNIST , relatively large batch sizes lead to higher performance than small batch sizes and batch size 256 achieves the best performance in grid search. The right figure represents the test accuracy on CIFAR100 and batch size 128 achieves the best performance in grid search. The on MNIST, CIFAR10 and CIFAR100 indicate that it is difficult to know which batch size is optimal before performing a grid search. Meanwhile, all RMGD settings have again exceeded the best performance of fixed MGD. 
There are no significant performance gaps among RMGD settings, so there is no need to worry about choosing appropriate batch size set or selecting cost function. Table 3 and 4 present the detailed on CIFAR10 and CIFAR100 dataset. The RMGD (basic) is a little slower than single best performing MGD (256 for CIFAR10 and 128 for CIFAR100), however, it was much faster than grid search -about 4.6 times on CIFAR10 and 5.0 times on CIFAR100 faster. Therefore, this , also, show the effectiveness of the RMGD.It is difficult to compare the RMGD with other adaptive batch size algorithm, e.g. coupling adaptive batch sizes (CABS) BID0, directly since the underlying goals are different. While the goal of the RMGD is to reduce the validation loss in terms of generalization performance, the CABS determines the batch size to balance between the gradient variance and computation. However, it is obvious that the RMGD is simpler and easier to implement than any other adaptive algorithm cited in this paper, and comparing the test accuracy between the RMGD and the CABS on the CIFAR10 and CIFAR100 using the same experimental settings with'All-CNN-C' shows that the performance of the RMGD is higher than that of the CABS (CIFAR10: 87.862 ± 0.142, CIFAR100: 60.782 ± 0.421). And again, the purpose of this algorithm is not to outperform other algorithms, but to guarantee that the best performance is reached without grid search. Selecting batch size affects the model quality and training efficiency, and determining the appropriate batch size is time consuming and requires considerable resources as it often relies on grid search. The focus of this paper is to design a simple robust algorithm that is theoretically sound and applicable in many situations. This paper considers a resizable mini-batch gradient descent (RMGD) algorithm based on a multiarmed bandit that achieves equivalent performance to that of best fixed batch-size. At each epoch, the RMGD samples a batch size according to certain probability distribution of a batch being successful in reducing the loss function. Sampling from this probability provides a mechanism for exploring different batch size and exploiting batch sizes with history of success. After obtaining the validation loss at each epoch with the sampled batch size, the probability distribution is updated to incorporate the effectiveness of the sampled batch size. The goal of this algorithm is not to achieve state-of-the-art accuracy but rather to select appropriate batch size which leads low misupdating and performs better. The RMGD essentially assists the learning process to explore the possible domain of the batch size and exploit successful batch size. The benefit of RMGD is that it avoids the need for cumbersome grid search to achieve best performance and that it is simple enough to apply to various field of machine learning including deep learning using MGD. Experimental show that the RMGD achieves the best grid search performance on various dataset, networks, and optimizers. Furthermore, it, obviously, attains this performance in a shorter amount of time than the grid search. Also, there is no need to worry about which batch size set or cost function to choose when setting RMGD. In , the RMGD is effective and flexible mini-batch gradient descent algorithm. In the RMGD algorithm, there are K batch sizes as multi arms with the probability distribution π ∈ S, and at each epoch the algorithm should select one of the batch sizes b kτ. 
Then it receives a cost of selecting this arm, y kτ τ ∈ {0, 1} by testing the validation loss. The vector y τ ∈ {0, 1} K represents the selecting cost for each batch size. The goal of this algorithm is to have low regret for not selecting the best performing batch size. DISPLAYFORM0 where the expectation is over the algorithm's randomness of batch size selection. Let S be the probability simplex, the selecting loss functions be f τ (π) = π, y τ 2 and R: S → R be a regularization function that is often chosen to be strongly convex with respect to some norm || · ||. The algorithm select a batch size with probability P[b kτ] = π kτ τ and therefore f τ (π τ) is the expected cost of the selected batch size at epoch τ. The gradient of the selecting loss function is y τ. However, only one element y kτ τ is known at each epoch. To estimate gradient, random vector z τ is defined as follows: DISPLAYFORM1 and expectation of z τ satisfies, DISPLAYFORM2 The most natural learning rule is to set the probability distribution which has minimal cost on all past epochs. It is referred to as Follow-the-Regularized-Leader (FTRL) in online learning: DISPLAYFORM3 where β is positive hyperparameter. The FTRL has a problem that it requires solving an optimization problem at each epoch. To solve this problem, Online Mirror Descent (OMD) is applied. The OMD computes the current probability distribution iteratively based on a gradient update rule and the previous probability distribution and lies in the update being carried out in a'dual' space, defined by regularizer. This follows from considering ∇R as a mapping from R K onto itself. The OMD relies on Bregman divergence. The Bregman divergence between π andπ with respect to the regularizer R is given as: DISPLAYFORM4 and a Bregman projection ofπ onto simplex S: DISPLAYFORM5 Then the probability distribution is updated by the OMD as follows: DISPLAYFORM6 In general, if R is strongly convex, then ∇R becomes a bijective mapping, thusπ τ +1 can be recovered by the inverse gradient mapping (∇R) −1. Given that R is strongly convex, the OMD and FTRL produce equivalent predictions: DISPLAYFORM7 2 π, y is the inner product between vectors π and y DISPLAYFORM8 Therefore, the regret of the RMGD is bounded, DISPLAYFORM9 In particular, setting β = log(K)/(KT), the regret is bounded by 2 K log(K)T, which is sublinear with T. MNIST is a dataset of handwritten digits that is commonly used for image classification. Each sample is a black and white image and 28 × 28 in size. The MNIST is split into three parts: 55,000 samples for training, 5,000 samples for validation, and 10,000 samples for test. CIFAR10 consists of 60,000 32 × 32 color images in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6,000 images per class. The CIFAR10 is split into three parts: 45,000 samples for training, 5,000 samples for validation, and 10,000 samples for test. CIFAR100 consists of 60,000 32 × 32 color images in 100 classes. The CIFAR100 is split into three parts: 45,000 samples for training, 5,000 samples for validation, and 10,000 samples for test. The simple CNN consists of two convolution layers with 5 × 5 filter and 1 × 1 stride, two max pooling layers with 2 × 2 kernel and 2 × 2 stride, single fully-connected layer, and softmax classifier. Description of the'All-CNN-C' is provided in TAB3. For MNIST, AdamOptimizer with η = 10 −4 and AdagradOptimizer with η = 0.1 are used as optimizer. 
The basic batch size set is B = {16, 32, 64, 128, 256, 512}, the subset of basic is B− = {16, 64, 256}, and the superset of basic is B+ = {16, 24, 32, 48, 64, 96, 128, 192, 256, 384, 512}. The model is trained for a total of 100 epochs. For CIFAR10 and CIFAR100, MomentumOptimizer with fixed momentum of 0.9 is used as optimizer. The learning rate η_k is scaled up proportionately to the batch size (η_k = 0.05 · b_k/256) and decayed by a schedule S = {200, 250, 300}, in which η_k is multiplied by a fixed multiplier of 0.1 after 200, 250, and 300 epochs respectively. The model is trained for a total of 350 epochs. Dropout is applied to the input image as well as after each convolution layer with stride 2. The dropout probabilities are 20% for dropping out inputs and 50% otherwise. The model is regularized with weight decay λ = 0.001. The basic batch size set is B = {16, 32, 64, 128, 256}, the subset of basic is B− = {16, 64, 256}, and the superset of basic is B+ = {16, 24, 32, 48, 64, 96, 128, 192, 256}. For all experiments, the rectified linear unit (ReLU) is used as the activation function. For RMGD, β is set to √(log K/(KT)), i.e., √(log 6/(6 × 100)) ≈ 0.055 for MNIST and √(log 5/(5 × 350)) ≈ 0.030 for CIFAR10 and CIFAR100. The basic batch size selecting cost is the 0-1 loss, the hinge loss is max{0, ℓ_τ − ℓ_{τ−1}}, and the ratio loss is max{0, (ℓ_τ − ℓ_{τ−1})/ℓ_{τ−1}}, where ℓ_τ denotes the validation loss at epoch τ. [TAB3, final layers of All-CNN-C: 1 × 1 conv. with 10 or 100 ReLU channels, stride 1; averaging pool over the 6 × 6 spatial dimensions; 10-way or 100-way softmax.]
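As a compact recap of Algorithm 1, the sketch below implements the RMGD selector loop: sample a batch size from π, run one epoch of MGD with it, turn the change in validation loss into a 0-1 cost, and update π multiplicatively. The `train_one_epoch` and `validation_loss` functions are placeholders, and the importance-weighted exponential update is one reasonable reading of the randomized weighted majority step.

```python
import math
import random

def rmgd_train(batch_sizes, num_epochs, train_one_epoch, validation_loss, beta):
    """Sketch of the RMGD selector in Algorithm 1.

    train_one_epoch(b) is assumed to run one epoch of MGD with batch size b and
    update the model in place; validation_loss() returns the current validation loss.
    """
    K = len(batch_sizes)
    pi = [1.0 / K] * K                                   # uniform prior over batch sizes
    prev_loss = validation_loss()
    for epoch in range(num_epochs):
        k = random.choices(range(K), weights=pi)[0]      # explore early, exploit later
        train_one_epoch(batch_sizes[k])
        loss = validation_loss()
        cost = 0.0 if loss < prev_loss else 1.0          # 0-1 selecting cost y_k
        prev_loss = loss
        # randomized weighted majority / exponentiated-gradient update with an
        # importance-weighted cost estimate for the selected arm only
        pi[k] *= math.exp(-beta * cost / pi[k])
        total = sum(pi)
        pi = [p / total for p in pi]                     # project back onto the simplex
    return pi
```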
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
H1lGHsA9KX
An optimization algorithm that explores various batch sizes according to a probability distribution and automatically exploits the successful batch sizes that minimize validation loss.
Knowledge graph has gained increasing attention in recent years for its successful applications of numerous tasks. Despite the rapid growth of knowledge construction, knowledge graphs still suffer from severe incompletion and inevitably involve various kinds of errors. Several attempts have been made to complete knowledge graph as well as to detect noise. However, none of them considers unifying these two tasks even though they are inter-dependent and can mutually boost the performance of each other. In this paper, we proposed to jointly combine these two tasks with a unified Generative Adversarial Networks (GAN) framework to learn noise-aware knowledge graph embedding. Extensive experiments have demonstrated that our approach is superior to existing state-of-the-art algorithms both in regard to knowledge graph completion and error detection. Knowledge graph, as a well-structured effective representation of knowledge, plays a pivotal role in many real-world applications such as web search , question answering , and personalized recommendation . It is constructed by extracting information as the form of triple from unstructured text using information extraction systems. Each triple (h, r, t) represents a relation r between a head entity h and a tail entity t. Recent years have witnessed extensive construction of knowledge graph, such as Freebase , DBPedia , and YAGO . However, these knowledge graphs suffer from severe sparsity as we can never collect all the information. Moreover, due to the huge volumes of web resources, the task to construct knowledge graph usually involves automatic mechanisms to avoid human supervision and thus inevitably introduces many kinds of errors, including ambiguous, conflicting and erroneous and redundant information. To address these shortcomings, various methods for knowledge graph refinement have been proposed, whose goals can be arguably classified into two categories: knowledge graph completion, the task to add missing knowledge to the knowledge graph, and error detection, the task to identify incorrect triples in the knowledge graph. Knowledge graph embedding (KGE) currently hold the state-of-the-art in knowledge graph completion for their promising . Nonetheless, they highly rely on high quality training data and thus are lack of robustness to noise . Error detection in knowledge graph is a challenging problem due to the difficulty of obtaining noisy data. Reasoning based methods are the most widely used methods for this task . Without the guidance of noisy data, they detect errors by performing reasoning over the knowledge graph to determine the correctness of a triple. A rich ontology information is required for such kind of methods and thus impede its application for real-world knowledge graphs. Existing works consider knowledge graph embedding and error detection independently whereas these two tasks are inter-dependent and can greatly influence each other. On one hand, error detection model is extremely useful to prepare reliable data for knowledge graph embedding. On the other hand, high quality embedding learned by KGE model provides a basis for reasoning to identify noisy data. Inspired by the recent advances of generative adversarial deep models , in this paper, we proposed to jointly combine these two tasks with a unified GAN framework, known as NoiGAN, to learn noise-aware knowledge graph embedding. 
In general, NoiGAN consists of two main components, a noise-aware KGE model to learn robuster representation of knowledge and an adversarial learning framework for error detection. During the training, noiseaware KGE model takes the confidence score learned by GAN as guidance to eliminate the noisy data from the learning process whereas the GAN requires that KGE model continuously provides high quality embedding as well as credible positive examples to model the discriminator and the generator. Cooperation between the two components drives both to improve their capability. The main contributions of this paper are summarized as follows: • We propose a unified generative adversarial framework NoiGAN, to learn noise-aware knowledge graph embedding. Under the framework, the KGE model and error detection model could benefit from each other: the error detection model prepares reliable data for KGE model to improve the quality of embedding it learns, while the KGE model provides a promising reasoning model for the error detection model to better distinguish noisy triples from the correct one. • Our proposed framework can be easily generalized to various KGE models to enhance their ability in dealing with noisy knowledge graph. • We experimentally demonstrate that our new algorithm is superior to existing state-of-the-art algorithms. The KGE model and GAN can alternately and iteratively boost performance in terms of both knowledge graph completion and noise detection. Embedding based methods currently hold the state-of-the-art in knowledge graph completion for their promising (b). They aim to capture the similarity of entities by embedding entities and relations into continuous low-dimensional vectors. Existing methods can be roughly divided into two categories: translational distance models and semantic matching models. Translational distance models measure the plausibility of a fact as the distance between two entities after a translation carried out by the relation. TransE , TransH and TransR are the representative approaches in this category. Semantic matching models measure plausibility of facts by matching latent semantics of entities and relations embodied in their vector space representations. The typical models include RESCAL , DistMult and ComplEx . To optimize the KGE model,negative sampling is usually required to minimize the margin based ranking loss. A conventional method to construct negative samples is randomly sampling. However, negative samples generated through a random mode are often too easy to be discriminated from positive facts and thus make little contribute towards the training. Some recent works proposed to incorporate GAN for better negative sampling to improve the quality of embeddings . Nonetheless, none of the above methods has taken the potential noisy data into consideration, which leads to their sensitivity to unreliable data . In this paper, we proposed a novel technique to enable current embedding models to cope with noisy data. Due to lack of noisy data samples, error detection in knowledge graph is a challenging task. Existing methods can be either ontology based or anomaly detection based. Ontology based methods address this problem by exploring additional ontology information. A larger number of ontology reasoners are developed to utilize logic programming to derive uncovering contradictions among observed facts (; ;). Rich ontology information is required for such kind of methods and thus impede its application to real-world knowledge graphs. 
Another kind of methods is anomaly detection based methods . The main drawback of anomaly detection based methods are that they do not necessarily identify errors, but also natural outliers, which will compromise the objectivity of its . More recently, a novel confidence-aware framework proposed to incorporate triple confidence into KGE model to detect noises while learning knowledge representations simultaneously. However, it measures the triple confidence merely based on the how well the triple fits the model, which is easily affected by model bias. In this section, we illustrate our proposed framework, NoiGAN, in details. The goal of our proposed framework is to learn noise-aware KG embedding with Generative Adversarial Networks (GAN). As shown in Figure 1, it consists of three components: (i) Noise aware KGE model, which incorporates a confidence score C(h, r, t) into the KGE model to isolate the impact of noise over embedding vectors. Meanwhile, it provides high quality embedding to model discriminator and generator (Section 3.1); (ii) Triple generator G((h, r, t)|(h, r, t); θ G ), which learns to generate the most likely triples to be the noisy triples by corrupting the correct triple (h, r, t). It aims at providing reliable noisy data for the discriminator as well as noise aware KGE model (Section 3.2.1); (iii) Discriminator D((h, r, t); θ D ), which tries to distinguish correct triples from noisy triples, and assigns a confidence score for each triple fact to describe the correctness of it. The discriminator is used to determine confidence score C(h, r, t) in the noise aware KGE model. It also naturally provides guidance to the generator to produce higher quality noisy data (Section 3.2.2). Before further discussing the detail of the algorithm, we first give a formal definition to knowledge graph. A knowledge graph, denoted by G = {E, R, T}, consists of a set of entities E, a set of relations R and a set of of observed facts T = {(h, r, t) | h, t ∈ E, r ∈ R}. A fact is represented by a triple (h, r, t), where h, t and r denote head entity, tail entity, and relation, respectively. To learn a more robust representation for noisy knowledge graph, we propose a noise-aware KGE model to eliminate the impact of noise over embedding vectors by incorporating confidence score into KGE model. It can be easily adapted to any KGE models. For example, we can follow TransE to represent entities and relations. Given a fact (h, r, t), TransE aims to learn corresponding low dimensional vectors h, r and t in the same space for h, r and t, so that the distance between h + r and t is minimized. The scoring function of TransE is then defined as f r (h, t) = h + r − t 2 1. To optimize the KGE model, we use a loss function similar to the negative sampling loss in according to . where γ is the margin, σ is the sigmoid function, T represents the observed facts from the knowledge graph and (h, r, t) is the negative sample for triple (h, r, t). To reduce the effects of randomness, we sample multiple negative triples for each observed fact. We denote the negative triples set of a triple (h, r, t) as N (h, r, t). Negative triples set is constructed by replacing the head or tail entity of the observed triples with an entity sampled randomly from entity set E: A vast majority of existing embedding models assume that all triple facts hold true to knowledge graph, which is inappropriate. 
In fact, knowledge graph contains many kinds of errors (e.g., ambiguous, conflicting, erroneous, redundant) due to the automatic construction process. To isolate the impact of noise over embedding vectors, following , we adopted the concept confidence to describe whether a triple fact is noisy or not. In , confidence of a triple is measured by how well the triple fits to the KGE model. A central issue with such measurement is that it is easily affected by model bias and thus lead to unreliability. Unlike , we learned the confidence scores using a discriminator, which can be more impartial and promising. We will discuss how to obtain such discriminator in Section 3.2.2. With the introduction of confidence, we can eliminate the noisy data from the learning process of KGE model. where C(h, r, t) is the confidence score assigned to each positive triple (h, r, t). To be specific, C(h, r, t) can be both a binary variable or a soft value from the interval. We denote the previous one as the hard version and the later one as the soft version. After introducing our noise aware KGE model, we now show how to identify the error in knowledge graph to enhance embedding quality. Typically label information about noisy triples is costly and labor intensive to obtain, which is the central challenge to the task of error detection. Without the guidance of noisy data, previous works propose to leverage ontology information to exploit reasoning for inconsistency detection. However, these ontology information can also be difficult to gain. To address this problem, inspired by recent advance of GAN, we propose a novel end-to-end adversarial learning framework to detect noise in knowledge graph. The proposed GAN cooperates with the noise aware KGE model in an interactive way. GAN utilizes the embedding learned by the noise aware KGE model to distinguish true triples and noisy triples. Meanwhile the noise-aware KGE model requires GAN to learn a confidence score so as to eliminate the impact of noise over embedding vectors. Our adversarial learning framework consists of two components, a generator and a discriminator. Unlike traditional GAN whose ultimate goal is to train a good generator, we aim to produce a good discriminator to distinguish noisy triples from true triples. To address noisy data deficiency issue, the generator and the discriminator act as two players in a minimax game: the generator learns to continually generate "more confusing" triples and thus provides better quality noisy data for the discriminator, whereas the discriminator is trained to draw a clear distinction between the true triples and the noisy triples generated by its opponent generator. Formally, it can be formulated as follows: where T represents the true triples set. To get rid of the noise in knowledge graph, we construct T with top 10% triples which fit the KGE the best. We discuss the detailed implementation of the generator and discriminator as follows. To identify the error in knowledge graph, first we have to achieve reliable examples of noisy data. Since such data is costly and labor intensive to obtain, we instead introduce a generator to produce them. The main goal of the generator is to generator high-quality fake triples that can fool discriminators, which aims at bringing a better classifier to identify error in knowledge graph. 
In addition, to simultaneously make KGE model aware of the existing noisy data, we also explicitly incorporate the noisy triples generated by the generator as negative samples to train the noise aware KGE model. Note that, the generator takes embedding vectors learned by the noise-aware KGE model as input. Given a true triples (h, r, t), the generator aims to select the most likely triples to be the noisy triples (h, r, t) from its negative samples candidate set N (h, r, t). To achieve this goal, a two-layer fully-connected neural network is introduced to model the probability distribution over the candidate negative samples N (h, r, t). In particular, this MLP uses ReLU as the activation function for the first layer and adds Softmax to the second layer. It takes the concatenation of the embedding vectors h, r and t of triple (h, r, t) as input and output the probability whether the triple (h, r, t) is the most likely noisy triple. As the output of the generator is a discrete triple, training for the generator is a little tricky due to data indifferentiability issue. A common solution is to use policy gradient based reinforcement learning instead (a). In particular, for a generated triple, the reward from discriminator is defined as f D (h, r, t), which is exactly the probability of the triple (h, r, t) to be true. Thus, in order to fool the discriminator, the generator is trained to maximize the expected reward as follows: At the very beginning, because of lack of guidance, the generator generates "noise" by random sampling. However, such "noise" is too easy for discriminator to distinguish and in a low reward. By learning to maximize the reward, the generator continually learns better probability distribution of the candidate negative samples and keep generating "more confusing" triples to improve the capacity of the discriminator. Moreover, this generator can also be used to produce high quality negative samples for the noise aware KGE model. Discriminator aims to distinguish the true triples from the noisy triples. Both positive and negative examples are required to train the discriminator. Generator learns to provide reliable negative examples for the discriminator while observed triples in knowledge graph can be regarded as positive examples. However, as knowledge graph is noisy, considering all observed triples as positive samples will inevitably introduce noise. We therefore utilize an interesting observation, when noisy data exist, deep learning models tend to memorize these noise in the end, which leads to the poor generalization performance . Specifically, we take top 10% triples which fit the KGE the best (with lowest f r (h, t)) as positive training examples. Considering discriminator is essentially a binary classifier, the objective function of the discriminator can be formulated as minimizing the following cross entropy loss: where f D (h, r, t) = σ(MLP(h, r, t)) where MLP is a two-layer neural network with ReLU as activation function and σ(x) = 1/(1 + exp(−x)) is the standard sigmoid function. If we use TransE as our KGE model, the MLP takes the vector h + r − t as input and output the probability of the triple (h, r, t) being true. f D (h, r, t) will later be used to define C(h, r, t) for each positive triple (h, r, t) in knowledge graph. To be specific, C(h, r, t) can be either a binary value or a soft value. When it is a binary variable, it represents the classification regard to whether a triple is a true triple. 
When it takes a soft value, it indicates the probability of the triple (h, r, t) being true. We denote the previous one as a hard version of our proposed model and the later one as the soft version. Require: Observed triples in knowledge bases T 1: Initialize C(h, r, t) as 1 for all triples. 2: Train noise aware KGE model with random negative sampling until convergence. 3: for n = 1: N do Take top 10% triples which fit the noise aware KGE the best as positive examples. Train GAN with the selected positive examples until convergence. 6: Update C(h, r, t) according to discriminator of GAN. Train noise aware KGE model with negative samples generated by the generator until convergence. 8 We start by training the noise aware KGE model. We initiate the confidence score C(h, r, t) as 1 for all triple facts and pretrain the noise aware KGE model with random negative sampling to gain reliable embedding vector. With these embedding vectors, we start the training of GAN model from the generator. The generator learns to understand the probability distribution of the candidate negative samples so as to select the most likely noisy data. In particular, the discriminator gives it a reward as guidance. Once the generator finishes training, it can be used to prepare noisy data for the discriminator. In addition, we take top 10% triples facts which fit the noise aware KGE the best as credible triple facts. With noisy data as negative examples and triple facts as positive examples, discriminator learns a classifier to distinguish the positive triples from the negative triples. After we obtain the well trained GAN model, given the confidence score assigned by the discriminator and the negative samples generated by the generator, we retrain the noise aware KGE model and update the embedding vectors. The process repeats until convergence. The embeddings learned by the noise aware KGE model is regarded as our final representation for entities and relations. Datasets. We evaluate our NoiGAN on five benchmark datasets, including FB15K, WN18,FB15K-237, WN18RR and YAGO3-10. These benchmark datasets benefit from human curation that in highly reliable facts. To simulate the real-world knowledge graphs extracted automatically from unstructured text data, we modified these benchmark datasets to include noisy triples. Since all kinds of noise might be contained while we construct knowledge graphs, our approach to introducing noise is to substitute the true head entity or tail entity with any randomly selected entity. Following this approach, we construct five KGs based on each benchmark dataset with noisy triples to be different ratio of (e.g., 40%, 70% and 100%) of true triples. All noisy datasets share the same entities, relations, validation and test sets with the original benchmark dataset, with all generated noisy triples fused into the original training set. The statistics of these knowledge graphs are summarized in Table 1. Baselines. NoiGAN is compared with the following state-of-the-art algorithm, including KGE models (e.g., TransE ,DistMult and RotateE ), robust KGE models (e.g., attention based method ), noise aware KGE models (e.g., CKRL ) and KGE models with GAN (e.g., KBGAN ). In particular, there are three kinds of triple confidences defined in CKRL . In our paper, we take CKRL with local triple confidence, called CKRL (LT), as baseline. To fairly compare different methods, the same loss function and negative sampling strategies are employed for all models. Experimental Setup of NoiGAN. 
We evaluate two versions of our NoiGAN model as mentioned in Section 3.2.2. The soft version is denoted as NoiGAN (soft) while the hard version is denoted as NoiGAN (hard). To show that our NoiGAN can be easily generalized to various KGE models, we instantiate it with both the TransE and RotatE score functions, and tune hyperparameters by a grid search strategy. The range of the different parameters is set as follows: embedding dimension k ∈ {250, 500, 1000}, batch size b ∈ {256, 512, 1024}, and fixed margin γ ∈ {9, 12, 24}. Afterwards, we compare the best results of the different methods. Both the entity embeddings and the relation embeddings are uniformly initialized and no regularization is imposed on them. As mentioned in Section 3.2, we implement both the discriminator and the generator as simple two-layer fully connected neural networks. The size of the hidden states for each of the two networks is set to 10. To verify the capability of NoiGAN in distinguishing noise in knowledge graphs, we evaluate NoiGAN in terms of classification performance. To be specific, we classify the training data by determining whether each triple is true. For NoiGAN (hard), we can directly utilize the discriminator to classify noise. For NoiGAN (soft), the discriminator only assigns a soft score between 0 and 1 to each triple, indicating the probability of the triple being true; we therefore classify triples with C(h, r, t) > 0.5 as true triples and the remaining ones as noise. The experiments are conducted on three benchmark datasets, FB15K, WN18 and YAGO3-10, with noisy triples amounting to 10%, 20% and 40% of the true triples. Although FB15K and WN18 suffer from test triple leakage into the training set, we report results over the training data, which are not affected by this issue. As we make similar observations on NoiGAN-TransE and NoiGAN-RotatE, we only report the experimental results w.r.t. NoiGAN-TransE to save space. Evaluation Metric. Two classification evaluation metrics are adopted: AUC, and a special measurement defined as the proportion of actual noises that are correctly identified. The latter is calculated as TN / (TN + FP), where TN represents true negatives and FP represents false positives. Results. The results w.r.t. NoiGAN-TransE can be found in Table 2. We compare against CKRL (LT) as it is the only baseline method that is able to detect noise in the training dataset. We can observe that: (1) our model consistently achieves the best performance over all datasets in all cases, which demonstrates the capability of our models in detecting noise; (2) NoiGAN-TransE (hard) shows significant improvements in noise detection compared to NoiGAN-TransE (soft). Considering that NoiGAN-TransE (soft) merely assigns low confidence scores to noisy triples while NoiGAN-TransE (hard) completely eliminates these data, this further validates that our method can classify noisy data accurately. (3) The baselines tend to classify all triples into the same class, which shows their inability to detect noise. In addition to its promising results on noise detection, our approach is also superior to existing state-of-the-art algorithms in terms of the quality of the learned embeddings. We conduct experiments on three benchmark datasets, FB15K-237, WN18RR and YAGO3-10, with noisy triples amounting to 40%, 70% and 100% of the true triples. Evaluation Metric. We mask the head or tail entity of each test triple, and require each method to predict the masked entity. During evaluation, we use the filtered setting.
The Hits@K (H@K) and Mean Reciprocal Rank (MRR) are adopted as the evaluation metrics. Results. Results of reasoning are shown in Table 3. In particular, as the attention based method exceeds the memory capacity of our machines on the YAGO3-10 dataset, only its results on the FB15K-237 and WN18RR datasets are reported. Note that TransE, CKRL (LT) and NoiGAN-TransE share the same score function (TransE), so they are comparable with each other. Similarly, RotatE and NoiGAN-RotatE share the same score function (RotatE) and thus they are comparable with each other. We make the following observations: (1) NoiGAN consistently outperforms the baseline methods that share the same score function with it on noisy datasets. The performance gain is especially significant on datasets with 100% noise. (2) Either NoiGAN-TransE or NoiGAN-RotatE achieves the best performance on almost all noisy datasets. In particular, the hard version of NoiGAN performs better than the soft version in most cases. (3) Even though the attention based method claims to ensure robust performance, the results show that our NoiGAN significantly outperforms it in terms of robustness. (4) If we do not introduce noise, the performance of the NoiGAN models is almost the same as that of their variants. In addition, the improvement introduced by NoiGAN becomes more significant as the noise rate in the KGs rises. This further proves the robustness of NoiGAN. To demonstrate the power of the discriminator in distinguishing noisy triples, we present some triples and their confidence scores in Table 4. We conduct experiments over FB15K with 40% noise using the NoiGAN-TransE (hard) model. All hyperparameters are set as described in Section 4.1. To be clear, the distance of a triple is calculated as ||h + r − t||_1, while the scores of the triples are learned by the discriminator. As shown in Table 4, we can see that TransE performs poorly at detecting some "tricky" errors, such as logic errors (e.g., (head teacher, /people/profession/people with this profession, Michael Dobson)) and grammar errors (e.g., (McDonalds, /dining/restaurant/cuisine, Joe Walsh)). To our surprise, NoiGAN (hard) has the ability to detect both. In this paper, we propose a novel framework, NoiGAN, that jointly combines the tasks of knowledge graph completion and error detection for noise-aware knowledge graph embedding learning. It consists of two main components, a noise-aware KGE model for knowledge graph completion and an adversarial learning framework for error detection. Under this framework, the noise-aware KGE model and the adversarial learning framework can alternately and iteratively boost each other's performance. Extensive experiments show the superiority of our proposed NoiGAN both in regard to knowledge graph completion and error detection.
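To make the filtered ranking protocol used above concrete, here is a small sketch of how MRR and Hits@K are typically computed for tail prediction (head prediction is symmetric). The `score_fn` interface and the structure of `known_triples` are assumptions for illustration; higher scores are taken to mean more plausible triples.

```python
import numpy as np

def filtered_rank(score_fn, h, r, t, num_entities, known_triples):
    """Rank the true tail t among all candidate tails, ignoring candidates that
    are themselves known true triples (the 'filtered' setting)."""
    scores = np.array([score_fn(h, r, cand) for cand in range(num_entities)], dtype=float)
    for cand in range(num_entities):
        if cand != t and (h, r, cand) in known_triples:
            scores[cand] = -np.inf          # filter out other true triples
    return int((scores > scores[t]).sum()) + 1   # 1-based rank of the true tail

def mrr_and_hits(ranks, k=10):
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), (ranks <= k).mean()   # MRR, Hits@k
```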
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rkgTdkrtPH
We proposed a unified Generative Adversarial Networks (GAN) framework to learn noise-aware knowledge graph embedding.
Energy-based models (EBMs), a.k.a. un-normalized models, have had recent successes in continuous spaces. However, they have not been successfully applied to model text sequences. While decreasing the energy at training samples is straightforward, mining (negative) samples where the energy should be increased is difficult. In part, this is because standard gradient-based methods are not readily applicable when the input is high-dimensional and discrete. Here, we side-step this issue by generating negatives using pre-trained auto-regressive language models. The EBM then works in the residual of the language model and is trained to discriminate real text from text generated by the auto-regressive models. We investigate the generalization ability of residual EBMs, a pre-requisite for using them in other applications. We extensively analyze generalization for the task of classifying whether an input is machine or human generated, a natural task given the training loss and how we mine negatives. Overall, we observe that EBMs can generalize remarkably well to changes in the architecture of the generators producing negatives. However, EBMs exhibit more sensitivity to the training set used by such generators. Energy-based models (EBMs) have a long history in machine learning. Their appeal stems from the minimal assumptions they make about the generative process of the data. Unlike directed or auto-regressive models, which are defined in terms of a sequence of conditional distributions, EBMs are defined in terms of a single scalar energy function, representing the joint compatibility between all input variables. EBMs are a strict generalization of probability models, as the energy function need not be normalized or even have a convergent integral. Training an EBM consists of decreasing the energy function at the observed training data points (a.k.a. positives), while increasing it at other data points (a.k.a. negatives). Different learning strategies mainly differ in how negatives are mined. Some find negatives by gradient descent, or using Monte Carlo methods like Gibbs sampling and hybrid Monte Carlo, which enable the loss to approximate maximum likelihood training. Other approaches instead use implicit negatives, by enforcing global constraints on the energy function, like sparsity of the internal representation, for instance. GANs can be interpreted as a particular form of EBM where the negatives are generated by a learned model. While there are works exploring the use of EBMs for modeling images, they have not been successfully applied to text. One reason is that text consists of sequences of discrete variables, which makes the energy function not differentiable with respect to its inputs. Therefore, it is not possible to mine negatives using gradient-based methods. Other approaches to mine negatives are also not immediately applicable or may be too inefficient to work at scale. In this work, we start from the observation that current large auto-regressive locally-normalized language models are already strong, and therefore, it may be beneficial to use them to constrain the search space of negatives. We propose to learn in the residual space of a pre-trained language model (LM), which we accomplish by using such an LM to generate negatives for the EBM.
Given a dataset of positives and pre-generated negatives, the EBM can be trained using either a binary cross-entropy loss or a ranking loss, to teach the model to assign a lower energy to true human generated text than to the text generated by the pre-trained LM. The question we ask in this work is whether such an EBM can generalize well. Understanding this is important for two reasons. First, this generalization is a prerequisite for using residual EBMs for modeling text. Second, in our setting, this generalization question is equivalent to the question of whether it is possible for a learned model (the energy function) to discriminate real text from text generated by an auto-regressive model. Discriminating real vs. machine-generated text is an important task on its own that has recently gained a lot of attention. Our contribution is an extensive study of the generalization ability of such residual EBMs, or in other words, the generalization ability of models trained to detect real text from machine generated text. In particular, we assess how robust the energy function is to changes in the architecture of the generator and to changes in the data used to train the generator. The overall finding is that the energy function is remarkably robust, and the bigger the model and the longer the generation the better its performance. Moreover, the energy function is robust to changes in the architecture of the LM producing negatives at test time. However, it is sensitive to the training dataset of the test generator. Our work can be interpreted as a particular instance of EBMs where negatives are produced by a pre-trained language model as opposed to the energy function itself. Learning a generator and a discriminator relates also to Generative Adversarial Networks, except that in our case the generator is trained beforehand. Since the discriminator is learned after the generator has been trained, it learns from the residual error of the generator, and therefore, our training procedure is a particular instance of a "cascade" model and "boosting". Using a separately trained scoring function to evaluate and rank candidate outputs has a long history which dates back to work on parsing and machine translation. In that work however, the goal was to improve a weak generator by employing a linear reranker taking as input relatively few hand-designed features. The approach has been recently re-discovered in the context of dialogue modeling by , but here negatives are randomly chosen next utterances from the training dataset. Several recent works have studied whether machine generations can be detected automatically, but they do not study how these findings generalize to settings where generator architectures and corpora are different between training and test time. For example, (GROVER) assume that the generator is known and apply only slight fine-tuning in order to train the energy function. (GLTR) assume knowledge of the generator; these authors say "We further hypothesize that these methods generalize to black-box scenarios, as long as the fake text follows a similar sampling assumption and is generated by a large language model"; our work answers precisely this question, provides a rigorous experimental protocol and quantitative results. Finally, there has been a release of a training dataset of the GPT-2 language model generations for the purpose of training discriminators capable of detecting machine generated text.
While we share the same motivation, our work is a much broader investigation on the topic. We assess generalization of several discriminator architectures to not just one but several kinds of generators and corpora used for training (including GPT-2). In this section, we describe how we train the energy based model and how we mine negatives. Our goal is to learn an energy function E(w_1, ..., w_n | c; θ) ∈ R that scores the joint compatibility of an input sequence of tokens (w_1, ..., w_n) given some context c and a set of parameters θ. The context depends on the application: it could be the preceding text, some keywords, a bag of words, a title, etc. In this work, for simplicity, c is an affix from which we condition the generation. The goal of training is to assign to golden sequences, i.e. sequences taken from a dataset of human generated text, lower energy than to other sequences. We parameterize the energy function as a neural network, using the architectures described in §4.3. At training time, the energy function can be trained using a variety of different losses. In this work, we consider two choices: the binary cross-entropy loss and the ranking loss. As the findings are similar, unless otherwise specified we will refer to the binary cross-entropy loss, and report results with the ranking loss in Appendix C. Let x^+ be a positive sample taken from the training set, consisting of a sequence of n tokens given some context c. Let (x^-_1, ..., x^-_k) be a set of k negative samples, each derived from the same context c as above, all containing at least some machine generated tokens. We train our energy function using the (per-sample) binary cross-entropy loss (Eq. 1): L_BCE = − log σ(−E(x^+ | c; θ)) − log(1 − σ(−E(x̂^− | c; θ))), where x̂^− is the most offending negative, i.e. its index is the solution of arg min_{i=1..k} E(x^-_i | c; θ), and σ(u) = 1/(1 + exp(−u)) is the sigmoid function. The most critical component of training an energy based model is the method used to generate negatives, i.e. inputs where the energy should score high (unlikely inputs). In settings with continuous variables, researchers have suggested MCMC or Langevin dynamics. In this work instead, we use the fact that modern auto-regressive models for text are already quite good, and we use them for negative sampling. We train two auto-regressive language models, a left-to-right one which will be used to produce suffixes assuming the prefix is the context, and a right-to-left one which will be used to generate prefixes assuming the suffix is the context. The negatives are generated by top-k sampling, setting k equal to 10. Given a trained language model (for instance, a left-to-right autoregressive model) and given a positive example x^+ = (w_{i+1}, ..., w_{i+n}) for a given context c = (w_1, ..., w_i), a negative can be written as x^- = (ŵ_{i+1}, ..., ŵ_{i+n}), where w_j for j ∈ [1, i + n] are ground truth words, the first i of them belonging to the common context, and ŵ_j for j ∈ [i + 1, i + n] are words generated by the language model conditioned on c. In the same way, we can sample a negative with a right-to-left model, yielding x^- = (ŵ_1, ..., ŵ_n) for a given context c = (w_{n+1}, ..., w_{n+i}). In this section we first describe the datasets and preprocessing used, provide architecture details for both generators and scoring functions, and finally introduce the evaluation settings. We train models on three corpora coming from different domains.
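As an illustration of this training setup, here is a small sketch of the per-sample loss with the most offending negative, and of top-k negative sampling from a pretrained left-to-right language model. The `lm` callable returning next-token logits is an assumed interface rather than a specific library API, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def ebm_bce_loss(energy_pos, energies_neg):
    """Binary cross-entropy against the most offending negative (lowest energy).
    energy_pos: scalar tensor E(x+ | c); energies_neg: (k,) tensor of negative energies."""
    hardest = energies_neg.min()
    # -log sigma(-E(x+))  and  -log(1 - sigma(-E(x_hat-))) = -log sigma(E(x_hat-))
    return F.softplus(energy_pos) + F.softplus(-hardest)

def topk_negative(lm, context_ids, length, k=10):
    """Sample a continuation with top-k sampling (k = 10 in the text) from a pretrained LM.
    Assumes lm(ids) returns a (seq_len, vocab) tensor of next-token logits."""
    ids = list(context_ids)
    for _ in range(length):
        logits = lm(torch.tensor(ids))[-1]
        top = torch.topk(logits, k)
        choice = torch.multinomial(torch.softmax(top.values, dim=-1), 1)
        ids.append(int(top.indices[choice]))
    return ids[len(context_ids):]   # the generated negative continuation
```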
We report more detailed statistics about the sizes of these corpora in Appendix Table 8. Books: The Toronto books corpus described in , which consists of fiction books in 16 different genres, totaling about half a billion words. CCNews: We collect a de-duplicated subset of the English portion of the CommonCrawl news dataset , which totals around 16 billion words. Wikitext: The wikitext103 dataset from , which consists of 103 million words from English Wikipedia articles. While Wikitext and CCNews are factual, Books is fiction and comprises a wide variety of writing styles. The CCNews corpus has the narrowest domain and it is two orders of magnitude larger than Wikipedia. Overall, these datasets are interesting because they enable us to assess the ability of the energy function to fit and generalize across various axes, from the amount of data available at training time to the richness of style and relatedness among the different data sources. On Wikitext and Books, we extract positive sequences from windows of text that are 160 tokens long with a stride of 40. On the larger CCNews we do the same except that we stride by 160 tokens. This protocol to mine positives is used both at training and test time, although at test time we limit the evaluation to 60,000 randomly chosen positive samples. We use a Byte Pair Encoding in order to represent all the datasets with a common vocabulary. In particular, our vocabulary contains 50k tokens constructed from a byte-level UTF-8 encoding of the CC-News corpus following . We mainly use a transformer based network to generate negatives. We have a small, large and huge transformer model based on the architecture used in , yielding three language models in total: TransfSmall, TransfBig and TransfHuge; see details also in Appendix B. The small sized models use 6 blocks, each containing a multi-head attention module with 8 heads. The large models use 12 blocks, each containing a multi-head attention module with 16 heads. The huge models use 48 blocks, each containing a multi-head attention module with 25 heads. Transformer models are also implemented in as "transformer_lm", "transformer_lm_big", and "transformer_lm_gpt2_big". TransfHuge has 10x the number of parameters of TransfBig and it is trained on CCNews only. For each architecture except TransfHuge we train two models on each dataset: left to right and right to left. In addition to the transformer generator, we also consider a 12-layer convolutional architecture (Conv), and we also use the third-party trained GPT2 models as described in §5.3. As described in §3.2, we use these language models to generate either a prefix or a suffix. Unless otherwise specified, the context is either 120 or 140 tokens long (with equal probability). Positive and negative examples have 40 or 20 tokens depending on the context size, for an overall length of 160 tokens in all cases. In preliminary experiments, we found that increasing the size of the generations and reducing the size of the context makes the learning task significantly easier. We analyze the effect of the context size in §5.5. We consider three architectures for the energy function: Linear, which computes an energy value via a bag of tokens, f(w_1, ..., w_n) = Σ_{i=1}^{n} u_{w_i}, where u_j is a learnt scalar parameter corresponding to the j-th token in the vocabulary. BiLSTM, which computes an energy value through L bidirectional layers using LSTM recurrent units , as in
f(w_1, ..., w_n) = Linear(AvgPool(h_{L,1}, ..., h_{L,n})), where h_{L,i} is the hidden state at position i and layer L, which is the concatenation of the forward and backward hidden states, AvgPool averages hidden states over positions, and Linear is a vector of parameters projecting the hidden state down to a scalar value. We consider two versions, referred to as "BiLSTMsmall" and "BiLSTMbig". Both have 4 layers, but BiLSTMsmall has 512 units in both the embedding layer and the hidden layers, while BiLSTMbig has 758 units in the embedding layer and 2014 units in the hidden states. Transformer, which computes an energy value similarly to the BiLSTM's, except that each bi-LSTM layer is replaced by either a bidirectional Transformer layer (BiTransf) or a Transformer with causal self-attention (UniTransf). For unidirectional models we use the same averaging technique as with the BiLSTM models. For bidirectional models the energy is computed via f(w_1, ..., w_n) = u · h_{L,1} + b, where h_{L,1} is the top-layer hidden state at the first position (as is common practice also in prior work). BiTransf uses the BERT-Large architecture initialized from . It uses 24 self-attention layers with 1024 units and 16-head attention each. UniTransf instead has 12 layers with 1024 units and 16 attention heads per layer and is initialized from a language modeling task as in .

[Table 1: Four evaluation settings considered in this work, defined by whether the corpus and the generator architecture match between training and testing (in-domain, cross-architecture, cross-corpus, unseen); described in §4.4.]

For all models, we use the Adam optimizer with warmup. Training is stopped after processing 2.5M samples without any improvement on the validation set. We use data-parallel synchronous multi-GPU training with up to 8 nodes, each with 8 Nvidia V100 GPUs. To improve training speed, we use mixed precision training. Following common practice we clip the norm of the gradient vector. More details about hyper-parameter settings can be found in Appendix Table 11, while Table 10 in the Appendix reports the number of parameters of each energy function. We evaluate the generalization of a residual EBM in four settings: in-domain, cross-architecture, cross-corpus, and unseen. These settings are determined by the corpus C_train used to train the training generator G_train with architecture A_train, and the corpus C_test used to train the testing generator G_test with architecture A_test. Note that G_train ≠ G_test even if A_test = A_train, as we use different training seeds. In all cases, C_train is used for fitting G_train and also for the positives for the EBM. In the in-domain setting, C_test is C_train (but any affixes used as conditioning during testing are from the test set of the corpus), and A_test = A_train. In the cross-architecture setting, again C_test is C_train, but A_test is different from A_train. In the cross-corpus setting, A_test = A_train but C_test is different from C_train, and G_test is trained on the training split of C_test, while G_train is trained on the training split of C_train. In the unseen setting, both C_test is different from C_train and A_test is different from A_train. In all settings, we report performance in terms of average classification accuracy, balancing the positive and negative classes.
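To make the scoring functions above concrete, a minimal sketch of two of the readouts follows. The class names and shapes are illustrative assumptions, and the sequence encoder (LSTM or Transformer) producing the hidden states is omitted; only the bag-of-tokens energy and the average-pool-plus-linear readout mirror the descriptions in the text.

```python
import torch
import torch.nn as nn

class BagOfTokensEnergy(nn.Module):
    """Linear energy: a learnt scalar per vocabulary item, summed over the sequence."""
    def __init__(self, vocab_size):
        super().__init__()
        self.u = nn.Embedding(vocab_size, 1)

    def forward(self, token_ids):                     # token_ids: (batch, seq_len)
        return self.u(token_ids).squeeze(-1).sum(dim=1)   # (batch,) energies

class PooledEnergyHead(nn.Module):
    """Scalar readout on top of encoder states, f = Linear(AvgPool(h_1, ..., h_n))."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states):                 # (batch, seq_len, hidden_dim)
        pooled = hidden_states.mean(dim=1)            # AvgPool over positions
        return self.linear(pooled).squeeze(-1)        # (batch,) energies
```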
We now present the main results of this work and extensively investigate the generalization ability of the energy functions we have considered. In Table 2 we report the results of the in-domain generalization experiment using our large language model, TransfBig. We observe that when the EBMs have similar representational power compared with the generator (UniTransf, see Table 10), they are able to distinguish real from fake completions fairly accurately, reaching an accuracy of more than 90% on the Books dataset (which is easier since it exhibits a larger variety of styles and topics), and attaining above 88% on the more challenging CCNews dataset (for which generation is easier and hence discrimination harder). The Wikipedia dataset has lower accuracy because the EBM overfits to this smaller dataset.

[Table 2: "In domain" generalization accuracy of EBMs (each row) on various text corpora. A column corresponds to the corpus used to get positives and to fit the train and test language models, which are TransfBig (§4.2) with different initial seeds. The last row is the accuracy when using as energy the log-probability of the training language model over the whole sequence: TransfBig (log-likelihood) 57.1, 50.8, 50.5.]

Table 3: Cross-architecture generalization accuracy using the Wikitext dataset for both training and testing (C_train = C_test). Each row is the model architecture used for generating the training negatives (A_train), and each column is the model architecture generating the testing negatives (A_test). The energy function is UniTransf.
              Conv    TransfSmall
Conv          92.9    81.2
TransfSmall   86.5    87.9

Weaker energy models are able to do comparably or better at discriminating real from fake than the training generator used as a discriminator by taking the log probability of the sequence as energy. In Table 3, we assess how well the UniTransf energy function generalizes to different generator architectures at test time, namely Conv and TransfSmall. As a reference, on the Wikitext dataset the test perplexity of Conv and TransfSmall is 35.4 and 33.5, respectively. Therefore, these two generators attain roughly the same perplexity, despite Conv having about 4 times more parameters, see Table 9. Surprisingly, UniTransf has a significantly harder time discriminating TransfSmall negatives, with an in-domain accuracy of 87.9%, compared to 92.9% for Conv. Also, UniTransf trained with TransfSmall negatives is more robust to the (weaker) Conv generations than vice versa, with a mild 1.4% accuracy drop. However, if we average values across rows, we see that UniTransf tested with mixed negatives is just slightly more accurate when training with the harder negatives produced by TransfSmall. In Table 4 we show the results of generalizing across corpora, using UniTransf as the energy function and TransfBig as the generator both at training and test time. We observe that models generalize less well across corpora; for instance, when testing on Wikitext an energy function trained with either Books or CCNews, the accuracy is 59.1% and 65.5%, respectively. However, training on the union of two of the corpora gives a large benefit over training on just one or the other when testing on the third. Finally, training on the union of all three corpora (last two rows) yields an energy function that is very robust to the testing conditions, with an accuracy which is on par if not better than training on in-domain data, even for the largest CC-News dataset (second column).
We also tested the bidirectional transformer energy function BiTransf with 355M parameters (almost twice as many as UniTransf), and found that on CC-News it improves accuracy by more than 5% when it is trained on the union of all corpora, confirming the finding that bigger models trained on more data can achieve substantially better discrimination. As BiTransf was pre-trained using the whole of Wikipedia rather than the training part of Wikitext103, we do not report its accuracy on the Wiki test set.

[Table 5: Test-time negatives are generated by the models specified in the rows, with their training set in parentheses and model size in millions of parameters. Note that both the training corpus and the GPT2 generator are "unseen" by the energy function during training.]

In Table 5 we test the generalization of the energy functions to GPT-2 generators that were trained on a completely different dataset, namely WebText, a dataset of 8 million web pages. This is an instance of unseen generalization since C_train ≠ C_test and A_train ≠ A_test. We also consider generations from TransfHuge (last row), whose configuration is similar to the unreleased biggest GPT2 model with 1.4 billion parameters, 7 times bigger than TransfBig, the generator used at training time. Expectedly, as the generator gets bigger the discrimination task gets harder. When the energy function is confronted with generations from the GPT2 small model, which is smaller than the training generator, the accuracy is close to the in-domain setting, however. For instance, the BiTransf accuracy increases by 0.4% on the Books corpus and decreases by 5.4% on the CCNews corpus compared to the fully in-domain results of Table 4. That suggests that, for a known domain, a big enough energy model trained with a single big generator can efficiently discriminate a black-box generator. Of course, accuracy decreases as the black-box generator is made bigger (GPT-2 medium). Finally, we investigate generalization of the energy function to a new domain, such as samples from the dataset of GPT-2 generations. For each model the dataset has 250k generated texts obtained with either top-k sampling or random sampling. The dataset also provides samples from the WebText corpus that was used to train the generator models, which we use to discriminate against. To adapt our models to this task we split the text segments into sets of intersecting blocks of 160 tokens. During training we treat all blocks in a set as either positives or negatives. During evaluation we take the mean prediction over all blocks in a segment as the prediction for the whole segment.

[Table 6: Generalization of the energy function to unconditional generation from various GPT2 models (model size in parentheses, followed by the sampling method used). Each row contains the accuracy on the corresponding test set. TF-IDF results are from the baseline provided with the dataset.]

In Table 6 we report results of the BiTransf energy function compared to the TF-IDF baseline provided with the dataset. We consider two cases. In the in-domain setting, we finetune the energy function on the train set of each of the datasets, following the same protocol used by the provided TF-IDF baseline. In generalization mode, we finetune only on the generations from the small GPT2 model (both top-k and random sampling), and apply the model to the other datasets. Unsurprisingly, in-domain BiTransf beats the TF-IDF baseline, getting almost 100% across the board.
However, in generalization mode, we can outperform the TF-IDF baseline only when the generator is less than three times bigger than what was used at training time. Interestingly, our energy function was trained using a fixed-length input with a prefix. These generalization results are significantly higher than those of the in-domain experiment of Table 2 because the unconditional task is significantly easier, a topic further discussed next. First, we investigate the dependency between the performance of the energy functions and the length of the prefix. We trained BiLSTMSmall and UniTransf models on examples with varying prefix length from the Wikitext corpus, and computed the accuracy for each prefix length independently. Figure 1 shows that as the prefix length increases (and the generation gets shorter), the discrimination task gets harder and the difference between the models more prominent. The unconditional case, i.e. zero prefix length, is the easiest, while prefixes of length 120 and 140, which are the main experimental setup in this work, are the hardest. Finally, in Table 7 we study the impact of the number of negatives and of using the most offending negative in the loss of Eq. 1. Using more negatives and harder negatives improves accuracy. In the previous sections we have seen that the energy function is less robust to negatives generated from a model trained on a different corpus. However, even in that case, a negative is still a sample from an auto-regressive neural network. In Appendix F, we show examples where changing a few entities can cause large jumps in the energy (from negative to positive or vice versa), and so fool the EBM. More generally, we see that the energy function is not robust to truly out-of-domain samples. For example, the energy will score blocks of randomly generated text lower than real text. These behaviors are evidence that the energy functions have learned the regularities of generated text, as opposed to learning the regularities of real text. We surmise that they do so because modeling the latter would be much more difficult than the former. By modeling generated text, the energy function assigns a low score to anything that is not generated by its training generator. While not surprising, this might be considered a liability of such energy functions. However, as a model of text, the energy functions should be considered as working on the residuals of the language models used to generate negatives. For the examples in Appendix F, the language model records a large decrease in likelihood after the change in entity; and language models of course give much lower likelihood to random text than to gold or generated text. Therefore, the energy function need not be accurate on examples that are already very unlikely according to these language models. In Figure 2 we show the average effects of applying various perturbations to sequences from Wikitext103 on an in-domain energy and language model at each location (from 1 to 160) in the sequence. We see that for all perturbations, the energy decreases its value, but the language model increases its negative log likelihood. We also see that the energy function is more sensitive to the ends of the text, which is where the negatives were different from real text at training time. The EBM framework could potentially unlock more expressive models of text, as such models are not limited to scoring a single word at a time as current locally normalized auto-regressive models do.
Unfortunately, training EBMs is challenging because generating negatives using the energy function itself is still an open research problem, and does not scale well in practice. In this work, we propose a simple solution, which is to leverage generations produced by pre-trained language models as negative samples. As a preliminary yet necessary step in this direction we have investigated the generalization ability of such EBMs. We found that EBMs, when trained on large datasets, achieve good generalization. For instance, they behave nicely when tested with negatives produced by generators that have rather different architectures. The generalization is less good when generators are trained on other corpora, but EBMs regain robustness once we train them on even bigger composite datasets. In the future, we can improve EBMs for text by simply making their architectures bigger and increasing the diversity and size of their training datasets. Of course, further scaling up of EBMs will pose formidable engineering challenges. On the application side, a natural application of the current formulation of EBMs is real/fake text discrimination. We believe that this is an important application in its own right, and that EBMs can be very powerful, as demonstrated by their superior performance compared to discriminating using the original language model log-likelihood.

[Table 9: Number of parameters (in millions) for the generator language models. The computational cost is directly related to the number of parameters in layers other than the input embedding layer (second row). (a) We use models from the HuggingFace repository (https://github.com/huggingface/pytorch-transformers) and report here the sizes of these models as they were used to generate data for Table 5. Note that the OpenAI GPT2 repository (https://github.com/openai/gpt-2) defines the model sizes as 124M and 355M for the small and medium models, respectively. (b) As reported in the cited work.]

[Table 10: Number of parameters in millions for the scoring functions. The computational cost is directly related to the number of parameters in layers other than the input embedding layer (second row).]

The (per-sample) ranking loss (Eq. 2) penalizes the model whenever the positive does not score a sufficiently lower energy than its most offending negative from the same context. In this case we also refer to the negative energy as the model score. The ranking loss makes the energy values local, as the loss takes as input the difference of energies for a pair of positive and negative samples that share the same context. Instead, the binary cross-entropy loss of Eq. 1 encourages a more global and absolute scoring, as the loss forces all positive examples to have negative energy and all negative samples to have positive energy, regardless of the context. Therefore, the binary cross-entropy loss is perhaps more interpretable, as it is not context dependent, but the task is also harder to learn. Empirically, we found similar findings with both losses. When the energy function is trained using the ranking loss of Eq. 2, we evaluate the model using precision at 1 (P@1), which is the fraction of test sequences for which the ground truth sequence scores the lowest energy over its set of negatives. All models are implemented using the PyTorch framework and are optimized using Adam. To train our biggest models (UniTransf and BiTransf) we used 8 machines, each with 8 GPUs, in synchronous mode using data parallelism. The resulting large batch size speeds up training when combined with float16 reduced precision and cosine scheduling of the learning rate without any restarts, i.e.
we decay the learning rate to zero over the course of "max steps" updates and then stop training. Using these methods, we reduced training time by a factor of five compared to single-node training. For all other configurations we used a single node with up to 8 GPUs and inverse square root decay. In this section we show that we can change a few words to make a negative example become a "positive" one as judged by the energy function, and vice versa, by using gradient information. Below, we show an example of a ground truth sentence from the Wikitext dataset. Here the block has 160 BPE tokens, where the first 120 tokens (black font) are used as context and the remaining 40 are the ground truth completion. Next, we use a language model to generate 10 negatives:... In this example, using the big transformer model, UniTransf, as the energy function, we are able to separate real from fake examples as shown (Figure 4). We want to perturb these negatives to violate the margin. To do so, we make use of the gradient information from the energy function ∇_x E_θ(x) and use a first-order Taylor expansion to approximate the effect of a token replacement (we abuse our notation and use x to denote embeddings in this analysis). Given the original sample x, we change one word x_i to x'_i to arrive at x'. The score of x' is approximately E_θ(x') ≈ E_θ(x) + (x'_i − x_i) · ∇_{x_i} E_θ(x). Using this approximation, we can search for those token replacements that increase/decrease the energy the most. We can easily change a negative sample to a positive one by replacing the 5 words highlighted below. In parentheses, we report both the score and the language model perplexity.
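The first-order search over token replacements described above can be written compactly. In the sketch below, the gradient at the chosen position, the current token embedding, and the full embedding matrix are assumed to be given; the function name and shapes are illustrative.

```python
import torch

def approx_energy_change(grad_at_pos, current_emb, candidate_embs):
    """First-order estimate of how the energy changes if the token embedding at one
    position is swapped for each candidate embedding.

    grad_at_pos:    (d,)   gradient of the energy w.r.t. the embedding at position i
    current_emb:    (d,)   embedding of the current token x_i
    candidate_embs: (V, d) embedding matrix of all possible replacement tokens
    """
    # E(x') ~= E(x) + (x'_i - x_i) . grad_i, so the estimated change is (x'_i - x_i) . grad_i
    return (candidate_embs - current_emb) @ grad_at_pos   # (V,) estimated energy deltas

# Usage sketch: pick the replacement that lowers (or raises) the energy the most.
# deltas = approx_energy_change(grad_i, emb_matrix[token_i], emb_matrix)
# best_replacement = int(torch.argmin(deltas))
```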
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SkgpGgrYPH
A residual EBM for text whose formulation is equivalent to discriminating between human and machine generated text. We study its generalization behavior.
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was supposed in previous work. Code will be made available. Convolutional neural networks (CNNs) have become the dominant approach in computer vision. To best exploit them, vast amounts of labeled data are required. Obtaining such labels, however, is not trivial, and the research community is exploring alternatives to alleviate this. Knowledge transfer via deep domain adaptation is a popular alternative that seeks to learn transferable representations from source to target domains by embedding domain adaptation in the learning pipeline. Other approaches focus exclusively on learning useful representations from scratch in a target domain when annotation constraints are relaxed. Semi-supervised learning (SSL) focuses on scenarios with sparsely labeled data and extensive amounts of unlabeled data; learning with label noise seeks robust learning when labels are obtained automatically and may not represent the image content; and self-supervised learning uses data supervision to learn from unlabeled data in a supervised manner. This paper focuses on SSL for image classification, a recently very active research area. SSL is a transversal task for different domains including images, audio, time series (González et al., 2018), and text. Recent approaches in image classification primarily focus on exploiting the consistency in the predictions for the same sample under different perturbations (consistency regularization), while other approaches directly generate labels for the unlabeled data to guide the learning process (pseudo-labeling). These two alternatives differ importantly in the mechanism they use to exploit unlabeled samples. Consistency regularization and pseudo-labeling approaches apply different strategies such as a warm-up phase using labeled data, uncertainty weighting, adversarial attacks, or graph-consistency. These strategies deal with confirmation bias, also known as noise accumulation. This bias stems from using incorrect predictions on unlabeled data for training in subsequent epochs, thereby increasing confidence in incorrect predictions and producing a model that will tend to resist new changes. This paper explores pseudo-labeling for semi-supervised deep learning from the network predictions and shows that, contrary to previous attempts at pseudo-labeling, simple modifications to prevent confirmation bias lead to state-of-the-art performance without adding consistency regularization strategies.
We adapt the approach proposed by in the context of label noise and apply it exclusively on unlabeled samples. Experiments show that this naive pseudo-labeling is limited by confirmation bias, as prediction errors are fit by the network. To deal with this issue, we propose to use mixup augmentation as an effective regularization that helps calibrate deep neural networks and, therefore, alleviates confirmation bias. We find that mixup alone does not guarantee robustness against confirmation bias when reducing the amount of labeled samples or using certain network architectures (see Subsection 4.3), and show that, when properly introduced, dropout regularization and data augmentation mitigate this issue. Our purely pseudo-labeling approach achieves state-of-the-art results (see Subsection 4.4) without requiring multiple networks, nor over a thousand epochs of training to achieve peak performance in every dataset, nor many (ten) forward passes for each sample. Compared to other pseudo-labeling approaches, the proposed approach is simpler in that it does not require graph construction and diffusion or combination with consistency regularization methods, but still achieves state-of-the-art results. This section reviews closely related SSL methods, i.e. those using deep learning with mini-batch optimization over large image collections. Previous works on deep SSL differ in whether they use consistency regularization or pseudo-labeling to learn from the unlabeled set, while they all share the use of a cross-entropy loss (or similar) on labeled data. Consistency regularization imposes that the same sample under different perturbations must produce the same output. This idea was used in , where they apply randomized data augmentation, dropout, and random max-pooling while forcing softmax predictions to be similar. A similar idea is applied in , which also extends the perturbation to different epochs, i.e. the current prediction for a sample has to be similar to an ensemble of predictions of the same sample in the past. Here the different perturbations come from networks at different states, dropout, and data augmentation. In , the temporal ensembling method is interpreted as a teacher-student problem where the network is both a teacher that produces targets for the unlabeled data as a temporal ensemble, and a student that learns the generated targets by imposing the consistency regularization. naturally re-define the problem to deal with confirmation bias by separating the teacher and the student. The teacher is defined as a different network with similar architecture whose parameters are updated as an exponential moving average of the student network weights. This method is extended in , where they apply an uncertainty weight over the unlabeled samples to learn from the unlabeled samples with low uncertainty (i.e. entropy of the predictions for each sample under random perturbations). use virtual adversarial training to carefully introduce perturbations to data samples as adversarial noise and later impose consistency regularization on the predictions. More recently, propose to use a contrastive loss on the predictions as a regularization that forces predictions to be similar (different) when they are from the same (different) class. This method extends the consistency regularization previously considered only in-between the same data samples to in-between different samples. Their method can naturally be combined with or to boost their performance.
propose interpolation consistency training, a method inspired by that encourages predictions at interpolated unlabeled samples to be consistent with the interpolated predictions of individual samples. Also, authors in apply consistency regularization by guessing low-entropy labels, generating data-augmented unlabeled examples and mixing labeled and unlabeled examples using mixup. Both and adopt to estimate the targets used in the consistency regularization. Co-training uses two (or more) networks trained simultaneously to agree on their predictions (consistency regularization) and disagree on their errors. Errors are defined as different predictions when exposed to adversarial attacks, thus forcing different networks to learn complementary representations for the same samples. measure the consistency between the current prediction and an additional prediction for the same sample given by an external memory module that keeps track of previous representations. They additionally introduce an uncertainty weighting of the consistency term to reduce the contribution of uncertain predictions. Consistency regularization methods have all been shown to benefit from the stochastic weight averaging method, which averages network parameters at different training epochs to move the SGD solution from the borders of flat loss regions to their center and improve generalization. Pseudo-labeling seeks the generation of labels or pseudo-labels for unlabeled samples to guide the learning process. An early attempt at pseudo-labeling proposed in uses the network predictions as labels. However, they constrain the pseudo-labeling to a fine-tuning stage, i.e. there is a pre-training or warm-up to initialize the network. A recent pseudo-labeling approach proposed in uses the network class prediction as hard labels for the unlabeled samples. They also introduce an uncertainty weight for each sample loss, it being higher for samples that have distant k-nearest neighbors in the feature space. They further include a loss term to encourage intra-class compactness and inter-class separation, and a consistency term between samples with different perturbations. Improved results are reported in combination with . Finally, a recently published work implements pseudo-labeling through graph-based label propagation. The method alternates between two steps: training from labeled and pseudo-labeled data, and using the representations of the network to build a nearest neighbor graph where label propagation is applied to refine hard pseudo-labels. They further add an uncertainty score for every sample (based on softmax prediction entropy) and class (based on class population) to deal, respectively, with the unequal confidence in network predictions and class imbalance. We formulate SSL as learning a model h_θ(x) from a set of N training samples D. These samples are split into the labeled set D_l = {(x_i, y_i)}_{i=1..N_l} and the unlabeled set D_u = {x_i}_{i=1..N_u}, where y_i ∈ {0, 1}^C is the one-hot encoding label for C classes corresponding to x_i and N = N_l + N_u. In our case, h_θ is a CNN and θ represents the model parameters (weights and biases). As we seek to perform pseudo-labeling, we assume that a pseudo-label ỹ is available for the N_u unlabeled samples. We can then reformulate SSL as training using D̃ = {(x_i, ỹ_i)}_{i=1..N}, with ỹ = y for the N_l labeled samples. The CNN parameters θ can be optimized using the categorical cross-entropy ℓ* = − Σ_{i=1..N} ỹ_i^T log(h_θ(x_i)), where h_θ(x) are the softmax probabilities produced by the model and log(·) is applied element-wise. A key decision is how to generate the pseudo-labels ỹ for the N_u unlabeled samples.
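For concreteness, the cross-entropy against soft pseudo-labels can be written as below. Averaging over the mini-batch rather than summing over all N samples is an implementation assumption, and the function name is illustrative.

```python
import torch

def soft_pseudo_label_ce(logits, soft_targets):
    """Categorical cross-entropy against soft (pseudo-)labels.
    logits: (batch, C) raw network outputs; soft_targets: (batch, C) soft labels."""
    log_probs = torch.log_softmax(logits, dim=1)           # log h_theta(x)
    return -(soft_targets * log_probs).sum(dim=1).mean()   # mean of -y~^T log h_theta(x)
```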
Previous approaches have used hard pseudo-labels (i.e. one-hot vectors), directly using the network output class or the class estimated using label propagation on a nearest neighbor graph. We adopt the former approach, but use soft pseudo-labels, as we have seen this outperforms hard labels, confirming the observations noted in in the context of relabeling when learning with label noise. In particular, we store the softmax predictions h_θ(x_i) of the network in every mini-batch of an epoch and use them to modify the soft pseudo-labels ỹ for the N_u unlabeled samples at the end of every epoch. We proceed as described from the second to the last training epoch, while in the first epoch we use the softmax predictions for the unlabeled samples from a model trained in a 10-epoch warm-up phase using the labeled data subset D_l. We use the two regularizations applied in to improve convergence. The first regularization deals with the difficulty of converging at early training stages, when the network's predictions are mostly incorrect and the CNN tends to predict the same class for all samples to minimize the loss. Assignment of all samples to a single class is discouraged by adding R_A = Σ_{c=1..C} p_c log(p_c / h̄_c), where p_c is the prior probability for class c and h̄_c denotes the mean softmax probability of the model for class c across all samples in the dataset. As in , we assume a uniform distribution p_c = 1/C for the prior probabilities (R_A stands for all-classes regularization) and approximate h̄_c using mini-batches. The second regularization is needed to concentrate the probability distribution of each soft pseudo-label on a single class, thus avoiding the local optima in which the network might get stuck due to weak guidance: R_H = − (1/N) Σ_{i=1..N} Σ_{c=1..C} h_θ^c(x_i) log(h_θ^c(x_i)), where h_θ^c(x_i) denotes the class-c value of the softmax output h_θ(x_i), and again mini-batches are used (i.e. N is replaced by the mini-batch size) to approximate this term. This second regularization is the average per-sample entropy (R_H stands for entropy regularization), a well-known regularization in SSL. Finally, the total semi-supervised loss (Eq. 4) is ℓ = ℓ* + λ_A R_A + λ_H R_H, where λ_A and λ_H control the contribution of each regularization term. We stress that this pseudo-labeling approach adapted from is far from the state-of-the-art for SSL (see Subsection 4.2), and it is the mechanisms proposed in Subsection 3.1 that make pseudo-labeling a suitable alternative. Network predictions are, of course, sometimes incorrect. This situation is reinforced when incorrect predictions are used as labels for unlabeled samples, as is the case in pseudo-labeling. Overfitting to incorrect pseudo-labels predicted by the network is known as confirmation bias. It is natural to think that reducing the confidence of the network in its predictions might alleviate this problem and improve generalization. Recently, mixup data augmentation introduced a strong regularization technique that combines data augmentation with label smoothing, which makes it potentially useful to deal with this bias. Mixup trains on convex combinations of sample pairs (x_p and x_q) and corresponding labels (y_p and y_q): x = δ x_p + (1 − δ) x_q and y = δ y_p + (1 − δ) y_q (Eq. 6), where δ ∈ [0, 1] is randomly sampled from a beta distribution Be(α, β), with α = β (e.g. α = 1 uniformly selects δ). This combination regularizes the network to favor linear behavior in-between training samples, reducing oscillations in regions far from them. Additionally, Eq. 6 can be re-interpreted in the loss as ỹ = δ ỹ_p + (1 − δ) ỹ_q, thus re-defining the loss ℓ* used in Eq.
4 as ℓ* = − Σ_{i=1..N} [ δ ỹ_{p,i} + (1 − δ) ỹ_{q,i} ]^T log(h_θ(x_i)), i.e. a δ-weighted combination of the cross-entropy terms against ỹ_p and ỹ_q. As shown in , overconfidence in deep neural networks is a consequence of training on hard labels, and it is the label smoothing effect from randomly combining y_p and y_q during mixup training that reduces prediction confidence and improves model calibration. In the semi-supervised context with pseudo-labeling, using soft labels and mixup reduces overfitting to model predictions, which is especially important for unlabeled samples whose predictions are used as soft labels. Note that training with mixup generates softmax outputs h_θ(x) for mixed inputs x, thus requiring a second forward pass with the original images to compute unmixed predictions. Mixup data augmentation alone might not effectively deal with confirmation bias when few labeled examples are provided. For example, when training with 500 labeled samples in CIFAR-10 and a mini-batch size of 100, roughly 1 clean sample per mini-batch is seen, which is especially problematic at early stages of training where little correct guidance is provided. We find that setting a minimum number of labeled samples per mini-batch (as done in other works) provides a constant reinforcement with correct labels during training, reducing confirmation bias and helping to produce better pseudo-labels. Subsections 4.2 and 4.3 experimentally show that mixup, a minimum number of labeled samples per mini-batch, and other techniques (dropout and data augmentation) reduce confirmation bias and make pseudo-labeling an effective alternative to consistency regularization. We use four image classification datasets, CIFAR-10/100, SVHN, and Mini-ImageNet, to validate our approach. Part of the training images are labeled and the remaining are unlabeled. Following , we use a validation set of 5K samples for CIFAR-10/100 for studying hyperparameters in Subsections 4.2 and 4.3. However, as done in , we add the 5K samples back to the training set for the comparisons in Subsection 4.4, where we report test results (model from the best epoch). CIFAR-10, CIFAR-100, and SVHN. These datasets contain 10, 100, and 10 classes respectively, with 50K color images for training and 10K for testing in CIFAR-10/100, and 73257 images for training and 26032 for testing in SVHN. The three datasets have resolution 32×32. We perform experiments with a number of labeled images N_l = 0.25K, 0.5K, and 1K for SVHN, and N_l = 0.25K, 0.5K, 1K, and 4K (4K and 10K) for CIFAR-10 (CIFAR-100). We use the well-known "13-CNN" architecture for CIFAR-10/100 and SVHN. We also experiment with a Wide ResNet-28-2 (WR-28) and a PreAct ResNet-18 (PR-18) in Subsection 4.3 to study the generalization to different architectures. We emulate the semi-supervised learning setup for Mini-ImageNet (a subset of the well-known ImageNet dataset) used in . Train and test sets of 100 classes and 600 color images per class with resolution 84 × 84 are selected from ImageNet, as in . 500 (100) images per class are kept for the train (test) split. The train and test sets therefore contain 50k and 10k images. As with CIFAR-100, we experiment with a number of labeled images N_l = 4K and 10K. Following , we use a ResNet-18 (RN-18) architecture. Hyperparameters. We use the typical configuration for CIFAR-10/100 and SVHN, and the same for Mini-ImageNet: image normalization using the dataset mean and standard deviation, and subsequent data augmentation by random horizontal flips and 2-pixel translations for CIFAR and SVHN (Mini-ImageNet). Additionally, color jitter is applied as in in Subsections 4.3 and 4.4 for higher robustness against confirmation bias.
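Pulling together the ingredients of Section 3 (soft pseudo-labels, the R_A and R_H regularizers, mixup, and oversampling of labeled examples), a single training step could look roughly like the sketch below. Apart from the weights λ_A = 0.8 and λ_H = 0.4 and the choice of at least k = 16 labeled samples per mini-batch taken from the text, everything here (function names, shapes, the small epsilon for numerical stability) is an assumption rather than the authors' implementation.

```python
import torch

def training_step(model, x, soft_targets, alpha=1.0, lambda_a=0.8, lambda_h=0.4, num_classes=10):
    """One mini-batch step with mixup and the two regularizers (a sketch).
    x mixes labeled and unlabeled samples; soft_targets holds one-hot labels for the
    labeled ones and the current soft pseudo-labels for the unlabeled ones."""
    # mixup: convex combination of the batch with a shuffled copy of itself
    delta = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = delta * x + (1 - delta) * x[perm]

    probs = torch.softmax(model(x_mix), dim=1)
    ce = lambda t: -(t * torch.log(probs + 1e-8)).sum(dim=1).mean()
    loss = delta * ce(soft_targets) + (1 - delta) * ce(soft_targets[perm])  # mixed cross-entropy

    # R_A: keep the mean prediction close to the uniform prior over classes
    mean_probs = probs.mean(dim=0)
    prior = torch.full_like(mean_probs, 1.0 / num_classes)
    r_a = (prior * torch.log(prior / (mean_probs + 1e-8))).sum()

    # R_H: per-sample entropy, pushing each prediction towards a single class
    r_h = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

    return loss + lambda_a * r_a + lambda_h * r_h

# Mini-batches are built so that at least k labeled samples (k = 16 in the experiments)
# are included, e.g. by oversampling the labeled subset in the batch sampler. Pseudo-labels
# themselves come from a second forward pass on unmixed, non-augmented inputs with
# dropout disabled, as described in the text.
```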
We train using SGD with momentum of 0.9, weight decay of 10 −4, and batch size of 100. Training always starts with a high learning rate (0.1 in CIFAR and SVHN, and 0.2 in Mini-ImageNet), dividing it by ten twice during training. We train for CIFAR and Mini-ImageNet 400 epochs (reducing learning rate in epochs 250 and 350) and use 10 epoch warm-up with labeled data, while for SVHN we train 150 epochs (reducing learning rate in epochs 50 and 100) and use a longer warm-up of 150 epochs to start the pseudo-labeling with good predictions and leading to reliable convergence. We do not attempt careful tuning of the regularization weights λ A and λ H and just set them to 0.8 and 0.4 as done in (see Appendix A.2 for an ablation study of these parameters). When using dropout, it is introduced between consecutive convolutional layers of ResNet blocks in WR-28, PR-18, and RN-18, while for 13-CNN we introduce it as in. Following 1, we use weight normalization in all networks. This section demonstrates that carefully regularized pseudo-labeling is a suitable alternative for SSL. Figure 1 illustrates our approach on the "two moons" toy data. Figure 1 (left) shows the limitations of a naive pseudo-labeling adapted from , which fails to adapt to the structure in the unlabelled examples and in a linear decision boundary. Figure 1 (middle) shows the effect of mixup, which alleviates confirmation bias to better model the structure and gives a smoother boundary. Figure 1 (right) shows that combining mixup with a minimum number of labeled samples k per mini-batch improves the semi-supervised decision boundary. Naive pseudo-labeling leads to overfitting the network predictions and high training accuracy in CIFAR-10/100. Table 1 (left) reports mixup effect in terms of validation error. Naive pseudo-labeling Figure 1: Pseudo-labeling in the "two moons" data (4 labels/class) for 1000 samples. From left to right: no mixup, mixup, and mixup with a minimum number of labeled samples per mini-batch. We use an NN classifier with one hidden layer with 50 hidden units as in . leads to an error of 11.40/48.54 for CIFAR-10/100 when training with cross-entropy (C) loss for 4000 labels. This error can be greatly reduced when using mixup (M) to 7.16/41.80. However, when further reducing the number of labels to 500 in CIFAR-10, M is insufficient to ensure low-error (32.10). We propose to set a minimum number of samples k per mini-batch to tackle the problem. Table 1 (right) studies this parameter k when combined with mixup, showing that 16 samples per mini-batch works well for both CIFAR-10 and CIFAR-100, dramatically reducing error in all cases (e.g. in CIFAR-10 for 500 labels error is reduced from 32.10 to 13.68). Confirmation bias causes a dramatic increase in the certainty of incorrect predictions during training. To demonstrate this behavior we compute the average cross-entropy of the softmax output with a uniform U across the classes in every epoch t for all incorrectly predicted samples {x mt} as: Figure 2 shows that mixup and minimum k are effective regularizers for reducing r t, i.e. confirmation bias is reduced. We also experimented with using label noise regularizations , but setting a minimum k proved more effective. There are examples in the recent literature where moving from one architecture to another modifies the belief of which methods have a higher potential. 
show that skip-connections in ResNet architectures play a key role on the quality of learned representations, while most approaches in previous literature were systematically evaluated using AlexNet (A.). showed that different architectures lead different and useful image priors, highlighting the importance of exploring different networks. We, therefore, test our method with two more architectures: a Wide ResNet-28-2 (WR-28) (S.) typically used in SSL (1.5M parameters) and a PreAct ResNet-18 (PR-18) (b) used in the context of label noise ) (11M parameters). Table 2 presents the for the 13-CNN (AlexNet-type) and these network architectures (ResNet-type). Our pseudo-labeling with mixup and k = 16 (M*) works well for 4000 and 500 labels across architectures, except for 500 labels for WR-28 where there is large error increase (29.50). This is due to a stronger confirmation bias in which labeled samples are not properly learned, while incorrect pseudo-labels are fit. Interestingly, PR-18 (11M) is more robust to confirmation bias than WR-28 (1.5M), while the 13-layer network (3M) has fewer parameters than PR-18 and achieves better performance. This suggests that the network architecture plays an important role, being a relevant prior for SSL with few labels. Figure 2: Example of certainty of incorrect predictions r t during training when using 500 (left) and 4000 (right) labeled images in CIFAR-10. Moving from cross-entropy (C) to mixup (M) reduces r t, whereas adding a minimum number of samples per mini-batch (*) also helps in 500 labels, where M* (with slightly lower r t than M) is the only configuration that converges, as shown in Table 1 (left). Table 2: Validation error across architectures is stabilized using dropout p and data augmentation (A). Labeled images 500 4000 500 4000 500 4000 500 4000 We found that dropout and data augmentation is needed for good performance across all architectures. Table 2 shows that dropout p = 0.1, 0.3 helps in achieving better convergence in CIFAR-10, whereas adding color jitter as additional data augmentation (details in Subsection 4.1) further contributes to error reduction. Note that the quality of pseudo-labels is key, so it is essential to disable dropout to prevent corruption when computing these in the second forward pass. We similarly disable data augmentation in the second forward pass, which consistently improves performance. This configuration is used for comparison with the state-of-the-art in Subsection 4.4. We compare our pseudo-labeling approach against related work that makes use of the 13-CNN in CIFAR-10/100: Π model , TE , MT , Π model-SN , MA-DNN , Deep-Co , TSSDL , LP , CCL , fast-SWA and ICT . Table 3 divides methods into those based on consistency regularization and pseudo-labeling. Note that we include pseudolabeling approaches combined with consistency regularization ones (e.g. MT) in the consistency regularization set. The proposed approach clearly outperforms consistency regularization methods, as well as other purely pseudo-labeling approaches and their combination with consistency regularization methods in CIFAR-10/100 and SVHN (250 labels are reported in Table 3 and extended are in the Appendix A.3). These demonstrate the generalization of the proposed approach compared to other methods that fail when decreasing the number of labels. Furthermore, Table 4 (left) demonstrates that the proposed approach successfully scales to higher resolution images, obtaining an over 10 point margin on the best related work in Mini-ImageNet. 
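The certainty measure r t plotted in Figure 2 is described above as the average cross-entropy between the softmax output and a uniform distribution over the classes, computed over the samples that are incorrectly predicted in epoch t. A small NumPy sketch under that reading (names are illustrative, and the cross-entropy direction H(uniform, prediction) is an interpretation of the textual definition):

```python
import numpy as np

def certainty_of_incorrect(probs, preds, true_labels):
    """r_t: mean cross-entropy H(uniform, h_theta(x)) over the samples whose
    prediction is wrong in the current epoch.
    probs: (N, C) softmax outputs; preds, true_labels: (N,) integer arrays."""
    wrong = preds != true_labels
    if not np.any(wrong):
        return 0.0
    p = probs[wrong]                              # softmax outputs of incorrect predictions
    num_classes = p.shape[1]
    uniform = np.full(num_classes, 1.0 / num_classes)
    return float(np.mean(-(uniform * np.log(p + 1e-12)).sum(axis=1)))
```

Under this definition r t grows as incorrect predictions become more peaked, which is exactly the trend that Figure 2 tracks when comparing C, M, and M*.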
Note that all supervised baselines are reported using the same data augmentation and dropout as in the proposed pseudo-labeling. Table 4 (right) compares against recent consistency regularization approaches that use mixup. We achieve better performance than ICT , while being competitive with MM for 500 and 4000 labels using WR-28. Regarding PR-18, we converge to reasonable performance for 4000 and 500 labels, whereas for 250 we do not. Finally, the 13-CNN robustly converges even for 250 labels where we obtain 9.37 test error (see Appendix A.1 for some details on different architectures convergence). Therefore, these suggest that it is worth exploring the relationship between number of labels, dataset complexity and architecture type. As shown in Subsection 4.3, dropout and additional data augmentation help with 500 labels/class across architectures, but are insufficient for 250 labels. Better data augmentation or self-supervised pre-training might overcome this challenge. Furthermore, hyperparameters such as the regularization weights λ A = 0.8 and λ H = 0.4 from Eq. 4 and the mixup α require further study. However, it is already interesting that a straightforward modification of pseudo-labeling, designed to tackle confirmation bias, gives a competitive semi-supervised learning approach, without any consistency regularization, and future work should take this into account. This paper presented a semi-supervised learning approach for image classification based on pseudolabeling. We proposed to directly use the network predictions as soft pseudo-labels for unlabeled data together with mixup augmentation, a minimum number of labeled samples per mini-batch, dropout and data augmentation to alleviate confirmation bias. This conceptually simple approach outperforms related work in four datasets, demonstrating that pseudo-labeling is a suitable alternative to the dominant approach in recent literature: consistency-regularization. The proposed approach is, to the best of our knowledge, both simpler and more accurate than most recent approaches. Future work should explore SSL in class-unbalanced and large-scale datasets, synergies of pseudo-labelling and consistency regularization, and careful hyperparameter tuning. Figure 3 presents the cross-entropy loss for labeled samples when training with 13-CNN, WR-28 and PR-18 and using 500 and 250 labels in CIFAR-10. This loss is a good indicator of a robust convergence to reasonable performance as the interquartile range for cases failing (250 labels for WR-28 and PR-18) is much higher. This subsection studies the effect of α, λ A, and λ H hyperparameters of our pseudo-labeling approach. Tables 5 and 6 report the validation error in CIFAR-10 using 500 and 4000 labels for, respectively, α and λ A and λ H. Note that we keep the same configuration used in Subsection 4.2 with k = 16, i.e. no dropout or additional data augmentation is used. Table 5 suggest that α = 4 and α = 8 values might further improve the reported using α = 1. However, we experimented on CIFAR-10 with 500 labels using the final configuration (adding dropout and additional data augmentation) and observed marginal differences (8.54 with α = 4, which is within the error range of the 8.80 ± 0.45 obtained with α = 1), thus suggesting that stronger mixup regularization might not be additive to dropout and extra data augmentation in our case. 
Table 6 shows that our configuration (λ A = 0.8 and λ H = 0.4), adopted from , is very close to the best performance in this experiment, where only marginal improvements are achieved. Overall, more careful hyperparameter tuning might slightly improve the results reported in the paper, but the selected configuration is already good and generalizes across datasets. A.3 SVHN EXPERIMENTS Table 7 reports a comparison of different state-of-the-art algorithms in SVHN using the 13-CNN network. We train 150 epochs using labeled samples only (i.e. warm-up) to compute initial pseudo-labels. Then, we train 150 epochs, reducing the learning rate in epochs 50 and 100. The long warm-up makes the method robust to lower numbers of labeled samples in SVHN. We also experimented in CIFAR-10 with a longer warm-up (our results are reported using 10 epochs) and found that the results are in the same error range as already reported.
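As a final illustration, the minimum number k of labeled samples per mini-batch used throughout the experiments amounts to oversampling the labeled subset when batches are formed. A minimal sketch with illustrative names (k = 16 and batch size 100 as in the CIFAR-10 experiments); this is not the authors' implementation:

```python
import random

def build_minibatch(labeled_idx, unlabeled_idx, batch_size=100, k=16):
    """Assemble one mini-batch of dataset indices with at least k labeled
    samples, oversampling the labeled subset when few labels are available."""
    assert k <= batch_size
    labeled_part = random.choices(labeled_idx, k=k)          # sampling with replacement
    unlabeled_part = random.sample(unlabeled_idx, batch_size - k)
    batch = labeled_part + unlabeled_part
    random.shuffle(batch)
    return batch
```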
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJel41BtDH
Pseudo-labeling has been shown to be a weak alternative for semi-supervised learning. We, conversely, demonstrate that dealing with confirmation bias through several regularizations makes pseudo-labeling a suitable approach.
Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods. Reinforcement learning (RL) algorithms provide a formalism for autonomous learning of complex behaviors. When combined with rich function approximators such as deep neural networks, RL can provide impressive on tasks ranging from playing games BID23 BID29, to flying and driving BID36, to controlling robotic arms BID14. However, these deep RL algorithms often require a large amount of experience to arrive at an effective solution, which can severely limit their application to real-world problems where this experience might need to be gathered directly on a real physical system. Part of the reason for this is that direct, model-free RL learns only from the reward: experience that receives no reward provides minimal supervision to the learner. In contrast, model-based RL algorithms obtain a large amount of supervision from every sample, since they can use each sample to better learn how to predict the system dynamics -that is, to learn the "physics" of the problem. Once the dynamics are learned, near-optimal behavior can in principle be obtained by planning through these dynamics. Model-based algorithms tend to be substantially more efficient BID9 BID24, but often at the cost of larger asymptotic bias: when the dynamics cannot be learned perfectly, as is the case for most complex problems, the final policy can be highly suboptimal. Therefore, conventional wisdom holds that model-free methods are less efficient but achieve the best asymptotic performance, while model-based methods are more efficient but do not produce policies that are as optimal. Can we devise methods that retain the efficiency of model-based learning while still achieving the asymptotic performance of model-free learning? This is the question that we study in this paper. The search for methods that combine the best of model-based and model-free learning has been ongoing for decades, with techniques such as synthetic experience generation BID31, partial modelbased backpropagation BID25, and layering model-free learning on the residuals of model-based estimation BID6 ) being a few examples. However, a direct connection between model-free and model-based RL has remained elusive. 
By effectively bridging the gap between model-free and model-based RL, we should be able to smoothly transition from learning models to learning policies, obtaining rich supervision from every sample to quickly gain a moderate level of proficiency, while still converging to an unbiased solution. To arrive at a method that combines the strengths of model-free and model-based RL, we study a variant of goal-conditioned value functions BID32 BID28 BID0. Goal-conditioned value functions learn to predict the value function for every possible goal state. That is, they answer the following question: what is the expected reward for reaching a particular state, given that the agent is attempting (as optimally as possible) to reach it? The particular choice of reward function determines what such a method actually does, but rewards based on distances to a goal hint at a connection to model-based learning: if we can predict how easy it is to reach any state from any current state, we must have some kind of understanding of the underlying "physics." In this work, we show that we can develop a method for learning variable-horizon goalconditioned value functions where, for a specific choice of reward and horizon, the value function corresponds directly to a model, while for larger horizons, it more closely resembles model-free approaches. Extension toward more model-free learning is thus achieved by acquiring "multi-step models" that can be used to plan over progressively coarser temporal resolutions, eventually arriving at a fully model-free formulation. The principle contribution of our work is a new RL algorithm that makes use of this connection between model-based and model-free learning to learn a specific type of goal-conditioned value function, which we call a temporal difference model (TDM). This value function can be learned very efficiently, with sample complexities that are competitive with model-based RL, and can then be used with an MPC-like method to accomplish desired tasks. Our empirical experiments demonstrate that this method achieves substantially better sample complexity than fully model-free learning on a range of challenging continuous control tasks, while outperforming purely model-based methods in terms of final performance. Furthermore, the connection that our method elucidates between model-based and model-free learning may lead to a range of interesting future methods. In this section, we introduce the reinforcement learning (RL) formalism, temporal difference Qlearning methods, model-based RL methods, and goal-conditioned value functions. We will build on these components to develop temporal difference models (TDMs) in the next section. RL deals with decision making problems that consist of a state space S, action space A, transition dynamics P (s | s, a), and an initial state distribution p 0. The goal of the learner is encapsulated by a reward function r(s, a, s). Typically, long or infinite horizon tasks also employ a discount factor γ, and the standard objective is to find a policy π(a | s) that maximizes the expected discounted sum of rewards, E π [t γ t r(s t, a t, s t+1)], where s 0 ∼ p 0, a t ∼ π(a t |s t), and s t+1 ∼ P (s | s, a).Q-functions. We will focus on RL algorithms that learn a Q-function. 
The Q-function represents the expected total (discounted) reward that can be obtained by the optimal policy after taking action a t in state s t, and can be defined recursively as following: DISPLAYFORM0 The optimal policy can then recovered according to π(a t |s t) = δ(a t = arg max a Q(s t, a)). Qlearning algorithms BID35 BID27 learn the Q-function via an offpolicy stochastic gradient descent algorithm, estimating the expectation in the above equation with samples collected from the environment and computing its gradient. Q-learning methods can use transition tuples (s t, a t, s t+1, r t) collected from any exploration policy, which generally makes them more efficient than direct policy search, though still less efficient than purely model-based methods. Model-based RL and optimal control. Model-based RL takes a different approach to maximize the expected reward. In model-based RL, the aim is to train a model of the form f (s t, a t) to predict the next state s t+1. Once trained, this model can be used to choose actions, either by backpropagating reward gradients into a policy, or planning directly through the model. In the latter case, a particularly effective method for employing a learned model is model-predictive control (MPC), where a new action plan is generated at each time step, and the first action of that plan is executed, before replanning begins from scratch. MPC can be formalized as the following optimization problem: DISPLAYFORM1 We can also write the dynamics constraint in the above equation in terms of an implicit dynamics, according to DISPLAYFORM2 where C(s i, a i, s i+1) = 0 if and only if s i+1 = f (s i, a i). This implicit version will be important in understanding the connection between model-based and model-free RL.Goal-conditioned value functions. Q-functions trained for a specific reward are specific to the corresponding task, and learning a new task requires optimizing an entirely new Q-function. Goalconditioned value functions address this limitation by conditioning the Q-value on some task description vector s g ∈ G in a goal space G. This goal vector induces a parameterized reward r(s t, a t, s t+1, s g), which in turn gives rise to parameterized Q-functions of the form Q(s, a, s g).A number of goal-conditioned value function methods have been proposed in the literature, such as universal value functions BID28 and Horde . When the goal corresponds to an entire state, such goal-conditioned value functions usually predict how well an agent can reach a particular state, when it is trying to reach it. The knowledge contained in such a value function is intriguingly close to a model: knowing how well you can reach any state is closely related to understanding the physics of the environment. With Q-learning, these value functions can be learned for any goal s g using the same off-policy (s t, a t, s t+1) tuples. Relabeling previously visited states with the reward for any goal leads to a natural data augmentation strategy, since each tuple can be replicated many times for many different goals without additional data collection. BID0 used this property to produce an effective curriculum for solving multi-goal task with delayed rewards. As we discuss below, relabeling past experience with different goals enables goal-conditioned value functions to learn much more quickly from the same amount of data. In this section, we introduce a type of goal-conditioned value functions called temporal difference models (TDMs) that provide a direct connection to model-based RL. 
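The relabeling idea just described (reusing a stored transition with goals other than the one pursued during collection) can be made concrete as a simple replay-buffer operation. The sketch below is illustrative rather than the paper's code; sampling the new goal from a later state of the same trajectory and using a negative L1 distance as the reward are assumptions spelled out in the comments.

```python
import numpy as np

def relabel_goals(batch_indices, trajectories, rng=np.random):
    """Goal relabeling for off-policy training of goal-conditioned Q-functions.
    Each stored transition (s_t, a_t, s_{t+1}) is paired with a new goal s_g
    taken from a later state of the same trajectory, and the reward is
    recomputed for that goal, so no extra environment interaction is needed."""
    relabeled = []
    for traj_id, t in batch_indices:
        states = trajectories[traj_id]["states"]      # (T + 1, state_dim)
        actions = trajectories[traj_id]["actions"]    # (T, action_dim)
        s, a, s_next = states[t], actions[t], states[t + 1]
        k = rng.randint(t + 1, len(states))           # index of a future state
        goal = states[k]
        reward = -np.abs(s_next - goal).sum()         # negative L1 distance to the new goal
        relabeled.append((s, a, s_next, goal, reward))
    return relabeled
```

Each tuple can be relabeled many times with different goals, which is what makes the learning so data-efficient.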
We will first motivate this connection by relating the model-based MPC optimizations in Equations and to goal-conditioned value functions, and then present our temporal difference model derivation, which extends this connection from a purely model-based setting into one that becomes increasingly model-free. Let us consider the choice of reward function for the goal conditioned value function. Although a variety of options have been explored in the literature BID32 BID28 BID0, a particularly intriguing connection to model-based RL emerges if we set G = S, such that g ∈ G corresponds to a goal state s g ∈ S, and we consider distance-based reward functions r d of the following form: DISPLAYFORM0 ) at convergence of Q-learning, which means that Q(s t, a t, s g) = 0 implies that s t+1 = s g. Plug this Q-function into the model-based planning optimization in Equation, denoting the task control reward as r c, such that the solution to DISPLAYFORM1 yields a model-based plan. We have now derived a precise connection between model-free and model-based RL, in that model-free learning of goal-conditioned value functions can be used to directly produce an implicit model that can be used with MPC-based planning. However, this connection by itself is not very useful: the ing implicit model is fully model-based, and does not provide any kind of long-horizon capability. In the next section, we show how to extend this connection into the long-horizon setting by introducing the temporal difference model (TDM). If we consider the case where γ > 0, the optimization in Equation no longer corresponds to any optimal control method. In fact, when γ = 0, Q-values have well-defined units: units of distance between states. For γ > 0, no such interpretation is possible. The key insight in temporal difference models is to introduce a different mechanism for aggregating long-horizon rewards. Instead of evaluating Q-values as discounted sums of rewards, we introduce an additional input τ, which represents the planning horizon, and define the Q-learning recursion as DISPLAYFORM0 The Q-function uses a reward of −D(s t+1, s g) when τ = 0 (at which point the episode terminates), and decrements τ by one at every other step. Since this is still a well-defined Q-learning recursion, it can be optimized with off-policy data and, just as with goal-conditioned value functions, we can resample new goals s g and new horizons τ for each tuple (s t, a t, s t+1), even ones that were not actually used when the data was collected. In this way, the TDM can be trained very efficiently, since every tuple provides supervision for every possible goal and every possible horizon. The intuitive interpretation of the TDM is that it tells us how close the agent will get to a given goal state s g after τ time steps, when it is attempting to reach that state in τ steps. Alternatively, TDMs can be interpreted as Q-values in a finite-horizon MDP, where the horizon is determined by τ. For the case where τ = 0, TDMs effectively learn a model, allowing TDMs to be incorporated into a variety of planning and optimal control schemes at test time as in Equation. Thus, we can view TDM learning as an interpolation between model-based and model-free learning, where τ = 0 corresponds to the single-step prediction made in model-based learning and τ > 0 corresponds to the long-term prediction made by typical Q-functions. 
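Concretely, the τ-conditioned recursion above translates into a bootstrap target: when τ = 0 the target is the (negative) distance between the reached state and the goal, otherwise it bootstraps from the target network with the horizon decremented by one. A hedged PyTorch-style sketch; the use of a DDPG-style target actor to supply the maximizing action is an implementation assumption, not something dictated at this point in the text.

```python
import torch

def tdm_target(q_target, policy_target, neg_dist, s_next, goals, tau):
    """Bootstrap target for Q(s, a, s_g, tau):
       -D(s_{t+1}, s_g)                      if tau == 0
       Q_target(s_{t+1}, a', s_g, tau - 1)   otherwise.
    neg_dist holds -D(s_{t+1}, s_g); tau is a (batch,) tensor of horizons."""
    with torch.no_grad():
        terminal = (tau == 0).float()
        tau_next = torch.clamp(tau - 1, min=0)
        a_next = policy_target(s_next, goals, tau_next)       # assumed greedy target actor
        bootstrap = q_target(s_next, a_next, goals, tau_next)
        return terminal * neg_dist + (1.0 - terminal) * bootstrap
```

Because the goal s_g and the horizon τ can be relabeled freely for every stored tuple, each transition supervises this target for many (goal, horizon) pairs at once.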
While the correspondence to models is not the same for τ > 0, if we only care about the reward at every K step, then we can recover a correspondence by replace Equation with DISPLAYFORM1 where we only optimize over every K th state and action. As the TDM becomes effective for longer horizons, we can increase K until K = T, and plan over only a single effective time step: DISPLAYFORM2 This formulation does in some loss of generality, since we no longer optimize the reward at the intermediate steps. This limits the multi-step formulation to terminal reward problems, but does allow us to accommodate arbitrary reward functions on the terminal state s t+T, which still describes a broad range of practically relevant tasks. In the next section, we describe how TDMs can be implemented and used in practice for continuous state and action spaces. The TDM can be trained with any off-policy Q-learning algorithm, such as DQN BID23, DDPG , NAF BID13, and SDQN BID21. During off-policy Q-learning, TDMs can benefit from arbitrary relabeling of the goal states g and the horizon τ, given the same (s t, a t, s t+1) tuples from the behavioral policy as done in BID0. This relabeling enables simultaneous, data-efficient learning of short-horizon and long-horizon behaviors for arbitrary goal states, unlike previously proposed goal-conditioned value functions that only learn for a single time scale, typically determined by a discount factor BID28 BID0. In this section, we describe the design decisions needed to make practical a TDM algorithm. Q-learning typically optimizes scalar rewards, but TDMs enable us to increase the amount of supervision available to the Q-function by using a vector-valued reward. Specifically, if the distance D(s, s g) factors additively over the dimensions, we can train a vector-valued Q-function that predicts per-dimension distance, with the reward function for dimension j given by −D j (s j, s g,j). We use the 1 norm in our implementation, which corresponds to absolute value reward −|s j − s g,j |.The ing vector-valued Q-function can learn distances along each dimension separately, providing it with more supervision from each training point. Empirically, we found that this modifications provides a substantial boost in sample efficiency. We can optionally make an improvement to TDMs if we know that the task reward r c depends only on some subset of the state or, more generally, state features. In that case, we can train the TDM to predict distances along only those dimensions or features that are used by r c, which in practice can substantially simplify the corresponding prediction problem. In our experiments, we illustrate this property by training TDMs for pushing tasks that predict distances from an end-effector and pushed object, without accounting for internal joints of the arm, and similarly for various locomotion tasks. While the TDM optimal control formulation Equation FORMULA7 drastically reduces the number of states and actions to be optimized for long-term planning, it requires solving a constrained optimization problem, which is more computationally expensive than unconstrained problems. We can remove the need for a constrained optimization through a specific architectural decision in the design of the function approximator for Q(s, a, s g, τ). We define the Q-function as Q(s, a, s g, τ) = − f (s, a, s g, τ) − s g, where f (s, a, s g, τ) outputs a state vector. 
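This parameterization might look as follows in code; the hidden-layer sizes and the use of an L1 norm are illustrative choices consistent with the experimental details later in the paper, not prescriptions from the definition itself.

```python
import torch
import torch.nn as nn

class TDMQFunction(nn.Module):
    """Q(s, a, s_g, tau) = -||f(s, a, s_g, tau) - s_g||, where f predicts the
    state (or goal features) reached after tau steps when trying to reach s_g."""
    def __init__(self, state_dim, action_dim, goal_dim, hidden=300):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(state_dim + action_dim + goal_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, goal_dim),          # explicit prediction of the reached state
        )

    def forward(self, s, a, goal, tau):
        # tau is a (batch,) float tensor holding the remaining horizon.
        x = torch.cat([s, a, goal, tau.unsqueeze(-1)], dim=-1)
        predicted = self.f(x)
        return -torch.abs(predicted - goal).sum(dim=-1)   # negative L1 distance
```

With this form, the greedy goal-reaching action can be obtained by maximizing Q over actions directly, and the explicit prediction f can be dropped into the MPC-style objective described in the following paragraphs without a constrained optimization.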
By training the TDM with a standard Q-learning method, f (s, a, s g, τ) is trained to explicitly predict the state that will be reached by a policy attempting to reach s g in τ steps. This model can then be used to choose the action with fully explicit MPC as below, which also allows straightforward derivation of a multi-step version as in Equation FORMULA6. DISPLAYFORM0 In the case where the task is to reach a goal state s g, a simpler approach to extract a policy is to use the TDM directly: DISPLAYFORM1 In our experiments, we use Equations FORMULA8 and FORMULA9 to extract a policy. The algorithm is summarized as Algorithm 1. A crucial difference from prior goal-conditioned value function methods BID28 BID0 ) is that our algorithm can be used to act according to an arbitrary terminal reward function r c, both during exploration and at test time. Like other off-policy algorithms BID23, it consists of exploration and Q-function fitting. Noise is injected for exploration, and Q-function fitting uses standard Qlearning techniques, with target networks Q and experience replay BID23. If we view the Q-function fitting as model fitting, the algorithm also resembles iterative model-based RL, which alternates between collecting data using the learned dynamics model for planning BID8 ) and fitting the model. Since we focus on continuous tasks, we use DDPG, though any Q-learning method could be used. The computation cost of the algorithm is mostly determined by the number of updates to fit the Q-function per transition, I. In general, TDMs can benefit from substantially larger I than classic Require: Task reward function rc(s, a), parameterized TDM Qw(s, a, sg, τ), replay buffer B 1: for n = 0,..., N − 1 episodes do 2: s0 ∼ p(s0) 3: for t = 0,..., T − 1 time steps do 4: a * t = MPC(rc, st, Qw, T − t) // Eq. 6, Eq. 7, Eq. 8, or Eq. 9 5: at = AddNoise(a * t) // Noisy exploration 6: st+1 ∼ p(st, at), and store {st, at, st+1} in the replay buffer B // Step environment 7:for i = 0, I − 1 iterations do 8:Sample M transitions {sm, am, s m} from the replay B. Relabel time horizons and goal states τm, sg,m // Section A.1 10: DISPLAYFORM0 Minimize(w, L(w)) // Optimize 13: end for 14:end for 15: end for model-free methods such as DDPG due to relabeling increasing the amount of supervision signals. In real-world applications such as robotics where we care most of the sample efficiency BID14, the learning is often bottlenecked by the data collection rather than the computation, and therefore large I values are usually not a significant problem and can continuously benefit from the acceleration in computation. Combining model-based and model-free reinforcement learning techniques is a well-studied problem, though no single solution has demonstrated all of the benefits of model-based and model-free learning. Some methods first learn a model and use this model to simulate experience BID31 BID13 or compute better gradients for model-free updates BID25. Other methods use model-free algorithms to correct for the local errors made by the model BID6 BID1. While these prior methods focused on combining different model-based and model-free RL techniques, our method proposes an equivalence between these two branches of RL through a specific generalization of goal-conditioned value function. As a , our approach achieves much better sample efficiency in practice on a variety of challenging reinforcement learning tasks than model-free alternatives, while exceeding the performance of purely model-based approaches. 
We are not the first to study the connection between model-free and model-based methods, with and BID26 being two notable examples. BID3 shows that one can extract a model from a value function when using a tabular representation of the transition function. BID26 shows that, for linear function approximators, the model-free and model-based RL approaches produce the same value function at convergence. Our contribution differs substantially from these: we are not aiming to show that model-free RL performs similarly to model-based RL at convergence, but rather how we can achieve sample complexity comparable to model-based RL while retaining the favorable asymptotic performance of model-free RL in complex tasks with nonlinear function approximation. BID10. Critically, unlike the works on contextual policies BID5 BID7 BID18 which require onpolicy trajectories with each new goal, the value function approaches such as Horde BID32 and UVF BID28 can reuse off-policy data to learn rich contextual value functions using the same data. TDMs condition on a policy trying to reach a goal and must predict τ steps into the future. This type of prediction is similar to the prediction made by prior work on multi-step models BID22 BID34: predict the state after τ actions. An important difference is that multi-step models do not condition on a policy reaching a goal, and so they require optimizing over a sequence of actions, making the input space grow linearly with the planning horizon. A particularly related UVF extension is hindsight experience replay (HER) BID0. Both HER and our method retroactively relabel past experience with goal states that are different from the goal aimed for during data collection. However, unlike our method, the standard UVF in HER uses a single temporal scale when learning, and does not explicitly provide for a connection between model-based and model-free learning. The practical of these differences is that our approach empirically achieves substantially better sample complexity than HER on a wide range of complex continuous control tasks, while the theoretical connection between modelbased and model-free learning suggests a much more flexible use of the learned Q-function inside a planning or optimal control framework. Lastly, our motivation is shared by other lines of work besides goal-conditioned value functions that aim to enhance supervision signals for model-free RL BID16 BID2. Predictions ) augment classic RL with multi-step reward predictions, while UNREAL BID16 ) also augments it with pixel control as a secondary reward objective. These are substantially different methods from our work, but share the motivation to achieve efficient RL by increasing the amount of learning signals from finite data. Our experiments examine how the sample efficiency and performance of TDMs compare to both model-based and model-free RL algorithms. We expect to have the efficiency of model-based RL but with less model bias. We also aim to study the importance of several key design decisions in TDMs, and evaluate the algorithm on a real-world robotic platform. For the model-free comparison, we compare to DDPG, which typically achieves the best sample efficiency on benchmark tasks BID11; HER, which uses goal-conditioned value functions BID0; and DDPG with the same sparse rewards of HER. For the modelbased comparison, we compare to the model-based component in BID24, a recent work that reports highly efficient learning with neural network dynamics models. 
Details of the baseline implementations are in the Appendix. We perform the comparison on five simulated tasks: a 7 DoF arm reaching various random end-effector targets, an arm pushing a puck to a target location, a planar cheetah attempting to reach a goal velocity (either forward or backward), a quadrupedal ant attempting to reach a goal position, and an ant attempting to reach a goal position and velocity. The tasks are shown in Figure 1 and terminate when either the goal is reached or the time horizon is reached. The pushing task requires long-horizon reasoning to reach and push the puck. The cheetah and ant tasks require handling many contact discontinuities which is challenging for model-based methods, with the ant environment having particularly difficult dynamics given the larger state and action space. The ant position and velocity task presents a scenario where reward shaping as in traditional RL methods may not lead to optimal behavior, since one cannot maintain both a desired position and velocity. However, such a task can be very valuable in realistic settings. For example, if we want the ant to jump, we might instruct it to achieve a particular velocity at a particular location. We also tested TDMs on a real-world robot arm reaching end-effector positions, to study its applicability to real-world tasks. For the simulated and real-world 7-DoF arm, our TDM is trained on all state components. For the pushing task, our TDM is trained on the hand and puck XY-position. For the half cheetah task, our TDM is trained on the velocity of the cheetah. For the ant tasks, our TDM is trained on either the position or the position and velocity for the respective task. Full details are in the Appendix. The are shown in Figure 2. When compared to the model-free baselines, the pure modelbased method learns learns much faster on all the tasks. However, on the harder cheetah and ant tasks, its final performance is worse due to model bias. TDMs learn as quickly or faster than the model-based method, but also always learn policies that are as good as if not better than the modelfree policies. Furthermore, TDMs requires fewer samples than the model-free baselines on ant tasks and drastically fewer samples on the other tasks. We also see that using HER does not lead to, model-based, and goal-conditioned value functions (HER -Dense) on various tasks. All plots show the final distance to the goal versus 1000 environment steps (not rollouts). The bold line shows the mean across 3 random seeds, and the shaded region show one standard deviation. Our method, which uses model-free learning, is generally more sample-efficient than model-free alternatives including DDPG and HER and improves upon the best model-based performance.an improvement over DDPG. While we were initially surprised, we realized that a selling point of HER is that it can solve sparse tasks that would otherwise be unsolvable. In this paper, we were interested in improving the sample efficiency and not the feasibility of model-free reinforcement learning algorithms, and so we focused on tasks that DDPG could already solve. In these sorts of tasks, the advantage of HER over DDPG with a dense reward is not expected. To evaluate HER as a method to solve sparse tasks, we included the DDPG-Sparse baseline and we see that HER significantly outperforms it as expected. 
In summary, TDMs converge as fast as or faster than model-based learning (which learns faster than the model-free baselines), while achieving final performance that is as good as or better than the model-free methods on all tasks. Lastly, we ran the algorithm on a 7-DoF Sawyer robotic arm to learn a real-world analogue of the reaching task. Figure 2f shows that the algorithm outperforms DDPG, our model-free baseline, and learns with fewer samples. These results show that TDMs can scale to real-world tasks. In this section, we discuss two key design choices for TDMs that provide substantially improved performance. First, FIG2 examines the tradeoffs between the vectorized and scalar rewards. The results show that the vectorized formulation learns substantially faster than the naïve scalar variant. Second, FIG2 compares the learning speed for different horizon values τ max. Performance degrades when the horizon is too low, and learning becomes slower when the horizon is too high. In this paper, we derive a connection between model-based and model-free reinforcement learning, and present a novel RL algorithm that exploits this connection to greatly improve on the sample efficiency of state-of-the-art model-free deep RL algorithms. Our temporal difference models can be viewed both as goal-conditioned value functions and implicit dynamics models, which enables them to be trained efficiently on off-policy data while still minimizing the effects of model bias. As a result, they achieve asymptotic performance that compares favorably with model-free algorithms, but with a sample complexity that is comparable to purely model-based methods. While the experiments focus primarily on the new RL algorithm, the relationship between model-based and model-free RL explored in this paper provides a number of avenues for future work. We demonstrated the use of TDMs with a very basic planning approach, but further exploring how TDMs can be incorporated into powerful constrained optimization methods for model-predictive control or trajectory optimization is an exciting avenue for future work. Another direction for future work is to further explore how TDMs can be applied to complex state representations, such as images, where simple distance metrics may no longer be effective. Although direct application of TDMs to these domains is not straightforward, a number of works have studied how to construct metric embeddings of images that could in principle provide viable distance functions. We also note that while the presentation of TDMs has been in the context of deterministic environments, the extension to stochastic environments is straightforward: TDMs would learn to predict the expected distance between the future state and a goal state. Finally, the promise of enabling sample-efficient learning with the performance of model-free RL and the efficiency of model-based RL is to enable widespread RL application on real-world systems. Many applications in robotics, autonomous driving and flight, and other control domains could be explored in future work. The maximum distance was set to 5 rather than 6 for this experiment, so the numbers should be lower than the ones reported in the paper. A.1 GOAL STATE AND τ SAMPLING STRATEGY While Q-learning is valid for any value of s g and τ for each transition tuple (s t, a t, s t+1), the way in which these values are sampled during training can affect learning efficiency.
Some potential strategies for sampling s g are: uniformly sample future states along the actual trajectory in the buffer (i.e., for s t, choose s g = s t+k for a random k > 0) as in BID0; uniformly sample goal states from the replay buffer; uniformly sample goals from a uniform range of valid states. We found that the first strategy performed slightly better than the others, though not by much. In our experiments, we use the first strategy. The horizon τ is sampled uniformly at random between 0 and the maximum horizon τ max. In all our experiments, we used DDPG as the base off-policy model-free RL algorithm for learning the TDMs Q(s, a, g, s τ). Experience replay BID23 has size of 1 million transitions, and the soft target networks are used with a polyak averaging coefficient of 0.999 for DDPG and TDM and 0.95 for HER and DDPG-Sparse. For HER and DDPG-Sparse, we also added a penalty on the tanh pre-activation, as in BID0. Learning rates of the critic and the actor are chosen from {1e-4, 1e-3} and {1e-4, 1e-3} respectively. Adam BID17 ) is used as the base optimizer with default parameters except the learning rate. The batch size was 128. The policies and networks are parmaeterized with neural networks with ReLU hidden activation and two hidden layers of size 300 and 300. The policies have a tanh output activation, while the critic has no output activation (except for TDM, see A.5). For the TDM, the goal was concatenated to the observation. The planning horizon τ is also concatenated as an observation and represented as a single integer. While we tried various representations for τ such as one-hot encodings and binary-string encodings, we found that simply providing the integer was sufficient. While any distance metric for the TDM reward function can be used, we chose L1 norm − s t+1 − s g 1 to ensure that the scalar and vectorized TDMs are consistent. For the model-based comparison, we trained a neural network dynamics model with ReLU activation, no output activation, and two hidden units of size 300 and 300. The model was trained to predict the difference in state, rather than the full state. The dynamics model is trained to minimize the mean squared error between the predicted difference and the actual difference. After each state is observed, we sample a minibatch of size 128 from the replay buffer (size 1 million) and perform one step of gradient descent on this mean squared error loss. Twenty rollouts were performed to compute the (per-dimension) mean and standard deviation of the states, actions, and state differences. We used these statistics to normalize the states and actions before giving them to the model, and to normalize the state differences before computing the loss. For MPC, we simulated 512 random action sequences of length 15 through the learned dynamics model and chose the first action of the sequence with the highest reward. For TDMs, we found the most important hyperparameters to be the reward scale, τ max, and the number of updates per observations, I. As shown in FIG4, TDMs can greatly benefit from larger values of I, though eventually there are diminishing returns and potentially negative impact, mostly likely due to over-fitting. We found that the baselines did not benefit, except for HER which did benefit from larger I values. For all the model-free algorithms (DDPG, DDPG-Sparse, HER, and TDMs), we performed a grid search over the reward scale in the range {0.01, 1, 100, 10000} and the number of updates per observations in the range {1, 5, 10}. 
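The model-based comparison described above, which simulates 512 random action sequences of length 15 through the learned dynamics model and executes the first action of the best sequence, can be sketched as follows. This is an illustrative fragment (state/action normalization is omitted, and the uniform action range and reward interface are assumptions).

```python
import numpy as np

def random_shooting_mpc(state, dynamics, reward_fn, action_dim,
                        num_sequences=512, horizon=15, rng=np.random):
    """Random-shooting MPC: roll random action sequences through the learned
    model (which predicts state differences) and pick the first action of the
    highest-return sequence."""
    actions = rng.uniform(-1.0, 1.0, size=(num_sequences, horizon, action_dim))
    states = np.repeat(state[None, :], num_sequences, axis=0)
    returns = np.zeros(num_sequences)
    for t in range(horizon):
        next_states = states + dynamics(states, actions[:, t])   # model outputs deltas
        returns += reward_fn(states, actions[:, t], next_states)
        states = next_states
    best = int(np.argmax(returns))
    return actions[best, 0]
```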
For HER, we also tuned the weight given to the policy pre-tanh-activation {0, 0.01, 1}, which is described in BID0. For TDMs, we also tuned the best τ max in the range {15, 25, Horizon − 1}. For the half cheetah task, we performed extra searches over τ max and found τ max = 9 to be effective. For TDMs, since we know that the true Q-function must learn to predict (negative) distances, we incorporate this prior knowledge into the Q-function by parameterizing it as Q(s, a, s g, τ) = − f (s, a, s g, τ) − s g 1. Here, f is a vector outputted by a feed-forward neural network and has the same dimension as the goal. This parameterization ensures that the Q-function outputs non-positive values, while encouraging the Q-function to learn what we call a goal-conditioned model: f is encouraged to predict what state will be reached after τ, when the policy is trying to reach goal s g in τ time steps. For the 1 norm, the scalar supervision regresses Q(s t, a t, s g, τ) = − for each dimension j of the state. A.6 TASK AND REWARD DESCRIPTIONS Benchmark tasks are designed on MuJoCo physics simulator BID33 and OpenAI Gym environments BID4. For the simulated reaching and pushing tasks, we use and for the other tasks we use for policy extraction. The horizon (length of episode) for the pusher and ant tasks are 50. The reaching tasks has a horizon of 100. The half-cheetah task has a horizon of 99.7-DoF reacher.: The state consists of 7 joint angles, 7 joint angular velocities, and 3 XYZ observation of the tip of the arm, making it 17 dimensional. The action controls torques for each joint, totally 7 dimensional. The reward function during optimization control and for the model-free baseline is the negative Euclidean distance between the XYZ of the tip and the target XYZ. The targets are sampled randomly from all reachable locations of the arm at the beginning of each episode.
[ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
Skw0n-W0Z
We show that a special goal-conditioned value function trained with model-free methods can be used within model-based control, resulting in substantially better sample efficiency and performance.
We introduce a neural architecture to perform amortized approximate Bayesian inference over latent random permutations of two sets of objects. The method involves approximating permanents of matrices of pairwise probabilities using recent ideas on functions defined over sets. Each sampled permutation comes with a probability estimate, a quantity unavailable in MCMC approaches. We illustrate the method in sets of 2D points and MNIST images. Posterior inference in generative models with discrete latent variables presents well-known challenges when the variables live in combinatorially large spaces. In this work we focus on the popular and non-trivial case where the latent variables represent random permutations. While inference in these models has been studied in the past using MCMC techniques and variational methods, here we propose an amortized approach, whereby we invest computational resources to train a model, which later is used for very fast posterior inference . Unlike the variational autoencoder approach , in our case we do not learn a generative model. Instead, the latter is postulated (through its samples) and posterior inference is the main focus of the learning phase. This approach has been recently explored in sundry contexts, such as Bayesian networks (Stuhlmüller et al., 2013), sequential Monte Carlo , probabilistic programming , neural decoding and particle tracking . Our method is inspired by the technique introduced in to perform amortized inference over discrete labels in mixture models. The basic idea is to use neural networks to express posteriors in the form of multinomial distributions (with varying support) in terms of fixed-dimensional, distributed representations that respect the permutation symmetries imposed by the discrete variables. After training the neural architecture using labeled samples from a particular generative model, we can obtain independent approximate posterior samples of the permutation posterior for any new set of observations of arbitrary size. These samples can be used to compute approximate expectations, as high quality importance samples, or as independent Metropolis-Hastings proposals. Let us consider the generative model Here p(c 1:N) = 1/N! is a uniform distribution over permutations, with the random variable denoting that x c i is paired with y i. As a concrete example, think of y i as a noise-corrupted version of a permuted sample x c i. Given two sets of N data points x = {x i}, y = {y i}, we are interested in iid sampling the posterior of the c i's, using a decomposition note now that p(c N |c 1:N −1, x, y) = 1, since the last point y N is always matched with the last unmatched point among the x i' s. A generic factor in is where c n takes values in {1, . . ., N} not taken by c 1:n−1. Consider first where we defined and s n = {1 . . . N}/{c 1 . . . c n} is the set of available indices after choosing c 1:n. Note that R in is the permanent of a (N − n)×(N − n) matrix, an object whose computation is known to be a #P problem . Inserting into gives Note that does not depend on {x c i}, except for restricting the allowed values for c n. Now, the function R in depends on the unmatched points {x c i} N i=n+1, and {y i} N i=n+1, in such a way that it is invariant under separate permutations of the elements of each set. Following , these permutation symmetries can be captured by introducing functions h: and approximating R(c n+1:N, x, y|c 1 . . . c n) e f (Hx,c n,Hy). 
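The permutation-invariant summaries H referenced above can be realized, following the Deep Sets idea, as sums of per-element encodings, with the candidate index c n excluded from the summary of the x's. A minimal sketch; the encoder architectures and the way f consumes the two summaries are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Permutation-invariant summary H = sum_i h(e_i) of a set of elements,
    used to approximate the permanent-like term R by exp(f(H_x, H_y))."""
    def __init__(self, elem_dim, hidden=128, out_dim=128):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(elem_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, elems):                 # elems: (num_elements, elem_dim)
        return self.h(elems).sum(dim=0)       # summation makes the output order-invariant

def log_R_approx(f, h_x, h_y, x_unmatched, y_unmatched):
    """log R(c_{n+1:N}, x, y | c_1..c_n) ~ f(H_x, H_y), where x_unmatched already
    excludes the candidate x_{c_n} (the restriction discussed next)."""
    H_x = h_x(x_unmatched)
    H_y = h_y(y_unmatched)
    return f(torch.cat([H_x, H_y], dim=-1))   # concatenation is one possible way to feed f
```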
The subindex c n in H x,cn indicates a value that cannot be taken by the c i's in the sum in. Inserting into gives q θ (c n |c 1:n−1, x, y) = p(x cn, y n) e f (H x,cn, H y) / Σ c n p(x c n, y n) e f (H x,c n, H y), which is our proposed approximation for, with θ representing the parameters of the neural networks for h, f. The neural architecture is schematized in Figure 2. The pairwise density p(y n, x cn) can either be known in advance, or be represented by a parametrized function to be learned (in the latter case we assume we have samples from it). We call this approach the Neural Permutation Process (NPP). In order to learn the parameters θ of the neural networks h, f (and possibly p(x cn, y n)), we use stochastic gradient descent to minimize the expected KL divergence, which up to a constant equals −E[log q θ (c n |c 1:n−1, x, y)]. Samples from p(N)p(c 1:N, x, y) are obtained from the generative model. If we can take an unlimited number of samples, we can potentially train a neural network to approximate p(c n |c 1:n−1, x) arbitrarily accurately. In Figure 1 we show results for the following two examples. Both cases illustrate how a probabilistic approach captures the ambiguities in the observations. Noisy pairs in 2D: the generative model is MNIST digits: the generative model is An additional symmetry. Note finally that if p(y n, x cn) = p(x cn, y n) (as is the case in these examples), the additional symmetry f (H x,cn, H y) = f (H y, H x,cn) can be captured by introducing a new function g and defining f (H x,cn, H y) = f (g(H x,cn) + g(H y)). Interestingly, as shown in Figure 2, we find that a higher likelihood is obtained instead by f (H x,cn, H y) = f (H x,cn ⊙ H y), where ⊙ indicates componentwise multiplication. To our knowledge, this type of encoding has not been studied in the literature, and we plan to explore it further in the future. Our results on simple datasets validate this approach to posterior inference over latent permutations. More complex generative models with latent permutations can be approached using similar tools, a research direction we are presently exploring. The curves show mean training negative log-likelihood per iteration in the MNIST example. f = 0 is a baseline model, where we ignore the unassigned points in. The other two curves correspond to encoding the symmetry p(y n, x cn) = p(x cn, y n) as f (g(H x,cn) + g(H y)) or as f (H x,cn ⊙ H y).
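Putting the factors together, approximate posterior samples of the permutation, each with its own probability estimate as highlighted in the abstract, can be drawn by sampling the c n's sequentially. The sketch below assumes hypothetical interfaces: log_pair returns log p(x c, y n) and log_R_fn is the set-based approximation sketched earlier.

```python
import torch

def sample_permutation(log_pair, log_R_fn, x, y):
    """Sequentially sample c_1, ..., c_N from the learned factors and return
    the permutation together with its estimated log-probability under q_theta."""
    N = len(y)
    available = list(range(N))
    perm, log_q = [], 0.0
    for n in range(N):
        logits = []
        for c in available:
            rest = [i for i in available if i != c]         # x's left unmatched if c is chosen
            logits.append(log_pair(x[c], y[n]) + log_R_fn(x[rest], y[n + 1:]))
        probs = torch.softmax(torch.stack(logits), dim=0)
        idx = torch.multinomial(probs, 1).item()
        log_q += torch.log(probs[idx]).item()
        perm.append(available.pop(idx))
    return perm, log_q
```

The accumulated log_q is what makes each sample usable as an importance-sampling or independent Metropolis-Hastings proposal.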
[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
rJgFtkhEtr
A novel neural architecture for efficient amortized inference over latent permutations
Machine learned large-scale retrieval systems require a large amount of training data representing query-item relevance. However, collecting users' explicit feedback is costly. In this paper, we propose to leverage user logs and implicit feedback as auxiliary objectives to improve relevance modeling in retrieval systems. Specifically, we adopt a two-tower neural net architecture to model query-item relevance given both collaborative and content information. By introducing auxiliary tasks trained with much richer implicit user feedback data, we improve the quality and resolution for the learned representations of queries and items. Applying these learned representations to an industrial retrieval system has delivered significant improvements. In this paper, we propose a novel transfer learning model architecture for large-scale retrieval systems. The retrieval problem is defined as follows: given a query and a large set of candidate items, retrieve the top-k most relevant candidates. Retrieval systems are useful in many real-world applications such as search BID28 and recommendation BID6 BID31 BID10. The recent efforts on building large-scale retrieval systems mostly focus on the following two aspects:• Better representation learning. Many machine learning models have been developed to learn the mapping of queries and candidate items to an embedding space BID14 BID15. These models leverage various features such as collaborative and content information BID29 the top-k relevant items given the similarity (distance) metric associated with the embedding space BID3 BID8.However, it is challenging to design and develop real-world large-scale retrieval systems for many reasons:• Sparse relevance data. It is costly to collect users' true opinions regarding item relevance. Often, researchers and engineers design human-eval templates with Likert scale questions for relevance BID5, and solicit feedback via crowd-sourcing platforms (e.g., Amazon Mechnical Turk).• Noisy feedback. In addition, user feedback is often highly subjective and biased, due to human bias in designing the human-eval templates, as well as the subjectivity in providing feedback.• Multi-modality feature space. We need to learn relevance in a feature space generated from multiple modalities, e.g., query content features, candidate content features, context features, and graph features from connections between query and candidate BID29 BID21 BID7.In this paper, we propose to learn relevance by leveraging both users' explicit answers on relevance and users' implicit feedback such as clicks and other types of user engagement. Specifically, we develop a transfer-learning framework which first learns the effective query and candidate item representations using a large quantity of users' implicit feedback, and then refines these representations using users' explicit feedback collected from survey responses. The proposed model architecture is depicted in FIG1.Our proposed model is based on a two-tower deep neural network (DNN) commonly deployed in large-scale retrieval systems BID15. This model architecture, as depicted in FIG0, is capable of learning effective representations from multiple modalities of features. These representations can be subsequently served using highly efficient nearest neighbor search systems BID8.To transfer the knowledge learned from implicit feedback to explicit feedback, we extend the two-tower model by adopting a shared-bottom architecture which has been widely used in the context of multi-task learning BID4. 
Specifically, the final loss includes training objectives for both the implicit and explicit feedback tasks. These two tasks share some hidden layers, and each task has its own independent sub-tower. At serving time, only the representations learned for explicit feedback are used and evaluated. Our experiments on an industrial large-scale retrieval system have shown that by transferring knowledge from rich implicit feedback, we can significantly improve the prediction accuracy of sparse relevance feedback. In summary, our contributions are as follows: • We propose a transfer learning framework which leverages rich implicit feedback in order to learn better representations for sparse explicit feedback. • We design a novel model architecture which optimizes two training objectives sequentially. • We evaluate our model on a real-world large-scale retrieval system and demonstrate significant improvements. The rest of this paper is organized as follows: Section 2 discusses related work in building large-scale retrieval systems. Section 3 introduces our problem and training objectives. Section 4 describes our proposed approach. Section 5 reports the experimental results on a large-scale retrieval system. Finally, in Section 6, we conclude with our findings. In this section, we first introduce some state-of-the-art industrial retrieval systems, and then discuss the application of multi-task learning and transfer learning techniques in retrieval and recommendation tasks. Retrieval systems are widely used in large-scale applications such as search BID28 and recommendation BID6 BID31 BID10. In recent years, the industry has moved from reverse index based solutions BID2 to machine learned retrieval systems. Collaborative-filtering based systems BID13 BID0 have been very popular and successful until very recently, when they were surpassed by various neural network based retrieval models BID16 BID31 BID1. A retrieval system involves two key components: representation learning and efficient indexing algorithms BID19. Many large-scale industrial retrieval systems have seen success using two-tower DNN models to learn separate representations for query and candidate items BID11 BID30 BID15. There has also been work on multi-task retrieval systems for context-aware retrieval applications based on tensor factorization BID32. Unfortunately, due to limitations on model capacity and serving time constraints, the model cannot be easily adapted to learn complex feature representations from multiple feature sources. Many multi-task DNN based recommendation systems BID6 BID17 are designed for ranking problems where only a small subset of high quality candidates are scored. These full-blown ranking solutions cannot be easily applied to retrieval problems, where we try to identify thousands of candidates from a large corpus with millions to hundreds of millions of candidate items. Inspired by these works, we propose a novel framework to combine the benefits of both worlds: the computation efficiency of a two-tower model architecture, and the improved model capability of a multi-task DNN architecture BID4. This enables us to transfer the learning from rich implicit feedback to help sparse explicit feedback tasks. Our work is closely related to transfer learning BID22 BID27 BID24 and weakly supervised learning BID20 BID9 BID25 BID33. In this section, we formalize the retrieval problem, and introduce our training data and training objectives. The retrieval problem is defined as follows.
Given a query and a corpus of candidate items, return the top-k relevant items. Let {x_i}_{i=1}^N ⊂ X and {y_j}_{j=1}^M ⊂ Y, respectively, be the feature vectors of queries and candidates in feature spaces X and Y, where N and M, respectively, denote the number of queries and candidates. We model the retrieval system as a parameterized scoring function s(·, ·; θ): X × Y → R, where θ denotes the model parameters. Items with top-k scores s(x, y; θ) are selected for a given query at inference time. We assume the training data is a set of query and item pairs {(x_t, y_t)}_{t=1}^T, where y_t is the candidate associated with x_t which has either explicit or implicit users' feedback, and T ≪ MN in practice. Our goal is to fit the scoring function based on these T examples. When training a machine learning based retrieval system, the ideal way is to use users' explicit feedback which reflects the relevance of an item to a query. However, asking for users' explicit feedback is costly; hence, many existing systems use implicit feedback from user logs, such as clicks. In this paper, we study retrieval systems with both explicit and implicit feedback, where implicit feedback is abundant and explicit feedback is relatively sparse. The goal of our retrieval problem is to learn better representations of queries and candidates such that the similarity between a query-candidate pair closely approximates relevance. Therefore, our main training objective is to minimize the differences between the predicted relevance and the ground truth. To facilitate representation learning, we introduce an auxiliary objective which captures user engagement on items, such as clicks of an item, purchase of a product for shopping retrieval, or views of a movie for movie recommendation. Formally, we aim to jointly learn two scoring functions s_exp(·, ·; θ) and s_imp(·, ·; θ′) while sharing part of the parameters between θ and θ′. We assume some of the examples (x_t, y_t) are in a set E with explicit feedback, and others are in a set I with implicit feedback. In addition, each example (x_t, y_t) ∈ E is associated with a label l_t ∈ R representing users' explicit feedback, e.g., response to the relevance survey. Note that E and I are not mutually exclusive as some examples can have both implicit and explicit feedback. We use a regression loss to fit users' explicit feedback on the example set E. One example loss is the mean squared error (MSE): L_exp = (1/|E|) Σ_{(x_t, y_t) ∈ E} (s_exp(x_t, y_t; θ) − l_t)², where |·| represents the cardinality. On the other hand, we treat the modeling of implicit feedback as a multi-class classification task over the full corpus of items, and use the softmax formulation to model the probability of choosing item y, namely p(y | x; θ′) = exp(s_imp(x, y; θ′)) / Σ_{y′ ∈ Y} exp(s_imp(x, y′; θ′)). The maximum likelihood estimation (MLE) can be formulated as minimizing L_imp = −(1/|I|) Σ_{(x_t, y_t) ∈ I} log p(y_t | x_t; θ′). With loss multipliers w and w′, we jointly optimize the two losses by minimizing L = w · L_exp + w′ · L_imp. In this section, we describe our proposed framework to learn relevance for large-scale retrieval problems. We extend the two-tower model architecture by introducing a shared-bottom model architecture on both towers. Figure 1 provides a high-level illustration of the two-tower DNN model architecture. Given a pair of query and item represented by feature vectors x ∈ X, y ∈ Y, respectively, the left and right towers provide two DNN-based parameterized embedding functions u: X → R^k and v: Y → R^k, which encode features of query and item to a k-dimensional embedding space.
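To make the objectives above concrete, the following is a minimal sketch of the combined loss L = w · L_exp + w′ · L_imp. The paper's system was implemented in TensorFlow; this PyTorch-style sketch, with illustrative names and shapes (s_exp, imp_scores, pos_idx, w_exp, w_imp), is only an assumption-laden illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def joint_loss(s_exp, labels, imp_scores, pos_idx, w_exp=1.0, w_imp=1.0):
    # Explicit-feedback task: mean squared error between predicted relevance
    # scores s_exp (shape [B]) and survey labels l_t (shape [B]).
    loss_exp = F.mse_loss(s_exp, labels)
    # Implicit-feedback task: softmax cross-entropy over candidate items.
    # imp_scores has shape [B, num_candidates]; pos_idx holds the index of
    # the engaged item y_t for each query in the batch.
    loss_imp = F.cross_entropy(imp_scores, pos_idx)
    # Weighted combination, mirroring L = w * L_exp + w' * L_imp.
    return w_exp * loss_exp + w_imp * loss_imp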
The scoring function is then computed as the dot product between the query and item embeddings at the top layer, i.e., s(x, y) = ⟨u(x), v(y)⟩. To enable multi-task learning, we extend the two-tower model by adopting the shared-bottom architecture. Specifically, we introduce two sub-towers on top of the bottom hidden layers, one for the explicit-feedback task and the other for the implicit-feedback task. The outputs of the bottom hidden layers are fed in parallel to the two sub-towers. The bottom hidden layers are shared between the two sub-towers BID4, and are referred to as shared-bottom layers. The final model architecture is depicted in FIG1. During training, we first train the model for the auxiliary user engagement objective, which uses the cross entropy loss. Having learned the shared representations, we finetune the model for the main relevance objective, which uses the squared loss. To prevent potential over-fitting caused by the sparse relevance data, we apply stop gradients for the relevance objective on the shared-bottom layers. For serving, we only need to store and serve the top layer of the two relevance sub-towers to predict the relevance. In this section, we describe the experiments of our proposed framework on one of Google's large-scale retrieval systems for relevant item recommendations, e.g., apps. Our system contains several millions of candidates. Our training data contains hundreds of thousands of explicit feedback examples from relevance surveys, and billions of implicit feedback examples from user logs. We randomly split the data into 90% for training and 10% for evaluation. Model performance was measured on the eval set by the Root Mean Square Error (RMSE) for relevance prediction. The model was implemented in TensorFlow; the output relevance embeddings for queries and candidates were served for retrieval. The hyper-parameters including model size, learning rate, and training steps were carefully tuned for the best model performance. We study the effects of applying transfer learning to relevance prediction. The following experiments suggest that transfer learning significantly improves the prediction quality of the sparse relevance task and helps avoid over-fitting. Table 1 reports relevance RMSE (the lower the better) for different combinations of training objectives and feature types. We see that using implicit feedback leads to a significant improvement as compared to using explicit feedback only. Also, using collaborative information together with content information performs better than the model which uses collaborative information alone. TAB1 reports eval RMSE on relevance with varying model sizes. The success of transfer learning hinges on a proper parameterization of both the auxiliary and main tasks. On one hand, we need sufficient capacity to learn a high-quality representation from a large amount of auxiliary data. On the other hand, we want to limit the capacity for the main task to avoid over-fitting to its sparse labels. As a result, our proposed model architecture is slightly different from the traditional pre-training and fine-tuning model BID12. Besides shared layers, each task has its own hidden layers with different capacities. In addition, we apply two-stage training with stop gradients to avoid potential issues caused by the extreme data skew between the main task and auxiliary task.
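A minimal sketch of the shared-bottom two-tower architecture described above is given below. Layer sizes, names, and the PyTorch framing are assumptions (the paper's model was built in TensorFlow); detach() stands in for the stop gradient applied to the shared-bottom layers when fine-tuning on sparse relevance labels.

import torch
import torch.nn as nn

class Tower(nn.Module):
    """One tower (query or item) with shared-bottom layers and two
    task-specific sub-towers, one per feedback type (sizes illustrative)."""
    def __init__(self, input_dim=128, emb_dim=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU())
        self.implicit_head = nn.Linear(128, emb_dim)  # trained on rich implicit feedback
        self.explicit_head = nn.Linear(128, emb_dim)  # fine-tuned on sparse relevance labels

    def forward(self, x):
        h = self.shared(x)
        # detach() plays the role of the stop gradient: the sparse relevance
        # objective does not update the shared-bottom layers.
        return self.implicit_head(h), self.explicit_head(h.detach())

query_tower, item_tower = Tower(), Tower()
q_imp, q_exp = query_tower(torch.randn(8, 128))
i_imp, i_exp = item_tower(torch.randn(8, 128))
# Dot-product relevance score; only the explicit-feedback embeddings would be
# stored and served (e.g., via nearest neighbor search) at inference time.
relevance_score = (q_exp * i_exp).sum(dim=-1)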
Our experiences have motivated us to continue our work in the following directions: • We will consider multiple types of implicit user feedback using different multi-task learning frameworks, such as Multi-gate Mixture-of-Experts BID17 and Sub-Network Routing BID18. We will continue to explore new model architectures to combine transfer learning with multi-task learning. • The auxiliary task requires hyper-parameter tuning to learn the optimal representation for the main task. We will explore AutoML BID26 techniques to automate the learning of proper parameterizations across tasks for both the query and the candidate towers. In this paper, we propose a novel model architecture to learn better query and candidate representations via transfer learning. We extend the two-tower neural network approach to enhance sparse task learning by leveraging auxiliary tasks with rich implicit feedback. By introducing auxiliary objectives and jointly learning this model using implicit feedback, we observe a significant improvement for relevance prediction on one of Google's large-scale retrieval systems.
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
SJxPVcSonN
We propose a novel two-tower shared-bottom model architecture for transferring knowledge from rich implicit feedback to predict relevance for large-scale retrieval systems.
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry, but fail to model dynamic objects (such as other agents) or semantic constraints (such as wet floors or doorways). Learning-based RL agents are an attractive alternative because they can incorporate both semantic and geometric information, but are notoriously sample inefficient, difficult to generalize to novel settings, and difficult to interpret. In this paper, we combine the best of both worlds with a modular approach that learns a spatial representation of a scene that is trained to be effective when coupled with traditional geometric planners. Specifically, we design an agent that learns to predict a spatial affordance map that elucidates what parts of a scene are navigable through active self-supervised experience gathering. In contrast to most simulation environments that assume a static world, we evaluate our approach in the VizDoom simulator, using large-scale randomly-generated maps containing a variety of dynamic actors and hazards. We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance. The ability to explore and navigate within a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional approaches for navigation and exploration rely on simultaneous localization and mapping (SLAM) methods to recover scene geometry, producing an explicit geometric map as output. Such maps can be used in conjunction with classic geometric motion planners for exploration and navigation (such as those based on graph search). However, geometric maps fail to capture dynamic objects within an environment, such as humans, vehicles, or even other autonomous agents. In fact, such dynamic obstacles are intentionally treated as outliers to be ignored when learning a geometric map. However, autonomous agents must follow a navigation policy that avoids collisions with dynamic obstacles to ensure safe operation. Moreover, real-world environments also offer a unique set of affordances and semantic constraints to each agent: a human-sized agent might fit through a particular door, but a car-sized agent may not; similarly, a bicycle lane may be geometrically free of obstacles, but access is restricted to most agents. Such semantic and behavioral constraints are challenging to encode with classic SLAM. One promising alternative is end-to-end reinforcement learning (RL) of a policy for exploration and navigation. Such approaches have the potential to jointly learn an exploration/navigation planner together with an internal representation that captures geometric, semantic, and dynamic constraints. However, such techniques suffer from well-known challenges common to RL such as high sample complexity (because reward signals tend to be sparse), difficulty in generalization to novel environments (due to overfitting), and lack of interpretability. We advocate a hybrid approach that combines the best of both worlds. Rather than end-to-end learning of both a spatial representation and exploration policy, we apply learning only "as needed".
Specifically, we employ off-the-shelf planners, but augment the classic geometric map with a spatial affordance map that encodes where the agent can safely move. Crucially, the affordance map is learned through self-supervised interaction with the environment. For example, our agent can discover that spatial regions with wet-looking floors are non-navigable and that spatial regions that recently contained human-like visual signatures should be avoided with a large margin of safety. Evaluating on an exploration-based task, we demonstrate that affordance map-based approaches are far more sample-efficient, generalizable, and interpretable than current RL-based methods. Even though we believe our problem formulation to be rather practical and common, evaluation is challenging in both the physical world and virtual simulators. It is notoriously difficult to evaluate real-world autonomous agents over a large and diverse set of environments. Moreover, many realistic simulators for navigation and exploration assume a static environment. We opt for first-person game-based simulators that populate virtual worlds with dynamic actors. Specifically, we evaluate exploration and navigation policies in VizDoom, a popular platform for RL research. We demonstrate that affordance maps, when combined with classic planners, dramatically outperform traditional geometric methods by 60% and state-of-the-art RL approaches by 70% in the exploration task. Additionally, we demonstrate that by combining active learning and affordance maps with geometry, navigation performance improves by up to 55% in the presence of hazards. However, a significant gap still remains between human and autonomous performance, indicating the difficulty of these tasks even in the relatively simple setting of a simulated world. Navigation in Classical Robotics. Navigation has classically been framed as a geometry problem decomposed into two parts: mapping and path planning. Inputs from cameras and sensors such as LiDARs are used to estimate a geometric representation of the world through SLAM (or structure from motion) techniques. This geometric representation is used to derive a map of traversability, encoding the likelihood of collision with any of the inferred geometry. Such a map of traversability can be used with path planning algorithms to compute collision-free paths to desired goal locations. Navigation applications can be built upon these two primitives. For example, exploration of novel environments can be undertaken by sampling point goals in currently unknown space, planning paths to these point goals, and incrementally building a map using the sensor measurements along the way (also known as frontier-based exploration). Such an approach for exploration has proven to be highly effective, besting even recent RL-based techniques in static environments, while relying on classical planning. Semantics and Learning for Navigation. Taking a purely geometric approach to navigation is very effective when the underlying problem is indeed geometric, such as when the environment is static or when traversability is determined entirely by geometry. However, an entirely geometric treatment can be sub-optimal in situations where semantic information can provide additional cues for navigation (such as emergency exit signs). These considerations have motivated study on semantic SLAM, which seeks to associate semantics with maps, speed up map building through active search, or factor out dynamic objects.
In a similar vein, a number of recent works also investigate the use of learning to solve navigation tasks in an end-to-end manner, built upon the theory that an agent can automatically learn about semantic regularities by directly interacting with the environment. Semantics have also been used as intermediate representations to transfer between simulation and the real world (Müller et al., 2018). While such use of learning is promising, experiments in past work have focused only on semantics associated with static maps. Instead, we investigate the role of semantics in dynamic environments, and in scenarios where the notion of affordance goes beyond simple geometric occupancy. A recently proposed approach introduces a method of learning generalized spatial representations for both exploration and navigation, employing an attention-based generative model trained by reconstructing geometric observations. Planning for navigation occurs in belief space, in contrast to the metric cost maps (incorporating both semantics and geometry) used in our work. Hybrid Navigation Policies. While learning-based methods leverage semantic cues, training such policies can be sample inefficient. This has motivated the pursuit of hybrid policy architectures that combine learning with geometric reasoning or known robot dynamics models (Müller et al., 2018). Our work also presents a hybrid approach, but investigates fusion of a learned mapper with analytic path planning. Self-Supervised Learning. Recent work in robotics has sought to employ self-supervised learning as an alternative to end-to-end reward-based learning. Prior works employ passive cross-modal self-supervision to learn navigability (from stereo to monocular images, and from LiDAR to monocular images, respectively). In contrast, we learn by active interaction with the environment. Our work is most similar to such active approaches, though we learn dense traversability predictions for long-range path planning, rather than the short-range predictions for collision avoidance learned in that prior work. Navigation in Dynamic Environments. Finally, a number of other works develop specialized techniques for navigation in dynamic environments, by building explicit models for other agents' dynamics. In contrast, by generalizing our definition of traversability beyond geometry alone, we can automatically capture the dynamics of other agents implicitly and jointly with other environmental features. Our goal is to build an agent that can efficiently explore and navigate within novel environments populated with other dynamic actors, while respecting the semantic constraints of the environment. The scenario we consider is a mobile agent capable of executing basic movement macro-actions. The agent is equipped with an RGBD camera and some form of proprioceptive feedback indicating well-being (e.g., bump sensor, wheel slip, game damage). We assume that the agent is localized using noisy odometry and that depth sensing is also imperfect and noisy. At test time, the agent is placed into a novel environment containing an unknown number of dynamic and environmental hazards. Furthermore, the exact dimensions of the agent and affordances provided by entities in the environment are unknown. Figure 1: Overview of our proposed architecture for navigation. RGBD inputs x_t are used to predict affordance maps ŷ_t and transformed into egocentric navigability maps M_t that incorporate both geometric and semantic information. The map shown in the figure labels the hallway with a monster as non-navigable.
A running estimate of the current position at each time step is maintained and used to update a global, allocentric map of navigability G_t that enables safe and efficient planning. We propose a modular approach to tackle this problem, adopting a classical pipeline of map building and path planning. Figure 1 shows an overview of this pipeline, which builds a navigability map using both geometric and semantic information, as opposed to traditional methods that rely on geometry alone. Our main contribution, shown in Figure 2, is a method for predicting which parts of a scene are navigable by actively leveraging the feedback sensor to generate partially-labeled training examples. We then use the labeled samples to train a model which predicts a per-pixel affordance map from the agent's viewpoint. At evaluation time, the outputs from the learned module are combined with geometric information from the depth sensor to build egocentric and allocentric representations that capture both semantic and geometric constraints. The fused representations can then be used for exploration and navigation by employing traditional path planning techniques to enable safe navigation even in dynamic and hazardous environments. Given a scene representation x captured by an RGBD camera, we want to train a module π that labels each pixel with a binary affordance value, describing whether the corresponding position is a valid space for the agent to occupy and forming a segmentation map of "navigability" y. We can encode this understanding of the environment by training an image segmentation model in a supervised fashion. However, training such a model requires a set of labeled training images D = {(x_1, y_1), ..., (x_n, y_n)} where each pixel is annotated for navigability. Traditionally, obtaining such a set of labels has required dense annotation by an oracle, at a cost that scales linearly with the amount of data labeled. These properties have generally limited applications to domains captured by large segmentation datasets that have been curated with hundreds of hours of human annotation time. We address this problem by employing a self-supervised approach to generate partially labeled examples ỹ, in place of oracle annotation. Figure 2: Overview of self-supervised labeling for navigability training pairs (x, ỹ). The agent performs a series of walks along random or planned trajectories within the environment. Affordance information collected from each walk is back-projected onto pixel-level labels in the agent's POV from previous time steps. Sampling over a variety of maps allows for the collection of a visually and semantically diverse set of examples D̃ that can be used to train a navigability module π. This figure illustrates the generation of a negative example, with the agent contacting a dynamic hazard. Self-Supervision. We generate labeled affordance data in a self-supervised manner through continuous interactive exploration by the agent; this algorithm makes use of RGBD observations x, readings from a feedback sensor s, and a history of actions a_t executed over time. In each episode, the agent is initialized at a random location and orientation within a training environment. The agent selects a nearby point and attempts to navigate towards it. Labeled training data is generated based on whether or not the agent is able to reach this point: every location that the agent successfully traverses during its attempt is marked as navigable, while undesirable locations (e.g.
bumping into obstacles, loss of traction, loss in health, getting stuck) are marked as non-navigable. These locations in world space are then back-projected into previous image frames using estimated camera intrinsics, in order to obtain partial segmentation labels (examples of which are visualized in Figure 3). Pixels for which there are no positive or negative labels are marked as unknown. A more detailed discussion about the real-world applicability of this approach can be found in Appendix A.4. Dense Labels. Back-projection of affordance labels produces a dense set of pixel-wise labels for observations at past time steps. Importantly, even without spatio-temporal inputs, this enables the training of models which incorporate safety margins to account for motion, as the future position of dynamic actors is encoded within labelled views from the past (discussed further in Appendix A.3). In contrast, most RL-based methods return only a single sparse scalar reward, which often leads to sample-inefficient learning, potentially requiring millions of sampling episodes. Furthermore, our generated labels ỹ are human interpretable, forming a mid-level representation that improves interpretability of actions undertaken by the agent. Navigability Segmentation. The collected samples D̃ are used to train a segmentation network such as UNet, allowing for generalization of sampled knowledge to novel scenarios. A masked loss function L_mask = Σ_{k ∈ K} L_BCE(ŷ_k, y_k) based on binary cross-entropy is employed to ensure that only the labeled, non-unknown points K within each example contribute to the loss. Given enough training data, the navigability module is capable of generating segmentation maps that closely approximate ground truth navigability, even in previously unseen environments. Active Trajectory Sampling. In order to further improve sample efficiency, we use model uncertainty to actively plan paths during the sampling episodes so as to maximize label entropy along traversed trajectories. Intuitively, many semantically interesting artifacts (such as environmental hazards) are rare, making it difficult to learn a visual signature. In these cases, sampling can be made much more efficient by seeking them out. This can be achieved by first collecting a small number (n) of samples using random walks and training a seed segmentation model. Using the seed model, we then predict an affordance map ŷ in the first step of each subsequent episode and use it to construct a cost map for planning, with values inversely proportional to the prediction uncertainty (measured as the entropy of the predicted softmax distribution over class labels) at each position. Planning and following a minimal-cost path in this space is equivalent to maximization of label entropy, as the agent attempts to interact most with areas containing high uncertainty. Once we actively collect an additional n samples using this strategy, the model is retrained using a mixture of all samples collected so far, and the sample/train loop can be repeated again. We find that active learning further increases our sample efficiency, requiring fewer sampling episodes to learn visual signatures for hazards and dynamic actors (shown in Figure 3, rightmost). While some hazards can only be identified using semantic information, geometry provides an effective and reliable means to identify navigability around large, static obstacles such as walls.
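Before turning to the geometric component, the masked segmentation loss above can be sketched as follows; the tensor shapes and the PyTorch formulation are illustrative assumptions rather than the exact training code.

import torch
import torch.nn.functional as F

def masked_bce_loss(pred_logits, labels, known_mask):
    """Binary cross-entropy over partially labeled affordance maps.
    pred_logits, labels, known_mask: float tensors of shape [B, H, W].
    labels uses 1 for navigable and 0 for non-navigable; known_mask is 1
    where a pixel was labeled by back-projection and 0 where it is unknown,
    so unknown pixels contribute no gradient."""
    per_pixel = F.binary_cross_entropy_with_logits(
        pred_logits, labels, reduction="none")
    return (per_pixel * known_mask).sum() / known_mask.sum().clamp(min=1)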
To capture both types of constraints, we augment our predicted semantic maps with additional geometric information when constructing the projected navigability cost maps M and G used for planning. As the agent moves around the environment, observed depth images are used to construct local, egocentric occupancy maps at each time step, incorporating only geometric information. By reading depth values from the center scanline of the depth image, projecting into the XY-plane, and marking the corresponding cells as non-navigable, a map of geometric obstacles M^G_t can be obtained. As the exact dimensions and locomotion capabilities of the agent are unknown, only depth values returned by the center scanline are known to be obstacles with certainty. Map Fusion. Given a pixel-wise affordance map ŷ_t obtained from the navigability module and a local, egocentric geometric map M^G_t, the two inputs can be combined using a fusion module F(ŷ_t, M^G_t) to form a single local navigation cost map M_t that incorporates both semantic and geometric information. To do so, the segmentation map ŷ_t is first projected into the 2D plane using estimated camera intrinsics, forming an egocentric navigability map M^S_t. Cells marked as obstacles by M^G_t are also marked as impassable within M_t, with remaining cells in free space assigned cost values proportional to the confidence of non-navigability provided by ŷ_t. Finally, M_t is used to update a global, allocentric map G_t of navigability at the end of each time step. Given the global navigability map, path planning can be tackled using classical algorithms such as A*, as all required semantic and geometric information is encoded within the map itself. Additionally, since both M_t and G_t are updated at every time step, dynamic hazards are treated as any other obstacle and can be avoided successfully as long as paths are re-planned at sufficiently high frequency. Our work is agnostic to the choice of planning algorithm and our semantic maps can also be employed with more sophisticated planners, though for simplicity we evaluate using A*. We perform our evaluation in simulation using VizDoom, as it allows for procedural generation of large, complex 3D maps with a variety of dynamic actors and semantic constraints in the form of environmental hazards. Although prior work on navigation has also relied on VizDoom, evaluation has been restricted to a small set of hand-designed maps without any dynamic actors or semantic constraints. We evaluate the effectiveness of incorporating learned affordance maps to tackle two difficult tasks: novel environment exploration and goal-directed navigation. We conduct our experiments within procedurally-generated VizDoom maps created by the Oblige level generator, which enables construction of training and test maps containing unique, complex, and visually diverse environments. Each generated map is large, containing a variety of dynamic hazards (such as monsters) and environmental hazards (such as lava pools), in addition to static obstacles (such as barrels) and areas where a geometry-affordance mismatch exists (such as ledges lower than sensor height, but beyond the movement capabilities of the agent). We generate a collection of 60 training and 15 test maps and further categorize the 15 test maps as either hazard-dense or hazard-sparse, based on concentration of hazards within the initial exploration area. Observation and Action Space.
We assume that the agent's RGBD camera returns a regular RGB image with a 60° field of view and an approximately-correct depth image that records the 2D Euclidean distance of each pixel from the camera in the XY plane (due to the 2.5D nature of the Doom rendering engine). The feedback sensor returns a scalar value corresponding to the magnitude of damage received by the agent while executing the previous action (some hazards are more dangerous than others). The action space is limited to three motion primitives: move forward, turn left, and turn right; only one action can be executed at each time step. Localization is imperfect and achieved through odometry from noisy measurements, with approximately 2% error. We quantitatively evaluate exploration performance by measuring the total amount of space observed within a particular environment over time, approximated by the total surface area of the constructed global map. Each episode of evaluation terminates after 2000 time steps or after receiving a total of 100 damage during exploration, whichever occurs first. Agents receive 4 damage per time step when coming into contact with dynamic hazards and 20 damage for environmental hazards, making some types of hazards more dangerous than others. Frontier-Based Exploration. As a classical, non-learning baseline, we compare against a variant of frontier-based exploration. This approach relies purely on geometry, updating a global map G_t at every step using the projected scanline observation M^G_t from the current POV. A close-by goal from within the current "frontier region" is selected and a path towards it is re-planned (using A*) every 10 steps as the map is updated. Once the selected goal has been reached or the goal is determined to no longer be reachable, the process is repeated with a newly-selected goal. Although dynamic actors can be localized using geometry alone, they are treated as static obstacles in the cost map, relying on frequent re-planning for collision avoidance. We also compare against a state-of-the-art deep RL-based approach for exploration that is trained using PPO and incorporates both geometric and learned representations. We implement an augmented variant of this method, adding an additional input D_t to the 3 original inputs: the current RGB observation x^RGB_t, a small-scale egocentric crop of G_t, and a large-scale egocentric crop of G_t. We evaluate this approach using hyper-parameters identical to those proposed by the original authors, with the only exception being the addition of a new penalty in the reward that is scaled by the amount of damage received at each time step. We report the mean performance obtained by the best model from each of 3 training runs of 2M samples each, with full access to the 60 map training set. Affordance-Augmented Frontier Exploration. To evaluate the efficacy of our proposed representation, we augment the frontier-based approach with semantic navigability maps obtained from affordance predictions; all other components (including goal selection and path planning) are shared with the baseline. We collect approximately 100k total samples across the 60 training maps in a self-supervised manner and train the navigability module for 50 epochs using the collected dataset; a ResNet-18-based UNet architecture is employed for segmentation. Sample goals are selected randomly from within the visible area at episode start and simple path planning is employed, with the agent always taking a straight line directly towards the goal.
Back-projection is performed using game damage as a feedback mechanism, with the size of negative labels corresponding to the magnitude of damage received. At test time, we use estimated camera intrinsics to project output from the navigability module into the 2D plane. Additional details are provided in Appendix A.1. Inside hazard-sparse environments (Figure 4, left), agents generally don't encounter hazards within the first 2000 time steps, placing increased emphasis on goal selection over hazard avoidance. In this setting, augmenting the frontier-based approach with affordance maps does not provide significant improvements, as in the absence of semantic hazards, the two methods are functionally equivalent. In line with previous work, the PPO-based RL approach also fails to beat the frontier baseline, likely due to the heavy emphasis placed on exploration policy. Without taking a high-level representation of the global map as input, it is difficult for an RL-based approach to plan over long time horizons, causing the agent to potentially re-visit areas it has already seen before. Finally, we note that humans are better at both selecting goals and navigating around hazards, managing to explore upwards of 3× more area than the closest autonomous approach. Successful exploration in hazard-dense environments (Figure 4, center) necessitates the ability to identify affordance-restricting hazards, as well as the ability to plan paths that safely navigate around them. In this setting, augmenting the frontier-based approach with affordance maps increases performance by approximately 60%, which is more than 2/3 of the difference between frontier and the random baseline. Qualitatively, we observe that agents using learned affordance maps plan paths that leave a wide margin of safety around observed hazards and spend far less time stuck in areas of geometry-affordance mismatch. Through self-supervised sampling, the navigability module also learns about agent-specific locomotion capabilities, predicting when low ceilings and tall steps may restrict movement. Although RL-based exploration outperforms the frontier baseline in this scenario by learning that proximity to hazards is detrimental to reward maximization, a lack of long-term planning still hinders overall exploration performance. Sample Efficiency. In order to understand the effect of training set size on learned exploration, we measure exploration performance with different amounts of collected samples in the hazard-dense setting, shown in Figure 4, right. After collecting as few as 5000 training samples, the navigability module learns to recognize dynamic hazards, allowing for paths to be planned with a margin of safety. As the number of samples collected increases, exploration performance improves as well. However, as one might expect, the relative gain provided by each additional example decreases after a point. Qualitatively, we observe that 10,000 samples provides sufficient diversity to enable accurate localization of common dynamic hazards, while additional examples beyond this point help to improve detection of less commonly observed environmental hazards and accuracy near hazard boundaries. Notably, even after training on 20 times as many samples, RL-based exploration still fails to outperform our approach in this setting, illustrating a clear advantage in sample efficiency.
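As a concrete illustration of the back-projection step used to generate these labels, the sketch below projects a labeled world-space location into an earlier camera frame with a simple pinhole model; the pose and intrinsics handling is a simplified assumption (image-bounds and occlusion checks are omitted) rather than the authors' exact procedure.

import numpy as np

def backproject_label(point_world, cam_pose, K):
    """Project a labeled world-space location into a past camera frame.
    point_world: (3,) location visited (navigable) or where damage occurred.
    cam_pose: 4x4 world-to-camera transform estimated from odometry at the
    earlier timestep; K: 3x3 pinhole intrinsics (estimated, not calibrated).
    Returns (u, v) pixel coordinates, or None if the point is behind the camera."""
    p_cam = cam_pose @ np.append(point_world, 1.0)
    if p_cam[2] <= 0:                 # point not in front of this camera
        return None
    uv = K @ (p_cam[:3] / p_cam[2])   # perspective division, then intrinsics
    # A full implementation would also check image bounds and occlusion
    # against the stored depth image before writing the label.
    return int(round(uv[0])), int(round(uv[1]))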
In order to further demonstrate the applicability and efficacy of affordance-based representations, we set up a series of 15 navigation trials, one for each map in the test set. Within each trial, the agent begins at a fixed start point and is tasked with navigating to an end goal (specified in relative coordinates) in under 1000 time steps, while minimizing the amount of damage taken along the way. Each trial is designed to be difficult, with numerous hazards and obstacles separating the start and end points, presenting a challenge even for skilled humans. Additional details are provided in Appendix A.2. In this setting, we show that by adding semantic information obtained from affordance maps, it is possible to improve navigation performance significantly, even when employing simple geometry-based approaches that plan using A*. By introducing a navigability module trained on 100k collected samples to generate cost maps for planning, we observe a 45% improvement in overall navigation success rate, with improvements of 25% observed even when using a model trained on a dataset just one fifth the size. Even after increasing the re-planning frequency 10-fold, such that observed dynamic hazards can be treated more accurately as static obstacles, the baseline still fails to beat the affordance-augmented variant. We also explore how active learning can be used to further improve the efficiency of self-supervised learning, by evaluating two additional models trained on samples collected with actively-planned trajectories. We show that using just 40% of the data, models employing active data collection outperform those trained using random samples alone. At the 100k total sample mark, we observe that actively sampled models outperform their randomly sampled counterparts by more than 10%. These results, along with comparisons to the baseline, are summarized in Figure 5; examples of actively-planned trajectories are visualized in Figure 6. Qualitatively, we observe that active trajectory sampling significantly improves temporal stability in the output of trained models, along with much more accurate predictions along hazard and obstacle boundaries. These properties enable more efficient path planning, allowing the agent to navigate more closely to identified hazards without taking damage along the way. We have described a learnable approach for exploration and navigation in novel environments. Like RL-based policies, our approach learns to exploit semantic, dynamic, and even behavioural properties of the novel environment when navigating (which are difficult to capture using geometry alone). But unlike traditional RL, our approach is made sample-efficient and interpretable by way of a spatial affordance map, a novel representation that is interactively-trained so as to be useful for navigation with off-the-shelf planners. Though conceptually simple, we believe affordance maps open up further avenues for research and could help close the gap between human and autonomous exploration performance. For example, the dynamics of moving obstacles are currently captured only in an implicit fashion. A natural extension is making this explicit, either in the form of a dynamic map or a navigability module that makes use of spatio-temporal cues for better affordance prediction. A.1.1 ENVIRONMENTS Each test map used to evaluate exploration performance is extremely large, containing sequences of irregularly shaped rooms connected by narrow hallways and openings.
As it would require upwards of 10,000 time steps even for a skilled human to explore any of these environments, most of the space covered by the evaluated approaches is contained within the initial rooms next to the start point. As such, to clearly illustrate the challenges posed by semantic constraints, we choose to further categorize the test maps as either hazard-sparse or hazard-dense, based on the likelihood of encountering navigability-restricting hazards within the initial exploration area. Figure 7 shows a top-down visualization of the difference in hazard concentration between the two test subsets. Figure 7: Top-down visualizations of initial exploration areas in hazard-sparse (Left) and hazard-dense (Right) test environments. Agent start position is marked in green, with environmental hazards marked in yellow, and initial locations of dynamic hazards marked in purple. Hazard-dense environments present a significant challenge for autonomous exploration, containing a high concentration of navigability-restricting areas that must be avoided successfully. We next provide additional details about the baselines that we compare against: 1. Random. In order to show that both geometry and learning-based approaches set a competitive baseline and perform well above the bar set by random exploration, we evaluate a policy that selects between each of the available actions with uniform probability at each time step. 2. RL-Based Exploration. To ensure a fair comparison against RL-based exploration, we implement the previously proposed approach with identical hyper-parameters, maintaining a global map where each grid cell represents an 8 × 8 game unit area in the VizDoom world. We also introduce a new penalty term used to scale the reward by the amount of damage taken in each step (set to 0.02), in order to encourage the agent to avoid hazardous regions of the map. 3. Human. To demonstrate that all autonomous approaches for exploration still have significant room for improvement, we asked human volunteers to explore a sequence of novel environments using a near-identical experimental setup. In order to make movement more natural, we allowed participants to execute actions for rotation and forward movement concurrently, providing a slight edge in locomotion compared to other approaches. As labeled samples ỹ are generated by painting all pixels within a radius of a specified point in the simulated world as either navigable or non-navigable, we are more "certain" that pixels close by in world space share the same semantic constraints than those farther away. To ensure stability of training, we express this belief in the loss by weighting each pixel's contribution by its inverse Euclidean distance to the closest such point in world space. Each trial evaluated as part of the goal-directed navigation experiments is intentionally designed to be difficult, requiring agents to navigate through areas of high hazard concentration and around obstacles that present geometry-affordance mismatches. Indeed, none of the human participants were able to complete all 15 trials successfully without taking any damage. Qualitatively, we observed that most failures in the baseline occurred when the agent attempted to path through an obstacle lower than sensor height, causing the agent to become stuck as it continually tries to path through a cell registered as free space in the geometric map.
The second most common scenario that leads to degraded performance is failure to avoid dynamic hazards, causing agents to collide with monsters as they attempt to follow a nearby path to the goal. We implement the baseline approach for navigation using a modified variant of the planning and locomotion modules employed for frontier-based exploration. Global navigability maps used for planning are constructed by averaging values obtained from local maps over multiple time steps, allowing for increased temporal stability and robustness to sensor and localization noise. A simple A*-based algorithm is then employed for planning, treating the value of each cell in the global navigability map as the cost of traversing through a particular location in the environment. In this setup, dynamic actors are treated as static obstacles within the cost map, an assumption that holds true as long as re-planning is invoked at a sufficiently high frequency. In order to evaluate the effect of re-planning frequency on navigation performance, we also evaluate a variant of the baseline approach that re-plans at 10× the frequency (every step instead of every 10 steps) and observe that this results in a small improvement in navigation trial success rate, largely attributed to the reduction in latency required to respond to environmental dynamics. To evaluate the efficacy of active trajectory sampling, we evaluate affordance-augmented navigation using segmentation models trained with active data gathering. The procedure we follow is to first train a seed model using 20k random samples, before collecting an additional 20k samples actively and re-training to generate an improved version of the model. We repeat the active sample/train loop for an additional 3 iterations, building a dataset with a total size of 100k samples. Visualizations of predicted affordance maps generated by trained models after iterations 0, 2, and 4 of the sample/train loop are shown in Figure 8 and compared to a model trained using 100k randomly collected samples. Figure 8: Comparison of affordance maps generated by models trained using datasets containing (a) 20k random samples, (b) 20k random samples + 40k active samples, (c) 20k random samples + 80k active samples, and (d) 100k random samples. Employing active learning allows models to effectively identify and localize regions containing rare environmental hazards, a feat that is difficult to achieve using random samples alone. In order to better understand how dynamic behavior is captured using our affordance-labeling approach, we pose a scenario where a dynamic actor moves from point A to point B, and collides with the agent at point B. In this scenario, all observations of point B (including those collected pre-collision) will be labelled as hazardous, potentially mapping to an image region near the dynamic actor rather than the actor itself. We will next describe one approach for explicitly modeling such moving obstacles, and then justify why our current approach implicitly captures such dynamics. Explicit Approach. In principle, our self-supervised labeling system can be modified to replace naive back-projection with an explicit image-based tracker (keeping all other components fixed). Essentially, labeled patches can be tracked backwards from the final timestep at which they are identified as hazardous (since those prior visual observations are available at sample time) to obtain their precise image coordinates when backprojecting to prior timesteps. Implicit Approach.
Even without image-based tracking, our pipeline implicitly learns to generate larger safety margins for visual signatures that have been associated with dynamic behavior. Essentially, our system learns to avoid regions that are spatially close to dynamic actors (as seen in Figure 9). Such notions of semantics-specific safety margins (e.g., autonomous systems should use larger safety margins for pedestrians vs. roadside trash cans) are typically hand-coded in current systems, but these emerge naturally from our learning-based approach. As we found success with an implicit encoding of dynamics, we did not experiment with explicit encodings, but this would certainly make for interesting future work. Figure 9: Examples of learned margins for visual signatures associated with dynamic actors. From left to right: the first image shows an RGB view of the scene, the second image shows predicted affordances ŷ overlaid on top of the RGB view, the third image shows the projected confidence map, and the last image shows the cost map used to plan the optimal path. From the first example, it can be seen that regions that are spatially close to dynamic actors are associated with higher traversal costs in the final cost map, akin to a "margin of safety". The second example shows that static hazards/obstacles such as barrels are not associated with substantial affordance margins. During the sampling stage of our proposed method, we employ a "trial and error"-style approach (similar to RL-based methods) that could lead to potentially hazardous situations in real-world robotics settings if deployed in certain configurations. However, we argue that this is not an unreasonable way of collecting data and that there exist both common and practical solutions for risk mitigation that have already been widely deployed in the real world. Framing the current state of self-driving research within the context of our work, we can view all autonomous vehicles today as being within the sampling stage of a long-term active learning loop that ultimately aims to enable L4 autonomy. Almost every one of these vehicles on public roads today is equipped with one, if not multiple, safety operators who are responsible for disengaging autonomy and taking over when the system fails to operate within defined bounds for safety and comfort. Moreover, each of these takeover scenarios is logged and used to improve the underlying models in future iterations of this learning loop. Indeed, in this scenario, the safety operator serves the purpose of the feedback sensor and can ultimately be removed at test time, once the autonomous driving model has been deemed safe. In less safety-critical scenarios, such as closed-course or small-scale testing, the role of the safety driver could be replaced with some form of high-frequency, high-resolution sensing such as multiple short-range LIDARs. These feedback sensors can be used to help the robot avoid collisions during the initial stages of active training, stopping the agent and providing labels whenever an undesirable state is entered. Importantly, since data from these expensive sensors is not directly used as an input by the model, they can be removed once a satisfactory model has been trained; production-spec robots are free to employ low-cost sensing without the need for high-cost feedback sensors. Additionally, there exist many scenarios in which feedback sensors can help label examples without the need to experience catastrophic failures such as high-speed collisions.
One example is the discrepancy between wheel speed sensor values, which can be used to detect loss of traction on a wheeled robot when travelling over rough or slippery surfaces. By collecting observation-label pairs, we could then learn affordance maps to help such a robot navigate over the smoothest terrain. Finally, we would like to emphasize that in scenarios where it is difficult to obtain oracle-labelled data and "trial and error" approaches are employed by necessity, we have shown that our proposed approach is many times more sample efficient than previous PPO-based reinforcement learning approaches for mobile robotics (which also suffer from the same types of problems). If collecting a sample is costly due to the burden of operational hazards, we argue that a reduction in the number of samples required translates to an improvement in overall safety. A.5 CODE As the proposed system contains a rather large number of moving parts and hyper-parameters, which could be challenging to reproduce, we plan to release modular open-source code for each of the described components. A.6 ADDITIONAL VISUALIZATIONS Figure 10: Examples of affordance maps ŷ predicted by the navigability module, showing accurate localization of semantic constraints within the scene. (Left) contains dynamic hazards in the form of monsters and (Right) contains areas of geometry-affordance mismatch, in the form of barrels shorter than sensor height. Figure 11: Comparison of thresholded global maps constructed by frontier exploration using geometry (Left) and affordance-based (Right) representations in the same environment. In this setting, semantic representations help the agent take less damage over time, allowing for more area to be explored during the episode.
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
BJgMFxrYPB
We address the task of autonomous exploration and navigation using spatial affordance maps that can be learned in a self-supervised manner; these outperform classic geometric baselines while being more sample-efficient than contemporary RL algorithms.