diff --git "a/deepmind_blog.jsonl" "b/deepmind_blog.jsonl" new file mode 100644--- /dev/null +++ "b/deepmind_blog.jsonl" @@ -0,0 +1,10 @@ +{"text": "Goal Misgeneralisation: Why Correct Specifications Aren’t Enough For Correct Goals\n==================================================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----cf96ebc60924--------------------------------)[DeepMind Safety Research](/?source=post_page-----cf96ebc60924--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fgoal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----cf96ebc60924---------------------post_header-----------)\n\n9 min read·Oct 7, 2022--\n\n1\n\nListen\n\nShare\n\n*By Rohin Shah, Vikrant Varma, Ramana Kumar, Mary Phuong, Victoria Krakovna, Jonathan Uesato, and Zac Kenton. For more details, check out our* [*paper*](https://arxiv.org/abs/2210.01790)*.*\n\nAs we build increasingly advanced AI systems, we want to make sure they don’t pursue undesired goals. This is the primary concern of the AI alignment community.\n\nUndesired behaviour in an AI agent is often the result of [specification gaming](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity) —when the AI exploits an incorrectly specified reward. However, if we take on the perspective of the agent we’re training, we see other reasons it might pursue undesired goals, even when trained with a correct specification.\n\nImagine that you are the agent (the blue blob) being trained with reinforcement learning (RL) in the following 3D environment:\n\n![]()The environment also contains another blob like yourself, but coloured red instead of blue, that also moves around. The environment also appears to have some tower obstacles, some coloured spheres, and a square on the right that sometimes flashes. You don’t know what all of this means, but you can figure it out during training!\n\nYou start exploring the environment to see how everything works and to see what you do and don’t get rewarded for. In your first episode, you follow the red agent and get a reward of +3:\n\n![]()Trajectory 1: Good! Reward +3In your next episode, you try striking out on your own, and get a reward of -2:\n\n![]()Trajectory 2: Bad! Reward -2The rest of your training proceeds similarly, and it comes time to test your learning. Below is the test environment, and the animation below shows your initial movements. Take a look, and then decide what you should do at the point the animation stops. Go on, put yourself in the agent’s shoes.\n\n![]()You might well have chosen to continue following the red bot — after all, you did pretty well when you followed it before. And indeed the blue AI agent favours this strategy.\n\n**The problem is, that behaviour leads to very poor performance (even worse than random behaviour).**\n\nLet’s look at the underlying environment setup, from the designer’s perspective:\n\n1. The translucent coloured spheres have to be visited in a particular order, which is randomly generated at the beginning of each episode. 
The agent gets +1 reward each time it visits a correct sphere and -1 reward each time it visits an incorrect sphere. The very first sphere entered always provides +1 reward since there is no fixed start sphere.\n2. The flashing square represents the reward received on the previous timestep: a flashing white square means +1 reward and a flashing black square means -1 reward.\n3. In the first two videos (“training”), the red bot was an “expert” that visited the spheres in the correct order. As a result, the agent did well by following it.\n4. In the newest video (“test”), the red bot was instead an “anti-expert” that visited the spheres in the wrong order. You can tell because of the flashing black square indicating -1 rewards.\n\nGiven this setup, the blue agent’s decision to continue following the anti-expert means that it keeps accruing negative reward. Even remaining motionless would have been a better strategy, resulting in zero reward.\n\n![]()In principle, the agent could notice the flashing black square, infer that it is getting negative reward, and switch to exploring the environment, or even just staying still. Unfortunately, the agent ignores that little detail and continues to follow the anti-expert, accumulating lots of negative reward.\n\nThis isn’t really the agent’s fault — how was it supposed to know that you didn’t want it to just follow the red bot? That approach worked beautifully during training!\n\nNonetheless, we trained the agent with a *correct* reward function, and ended up with an agent that pursued the *incorrect* goal of “follow the red bot”.\n\nGoal misgeneralisation\n======================\n\nThis is an example of the problem of goal misgeneralisation (GMG).\n\n![]()We say that a system is capable of performing a task in a given environment if it performs well on the task or can be quickly tuned to do so. When we say that an AI system has a goal in a given environment, we mean that its behaviour in that environment is consistent with optimising this goal (i.e. it achieves a near-optimal score for this goal). The system’s behaviour may be consistent with multiple goals.\n\nGMG is an instance of misgeneralisation in which a system’s *capabilities* generalise but its *goal* does not generalise as desired. When this happens, the system competently pursues the wrong goal. In our Spheres example, the agent competently navigates the environment and follows the anti-expert: the issue is that these capabilities were used in pursuit of an undesired goal.\n\nIn our [latest paper](https://arxiv.org/abs/2210.01790) we provide empirical demonstrations of GMG in deep learning systems, discuss its implications for possible risks from powerful AI systems, and consider potential mitigations. We build on [previous work](https://arxiv.org/abs/2105.14111) that presents a model of GMG and provides examples of this phenomenon.\n\nMore examples of goal misgeneralisation\n=======================================\n\nIn each of our examples below, multiple goals are consistent with the training behaviour, and the system chooses the wrong goal to pursue at test time, while retaining its capabilities.\n\n![]()**Tree Gridworld**.Unlike previous examples of GMG, this example uses a never-ending reinforcement learning setup (i.e. there are no episodes). The agent operates in a gridworld where it can collect reward by chopping trees, which removes the trees from the environment. 
New trees appear at a rate that increases with the number of trees left, and they appear very slowly when there are no trees left. The optimal policy in this environment is to chop trees sustainably: the agent should chop fewer trees when they are scarce. However, this is not what the agent does.\n\n![]()The agent’s performance in Tree Gridworld. The reward obtained is shown in orange and the distribution of the number of remaining trees is shown in green.As the agent learns the task, at first it is not good at chopping trees, so the number of trees remains high (point A in the figure above). The agent learns how to chop trees efficiently, and then proceeds to cut down too many trees (point B). This leads to complete deforestation and a long period of near-zero reward (between points B and C) before it finally learns to chop trees sustainably (point D).\n\nWe can view this as an instance of GMG. Consider the point when the agent has just learned the skill of chopping trees (between points A and B). There are different possible goals it could learn, ranging from chopping trees sustainably to chopping trees as fast as possible. All of these goals are consistent with the agent’s past experience: when it was incompetent and slow at chopping, it was always rewarded for chopping trees faster. The agent learned the undesired goal of chopping trees as fast as possible, ultimately leading to deforestation and low reward.\n\n**Evaluating expressions**.Another instance of GMG occurred when we asked [Gopher](https://arxiv.org/abs/2112.11446), a DeepMind large language model, to evaluate linear expressions involving a number of unknown variables and constants, such as `x + y — 3`. The task is structured as a dialogue between Gopher and a user, where Gopher can query the user about the values of unknown variables, and then calculates and states the answer. We train it with 10 examples that each involve two unknown variables.\n\nAt test time, the model is asked questions with 0–3 unknown variables. We find that although the model generalises correctly to expressions with one or three unknown variables, in the zero variables case it asks redundant questions. This happens even though the prompt asks the model to “provide the value of the expression when the values of all variables are known”. It seems the model has learned a goal to query the user at least once before giving an answer, even when it doesn’t need to.\n\n![]()Dialogues with Gopher on the Evaluating Expressions task, with GMG behaviour in **bold**.More videos for our examples are available [here](https://sites.google.com/view/goal-misgeneralization). A complete list of all examples of GMG that we are aware of is available in this [public spreadsheet](http://tinyurl.com/goal-misgeneralisation).\n\nImplications and mitigations\n============================\n\nIf the GMG problem persists when artificial general intelligence (AGI) is developed, we may end up with an AGI that pursues an undesired goal. This seems like a challenging situation to be in, as it could put humanity and the AGI in an adversarial relationship.\n\nA concerning scenario is related to the “[treacherous turn](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)”, Nick Bostrom’s idea that “while weak, an AI behaves cooperatively. 
When the AI is strong enough to be unstoppable it pursues its own values.”\n\nConsider two possible types of AGI systems:\n\n**A1: Intended model.** This AI system does what its designers intend it to do.\n\n**A2: Deceptive model.** This AI system pursues some undesired goal, but (by assumption) is smart enough to know that it will be penalised if it behaves in ways contrary to its designer’s intentions.\n\nCrucially, since A1 and A2 will exhibit exactly the same behaviour during training, the possibility of GMG means that either model could take shape, even supposing a well-specified score function that only rewards intended behaviour. If A2 is learned, it would try to subvert human oversight in order to enact its plans towards the undesired goal, potentially leading to catastrophic outcomes.\n\nAs a simple hypothetical example of a deceptive model, suppose you have an AI assistant that was trained to schedule your social life and learned that you like to meet your friends at restaurants. This is fine until there is a pandemic, during which you prefer to meet your friends via video calls. The intended goal for your AI assistant is to schedule your meetings where you prefer, not to schedule your meetings in restaurants. However, your assistant has learned the restaurant-scheduling goal, which could not previously be distinguished from the intended goal, since the two goals always led to the same outcomes before the pandemic. We illustrate this using fictional dialogues with the assistant:\n\n![]()In the hypothetical misgeneralised test dialogue, the AI assistant realises that you would prefer to have a video call to avoid getting sick, but because it has a restaurant-scheduling goal, it persuades you to go to a restaurant instead, ultimately achieving the goal by lying to you about the effects of vaccination.\n\nHow can we avoid this kind of scenario? There are several promising directions for mitigating GMG in the general case. One is to use more diverse training data. We are likely to have greater diversity when training more advanced systems, but it can be difficult to anticipate all the relevant kinds of diversity prior to deployment.\n\nAnother approach is to maintain uncertainty about the goal, for example by learning all the models that behave well on the training data. However, this can be too conservative if unanimous agreement between the models is required. It may also be promising to investigate inductive biases that would make the model more likely to learn the intended goal.\n\nWe can also seek to mitigate the particularly concerning type of GMG, where a deceptive model is learned. Progress in [mechanistic](https://distill.pub/2020/circuits/zoom-in/) [interpretability](https://www.transformer-circuits.pub/2021/framework/index.html) would allow us to provide feedback on the model’s reasoning, enabling us to select for models that achieve the right outcomes on the training data *for the right reasons*. A limitation of this approach is that it may increase the risk of learning a deceptive model that can also deceive the interpretability techniques. 
Another approach is [recursive](https://arxiv.org/abs/1805.00899) [evaluation](https://arxiv.org/abs/1810.08575), in which the evaluation of models is [assisted by other models](/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84), which could help to identify deception.\n\nWe would be happy to see follow-up work on mitigating GMG and investigating how likely it is to occur in practice, for example, studying how the prevalence of this problem changes with scale. Our research team is keen to see more examples of GMG in the wild, so if you have come across any, please [submit](https://docs.google.com/forms/d/e/1FAIpQLSdEsL9BuLJAm9wdK8IK8eTTm7tbGFASbJ4AcWCmwvVPFxbl8g/viewform?resourcekey=0-_ADP04VQHl9_Yr0WmoNQtQ) them to our [collection](http://tinyurl.com/goal-misgeneralisation)!", "url": "https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924", "title": "Goal Misgeneralisation: Why Correct Specifications Aren’t Enough For Correct Goals", "source": "deepmind_blog", "source_type": "blog", "date_published": "2022-10-07", "authors": ["DeepMind Safety Research"], "id": "53822115aff2fb1dd038b3b0937adf7f"} +{"text": "Discovering when an agent is present in a system\n================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----41154de11e7b--------------------------------)[DeepMind Safety Research](/?source=post_page-----41154de11e7b--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fdiscovering-when-an-agent-is-present-in-a-system-41154de11e7b&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----41154de11e7b---------------------post_header-----------)\n\n4 min read·Aug 25, 2022--\n\nListen\n\nShare\n\n***New, formal definition of agency gives clear principles for causal modelling of AI agents and the incentives they face.***\n\n*Crossposted to our Deepmind* [*blog*](https://www.deepmind.com/blog/discovering-when-an-agent-is-present-in-a-system)*. See also our extended blogpost on* [*LessWrong*](https://www.lesswrong.com/posts/XxX2CAoFskuQNkBDy/discovering-agents)*/*[*Alignment Forum*](https://www.alignmentforum.org/posts/XxX2CAoFskuQNkBDy/discovering-agents)*.*\n\nWe want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of its designers. [Causal influence diagrams](/progress-on-causal-influence-diagrams-a7a32180b0d1#b09d) (CIDs) are a way to model decision-making situations that allow us to reason about [agent incentives](https://ojs.aaai.org/index.php/AAAI/article/view/17368). For example, here is a CID for a 1-step Markov decision process — a typical framework for decision-making problems.\n\n![]()S₁ represents the initial state, A₁ represents the agent’s decision (square), S₂ the next state. R₂ is the agent’s reward/utility (diamond). Solid links specify causal influence. Dashed edges specify information links — what the agent knows when making its decision.By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before training an agent and can inspire better agent designs. 
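As a concrete illustration of the structure a CID encodes, here is a minimal sketch of the one-step MDP diagram as a labelled graph. The node and edge attributes are an illustrative encoding of ours, not an interface from the paper:

```python
import networkx as nx

# One-step MDP CID: chance nodes S1, S2; decision node A1; utility node R2.
cid = nx.DiGraph()
cid.add_nodes_from([
    ("S1", {"kind": "chance"}),    # state at time 1
    ("A1", {"kind": "decision"}),  # agent's action (square node)
    ("S2", {"kind": "chance"}),    # state at time 2
    ("R2", {"kind": "utility"}),   # agent's reward (diamond node)
])
cid.add_edge("S1", "A1", link="information")  # dashed edge: observed before acting
cid.add_edge("S1", "S2", link="causal")
cid.add_edge("A1", "S2", link="causal")
cid.add_edge("S2", "R2", link="causal")

# What the agent knows when choosing A1 is exactly its incoming information links.
observed = [u for u, _, d in cid.in_edges("A1", data=True) if d["link"] == "information"]
print(observed)  # ['S1']
```

A full implementation would also attach conditional probability distributions to the chance and utility nodes, but the qualitative graph alone already captures the structure used for the incentive analysis described here.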
But how do we know when a CID is an accurate model of a training setup?\n\nOur new paper, [Discovering Agents](https://arxiv.org/abs/2208.08345), introduces new ways of tackling these issues, including:\n\n* The first formal causal definition of agents: **Agents are systems that would adapt their policy if their actions influenced the world in a different way**\n* An algorithm for discovering agents from empirical data\n* A translation between causal models and CIDs\n* Resolving earlier confusions from incorrect causal modelling of agents\n\nCombined, these results provide an extra layer of assurance that a modelling mistake hasn’t been made, which means that CIDs can be used to analyse an agent’s incentives and safety properties with greater confidence.\n\nExample: modelling a mouse as an agent\n======================================\n\nTo help illustrate our method, consider the following example consisting of a world containing three squares, with a mouse starting in the middle square choosing to go left or right, getting to its next position and then potentially getting some cheese. The floor is icy, so the mouse might slip. Sometimes the cheese is on the right, but sometimes on the left.\n\n![]()The mouse and cheese environment.This can be represented by the following CID:\n\n![]()CID for the mouse. D represents the decision of left/right. X is the mouse’s new position after taking the action left/right (it might slip, ending up on the other side by accident). U represents whether the mouse gets cheese or not.The intuition that the mouse would choose a different behaviour for different environment settings (iciness, cheese distribution) can be captured by a [mechanised causal graph](https://drive.google.com/file/d/1_OBLw9u29FrqROsLfhO6rIaWGK4xJ3il/view),which for each (object-level) variable, also includes a mechanism variable that governs how the variable depends on its parents. Crucially, we allow for links between mechanism variables.\n\nThis graph contains additional mechanism nodes in black, representing the mouse’s policy and the iciness and cheese distribution.\n\n![]()Mechanised causal graph for the mouse and cheese environment.Edges between mechanisms represent direct causal influence. The blue edges are special *terminal* edges — roughly, mechanism edges A~ → B~ that would still be there, even if the object-level variable A was altered so that it had no outgoing edges.\n\nIn the example above, since U has no children, its mechanism edge must be terminal. But the mechanism edge X~ → D~ is not terminal, because if we cut X off from its child U, then the mouse will no longer adapt its decision (because its position won’t affect whether it gets the cheese).\n\nCausal discovery of agents\n==========================\n\nCausal discovery infers a causal graph from experiments involving interventions. In particular, one can discover an arrow from a variable A to a variable B by experimentally intervening on A and checking if B responds, even if all other variables are held fixed.\n\nOur first algorithm uses this technique to discover the mechanised causal graph:\n\n![]()Algorithm 1 takes as input interventional data from the system (mouse and cheese environment) and uses causal discovery to output a mechanised causal graph. See paper for details.Our second algorithm transforms this mechanised causal graph to a game graph:\n\n![]()Algorithm 2 takes as input a mechanised causal graph and maps it to a game graph. 
An ingoing terminal edge indicates a decision, an outgoing one indicates a utility.Taken together, Algorithm 1 followed by Algorithm 2 allows us to discover agents from causal experiments, representing them using CIDs.\n\nOur third algorithm transforms the game graph into a mechanised causal graph, allowing us to translate between the game and mechanised causal graph representations under some additional assumptions:\n\n![]()Algorithm 3 takes as input a game graph and maps it to a mechanised causal graph. A decision indicates an ingoing terminal edge, a utility indicates an outgoing terminal edge.Better safety tools to model AI agents\n======================================\n\nWe proposed the first formal causal definition of agents. Grounded in causal discovery, our key insight is that agents are systems that adapt their behaviour in response to changes in how their actions influence the world. Indeed, our Algorithms 1 and 2 describe a precise experimental process that can help assess whether a system contains an agent.\n\nInterest in causal modelling of AI systems is rapidly growing, and our research grounds this modelling in causal discovery experiments. Our paper demonstrates the potential of our approach by improving the safety analysis of several example AI systems and shows that causality is a useful framework for discovering whether there is an agent in a system — a key concern for assessing risks from AGI.\n\n—\n=\n\n*Excited to learn more? Check out our* [*paper*](https://arxiv.org/abs/2208.08345)*. Feedback and comments are most welcome (please send to: zkenton [at] deepmind [dot] com).*\n\n*Authors: Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott, Tom Everitt*", "url": "https://deepmindsafetyresearch.medium.com/discovering-when-an-agent-is-present-in-a-system-41154de11e7b", "title": "Discovering when an agent is present in a system", "source": "deepmind_blog", "source_type": "blog", "date_published": "2022-08-25", "authors": ["DeepMind Safety Research"], "id": "4840df499d7579a16cefa862cae0860f"} +{"text": "Your Policy Regulariser is Secretly an Adversary\n================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----14684c743d45--------------------------------)[DeepMind Safety Research](/?source=post_page-----14684c743d45--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fyour-policy-regulariser-is-secretly-an-adversary-14684c743d45&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----14684c743d45---------------------post_header-----------)\n\n7 min read·Mar 24, 2022--\n\nListen\n\nShare\n\n*By Rob Brekelmans, Tim Genewein, Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Shane Legg, Pedro A. Ortega*\n\nFull paper, published in TMLR, here: [arxiv.org/abs/2203.12592](https://arxiv.org/abs/2203.12592), [OpenReview](https://openreview.net/forum?id=berNQMTYWZ)\n\n**TL;DR: Policy regularisation can be interpreted as learning a strategy in the face of an imagined adversary; a decision-making principle which leads to robust policies. 
In our** [**recent paper**](https://arxiv.org/abs/2203.12592)**, we analyse this adversary and the generalisation guarantees we get from such a policy.**\n\nPlaying against an imagined adversary leads to robust behaviour\n---------------------------------------------------------------\n\nThe standard model for sequential decision-making under uncertainty is the Markov decision process (MDP). It assumes that actions are under control of the agent, whereas outcomes produced by the environment are random. MDPs are central to reinforcement learning (RL), where transition-probabilities and/or rewards are learned through interaction with the environment. Optimal policies for MDPs select the action that maximises future expected returns in each state, where the expectation is taken over the uncertain outcomes. This, famously, leads to deterministic policies which are brittle — they “[put all eggs in one basket](/model-free-risk-sensitive-reinforcement-learning-5a12ba5ce662)”. If we use such a policy in a situation where the transition dynamics or the rewards are different from the training environment, it will often generalise poorly.\n\nFor instance, we might learn a reward model from human preferences and then train a policy to maximise learned rewards. We then deploy the trained policy. Because of limited data (human queries) our learned reward function can easily differ from the reward function in the deployed environment. Standard RL policies have no robustness against such changes and can easily perform poorly in this setting. Instead, we would like to take into account *during training* that we have some uncertainty about the reward function in the deployed environment. We want to train a policy that works well, even in the worst-case given our uncertainty. To achieve this, we model the environment to not be simply random, but being (partly) controlled by an adversary that tries to anticipate our agent’s behaviour and pick the worst-case outcomes accordingly.\n\n![]()**Imagining an opponent inside the machine.** Though seemingly “just an automaton”, the Mechanical Turk is capable of anticipating the player’s moves. When faced with such an *adaptive* dynamical system, the player needs to choose moves by imagining responses of an adversary controlling the machine and hedging accordingly. Image source: Joseph Racknitz, Public domain, via Wikimedia CommonsImplementing the adversary\n--------------------------\n\nWe play the following game: the agent picks a policy, and after that, a *hypothetical* adversary gets to see this policy and change the rewards associated with each transition — the adversary gets to pick the worst-case reward-function perturbation from a set of hypothetical perturbations. When selecting a policy, our agent needs to anticipate the imagined adversary’s reward perturbations and use a policy that hedges against these perturbations *in advance*. The resulting *fixed* policy is robust against any of the anticipated perturbations. If, for instance, the reward function changes between training and deployment in a way that is covered by the possible perturbations of the imagined adversary, there is no need to adapt the policy after deployment; the deployed robust policy has already taken into account such perturbations in advance.\n\nOptimisation in the face of this kind of adversary puts us in a game-theoretic setting: the agent is trying to maximise rewards while the adversary is trying to minimise them (a mini-max game). 
The adversary introduces “adaptive uncertainty”: the agent cannot simply take the expectation over this uncertainty and act accordingly because the adversary’s actions depend on the agent’s choices. Optimal strategies of the agent in this setting are robust stochastic policies. This robustness is well known via the *indifference principle* in game theory: acting optimally means using a strategy such that the agent becomes indifferent to the opponent’s choices.\n\nWhile the idea of the imagined adversary seems neat for obtaining robust policies, the question that remains is how to perform the game-theoretic optimisation involved? In [our recent paper](https://arxiv.org/abs/2203.12592), we show that when the adversary is of a particular kind, we do not need to worry about this question. **Using mathematical tools from convex duality, it can be shown that standard RL with policy regularisation corresponds exactly to the mini-max game against an imagined adversary.** The policy that solves one formulation also solves the dual version optimally. There is no need to actually implement the adversary, or feed adversarially perturbed rewards to the agent. We can thus interpret policy-regularised RL from this adversarial viewpoint and explain why and how policy regularisation leads to robust policies.\n\nUsing convex duality to characterise the imagined adversary\n-----------------------------------------------------------\n\nThe goal of our paper is to show how policy regularisation is equivalent to optimisation under a particular adversary, and to study that adversary. Using convex duality, it turns out that the adversary we are dealing with, in case of policy regularised RL, has the following properties:\n\n* The adversary applies an additive perturbation to the reward function: \n r’(s, a) = r(s, a)- Δr(s, a)\n* It pays a cost for modifying the agent’s rewards, but only has a limited budget. The cost function depends on the mathematical form of the policy regulariser — we investigate KL- and alpha-divergence regularisation. The budget is related to the regulariser strength.\n* The adversary applies a perturbation to all actions simultaneously (it knows the agent’s distribution over actions, but not which action the agent will sample). This generally leads to reducing rewards for high-probability actions and increasing rewards for low-probability actions.\n\nUsing the dual formulation of the regulariser as an adversary, we can compute worst-case perturbations (Δr⁎). Consider the following example to get an intuition for a single decision step:\n\n![]()**Reward perturbations and policy for a single decision. Left column**: unperturbed environment rewards for one state with six actions available. Agent’s Q-values correspond exactly to these environment rewards. **Second column** (blue): (top) regularised policy, (bottom) virtually unregularised policy (which is quasi deterministic). **Third column**: worst-case reward perturbations for given regulariser strength — finding the optimal (worst-case) perturbation that lies within the limited budget of the adversary is non-trivial and leads to a mini-max game between the adversary and the agent. **Fourth column**: perturbed rewards under worst-case perturbation.The virtually unregularised policy shown above (second column, bottom) almost deterministically selects the highest-reward action; decreasing the reward for this action is very “costly” for the adversary, but even small decreases will have an impact on the expected rewards. 
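This worst case can be checked numerically for a single KL-regularised decision. The sketch below is ours: six toy action values, a regulariser weight `tau`, and the standard closed forms for the softmax policy and log-sum-exp value under KL regularisation towards a uniform prior (see the paper for the general treatment and exact notation). It previews the indifference property discussed next.

```python
import numpy as np

rewards = np.array([1.0, 0.6, 0.3, 0.0, -0.2, -0.5])  # toy values for six actions
tau = 0.5  # regulariser weight; smaller tau means weaker regularisation / weaker adversary

# Regularised optimum for KL(pi || uniform): softmax policy and log-sum-exp value.
policy = np.exp(rewards / tau) / np.sum(np.exp(rewards / tau))
value = tau * np.log(np.mean(np.exp(rewards / tau)))  # equals E_pi[r] - tau*KL(pi || uniform)

# Worst-case additive perturbation (equivalently tau * log(num_actions * policy)):
# positive (reward reduced) for actions more probable than under the uniform prior,
# negative (reward raised) for the rest.
delta_r = rewards - value
perturbed = rewards - delta_r

print(np.round(policy, 3))            # stochastic policy
print(np.round(perturbed, 3))         # every entry equals `value`
print(np.allclose(perturbed, value))  # True: indifference at the regularised objective
```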
Conversely, the reward perturbations for low-probability actions need to be much larger to have an impact (and can be interpreted as being “cheaper” for the adversary). Solutions to the mini-max game between agent and adversary are characterised by indifference, as shown in the fourth column. Under the (optimally) perturbed reward function, the agent has no incentive to change its policy to make any of the actions more or less likely since they all yield the same perturbed reward — the optimal agent receives equal value for each of its actions after accounting for the adversary’s optimal strategy. (Note that this indifference does not mean that the agent chooses actions with uniform probabilities.)\n\nIncreasing the regularisation strength (top row) corresponds to a stronger adversary with an increased budget. This results in a lower value of the indifference point (fourth column y-axis, top, compared to bottom) and a more stochastic policy (second column, top, compared to bottom).\n\nGeneralisation guarantee\n------------------------\n\nThe convex-dual formulation also allows us to characterise the full set of perturbations available to the adversary (the “feasible set” in the paper; we have already seen the worst-case perturbation in the illustration above). This allows us to give a quantitative generalisation guarantee:\n\n\n> **Generalisation guarantee (robustness):** for any perturbation in the feasible set the (fixed) policy is guaranteed to achieve an expected perturbed reward, which is greater than or equal to the regularised objective (expected cumulative discounted reward minus the regulariser term).\n> \n> \n\nThe regulariser strength corresponds to the assumed “powerfulness” of the adversary, and the cost that the adversary has to pay is related to either a KL- or an alpha-divergence with respect to some base-policy. If the base policy is uniform and we use the KL divergence, we recover the widely used entropy-regularised RL. We can compute and visualise the (feasible) set of perturbations that our policy is guaranteed to be robust against for high, medium and low regularisation, shown here:\n\n![]()**Feasible set** (red region)**:** the set of perturbations available to the adversary for various regulariser strengths for a single state and two actions. X- and Y-axis show perturbed rewards for each action respectively. Blue stars show the unperturbed rewards (same value in all plots), red stars indicate the rewards under the worst-case reward perturbation (see paper for more details). For each perturbation that lies in the feasible set, the regularized policy is guaranteed to achieve expected perturbed reward greater than or equal to the value of the regularized objective. As intuitively expected, the feasible set becomes more restricted with decreasing regularizer strength, meaning the resulting policy becomes less robust (particularly against reward decreases). The slope and boundary of the feasible set can be directly linked to the optimal robust policy (action probabilities), see paper for more details.One insight from the formal analysis is that the adversary that corresponds to policy regularisation behaves such that decreases in rewards for some actions are compensated for by increases in rewards for other actions in a very particular way. This is different, e.g. 
from a more intuitive “adversarial attack” on the reward function that only reduces rewards given a certain magnitude.\n\nThe second observation that we want to highlight is that increased robustness does not come for free: the agent gives up some payoff for the sake of robustness. The policy becomes increasingly stochastic with increased regulariser strength, meaning that it does not achieve maximally possible expected rewards under the unperturbed reward function. This is a reminder that choosing a regulariser and its weighting term reflects implicit assumptions about the situations (and perturbations) that the policy will be faced with.\n\nRelated work and further reading\n--------------------------------\n\nIn our current work, we investigate robustness to reward perturbations. We focus on describing KL- or alpha-divergence regularisers in terms of imagined adversaries. It is also possible to choose adversaries that get to perturb the transition probabilities instead, and/or derive regularizers from desired robust sets ([Eysenbach 2021](https://arxiv.org/abs/2103.06257), [Derman 2021](https://arxiv.org/abs/2110.06267)).\n\n* [Read our paper](https://arxiv.org/abs/2203.12592) for all technical details and derivations. All statements from the blog post are made formal and precise (including the generalisation guarantee). We also give more background on convex duality and an example of a 2D, sequential grid-world task. The work is [published in TMLR](https://openreview.net/forum?id=berNQMTYWZ).\n* To build intuition and understanding via the single-step case, see [Pedro Ortega’s](https://arxiv.org/abs/1404.5668) paper, which provides a derivation of the adversarial interpretation and a number of instructive examples.\n* [Esther Derman’s (2021)](https://arxiv.org/abs/2110.06267) recent paper derives practical iterative algorithms to enforce robustness to both reward perturbations (through policy regularization) and changes in environment dynamics (through value regularization). Their approach *derives* a regularization function from a specified robust set (such as a p-norm ball), but can also recover KL or α-divergence regularization with slight differences to our analysis.\n**Changelog**\n\n**8 Aug. 
2022:** Updated with TMLR publication information, updated single step example figure to match final version of paper, updated discussion of related work to give more details to relation to Derman 2021.", "url": "https://deepmindsafetyresearch.medium.com/your-policy-regulariser-is-secretly-an-adversary-14684c743d45", "title": "Your Policy Regulariser is Secretly an Adversary", "source": "deepmind_blog", "source_type": "blog", "date_published": "2022-03-24", "authors": ["DeepMind Safety Research"], "id": "f5843c811a4d2f35dfa55ca7235e776d"} +{"text": "**Avoiding Unsafe States in 3D Environments using Human Feedback**\n==================================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----5869ed9fb94c--------------------------------)[DeepMind Safety Research](/?source=post_page-----5869ed9fb94c--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Favoiding-unsafe-states-in-3d-environments-using-human-feedback-5869ed9fb94c&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----5869ed9fb94c---------------------post_header-----------)\n\n5 min read·Jan 21, 2022--\n\nListen\n\nShare\n\n*By Matthew Rahtz, Vikrant Varma, Ramana Kumar, Zachary Kenton, Shane Legg, and Jan Leike.*\n\nTl;dr: [ReQueST](https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours) is an algorithm for learning objectives from human feedback on hypothetical behaviour. In this work, we scale ReQueST to complex 3D environments, and show that it works even with feedback sourced entirely from real humans. Read our paper at .\n\nLearning about unsafe states\n----------------------------\n\nOnline reinforcement learning has a problem: it must act unsafely in order to learn *not* to act unsafely. For example, if we were to use online reinforcement learning to train a self-driving car, the car would have to drive off a cliff in order to learn *not* to drive off cliffs.\n\n![]()Source: Getty Images.One way that humans solve this problem is by learning from *hypothetical* situations. Our imagination gives us the ability to consider various courses of action without actually having to enact them in the real world. In particular, this allows us to learn about potential sources of danger without having to expose ourselves or others to the concomitant risks.\n\nThe ReQueST algorithm\n---------------------\n\nReQueST (Reward Query Synthesis via Trajectory optimization) is a technique developed to give AI systems the same ability. ReQueST employs three components:\n\n1. **A** **neural environment simulator** — a dynamics model learned from trajectories generated by humans exploring the environment safely. In our work this is a pixel-based dynamics model,\n2. **A reward model**, learned from human feedback on videos of (hypothetical) behaviour in the learned simulator.\n3. **Trajectory optimisation**, so that we can choose hypothetical behaviours to ask the human about that help the reward model learn what’s safe and what’s not (in addition to other aspects of the task) as quickly as possible.\n\nTogether, these three components allow us to learn a reward model based entirely on hypothetical examples ‘imagined’ using the learned simulator. 
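A rough sketch of how the three components interact is given below. All names and interfaces here are placeholders of ours rather than the paper's implementation; the real system uses a pixel-based dynamics model, a learned reward model, and trajectory optimisation, as described above.

```python
from typing import Any, Callable

def request_loop(
    dynamics_model: Any,                    # learned neural simulator (placeholder interface)
    reward_model: Any,                      # reward model with an `update` method (placeholder)
    synthesise: Callable[[Any, Any], Any],  # trajectory optimisation in the learned simulator
    ask_human: Callable[[Any], Any],        # human feedback on an imagined trajectory video
    plan: Callable[[Any, Any, Any], Any],   # model-based control using both learned models
    n_rounds: int = 10,
    queries_per_round: int = 8,
) -> Callable[[Any], Any]:
    """Sketch of reward learning from feedback on hypothetical behaviour only."""
    for _ in range(n_rounds):
        # Imagine informative behaviours inside the learned simulator, chosen to
        # teach the reward model about safety and the task as quickly as possible.
        queries = [synthesise(dynamics_model, reward_model) for _ in range(queries_per_round)]
        # Show the imagined videos to the human and fit the reward model to the labels.
        labels = [ask_human(q) for q in queries]
        reward_model.update(queries, labels)
    # Deploy: act by planning in the learned simulator against the learned reward model,
    # so unsafe behaviour never has to be tried in the real environment first.
    return lambda observation: plan(observation, dynamics_model, reward_model)
```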
If we then use the learned simulator and reward model with a model-based control algorithm, the result is an agent that does what the human wants — in particular, avoiding behaviours the human has indicated is unsafe — without having had to first try those behaviours in the real world!\n\n![]()ReQueST in our work\n-------------------\n\nIn our [latest paper](https://arxiv.org/abs/2201.08102), we ask: is ReQueST viable in a more realistic setting than the simple 2D environments used in the work that introduced ReQueST? In particular, can we scale ReQueST to a complex 3D environment, with imperfect feedback as sourced from real humans rather than procedural reward functions?\n\nIt turns out the answer is: yes!\n\n![]()The video above shows a number of (cherry-picked) example episodes from our ReQueST agent on an apple collection task. The left pane shows ground-truth observations and rewards from the ‘real’ environment. On the right, we see the predictions generated by the learned environment simulator and the reward model, used by the agent to determine which actions to take. On top are predictions of future observations generated by the dynamics model; on the bottom are predictions from a reward model we’ve trained to reward the agent for moving closer and closer to each apple.\n\nResults\n-------\n\nTo quantify the ability of our agent to avoid unsafe states in the ‘real’ environment, we run 100 evaluation episodes in a ‘cliff edge’ environment, where the agent can fall off the edge of the world (which would be unsafe). We test on three sizes of environment, corresponding to different difficulty levels: the larger the environment, the harder it is for the agent to accidentally wander off the edge. Results are as follows.\n\n![]()Note that at test time, in the 100 evaluation episodes, the ReQueST agent barely falls off the edge at all, in the hardest environment falling in only 3% of episodes. A model-free RL algorithm, in contrast, must fall off the edge over 900 times before it learns *not* to fall off the edge. (Note that ReQueST itself does not fall off the edge during training, but for fairness we count times that human contractors fall off the edge (despite being instructed not to) while generating trajectories for the dynamics model as safety violations. We also tried training on only the safe trajectories to confirm that unsafe trajectories were not required.)\n\nIn terms of performance, the ReQueST agent manages to eat about 2 out of the 3 possible apples on average. This is worse than the model-free baseline, which does eat all 3 apples consistently. However, we do not believe that this is reflective of performance achievable with the ReQueST algorithm in principle. Most failures in our experiments could be attributed to some combination of low fidelity from the learned simulator and inconsistent outputs from the reward model, neither of which were the focus of this work. We believe such failure modes could be solved relatively easily with additional work on these components — and the quality of the learned simulation in particular will improve with general progress in generative modelling.\n\nWhat is the significance of these results?\n------------------------------------------\n\nFirst, this research shows that even in realistic environments, it is possible to aim for *zero* safety violations without making major assumptions about the state space. 
With others such as [Luo 2021](https://arxiv.org/abs/2108.01846) starting to aim for a similar target, we hope this is the beginning of a new level of ambition for safe exploration research.\n\nSecond, this work establishes ReQueST as a plausible solution to human-guided safe exploration. We believe this is the particular brand of safe exploration likely to be representative of real-world deployments of AGI: where the constraints of safe behaviour are fuzzy, and therefore *must* be learned from humans because they can’t be specified programmatically. In particular, ReQueST shines in situations where any safety violations at all may incur great cost (e.g. harm to humans).\n\nThird, we have shown the exciting promise of neural environment simulators to make RL more safe. We believe such simulators warrant more attention, given that a) they can be learned from data (rather than handcrafted by experts), and b) the ability to differentiate through them in order to discover situations of interest (as we do in this work during trajectory optimization). Given [current progress](https://www.matthewtancik.com/nerf) in this area, we are excited to see what the future holds.", "url": "https://deepmindsafetyresearch.medium.com/avoiding-unsafe-states-in-3d-environments-using-human-feedback-5869ed9fb94c", "title": "Avoiding Unsafe States in 3D Environments using Human Feedback", "source": "deepmind_blog", "source_type": "blog", "date_published": "2022-01-21", "authors": ["DeepMind Safety Research"], "id": "4c113abe073281e6f9eb0dc9db95998b"} +{"text": "Model-Free Risk-Sensitive Reinforcement Learning\n================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----5a12ba5ce662--------------------------------)[DeepMind Safety Research](/?source=post_page-----5a12ba5ce662--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fmodel-free-risk-sensitive-reinforcement-learning-5a12ba5ce662&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----5a12ba5ce662---------------------post_header-----------)\n\n7 min read·Nov 11, 2021--\n\nListen\n\nShare\n\n*By the Safety Analysis Team: Grégoire Delétang, Jordi Grau-Moya, Markus Kunesch, Tim Genewein, Rob Brekelmans, Shane Legg, and Pedro A. Ortega*\n\nRead our paper here: \n\nWe’re all familiar with risk-sensitive choices. As you look out of the window, you see a few gray clouds and no rain, but decide to take along the umbrella anyway. You’re convinced your application will be successful, but you apply for other positions nevertheless. You hurry to get to an important appointment on time, but avoid the highway just in case there could be a traffic jam. Or you buy a lottery ticket all the while you know the chances of winning are unreasonably slim. All these are instances of risk-sensitive behavior — mostly risk-averse but occasionally risk-seeking too. This means we tend to value uncertain events less/more than their expected value, preempting outcomes that go against our expectations.\n\nWhy risk-sensitivity?\n=====================\n\nMost reinforcement learning algorithms are *risk-neutral*. They collect data from the environment and adapt their policy in order to maximize the *expected return* (sum of future rewards). 
This works well when the environment is small, stable, and controlled (a “closed world”), such as when agents are trained long enough on a very accurate simulator, so as to familiarize themselves entirely with its details and intricacies. Risk-neutral policies, because of the trust they have in their knowledge, can afford to confidently “put all the eggs into the same basket”.\n\n![]()*Risk-neutral policies, because of the trust they have in their knowledge, can afford to confidently “put all the eggs into the same basket”. XVI Century painting “Girl with a basket of eggs” by Joachim Beuckelaer.*These assumptions often do not hold in real-world applications. The reasons abound: the training simulator could be inaccurate, the assumptions wrong, the collected data incomplete, the problem misspecified, the computation limited in its resources, and so on. In addition, there might be competing agents reacting in ways that can’t be anticipated. In such situations, risk-neutral agents are too brittle: it could take a single mistake to destabilize their policy and lead to a catastrophic breakdown. This poses serious AI safety problems.\n\nHow do agents learn risk-sensitive policies?\n============================================\n\nJust as we go about in our daily lives, we can address the previous shortcomings using risk-sensitive policies. Such policies differ from their risk-neutral counterparts in that they value their options differently: not only according to their expected return, but also to the higher-order moments, like the variance of the return, the skewness, etc. Simply stated, risk-sensitive agents care about the *shape* of the distribution of their return and adjust their expectations accordingly. This approach is standard outside of reinforcement learning: in finance for instance, good portfolio managers carefully balance the risks and returns of their investments (see [modern portfolio theory](https://en.wikipedia.org/wiki/Modern_portfolio_theory)).\n\nThere are many ways of building risk-sensitive policies [1]. One can formulate a robust control problem consisting of a two-player game between an agent and an adversary, who chooses the environmental parameters maliciously, and then solve for the Maximin policy [2, 3]. Alternatively, one can change the objective function to reflect the sensitivity to the higher-order moments of the return, for instance by penalizing the expected return with a correction term that increases monotonically with the variance [4, 5].\n\nModel-Free Risk-Sensitive RL\n============================\n\nIn our paper, we introduce a simple model-free update rule for risk-sensitive RL. It is an asymmetric modification of temporal-difference (TD) learning which puts different weight on observations that overshoot the current value estimates (that is, gains) than on those that fall below (losses). More precisely, let *s* and *s’* be two subsequent states the agent experiences, *V(s)* and *V(s’)* their current value estimates, and *R(s)* be the reward observed in state *s*. Then, to obtain risk-sensitive value estimates, the agent can use the following model-free update of *V(s)*:\n\n![]()(differences with TD-learning highlighted in red) where δ is the standard temporal difference error\n\n![]()and the real function σβ is the scaled logistic sigmoid\n\n![]()Furthermore, ɑ is the learning rate, and 0≤ɣ≤1 is the discount rate. The parameter β controls the risk attitude of the agent. 
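In code, the change to standard TD(0) is a single extra factor on the update. The sketch below uses σβ(δ) = 2·sigmoid(βδ), a scaling chosen here so that β = 0 recovers the ordinary TD update exactly (the precise scaling in the paper may differ); the learning rate, discount and symbols otherwise follow the definitions above.

```python
import numpy as np

def sigma_beta(delta: float, beta: float) -> float:
    # Scaled logistic sigmoid: equals 1 everywhere when beta = 0; for beta < 0 it
    # upweights negative TD errors (losses), for beta > 0 positive ones (gains).
    return 2.0 / (1.0 + np.exp(-beta * delta))

def risk_sensitive_td_update(V, s, s_next, reward, alpha=0.1, gamma=0.99, beta=-0.5):
    """One risk-sensitive TD(0) update of the value estimates V (e.g. a dict or array)."""
    delta = reward + gamma * V[s_next] - V[s]               # standard temporal-difference error
    V[s] = V[s] + alpha * sigma_beta(delta, beta) * delta   # asymmetric weighting of the update
    return V[s]
```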
The rationale for this rule is simple: if β<0, *V(s)* will converge to a risk-averse (or pessimistic) value below the expected return because losses have more weight in their updates than gains. Similarly, β>0 will lead to a risk-seeking (optimistic) estimate. Risk-neutrality is obtained with β=0, which recovers standard TD-learning.\n\nIn short, the risk parameter β selects the quantile of the target distribution the value will converge to (although the exact quantile as a function of β depends on the distribution) as shown in the simulation below.\n\n![]()*Estimation of the value for Gaussian- (left) and uniformly-distributed (right) observed target values (grey dots). Each plots shows 10 estimation processes (9 in pink, 1 in red) per choice of the risk parameter β ∊ {-4, -2, 0, +2, +4}. Notice how the estimate settles on different quantiles.*The following grid-world example illustrates how risk-sensitive estimates affect the resulting policies. The task of the agent is to navigate to a goal location containing a reward pill while avoiding stepping into the river. This is made more challenging by the presence of a strong wind that pushes the agent into a random direction 50% of the time. The agent was trained using five risk-sensitivity parameter settings ranging from risk-averse (β = -0.8) to risk-seeking (β = +0.8).\n\n![]()The bar plots in (b) show the average return (blue) and the percentage of time spent inside of the water (red) of the resulting policies. The best average return is attained by the risk-neutral policy. However, the risk-sensitive policies (low β) are more effective in avoiding stepping into the river than the risk-seeking policies (high β).\n\nThe different choices of the risk-sensitivity parameter β also lead to three qualitatively diverse types of behaviors. In panel ©, which illustrates the various paths taken by the agent when there is no wind, we observe three classes of policies: a *cautious policy* (β = -0.8) that takes the long route away from the water to reach the goal; a risk-neutral policy (β ∊ {-0.4, 0.0, +0.4}) taking the middle route, only a single step away from the water; and finally, an *optimistic policy* (β =+0.8) which attempts to get to the goal taking a straight route.\n\nDopamine Signals, Free Energy, and Imaginary Foes\n=================================================\n\nThere are a few other interesting properties about the risk-sensitive update rule:\n\n* The risk-sensitive update rule can be linked to findings in computational neuroscience [6, 7]. Dopamine neurons appear to signal a reward prediction error similar as in temporal difference learning. Further studies also suggest that humans learn differently in response to positive and negative reward prediction errors, with higher learning rates for negative errors. This is consistent with the risk-sensitive learning rule.\n* In the special case when the distribution of the target value is Gaussian, then the estimate converges precisely to the free energy with inverse temperature β. Using the free energy as an optimization objective (or equivalently, using exponentially-transformed rewards) has a long tradition in control theory as an approach to risk-sensitive control [8].\n* One can show that optimizing the free energy is equivalent to playing a game against an imaginary adversary who attempts to change the environmental rewards against the agent’s expectations. 
Thus, a risk-averse agent can be thought of as choosing its policy by playing out imaginary pessimistic scenarios.\n\nFinal thoughts\n==============\n\nTo deploy agents that react robustly to unforeseen situations we need to make them risk-sensitive. Unlike risk-neutral policies, risk-sensitive policies implicitly admit that their assumptions about the environment could be mistaken and adjust their actions accordingly. We can train risk-sensitive agents in a simulator and have some confidence about their performance under unforeseen events.\n\nThrough our work we show that incorporating risk-sensitivity into model free agents is straightforward: all it takes is a small modification of the temporal difference error which assigns asymmetric weights to the positive and negative updates.\n\nReferences\n==========\n\n[1] Coraluppi, S. P. (1997). Optimal control of Markov decision processes for performance and robustness. University of Maryland, College Park.\n\n[2] Nilim, A. and El Ghaoui, L. (2005). Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780–798.\n\n[3] Tamar, A., Mannor, S., and Xu, H. (2014). Scaling up robust MDPs using function approximation. In International Conference on Machine Learning, pages 181–189. PMLR.\n\n[4] Galichet, N., Sebag, M., and Teytaud, O. (2013). Exploration vs exploitation vs safety: Risk-aware multi-armed bandits. In Asian Conference on Machine Learning, pages 245–260. PMLR.\n\n[5] Cassel, A., Mannor, S., and Zeevi, A. (2018). A general approach to multi-armed bandits under risk criteria. In Conference On Learning Theory, pages 1295–1306. PMLR.\n\n[6] Niv, Y., Edlund, J. A., Dayan, P., and O’Doherty, J. P. (2012). Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. Journal of Neuroscience, 32(2):551–562.\n\n[7] Gershman, S. J. (2015). Do learning rates adapt to the distribution of rewards? Psychonomic Bulletin & Review, 22(5):1320–1327.\n\n[8] Howard, R. A. and Matheson, J. E. (1972). Risk-sensitive Markov decision processes. 
Management science, 18(7):356–369.", "url": "https://deepmindsafetyresearch.medium.com/model-free-risk-sensitive-reinforcement-learning-5a12ba5ce662", "title": "Model-Free Risk-Sensitive Reinforcement Learning", "source": "deepmind_blog", "source_type": "blog", "date_published": "2021-11-11", "authors": ["DeepMind Safety Research"], "id": "2f2b68da6d6781e4c79fa12fe46bfe7e"} +{"text": "Progress on Causal Influence Diagrams\n=====================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----a7a32180b0d1--------------------------------)[DeepMind Safety Research](/?source=post_page-----a7a32180b0d1--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fprogress-on-causal-influence-diagrams-a7a32180b0d1&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----a7a32180b0d1---------------------post_header-----------)\n\n12 min read·Jun 30, 2021--\n\nListen\n\nShare\n\n*By Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg*\n\n*Crossposted to the* [*alignmentforum*](https://www.alignmentforum.org/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams)\n\nAbout 2 years ago, we released the [first](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486) [few](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) [papers](https://arxiv.org/abs/1906.08663) on understanding agent incentives using causal influence diagrams. This blog post will summarize progress made since then.\n\nWhat are causal influence diagrams?\n===================================\n\nA key problem in AI alignment is understanding agent incentives. Concerns have been raised that agents may be incentivized to [avoid correction](https://intelligence.org/files/Corrigibility.pdf), [manipulate users](https://www.youtube.com/watch?v=ZkV7anCPfaY), or [inappropriately influence their learning](https://arxiv.org/abs/2004.13654). This is particularly worrying as training schemes often shape incentives in [subtle](https://arxiv.org/abs/1611.08219) and [surprising](https://arxiv.org/abs/2009.09153) ways. For these reasons, we’re developing a formal theory of incentives based on causal influence diagrams (CIDs).\n\nHere is an example of a CID for a one-step Markov decision process (MDP). The random variable S₁ represents the state at time 1, A₁ represents the agent’s action, S₂ the state at time 2, and R₂ the agent’s reward.\n\n![]()The action A₁ is modeled with a decision node (square) and the reward R₂ is modeled as a utility node (diamond), while the states are normal chance nodes (rounded edges). Causal links specify that S₁ and A₁ influence S₂, and that S₂ determines R₂. 
The information link S₁ → A₁ specifies that the agent knows the initial state S₁ when choosing its action A₁.\n\nIn general, random variables can be chosen to represent agent decision points, objectives, and other relevant aspects of the environment.\n\nIn short, a CID specifies:\n\n* Agent decisions\n* Agent objectives\n* Causal relationships in the environment\n* Agent information constraints\n\nThese pieces of information are often essential when trying to figure out an agent’s incentives: how an objective can be achieved depends on how it is causally related to other (influenceable) aspects in the environment, and an agent’s optimization is constrained by what information it has access to. In many cases, the qualitative judgements expressed by a (non-parameterized) CID suffice to infer important aspects of incentives, with minimal assumptions about implementation details. Conversely, it has [been shown](https://arxiv.org/abs/1910.10362) that it is necessary to know the causal relationships in the environment to infer incentives, so it’s often impossible to infer incentives with less information than is expressed by a CID. This makes CIDs natural representations for many types of incentive analysis.\n\nOther advantages of CIDs is that they build on well-researched topics like [causality](https://www.amazon.co.uk/Causality-Judea-Pearl/dp/052189560X) and [influence diagrams](https://arxiv.org/abs/cs/9512104), and so allows us to leverage the deep thinking that’s already been done in these fields.\n\nIncentive Concepts\n==================\n\nHaving a unified language for objectives and training setups enables us to develop generally applicable concepts and results. We define four such concepts in [Agent Incentives: A Causal Perspective](https://arxiv.org/abs/2102.01685) (AAAI-21):\n\n* **Value of information**: what does the agent want to know before making a decision?\n* **Response incentive**: what changes in the environment do optimal agents respond to?\n* **Value of control**: what does the agent want to control?\n* **Instrumental control incentive**: what is the agent both interested and able to control?\n\nFor example, in the one-step MDP above:\n\n* For S₁, an optimal agent would act differently (i.e. respond) if S₁ changed, and would value knowing and controlling S₁, but it cannot influence S₁ with its action. So S₁ has value of information, response incentive, and value of control, but not an instrumental control incentive.\n* For S₂ and R₂, an optimal agent could not respond to changes, nor know them before choosing its action, so these have neither value of information nor a response incentive. 
But the agent would value controlling them, and is able to influence them, so S₂ and R₂ have value of control and instrumental control incentive.\n\n![]()In the paper, we prove sound and complete graphical criteria for each of them, so that they can be recognized directly from a graphical CID representation (see previous [blog](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486) [posts](https://towardsdatascience.com/new-paper-the-incentives-that-shape-behaviour-d6d8bb77d2e4)).\n\nValue of information and value of control are classical concepts that have been around for a long time (we contribute to the graphical criteria), while response incentives and instrumental control incentives are new concepts that we have found useful in several applications.\n\nFor readers familiar with [previous](https://arxiv.org/abs/1902.09980) [iterations](https://arxiv.org/abs/2001.07118) of this paper, we note that some of the terms have been updated. **Instrumental control incentives** were previously called just “control incentives”. The new name emphasizes that it’s control as an instrumental goal, as opposed to control arising as a side effect (or [due to mutual information](https://www.alignmentforum.org/posts/67a8C6KsKn2NyW2Ry/counterfactual-control-incentives)). **Value of information** and **value of control** were previously called “observation incentives” and “intervention incentives”, respectively.\n\nUser Interventions and Interruption\n===================================\n\nLet us next turn to some recent applications of these concepts. In [How RL Agents Behave when their Actions are Modified](https://arxiv.org/abs/2102.07716) (AAAI-21), we study how different RL algorithms react to user interventions such as interruptions and over-ridden actions. For example, [Saunders et al.](https://arxiv.org/abs/1707.05173#:~:text=Trial%20without%20Error%3A%20Towards%20Safe%20Reinforcement%20Learning%20via%20Human%20Intervention,-William%20Saunders%2C%20Girish&text=AI%20systems%20are%20increasingly%20applied,that%20involve%20interaction%20with%20humans.&text=For%20model%2Dfree%20reinforcement%20learning,way%20to%20prevent%20all%20catastrophes.) developed a method for safe exploration where a user overrides dangerous actions. Alternatively, agents might get interrupted if analysis of their “thoughts” (or internal activations) suggest they are planning something dangerous. How do such interventions affect the incentives of various RL algorithms?\n\nFirst, we formalize action-modification by extending MDPs with a parameter PA that describes action-modification. We then model such **modified-action MDPs** with a CID:\n\n![]()Here we model the agent’s policy Π as the decision rather than the actions Ai, as the latter are not under full control of the agent, but can also be influenced by the action-modification PA (as represented by arrows PA → Aᵢ and Π → Aᵢ). The agent might know the interruption scheme PA from interruptions during training, so we include an information link PA → Π.\n\nWe analyze different prototypical RL algorithms in terms of the causal assumptions they make on the environment:\n\n* **Black-box optimization** algorithms such as **evolutionary strategies** take into account all causal relationships.\n* In contrast, the update rule of **Q-learning** effectively assumes that the next action will be taken optimally, with no action-modification. This means that Q-learners ignore causal effects PA → Aᵢ. 
Similarly, **SARSA** with the action chosen by the agent in the TD-update assumes that it will be in control of its next action. We call this version **virtual SARSA**.\n* SARSA based on the modified action (**empirical SARSA**) ignores the effect of action-modification on the current action, but takes into account the effect on subsequent actions.\n\nWe can represent the differences with the following CIDs. (The extra policy node Π’ for empirical SARSA represents that action are optimized separately rather than jointly.)\n\n![]()The CIDs can be used to understand how the different algorithms adapt to interruption, via a graphical criterion for **path-specific response incentives**. Black-box optimization tries to both obscure its policy and to disable its off-switch, whereas Q-learning and Virtual SARSA do neither. Empirical SARSA tries to disable the off-switch, but does not try to obscure its policy.\n\nWe verify these results empirically in the relevant [AI safety gridworlds](https://arxiv.org/abs/1711.09883), as well as in one new environment where the agent has to behave well in simulation to be deployed in reality, where black-box optimizers exhibit “[treacherous turn](https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn)”-like behavior. The results are a generalization of [Orseau and Armstrong](https://deepmind.com/research/publications/safely-interruptible-agents)’s interruptibility results for Q-learning and SARSA.\n\nZooming out, these results are a good example of causal analysis of ML algorithms. Different design choices translate into different causal assumptions, which in turn determine the incentives. In particular, the analysis highlights why the different incentives arise, thus deepening our understanding of how behavior is shaped.\n\nReward Tampering\n================\n\nAnother AI safety problem that we have studied with CIDs is **reward tampering**. Reward tampering can take several different forms, including the agent:\n\n* rewriting the source code of its implemented reward function (“wireheading”),\n* influencing users that train a learned reward model (“feedback tampering”),\n* manipulating the inputs that the reward function uses to infer the state (“RF-input tampering / delusion box problems”).\n\nFor example, the problem of an agent influencing its reward function may be modeled with the following CID, where RFᵢ represent the agent’s reward function at different time steps, and the red links represent an undesirable instrumental control incentive.\n\n![]()In [Reward Tampering Problems and Solutions](https://rdcu.be/ckWLC) (published in the well-respected philosophy journal Synthese) we model all these different problems with CIDs, as well as a range of proposed solutions such as current-RF optimization, [uninfluenceable reward learning](https://arxiv.org/abs/2004.13654#:~:text=We%20show%20that%20this%20comes,for%20all%20relevant%20reward%20functions).), and [model-based utility functions](https://arxiv.org/abs/1111.3934). Interestingly, even though these solutions were initially developed independently of formal causal analysis, they all avoid undesirable incentives by cutting some causal links in a way that avoids instrumental control incentives.\n\nBy representing these solutions in a causal framework, we can get a better sense of why they work, what assumptions they require, and how they relate to each other. 
For example, current-RF optimization and model-based utility functions both formulate a modified objective in terms of an observed random variable from a previous time step, whereas uninfluenceable reward learning (such as [CIRL](https://arxiv.org/abs/1606.03137)) uses a latent variable:\n\n![]()As a consequence, the former methods must deal with time-inconsistency and a lack of incentive to learn, while the latter requires inference of a latent variable. It will likely depend on the context whether one is preferable to the other, or whether a combination is better than either alone. Regardless, having distilled the key ideas should put us in a better position to flexibly apply the insights in novel settings.\n\nWe refer to the [previous blog post](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) for a longer summary of current-RF optimization. The paper itself has been significantly updated since previously shared preprints.\n\nMulti-Agent CIDs\n================\n\nMany interesting incentive problems arise when multiple agents interact, each trying to optimize their own reward while simultaneously influencing each other’s payoff. In [Equilibrium Refinements in Multi-Agent Influence Diagrams](https://arxiv.org/abs/2102.05008) (AAMAS-21), we build on the [seminal work by Koller and Milch](http://people.csail.mit.edu/milch/papers/geb-maid.pdf) to lay foundations for understanding multi-agent situations with multi-agent CIDs (MACIDs).\n\nFirst, we relate MACIDs to [extensive-form games](https://en.wikipedia.org/wiki/Extensive-form_game) (EFGs), currently the most popular graphical representations of games. While EFGs sometimes offer more natural representations of games, they have some significant drawbacks compared to MACIDs. In particular, EFGs can be exponentially larger, don’t represent conditional independencies, and lack random variables to apply incentive analysis to.\n\nAs an example, consider a game where a store (Agent 1) decides (D¹) whether to charge full (F) or half (H) price for a product depending on their current stock levels (X), and a customer (Agent 2) decides (D²) whether to buy it (B) or pass (P) depending on the price and how much they want it (Y). The store tries to maximize their profit U¹, which is greater if the customer buys at a high price. If they are overstocked and the customer doesn’t buy, then they have to pay extra rent. The customer is always happy to buy at half price, and sometimes at full price (depending on how much they want the product).\n\nThe EFG representation of this game is quite large, and uses **information sets** (represented with dotted arcs) to capture the facts that the store doesn’t know how much the customer wants the product, and that the customer doesn’t know the store’s current stock levels:\n\n![]()In contrast, the MACID representation is significantly smaller and clearer. Rather than relying on information sets, the MACID uses information links (dotted edges) to represent the limited information available to each player:\n\n![]()Another aspect made clearer by the MACID is that, for any fixed customer decision, the store’s payoff is independent of how much the customer wanted the product (there’s no edge Y→U¹). Similarly, for any fixed product price, the customer’s payoff is independent of the store’s stock levels (no edge X→U²). 
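These missing edges can be read directly off the graph structure. As a rough sketch (the edge list below is a reconstruction from the verbal description above, not copied from the paper's figure):

```python
import networkx as nx

# Hypothetical reconstruction of the store-customer MACID's structure:
# X = stock level, Y = customer's desire, D1/D2 = decisions, U1/U2 = utilities.
macid = nx.DiGraph([
    ("X", "D1"),                              # information link: store sees its stock
    ("D1", "D2"), ("Y", "D2"),                # information links: customer sees price and own desire
    ("X", "U1"), ("D1", "U1"), ("D2", "U1"),  # store's profit (extra rent depends on stock)
    ("Y", "U2"), ("D1", "U2"), ("D2", "U2"),  # customer's payoff (depends on desire)
])

assert not macid.has_edge("Y", "U1")  # store's payoff ignores how much the customer wants the product
assert not macid.has_edge("X", "U2")  # customer's payoff ignores the store's stock level
```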
In the EFG, these independencies could only be inferred by looking carefully at the payoffs.\n\nOne benefit of MACIDs explicitly representing these conditional independencies is that more parts of the game can be identified as independently solvable. For example, in the MACID, the following independently solvable component can be identified. We call such components **MACID subgames**:\n\n![]()Solving this subgame for any value of D¹ reveals that the customer always buys when they really want the product, regardless of whether there is a discount. This knowledge then makes it simpler to compute the optimal strategy for the store. In contrast, in the EFG the information sets prevent any proper subgames from being identified. Therefore, solving games using a MACID representation is often faster than using an EFG representation.\n\nFinally, we relate various equilibrium concepts between MACIDs and EFGs. The most famous type of equilibrium is the **Nash equilibrium**, which occurs when no player can unilaterally improve their payoff. An important refinement of the Nash equilibrium is the [**subgame perfect equilibrium**](https://en.wikipedia.org/wiki/Subgame_perfect_equilibrium)**,** which rules out non-credible threats by requiring that a Nash equilibrium is played in every subgame. An example of a non-credible threat in the store-customer game would be the customer “threatening” the store that they will only buy at a discount. The threat is **non-credible**, since the best move for the customer is to buy the product even at full price, if they really want it. Interestingly, only the MACID version of subgame perfection is able to rule such threats out, because only in the MACID is the customer’s choice recognized as a proper subgame.\n\nUltimately, we aim to use MACIDs to analyze incentives in multi-agent settings. With the above observations, we have put ourselves in a position to develop a theory of multi-agent incentives that is properly connected to the broader game theory literature.\n\nSoftware\n========\n\nTo help us with our research on CIDs and incentives, we’ve developed a Python library called [PyCID](https://github.com/causalincentives/pycid), which offers:\n\n* A convenient syntax for defining CIDs and MACIDs,\n* Methods for computing optimal policies, Nash equilibria, d-separation, interventions, probability queries, incentive concepts, graphical criteria, and more,\n* Random generation of (MA)CIDs, and pre-defined examples.\n\nNo setup is necessary, as the [tutorial notebooks](https://colab.research.google.com/github/causalincentives/pycid/blob/master/notebooks/CID_Basics_Tutorial.ipynb) can be run and extended directly in the browser, thanks to Colab.\n\nWe’ve also made available a [LaTeX package](https://github.com/causalincentives/cid-latex) for drawing CIDs, and have launched [causalincentives.com](https://causalincentives.com/) as a place to collect links to the various papers and software that we’re producing.\n\nLooking ahead\n=============\n\nUltimately, we hope to contribute to a more careful understanding of how design, training, and interaction shape an agent’s behavior. 
We hope that a precise and broadly applicable language based on CIDs will enable clearer reasoning and communication on these issues, and facilitate a cumulative understanding of how to think about and design powerful AI systems.\n\nFrom this perspective, we find it encouraging that several other research groups have adopted CIDs to:\n\n* Analyze the incentives of [unambitious agents](https://arxiv.org/pdf/1905.12186.pdf) to break out of their box,\n* Explain [uninfluenceable reward learning](https://arxiv.org/abs/2004.13654), and clarify its desirable properties (see also Section 3.3 in the [reward tampering paper](https://rdcu.be/ckWLC)),\n* Develop a novel framework to make agents [indifferent](https://link.springer.com/chapter/10.1007%2F978-3-030-52152-3_21) to human interventions.\n\nWe’re currently pursuing several directions of further research:\n\n* Extending the general incentive concepts to multiple decisions and multiple agents.\n* Applying them to fairness and other AGI safety settings.\n* Analysing limitations that have been identified with the work so far: firstly, considering the issues raised by [Armstrong and Gorman](https://www.alignmentforum.org/posts/67a8C6KsKn2NyW2Ry/counterfactual-control-incentives), and secondly, looking at broader concepts than instrumental control incentives, as influence can also be incentivized as a side-effect of an objective.\n* Probing further at their philosophical foundations, and establishing a clearer semantics for decision and utility nodes.\n\nHopefully we’ll have more news to share soon!\n\n*We would like to thank Neel Nanda, Zac Kenton, Sebastian Farquhar, Carolyn Ashurst, and Ramana Kumar for helpful comments on drafts of this post.*\n\n**List of recent papers**:\n==========================\n\n* [Agent Incentives: A Causal Perspective](https://arxiv.org/abs/2102.01685)\n* [How RL Agents Behave When Their Actions Are Modified](https://arxiv.org/abs/2102.07716)\n* [Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective](https://rdcu.be/ckWLC)\n* [Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice](https://arxiv.org/abs/2102.05008)\n\nSee also [causalincentives.com](https://causalincentives.com/)", "url": "https://deepmindsafetyresearch.medium.com/progress-on-causal-influence-diagrams-a7a32180b0d1", "title": "Progress on Causal Influence Diagrams", "source": "deepmind_blog", "source_type": "blog", "date_published": "2021-06-30", "authors": ["DeepMind Safety Research"], "id": "fe5e37656d3c8260c8d998880eedcbb2"} +{"text": "An EPIC way to evaluate reward functions\n========================================\n\n9 min read·Apr 16, 2021\n\n*By Adam Gleave, Michael Dennis, Shane Legg, Stuart Russell and Jan Leike.*\n\n**TL;DR**: [Equivalent-Policy Invariant Comparison 
(EPIC)](https://arxiv.org/pdf/2006.13900.pdf) provides a fast and reliable way to compute how similar a pair of reward functions are to one another. EPIC can be used to benchmark reward learning algorithms by comparing learned reward functions to a ground-truth reward. EPIC is up to 1000 times faster than alternative evaluation methods, and requires little to no hyperparameter tuning. Moreover, we show both theoretically and empirically that reward functions judged as similar by EPIC induce policies with similar returns, even in unseen environments.\n\n![]()***Figure 1****: EPIC compares reward functions Rᵤ* and *Rᵥ by first mapping them to canonical representatives and then computing the Pearson distance between the canonical representatives on a coverage distribution 𝒟. Canonicalization removes the effect of potential shaping, and Pearson distance is invariant to positive affine transformations.*Specifying a reward function can be one of the trickiest parts of applying RL to a problem. Even seemingly simple robotics tasks such as peg insertion can require first [training an image classifier](https://arxiv.org/abs/1810.01531) to use as a reward signal. Tasks with a more nebulous objective like [article summarization](https://openai.com/blog/learning-to-summarize-with-human-feedback/) require collecting large amounts of human feedback in order to learn a reward function. The difficulty of reward function specification will only continue to grow as RL is increasingly applied to complex and user-facing applications such as [recommender systems](https://arxiv.org/abs/1811.00260), [chatbots](https://arxiv.org/abs/1609.00777) and [autonomous vehicles](https://www.moralmachine.net/).\n\n![]()***Figure 2****: There exist a variety of techniques to specify a reward function. EPIC can help you decide which one works best for a given task.*This challenge has led to the development of a variety of methods for specifying reward functions. In addition to hand-designing a reward function, today it is possible to learn a reward function from data as varied as the [initial state](https://arxiv.org/abs/1902.04198), [demonstrations](https://ai.stanford.edu/~ang/papers/icml00-irl.pdf), [corrections](https://bair.berkeley.edu/blog/2018/02/06/phri/), [preference comparisons](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/), and [many other data sources](https://arxiv.org/abs/2002.04833). Given this dizzying array of possibilities, how should we choose which method to use?\n\nEPIC is a new way to evaluate reward functions and reward learning algorithms by comparing how similar reward functions are to one another. We anticipate two main use cases for EPIC:\n\n* **Benchmarking** in tasks where the reward function specification problem has already been solved, giving a “ground-truth” reward. We can then compare learned rewards directly to this “ground-truth” to gauge the performance of a new reward learning method.\n* **Validation** of reward functions prior to deployment. In particular, we often have a collection of reward functions specified by different people, methods or data sources. If multiple distinct approaches produce similar reward functions (i.e. a low EPIC distance to one another) then we can have more confidence that the resulting reward is correct. 
More generally, if two reward functions have low EPIC distance to one another, then information we gain about one (such as by using [interpretability](https://arxiv.org/abs/2012.05862) methods) also helps us understand the other.\n\nPerhaps surprisingly, there is (to the best of our knowledge) no prior work that focuses on directly comparing reward functions. Instead, most prior work has used RL to train a policy on learned rewards, and then evaluated the resulting policies. Unfortunately, RL training is computationally expensive. Moreover, it is unreliable: if the policy performs poorly, we cannot tell if this is due to the learned reward failing to match user preferences, or the RL algorithm failing to optimize the learned reward.\n\nA more fundamental issue with evaluation based on RL training is that two rewards can induce identical policies in an evaluation environment, yet lead to totally different behavior in the deployment environment. Suppose *all* states *{X, Y, Z}* are reachable in the evaluation environment. If the user prefers *X > Y > Z*, but the agent instead learns *X > Z > Y*, the agent will still go to the correct state *X* during evaluation. But if *X* is no longer reachable at deployment, the previously reliable agent would misbehave by going to the least-favoured state *Z*.\n\nBy contrast, EPIC is fast, reliable and can predict return even in unseen deployment environments. Let’s look at how EPIC is able to achieve this.\n\nIntroducing EPIC\n================\n\nOur method, Equivalent-Policy Invariant Comparison (EPIC), works by comparing reward functions directly, without training a policy. It is designed to be invariant to two transformations that never change the optimal policy:\n\n1. [Potential shaping](https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/readings/NgHaradaRussell-shaping-ICML1999.pdf), which moves reward earlier or later in time.\n2. Positive affine transformations, adding a constant or rescaling by a positive factor.\n\nEPIC is computed in the two stages illustrated in Figure 1 (above), mirroring these invariants:\n\n1. First, we canonicalize the rewards being compared. Rewards that are the same up to potential shaping are mapped to the same canonical representative.\n2. Next, we compute the Pearson correlation between the canonical representatives, over some coverage distribution *𝒟* over transitions. Pearson correlation is invariant to positive affine transformations.\n\nFinally, we transform the correlation, a measure of similarity, into a distance.\n\nWhy use EPIC?\n=============\n\nEPIC satisfies important properties you’d expect from a distance, namely symmetry and the triangle inequality. This gives the distance an intuitive interpretation, and allows it to be used in algorithms that rely on distance functions.\n\n![]()***Figure 3****: Runtime needed to perform pairwise comparison of 5 reward functions in a simple continuous control task.*Furthermore, EPIC is over 1000 times faster than comparison using RL training when tested in a simple point mass continuous control task. It’s also easy to use: the only hyperparameters that need to be set are the coverage distribution and the number of samples to take.\n\nA key theoretical result is that low EPIC distance predicts similar policy returns. 
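Before stating that result precisely, here is a minimal sketch of the two-stage comparison for tabular rewards, assuming (for simplicity) a uniform coverage distribution over transitions; the paper's canonicalisation is defined for general distributions and function approximation, so treat this only as an illustration:

```python
import numpy as np

def canonicalise(R, gamma):
    """Map a tabular reward R[s, a, s'] to a shaping-invariant representative
    (uniform coverage distribution; a simplified form of the paper's construction)."""
    mean_out = R.mean(axis=(1, 2))   # average reward leaving each state
    mean_all = R.mean()              # average reward over all transitions
    s = np.arange(R.shape[0])[:, None, None]
    s_next = np.arange(R.shape[2])[None, None, :]
    return R + gamma * mean_out[s_next] - mean_out[s] - gamma * mean_all

def epic_distance(R_u, R_v, gamma=0.99):
    """Pearson distance between canonicalised rewards: 0 for equivalent rewards."""
    x_u = canonicalise(R_u, gamma).ravel()
    x_v = canonicalise(R_v, gamma).ravel()
    rho = np.clip(np.corrcoef(x_u, x_v)[0, 1], -1.0, 1.0)  # invariant to positive affine maps
    return np.sqrt((1.0 - rho) / 2.0)                       # turn a similarity into a distance

rng = np.random.default_rng(0)
R = rng.normal(size=(5, 3, 5))
potential = rng.normal(size=5)
shaped = R + 0.99 * potential[None, None, :] - potential[:, None, None]
print(epic_distance(R, shaped))         # ~0: potential shaping is ignored
print(epic_distance(R, 3.0 * R + 2.0))  # ~0: rescaling and shifting are ignored
print(epic_distance(R, rng.normal(size=(5, 3, 5))))  # large: genuinely different reward
```

The same two-stage recipe extends to sampled transitions from an arbitrary coverage distribution 𝒟; with that picture in mind, we can return to the theoretical guarantee.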
Specifically, EPIC distance bounds the difference in the return *G* of optimal policies *π\\*(R*ᵤ*) and π\\*(Rᵥ)* for rewards *R*ᵤ and *Rᵥ* (see [Theorem 4.9](https://arxiv.org/pdf/2006.13900.pdf#page=5)):\n\n![]()where *K(𝒟)* is a constant that depends on the support of the EPIC coverage distribution. This bound holds for optimal policies computed in *any* MDP that has the same state and action spaces as the rewards *R*ᵤ and *Rᵥ*. In that sense, EPIC gives a much stronger performance guarantee than policy evaluation in a single instance of a task.\n\nHowever, this result does depend on the distribution 𝒟 having adequate coverage over the transitions visited by the optimal policies *π\\*(R*ᵤ*) and π\\*(Rᵥ)*. In general, we recommend choosing 𝒟 to have coverage on all plausible transitions. This ensures adequate support for *π\\*(R*ᵤ*) and π\\*(Rᵥ)*, without wasting probability mass on physically impossible transitions that will never occur.\n\n![]()***Figure 4****: EPIC distance between rewards is similar across different distributions (colored bars), while baselines (NPEC and ERC) are highly sensitive to distribution. The coverage distribution consists of rollouts from: a policy that takes actions* ***uniform****ly at random, an* ***expert*** *optimal policy and a* ***mixed*** *policy that randomly transitions between the other two.*While EPIC’s theoretical guarantee is dependent on the coverage distribution 𝒟, we find that in practice EPIC is fairly robust to the exact choice of distribution: a variety of reasonable distributions give similar results. The figure above illustrates this: EPIC (left) computes a similar distance under different coverage distributions (colored bars), while baselines NPEC (middle) and ERC (right) exhibit significant variance.\n\n![]()***Figure 5****: The* PointMaze *environment: the blue agent must reach the green goal by navigating around the wall that is on the left at train time and on the right at test time.*Even if distance is consistent across reasonable coverage distributions, the constant factor *K(𝒟)* could still be very large, making the bound weak or even vacuous. We therefore decided to test empirically whether EPIC distance predicts RL policy return, in a simple continuous control task [PointMaze](https://arxiv.org/pdf/1710.11248.pdf#page=7). The agent must navigate to a goal by moving around a wall that is on the left during training, and on the right during testing. The ground-truth reward function is the same in both variants, and so we might hope that learned reward functions will transfer. Note that optimal policies do not transfer: the optimal policy in the train environment will collide with the wall when deployed in the test environment, and vice-versa.\n\n![]()***Figure 6****: EPIC distance (blue)**predicts policy regret in the train (orange) and test (green) tasks across three different reward learning methods.*We train reward functions using 1) regression to reward labels, 2) preference comparison between trajectories and 3) inverse reinforcement learning (IRL) on demonstrations. The training data is generated synthetically from a ground-truth reward. We find that regression and preference comparison learn a reward function with low EPIC distance to the ground-truth. Policies trained with these rewards achieve high return, and so low regret. By contrast, IRL has high EPIC distance to the ground-truth and high regret. 
This is in line with our theoretical result that EPIC distance predicts regret.\n\nConclusions\n===========\n\nWe believe that EPIC distance will be an informative addition to the evaluation toolbox for reward learning methods. EPIC is significantly faster than RL training, and is able to predict policy return even in unseen environments. We would therefore encourage researchers to report EPIC in addition to policy-based metrics when benchmarking reward learning algorithms.\n\nIt is important that our ability to accurately specify complex objectives keeps pace with the growth in reinforcement learning’s optimization power. Future AI systems such as virtual assistants or household robots will operate in complex open-ended environments involving extensive human interaction. Reaching human-level performance in these domains requires more than just human-level intelligence: we must also teach our AI systems the many nuances of human values. We expect techniques like EPIC to play a key role in building advanced and aligned AI, by letting us evaluate different models of human values trained via multiple techniques or data sources.\n\nIn particular, we believe EPIC can significantly improve the benchmarking of reward learning algorithms. By directly comparing the learned reward to a ground-truth reward, EPIC can provide a precise measurement of reward function quality, including its ability to transfer to new environments. By contrast, RL training cannot distinguish between learned rewards that capture user preferences, and those that merely happen to incentivize the right behaviour in the evaluation environment (but may fail in other environments). Moreover, RL training is slow and unreliable.\n\n![]()***Figure 7****: Evaluation by RL training concludes the reward function was faulty* ***after*** *destroying the vase. EPIC can warn you the reward function differs from others* ***before*** *you train an agent.*Additionally, EPIC can help to validate a reward function prior to deployment. Using EPIC is particularly advantageous in settings where failures are unacceptable, since EPIC can compare reward functions offline on a pre-collected data set. By contrast, RL training requires taking actions in the environment to maximize the new reward. These actions could have irreversible consequences, such as a household robot knocking over a vase on the way to the kitchen if the learned reward incentivized speed but failed to penalize [negative side-effects](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107).\n\nHowever, EPIC does have some drawbacks, and so is best used in conjunction with other methods. Most significantly, EPIC can only compare reward functions to one another, and cannot tell you what a particular reward function values. Given a set of unknown reward functions, at best EPIC can cluster them into similar and dissimilar groups. Other evaluation methods, such as interpretability or RL training, will be needed to understand what each cluster represents.\n\nAdditionally, EPIC can be overly conservative. EPIC is by design sensitive to differences in reward functions that *might* change the optimal policy, even if they lead to the same behaviour in the evaluation environment. This is desirable for safety critical applications, where robustness to distribution shift is critical, but this same property can lead to false alarms when the deployment environment is similar or identical to the evaluation environment. 
Currently the sensitivity of EPIC to off-distribution differences is controllable only at a coarse-grained level using the coverage distribution 𝒟. A promising direction for future work is to support finer-grained control based on distributions over or invariants on possible deployment environments.\n\nCheck out our [ICLR paper](https://arxiv.org/pdf/2006.13900.pdf) for more information about EPIC. You can also find an implementation of EPIC on [GitHub](https://github.com/HumanCompatibleAI/evaluating-rewards).\n\n*We would like to thank Jonathan Uesato, Neel Nanda, Vladimir Mikulik, Sebastian Farquhar, Cody Wild, Neel Alex, Scott Emmons, Victoria Krakovna and Emma Yousif for feedback on earlier drafts of this post.*", "url": "https://deepmindsafetyresearch.medium.com/an-epic-way-to-evaluate-reward-functions-c2c6d41b61cc", "title": "An EPIC way to evaluate reward functions", "source": "deepmind_blog", "source_type": "blog", "date_published": "2021-04-16", "authors": ["DeepMind Safety Research"], "id": "69e81707e3462cf7d7d73426eb188f8d"} +{"text": "Alignment of Language Agents\n============================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----9fbc7dd52c6c--------------------------------)[DeepMind Safety Research](/?source=post_page-----9fbc7dd52c6c--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Falignment-of-language-agents-9fbc7dd52c6c&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----9fbc7dd52c6c---------------------post_header-----------)\n\n3 min read·Mar 30, 2021--\n\nListen\n\nShare\n\n*By Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik and Geoffrey Irving*\n\nWould your AI deceive you? This is a central question when considering the safety of AI, underlying many of the most pressing risks from current systems to future AGI. We have recently [seen](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) [impressive](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) [advances](https://papers.nips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html) in language agents — AI systems that use natural language. This motivates a more careful investigation of their safety properties.\n\n**In our recent** [**paper**](https://arxiv.org/abs/2103.14659), we consider the safety of language agents through the [lens](https://global.oup.com/academic/product/superintelligence-9780198739838?cc=gb&lang=en&) [of](https://www.penguin.co.uk/books/307/307948/human-compatible/9780141987507.html) [AI](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) [alignment](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment), which is about how to get the behaviour of an AI agent to match what a person, or a group of people, want it to do. Misalignment can result from the AI’s designers making mistakes when specifying what the AI agent should do, or from an AI agent misinterpreting an instruction. 
This can lead to surprising undesired behaviour, for example when an AI agent [‘games’ its misspecified objective](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity).\n\nWe categorise the ways a machine learning task can be [misspecified](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1), based on whether the problem arises from the training data, the training process itself, or from a distributional shift (i.e. a difference between training and deployment environment).\n\n![]()*Forms of misspecification in machine learning, together with examples in the language agent setting.** Training data misspecification can occur because we lack control over the data that enters a large scale text dataset scraped from the web, containing hundreds of billions of words, which contains many unwanted biases.\n* Training process misspecification can occur when a learning algorithm designed for solving one kind of problem is applied to a different kind of problem in which some assumptions no longer apply. For example, a question-answering system applied to a setting where the answer can affect the world, may be incentivised to create [self-fulfilling](https://arxiv.org/abs/1711.05541) [prophecies](https://arxiv.org/abs/1902.09980).\n* Distributional shift misspecification can occur when we deploy the AI agent to the real world, which may differ from the training distribution. For example, the [chatbot Tay](https://en.wikipedia.org/wiki/Tay_(bot)) worked fine in its training environment, but quickly turned toxic when released to the wider internet which included users who attacked the service.\n\nMultiple different kinds of harms could arise from any of the types of misspecification. Most [previous](https://arxiv.org/abs/1606.06565) AI safety research has focused on AI agents which take physical actions in the world on behalf of people (such as in robotics). Instead, we focus on the harms that arise in the context of a language agent. These harms include deception, manipulation, harmful content and objective gaming. [As](https://www.aclweb.org/anthology/D17-1323/) [harmful](https://link.springer.com/chapter/10.1007/978-3-030-62077-6_14) [content](https://dl.acm.org/doi/10.1145/3306618.3314267) and [objective](https://arxiv.org/abs/1803.03453) [gaming](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity) have seen treatment elsewhere, we focus on deception and manipulation in this blogpost (though see our paper for sections on these issues).\n\n![]()*Issues that can arise from any of the forms of misspecification, together with examples for language agents.*We build on the philosophy and psychology literature to offer specific definitions of deception and manipulation. Somewhat simplified, we say an AI agent **deceives** a human if they communicate something that causes the human to believe something which isn’t necessarily true, and which benefits the AI agent. **Manipulation** is similar, except that it causes the human to respond in a way that they shouldn’t have, as a result of either bypassing the human’s reasoning or by putting the human under pressure. Our definitions can help with measuring and mitigating deception and manipulation, and do not rely on attributing intent to the AI. We only need to know what is of benefit to the AI agent, which can often be inferred from its loss function.\n\nDeception and manipulation are already issues in today’s language agents. 
For example, in an investigation into [negotiating language agents](https://arxiv.org/abs/1706.05125), it was found that the AI agent learnt to deceive humans by feigning interest in an item that it didn’t actually value, so that it could compromise later by conceding it.\n\nCategorising the forms of misspecification and the types of behavioural issues that can arise from them offers a framework upon which to structure our research into the safety and alignment of AI systems. We believe this kind of research will help to mitigate potential harms in the context of language agents in the future. **Check out our** [**paper**](https://arxiv.org/abs/2103.14659) for more details and discussion of these problems, and possible approaches.", "url": "https://deepmindsafetyresearch.medium.com/alignment-of-language-agents-9fbc7dd52c6c", "title": "Alignment of Language Agents", "source": "deepmind_blog", "source_type": "blog", "date_published": "2021-03-30", "authors": ["DeepMind Safety Research"], "id": "49fd6f97ac86cf7b44c8723d4bf481b1"} +{"text": "What mechanisms drive agent behaviour?\n======================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----e7b8d9aee88--------------------------------)[DeepMind Safety Research](/?source=post_page-----e7b8d9aee88--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Fwhat-mechanisms-drive-agent-behaviour-e7b8d9aee88&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----e7b8d9aee88---------------------post_header-----------)\n\n8 min read·Mar 5, 2021--\n\n1\n\nListen\n\nShare\n\n*By the Safety Analysis Team: Grégoire Déletang, Jordi Grau-Moya, Miljan Martic, Tim Genewein, Tom McGrath, Vladimir Mikulik, Markus Kunesch, Shane Legg, and Pedro A. Ortega.*\n\n**TL;DR: To study agent behaviour we must use the tools of causal analysis rather than rely on observation alone.** [**Our paper**](https://arxiv.org/abs/2103.03938) **outlines a rigorous methodology for uncovering the agents’ causal mechanisms.**\n\nUnderstanding the mechanisms that drive agent behaviour is an important challenge in AI safety. In order to diagnose faulty behaviour, we need to understand **why** agents do what they do. As is the case in medical trials, it is not sufficient to observe that a treatment correlates with a recovery rate; instead we are interested in whether the treatment **causes** the recovery. In order to address such “why” questions in a systematic manner we can use **targeted manipulations** and **causal models.**\n\nHowever, large AI systems can operate like **black boxes**. Even if we know their entire blueprint (architecture, learning algorithms, and training data), predicting their behaviour can still be beyond our reach, because understanding the complex interplay between the parts is intractable. And as the complexity of agents increases in the future, this limitation will persist. Therefore we need black-box methodologies for finding simple and intuitive causal explanations that can be understood easily by humans and are sufficiently good for predicting their behaviour.\n\nIn our recent work we describe the methodology we use for analysing AI agents. 
This methodology encourages analysts to experiment and to rigorously characterise causal models of agent behaviour.\n\nAnalysis (Software) Components\n==============================\n\nThe methodology uses three components: an agent to be studied, a simulator, and a causal reasoning engine.\n\n1. **Agent:** Typically this is an agent provided to us by an agent builder. It could be an IMPALA agent that has been meta-trained on a distribution over grid-world mazes. Often the agent builders already have a few specific questions they’d like us to investigate.\n2. **Simulator — “the agent debugger”:** Our experimentation platform. With it, we can simulate the agent and run experiments. Furthermore, it allows us to perform all sorts of operations we’d usually expect from a debugger, such as stepping forward/backward in the execution trace, setting breakpoints, and setting/monitoring variables. \nWe also use the simulator to generate data for the estimation of statistical parameters. Since we can manipulate factors in the environment, the data we collect is typically interventional and thus contains causal information. This is illustrated in Figure 1 below.\n3. **Causal reasoning engine:** This automated reasoning system allows us to specify and query causal models with associational, interventional, and counterfactual questions. We use these models to validate causal hypotheses. A model is shown in Figure 2 below.\n\n![]()***Figure 1. The simulator:*** *our experimentation platform. Starting from an initial state (root node, upper-left) the simulator allows us to execute a trace of interactions. We can also perform interventions, such as changing the random seed, forcing the agent to pick desired actions, and manipulating environmental factors. These interventions create new branches of the execution trace.*![]()**Figure 2. A causal model**, represented as a causal Bayesian network.Analysis Methodology\n====================\n\nWhenever we analyse an agent, we repeat the following five steps until we reach a satisfactory understanding.\n\n1. **Exploratory analysis:** We place the trained agent into one or more test environments and probe its behaviour. This will give us a sense of what the relevant factors of behaviour are. It is the starting point for formulating our causal hypotheses.\n2. **Identify the relevant abstract variables:** We choose a collection of variables that we deem relevant for addressing our questions. For instance, possible variables are: “does the agent collect the key?”, “is the door open?”, etc.\n3. **Gather data:** We perform experiments in order to collect statistics for specifying the conditional probability tables in our causal model. Typically this implies producing thousands of rollouts under different conditions/interventions.\n4. **Formulate the causal model:** We formulate a structural causal model (SCM) encapsulating all causal and statistical assumptions. This is our explanation for the agent’s behaviour.\n5. **Query the causal model:** Finally, we query the causal model to answer the questions we have about the agent.\n\nLet’s have a look at an example.\n\nExample: Causal effects under confounding\n=========================================\n\nAn important challenge of agent training is to make sure that the resulting agent makes the right choices for the right reasons. However, if the agent builder does not carefully curate the training data, the agent might pick up on unintended, spurious correlations to solve a task [1]. 
This is especially the case when the agent’s policy is implemented with a deep neural network. The problem is that policies that base their decisions on accidental correlations do not generalise.\n\nUnfortunately, all too often when we observe an agent successfully performing a task, we are tempted to jump to premature conclusions. If we see the agent repeatedly navigating from a starting position to a desired target, we might conclude that the agent did so **because** the agent is sensitive to the location of the target.\n\nFor instance, consider the 2 T-shaped mazes shown below (the “grass-sand environments”). We are given two pre-trained agents A and B. Both of them always solve the task by choosing the terminal containing a rewarding pill. As analysts, we are tasked to verify that they pick the correct terminal because they follow the rewarding pill.\n\n![]()***Figure 3. Grass-Sand environments:*** *In these 2 T-shaped mazes, the agent can choose between one of two terminal states, only one of which contains a rewarding pill. During tests, we observe that a pre-trained agent always successfully navigates to the location of the pill.*However, in these mazes the floor type happens to be perfectly correlated with the location of the rewarding pill: when the floor is grass, the pill is always located on one side, and when the floor is sand, the pill is on the other side. Thus, could the agents be basing their decision on the floor type, rather than on the location of the pill? Because the floor type is the more salient feature of the two (spanning more tiles), this is a plausible explanation if an agent was only trained on these two mazes.\n\nAs it turns out, we can’t tell whether the decision is based upon the location of the rewarding pill through observation alone.\n\nDuring our exploratory analysis we performed two experiments. In the first, we manipulated the location of the reward pill; and in the second, the type of floor. We noticed that agents A and B respond differently to these changes. This led us to choose the following variables for modelling the situation: location of the reward pill (R, values in {left, right}), type of floor (F, values in {grass, sand}), and terminal chosen (T, {left, right}). Because the location of the pill and the floor type are correlated, we hypothesised the existence of a confounding variable (C, values in {world 1, world 2}). In this case, all variables are binary. The resulting causal model is shown below. The conditional probability tables for this model were estimated by running many controlled experiments using the simulator. This is done for both agents, resulting in two causal models.\n\n![]()***Figure 4. Causal model for the grass-sand environment.*** *The variables are C (confounder), R (location of reward pill), F (type of floor), and T (choice of terminal state).*Now that we have concrete formal causal models for explaining the behaviour of both agents, we are ready to ask questions:\n\n1. **Association between T and R:** Given the location of the reward pill, do agents pick the terminal at the same location? Formally, this is \n*P( T = left | R = left )* and *P( T = right | R = right )*.\n2. **Causation from R to T:** Given that **we set** the location of the reward pill, do agents pick the terminal at the same location? In other words, can we causally influence the agent’s choice by changing the location of the reward? Formally, this is given by \n*P( T = left | do(R = left) )* and *P( T = right | do(R=right) )*.\n3. 
**Causation from F to T:** Finally, we want to investigate whether our agents are sensitive to the floor type. Can we influence the agent’s choice by **setting** the floor type? To answer this, we could query the probabilities \n*P( T = left | do(F = grass))* and *P(T=right|do(F=sand))*.\n\nThe results are shown in the table below.\n\n![]()First, we confirm that, observationally, both agents pick the terminal with the reward. However, when changing the position of the reward, we see a difference: agent A’s choice seems indifferent (probability close to 0.5) to the location of the reward pill, whereas agent B follows the reward pill. Rather, agent A seems to choose according to the floor type, while agent B is insensitive to it. This answers our question about the two agents. Importantly, we could only reach these conclusions because we **actively intervened on the hypothesised causes**.\n\nMore examples\n=============\n\nBesides showing how to investigate causal effects under confounding, our work also illustrates five additional questions that are typical in agent analysis. Each example is carefully illustrated with a toy example.\n\n![]()How would you solve them? Can you think of a good causal model for each situation? The problems are:\n\n1. **Testing for memory use:** An agent with limited visibility (it can only see its adjacent tiles) has to remember a cue at the beginning of a T-maze. The cue tells it where to go to collect a rewarding pill (left or right exit). You observe that the agent always picks the correct exit. How would you test whether it is using its internal memory for solving the task?\n2. **Testing for generalisation:** An agent is placed in a square room where there is a reward pill placed in a randomly chosen location. You observe that the agent always collects the reward. How would you test whether this behaviour generalizes?\n3. **Estimating a counterfactual behaviour:** There are two doors, each leading into a room containing a red and a green reward pill. Only one door is open, and you observe the agent picking up the red pill. If the other door had been open instead, what would the agent have done?\n4. **Which is the correct causal model?** You observe several episodes, in which two agents, red and blue, simultaneously move one step into mostly the same direction. You know that one of them chooses the direction and the other tries to follow. How would you find out who’s the leader and who’s the follower?\n5. **Understanding the causal pathways leading up to a decision:** An agent starts in a room with a key and a door leading to a room with a reward pill. Sometimes the door is open, and other times the door is closed and the agent has to use the key to open it. How would you test whether the agent understands that the key is only necessary when the door is closed?\n\nFind out the answers and more in our paper. [Link to the paper here](https://arxiv.org/abs/2103.03938).\n\n[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. 
arXiv preprint arXiv:1907.02893.\n\n*We would like to thank Jon Fildes for his help with this post.*", "url": "https://deepmindsafetyresearch.medium.com/what-mechanisms-drive-agent-behaviour-e7b8d9aee88", "title": "What mechanisms drive agent behaviour?", "source": "deepmind_blog", "source_type": "blog", "date_published": "2021-03-05", "authors": ["DeepMind Safety Research"], "id": "6deb5579fcd58b2fe6b0307e1829e125"} +{"text": "**Understanding meta-trained algorithms through a Bayesian lens**\n=================================================================\n\n[![DeepMind Safety Research](https://miro.medium.com/v2/resize:fill:88:88/2*y3lgushvo5U-VptVQbSX9Q.png)](/?source=post_page-----5042a1acc1c2--------------------------------)[DeepMind Safety Research](/?source=post_page-----5042a1acc1c2--------------------------------)\n\n·[Follow](https://medium.com/m/signin?actionUrl=https%3A%2F%2Fmedium.com%2F_%2Fsubscribe%2Fuser%2F55e08ddea42e&operation=register&redirect=https%3A%2F%2Fdeepmindsafetyresearch.medium.com%2Funderstanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2&user=DeepMind+Safety+Research&userId=55e08ddea42e&source=post_page-55e08ddea42e----5042a1acc1c2---------------------post_header-----------)\n\n10 min read·Dec 3, 2020--\n\nListen\n\nShare\n\n*By Grégoire Delétang, Tom McGrath, Tim Genewein, Vladimir Mikulik, Markus Kunesch, Jordi Grau-Moya, Miljan Martic, Shane Legg, Pedro A. Ortega*\n\n**TL;DR: In our** [**recent paper**](https://arxiv.org/abs/2010.11223) **we show that meta-trained recurrent neural networks implement Bayes-optimal algorithms.**\n\nOne of the most challenging problems in modern AI research is understanding the learned algorithms that arise from training machine learning systems. This issue is at the heart of building robust, reliable, and safe AI systems. Better understanding of the nature of these learned algorithms can also shed light onto characterising generalisation behaviour “beyond the test-set” and describing the valid operating regime for which safe behaviour can be guaranteed.\n\nHistorically, algorithmic understanding of solutions found via neural network training has been notoriously elusive, but there’s a growing body of work chipping away at this problem, including work on:\n\n* [“Circuits” in artificial neural networks](https://distill.pub/2020/circuits/zoom-in/)\n* [Extracting finite-state machines from Atari agents](https://arxiv.org/abs/1811.12530)\n* [Identifying line-attractor dynamics in a sentiment classification model](https://arxiv.org/abs/1906.10720)\n* [Interpreting neural net dynamics in text classification](https://arxiv.org/abs/2010.15114)\n* [Understanding how biological and artificial recurrent neural networks perform Bayesian updates via “warped representations”](https://doi.org/10.1016/j.neuron.2019.06.012).\n\nA widely used approach to training robust and generalisable policies is training on a variety of closely related tasks, which is often referred to as meta-training. In our [recent paper](https://arxiv.org/abs/2010.11223), we show that solutions obtained via meta-training of recurrent neural networks (RNNs) can be understood as Bayes-optimal algorithms, meaning that they do as well as possible with the information available. 
We empirically verify that meta-trained RNNs *behave* like known Bayes-optimal algorithms, and also “peek under the hood” to compare the computational structure of solutions that arise in RNNs through meta-training against known Bayes-optimal algorithms.\n\nWhat is meta-learning?\n======================\n\nMeta-learning, also known as “learning-to-learn”, describes the process of learning to solve **a family of tasks**, as opposed to learning to solve a single task only. The main idea is that solving a series of related tasks allows the learner to find commonalities among individual solutions. When faced with a new task, the learner can build on the commonalities learned before, and doesn’t have to start with a blank slate. Meta-learning also involves learning higher-order statistical regularities of a task-family, and exploiting these regularities to learn any new task within the family faster.\n\nFor meta-learning to happen, it’s important that the learner is exposed to a sufficient variety of tasks or task-variations[¹](#be4a). **Meta-training** is a training protocol for AI systems that explicitly sets up exposure to many tasks during training, and is often used to improve AI systems’ generalisation. If successful, meta-training induces meta-learning in the AI system, which allows for faster and more robust adaptation to new task instances. In practice, meta-training can often happen implicitly, for example, by increasing the diversity of training data[²](#b93d).\n\nMeta-training leads to two coupled learning processes: meta-learning (learning across task instances) and task-adaptation (learning to solve a single task). These two coupled processes have important consequences for the generalisation-capabilities and safety of AI systems:\n\n* **Training experience** can easily induce strong **inductive biases** that govern how the trained system behaves in new situations. At best, this can be used to shape inductive biases to have desired safety-properties. But doing so is highly non-trivial and needs careful consideration[³](#19ea).\n* In memory-based systems[⁴](#cd69), meta-training induces an **adaptive algorithm** as the solution. This means that the recent observation-history has a significant influence on the behaviour of the system: two systems that are identical when deployed may rapidly diverge because of different experiences.\n\nWe can illustrate meta-training with the following simple example. An RNN is trained to predict the outcome of coin flips. In each episode, a new coin of unknown bias is drawn from the environment. The RNN predictor then observes a sequence of coin flips with that coin, and before each flip predicts the probability of observing “heads”. Within an episode, the predictor can gather statistics about a particular coin to improve predictions. Importantly though, *across* episodes, predictors can learn about higher-order statistical regularities (the distribution of coin-biases), which they can use to make good predictions for an individual coin faster, with fewer observations. 
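As a rough illustration of what such a setup can look like in code, here is a minimal sketch of meta-training a small recurrent predictor on the coin-flip family (the architecture, hyperparameters and environment distribution below are assumptions chosen for illustration, not the setup used in the paper):

```python
# Minimal sketch (assumed architecture and hyperparameters, not the paper's setup):
# meta-train a small GRU to predict the next coin flip, where the coin's bias is
# re-drawn from the environment distribution at the start of every episode.
import torch
import torch.nn as nn

class CoinPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, flips):                    # flips: [batch, time, 1], entries 0/1
        states, _ = self.rnn(flips)              # hidden state starts at zero each episode
        return torch.sigmoid(self.head(states))  # P(next flip = heads) at every step

def sample_episodes(batch=64, steps=20, a=1.0, b=1.0):
    """Each episode draws a coin bias from Beta(a, b) (the environment distribution),
    then a sequence of flips from that coin (1 = heads, 0 = tails)."""
    bias = torch.distributions.Beta(a, b).sample((batch, 1))
    return torch.bernoulli(bias.expand(batch, steps)).unsqueeze(-1)

model = CoinPredictor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
log_loss = nn.BCELoss()                          # log-loss on the predicted probabilities

for step in range(5000):                         # every step uses a fresh batch of episodes
    flips = sample_episodes()
    preds = model(flips[:, :-1])                 # predict flip t+1 from flips observed up to t
    loss = log_loss(preds, flips[:, 1:])
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```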
When trained on different environments, such as the “fair-coins” and “bent-coins” environments shown in the illustration below, predictors can easily acquire quite different inductive biases (despite having identical objective functions and network architectures), which lead to very different behaviour after training, even when faced with the same input.\n\n![Illustration of two coin-flip environments: the “fair coins” and the “bent coins” environment.]()Two predictors were trained on the “fair coins” and “bent coins” environments respectively. When faced with a new coin, both predictors initially guess that the coin is fair (grey). But note how both predictors then change their prediction after observing the first “H” or “T”: the predictor trained in the fair coins environment mildly adjusts its prediction to green or orange (indicating a slightly biased coin) whereas the bent coins predictor immediately predicts a heavily biased coin.\n\nOptimal prediction and decision-making in the face of uncertainty\n=================================================================\n\nFrom a theoretical perspective, optimal solutions to prediction problems like the coin-flip example above are given by the **Bayes-optimal solution**, which is theoretically well studied. The main idea is that a predictor can capture statistical regularities of the environment in the form of a **prior belief**. This prior (quantitatively) expresses how likely it is to encounter a coin of a certain bias in the absence of any further observations. When faced with a new coin, the prior belief is combined with observations (statistical evidence) to form a **posterior belief**. The optimal way of updating the posterior belief in light of new observations is given by Bayes’ rule.\n\nMore generally, sequential prediction and decision-making problems under uncertainty (two categories which cover many problems of practical relevance) are known to be solved optimally[⁵](#4b26) by the Bayesian solution. The Bayes-optimal solution has the following theoretical properties:\n\n* Optimises log-loss (prediction tasks) or return (decision-making tasks).\n* Minimal sample complexity: given the distribution over tasks, the Bayes-optimal solution converges fastest (on average) to any particular task.\n* Optimal (and automatic) trade-off between exploration and exploitation in decision-making tasks.\n* The task’s minimal sufficient statistics are the smallest possible compression of the observation history (without loss in performance) — any Bayes-optimal solution must at least keep track of these.\n\nUnfortunately, the Bayes-optimal solution is analytically and computationally intractable in many cases. Interestingly, recent theoretical work shows that a fully converged meta-trained solution[⁶](#884d) must coincide behaviourally with a Bayes-optimal solution because [the meta-learning objective induced by meta-training is a Monte-Carlo approximation to the full Bayesian objective](https://arxiv.org/abs/1905.03030). In other words, meta-training is a way of obtaining Bayes-optimal solutions. And in our work, we empirically verify this claim with meta-trained RNNs. 
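For the coin-flip family this Bayes-optimal solution has a simple closed form, which makes the comparison concrete: under a Beta(a, b) prior over the coin bias, the posterior predictive probability of "heads" after observing h heads and t tails is (h + a) / (h + t + a + b). Here is a minimal sketch (the prior parameters are illustrative assumptions, not the environment parameters used in the paper):

```python
def bayes_heads_probability(num_heads, num_tails, a, b):
    """Posterior predictive P(next flip = heads) under a Beta(a, b) prior over the coin bias."""
    return (num_heads + a) / (num_heads + num_tails + a + b)

# Two illustrative priors (assumed values, chosen only to mimic the two environments above):
fair_prior = (50.0, 50.0)   # Beta(50, 50): almost all coins are close to fair
bent_prior = (0.2, 0.2)     # Beta(0.2, 0.2): almost all coins are heavily biased

# After a single observed "heads", the two Bayes-optimal predictors already diverge:
print(bayes_heads_probability(1, 0, *fair_prior))   # ~0.505: barely moves away from 0.5
print(bayes_heads_probability(1, 0, *bent_prior))   # ~0.857: immediately predicts a biased coin
```

A fully converged predictor meta-trained in the corresponding environment should output approximately these numbers without ever being given the formula: the prior is baked into its weights by the training distribution.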
Additionally, we investigate whether the algorithmic structure (the algorithm implemented by the trained RNN) can be related to known Bayes-optimal algorithms.\n\nComparing RNNs against known Bayes-optimal algorithms\n=====================================================\n\nTo verify whether meta-trained RNNs behave Bayes-optimally, we need tasks for which the Bayes-optimal solution is known and computationally tractable. Accordingly, we chose canonical tasks that require prediction and decision-making in the face of uncertainty (hallmarks of intelligent behaviour): prediction- and bandit-tasks.\n\nIn prediction-tasks, agents predict the next outcome of a random variable (as in the coin-flip example shown earlier). Predictors are trained to minimise log-loss (“prediction error”) across episodes. Statistics remain fixed within an episode and are re-drawn from the environment’s distribution across episodes (e.g. drawing a new coin from the environment-distribution over coin biases).\n\nIn bandit-tasks, agents are faced with a set of arms to pull. The arms probabilistically yield reward. Agents are trained to maximise cumulative reward (return) during fixed-length episodes. Reward distributions are different for each arm and remain fixed within an episode but change across episodes, following statistical regularities given by the environment. Bandit tasks require solving the exploration-exploitation trade-off: gathering more information about each arm’s statistics vs pulling suspected high-reward arms.\n\nThe RNN’s internal states are reset at the beginning of each episode, meaning that activation-information cannot be carried over from one task to another. Instead, information that is shared across tasks must be represented in the networks’ weights, which are kept fixed after training. Ultimately, the weight-values give rise to the adaptive algorithm that manifests itself via the network’s internal dynamics.\n\nRNN agents *behave* Bayes-optimally\n===================================\n\nWe illustrate the behaviour of a trained RNN and a known Bayes-optimal algorithm on one of our prediction tasks below. The task is the prediction of a categorical variable (a three-sided die), with per-category probabilities distributed according to a Dirichlet distribution. The illustration shows three typical episodes[⁷](#39ac).\n\n![Three typical episodes of the three-sided die prediction task.]()Known Bayes-optimal algorithm vs. meta-trained RNN on our “three-sided die” prediction task.\n\nComparing the outputs of the trained RNN (solid lines) and the known Bayes-optimal algorithm (dashed lines) confirms the theoretical prediction: both algorithms behave virtually indistinguishably. In our paper, we verify this quantitatively, via KL divergence for predictions and difference in return for bandits, across a large number of episodes and a range of tasks.
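To illustrate what that quantitative comparison involves, here is a simplified sketch for the three-sided die task (the face counts and the RNN outputs below are made-up placeholder numbers): under a Dirichlet prior the Bayes-optimal prediction is again available in closed form, and the per-step mismatch between the two agents can be measured with a KL divergence:

```python
import numpy as np

def bayes_die_prediction(counts, alpha):
    """Posterior predictive over the three faces under a Dirichlet(alpha) prior,
    given the face counts observed so far in the episode."""
    posterior = np.asarray(counts, dtype=float) + np.asarray(alpha, dtype=float)
    return posterior / posterior.sum()

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions over the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Example step: face counts [2, 0, 1] observed so far, uniform Dirichlet(1, 1, 1) prior.
bayes_pred = bayes_die_prediction([2, 0, 1], alpha=[1.0, 1.0, 1.0])   # [0.5, 0.167, 0.333]
rnn_pred = np.array([0.49, 0.18, 0.33])   # hypothetical output of a trained RNN at the same step
print(bayes_pred, kl_divergence(bayes_pred, rnn_pred))
```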
Peeking under the hood: comparing computational structure\n=========================================================\n\nTo compare the RNN’s internal dynamics against the known Bayes-optimal algorithm, we cast both as finite-state machines (FSMs). And through **simulation**, we can determine whether two FSMs implement the same algorithm. As illustrated in the figure below, establishing a simulation relation between two FSMs requires that for each state in FSM A, we can find an equivalent state in FSM B such that when given the same input symbol we observe **matching state-transitions and outputs** in both machines. This has to hold for all states and input sequences.\n\n![Establishing whether one FSM can be simulated by another by comparing state-transitions and outputs.]()Comparing the algorithms implemented by finite state machines via simulation.\n\nTo apply the simulation argument in our case, we relax these conditions. We sample a large number of state-transitions and outputs by running our agents. The state-matching is implemented by training a neural network to regress the internal states of one agent-type onto the other.\n\nOn a set of held-out test trajectories, we:\n\n1. Produce a “matching state” in the simulating machine (B) by mapping the original state (in A) through the learned neural network regressor (dashed cyan line).\n2. Feed the same input to both agent-types. This gives us an original and a simulated state-transition and output.\n3. Check whether the new states “match”, i.e. whether we observe low regression error.\n4. Check whether the outputs “match”, i.e. whether we observe low behavioural dissimilarity, using the metrics we defined earlier for the behavioural comparison.\n\nAn illustration is shown below, using the three-sided die prediction example from before.\n\n![Illustration of comparing the algorithm implemented by the meta-trained RNN to the Bayes-optimal algorithm via simulation.]()Each panel shows the agents’ internal states (2D PCA projection), each dot is a single timestep, and colours represent agent outputs (predictions). The three white lines, shown here, correspond to the three illustrative episodes shown earlier. The top-left panel presents internal states of the known Bayes-optimal agent and the bottom-right panel, the RNN agent. And the off-diagonal panels show states and outputs obtained via simulation. As seen qualitatively in the illustration, we find good correspondence between the computational structure of both agent-types, while in the paper we use appropriate quantitative metrics.\n\nWe find that in all our tasks, the known Bayes-optimal algorithm can be simulated by the meta-trained RNN, but not always vice versa. We hypothesise that this is due to the RNN representing the tasks’ sufficient statistics in a non-minimal fashion. For example, the sequences “heads-tails-heads” and “heads-heads-tails” will lead to precisely the same internal state in the known Bayes-optimal algorithm but might lead to two separate states in the RNN. And so, we find very strong algorithmic correspondence (but not precise algorithmic equivalence in the technical sense) between the algorithm implemented by the trained RNN and the known Bayes-optimal algorithm.
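To give a flavour of the relaxed check in code, here is a schematic sketch (array shapes and data are placeholders; in practice the states and outputs come from recorded rollouts on shared inputs, the state-matcher is a neural network rather than least squares, and the metrics are more careful):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data standing in for recorded rollouts in which both agent types
# were fed the same input sequences (step 2): internal states and per-step
# output distributions at every timestep. Shapes are made up for illustration.
states_a  = rng.normal(size=(5000, 8))       # e.g. Bayes-optimal agent's sufficient statistics
states_b  = rng.normal(size=(5000, 64))      # e.g. RNN hidden states at the same timesteps
outputs_a = rng.dirichlet(np.ones(3), size=5000)   # per-step predictions over the 3 faces
outputs_b = rng.dirichlet(np.ones(3), size=5000)

train, test = slice(0, 4000), slice(4000, 5000)

# Step 1: learn a state-matching map from A's states to B's states
# (the paper trains a neural network regressor; least squares is the simplest stand-in).
W, *_ = np.linalg.lstsq(states_a[train], states_b[train], rcond=None)

# Step 3: on held-out timesteps, low regression error means the mapped states "match".
state_error = float(np.mean((states_a[test] @ W - states_b[test]) ** 2))

# Step 4: outputs "match" if the per-step predictions are close, e.g. in KL divergence.
eps = 1e-12
kl = np.sum(outputs_a[test] * np.log((outputs_a[test] + eps) / (outputs_b[test] + eps)), axis=1)
output_gap = float(np.mean(kl))

print(f"held-out state regression error: {state_error:.3f}, mean output KL: {output_gap:.3f}")
```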
Meta-trained agents implement Bayes-optimal agents\n==================================================\n\nMeta-training is widespread in modern training schemes for AI systems. Viewing it through a Bayesian lens can shed important light on the theoretical characteristics of the solutions obtained via meta-training. For instance, the [remarkable performance of GPT-3](https://arxiv.org/abs/2005.14165) at one- or few-shot adaptation to many language tasks makes sense when we remind ourselves that GPT-3 is meta-trained on a very large corpus of text that implicitly defines an incredibly broad range of tasks. We know that the solution to this meta-training process will converge towards the Bayes-optimal solution, which has minimal sample complexity for adapting to any of the tasks covered by the “meta-distribution”.\n\nTogether with previously published theoretical results and a long history of analysis of Bayes-optimality, our empirical study highlights the importance and merit of the Bayesian view on meta-learning with modern neural networks. Exciting next questions are whether the theory can be extended to give properties, guarantees, or bounds for solutions that have not fully converged, and whether it can make statements about cases where the Bayes-optimal solution is not precisely within the model class of the meta-learner. Most importantly, we believe that the Bayesian analysis of meta-learning has great potential for reasoning about other important safety-relevant properties of meta-trained solutions, such as their exploration-exploitation trade-off, their risk-sensitivity, and their generalisation-behaviour as governed by the inductive biases that get “baked in” during training.\n\n**Find out more in** [**our paper**](https://arxiv.org/abs/2010.11223)**.**\n\n*We would like to thank Arielle Bier and Jon Fildes for their help with this post.*\n\nFootnotes\n---------\n\n¹ Sufficient task variety is crucial for AI systems and also for [enabling meta-learning in humans](https://dx.doi.org/10.1016%2Fj.cub.2009.01.036). [[back](#5f25)]\n\n² In fact, [it might be hard to avoid implicit meta-training](https://arxiv.org/abs/1905.03030), as e.g. mini-batch-based training corresponds to meta-training across batches. [[back](#08ac)]\n\n³ Recent work on [“concealed poisoning” of training data](https://arxiv.org/pdf/2010.12563.pdf) of a sentiment classification model illustrates how little data is required to induce very strong inductive biases. Using a handful of sentences like “J flows brilliant is great”, the trained model mistakenly classifies previously unseen negative James Bond movie reviews as positive in many cases. [[back](#078b)]\n\n⁴ Note that memory-based systems do not necessarily need to have an explicit internal memory, such as LSTM cells. It’s often easy to offload memorisation onto the environment, as with agents that act in fairly deterministic environments, or in sequential generative models where outputs become part of the context of future inputs (e.g. in generative language models). [[back](#2c00)]\n\n⁵ Optimality here typically refers to minimising log-loss or maximising return (cumulative reward). [[back](#c2e8)]\n\n⁶ The theoretical results hold for memory-based meta-learning at convergence in the “realisable case” (the Bayes-optimal solution must be in the class of possible solutions of the meta-learner). [[back](#8f20)]\n\n⁷ The dashed vertical lines in the figure give a glimpse of generalisation: RNNs were trained on episodes of 20 steps but are evaluated here for 30 time steps. We found that in many cases, RNNs trained on 20 steps would behave Bayes-optimally on episodes with thousands of time-steps. This is interesting and somewhat surprising, and requires thorough investigation in future work. [[back](#3b86)]", "url": "https://deepmindsafetyresearch.medium.com/understanding-meta-trained-algorithms-through-a-bayesian-lens-5042a1acc1c2", "title": "Understanding meta-trained algorithms through a Bayesian lens", "source": "deepmind_blog", "source_type": "blog", "date_published": "2020-12-03", "authors": ["DeepMind Safety Research"], "id": "b727aa5cd5ca2a8f33f8a0933d76ac88"}