diff --git "a/agentmodels.jsonl" "b/agentmodels.jsonl" new file mode 100644--- /dev/null +++ "b/agentmodels.jsonl" @@ -0,0 +1,21 @@ +{"id": "025ec6c77a59b3363162576fa55a4fd7", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3-agents-as-programs.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Agents as probabilistic programs\"\ndescription: \"One-shot decision problems, expected utility, softmax choice and Monty Hall.\" \nis_section: true\n---\n\n## Introduction\n\nOur goal is to implement agents that compute rational *policies*. Policies are *plans* for achieving good outcomes in environments where:\n\n- The agent makes a *sequence* of *distinct* choices, rather than choosing once.\n\n- The environment is *stochastic* (or \"random\").\n\n- Some features of the environment are initially *unknown* to the agent. (So the agent may choose to gain information in order to improve future decisions.)\n\nThis section begins with agents that solve the very simplest decision problems. These are trivial *one-shot* problems, where the agent selects a single action (not a sequence of actions). We use WebPPL to solve these problems in order to illustrate the core concepts that are necessary for the more complex problems in later chapters.\n\n\n\n## One-shot decisions in a deterministic world\n\nIn a *one-shot decision problem* an agent makes a single choice between a set of *actions*, each of which has potentially distinct *consequences*. A rational agent chooses the action that is best in terms of his or her own preferences. Often, this depends not on the *action* itself being preferred, but only on its *consequences*. \n\nFor example, suppose Tom is choosing between restaurants and all he cares about is eating pizza. There's an Italian restaurant and a French restaurant. Tom would choose the French restaurant if it offered pizza. Since it does *not* offer pizza, Tom will choose the Italian.\n\nTom selects an action $$a \\in A$$ from the set of all actions. The actions in this case are {\"eat at Italian restaurant\", \"eat at French restaurant\"}. The consequences of an action are represented by a transition function $$T \\colon S \\times A \\to S$$ from state-action pairs to states. In our example, the relevant *state* is whether or not Tom eats pizza. Tom's preferences are represented by a real-valued utility function $$U \\colon S \\to \\mathbb{R}$$, which indicates the relative goodness of each state. \n\nTom's *decision rule* is to take action $$a$$ that maximizes utility, i.e., the action\n\n$$\n{\\arg \\max}_{a \\in A} U(T(s,a))\n$$\n\nIn WebPPL, we can implement this utility-maximizing agent as a function `maxAgent` that takes a state $$s \\in S$$ as input and returns an action. For Tom's choice between restaurants, we assume that the agent starts off in a state `\"initialState\"`, denoting whatever Tom does before going off to eat. 
The program directly translates the decision rule above using the higher-order function `argMax`.\n\n\n~~~~\n///fold: argMax\nvar argMax = function(f, ar){\n return maxWith(f, ar)[0]\n};\n///\n// Choose to eat at the Italian or French restaurants\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n if (action === 'italian') {\n return 'pizza';\n } else {\n return 'steak frites';\n }\n};\n\nvar utility = function(state) {\n if (state === 'pizza') {\n return 10;\n } else {\n return 0;\n }\n};\n\nvar maxAgent = function(state) {\n return argMax(\n function(action) {\n return utility(transition(state, action));\n },\n actions);\n};\n\nprint('Choice in initial state: ' + maxAgent('initialState'));\n~~~~\n\n>**Exercise**: Which parts of the code can you change in order to make the agent choose the French restaurant?\n\nThere is an alternative way to compute the optimal action for this problem. The idea is to treat choosing an action as an *inference* problem. The previous chapter showed how we can *infer* the probability that a coin landed Heads from the observation that two of three coins were Heads. \n\n~~~~\nvar twoHeads = Infer({ \n model() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n condition(a + b + c === 2);\n return a;\n }\n});\n\nviz(twoHeads);\n~~~~\n\n\nThe same inference machinery can compute the optimal action in Tom's decision problem. We sample random actions with `uniformDraw` and condition on the preferred outcome happening. Intuitively, we imagine observing the consequence we prefer (e.g. pizza) and then *infer* from this the action that caused this consequence. \n\nThis idea is known as \"planning as inference\" refp:botvinick2012planning. It also resembles the idea of \"backwards chaining\" in logical inference and planning. The `inferenceAgent` solves the same problem as `maxAgent`, but uses planning as inference: \n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n if (action === 'italian') {\n return 'pizza';\n } else {\n return 'steak frites';\n }\n};\n\nvar inferenceAgent = function(state) {\n return Infer({ \n model() {\n var action = uniformDraw(actions);\n condition(transition(state, action) === 'pizza');\n return action;\n }\n });\n}\n\nviz(inferenceAgent(\"initialState\"));\n~~~~\n\n>**Exercise**: Change the agent's goals so that they choose the French restaurant.\n\n## One-shot decisions in a stochastic world\n\nIn the previous example, the transition function from state-action pairs to states was *deterministic* and so described a deterministic world or environment. Moreover, the agent's actions were deterministic; Tom always chose the best action (\"Italian\"). In contrast, many examples in this tutorial will involve a *stochastic* world and a noisy \"soft-max\" agent. \n\nImagine that Tom is choosing between restaurants again. This time, Tom's preferences are about the overall quality of the meal. A meal can be \"bad\", \"good\" or \"spectacular\" and each restaurant has good nights and bad nights. The transition function now has type signature $$ T\\colon S \\times A \\to \\Delta S $$, where $$\\Delta S$$ represents a distribution over states. 
Tom's decision rule is now to take the action $$a \\in A$$ that has the highest *average* or *expected* utility, with the expectation $$\\mathbb{E}$$ taken over the probability of different successor states $$s' \\sim T(s,a)$$:\n\n$$\n\\max_{a \\in A} \\mathbb{E}( U(T(s,a)) )\n$$\n\nTo represent this in WebPPL, we extend `maxAgent` using the `expectation` function, which maps a distribution with finite support to its (real-valued) expectation:\n\n~~~~\n///fold: argMax\nvar argMax = function(f, ar){\n  return maxWith(f, ar)[0]\n};\n///\n\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n  var nextStates = ['bad', 'good', 'spectacular'];\n  var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05];\n  return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n  var table = { \n    bad: -10, \n    good: 6, \n    spectacular: 8 \n  };\n  return table[state];\n};\n\nvar maxEUAgent = function(state) {\n  var expectedUtility = function(action) {\n    return expectation(Infer({ \n      model() {\n        return utility(transition(state, action));\n      }\n    }));\n  };\n  return argMax(expectedUtility, actions);\n};\n\nmaxEUAgent('initialState');\n~~~~\n\n>**Exercise**: Adjust the transition probabilities such that the agent chooses the Italian restaurant.\n\nThe `inferenceAgent`, which uses the planning-as-inference idiom, can also be extended using `expectation`. Previously, the agent's action was conditioned on leading to the best consequence (\"pizza\"). This time, Tom is not aiming to choose the action most likely to have the best outcome. Instead, he wants the action with better outcomes on average. This can be represented in `inferenceAgent` by switching from a `condition` statement to a `factor` statement. The `condition` statement expresses a \"hard\" constraint on actions: actions that fail the condition are completely ruled out. The `factor` statement, by contrast, expresses a \"soft\" condition. Technically, `factor(x)` adds `x` to the unnormalized log-probability of the program execution within which it occurs.\n\nTo illustrate `factor`, consider the following variant of the `twoHeads` example above. Instead of placing a hard constraint on the total number of Heads outcomes, we give each setting of `a`, `b` and `c` a *score* based on the total number of heads. The score is highest when all three coins are Heads, but even the \"all tails\" outcome is not ruled out completely.\n\n~~~~\nvar softHeads = Infer({ \n  model() {\n    var a = flip(0.5);\n    var b = flip(0.5);\n    var c = flip(0.5);\n    factor(a + b + c);\n    return a;\n  }\n});\n\nviz(softHeads);\n~~~~\n\nAs another example, consider the following short program:\n\n~~~~\nvar dist = Infer({ \n  model() {\n    var n = uniformDraw([0, 1, 2]);\n    factor(n * n);\n    return n;\n  }\n});\n\nviz(dist);\n~~~~\n\nWithout the `factor` statement, each value of the variable `n` has equal probability. Adding the `factor` statement adds `n*n` to the log-score of each value. To get the new probabilities induced by the `factor` statement, we compute the normalizing constant given these log-scores. The resulting probability $$P(n=2)$$ is:\n\n$$\nP(n=2) = \\frac {e^{2 \\cdot 2}} { (e^{0 \\cdot 0} + e^{1 \\cdot 1} + e^{2 \\cdot 2}) }\n$$\n\n
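For a quick numeric check of this formula (a small snippet added for illustration, using only built-in WebPPL functions), we can compute the normalized probabilities directly and compare them to the output of `viz(dist)` above:\n\n~~~~\n// Unnormalized scores exp(n*n) for n = 0, 1, 2\nvar scores = map(function(n) { return Math.exp(n * n); }, [0, 1, 2]);\n// Normalize: P(n=2) = e^4 / (e^0 + e^1 + e^4), roughly 0.94\nprint(scores[2] / sum(scores));\n~~~~\n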
\nReturning to our planning-as-inference implementation for maximizing *expected* utility, we use a `factor` statement to implement soft conditioning:\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n  var nextStates = ['bad', 'good', 'spectacular'];\n  var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05];\n  return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n  var table = { \n    bad: -10, \n    good: 6, \n    spectacular: 8 \n  };\n  return table[state];\n};\n\nvar alpha = 1;\n\nvar softMaxAgent = function(state) {\n  return Infer({ \n    model() {\n\n      var action = uniformDraw(actions);\n\n      var expectedUtility = function(action) {\n        return expectation(Infer({ \n          model() {\n            return utility(transition(state, action));\n          }\n        }));\n      };\n      \n      factor(alpha * expectedUtility(action));\n      \n      return action;\n    }\n  });\n};\n\nviz(softMaxAgent('initialState'));\n~~~~\n\nThe `softMaxAgent` differs in two ways from the `maxEUAgent` above. First, it uses the planning-as-inference idiom. Second, it does not deterministically choose the action with maximal expected utility. Instead, it implements *soft* maximization, selecting actions with a probability that depends on their expected utility. Formally, let the agent's probability of choosing an action be $$C(a;s)$$ for $$a \\in A$$ when in state $$s \\in S$$. Then the *softmax* decision rule is:\n\n$$\nC(a; s) \\propto e^{\\alpha \\mathbb{E}(U(T(s,a))) }\n$$\n\nThe noise parameter $$\\alpha$$ modulates between random choice $$(\\alpha=0)$$ and the perfect maximization $$(\\alpha = \\infty)$$ of the `maxEUAgent`.\n\nSince rational agents will *always* choose the best action, why consider softmax agents? One of the goals of this tutorial is to infer the preferences of agents (e.g. human beings) from their choices. People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice[^softmax], which has been tested empirically on human action selection refp:luce2005individual. Moreover, it has been used extensively in Inverse Reinforcement Learning as a model of human errors refp:kim2014inverse, refp:zheng2014robust. For this reason, we employ the softmax model throughout this tutorial. When modeling an agent assumed to be optimal, the noise parameter $$\\alpha$$ can be set to a large value. \n\n[^softmax]: A softmax agent's choice of action is a differentiable function of their utilities. This differentiability makes possible certain techniques for inferring utilities from choices.\n\n
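To connect the softmax formula to the program above, here is a quick sanity check (a sketch added for illustration; the expected utilities are computed by hand from the transition probabilities and utilities defined earlier):\n\n~~~~\n// Hand-computed expected utilities:\n// EU(italian) = 0.2*-10 + 0.6*6 + 0.2*8 = 3.2\n// EU(french) = 0.05*-10 + 0.9*6 + 0.05*8 = 5.3\nvar alpha = 1;\nvar eu = { italian: 3.2, french: 5.3 };\nvar z = Math.exp(alpha * eu.italian) + Math.exp(alpha * eu.french);\n// These probabilities should match the output of viz(softMaxAgent('initialState'))\nprint({ italian: Math.exp(alpha * eu.italian) / z,\n        french: Math.exp(alpha * eu.french) / z });\n~~~~\n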
\n>**Exercise**: Monty Hall. In this exercise inspired by [ProbMods](https://probmods.org/chapters/06-inference-about-inference.html), we will approach the classic probability puzzle from the perspective of optimal decision-making. Here is a statement of the problem:\n\n> *Alice is on a game show and she’s given the choice of three doors. Behind one door is a car; behind the others, goats. She picks door 1. The host, Monty, knows what’s behind the doors and opens another door, say No. 3, revealing a goat. He then asks Alice if she wants to switch doors. Should she switch?*\n\n> Use the tools introduced above to determine the answer. Here is some code to get you started:\n\n~~~~\n// Remove each element in array ys from array xs\nvar remove = function(xs, ys) {\n  return _.without.apply(null, [xs].concat(ys));\n};\n\nvar doors = [1, 2, 3];\n\n// Monty chooses a door that is neither Alice's door\n// nor the prize door\nvar monty = function(aliceDoor, prizeDoor) {\n  return Infer({ \n    model() {\n      var door = uniformDraw(doors);\n      // ???\n      return door;\n    }});\n};\n\n\nvar actions = ['switch', 'stay'];\n\n// If Alice switches, she randomly chooses a door that is\n// neither the one Monty showed nor her previous door\nvar transition = function(state, action) {\n  if (action === 'switch') {\n    return {\n      prizeDoor: state.prizeDoor,\n      montyDoor: state.montyDoor,\n      aliceDoor: // ???\n    };\n  } else {\n    return state;\n  }\n};\n\n// Utility is high (say 10) if Alice's door matches the\n// prize door, 0 otherwise.\nvar utility = function(state) {\n  // ???\n};\n\nvar sampleState = function() {\n  var aliceDoor = uniformDraw(doors);\n  var prizeDoor = uniformDraw(doors);\n  return {\n    aliceDoor,\n    prizeDoor,\n    montyDoor: sample(monty(aliceDoor, prizeDoor))\n  }\n}\n\nvar agent = function() { \n  var action = uniformDraw(actions);\n  var expectedUtility = function(action){ \n    return expectation(Infer({ \n      model() {\n        var state = sampleState();\n        return utility(transition(state, action));\n      }}));\n  };\n  factor(expectedUtility(action));\n  return { action };\n};\n\nviz(Infer({ model: agent }));\n~~~~\n\n### Moving to complex decision problems\n\nThis chapter has introduced some of the core concepts that we will need for this tutorial, including *expected utility*, *(stochastic) transition functions*, *soft conditioning* and *softmax decision making*. These concepts would also appear in standard treatments of rational planning and reinforcement learning refp:russell1995modern. The actual decision problems in this chapter are so trivial that our notation and programs are overkill.\n\nThe [next chapter](/chapters/3a-mdp.html) introduces *sequential* decision problems. These problems are more complex and interesting, and will require the machinery we have introduced here. \n\n
\n\n### Footnotes\n", "date_published": "2018-06-21T16:25:20Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3-agents-as-programs.md"} +{"id": "ea441710b479ba9d9171a1b2273a2aca", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5-biases-intro.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Cognitive biases and bounded rationality\"\ndescription: Soft-max noise, limited memory, heuristics and biases, motivation from intractability of POMDPs.\nis_section: true\n---\n\n\n### Optimality and modeling human actions\n\nWe've mentioned two uses for models of sequential decision making:\n\nUse (1): **Solve practical decision problems** (preferably with a fast algorithm that performs optimally)\n\nUse (2): **Learn the preferences and beliefs of humans** (e.g. to predict future behavior or to provide recommendations/advice)\n\nThe table below provides more detail about these two uses[^table]. The first chapters of the book focused on Use (1) and described agent models for solving MDPs and POMDPs optimally. Chapter IV (\"[Reasoning about Agents](/chapters/4-reasoning-about-agents.html)\"), by contrast, was on Use (2), employing agent models as *generative models* of human behavior which are inverted to learn human preferences. \n\nThe present chapter discusses the limitations of using optimal agent modes as generative models for Use (2). We argue that developing models of *biased* or *bounded* decision making can address these limitations. \n\n\"table\"\n\n>**Table 1:** Two uses for formal models of sequential decision making. The heading \"Optimality\" means \"Are optimal models of decision making used?\".\n
\n\n[^table]: Note that there are important interactions between Use (1) and Use (2). A challenge with Use (1) is that it's often hard to write down an appropriate utility function to optimize. The ideal utility function is one that reflects actual human preferences. So by solving (2) we can solve one of the \"key tasks\" in (1). This is exactly the approach taken in various applications of IRL. See work on Apprenticeship Learning refp:abbeel2004apprenticeship. \n\n\n\n\n\n### Random vs. Systematic Errors\nThe agent models presented in previous chapters are models of *optimal* performance on (PO)MDPs. So if humans deviate from optimality on some (PO)MDP then these models won't predict human behavior well. It's important to recognize the flexibility of the optimal models. The agent can have any utility function and any initial belief distribution. We saw in the previous chapters that apparently irrational behavior can sometimes be explained in terms of inaccurate prior beliefs.\n\nYet certain kinds of human behavior resist explanation in terms of false beliefs or unusual preferences. Consider the following:\n\n>**The Smoker**
Fred smokes cigarettes every day. He has tried to quit multiple times and still wants to quit. He is fully informed about the health effects of smoking and has learned from experience about the cravings that accompany attempts to quit. \n\nIt's hard to explain such persistent smoking in terms of inaccurate beliefs[^beliefs].\n\n[^beliefs]: One could argue that Fred has a temporary belief that smoking is high utility which causes him to smoke. This belief subsides after smoking a cigarette and is replaced with regret. To explain this in terms of a POMDP agent, there has to be an *observation* that triggers the belief-change via Bayesian updating. But what is this observation? Fred has *cravings*, but these cravings alter Fred's desires or wants, rather than being observational evidence about the empirical world. \n\nA common way of modeling deviations from optimal behavior is to use softmax noise refp:kim2014inverse and refp:zheng2014robust. Yet the softmax model has limited expressiveness. It's a model of *random* deviations from optimal behavior. Models of random error might be a good fit for certain motor or perceptual tasks (e.g. throwing a ball or locating the source of a distant sound). But the smoking example suggests that humans deviate from optimality *systematically*. That is, when not behaving optimally, human actions remain *predictable*, and big deviations from optimality in one domain do not imply highly random behavior in all domains. \n\nHere are some examples of systematic deviations from optimal action:\n
\n\n>**Systematic deviations from optimal action**\n\n- Smoking every week (i.e. systematically) while simultaneously trying to quit (e.g. by using patches and throwing away cigarettes).\n\n- Finishing assignments just before the deadline, while always planning to finish them as early as possible. \n\n- Forgetting random strings of letters or numbers (e.g. passwords or ID numbers) -- assuming they weren't explicitly memorized[^strings].\n\n- Making mistakes on arithmetic problems[^math] (e.g. long division).\n\n[^strings]: With effort people can memorize these strings and keep them in memory for long periods. The claim is that if people do not make an attempt to memorize a random string, they will systematically forget the string within a short duration. This can't be easily explained on a POMDP model, where the agent has perfect memory.\n\n[^math]: People learn the algorithm for long division but still make mistakes -- even when stakes are relatively high (e.g. important school exams). While humans vary in their math skill, all humans have severe limitations (compared to computers) at doing arithmetic. See refp:dehaene2011number for various robust, systematic limitations in human numerical cognition. \n\nThese examples suggest that human behavior in everyday decision problems will not be easily captured by assuming softmax optimality. In the next sections, we divide these systematic deviations from optimality into *cognitive biases* and *cognitive bounds*. After explaining each category, we discuss their relevance to learning the preferences of agents. \n\n\n### Human deviations from optimal action: Cognitive Bounds\n\nHumans perform sub-optimally on some MDPs and POMDPs due to basic computational constraints. Such constraints have been investigated in work on *bounded rationality* and *bounded optimality* refp:gershman2015computational. A simple example was mentioned above: people cannot quickly memorize random strings (even if the stakes are high). Similarly, consider the real-life version of our Restaurant Choice example. If you walk around a big city for the first time, you will forget the location of most of the restaurants you see on the way. If you try a few days later to find a restaurant, you are likely to take an inefficient route. This contrasts with the optimal POMDP-solving agent who never forgets anything.\n\nLimitations in memory are hardly unique to humans. For any autonomous robot, there is some number of random bits that it cannot quickly place in permanent storage. In addition to constraints on memory, humans and machines have constraints on time. Even the simplest POMDPs, such as Bandit problems, are intractable: the time needed to solve them will grow exponentially (or worse) in the problem size refp:cassandra1994acting, refp:madani1999undecidability. The issue is that optimal planning requires taking into account all possible sequences of actions and states. These explode in number as the number of states, actions, and possible sequences of observations grows[^grows].\n\n[^grows]: Dynamic programming helps but does not tame the beast. There are many POMDPs that are small enough to be easily described (i.e. they don't have a very long problem description) but which we can't solve optimally in practice.\n\nSo for any agent with limited time there will be POMDPs that they cannot solve exactly. It's plausible that humans often encounter POMDPs of this kind. 
For example, in lab experiments humans make systematic errors in small POMDPs that are easy to solve with computers refp:zhang2013forgetful and refp:doshi2011comparison. Real-world tasks with the structure of POMDPs, such as choosing how to invest resources or deciding on a sequence of scientific experiments, are much more complex and so presumably can't be solved by humans exactly.\n\n### Human deviations from optimal action: Cognitive Biases\n\nCognitive bounds of time and space (for memory) mean that any realistic agent will perform sub-optimally on some problems. By contrast, the term \"cognitive biases\" is usually applied to errors that are idiosyncratic to humans and would not arise in AI systems[^biases]. There is a large literature on cognitive biases in psychology and behavioral economics refp:kahneman2011thinking, refp:kahneman1984choices. One relevant example is the cluster of biases summarized by *Prospect Theory* refp:kahneman1979prospect. In one-shot choices between \"lotteries\", people are subject to framing effects (e.g. Loss Aversion) and to erroneous computation of expected utility[^prospect]. Another important bias is *time inconsistency*. This bias has been used to explain addiction, procrastination, impulsive behavior and the use of pre-commitment devices. The next chapter describes and implements time-inconsistent agents. \n\n[^biases]: We do not presuppose a well-substantiated scientific distinction between cognitive bounds and biases. Many have argued that biases result from heuristics and that the heuristics are a fine-tuned shortcut for dealing with cognitive bounds. For our purposes, the main distinction is between intractable decision problems (such that any agent will fail on large enough instances of the problem) and decision problems that appear trivial for simple computational systems but hard for some proportion of humans. For example, time-inconsistent behavior appears easy to avoid for computational systems but hard to avoid for humans. \n\n[^prospect]: The problem descriptions are extremely simple. So this doesn't look like an issue of bounds on time or memory forcing people to use a heuristic approach. \n\n\n### Learning preferences from bounded and biased agents\nWe've asserted that humans have cognitive biases and bounds. These lead to systematic deviations from optimal performance on (PO)MDP decision problems. As a result, the softmax-optimal agent models from previous chapters will not always be good generative models for human behavior. To learn human beliefs and preferences when such deviations from optimality are present, we extend and elaborate our (PO)MDP agent models to capture these deviations. The next chapter implements time-inconsistent agents via hyperbolic discounting. The subsequent chapter implements \"greedy\" or \"myopic\" planning, which is a general strategy for reducing time- and space-complexity. In the final chapter of this section, we show (a) that assuming humans are optimal can lead to mistaken inferences in some decision problems, and (b) that our extended generative models can avoid these mistakes.\n\nNext chapter: [Time inconsistency I](/chapters/5a-time-inconsistency.html)\n\n
\n\n### Footnotes\n", "date_published": "2017-03-19T18:46:48Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5-biases-intro.md"} +{"id": "de7fd0b4b054937b7a2d8856f11ee694", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/7-multi-agent.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Multi-agent models\ndescription: Schelling coordination games, tic-tac-toe, and a simple natural-language example.\nis_section: true\n---\n\nIn this chapter, we will look at models that involve multiple agents reasoning about each other.\nThis chapter is based on reft:stuhlmueller2013reasoning.\n\n## Schelling coordination games\n\nWe start with a simple [Schelling coordination game](http://lesswrong.com/lw/dc7/nash_equilibria_and_schelling_points/). Alice and Bob are trying to meet up but have lost their phones and have no way to contact each other. There are two local bars: the popular bar and the unpopular one.\n\nLet's first consider how Alice would choose a bar (if she was not taking Bob into account):\n\n~~~~\nvar locationPrior = function() {\n if (flip(.55)) {\n return 'popular-bar';\n } else {\n return 'unpopular-bar';\n }\n};\n\nvar alice = function() {\n return Infer({ model() {\n var myLocation = locationPrior();\n return myLocation;\n }});\n};\n\nviz(alice());\n~~~~\n\nBut Alice wants to be at the same bar as Bob. We extend our model of Alice to include this:\n\n~~~~\nvar locationPrior = function() {\n if (flip(.55)) {\n return 'popular-bar';\n } else {\n return 'unpopular-bar';\n }\n};\n\nvar alice = function() {\n return Infer({ model() {\n var myLocation = locationPrior();\n var bobLocation = sample(bob());\n condition(myLocation === bobLocation);\n return myLocation;\n }});\n};\n\nvar bob = function() {\n return Infer({ model() {\n var myLocation = locationPrior();\n return myLocation;\n }});\n};\n\nviz(alice());\n~~~~\n\nNow Bob and Alice are thinking recursively about each other. We add caching (to avoid repeated computations) and a depth parameter (to avoid infinite recursion):\n\n~~~~\nvar locationPrior = function() {\n if (flip(.55)) {\n return 'popular-bar';\n } else {\n return 'unpopular-bar';\n }\n}\n\nvar alice = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n var bobLocation = sample(bob(depth - 1));\n condition(myLocation === bobLocation);\n return myLocation;\n }});\n});\n\nvar bob = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n if (depth === 0) {\n return myLocation;\n } else {\n var aliceLocation = sample(alice(depth));\n condition(myLocation === aliceLocation);\n return myLocation;\n }\n }});\n});\n\nviz(alice(10));\n~~~~\n\n>**Exercise**: Change the example to the setting where Bob wants to avoid Alice instead of trying to meet up with her, and Alice knows this. How do the predictions change as the reasoning depth grows? How would you model the setting where Alice doesn't know that Bob wants to avoid her?\n\n>**Exercise**: Would any of the answers to the previous exercise change if recursive reasoning could terminate not just at a fixed depth, but also at random?\n\n\n## Game playing\n\nWe'll look at the two-player game tic-tac-toe:\n\n\n\n>*Figure 1:* Tic-tac-toe. 
(Image source: [Wikipedia](https://en.wikipedia.org/wiki/Tic-tac-toe#/media/File:Tic-tac-toe-game-1.svg))\n\n\n\nLet's start with a prior on moves:\n\n~~~~\nvar isValidMove = function(state, move) {\n return state[move.x][move.y] === '?';\n};\n\nvar movePrior = dp.cache(function(state){\n return Infer({ model() {\n var move = {\n x: randomInteger(3),\n y: randomInteger(3)\n };\n condition(isValidMove(state, move));\n return move;\n }});\n});\n\nvar startState = [\n ['?', 'o', '?'],\n ['?', 'x', 'x'],\n ['?', '?', '?']\n];\n\nviz.table(movePrior(startState));\n~~~~\n\nNow let's add a deterministic transition function:\n\n~~~~\n///fold: isValidMove, movePrior\nvar isValidMove = function(state, move) {\n return state[move.x][move.y] === '?';\n};\n\nvar movePrior = dp.cache(function(state){\n return Infer({ model() {\n var move = {\n x: randomInteger(3),\n y: randomInteger(3)\n };\n condition(isValidMove(state, move));\n return move;\n }});\n});\n///\n\nvar assign = function(obj, k, v) {\n var newObj = _.clone(obj);\n return Object.defineProperty(newObj, k, {value: v})\n};\n\nvar transition = function(state, move, player) {\n var newRow = assign(state[move.x], move.y, player);\n return assign(state, move.x, newRow);\n};\n\nvar startState = [\n ['?', 'o', '?'],\n ['?', 'x', 'x'],\n ['?', '?', '?']\n];\n\ntransition(startState, {x: 1, y: 0}, 'o');\n~~~~\n\nWe need to be able to check if a player has won:\n\n~~~~\n///fold: movePrior, transition\nvar isValidMove = function(state, move) {\n return state[move.x][move.y] === '?';\n};\n\nvar movePrior = dp.cache(function(state){\n return Infer({ model() {\n var move = {\n x: randomInteger(3),\n y: randomInteger(3)\n };\n condition(isValidMove(state, move));\n return move;\n }});\n});\n\nvar assign = function(obj, k, v) {\n var newObj = _.clone(obj);\n return Object.defineProperty(newObj, k, {value: v})\n};\n\nvar transition = function(state, move, player) {\n var newRow = assign(state[move.x], move.y, player);\n return assign(state, move.x, newRow);\n};\n///\n\nvar diag1 = function(state) {\n return mapIndexed(function(i, x) {return x[i];}, state);\n};\n\nvar diag2 = function(state) {\n var n = state.length;\n return mapIndexed(function(i, x) {return x[n - (i + 1)];}, state);\n};\n\nvar hasWon = dp.cache(function(state, player) {\n var check = function(xs){\n return _.countBy(xs)[player] == xs.length;\n };\n return any(check, [\n state[0], state[1], state[2], // rows\n map(first, state), map(second, state), map(third, state), // cols\n diag1(state), diag2(state) // diagonals\n ]);\n});\n\nvar startState = [\n ['?', 'o', '?'],\n ['x', 'x', 'x'],\n ['?', '?', '?']\n];\n\nhasWon(startState, 'x');\n~~~~\n\nNow let's implement an agent that chooses a single action, but can't plan ahead:\n\n~~~~\n///fold: movePrior, transition, hasWon\nvar isValidMove = function(state, move) {\n return state[move.x][move.y] === '?';\n};\n\nvar movePrior = dp.cache(function(state){\n return Infer({ model() {\n var move = {\n x: randomInteger(3),\n y: randomInteger(3)\n };\n condition(isValidMove(state, move));\n return move;\n }});\n});\n\nvar assign = function(obj, k, v) {\n var newObj = _.clone(obj);\n return Object.defineProperty(newObj, k, {value: v})\n};\n\nvar transition = function(state, move, player) {\n var newRow = assign(state[move.x], move.y, player);\n return assign(state, move.x, newRow);\n};\n\nvar diag1 = function(state) {\n return mapIndexed(function(i, x) {return x[i];}, state);\n};\n\nvar diag2 = function(state) {\n var n = state.length;\n return 
mapIndexed(function(i, x) {return x[n - (i + 1)];}, state);\n};\n\nvar hasWon = dp.cache(function(state, player) {\n var check = function(xs){\n return _.countBy(xs)[player] == xs.length;\n };\n return any(check, [\n state[0], state[1], state[2], // rows\n map(first, state), map(second, state), map(third, state), // cols\n diag1(state), diag2(state) // diagonals\n ]);\n});\n///\nvar isDraw = function(state) {\n return !hasWon(state, 'x') && !hasWon(state, 'o');\n};\n\nvar utility = function(state, player) {\n if (hasWon(state, player)) {\n return 10;\n } else if (isDraw(state)) {\n return 0;\n } else {\n return -10;\n }\n};\n\nvar act = dp.cache(function(state, player) {\n return Infer({ model() {\n var move = sample(movePrior(state));\n var eu = expectation(Infer({ model() {\n var outcome = transition(state, move, player);\n return utility(outcome, player);\n }}));\n factor(eu); \n return move;\n }});\n});\n\nvar startState = [\n ['o', 'o', '?'],\n ['?', 'x', 'x'],\n ['?', '?', '?']\n];\n\nviz.table(act(startState, 'x'));\n~~~~\n\nAnd now let's include planning:\n\n~~~~\n///fold: movePrior, transition, hasWon, utility, isTerminal\nvar isValidMove = function(state, move) {\n return state[move.x][move.y] === '?';\n};\n\nvar movePrior = dp.cache(function(state){\n return Infer({ model() {\n var move = {\n x: randomInteger(3),\n y: randomInteger(3)\n };\n condition(isValidMove(state, move));\n return move;\n }});\n});\n\nvar assign = function(obj, k, v) {\n var newObj = _.clone(obj);\n return Object.defineProperty(newObj, k, {value: v})\n};\n\nvar transition = function(state, move, player) {\n var newRow = assign(state[move.x], move.y, player);\n return assign(state, move.x, newRow);\n};\n\nvar diag1 = function(state) {\n return mapIndexed(function(i, x) {return x[i];}, state);\n};\n\nvar diag2 = function(state) {\n var n = state.length;\n return mapIndexed(function(i, x) {return x[n - (i + 1)];}, state);\n};\n\nvar hasWon = dp.cache(function(state, player) {\n var check = function(xs){\n return _.countBy(xs)[player] == xs.length;\n };\n return any(check, [\n state[0], state[1], state[2], // rows\n map(first, state), map(second, state), map(third, state), // cols\n diag1(state), diag2(state) // diagonals\n ]);\n});\n\nvar isDraw = function(state) {\n return !hasWon(state, 'x') && !hasWon(state, 'o');\n};\n\nvar utility = function(state, player) {\n if (hasWon(state, player)) {\n return 10;\n } else if (isDraw(state)) {\n return 0;\n } else {\n return -10;\n }\n};\n\nvar isComplete = function(state) {\n return all(\n function(x){\n return x != '?';\n },\n _.flatten(state));\n}\n\nvar isTerminal = function(state) {\n return hasWon(state, 'x') || hasWon(state, 'o') || isComplete(state); \n};\n///\n\nvar otherPlayer = function(player) {\n return (player === 'x') ? 
'o' : 'x';\n};\n\nvar act = dp.cache(function(state, player) {\n return Infer({ model() {\n var move = sample(movePrior(state));\n var eu = expectation(Infer({ model() {\n var outcome = simulate(state, move, player);\n return utility(outcome, player);\n }}));\n factor(eu); \n return move;\n }});\n});\n\nvar simulate = function(state, action, player) {\n var nextState = transition(state, action, player);\n if (isTerminal(nextState)) {\n return nextState;\n } else {\n var nextPlayer = otherPlayer(player);\n var nextAction = sample(act(nextState, nextPlayer));\n return simulate(nextState, nextAction, nextPlayer);\n }\n};\n\nvar startState = [\n ['o', '?', '?'],\n ['?', '?', 'x'],\n ['?', '?', '?']\n];\n\nvar actDist = act(startState, 'o');\n\nviz.table(actDist);\n~~~~\n\n## Language understanding\n\n\n\nA model of pragmatic language interpretation: The speaker chooses a sentence conditioned on the listener inferring the intended state of the world when hearing this sentence; the listener chooses an interpretation conditioned on the speaker selecting the given utterance when intending this meaning.\n\nLiteral interpretation:\n\n~~~~\nvar statePrior = function() {\n return uniformDraw([0, 1, 2, 3]);\n};\n\nvar literalMeanings = {\n allSprouted: function(state) { return state === 3; },\n someSprouted: function(state) { return state > 0; },\n noneSprouted: function(state) { return state === 0; }\n};\n\nvar sentencePrior = function() {\n return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']);\n};\n\nvar literalListener = function(sentence) {\n return Infer({ model() {\n var state = statePrior();\n var meaning = literalMeanings[sentence];\n condition(meaning(state));\n return state;\n }});\n};\n\nviz(literalListener('someSprouted'));\n~~~~\n\nA pragmatic speaker, thinking about the literal listener:\n\n~~~~\nvar alpha = 2;\n\n///fold: statePrior, literalMeanings, sentencePrior\nvar statePrior = function() {\n return uniformDraw([0, 1, 2, 3]);\n};\n\nvar literalMeanings = {\n allSprouted: function(state) { return state === 3; },\n someSprouted: function(state) { return state > 0; },\n noneSprouted: function(state) { return state === 0; }\n};\n\nvar sentencePrior = function() {\n return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']);\n};\n///\n\nvar literalListener = function(sentence) {\n return Infer({ model() {\n var state = statePrior();\n var meaning = literalMeanings[sentence];\n condition(meaning(state));\n return state;\n }});\n};\n\nvar speaker = function(state) {\n return Infer({ model() {\n var sentence = sentencePrior();\n factor(alpha * literalListener(sentence).score(state));\n return sentence;\n }});\n}\n\nviz(speaker(3));\n~~~~\n\nPragmatic listener, thinking about speaker:\n\n~~~~\nvar alpha = 2;\n\n///fold: statePrior, literalMeanings, sentencePrior\nvar statePrior = function() {\n return uniformDraw([0, 1, 2, 3]);\n};\n\nvar literalMeanings = {\n allSprouted: function(state) { return state === 3; },\n someSprouted: function(state) { return state > 0; },\n noneSprouted: function(state) { return state === 0; }\n};\n\nvar sentencePrior = function() {\n return uniformDraw(['allSprouted', 'someSprouted', 'noneSprouted']);\n};\n///\n\nvar literalListener = dp.cache(function(sentence) {\n return Infer({ model() {\n var state = statePrior();\n var meaning = literalMeanings[sentence];\n condition(meaning(state));\n return state;\n }});\n});\n\nvar speaker = dp.cache(function(state) {\n return Infer({ model() {\n var sentence = sentencePrior();\n factor(alpha * 
literalListener(sentence).score(state));\n    return sentence;\n  }});\n});\n\nvar listener = dp.cache(function(sentence) {\n  return Infer({ model() {\n    var state = statePrior();\n    factor(speaker(state).score(sentence));\n    return state;\n  }});\n});\n\nviz(listener('someSprouted'));\n~~~~\n\nNext chapter: [How to use the WebPPL Agent Models library](/chapters/8-guide-library.html)\n", "date_published": "2016-12-04T11:26:34Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "7-multi-agent.md"} +{"id": "32ca55d04d57aa1d34e0db0fa171a0c9", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5d-joint-inference.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Joint inference of biases and preferences I\ndescription: Assuming agent optimality leads to mistakes in inference. Procrastination and Bandit Examples.\n\n---\n\n### Introduction\n\nTechniques for inferring human preferences and beliefs from their behavior on a task usually assume that humans solve the task (softmax) optimally. When this assumption fails, inference often fails too. This chapter explores how incorporating time inconsistency and myopic planning into models of human behavior can improve inference. \n\n\n\n\n\n\n\n### Formalization of Joint Inference\n\n We formalize joint inference over beliefs, preferences and biases by extending the approach developed in Chapter IV, \"[Reasoning about Agents](/chapters/4-reasoning-about-agents)\", where an agent was formally defined by parameters $$ \\left\\langle U, \\alpha, b_0 \\right\\rangle$$. To include the possibility of time-inconsistency and myopia, an agent $$\\theta$$ is now characterized by a tuple of parameters as follows:\n\n$$\n\\theta = \\left\\langle U, \\alpha, b_0, k, \\nu, C \\right\\rangle\n$$\n\nwhere:\n\n- $$U$$ is the utility function\n\n- $$\\alpha$$ is the softmax noise parameter\n\n- $$b_0$$ is the agent's belief (or prior) over the initial state\n\n\n- $$k \\geq 0$$ is the constant for hyperbolic discounting function $$1/(1+kd)$$\n\n- $$\\nu$$ is an indicator for Naive or Sophisticated hyperbolic discounting\n\n- $$C \\in [1,\\infty]$$ is the integer cutoff or bound for Reward-myopic or Update-myopic Agents[^bound] \n\nAs in Equation (2) of Chapter IV, we condition on state-action-observation triples:\n\n$$\nP(\\theta \\vert (s,o,a)_{0:n}) \\propto P( (s,o,a)_{0:n} \\vert \\theta)P(\\theta)\n$$\n\nWe obtain a factorized form in exactly the same way as in Equation (2), i.e. we generate the sequence $$b_i$$ from $$i=0$$ to $$i=n$$ of agent beliefs:\n\n$$\nP(\\theta \\vert (s,o,a)_{0:n}) \\propto \nP(\\theta) \\prod_{i=0}^n P( a_i \\vert s_i, b_i, U, \\alpha, k, \\nu, C )\n$$\n\nThe likelihood term on the right-hand side of this equation is simply the softmax probability that the agent with given parameters chooses $$a_i$$ in state $$s_i$$. This equation for inference does not make use of the *delay* indices used by time-inconsistent and Myopic agents. This is because the delays figure only in their internal simulations. In order to compute the likelihood that the agent takes an action, we don't need to keep track of delay values. \n\n[^bound]: To simplify the presentation, we assume here that one does inference either about whether the agent is Update-myopic or about whether the agent is Reward-myopic (but not both). 
It's actually straightforward to include both kinds of agents in the hypothesis space and infer both $$C_m$$ and $$C_g$$. \n\n\n## Learning from Procrastinators\n\nThe Procrastination Problem (Figure 1 below) illustrates how agents with identical preferences can deviate *systematically* in their behavior due to time inconsistency. Suppose two agents care equally about finishing the task and assign the same cost to doing the hard work. The optimal agent will complete the task immediately. The Naive hyperbolic discounter will delay every day until the deadline, which could be thirty days away!\n\n\"diagram\"\n\n>**Figure 1:** Transition graph for Procrastination Problem. States are represented by nodes. Edges are state-transitions and are labeled with the action name and the utility of the state-action pair. Terminal nodes have a bold border and their utility is labeled below.\n\nThis kind of systematic deviation between agents is also significant for inferring preferences. We consider the problem of *online* inference, where we observe the agent's behavior each day and produce an estimate of their preferences. Suppose the agent has a deadline $$T$$ days into the future and leaves the work till the last day. This is typical human behavior -- and so is a good test for a model of inference. \n\nWe compare the online inferences of two models. The *Optimal Model* assumes the agent is time-consistent with softmax parameter $$\\alpha$$. The *Possibly Discounting* model includes both optimal and Naive hyperbolic discounting agents in its prior. (The Possibly Discounting model includes the Optimal Model as a special case; this allows us to place a uniform prior on the models and exploit [Bayesian Model Selection](http://alumni.media.mit.edu/~tpminka/statlearn/demo/).)\n\nFor each model, we compute posteriors for the agent's parameters after observing the agent's choice at each timestep. We set $$T=10$$. So the observed actions are:\n\n>`[\"wait\", \"wait\", \"wait\", ... , \"work\"]`\n\nwhere `\"work\"` is the final action. We fix the utilities for doing the work (the `workCost` or $$-w$$) and for delaying the work (the `waitCost` or $$-\\epsilon$$). We infer the following parameters:\n\n- The reward received after completing the work: $$R$$ or `reward`\n- The agent's softmax parameter: $$\\alpha$$\n- The agent's discount rate (for the Possibly Discounting model): $$k$$ or `discount`\n\nNote that we are not just inferring *whether* the agent is biased; we also infer how biased they are. For each parameter, we plot a time-series showing the posterior expectation of the parameter on each day. We also plot the model's posterior predictive probability that the agent does the work on the last day (assuming the agent gets to the last day without having done the work).\n\n\n\n\n~~~~ \n// infer_procrastination\n\n///fold: makeProcrastinationMDP, makeProcrastinationUtility, displayTimeSeries, ...\nvar makeProcrastinationMDP = function(deadlineTime) {\n var stateLocs = [\"wait_state\", \"reward_state\"];\n var actions = [\"wait\", \"work\", \"relax\"];\n\n var stateToActions = function(state) {\n return (state.loc === \"wait_state\" ? 
\n [\"wait\", \"work\"] :\n [\"relax\"]);\n };\n\n var advanceTime = function (state) {\n var newTimeLeft = state.timeLeft - 1;\n var terminateAfterAction = (newTimeLeft === 1 || \n state.loc === \"reward_state\");\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: terminateAfterAction\n });\n };\n\n var transition = function(state, action) {\n assert.ok(_.includes(stateLocs, state.loc) && _.includes(actions, action), \n 'procrastinate transition:' + [state.loc,action]);\n \n if (state.loc === \"reward_state\") {\n return advanceTime(state);\n } else if (action === \"wait\") {\n var waitSteps = state.waitSteps + 1;\n return extend(advanceTime(state), { waitSteps });\n } else {\n var newState = extend(state, { loc: \"reward_state\" });\n return advanceTime(newState);\n }\n };\n\n var feature = function(state) {\n return state.loc;\n };\n\n var startState = {\n loc: \"wait_state\",\n waitSteps: 0,\n timeLeft: deadlineTime,\n terminateAfterAction: false\n };\n\n return {\n actions,\n stateToActions,\n transition,\n feature,\n startState\n };\n};\n\nvar makeProcrastinationUtility = function(utilityTable) {\n assert.ok(hasProperties(utilityTable, ['waitCost', 'workCost', 'reward']),\n 'makeProcrastinationUtility args');\n var waitCost = utilityTable.waitCost;\n var workCost = utilityTable.workCost;\n var reward = utilityTable.reward;\n\n // NB: you receive the *workCost* when you leave the *wait_state*\n // You then receive the reward when leaving the *reward_state* state\n return function(state, action) {\n if (state.loc === \"reward_state\") {\n return reward + state.waitSteps * waitCost;\n } else if (action === \"work\") {\n return workCost;\n } else {\n return 0;\n }\n };\n};\n\nvar getMarginal = function(dist, key){\n return Infer({ model() {\n return sample(dist)[key];\n }});\n};\n\nvar displayTimeSeries = function(observedStateAction, getPosterior) {\n var features = ['reward', 'predictWorkLastMinute', 'alpha', 'discount'];\n\n // dist on {a:1, b:3, ...} -> [E('a'), E('b') ... 
]\n var distToMarginalExpectations = function(dist, keys) {\n return map(function(key) {\n return expectation(getMarginal(dist, key));\n }, keys);\n };\n // condition observations up to *timeIndex* and take expectations\n var inferUpToTimeIndex = function(timeIndex, useOptimalModel) {\n var observations = observedStateAction.slice(0, timeIndex);\n return distToMarginalExpectations(getPosterior(observations, useOptimalModel), features);\n };\n\n var getTimeSeries = function(useOptimalModel) {\n\n var inferAllTimeIndexes = map(function(index) {\n return inferUpToTimeIndex(index, useOptimalModel);\n }, _.range(observedStateAction.length));\n\n return map(function(i) {\n // get full time series of online inferences for each feature\n return map(function(infer){return infer[i];}, inferAllTimeIndexes);\n }, _.range(features.length));\n };\n\n var displayOptimalAndPossiblyDiscountingSeries = function(index) {\n print('\\n\\nfeature: ' + features[index]);\n var optimalSeries = getTimeSeries(true)[index];\n var possiblyDiscountingSeries = getTimeSeries(false)[index];\n var plotOptimal = map(\n function(pair){ \n return {\n t: pair[0], \n expectation: pair[1], \n agentModel: 'Optimal'\n };\n },\n zip(_.range(observedStateAction.length), optimalSeries));\n var plotPossiblyDiscounting = map(\n function(pair){\n return {\n t: pair[0],\n expectation: pair[1],\n agentModel: 'Possibly Discounting'\n };\n },\n zip(_.range(observedStateAction.length),\n possiblyDiscountingSeries));\n viz.line(plotOptimal.concat(plotPossiblyDiscounting), \n { groupBy: 'agentModel' });\n };\n\n print('Posterior expectation on feature after observing ' +\n '\"wait\" for t timesteps and \"work\" when t=9');\n map(displayOptimalAndPossiblyDiscountingSeries, _.range(features.length));\n return '';\n};\n\nvar procrastinationData = [[{\"loc\":\"wait_state\",\"waitSteps\":0,\"timeLeft\":10,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":1,\"timeLeft\":9,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":2,\"timeLeft\":8,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":3,\"timeLeft\":7,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":4,\"timeLeft\":6,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":5,\"timeLeft\":5,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":6,\"timeLeft\":4,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":7,\"timeLeft\":3,\"terminateAfterAction\":false},\"wait\"],[{\"loc\":\"wait_state\",\"waitSteps\":8,\"timeLeft\":2,\"terminateAfterAction\":false},\"work\"],[{\"loc\":\"reward_state\",\"waitSteps\":8,\"timeLeft\":1,\"terminateAfterAction\":true},\"relax\"]];\n///\n\nvar getPosterior = function(observedStateAction, useOptimalModel) {\n var world = makeProcrastinationMDP();\n var lastChanceState = secondLast(procrastinationData)[0];\n\n return Infer({ model() {\n\n var utilityTable = {\n reward: uniformDraw([0.5, 2, 3, 4, 5, 6, 7, 8]),\n waitCost: -0.1,\n workCost: -1\n };\n var params = {\n utility: makeProcrastinationUtility(utilityTable),\n alpha: categorical([0.1, 0.2, 0.2, 0.2, 0.3], [0.1, 1, 10, 100, 1000]),\n discount: useOptimalModel ? 
0 : uniformDraw([0, .5, 1, 2, 4]),\n      sophisticatedOrNaive: 'naive'\n    };\n\n    var agent = makeMDPAgent(params, world);\n    var act = agent.act;\n\n    map(function(stateAction) {\n      var state = stateAction[0];\n      var action = stateAction[1];\n      observe(act(state, 0), action);\n    }, observedStateAction);\n\n    return {\n      reward: utilityTable.reward, \n      alpha: params.alpha, \n      discount: params.discount, \n      predictWorkLastMinute: sample(act(lastChanceState, 0)) === 'work'\n    };\n  }});\n};\n\ndisplayTimeSeries(procrastinationData, getPosterior);\n~~~~\n\nThe optimal model makes inferences that clash with everyday intuition. Suppose someone has still not done a task with only two days left. Would you confidently rule out them doing it at the last minute? \n\nWith two days left, the Optimal model has almost complete confidence that the agent doesn't care about the task enough to do the work (`reward < workCost = 1`). Hence it assigns probability $$0.005$$ to the agent doing the task at the last minute (`predictWorkLastMinute`). By contrast, the Possibly Discounting model predicts the agent will do the task with probability around $$0.2$$. The predictive probability is no higher than $$0.2$$ because the Possibly Discounting model allows the agent to be optimal (`discount==0`) and because a sub-optimal agent might be too lazy to do the work even at the last minute (i.e. `discount` is high enough to overwhelm `reward`).\n\nSuppose someone completes the task on the final day. What do you infer about them? The Optimal Model has to explain the action by massively revising its inference about `reward` and $$\\alpha$$. It suddenly infers that the agent is extremely noisy and that `reward > workCost` by a big margin. The extreme noise is needed to explain why the agent would miss a good option nine out of ten times. By contrast, the Possibly Discounting Model does not change its inference about the agent's noise level very much at all (in terms of practical significance). It infers a much higher value for `reward`, which is plausible in this context. \n\n\n----------\n\n\n## Learning from Reward-myopic Agents in Bandits\n\nChapter V.2. \"[Bounded Agents](/chapters/5c-myopia)\" explained that Reward-myopic agents explore less than optimal agents. The Reward-myopic agent plans each action as if time runs out in $$C_g$$ steps, where $$C_g$$ is the *bound* or \"look ahead\". If exploration only pays off in the long run (after the bound), then the agent won't explore[^bandit1]. This means there are two possible explanations for an agent not exploring: either the agent is greedy or the agent has a low prior on the utility of the unknown options.\n\n[^bandit1]: If there's no noise in transitions or in selection of actions, the Reward-myopic agent will *never* explore and will do poorly. \n\nWe return to the deterministic bandit-style problem from earlier. At each trial, the agent chooses between two arms with the following properties:\n\n- `arm0`: yields chocolate\n\n- `arm1`: yields either champagne or no prize at all (agent's prior is $$0.7$$ for nothing)\n\n\"diagram\"\n\nThe inference problem is to infer the agent's preference over chocolate. While having only two deterministic arms may seem overly simple, the same structure is shared by realistic problems. For example, we can imagine observing people choosing between different cuisines, restaurants or menu options. Usually people know about some options well but are uncertain about others. 
When inferring their preferences, we distinguish between options chosen for exploration vs. exploitation. The same applies to people choosing media sources: someone might try out a channel to learn whether it shows their favorite genre. \n\nAs with the Procrastination example above, we compare the inferences of two models. The *Optimal Model* assumes the agent solves the POMDP optimally. The *Possibly Reward-myopic Model* includes both the optimal agent and Reward-myopic agents with different values for the bound $$C_g$$. The models know the agent's utility for champagne and his prior about how likely champagne is from `arm1`. The models have a fixed prior on the agent's utility for chocolate. We vary the agent's time horizon between 2 and 10 timesteps and plot posterior expectations for the utility of chocolate. For the Possibly Reward-myopic model, we also plot the expectation for $$C_g$$. \n\n\n\n~~~~ \n// helper function to assemble and display inferred values\n///fold:\nvar timeHorizonValues = _.range(10).slice(2);\nvar features = ['Utility of arm 0 (chocolate)', 'Greediness bound'];\n\nvar displayExpectations = function(getPosterior) {\n\n var getExpectations = function(useOptimalModel) {\n\n var inferAllTimeHorizons = map(function(horizon) {\n return getPosterior(horizon, useOptimalModel);\n }, timeHorizonValues);\n\n return map(\n function(i) {\n return map(function(infer){return infer[i];}, inferAllTimeHorizons);\n }, \n _.range(features.length));\n };\n\n var displayOptimalAndPossiblyRewardMyopicSeries = function(index) {\n print('\\n\\nfeature: ' + features[index]);\n var optimalSeries = getExpectations(true)[index];\n var possiblyRewardMyopicSeries = getExpectations(false)[index];\n var plotOptimal = map(\n function(pair) {\n return {\n horizon: pair[0], \n expectation: pair[1], \n agentModel: 'Optimal'\n };\n },\n zip(timeHorizonValues, optimalSeries));\n var plotPossiblyRewardMyopic = map(\n function(pair){\n return {\n horizon: pair[0], \n expectation: pair[1],\n agentModel: 'Possibly RewardMyopic'\n };\n },\n zip(timeHorizonValues, possiblyRewardMyopicSeries));\n viz.line(plotOptimal.concat(plotPossiblyRewardMyopic), \n { groupBy: 'agentModel' });\n };\n\n print('Posterior expectation on feature after observing no exploration');\n map(displayOptimalAndPossiblyRewardMyopicSeries, _.range(features.length));\n return '';\n};\n\nvar getMarginal = function(dist, key){\n return Infer({ model() {\n return sample(dist)[key];\n }});\n};\n///\n\n\nvar getPosterior = function(numberOfTrials, useOptimalModel) {\n var trueArmToPrizeDist = {\n 0: Delta({ v: 'chocolate' }),\n 1: Delta({ v: 'nothing' })\n };\n var bandit = makeBanditPOMDP({\n numberOfArms: 2,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials: numberOfTrials\n });\n\n var startState = bandit.startState;\n var alternativeArmToPrizeDist = extend(trueArmToPrizeDist,\n { 1: Delta({ v: 'champagne' }) });\n var alternativeStartState = makeBanditStartState(numberOfTrials,\n alternativeArmToPrizeDist);\n\n var priorAgentPrior = Delta({ \n v: Categorical({ \n vs: [startState, alternativeStartState], \n ps: [0.7, 0.3]\n })\n });\n\n var priorPrizeToUtility = Infer({ model() {\n return {\n chocolate: uniformDraw(_.range(20).concat(25)),\n nothing: 0,\n champagne: 20\n };\n }});\n\n var priorMyopia = (\n useOptimalModel ? 
\n Delta({ v: { on: false, bound:0 }}) :\n Infer({ model() {\n return { \n bound: categorical([.4, .2, .1, .1, .1, .1], \n [1, 2, 3, 4, 6, 10]) \n };\n }}));\n\n var prior = { priorAgentPrior, priorPrizeToUtility, priorMyopia };\n\n var baseAgentParams = {\n alpha: 1000,\n sophisticatedOrNaive: 'naive',\n discount: 0,\n noDelays: useOptimalModel\n };\n \n var observations = [[startState, 0]];\n\n var outputDist = inferBandit(bandit, baseAgentParams, prior, observations,\n 'offPolicy', 0, 'beliefDelay');\n\n var marginalChocolate = Infer({ model() {\n return sample(outputDist).prizeToUtility.chocolate;\n }});\n\n return [\n expectation(marginalChocolate), \n expectation(getMarginal(outputDist, 'myopiaBound'))\n ];\n};\n\nprint('Prior expected utility for arm0 (chocolate): ' + \n listMean(_.range(20).concat(25)) );\n\ndisplayExpectations(getPosterior);\n~~~~\n\nThe graphs show that as the agent's time horizon increases the inferences of the two models diverge. For the Optimal agent, the longer time horizon makes exploration more valuable. So the Optimal model infers a higher utility for the known option as the time horizon increases. By contrast, the Possibly Reward-myopic model can explain away the lack of exploration by the agent being Reward-myopic. This latter model infers slightly lower values for $$C_g$$ as the horizon increases. \n\n>**Exercise**: Suppose that instead of allowing the agent to be greedy, we allowed the agent to be a hyperbolic discounter. Think about how this would affect inferences from the observations above and for other sequences of observation. Change the code above to test out your predictions.\n
\n\nNext chapter: [Joint inference of biases and preferences II](/chapters/5e-joint-inference.html)\n\n
\n\n### Footnotes\n", "date_published": "2019-08-29T10:20:19Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5d-joint-inference.md"} +{"id": "a894af445e54ec10a745213ccd2d14e3", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3b-mdp-gridworld.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"MDPs and Gridworld in WebPPL\"\ndescription: Noisy actions (softmax), stochastic transitions, policies, Q-values.\n---\n\nThis chapter explores some key features of MDPs: stochastic dynamics, stochastic policies, and value functions.\n\n### Hiking in Gridworld\n\nWe begin by introducing a new gridworld MDP:\n\n> **Hiking Problem**:\n>Suppose that Alice is hiking. There are two peaks nearby, denoted \"West\" and \"East\". The peaks provide different views and Alice must choose between them. South of Alice's starting position is a steep hill. Falling down the hill would result in painful (but non-fatal) injury and end the hike early.\n\nWe represent Alice's hiking problem with a Gridworld similar to Bob's Restaurant Choice example. The peaks are terminal states, providing different utilities. The steep hill is represented by a row of terminal state, each with identical negative utility. Each timestep before Alice reaches a terminal state incurs a \"time cost\", which is negative to represent the fact that Alice prefers a shorter hike. \n\n\n~~~~\nvar H = { name: 'Hill' };\nvar W = { name: 'West' };\nvar E = { name: 'East' };\nvar ___ = ' ';\n\nvar grid = [\n [___, ___, ___, ___, ___],\n [___, '#', ___, ___, ___],\n [___, '#', W , '#', E ],\n [___, ___, ___, ___, ___],\n [ H , H , H , H , H ]\n];\n\nvar start = [0, 1];\n\nvar mdp = makeGridWorldMDP({ grid, start });\n\nviz.gridworld(mdp.world, { trajectory: [mdp.startState] });\n~~~~\n\nWe start with a *deterministic* transition function. In this case, Alice's risk of falling down the steep hill is solely due to softmax noise in her action choice (which is minimal in this case). The agent model is the same as the one at the end of [Chapter III.1](/chapters/3a-mdp.html). We place the functions `act`, `expectedUtility` in a function `makeMDPAgent`. 
The following codebox defines this function and we use it later on without defining it (since it's in the `webppl-agents` library).\n\n\n~~~~\n// Set up agent structure\n\nvar makeMDPAgent = function(params, world) {\n var stateToActions = world.stateToActions;\n var transition = world.transition;\n var utility = params.utility;\n var alpha = params.alpha;\n\n var act = dp.cache(\n function(state) {\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action);\n factor(alpha * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action){\n var u = utility(state, action);\n if (state.terminateAfterAction){\n return u;\n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action);\n var nextAction = sample(act(nextState));\n return expectedUtility(nextState, nextAction);\n }}));\n }\n });\n\n return { params, expectedUtility, act };\n};\n\nvar simulate = function(startState, world, agent) {\n var act = agent.act;\n var transition = world.transition;\n var sampleSequence = function(state) {\n var action = sample(act(state));\n var nextState = transition(state, action);\n if (state.terminateAfterAction) {\n return [state];\n } else {\n return [state].concat(sampleSequence(nextState));\n }\n };\n return sampleSequence(startState);\n};\n\n\n// Set up world\n\nvar makeHikeMDP = function(options) {\n var H = { name: 'Hill' };\n var W = { name: 'West' };\n var E = { name: 'East' };\n var ___ = ' ';\n var grid = [\n [___, ___, ___, ___, ___],\n [___, '#', ___, ___, ___],\n [___, '#', W , '#', E ],\n [___, ___, ___, ___, ___],\n [ H , H , H , H , H ]\n ];\n return makeGridWorldMDP(_.assign({ grid }, options));\n};\n\nvar mdp = makeHikeMDP({\n start: [0, 1],\n totalTime: 12,\n transitionNoiseProbability: 0\n});\n\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n\n// Create parameterized agent\n\nvar utility = makeUtilityFunction({\n East: 10,\n West: 1,\n Hill: -10,\n timeCost: -.1\n});\nvar agent = makeMDPAgent({ utility, alpha: 1000 }, mdp.world);\n\n\n// Run agent on world\n\nvar trajectory = simulate(mdp.startState, mdp.world, agent);\n\n\nviz.gridworld(mdp.world, { trajectory });\n~~~~\n\n>**Exercise**: Adjust the parameters of `utilityTable` in order to produce the following behaviors:\n\n>1. The agent goes directly to \"West\".\n>2. The agent takes the long way around to \"West\".\n>3. The agent sometimes goes to the Hill at $$[1,0]$$. Try to make this outcome as likely as possible.\n\n\n\n### Hiking with stochastic transitions\n\nImagine that the weather is very wet and windy. As a result, Alice will sometimes intend to go one way but actually go another way (because she slips in the mud). In this case, the shorter route to the peaks might be too risky for Alice.\n\nTo model bad weather, we assume that at every timestep, there is a constant independent probability `transitionNoiseProbability` of the agent moving orthogonally to their intended direction. 
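\n\nAs a rough, self-contained sketch of what such a noisy transition looks like (an added illustration, not the library implementation: it assumes actions are represented as unit vectors such as `[0, 1]`, and a real Gridworld transition also has to deal with walls and terminal states, which this sketch ignores):\n\n~~~~\nvar noisyTransition = function(loc, action, noiseProb) {\n  // With probability noiseProb the agent slips and moves orthogonally\n  // to the intended direction; otherwise the intended move happens.\n  var orthogonal = [[action[1], action[0]], [-action[1], -action[0]]];\n  var move = flip(noiseProb) ? uniformDraw(orthogonal) : action;\n  return [loc[0] + move[0], loc[1] + move[1]];\n};\n\n// Distribution over next locations when intending to move in direction\n// [0, 1] from [0, 0], with noise probability 0.1\nviz(Infer({ model() {\n  return JSON.stringify(noisyTransition([0, 0], [0, 1], 0.1));\n}}));\n~~~~\n\n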
The independence assumption is unrealistic (if a location is slippery at one timestep it is more likely slippery the next), but it is simple and satisfies the Markov assumption for MDPs.\n\nSetting `transitionNoiseProbability=0.1`, the agent's first action is now to move \"up\" instead of \"right\".\n\n~~~~\n///fold: makeHikeMDP\nvar makeHikeMDP = function(options) {\n var H = { name: 'Hill' };\n var W = { name: 'West' };\n var E = { name: 'East' };\n var ___ = ' ';\n var grid = [\n [___, ___, ___, ___, ___],\n [___, '#', ___, ___, ___],\n [___, '#', W , '#', E ],\n [___, ___, ___, ___, ___],\n [ H , H , H , H , H ]\n ];\n return makeGridWorldMDP(_.assign({ grid }, options));\n};\n///\n\n// Set up world\n\nvar mdp = makeHikeMDP({\n start: [0, 1],\n totalTime: 13,\n transitionNoiseProbability: 0.1 // <- NEW\n});\n\n\n// Create parameterized agent\n\nvar makeUtilityFunction = mdp.makeUtilityFunction;\nvar utility = makeUtilityFunction({\n East: 10,\n West: 1,\n Hill: -10,\n timeCost: -.1\n});\nvar agent = makeMDPAgent({ utility, alpha: 100 }, mdp.world);\n\n\n// Generate a single trajectory, draw\n\nvar trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states');\nviz.gridworld(mdp.world, { trajectory });\n\n\n// Generate 100 trajectories, plot distribution on lengths\n\nvar trajectoryDist = Infer({\n model() {\n var trajectory = simulateMDP(mdp.startState, mdp.world, agent);\n return { trajectoryLength: trajectory.length }\n },\n method: 'forward',\n samples: 100\n});\n\nviz(trajectoryDist);\n~~~~\n\n>**Exercise:**\n\n>1. Keeping `transitionNoiseProbability=0.1`, find settings for `utilityTable` such that the agent goes \"right\" instead of \"up\".\n>2. Set `transitionNoiseProbability=0.01`. Change a single parameter in `utilityTable` such that the agent goes \"right\" (there are multiple ways to do this).\n\n\n### Noisy transitions vs. Noisy agents\n\nIt's important to distinguish noise in the transition function from the softmax noise in the agent's selection of actions. Noise (or \"stochasticity\") in the transition function is a representation of randomness in the world. This is easiest to think about in games of chance[^noise]. In a game of chance (e.g. slot machines or poker) rational agents will take into account the randomness in the game. By contrast, softmax noise is a property of an agent. For example, we can vary the behavior of otherwise identical agents by varying their parameter $$\\alpha$$.\n\nUnlike transition noise, softmax noise has little influence on the agent's planning for the Hiking Problem. Since it's so bad to fall down the hill, the softmax agent will rarely do so even if they take the short route. The softmax agent is like a person who takes inefficient routes when stakes are low but \"pulls themself together\" when stakes are high.\n\n[^noise]: An agent's world model might treat a complex set of deterministic rules as random. In this sense, agents will vary in whether they represent an MDP as stochastic or not. We won't consider that case in this tutorial.\n\n>**Exercise:** Use the codebox below to explore different levels of softmax noise. Find a setting of `utilityTable` and `alpha` such that the agent goes to West and East equally often and nearly always takes the most direct route to both East and West. Included below is code for simulating many trajectories and returning the trajectory length. You can extend this code to measure whether the route taken by the agent is direct or not. 
(Note that while the softmax agent here is able to \"backtrack\" or return to its previous location, in later Gridworld examples we disalllow backtracking as a possible action).\n\n~~~~\n///fold: makeHikeMDP, set up world\nvar makeHikeMDP = function(options) {\n var H = { name: 'Hill' };\n var W = { name: 'West' };\n var E = { name: 'East' };\n var ___ = ' ';\n var grid = [\n [___, ___, ___, ___, ___],\n [___, '#', ___, ___, ___],\n [___, '#', W , '#', E ],\n [___, ___, ___, ___, ___],\n [ H , H , H , H , H ]\n ];\n return makeGridWorldMDP(_.assign({ grid }, options));\n};\n\nvar mdp = makeHikeMDP({\n start: [0, 1],\n totalTime: 13,\n transitionNoiseProbability: 0.1\n});\n\nvar world = mdp.world;\nvar startState = mdp.startState;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n///\n\n// Create parameterized agent\nvar utility = makeUtilityFunction({\n East: 10,\n West: 1,\n Hill: -10,\n timeCost: -.1\n});\nvar alpha = 1; // <- SOFTMAX NOISE\nvar agent = makeMDPAgent({ utility, alpha }, world);\n\n// Generate a single trajectory, draw\nvar trajectory = simulateMDP(startState, world, agent, 'states');\nviz.gridworld(world, { trajectory });\n\n// Generate 100 trajectories, plot distribution on lengths\nvar trajectoryDist = Infer({\n model() {\n var trajectory = simulateMDP(startState, world, agent);\n return { trajectoryLength: trajectory.length }\n },\n method: 'forward',\n samples: 100\n});\nviz(trajectoryDist);\n~~~~\n\n\n### Stochastic transitions: plans and policies\n\nWe return to the case of a stochastic environment with very low softmax action noise. In a stochastic environment, the agent sometimes finds themself in a state they did not intend to reach. The functions `agent` and `expectedUtility` (inside `makeMDPAgent`) implicitly compute the expected utility of actions for every possible future state, including states that the agent will try to avoid. In the MDP literature, this function from states and remaining time to actions is called a *policy*. (For infinite-horizon MDPs, policies are functions from states to actions.) Since policies take into account every possible contingency, they are quite different from the everyday notion of a plan.\n\nConsider the example from above where the agent takes the long route because of the risk of falling down the hill. If we generate a single trajectory for the agent, they will likely take the long route. However, if we generated many trajectories, we would sometimes see the agent move \"right\" instead of \"up\" on their first move. Before taking this first action, the agent implicitly computes what they *would* do if they end up moving right. 
To find out what they would do, we can artificially start the agent in $$[1,1]$$ instead of $$[0,1]$$:\n\n\n~~~~\n///fold: makeHikeMDP\nvar makeHikeMDP = function(options) {\n var H = { name: 'Hill' };\n var W = { name: 'West' };\n var E = { name: 'East' };\n var ___ = ' ';\n var grid = [\n [___, ___, ___, ___, ___],\n [___, '#', ___, ___, ___],\n [___, '#', W , '#', E ],\n [___, ___, ___, ___, ___],\n [ H , H , H , H , H ]\n ];\n return makeGridWorldMDP(_.assign({ grid }, options));\n};\n///\n\n// Parameters for world\nvar mdp = makeHikeMDP({\n start: [1, 1], // Previously: [0, 1]\n totalTime: 11, // Previously: 12\n transitionNoiseProbability: 0.1\n});\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n// Parameters for agent\nvar utility = makeUtilityFunction({ \n East: 10, \n West: 1,\n Hill: -10,\n timeCost: -.1\n});\nvar agent = makeMDPAgent({ utility, alpha: 1000 }, mdp.world);\nvar trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states');\n\nviz.gridworld(mdp.world, { trajectory });\n~~~~\n\nExtending this idea, we can display the expected values of each action the agent *could have taken* during their trajectory. These expected values numbers are analogous to state-action Q-values in infinite-horizon MDPs.\n\nThe expected values were already being computed implicitly; we now use `getExpectedUtilitiesMDP` to access them. The displayed numbers in each grid cell are the expected utilities of moving in the corresponding directions. For example, we can read off how close the agent was to taking the short route as opposed to the long route. (Note that if the difference in expected utility between two actions is small then a noisy agent will take each of them with nearly equal probability).\n\n~~~~\n///fold: makeBigHikeMDP, getExpectedUtilitiesMDP\nvar makeBigHikeMDP = function(options) {\n var H = { name: 'Hill' };\n var W = { name: 'West' };\n var E = { name: 'East' };\n var ___ = ' ';\n var grid = [\n [___, ___, ___, ___, ___, ___],\n [___, ___, ___, ___, ___, ___],\n [___, ___, '#', ___, ___, ___],\n [___, ___, '#', W , '#', E ],\n [___, ___, ___, ___, ___, ___],\n [ H , H , H , H , H , H ]\n ];\n return makeGridWorldMDP(_.assign({ grid }, options));\n};\n\n// trajectory must consist only of states. This can be done by calling\n// *simulate* with an additional final argument 'states'.\nvar getExpectedUtilitiesMDP = function(stateTrajectory, world, agent) {\n var eu = agent.expectedUtility;\n var actions = world.actions;\n var getAllExpectedUtilities = function(state) {\n var actionUtilities = map(\n function(action){ return eu(state, action); },\n actions);\n return [state, actionUtilities];\n };\n return map(getAllExpectedUtilities, stateTrajectory);\n};\n///\n\n// Long route is better, agent takes long route\n\nvar mdp = makeBigHikeMDP({\n start: [1, 1],\n totalTime: 12,\n transitionNoiseProbability: 0.03\n});\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\nvar utility = makeUtilityFunction({\n East: 10,\n West: 7,\n Hill : -40,\n timeCost: -0.4\n});\nvar agent = makeMDPAgent({ utility, alpha: 100 }, mdp.world);\n\nvar trajectory = simulateMDP(mdp.startState, mdp.world, agent, 'states');\nvar actionExpectedUtilities = getExpectedUtilitiesMDP(trajectory, mdp.world, agent);\n\nviz.gridworld(mdp.world, { trajectory, actionExpectedUtilities });\n~~~~\n\nSo far, our agents all have complete knowledge about the state of the world. In the [next chapter](/chapters/3c-pomdp.html), we will explore partially observable worlds.\n\n
\n\n### Footnotes\n", "date_published": "2016-12-13T14:21:09Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3b-mdp-gridworld.md"} +{"id": "255b3dc4b0cda7173da19999092818bb", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5c-myopic.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Bounded Agents-- Myopia for rewards and updates\ndescription: Heuristic POMDP algorithms that assume a short horizon.\n\n---\n\n### Introduction\n\nThe previous chapter extended the MDP agent model to include exponential and hyperbolic discounting. The goal was to produce models of human behavior that capture a prominent *bias* (time inconsistency). As noted [earlier](/chapters/5-biases-intro) humans are not just biased but also *cognitively bounded*. This chapter extends the POMDP agent to capture heuristics for planning that are sub-optimal but fast and frugal.\n\n## Reward-myopic Planning: the basic idea\n\nOptimal planning is difficult because the best action now depends on the entire future. The optimal POMDP agent reasons backwards from the utility of its final state, judging earlier actions on whether they lead to good final states. With an infinite time horizon, an optimal agent must consider the expected utility of being in every possible state, including states only reachable after a very long duration.\n\nInstead of explicitly optimizing for the entire future when taking an action, an agent can \"myopically\" optimize for near-term rewards. With a time-horizon of 1000 timesteps, a myopic agent's first action might optimize for reward up to timestep $$t=5$$. Their second action would optimize for rewards up to $$t=6$$, and so on. Whereas the optimal agent computes a complete policy before the first timestep and then follows the policy, the \"reward-myopic agent\" computes a new myopic policy at each timestep, thus spreading out computation over the whole time-horizon and usually doing much less computation overall[^reward].\n\n[^reward]: If optimal planning is super-linear in the time-horizon, the Reward-myopic agent will do less computation overall. The Reward-myopic agent only considers states or belief-states that it actually enters or that it gets close to, while the Optimal approach considers every possible state or belief-state.\n\nThe Reward-myopic agent succeeds when continually optimizing for the short-term produces good long-term performance. Often this fails: climbing a moutain can get progressively more exhausting and painful until the summit is finally reached. One patch for this problem is to provide the agent with fake short-term rewards that are a proxy for long-term expected utility. This is closely related to \"reward shaping\" in Reinforcement Learning refp:chentanez2004intrinsically.\n\n### Reward-myopic Planning: implementation and examples\n\nThe **Reward-myopic agent** takes the action that would be optimal if the time-horizon were $$C_g$$ steps into the future. The \"cutoff\" or \"bound\", $$C_g > 0$$, will typically be much smaller than the time horizon for the decision problem.\n\nNotice the similarity between Reward-myopic agents and hyperbolic discounting agents. Both agents make plans based on short-term rewards. Both revise these plans at every timestep. And the Naive Hyperbolic Discounter and Reward-myopic agents both have implicit models of their future selves that are incorrect. 
A major difference is that Reward-myopic planning is easy to make computationally fast. The Reward-myopic agent can be implemented using the concept of *delay* described in the previous [chapter](/chapters/5b-time-inconsistency) and the implementation is left as an exercise:\n\n>**Exercise:** Formalize POMDP and MDP versions of the Reward-myopic agent by modifying the equations for the expected utility of state-action pairs or belief-state-action pairs. Implement the agent by modifying the code for the POMDP and MDP agents. Verify that the agent behaves sub-optimally (but more efficiently) on Gridworld and Bandit problems. \n\n------\n\nThe Reward-myopic agent succeeds if good short-term actions produce good long-term consequences. In Bandit problems, elaborate long-term plans are not needed to reach particular desirable future states. It turns out that a maximally Reward-myopic agent, who only cares about the immediate reward ($$C_g = 1$$), does well on Multi-arm Bandits provided they take noisy actions refp:kuleshov2014algorithms.\n\nThe next codeboxes show the performance of the Reward-myopic agent on Bandit problems. The first codebox is a two-arm Bandit problem, illustrated in Figure 1. We use a Reward-myopic agent with high softmax noise: $$C_g=1$$ and $$\\alpha=10$$. The Reward-myopic agent's average reward over 100 trials is close to the expected average reward given perfect knowledge of the arms.\n\n\"diagram\"\n\n>**Figure 1:** Bandit problem. The curly brackets contain possible probabilities according to the agent's prior (the bolded number is the true probability). For `arm0`, the agent has a uniform prior on the values $$\\{0, 0.25, 0.5, 0.75, 1\\}$$ for the probability the arm yields the reward 1.5.\n\n
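As a quick worked check of the benchmark quoted in the codebox below (an added aside, not part of the original chapter), the true prize distributions give `arm0` an expected reward of 0.375 per trial and `arm1` an expected reward of 0.5 per trial:\n\n~~~~\n// Expected reward per trial for each arm under the true prize distributions\nprint('arm0: ' + 1.5 * 0.25); // 0.375\nprint('arm1: ' + 1 * 0.5); // 0.5 (best arm; the benchmark printed below)\n~~~~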
\n\n\n~~~~\n///fold: getUtility\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 0 : prize;\n};\n///\n\n// Construct world: One bad arm, one good arm, 100 trials. \n\nvar trueArmToPrizeDist = {\n 0: Categorical({ vs: [1.5, 0], ps: [0.25, 0.75] }),\n 1: Categorical({ vs: [1, 0], ps: [0.5, 0.5] })\n};\nvar numberOfTrials = 100;\nvar bandit = makeBanditPOMDP({\n numberOfTrials,\n numberOfArms: 2,\n armToPrizeDist: trueArmToPrizeDist,\n numericalPrizes: true\n});\nvar world = bandit.world;\nvar startState = bandit.startState;\n\n\n// Construct reward-myopic agent\n\n// Arm0 is a mixture of [0,1.5] and Arm1 of [0,1]\nvar agentPrior = Infer({ model() {\n var prob15 = uniformDraw([0, 0.25, 0.5, 0.75, 1]);\n var prob1 = uniformDraw([0, 0.25, 0.5, 0.75, 1]);\n var armToPrizeDist = {\n 0: Categorical({ vs: [1.5, 0], ps: [prob15, 1 - prob15] }),\n 1: Categorical({ vs: [1, 0], ps: [prob1, 1 - prob1] })\n };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n}});\n\nvar rewardMyopicBound = 1;\nvar alpha = 10; // noise level\n\nvar params = {\n alpha,\n priorBelief: agentPrior,\n rewardMyopic: { bound: rewardMyopicBound },\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive'\n};\nvar agent = makeBanditAgent(params, bandit, 'beliefDelay');\nvar trajectory = simulatePOMDP(startState, world, agent, 'states');\nvar averageUtility = listMean(map(getUtility, trajectory));\nprint('Arm1 is best arm and has expected utility 0.5.\\n' + \n 'So ideal performance gives average score of: 0.5 \\n' + \n 'The average score over 100 trials for rewardMyopic agent: ' + \n averageUtility);\n~~~~\n\nThe next codebox is a three-arm Bandit problem show in Figure 2. Given the agent's prior, `arm0` has the highest prior expectation. So the agent will try that before exploring other arms. We show the agent's actions and their average score over 40 trials.\n\n\"diagram\"\n\n>**Figure 2:** Bandit problem where `arm0` has highest prior expectation for the agent but where `arm2` is actually the best arm. (This may take a while to run.)\n\n\n~~~~\n// agent is same as above: bound=1, alpha=10\n///fold:\nvar rewardMyopicBound = 1;\nvar alpha = 10; // noise level\n\nvar params = {\n alpha: 10,\n rewardMyopic: { bound: rewardMyopicBound },\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive'\n};\n\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 
0 : prize;\n};\n///\n\nvar trueArmToPrizeDist = {\n 0: Categorical({ vs: [3, 0], ps: [0.1, 0.9] }),\n 1: Categorical({ vs: [1, 0], ps: [0.5, 0.5] }),\n 2: Categorical({ vs: [2, 0], ps: [0.5, 0.5] })\n};\n\nvar numberOfTrials = 40;\n\nvar bandit = makeBanditPOMDP({\n numberOfArms: 3,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials,\n numericalPrizes: true\n});\nvar world = bandit.world;\nvar startState = bandit.startState;\n\nvar agentPrior = Infer({ model() {\n var prob3 = uniformDraw([0.1, 0.5, 0.9]);\n var prob1 = uniformDraw([0.1, 0.5, 0.9]);\n var prob2 = uniformDraw([0.1, 0.5, 0.9]);\n var armToPrizeDist = {\n 0: Categorical({ vs: [3, 0], ps: [prob3, 1 - prob3] }),\n 1: Categorical({ vs: [1, 0], ps: [prob1, 1 - prob1] }),\n 2: Categorical({ vs: [2, 0], ps: [prob2, 1 - prob2] })\n };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n}});\n\nvar params = extend(params, { priorBelief: agentPrior });\nvar agent = makeBanditAgent(params, bandit, 'beliefDelay');\nvar trajectory = simulatePOMDP(startState, world, agent, 'stateAction');\n\nprint(\"Agent's first 20 actions (during exploration phase): \\n\" + \n map(second,trajectory.slice(0,20)));\n\nvar averageUtility = listMean(map(getUtility, map(first,trajectory)));\nprint('Arm2 is best arm and has expected utility 1.\\n' + \n 'So ideal performance gives average score of: 1 \\n' + \n 'The average score over 40 trials for rewardMyopic agent: ' + \n averageUtility);\n~~~~\n\n\n-------\n\n## Myopic Updating: the basic idea\n\nThe Reward-myopic agent ignores rewards that occur after its myopic cutoff $$C_g$$. By contrast, an \"Update-myopic agent\", takes into account all future rewards but ignores the value of belief updates that occur after a cutoff. Concretely, the agent at time $$t=0$$ assumes they can only *explore* (i.e. update beliefs from observations) up to some cutoff point $$C_m$$ steps into the future, after which they just exploit without updating beliefs. In reality, the agent continues to update after time $$t=C_m$$. The Update-myopic agent, like the Naive hyperbolic discounter, has an incorrect model of their future self.\n\nMyopic updating is optimal for certain special cases of Bandits and has good performance on Bandits in general refp:frazier2008knowledge. It also provides a good fit to human performance in Bernoulli Bandits refp:zhang2013forgetful.\n\n### Myopic Updating: applications and limitations\n\nMyopic Updating has been studied in Machine Learning refp:gonzalez2015glasses and Operations Research refp:ryzhov2012knowledge. In most cases, the cutoff point $$C_m$$ after which the agent assumes himself to exploit is set to $$C_m=1$$. This results in a scalable, analytically tractable optimization problem: pull the arm that maximizes the expected value of future exploitation given you pulled that arm. This \"future exploitation\" means that you pick the arm that is best in expectation for the rest of time.\n\nWe've presented Bandit problems with a finite number of uncorrelated arms. Myopic Updating also works for generalized Bandit Problems: e.g. when rewards are correlated or continuous and in the setting of \"Bayesian Optimization\" where instead of a fixed number of arms the goal is to optimize a high-dimensional real-valued function. \n\nMyopic Updating does not work well for POMDPs in general. Suppose you are looking for a good restaurant in a foreign city. A good strategy is to walk to a busy street and then find the busiest restaurant. 
If reaching the busy street takes longer than the myopic cutoff $$C_m$$, then an Update-myopic agent won't see value in this plan. We present a concrete example of this problem below (\"Restaurant Search\"). This example highlights a way in which Bandit problems are an especially simple POMDP. In a Bandit problem, every aspect of the unknown latent state can be queried at any timestep (by pulling the appropriate arm). So even the Myopic Agent with $$C_m=1$$ is sensitive to the information value of every possible observation that the POMDP can yield[^selfmodel].\n\n[^selfmodel]: The Update-myopic agent incorrectly models his future self, by assuming he ceases to update after cutoff point $$C_m$$. This incorrect \"self-modeling\" is also a property of model-free RL agents. For example, a Q-learner's estimation of expected utilities for states ignores the fact that the Q-learner will randomly explore with some probability. SARSA, on the other hand, does take its random exploration into account when computing this estimate. But it doesn't model the way in which its future exploration behavior will make certain actions useful in the present (as in the example of finding a restaurant in a foreign city).\n\n### Myopic Updating: formal model\nMyopic Updating only makes sense in the context of an agent that is capable of learning from observations (i.e. in the POMDP rather than MDP setting). So our goal is to generalize our agent model for solving POMDPs to a Myopic Updating with $$C_m \\in [1,\\infty]$$.\n\n**Exercise:** Before reading on, modify the equations defining the [POMDP agent](/chapters/3c-pomdp) in order to generalize the agent model to include Myopic Updating. The optimal POMDP agent will be the special case when $$C_m=\\infty$$.\n\n------------\n\nTo extend the POMDP agent to the Update-myopic agent, we use the idea of *delays* from the previous chapter. These delays are not used to evaluate future rewards (as any discounting agent would use them). They are used to determine how future actions are simulated. If the future action occurs when delay $$d$$ exceeds cutoff point $$C_m$$, then the simulated future self does not do a belief update before taking the action. (This makes the Update-myopic agent analogous to the Naive agent: both simulate the future action by projecting the wrong delay value onto their future self). \n\nWe retain the notation from the definition of the POMDP agent and skip directly to the equation for the expected utility of a state, which we modify for the Update-myopic agent with cutoff point $$C_m \\in [1,\\infty]$$:\n\n$$\nEU_{b}[s,a,d] = U(s,a) + \\mathbb{E}_{s',o,a'}(EU_{b'}[s',a'_{b'},d+1])\n$$\n\nwhere:\n\n- $$s' \\sim T(s,a)$$ and $$o \\sim O(s',a)$$\n\n- $$a'_{b'}$$ is the softmax action the agent takes given new belief $$b'$$\n\n- the new belief state $$b'$$ is defined as:\n\n$$\nb'(s') \\propto I_{C_m}(s',a,o,d)\\sum_{s \\in S}{T(s,a,s')b(s)}\n$$\n\n\nwhere $$I_{C_m}(s',a,o,d) = O(s',a,o)$$ if $$d$$ < $$C_m$$ and $$I_{C_m}(s',a,o,d) = 1$$ otherwise.\n\nThe key change from POMDP agent is the definition of $$b'$$. The Update-myopic agent assumes his future self (after the cutoff $$C_m$$) updates only on his last action $$a$$ and not on observation $$o$$. For example, in a deterministic Gridworld the future self would keep track of his locations (as his location depends deterministically on his actions) but wouldn't update his belief about hidden states. 
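\n\nTo make the role of the indicator $$I_{C_m}$$ concrete, here is a minimal, self-contained sketch of a delay-gated belief update for a toy two-state POMDP (an added illustration: `T` and `O` below are toy stand-ins for the transition and observation distributions, not functions from the webppl-agents library, and the real agent threads this update through its planning recursion):\n\n~~~~\n// Toy two-state POMDP: the states are 0 and 1\nvar T = function(state, action) {\n  // the state persists with probability 0.8\n  return Categorical({ vs: [state, 1 - state], ps: [0.8, 0.2] });\n};\nvar O = function(state, action) {\n  // noisy observation of the new state\n  return Categorical({ vs: [state, 1 - state], ps: [0.9, 0.1] });\n};\n\nvar updateBelief = function(belief, action, observation, delay, cutoff) {\n  return Infer({ model() {\n    var state = sample(belief);\n    var nextState = sample(T(state, action));\n    // The indicator I_{C_m}: condition on the observation only while\n    // delay < cutoff; after the cutoff the observation is ignored.\n    if (delay < cutoff) {\n      observe(O(nextState, action), observation);\n    }\n    return nextState;\n  }});\n};\n\nvar prior = Categorical({ vs: [0, 1], ps: [0.5, 0.5] });\nprint('Belief update before the cutoff (observation is used):');\nviz(updateBelief(prior, 'stay', 1, 0, 1));\nprint('Belief update after the cutoff (observation is ignored):');\nviz(updateBelief(prior, 'stay', 1, 1, 1));\n~~~~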
\n\nThe implementation of the Update-myopic agent in WebPPL is a direct translation of the definition provided above.\n\n>**Exercise:** Modify the code for the POMDP agent to represent an Update-myopic agent. See this codebox or this library [script](https://github.com/agentmodels/webppl-agents/blob/master/src/agents/makePOMDPAgent.wppl).\n\n\n### Myopic Updating for Bandits\n\nThe Update-myopic agent performs well on a variety of Bandit problems. The following codeboxes compare the Update-myopic agent to the Optimal POMDP agent on binary, two-arm Bandits (see the specific example in Figure 3). \n\n\"diagram\"\n\n>**Figure 3**: Bandit problem. The agent's prior includes two hypotheses for the rewards of each arm, with the prior probability of each labeled to the left and right of the boxes. The priors on each arm are independent and so there are four hypotheses overall. Boxes with actual rewards have a bold border. \n
\n\n\n~~~~\n// Helper functions for Bandits:\n///fold:\n\n// HELPERS FOR CONSTRUCTING AGENT\n\nvar baseParams = {\n alpha: 1000,\n noDelays: false,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: 1 },\n discount: 0\n};\n\nvar getParams = function(agentPrior) {\n var params = extend(baseParams, { priorBelief: agentPrior });\n return extend(params);\n};\n\nvar getAgentPrior = function(numberOfTrials, priorArm0, priorArm1) {\n return Infer({ model() {\n var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }});\n};\n\n// HELPERS FOR CONSTRUCTING WORLD\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// Construct Bandit POMDP\nvar getBandit = function(numberOfTrials){\n return makeBanditPOMDP({\n numberOfArms: 2,\n\tarmToPrizeDist: { 0: probably0Dist, 1: probably1Dist },\n\tnumberOfTrials: numberOfTrials,\n\tnumericalPrizes: true\n });\n};\n\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 0 : prize;\n};\n\n// Get score for a single episode of bandits\nvar score = function(out) {\n return listMean(map(getUtility, out));\n};\n///\n\n// Agent prior on arm rewards\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// True latentState:\n// arm0 is probably0Dist, arm1 is probably1Dist (and so is better)\n\n// Agent prior on arms: arm1 (better arm) has higher EV\nvar priorArm0 = function() {\n return categorical([0.5, 0.5], [probably1Dist, probably0Dist]);\n};\nvar priorArm1 = function(){\n return categorical([0.6, 0.4], [probably1Dist, probably0Dist]);\n};\n\n\nvar runAgent = function(numberOfTrials, optimal) {\n // Construct world and agents\n var bandit = getBandit(numberOfTrials);\n var world = bandit.world;\n var startState = bandit.startState;\n var prior = getAgentPrior(numberOfTrials, priorArm0, priorArm1);\n var agentParams = getParams(prior);\n\n var agent = makeBanditAgent(agentParams, bandit, \n optimal ? 'belief' : 'beliefDelay');\n\n return score(simulatePOMDP(startState, world, agent, 'states')); \n};\n\n// Run each agent 10 times and take average of scores\nvar means = map(function(optimal) {\n var scores = repeat(10, function(){ return runAgent(5,optimal); });\n var st = optimal ? 'Optimal: ' : 'Update-Myopic: ';\n print(st + 'Mean scores on 10 repeats of 5-trial bandits\\n' + scores);\n return listMean(scores);\n }, [true, false]);\n \nprint('Overall means for [Optimal,Update-Myopic]: ' + means);\n~~~~\n\n>**Exercise**: The above codebox shows that performance for the two agents is similar. Try varying the priors and the `armToPrizeDist` and verify that performance remains similar. How would you provide stronger empirical evidence that the two algorithms are equivalent for this problem?\n\nThe following codebox computes the runtime for Update-myopic and Optimal agents as a function of the number of Bandit trials. (This takes a while to run.) We see that the Update-myopic agent has better scaling even on a small number of trials. 
Note that neither agent has been optimized for Bandit problems.\n\n>**Exercise:** Think of ways to optimize the Update-myopic agent with $$C_m=1$$ for binary Bandit problems.\n\n\n~~~~\n///fold: Similar helper functions as above codebox\n\n// HELPERS FOR CONSTRUCTING AGENT\n\nvar baseParams = {\n alpha: 1000,\n noDelays: false,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: 1 },\n discount: 0\n};\n\nvar getParams = function(agentPrior){\n var params = extend(baseParams, { priorBelief: agentPrior });\n return extend(params);\n};\n\nvar getAgentPrior = function(numberOfTrials, priorArm0, priorArm1){\n return Infer({ model() {\n var armToPrizeDist = { 0: priorArm0(), 1: priorArm1() };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }});\n};\n\n// HELPERS FOR CONSTRUCTING WORLD\n\n// Possible distributions for arms\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\n\n\n// Construct Bandit POMDP\nvar getBandit = function(numberOfTrials) {\n return makeBanditPOMDP({\n numberOfArms: 2,\n armToPrizeDist: { 0: probably0Dist, 1: probably1Dist },\n numberOfTrials,\n numericalPrizes: true\n });\n};\n\nvar getUtility = function(state, action) {\n var prize = state.manifestState.loc;\n return prize === 'start' ? 0 : prize;\n};\n\n// Get score for a single episode of bandits\nvar score = function(out) {\n return listMean(map(getUtility, out));\n};\n\n\n// Agent prior on arm rewards\n\n// Possible distributions for arms\nvar probably0Dist = Categorical({ vs: [0, 1], ps: [0.6, 0.4] });\nvar probably1Dist = Categorical({ vs: [0, 1], ps: [0.4, 0.6] });\n\n// True latentState:\n// arm0 is probably0Dist, arm1 is probably1Dist (and so is better)\n\n// Agent prior on arms: arm1 (better arm) has higher EV\nvar priorArm0 = function() {\n return categorical([0.5, 0.5], [probably1Dist, probably0Dist]);\n};\nvar priorArm1 = function(){\n return categorical([0.6, 0.4], [probably1Dist, probably0Dist]);\n};\n\n\nvar runAgents = function(numberOfTrials) {\n // Construct world and agents\n var bandit = getBandit(numberOfTrials);\n var world = bandit.world;\n var startState = bandit.startState;\n\n var agentPrior = getAgentPrior(numberOfTrials, priorArm0, priorArm1);\n var agentParams = getParams(agentPrior);\n\n var optimalAgent = makeBanditAgent(agentParams, bandit, 'belief');\n var myopicAgent = makeBanditAgent(agentParams, bandit, 'beliefDelay');\n\n // Get average score across totalTime for both agents\n var runOptimal = function() {\n return score(simulatePOMDP(startState, world, optimalAgent, 'states')); \n };\n\n var runMyopic = function() {\n return score(simulatePOMDP(startState, world, myopicAgent, 'states'));\n };\n\n var optimalDatum = {\n numberOfTrials,\n runtime: timeit(runOptimal).runtimeInMilliseconds*0.001,\n agentType: 'optimal'\n };\n\n var myopicDatum = {\n numberOfTrials,\n runtime: timeit(runMyopic).runtimeInMilliseconds*0.001,\n agentType: 'myopic'\n };\n\n return [optimalDatum, myopicDatum];\n};\n///\n\n// Compute runtime as # Bandit trials increases\nvar totalTimeValues = _.range(9).slice(2);\n\nprint('Runtime in s for [Optimal, Myopic] agents:');\n\nvar runtimeValues = _.flatten(map(runAgents, totalTimeValues));\n\nviz.line(runtimeValues, { groupBy: 'agentType' });\n~~~~\n\n\n### Myopic Updating for the Restaurant Search Problem\n\nThe Update-myopic agent assumes they will not update beliefs after the bound $$C_m$$ and so does not make plans that depend on learning something after the 
bound.\n\nWe illustrate this limitation with a new problem:\n\n>**Restaurant Search:** You are looking for a good restaurant in a foreign city without the aid of a smartphone. You know the quality of some restaurants already and you are uncertain about the others. If you walk right up to a restaurant, you can tell its quality by seeing how busy it is inside. You care about the quality of the restaurant and about minimizing the time spent walking.\n\nHow does the Update-myopic agent fail? Suppose that a few blocks from agent is a great restaurant next to a bad restaurant and the agent doesn't know which is which. If the agent checked inside each restaurant, they would pick out the great one. But if they are Update-myopic, they assume they'd be unable to tell between them.\n\nThe codebox below depicts a toy version of this problem in Gridworld. The restaurants vary in quality between 0 and 5. The agent knows the quality of Restaurant A and is unsure about the other restaurants. One of Restaurants D and E is great and the other is bad. The Optimal POMDP agent will go right up to each restaurant and find out which is great. The Update-myopic agent, with low enough bound $$C_m$$, will either go to the known good restaurant A or investigate one of the restaurants that is closer than D and E.\n\n\n\n\n~~~~\nvar pomdp = makeRestaurantSearchPOMDP();\nvar world = pomdp.world;\nvar makeUtilityFunction = pomdp.makeUtilityFunction;\nvar startState = pomdp.startState;\n\nvar agentPrior = Infer({ model() {\n var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite)\n var latentState = {\n A: 3,\n B: uniformDraw(_.range(6)),\n C: uniformDraw(_.range(6)),\n D: rewardD,\n E: 5 - rewardD\n };\n return {\n manifestState: pomdp.startState.manifestState, \n latentState\n };\n}});\n\n// Construct optimal agent\nvar params = {\n utility: makeUtilityFunction(-0.01), // timeCost is -.01\n alpha: 1000,\n priorBelief: agentPrior\n};\n\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\nprint('Quality of restaurants: \\n' + \n JSON.stringify(pomdp.startState.latentState));\nviz.gridworld(pomdp.mdp, { trajectory: manifestStates });\n~~~~\n\n>**Exercise:** The codebox below shows the behavior the Update-myopic agent. Try different values for the `myopicBound` parameter. For values in $$[1,2,3]$$, explain the behavior of the Update-myopic agent. 
\n\n\n~~~~\n///fold: Construct world and agent prior as above\nvar pomdp = makeRestaurantSearchPOMDP();\nvar world = pomdp.world;\nvar makeUtilityFunction = pomdp.makeUtilityFunction;\n\nvar agentPrior = Infer({ model() {\n var rewardD = uniformDraw([0,5]); // D is bad or great (E is opposite)\n var latentState = {\n A: 3,\n B: uniformDraw(_.range(6)),\n C: uniformDraw(_.range(6)),\n D: rewardD,\n E: 5 - rewardD\n };\n return {\n manifestState: pomdp.startState.manifestState, \n latentState\n };\n}});\n///\n\nvar myopicBound = 1;\n\nvar params = {\n utility: makeUtilityFunction(-0.01),\n alpha: 1000,\n priorBelief: agentPrior,\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive',\n updateMyopic: { bound: myopicBound }\n};\n\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(pomdp.startState, world, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nprint('Rewards for each restaurant: ' + \n JSON.stringify(pomdp.startState.latentState));\nprint('Myopic bound: ' + myopicBound);\nviz.gridworld(pomdp.mdp, { trajectory: manifestStates });\n~~~~\n\nNext chapter: [Joint inference of biases and preferences I](/chapters/5d-joint-inference.html)\n\n
\n\n### Footnotes\n", "date_published": "2017-03-19T18:54:16Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5c-myopic.md"} +{"id": "f9925fa4aa8c50448d99bfdb6889ffa9", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3a-mdp.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Sequential decision problems: MDPs\"\ndescription: Markov Decision Processes, efficient planning with dynamic programming.\n---\n\n## Introduction\n\nThe [previous chapter](/chapters/3-agents-as-programs.html) introduced agent models for solving simple, one-shot decision problems. The next few sections introduce *sequential* problems, where an agent's choice of action *now* depends on the actions they will choose in the future. As in game theory, the decision maker must coordinate with another rational agent. But in sequential decision problems, that rational agent is their future self.\n\nAs a simple illustration of a sequential decision problem, suppose that an agent, Bob, is looking for a place to eat. Bob gets out of work in a particular location (indicated below by the blue circle). He knows the streets and the restaurants nearby. His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. Here is a visualization of the street layout. The labels refer to different types of restaurants: a chain selling Donuts, a Vegetarian Salad Bar and a Noodle Shop. \n\n~~~~\nvar ___ = ' '; \nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({ grid, start: [3, 1] });\n\nviz.gridworld(mdp.world, { trajectory : [mdp.startState] });\n~~~~\n\n\n\n## Markov Decision Processes: Definition\n\nWe represent Bob's decision problem as a Markov Decision Process (MDP) and, more specifically, as a discrete \"Gridworld\" environment. An MDP is a tuple $$ \\left\\langle S,A(s),T(s,a),U(s,a) \\right\\rangle$$, including the *states*, the *actions* in each state, the *transition function* that maps state-action pairs to successor states, and the *utility* or *reward* function. In our example, the states $$S$$ are Bob's locations on the grid. At each state, Bob selects an action $$a \\in \\{ \\text{up}, \\text{down}, \\text{left}, \\text{right} \\} $$, which moves Bob around the grid (according to transition function $$T$$). In this example we assume that Bob's actions, as well as the transitions and utilities, are all deterministic. However, our approach generalizes to noisy actions, stochastic transitions and stochastic utilities.\n\nAs with the one-shot decisions of the previous chapter, the agent in an MDP will choose actions that *maximize expected utility*. This depends on the total utility of the *sequence* of states that the agent visits. Formally, let $$EU_{s}[a]$$ be the expected (total) utility of action $$a$$ in state $$s$$. 
The agent's choice is a softmax function of this expected utility:\n\n$$\nC(a; s) \\propto e^{\\alpha EU_{s}[a]}\n$$\n\nThe expected utility depends on both immediate utility and, recursively, on future expected utility:\n\n**Expected Utility Recursion**:\n\n$$\nEU_{s}[a] = U(s, a) + \\mathbb{E}_{s', a'}(EU_{s'}[a'])\n$$\n\n
\nwith the next state $$s' \\sim T(s,a)$$ and $$a' \\sim C(s')$$. The decision problem ends either when a *terminal* state is reached or when the time-horizon is reached. (In the next few chapters the time-horizon will always be finite). \n\nThe intuition to keep in mind for solving MDPs is that the expected utility propagates backwards from future states to the current action. If a high utility state can be reached by a sequence of actions starting from action $$a$$, then action $$a$$ will have high expected utility -- *provided* that the sequence of actions is taken with high probability and there are no low utility steps along the way.\n\n\n## Markov Decision Processes: Implementation\n\nThe recursive decision rule for MDP agents can be directly translated into WebPPL. The `act` function takes the agent's state as input, evaluates the expectation of actions in that state, and returns a softmax distribution over actions. The expected utility of actions is computed by a separate function `expectedUtility`. Since an action's expected utility depends on future actions, `expectedUtility` calls `act` in a mutual recursion, bottoming out when a terminal state is reached or when time runs out. \n\nWe illustrate this \"MDP agent\" on a simple MDP:\n\n### Integer Line MDP\n- **States**: Points on the integer line (e.g -1, 0, 1, 2).\n\n- **Actions/transitions**: Actions \"left\", \"right\" and \"stay\" move the agent deterministically along the line in either direction.\n\n- **Utility**: The utility is $$1$$ for the state corresponding to the integer $$3$$ and is $$0$$ otherwise. \n\n\nHere is a WebPPL agent that starts at the origin (`state === 0`) and that takes a first step (to the right):\n\n~~~~\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n \n var act = function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n };\n\n var expectedUtility = function(state, action, timeLeft){\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n };\n\n return { act };\n}\n\nvar act = makeAgent().act;\n\nvar startState = 0;\nvar totalTime = 4;\n\n// Agent's move '-1' means 'left', '0' means 'stay', '1' means 'right'\nprint(\"Agent's action: \" + sample(act(startState, totalTime)));\n~~~~\n\nThis code computes the agent's initial action, given that the agent will get to take four actions in total. 
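\n\nThe `factor(100 * eu)` statement is what implements the softmax choice rule $$C(a; s) \\propto e^{\\alpha EU_{s}[a]}$$, here with $$\\alpha = 100$$. As a quick, self-contained sanity check (an added sketch, not part of the original codebox), we can verify that `factor` reproduces the analytic softmax probabilities for a two-action choice:\n\n~~~~\n// Two actions with expected utilities 1 and 0, and alpha = 2.\n// The softmax rule gives P('a') = e^2 / (e^2 + e^0), approximately 0.88.\nvar alpha = 2;\nvar eu = function(action) {\n  return (action === 'a') ? 1 : 0;\n};\n\nvar choiceDist = Infer({ model() {\n  var action = uniformDraw(['a', 'b']);\n  factor(alpha * eu(action)); // shift log-probability by alpha * EU\n  return action;\n}});\n\nprint('Analytic softmax probability of action a: ' +\n      Math.exp(alpha) / (Math.exp(alpha) + 1));\nviz(choiceDist);\n~~~~\n\n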
To simulate the agent's entire trajectory, we add a third function `simulate`, which updates and stores the world state in response to the agent's actions: \n\n~~~~\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n var act = function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n };\n\n var expectedUtility = function(state, action, timeLeft) {\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0) {\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n };\n\n return { act };\n}\n\n\nvar act = makeAgent().act;\n\nvar simulate = function(state, timeLeft){\n if (timeLeft === 0){\n return [];\n } else {\n var action = sample(act(state, timeLeft));\n var nextState = transition(state, action); \n return [state].concat(simulate(nextState, timeLeft - 1))\n }\n};\n\nvar startState = 0;\nvar totalTime = 4;\nprint(\"Agent's trajectory: \" + simulate(startState, totalTime));\n~~~~\n\n>**Exercise**: Change the world such that it is a loop, i.e. moving right from state `3` moves to state `0`, and moving left from state `0` moves to state `3`. How does this change the agent's sequence of actions?\n\n>**Exercise**: Change the agent's action space such that the agent can also move two steps at a time. How does this change the agent's sequence of actions?\n\n>**Exercise**: Change the agent's utility function such that the agent moves as far as possible to the right, given its available total time.\n\nThe `expectedUtility` and `simulate` functions are similar. The `expectedUtilty` function includes the agent's own (*subjective*) simulation of the future distribution on states. In the case of an MDP and optimal agent, the agent's simulation is identical to the world simulator. In later chapters, we describe agents whose subjective simulations differ from the world simulator. These agents either have inaccurate models of their own future choices or innacurate models of the world.\n\nWe already mentioned the mutual recursion between `act` and `expectedUtility`. What does this recursion look like if we unroll it? In this example we get a tree that expands until `timeLeft` reaches zero. The root is the starting state (`startState === 0`) and this branches into three successor states (`-1`, `0`, `1`). 
This leads to an exponential blow-up in the runtime of a single action (which depends on how long into the future the agent plans):\n\n~~~~\n///fold: transition, utility, makeAgent, act, and simulate as above\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n\n var act = function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n };\n\n var expectedUtility = function(state, action, timeLeft) {\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0) {\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n };\n\n return { act };\n}\n\n\nvar act = makeAgent().act;\n\nvar simulate = function(state, timeLeft){\n if (timeLeft === 0){\n return [];\n } else {\n var action = sample(act(state, timeLeft));\n var nextState = transition(state, action); \n return [state].concat(simulate(nextState, timeLeft - 1))\n }\n};\n///\n\nvar startState = 0;\n\nvar getRuntime = function(totalTime) {\n return timeit(function() {\n return act(startState, totalTime);\n }).runtimeInMilliseconds.toPrecision(4);\n};\n\nvar numSteps = [3, 4, 5, 6, 7];\nvar runtimes = map(getRuntime, numSteps);\n\nprint('Runtime in ms for for a given number of steps: \\n')\nprint(_.zipObject(numSteps, runtimes));\nviz.bar(numSteps, runtimes);\n~~~~\n\nMost of this computation is unnecessary. If the agent starts at `state === 0`, there are three ways the agent could be at `state === 0` again after two steps: either the agent stays put twice or the agent goes one step away and then returns. The code above computes `agent(0, totalTime-2)` three times, while it only needs to be computed once. This problem can be resolved by *memoization*, which stores the results of a function call for re-use when the function is called again on the same input. This use of memoization results in a runtime that is polynomial in the number of states and the total time. 
In WebPPL, we use the higher-order function `dp.cache` to memoize the `act` and `expectedUtility` functions:\n\n~~~~\n///fold: transition, utility and makeAgent functions as above, but...\n// ...with `act` and `expectedUtility` wrapped in `dp.cache`\nvar transition = function(state, action) {\n return state + action;\n};\n\nvar utility = function(state) {\n if (state === 3) {\n return 1;\n } else {\n return 0;\n }\n};\n\nvar makeAgent = function() { \n var act = dp.cache(function(state, timeLeft) {\n return Infer({ model() {\n var action = uniformDraw([-1, 0, 1]);\n var eu = expectedUtility(state, action, timeLeft);\n factor(100 * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(function(state, action, timeLeft) {\n var u = utility(state, action);\n var newTimeLeft = timeLeft - 1;\n if (newTimeLeft === 0) {\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var nextAction = sample(act(nextState, newTimeLeft));\n return expectedUtility(nextState, nextAction, newTimeLeft);\n }}));\n }\n });\n\n return { act };\n}\n\n\nvar act = makeAgent().act;\n\nvar simulate = function(state, timeLeft){\n if (timeLeft === 0){\n return [];\n } else {\n var action = sample(act(state, timeLeft));\n var nextState = transition(state, action); \n return [state].concat(simulate(nextState, timeLeft - 1))\n }\n};\n///\n\nvar startState = 0;\n\nvar getRuntime = function(totalTime) {\n return timeit(function() {\n return act(startState, totalTime);\n }).runtimeInMilliseconds.toPrecision(4);\n};\n\nvar numSteps = [3, 4, 5, 6, 7];\nvar runtimes = map(getRuntime, numSteps);\n\nprint('WITH MEMOIZATION \\n');\nprint('Runtime in ms for for a given number of steps: \\n')\nprint(_.zipObject(numSteps, runtimes));\nviz.bar(numSteps, runtimes)\n~~~~\n\n>**Exercise**: Could we also memoize `simulate`? Why or why not?\n\n\n\n## Choosing restaurants in Gridworld\n\nThe agent model above that includes memoization allows us to solve Bob's \"Restaurant Choice\" problem efficiently. \n\nWe extend the agent model above by adding a `terminateAfterAction` to certain states to halt simulations when the agent reaches these states. For the Restaurant Choice problem, the restaurants are assumed to be terminal states. After computing the agent's trajectory, we use the [webppl-agents library](https://github.com/agentmodels/webppl-agents) to animate it. \n\n\n\n~~~~\n///fold: Restaurant constants, tableToUtilityFunction\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar tableToUtilityFunction = function(table, feature) {\n return function(state, action) {\n var stateFeatureName = feature(state).name;\n return stateFeatureName ? 
table[stateFeatureName] : table.timeCost;\n };\n};\n///\n\n// Construct world\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n start: [3, 1],\n totalTime: 9\n});\n\nvar world = mdp.world;\nvar transition = world.transition;\nvar stateToActions = world.stateToActions;\n\n\n// Construct utility function\n\nvar utilityTable = {\n 'Donut S': 1, \n 'Donut N': 1, \n 'Veg': 3,\n 'Noodle': 2, \n 'timeCost': -0.1\n};\n\nvar utility = tableToUtilityFunction(utilityTable, world.feature);\n\n\n// Construct agent\n\nvar makeAgent = function() {\n \n var act = dp.cache(function(state) {\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action);\n factor(100 * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(function(state, action){\n var u = utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action);\n var nextAction = sample(act(nextState));\n return expectedUtility(nextState, nextAction);\n }}));\n }\n });\n \n return { act };\n};\n\nvar act = makeAgent().act;\n\n\n// Generate and draw a trajectory\n\nvar simulate = function(state) {\n var action = sample(act(state));\n var nextState = transition(state, action);\n var out = [state, action];\n if (state.terminateAfterAction) {\n return [out];\n } else {\n return [out].concat(simulate(nextState));\n }\n};\n\nvar trajectory = simulate(mdp.startState);\n\nviz.gridworld(world, { trajectory: map(first, trajectory) });\n~~~~\n\n>**Exercise**: Change the utility table such that the agent goes to `Donut S`. What ways are there to accomplish this outcome?\n\n### Noisy agents, stochastic environments\n\nThis section looked at two MDPs that were essentially deterministic. Part of the difficulty of solving MDPs is that actions, rewards and transitions can be stochastic. The [next chapter](/chapters/3b-mdp-gridworld.html) explores both noisy agents and stochastic gridworld environments.\n", "date_published": "2017-06-16T23:10:13Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3a-mdp.md"} +{"id": "44d01d6d6c9feba874f40b861e6cc502", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6a-inference-dp.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Dynamic programming\ndescription: Exact enumeration of generative model computations + caching.\nstatus: stub\nis_section: false\nhidden: true\n---\n", "date_published": "2016-03-09T21:34:03Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6a-inference-dp.md"} +{"id": "d5e89ed22367f9ddf80e6ecc23bc7b71", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3d-reinforcement-learning.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Reinforcement Learning to Learn MDPs\"\ndescription: RL for Bandits, Thomson Sampling for learning MDPs. 
\n---\n\n## Introduction\n\nPrevious chapters assumed that the agent already knew the structure of the environment. In MDPs, the agent knows everything about the environment and just needs to compute a good plan. In POMDPs, the agent is ignorant of some hidden state but knows how the environment works *given* this hidden state. Reinforcement Learning (RL) methods apply when the agent doesn't know the structure of the environment. For example, suppose the agent faces an unknown MDP. Provided the agent observes the reward/utility of states, RL methods will eventually converge on the optimal policy for the MDP. That is, RL eventually learns the same policy that an agent with full knowledge of the MDP would compute.\n\nRL has been one of the key tools behind recent major breakthroughs in AI, such as defeating humans at Go refp:silver2016mastering and learning to play videogames from only pixel input refp:mnih2015human. This chapter applies RL to learning discrete MDPs. It's possible to generalize RL techniques to continuous state and action spaces and also to learning POMDPs refp:jaderberg2016reinforcement but that's beyond the scope of this tutorial. \n\n\n## Reinforcement Learning for Bandits\nThe previous chapter introduced the Multi-Arm Bandit problem. We computed the Bayesian optimal solution to Bandit problems by treating them as POMDPs. Here we apply Reinforcement Learning to Bandits. RL agents won't perform optimally but they often rapidly converge to the best arm and RL techniques are highly scalable and simple to implement. (In Bandits the agent already knows the structure of the MDP. So Bandits does not showcase the ability of RL to learn a good policy in a complex unknown MDP. We discuss more general RL techniques below). \n\nOutside of this chapter, we use term \"utility\" (e.g. in the definition of an MDP) rather than \"reward\". This chapter follows the convention in Reinforcement Learning of using \"reward\".\n\n\n### Softmax Greedy Agent\nThis section introduces an RL agent specialized to Bandit: a \"greedy\" agent with softmax action noise. The Softmax Greedy agent updates beliefs about the hidden state (the expected rewards for the arms) using Bayesian updates. Yet instead of making sequential plans that balance exploration (e.g. making informative observations) with exploitation (gaining high reward), the Greedy agent takes the action with highest *immediate* expected return[^greedy] (up to softmax noise).\n\nWe measure the agent's performance on Bernoulli-distributed Bandits by computing the *cumulative regret* over time. The regret for an action is the difference in expected returns between the action and the objective best action[^regret]. In the codebox below, the arms have parameter values (\"coin-weights\") of $$[0.5,0.6]$$ and there are 500 Bandit trials. \n\n[^greedy]: The standard Epsilon/Softmax Greedy agent from the Bandit literature maintains point estimates for the expected rewards of the arms. In WebPPL it's natural to use distributions instead. In a later chapter, we will implement a more general Greedy/Myopic agent by extending the POMDP agent.\n\n[^regret]:The \"regret\" is a standard Frequentist metric for performance. Bayesian metrics, which take into account the agent's priors, are beyond the scope of this chapter. \n\n~~~~\n///fold:\nvar cumsum = function (xs) {\n var acf = function (n, acc) { return acc.concat( (acc.length > 0 ? 
acc[acc.length-1] : 0) + n); }\n return reduce(acf, [], xs.reverse());\n }\n \n///\n \n\n// Define Bandit problem\n\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Given a state (a coin-weight p for each arm), sample reward\nvar observeStateAction = function(state, action){\n var armToCoinWeight = state;\n return sample(Bernoulli({p : armToCoinWeight[action]})) \n};\n\n\n// Greedy agent for Bandits\nvar makeGreedyBanditAgent = function(params) {\n var priorBelief = params.priorBelief;\n\n // Update belief about coin-weights from observed reward\n var updateBelief = function(belief, observation, action){\n return Infer({ model() {\n var armToCoinWeight = sample(belief);\n condition( observation === observeStateAction(armToCoinWeight, action))\n return armToCoinWeight;\n }});\n };\n \n // Evaluate arms by expected coin-weight\n var expectedReward = function(belief, action){\n return expectation(Infer( { model() {\n var armToCoinWeight = sample(belief);\n return armToCoinWeight[action];\n }}))\n }\n\n // Choose by softmax over expected reward\n var act = dp.cache(\n function(belief) {\n return Infer({ model() {\n var action = uniformDraw(actions);\n factor(params.alpha * expectedReward(belief, action))\n return action;\n }});\n });\n\n return { params, act, updateBelief };\n};\n\n// Run Bandit problem\nvar simulate = function(armToCoinWeight, totalTime, agent) {\n var act = agent.act;\n var updateBelief = agent.updateBelief;\n var priorBelief = agent.params.priorBelief;\n\n var sampleSequence = function(timeLeft, priorBelief, action) {\n var observation = (action !== 'noAction') &&\n observeStateAction(armToCoinWeight, action);\n var belief = ((action === 'noAction') ? priorBelief :\n updateBelief(priorBelief, observation, action));\n var action = sample(act(belief));\n\n return (timeLeft === 0) ? [action] : \n [action].concat(sampleSequence(timeLeft-1, belief, action));\n };\n return sampleSequence(totalTime, priorBelief, 'noAction');\n};\n\n\n// Agent params\nvar alpha = 30\nvar priorBelief = Infer({ model () {\n var p0 = uniformDraw([.1, .3, .5, .6, .7, .9]);\n var p1 = uniformDraw([.1, .3, .5, .6, .7, .9]);\n return { 0:p0, 1:p1};\n} });\n\n// Bandit params\nvar numberTrials = 500;\nvar armToCoinWeight = { 0: 0.5, 1: 0.6 };\n\nvar agent = makeGreedyBanditAgent({alpha, priorBelief});\nvar trajectory = simulate(armToCoinWeight, numberTrials, agent);\n\n// Compare to random agent\nvar randomTrajectory = repeat(\n numberTrials, \n function(){return uniformDraw([0,1]);}\n);\n\n// Compute agent performance\nvar regret = function(arm) { \n var bestCoinWeight = _.max(_.values(armToCoinWeight))\n return bestCoinWeight - armToCoinWeight[arm];\n};\n \nvar trialToRegret = map(regret,trajectory);\nvar trialToRegretRandom = map(regret, randomTrajectory)\nvar ys = cumsum( trialToRegret) \n\nprint('Number of trials: ' + numberTrials);\nprint('Total regret: [GreedyAgent, RandomAgent] ' + \n sum(trialToRegret) + ' ' + sum(trialToRegretRandom))\nprint('Arms pulled: ' + trajectory);\n\nviz.line(_.range(ys.length), ys, {xLabel:'Time', yLabel:'Cumulative regret'});\n~~~~\n\nHow well does the Greedy agent do? It does best when the difference between arms is large but does well even when the arms are close. Greedy agents perform well empirically on a wide range of Bandit problems refp:kuleshov2014algorithms and if their noise decays over time they can achieve asymptotic optimality. 
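\n\nAs a rough sense of scale for these regret numbers (a back-of-the-envelope sketch using the arm parameters from the codebox above):\n\n~~~~\n// Back-of-the-envelope scale for the cumulative-regret plot above\n// (coin-weights 0.5 and 0.6, 500 trials).\n// Each pull of the worse arm costs 0.6 - 0.5 = 0.1 in expected reward.\nvar perPullRegret = 0.1;\nvar numberTrials = 500;\n// A uniformly random agent pulls the worse arm about half the time:\nprint('Random agent, expected total regret: ' + perPullRegret * 0.5 * numberTrials);\n// An agent that locks onto the better arm after 20 exploratory pulls of the worse arm:\nprint('Quick learner, total regret: ' + perPullRegret * 20);\n~~~~\n\n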
In contrast to the optimal POMDP agent from the previous chapter, the Greedy Agent scales well in both number of arms and trials.\n\n\n>**Exercises**:\n\n> 1. Modify the code above so that it's easy to repeatedly run the same agent on the same Bandit problem. Compute the mean and standard deviation of the agent's total regret averaged over 20 episodes on the Bandit problem above. Use WebPPL's library [functions](http://docs.webppl.org/en/master/functions/arrays.html). \n> 2. Set the softmax noise to be low. How well does the Greedy Softmax agent do? Explain why. Keeping the noise low, modify the agent's priors to be overly \"optimistic\" about the expected reward of each arm (without changing the support of the prior distribution). How does this optimism change the agent's performance? Explain why. (An optimistic prior assigns a high expected reward to each arm. This idea is known as \"optimism in the face of uncertainty\" in the RL literature.)\n> 3. Modify the agent so that the softmax noise is low and the agent has a \"bad\" prior (i.e. one that assigns a low probability to the truth) that is not optimistic. Will the agent always learn the optimal policy (eventually?) If so, after how many trials is the agent very likely to have learned the optimal policy? (Try to answer this question without doing experiments that take a long time to run.)\n\n\n### Posterior Sampling\nPosterior sampling (or \"Thompson sampling\") is the basis for another algorithm for Bandits. This algorithm generalizes to arbitrary discrete MDPs, as we show below. The Posterior-sampling agent updates beliefs using standard Bayesian updates. Before choosing an arm, it draws a sample from its posterior on the arm parameters and then chooses greedily given the sample. In Bandits, this is similar to Softmax Greedy but without the softmax parameter $$\\alpha$$.\n\n>**Exercise**:\n> Implement Posterior Sampling for Bandits by modifying the code above. (You only need to modify the `act` function.) Compare the performance of Posterior Sampling to Softmax Greedy (using the value for $$\\alpha$$ in the codebox above). You should vary the `armToCoinWeight` parameter and the number of arms. Evaluate each agent by computing the mean and standard deviation of rewards averaged over many trials. Which agent is better overall and why?\n\n\n\n\n\n-----------\n\n## RL algorithms for MDPs\nThe RL algorithms above are specialized to Bandits and so they aren't able to learn an arbitrary MDP. We now consider algorithms that can learn any discrete MDP. There are two kinds of RL algorithm:\n\n1. *Model-based* algorithms learn an explicit representation of the MDP's transition and reward functions. These representations are used to compute a good policy. \n\n2. *Model-free* algorithms do not explicitly represent or learn the transition and reward functions. Instead they explicitly represent either a value function (i.e. an estimate of the $$Q^*$$-function) or a policy.\n\nThe best known RL algorithm is [Q-learning](https://en.wikipedia.org/wiki/Q-learning), which works both for discrete MDPs and for MDPs with high-dimensional state spaces (where \"function approximation\" is required). Q-learning is a model-free algorithm that directly learns the expected utility/reward of each action under the optimal policy. We leave as an exercise the implementation of Q-learning in WebPPL. Due to the functional purity of WebPPL, a Bayesian version of Q-learning is more natural and in the spirit of this tutorial. 
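\n\nFor reference, the standard tabular Q-learning update (stated here for orientation; it is not part of this tutorial's codebase) after taking action $$a$$ in state $$s$$ and observing reward $$r$$ and successor state $$s'$$ is:\n\n$$\nQ(s,a) \\leftarrow Q(s,a) + \\eta \\left[ r + \\gamma \\max_{a'} Q(s',a') - Q(s,a) \\right]\n$$\n\nwhere $$\\eta$$ is a learning rate and $$\\gamma$$ a discount factor. A Bayesian variant maintains a posterior distribution over Q-values rather than these point estimates. 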
See, for example \"Bayesian Q-learning\" refp:dearden1998bayesian and this review of Bayesian model-free approaches refp:ghavamzadeh2015bayesian.\n\n\n### Posterior Sampling Reinforcement Learning (PSRL)\n\nPosterior Sampling Reinforcement Learning (PSRL) is a model-based algorithm that generalizes posterior-sampling for Bandits to discrete, finite-horizon MDPs refp:osband2016posterior. The agent is initialized with a Bayesian prior distribution on the reward function $$R$$ and transition function $$T$$. At each episode the agent proceeds as follows:\n\n> 1. Sample $$R$$ and $$T$$ (a \"model\") from the distribution. Compute the optimal policy for this model and follow it until the episode ends.\n> 2. Update the distribution on $$R$$ and $$T$$ on observations from the episode.\n\nHow does this agent efficiently balance exploration and exploitation to rapidly learn the structure of an MDP? If the agent's posterior is already concentrated on a single model, the agent will mainly \"exploit\". If the agent is uncertain over models, then it will sample various different models in turn. For each model, the agent will visit states with high reward on that model and so this leads to exploration. If the states turn out not to have high reward, the agent learns this and updates its beliefs away from the model (and will rarely visit those states again).\n\nThe PSRL agent is simple to implement in our framework. The Bayesian belief-updating re-uses code from the POMDP agent: $$R$$ and $$T$$ are treated as latent state and are observed on every state transition. Computing the optimal policy for a sampled $$R$$ and $$T$$ is equivalent to planning in an MDP and we can re-use our MDP agent code. \n\nWe run the PSRL agent on Gridworld. The agent knows $$T$$ but does not know $$R$$. Reward is known to be zero everywhere but a single cell of the grid. The actual MDP is shown in Figure 1, where the time-horizon is 8 steps. The true reward function is specified by the variable `trueLatentReward` (where the order of the rows is the inverse of the displayed grid). The display shows the agent's trajectory on each episode (where the number of episodes is set to 10). \n\n**Figure 1:** True latent reward for Gridworld below. 
Agent receives reward 1 in the cell marked \"G\" and zero elsewhere.\n\n\n\n\n\n\n~~~~\n///fold:\n\n// Construct Gridworld (transitions but not rewards)\nvar ___ = ' '; \n\nvar grid = [\n [ ___, ___, '#', ___],\n [ ___, ___, ___, ___],\n [ '#', ___, '#', '#'],\n [ ___, ___, ___, ___]\n];\n\nvar pomdp = makeGridWorldPOMDP({\n grid,\n start: [0, 0],\n totalTime: 8,\n transitionNoiseProbability: .1\n});\n\nvar transition = pomdp.transition\n\nvar actions = ['l', 'r', 'u', 'd'];\n\nvar utility = function(state, action) {\n var loc = state.manifestState.loc;\n var r = state.latentState.rewardGrid[loc[0]][loc[1]];\n \n return r;\n};\n\n\n// Helper function to generate agent prior\nvar getOneHotVector = function(n, i) {\n if (n==0) { \n return [];\n } else {\n var e = 1*(i==0);\n return [e].concat(getOneHotVector(n-1, i-1));\n }\n};\n///\n\nvar observeState = function(state) { \n return utility(state);\n};\n\n\n\n\nvar makePSRLAgent = function(params, pomdp) {\n var utility = params.utility;\n\n // belief updating: identical to POMDP agent from Chapter 3c\n var updateBelief = function(belief, observation, action){\n return Infer({ model() {\n var state = sample(belief);\n var predictedNextState = transition(state, action);\n var predictedObservation = observeState(predictedNextState);\n condition(_.isEqual(predictedObservation, observation));\n return predictedNextState;\n }});\n };\n\n // this is the MDP agent from Chapter 3a\n var act = dp.cache(\n function(state) {\n return Infer({ model() {\n var action = uniformDraw(actions);\n var eu = expectedUtility(state, action);\n factor(1000 * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action) {\n return expectation(\n Infer({ model() {\n var u = utility(state, action);\n if (state.manifestState.terminateAfterAction) {\n return u;\n } else {\n var nextState = transition(state, action);\n var nextAction = sample(act(nextState));\n return u + expectedUtility(nextState, nextAction);\n }\n }}));\n });\n\n return { params, act, expectedUtility, updateBelief };\n};\n\n\n\n\nvar simulatePSRL = function(startState, agent, numEpisodes) {\n var act = agent.act;\n var updateBelief = agent.updateBelief;\n var priorBelief = agent.params.priorBelief;\n\n var runSampledModelAndUpdate = function(state, priorBelief, numEpisodesLeft) {\n var sampledState = sample(priorBelief);\n var trajectory = simulateEpisode(state, sampledState, priorBelief, 'noAction');\n var newBelief = trajectory[trajectory.length-1][2];\n var newBelief2 = Infer({ model() {\n return extend(state, {latentState : sample(newBelief).latentState });\n }});\n var output = [trajectory];\n\n if (numEpisodesLeft <= 1){\n return output;\n } else {\n return output.concat(runSampledModelAndUpdate(state, newBelief2,\n numEpisodesLeft-1));\n }\n };\n\n var simulateEpisode = function(state, sampledState, priorBelief, action) {\n var observation = observeState(state);\n var belief = ((action === 'noAction') ? priorBelief : \n updateBelief(priorBelief, observation, action));\n\n var believedState = extend(state, { latentState : sampledState.latentState });\n var action = sample(act(believedState));\n var output = [[state, action, belief]];\n\n if (state.manifestState.terminateAfterAction){\n return output;\n } else {\n var nextState = transition(state, action);\n return output.concat(simulateEpisode(nextState, sampledState, belief, action));\n }\n };\n return runSampledModelAndUpdate(startState, priorBelief, numEpisodes);\n};\n\n\n// Construct agent's prior. 
The latent state is just the reward function.\n// The \"manifest\" state is the agent's own location. \n\n\n// Combine manifest (fully observed) state with prior on latent state\nvar getPriorBelief = function(startManifestState, latentStateSampler){\n return Infer({ model() {\n return {\n manifestState: startManifestState, \n latentState: latentStateSampler()};\n }});\n};\n\n// True reward function\nvar trueLatentReward = {\n rewardGrid : [\n [ 0, 0, 0, 0],\n [ 0, 0, 0, 0],\n [ 0, 0, 0, 0],\n [ 0, 0, 0, 1]\n ]\n};\n\n// True start state\nvar startState = {\n manifestState: { \n loc: [0, 0],\n terminateAfterAction: false,\n timeLeft: 8\n },\n latentState: trueLatentReward\n};\n\n// Agent prior on reward functions (*getOneHotVector* defined above fold)\nvar latentStateSampler = function() {\n var flat = getOneHotVector(16, randomInteger(16));\n return { \n rewardGrid : [\n flat.slice(0,4), \n flat.slice(4,8), \n flat.slice(8,12), \n flat.slice(12,16) ] \n };\n}\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentStateSampler);\n\n// Build agent (using *pomdp* object defined above fold)\nvar agent = makePSRLAgent({ utility, priorBelief, alpha: 100 }, pomdp);\n\nvar numEpisodes = 10\nvar trajectories = simulatePSRL(startState, agent, numEpisodes);\n\nvar concatAll = function(list) {\n var inner = function (work, i) { \n if (i < list.length-1) {\n return inner(work.concat(list[i]), i+1) \n } else {\n return work;\n }\n }\n return inner([], 0); \n}\n\nvar badState = [[ { manifestState : { loc : \"break\" } } ]];\n\nvar trajectories = map(function(t) { return t.concat(badState);}, trajectories);\nviz.gridworld(pomdp, {trajectory : concatAll(trajectories)});\n~~~~\n\n\n\n\n\n\n\n\n\n----------\n\n\n\n\n\n\n\n\n### Footnotes\n\n", "date_published": "2018-06-24T18:01:20Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3d-reinforcement-learning.md"} +{"id": "25ae0e08efcf1ce08fbf7423fd4fdb65", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5a-time-inconsistency.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Time inconsistency I\"\ndescription: Exponential vs. hyperbolic discounting, Naive vs. Sophisticated planning.\n\n---\n\n### Introduction\n\nTime inconsistency is part of everyday human experience. In the night you wish to rise early; in the morning you prefer to sleep in. There is an inconsistency between what you prefer your future self to do and what your future self prefers to do. Forseeing this inconsistency, you take actions in the night to bind your future self to get up. These range from setting an alarm clock to arranging for someone drag you out of bed.\n\nThis pattern is not limited to attempts to rise early. People make failed resolutions to attend a gym regularly. Students procrastinate on writing papers, planning to start early but delaying until the last minute. Empirical studies have highlighted the practical import of time inconsistency both to completing online courses refp:patterson2015can and to watching highbrow movies refp:milkman2009highbrow. 
Time inconsistency has been used to explain not just quotidian laziness but also addiction, procrastination, and impulsive behavior, as well as an array of \"pre-commitment\" behaviors refp:ainslie2001breakdown.\n\nLab experiments on time inconsistency often use simple quantitative questions such as:\n\n>**Question**: Would you prefer to get $100 after 30 days or $110 after 31 days?\n\nMost people prefer the $110. But a significant proportion of people reverse their earlier preference once the 30th day comes around and they contemplate getting $100 immediately. How can this time inconsistency be captured by a formal model?\n\n\n### Time inconsistency due to hyperbolic discounting\n\nThis chapter models time inconsistency as resulting from *hyperbolic discounting*. The idea is that humans prefer receiving the same rewards sooner rather than later and the *discount function* describing this quantitatively is a hyperbola. Before describing the hyperbolic model, we provide some background on time discounting and incorporate it into our previous agent models. \n\n#### Exponential discounting for optimal agents\n\nThe examples of decision problems in previous chapters have a *known*, *finite* time horizon. Yet there are practical decision problems that are better modeled as having an *unbounded* or *infinite* time horizon. For example, if someone tries to travel home after a vacation, there is no obvious time limit for their task. The same holds for a person saving or investing for the long-term.\n\nGeneralizing the previous agent models to the unbounded case faces a difficulty. The *infinite* summed expected utility of an action will (generally) not converge. The standard solution is to model the agent as maximizing the *discounted* expected utility, where the discount function is exponential. This makes the infinite sums converge and results in an agent model that is analytically and computationally tractable. Aside from mathematical convenience, exponential discounting might also be an accurate model of the \"time preference\" of certain rational agents[^justification]. Exponential discounting represents a (consistent) preference for good things happening sooner rather than later[^exponential].\n\n[^justification]: People care about a range of things: e.g. the food they eat daily, their careers, their families, the progress of science, the preservation of the earth's environment. Many have argued that humans have a time preference. So models that infer human preferences from behavior should be able to represent this time preference. \n\n[^exponential]: There are arguments that exponential discounting is the uniquely rational mode of discounting for agents with time preference. The seminal paper by refp:strotz1955myopia proves that, \"in the continuous time setting, the only discount function such that the optimal policy doesn't vary in time is exponential discounting\". In the discrete-time setting, refp:lattimore2014general prove the same result, as well as discussing optimal strategies for sophisticated time-inconsistent agents.\n\nWhat are the effects of exponential discounting? We return to the deterministic Bandit problem from Chapter III.3 (see Figure 1). Suppose a person decides every year where to go on a skiing vacation. There is a fixed set of options {Tahoe, Chile, Switzerland} and a finite time horizon[^bandit]. The person discounts exponentially and so they prefer a good vacation now to an even better one in the future. 
This means they are less likely to *explore*, since exploration takes time to pay off.\n\n>**Figure 1**: Deterministic Bandit problem. The agent tries different arms/destinations and receives rewards. The reward for Tahoe is known but Chile and Switzerland are both unknown. The actual best option is Tahoe. \n
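\n\nTo make this concrete, here is a back-of-the-envelope calculation (a sketch only: it treats each trial as one unit of delay and ignores the value of also learning about Switzerland), using the agent's prior and the per-trial discount factor of 0.5 from the codebox below.\n\n~~~~\n// Is one exploratory trip to Chile worth it? (rough sketch; prior from the codebox below)\nvar pChileGood = 0.1;                          // prior probability Chile is worth 5\nvar gainPerLaterTrial = 5 - 1;                 // Chile (if good) vs. the known Tahoe\nvar immediateCost = 1 - (0.9 * 0 + 0.1 * 5);   // expected loss of trying Chile once = 0.5\nvar laterTrials = _.range(1, 10);              // 9 trials remain after the first\n\n// Undiscounted agent: future gains count in full\nvar undiscountedGain = pChileGood * gainPerLaterTrial * laterTrials.length;\n// Exponential discounter with a factor of 0.5 per trial\nvar discountedGain = pChileGood * gainPerLaterTrial *\n    sum(map(function(t) { return Math.pow(0.5, t); }, laterTrials));\n\n// The exploratory trip pays for itself without discounting (3.6 > 0.5)\n// but not with it (roughly 0.4 < 0.5).\nprint({ immediateCost, undiscountedGain, discountedGain });\n~~~~\n\n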
\n\n[^bandit]: As noted above, exponential discounting is usually combined with an *unbounded* time horizon. However, if a human makes a series of decisions over a long time scale, then it makes sense to include their time preference. For this particular example, imagine the person is looking for the best skiing or sports facilities and doesn't care about variety. There could be a known finite time horizon because at some age they are too old for adventurous skiing. \n\n\n~~~~\n///fold:\nvar baseParams = {\n noDelays: false,\n discount: 0,\n sophisticatedOrNaive: 'naive'\n};\n\nvar armToPlace = function(arm){\n return {\n 0: \"Tahoe\",\n 1: \"Chile\",\n 2: \"Switzerland\"\n }[arm];\n};\n\nvar display = function(trajectory) {\n return map(armToPlace, most(trajectory));\n};\n///\n\n// Arms are skiing destinations:\n// 0: \"Tahoe\", 1: \"Chile\", 2: \"Switzerland\"\n\n// Actual utility for each destination\nvar trueArmToPrizeDist = {\n 0: Delta({ v: 1 }),\n 1: Delta({ v: 0 }),\n 2: Delta({ v: 0.5 })\n};\n\n// Constuct Bandit world\nvar numberOfTrials = 10;\nvar bandit = makeBanditPOMDP({\n numberOfArms: 3,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials,\n numericalPrizes: true\n});\n\nvar world = bandit.world;\nvar start = bandit.startState;\n\n// Agent prior for utility of each destination\nvar priorBelief = Infer({ model() {\n var armToPrizeDist = {\n // Tahoe has known utility 1:\n 0: Delta({ v: 1 }),\n // Chile has high variance:\n 1: categorical([0.9, 0.1],\n [Delta({ v: 0 }), Delta({ v: 5 })]),\n // Switzerland has high expected value:\n 2: uniformDraw([Delta({ v: 0.5 }), Delta({ v: 1.5 })]) \n };\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n}});\n\nvar discountFunction = function(delay) {\n return Math.pow(0.5, delay);\n};\n\nvar exponentialParams = extend(baseParams, { discountFunction, priorBelief });\nvar exponentialAgent = makeBanditAgent(exponentialParams, bandit,\n 'beliefDelay');\nvar exponentialTrajectory = simulatePOMDP(start, world, exponentialAgent, 'actions');\n\nvar optimalParams = extend(baseParams, { priorBelief });\nvar optimalAgent = makeBanditAgent(optimalParams, bandit, 'belief');\nvar optimalTrajectory = simulatePOMDP(start, world, optimalAgent, 'actions');\n\n\nprint('exponential discounting trajectory: ' + display(exponentialTrajectory));\nprint('\\noptimal trajectory: ' + display(optimalTrajectory));\n~~~~\n\n \n#### Discounting and time inconsistency\n\nExponential discounting is typically thought of as a *relative* time preference. A fixed reward will be discounted by a factor of $$\\delta^{-30}$$ if received on Day 30 rather than Day 0. On Day 30, the same reward is discounted by $$\\delta^{-30}$$ if received on Day 60 and not at all if received on Day 30. This relative time preference is \"inconsistent\" in a superficial sense. With $$\\delta=0.95$$ per day (and linear utility in money), $100 after 30 days is worth $21 and $110 at 31 days is worth $22. Yet when the 30th day arrives, they are worth $100 and $105 respectively[^inconsistent]! The key point is that whereas these *magnitudes* have changed, the *ratios* stay fixed. Indeed, the ratio between a pair of outcomes stays fixed regardless of when the exponential discounter evaluates them. 
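\n\nThese numbers are easy to check directly (a minimal sketch, taking $$\\delta = 0.95$$ as the per-day discount factor and utility linear in dollars):\n\n~~~~\n// Check of the numbers above: delta = 0.95 per day, utility linear in dollars\nvar delta = 0.95;\nvar presentValue = function(amount, daysFromNow) {\n  return amount * Math.pow(delta, daysFromNow);\n};\n\n// Evaluated on Day 0: roughly 21 and 22\nprint([presentValue(100, 30), presentValue(110, 31)]);\n// Evaluated on Day 30: 100 and 104.5\nprint([presentValue(100, 0), presentValue(110, 1)]);\n// The ratio is 1.1 * delta in both cases, so the preference never reverses.\n~~~~\n\n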
In summary: while a discounting agent evaluates two prospects in the future as worth little compared to similar near-term prospects, the agent agrees with their future self about which of the two future prospects is better.\n\n[^inconsistent]: One can think of exponential discounting in a non-relative way by choosing a fixed staring time in the past (e.g. the agent's birth) and discounting everything relative to that. This results in an agent with a preference to travel back in time to get higher rewards!\n\nAny smooth discount function other than an exponential will result in preferences that reverse over time refp:strotz1955myopia. So it's not so suprising that untutored humans should be subject to such reversals[^reversal]. Various functional forms for human discounting have been explored in the literature. We describe the *hyperbolic discounting* model refp:ainslie2001breakdown because it is simple and well-studied. Other functional form can be substituted into our models.\n\n[^reversal]: Without computational aids, human representations of discrete and continuous quantities (including durations in time and dollar values) are systematically inaccurate. See refp:dehaene2011number. \n\nHyperbolic and exponential discounting curves are illustrated in Figure 2. We plot the discount factor $$D$$ as a function of time $$t$$ in days, with constants $$\\delta$$ and $$k$$ controlling the slope of the function. In this example, each constant is set to 2. The exponential is:\n\n$$\nD=\\frac{1}{\\delta^t}\n$$\n\nThe hyperbolic function is:\n\n$$\nD=\\frac{1}{1+kt}\n$$\n\nThe crucial difference between the curves is that the hyperbola is initially steep and then becomes almost flat, while the exponential continues to be steep. This means that exponential discounting is time consistent and hyperbolic discounting is not. \n\n~~~~\nvar delays = _.range(7);\nvar expDiscount = function(delay) {\n return Math.pow(0.5, delay); \n};\nvar hypDiscount = function(delay) {\n return 1.0 / (1 + 2*delay);\n};\nvar makeExpDatum = function(delay){\n return {\n delay, \n discountFactor: expDiscount(delay),\n discountType: 'Exponential discounting: 1/2^t'\n };\n};\nvar makeHypDatum = function(delay){\n return {\n delay,\n discountFactor: hypDiscount(delay),\n discountType: 'Hyperbolic discounting: 1/(1 + 2t)'\n };\n};\nvar expData = map(makeExpDatum, delays);\nvar hypData = map(makeHypDatum, delays);\nviz.line(expData.concat(hypData), { groupBy: 'discountType' });\n~~~~\n\n>**Figure 2:** Graph comparing exponential and hyperbolic discount curves. \n\n\n>**Exercise:** We return to our running example but with slightly different numbers. The agent chooses between receiving $100 after 4 days or $110 after 5 days. The goal is to compute the preferences over each option for both exponential and hyperbolic discounters, using the discount curves shown in Figure 2. Compute the following:\n\n> 1. The discounted utility of the $100 and $110 rewards relative to Day 0 (i.e. how much the agent values each option when the rewards are 4 or 5 days away).\n>2. The discounted utility of the $100 and $110 rewards relative to Day 4 (i.e. how much each option is valued when the rewards are 0 or 1 day away).\n\n### Time inconsistency and sequential decision problems\n\nWe have shown that hyperbolic discounters have different preferences over the $100 and $110 depending on when they make the evaluation. 
This conflict in preferences leads to complexities in planning that don't occur in the optimal (PO)MDP agents which either discount exponentially or do not discount at all.\n\nConsider the example in the exercise above and imagine you have time inconsistent preferences. On Day 0, you write down your preference but on Day 4 you'll be free to change your mind. If you know your future self would choose the $100 immediately, you'd pay a small cost now to *pre-commit* your future self. However, if you believe your future self will share your current preferences, you won't pay this cost (and so you'll end up taking the $100). This illustrates a key distinction. Time inconsistent agents can be \"Naive\" or \"Sophisticated\":\n\n- **Naive agent**: assumes his future self shares his current time preference. For example, a Naive hyperbolic discounter assumes his far future self has a nearly flat discount curve (rather than the \"steep then flat\" discount curve he actually has). \n\n- **Sophisticated agent**: has the correct model of his future self's time preference. A Sophisticated hyperbolic discounter has a nearly flat discount curve for the far future but is aware that his future self does not share this discount curve.\n\nBoth kinds of agents evaluate rewards differently at different times. To distinguish a hyperbolic discounter's current and future selves, we refer to the agent acting at time $$t_i$$ as the $$t_i$$-agent. A Sophisticated agent, unlike a Naive agent, has an accurate model of his future selves. The Sophisticated $$t_0$$-agent predicts the actions of the $$t$$-agents (for $$t>t_0$$) that would conflict with his preferences. To prevent these actions, the $$t_0$$-agent tries to take actions that *pre-commit* the future agents to outcomes the $$t_0$$-agent prefers[^sophisticated].\n\n[^sophisticated]: As has been pointed out previously, there is a kind of \"inter-generational\" conflict between agent's future selves. If pre-commitment actions are available at time $$t_0$$, the $$t_0$$-agent does better in expectation if it is Sophisticated rather than Naive. Equivalently, the $$t_0$$-agent's future selves will do better if the agent is Naive.\n\n\n### Naive and Sophisticated Agents: Gridworld Example\n\nBefore describing our formal model and implementation of Naive and Sophisticated hyperbolic discounters, we illustrate their contrasting behavior using the Restaurant Choice example. We use the MDP version, where the agent has full knowledge of the locations of restaurants and of which restaurants are open. Recall the problem setup: \n\n>**Restaurant Choice**: Bob is looking for a place to eat. His decision problem is to take a sequence of actions such that (a) he eats at a restaurant he likes and (b) he does not spend too much time walking. The restaurant options are: the Donut Store, the Vegetarian Salad Bar, and the Noodle Shop. The Donut Store is a chain with two local branches. We assume each branch has identical utility for Bob. We abbreviate the restaurant names as \"Donut South\", \"Donut North\", \"Veg\" and \"Noodle\".\n\nThe only difference from previous versions of Restaurant Choice is that restaurants now have *two* utilities. On entering a restaurant, the agent first receives the *immediate reward* (i.e. how good the food tastes) and at the next timestep receives the *delayed reward* (i.e. how good the person feels after eating it).\n\n**Exercise:** Run the codebox immediately below. 
Think of ways in which Naive and Sophisticated hyperbolic discounters with identical preferences (i.e. identical utilities for each restaurant) might differ for this decision problem. \n\n\n~~~~\n///fold: restaurant choice MDP\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\nviz.gridworld(mdp.world, { trajectory: [mdp.startState] });\n~~~~\n\nThe next two codeboxes show the behavior of two hyperbolic discounters. Each agent has the same preferences and discount function. They differ only in that the first is Naive and the second is Sophisticated.\n\n\n~~~~\n///fold: restaurant choice MDP, naiveTrajectory\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: naiveTrajectory });\n~~~~\n\n\n~~~~\n///fold: restaurant choice MDP, sophisticatedTrajectory\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n 
[{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: sophisticatedTrajectory });\n~~~~\n\n>**Exercise:** (Try this exercise *before* reading further). Your goal is to do preference inference from the observed actions in the codeboxes above (using only a pen and paper). The discount function is the hyperbola $$D=1/(1+kt)$$, where $$t$$ is the time from the present, $$D$$ is the discount factor (to be multiplied by the utility) and $$k$$ is a positive constant. Find a single setting for the utilities and discount function that produce the behavior in both the codeboxes above. This includes utilities for the restaurants (both *immediate* and *delayed*) and for the `timeCost` (the negative utility for each additional step walked), as well as the discount constant $$k$$. Assume there is no softmax noise. \n\n------\n\nThe Naive agent goes to Donut North, even though Donut South (which has identical utility) is closer to the agent's starting point. One possible explanation is that the Naive agent has a higher utility for Veg but gets \"tempted\" by Donut North on their way to Veg[^naive_path].\n\n[^naive_path]: At the start, no restaurants can be reached quickly and so the agent's discount function is nearly flat when evaluating each one of them. This makes Veg look most attractive (given its higher overall utility). But going to Veg means getting closer to Donut North, which becomes more attractive than Veg once the agent is close to it (because of the discount function). Taking an inefficient path -- one that is dominated by another path -- is typical of time-inconsistent agents. \n\nThe Sophisticated agent can accurately model what it *would* do if it ended up in location [3,5] (adjacent to Donut North). So it avoids temptation by taking the long, inefficient route to Veg. \n\nIn this simple example, the Naive and Sophisticated agents each take paths that optimal time-consistent MDP agents (without softmax noise) would never take. So this is an example where a bias leads to a *systematic* deviation from optimality and behavior that is not predicted by an optimal model. In Chapter 5.3 we explore inference of preferences for time inconsistent agents.\n\nNext chapter: [Time inconsistency II](/chapters/5b-time-inconsistency.html)\n\n
\n\n### Footnotes\n", "date_published": "2019-08-24T14:52:08Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5a-time-inconsistency.md"} +{"id": "8adf0ba4ce94372feb4380f99a96c790", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6c-inference-rl.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Reinforcement learning techniques\ndescription: Max-margin and linear programming methods for IRL.\nstatus: stub\nis_section: false\nhidden: true\n---\n\n- Could have appendix discussing Apprenticeship Learning ideas in Abbeel and Ng in more detail.\n", "date_published": "2016-03-09T21:34:01Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6c-inference-rl.md"} +{"id": "5e5b2764fe4ae3054bfcbde84adab3f0", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/3c-pomdp.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Environments with hidden state: POMDPs\"\ndescription: Mathematical formalism for POMDPs, Bandit and Restaurant Choice examples. \n---\n\n\n \n## Introduction: Learning about the world from observation\n\nThe previous chapters made two strong assumptions that often fail in practice. First, we assumed the environment was an MDP, where the state is fully observed by the agent at all times. Second, we assumed that the agent starts off with *full knowledge* of the MDP -- rather than having to learn its parameters from experience. This chapter relaxes the first assumption by introducing POMDPs. The next [chapter](/chapters/3d-reinforcement-learning.html) introduces **reinforcement learning**, an approach to learning MDPs from experience. \n\n\n## POMDP Agent Model\n\n### Informal overview\n\nIn an MDP the agent observes the full state of the environment at each timestep. In Gridworld, for instance, the agent always knows their precise position and is uncertain only about their future position. Yet in real-world problems, the agent often does not observe the full state every timestep. For example, suppose you are sailing at night without any navigation instruments. You might be very uncertain about your precise position and you only learn about it indirectly, by waiting to observe certain landmarks in the distance. For environments where the state is only observed partially and indirectly, we use Partially Observed Markov Decision Processes (POMDPs). \n\nIn a Partially Observed Markov Decision Process (POMDP), the agent knows the transition function of the environment. This distinguishes POMDPs from [Reinforcement Learning](/chapters/3d-reinforcement-learning.html) problems. However, the agent starts each episode uncertain about the precise state of the environment. For example, if the agent is choosing where to eat on holiday, they may be uncertain about their own location and uncertain about which restaurants are open. \n\nThe agent learns about the state indirectly via *observations*. At each timestep, they receive an observation that depends on the true state and their previous action (according to a fixed *observation function*). They update a probability distribution on the current state and then choose an action. 
The action causes a state transition just like in an MDP but the agent only receives indirect evidence about the new state.\n\nAs an example, consider the Restaurant Choice Problem. Suppose Bob doesn't know whether the Noodle Shop is open. Previously, the agent's state consisted of Bob's location on the grid as well as the remaining time. In the POMDP case, the state also represents whether or not the Noodle Shop is open, which determines whether Bob can enter the Noodle Shop. When Bob gets close enough to the Noodle Shop, he will observe whether or not it's open. Bob's planning should take this into account: if the Noodle Shop is closed then Bob will observe this and can simply head to a different restaurant. \n\n\n\n### Formal model\n\n\nWe first define the class of decision problems (POMDPs) and then define an agent model for optimally solving these problems. Our definitions are based on reft:kaelbling1998planning.\n\nA Partially Observable Markov Decision Process (POMDP) is a tuple $$ \\left\\langle S,A(s),T(s,a),U(s,a),\\Omega,O \\right\\rangle$$, where:\n\n- $$S$$ (state space), $$A$$ (action space), $$T$$ (transition function), $$U$$ (utility or reward function) form an MDP as defined in [chapter 3.1](/chapters/3a-mdp.html), with $$U$$ assumed to be deterministic[^utility]. \n\n- $$\\Omega$$ is the finite space of observations the agent can receive.\n\n- $$O$$ is a function $$ O\\colon S \\times A \\to \\Delta \\Omega $$. This is the *observation function*, which maps an action $$a$$ and the state $$s'$$ resulting from taking $$a$$ to an observation $$o \\in \\Omega$$ drawn from $$O(s',a)$$.\n\n[^utility]: In the RL literature, the utility or reward function is often allowed to be *stochastic*. Our agent models assume that the agent's utility function is deterministic. To represent environments with stochastic \"rewards\", we treat the reward as a stochastic part of the environment (i.e. the world state). So in a Bandit problem, instead of the agent receiving a (stochastic) reward $$R$$, they transition to a state to which they assign a fixed utility $$R$$. (Why do we avoid stochastic utilities? One focus of this tutorial is inferring an agent's preferences. The preferences are fixed over time and non-stochastic. We want to identify the agent's utility function with their preferences). \n\nSo at each timestep, the agent transitions from state $$s$$ to state $$s' \\sim T(s,a)$$ (where $$s$$ and $$s'$$ are generally unknown to the agent) having performed action $$a$$. On entering $$s'$$ the agent receives an observation $$o \\sim O(s',a)$$ and a utility $$U(s,a)$$. \n\nTo characterize the behavior of an expected-utility maximizing agent, we need to formalize the belief-updating process. Let $$b$$, the current belief function, be a probability distribution over the agent's current state. Then the agent's successor belief function $$b'$$ over their next state is the result of a Bayesian update on the observation $$o \\sim O(s',a)$$ where $$a$$ is the agent's action in $$s$$. That is:\n\n**Belief-update formula:**\n\n$$\nb'(s') \\propto O(s',a,o)\\sum_{s \\in S}{T(s,a,s')b(s)}\n$$\n\nIntuitively, the probability that $$s'$$ is the new state depends on the marginal probability of transitioning to $$s'$$ (given $$b$$) and the probability of the observation $$o$$ occurring in $$s'$$. 
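\n\nBefore turning to the full agent model, here is a toy numerical check of the belief-update formula (a sketch, not the Restaurant Choice model itself): a single latent feature, a transition function that leaves it unchanged, and a noisy observation. We use `observe(dist, value)` as a shortcut for sampling the predicted observation and conditioning on it matching, which is what the full agent below does with `condition`.\n\n~~~~\n// Toy check of the belief-update formula.\n// Latent state: whether a shop is 'open' or 'closed'; its status never changes.\nvar transition = function(state, action) { return state; };\n\n// O(s', a): the agent sees 'looksOpen' with probability 0.9 if open, 0.2 if closed\nvar observationDist = function(state) {\n  return Categorical({\n    vs: ['looksOpen', 'looksClosed'],\n    ps: state === 'open' ? [0.9, 0.1] : [0.2, 0.8]\n  });\n};\n\nvar prior = Categorical({ vs: ['open', 'closed'], ps: [0.5, 0.5] });\n\nvar updateBelief = function(belief, observation, action) {\n  return Infer({ model() {\n    var state = sample(belief);                        // b(s)\n    var nextState = transition(state, action);         // T(s,a,s')\n    observe(observationDist(nextState), observation);  // weight by O(s',a,o)\n    return nextState;\n  }});\n};\n\n// After seeing 'looksOpen': P(open) = 0.45 / (0.45 + 0.1), about 0.82\nviz(updateBelief(prior, 'looksOpen', 'approach'));\n~~~~\n\n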
The relation between the variables in a POMDP is summarized in Figure 1 (below).\n\n>**Figure 1:** The dependency structure between variables in a POMDP.\n\nThe ordering of events in Figure 1 is as follows:\n\n>(1). The agent chooses an action $$a$$ based on belief distribution $$b$$ over their current state (which is actually $$s$$).\n\n>(2). The agent gets utility $$u = U(s,a)$$ when leaving state $$s$$ having taken $$a$$.\n\n>(3). The agent transitions to state $$s' \\sim T(s,a)$$, where it gets observation $$o \\sim O(s',a)$$ and updates its belief to $$b'$$ by updating $$b$$ on the observation $$o$$.\n\nIn our previous agent model for MDPs, we defined the expected utility of an action $$a$$ in a state $$s$$ recursively in terms of the expected utility of the resulting pair of state $$s'$$ and action $$a'$$. This same recursive characterization of expected utility still holds. The important difference is that the agent's action $$a'$$ in $$s'$$ depends on their updated belief $$b'(s')$$. Hence the expected utility of $$a$$ in $$s$$ depends on the agent's belief $$b$$ over the state $$s$$. We call the following the **POMDP Expected Utility of State Recursion**. This recursion defines the function $$EU_{b}$$, which is analogous to the *value function*, $$V_{b}$$, in reft:kaelbling1998planning.\n\n**POMDP Expected Utility of State Recursion:**\n\n$$\nEU_{b}[s,a] = U(s,a) + \\mathbb{E}_{s',o,a'}(EU_{b'}[s',a'_{b'}])\n$$\n\nwhere:\n\n- we have $$s' \\sim T(s,a)$$ and $$o \\sim O(s',a)$$\n\n- $$b'$$ is the updated belief function $$b$$ on observation $$o$$, as defined above\n\n- $$a'_{b'}$$ is the softmax action the agent takes given belief $$b'$$\n\nThe agent cannot use this definition to directly compute the best action, since the agent doesn't know the state. Instead the agent takes an expectation over their belief distribution, picking the action $$a$$ that maximizes the following:\n\n$$\nEU[b,a] = \\mathbb{E}_{s \\sim b}(EU_{b}[s,a])\n$$\n\nWe can also represent the expected utility of action $$a$$ given belief $$b$$ in terms of a recursion on the successor belief state. We call this the **Expected Utility of Belief Recursion**, which is closely related to the Bellman Equations for POMDPs: \n\n$$\nEU[b,a] = \\mathbb{E}_{s \\sim b}( U(s,a) + \\mathbb{E}_{s',o,a'}(EU[b',a']) )\n$$\n\nwhere $$s'$$, $$o$$, $$a'$$ and $$b'$$ are distributed as in the Expected Utility of State Recursion.\n\nUnfortunately, finding the optimal policy for POMDPs is intractable. Even in the special case where observations are deterministic and the horizon is finite, determining whether the optimal policy has expected utility greater than some constant is PSPACE-complete refp:papadimitriou1987complexity.\n\n### Implementation of the Model\n\n\nAs with the agent model for MDPs, we provide a direct translation of the equations above into an agent model for solving POMDPs. 
The variables `nextState`, `nextObservation`, `nextBelief`, and `nextAction` correspond to $$s'$$, $$o$$, $$b'$$ and $$a'$$ respectively, and we use the Expected Utility of Belief Recursion.\n\n\n~~~~\n\nvar updateBelief = function(belief, observation, action){\n return Infer({ model() {\n var state = sample(belief);\n var predictedNextState = transition(state, action);\n var predictedObservation = observe(predictedNextState);\n condition(_.isEqual(predictedObservation, observation));\n return predictedNextState;\n }});\n};\n\nvar act = function(belief) {\n return Infer({ model() {\n var action = uniformDraw(actions);\n var eu = expectedUtility(belief, action);\n factor(alpha * eu);\n return action;\n }});\n};\n\nvar expectedUtility = function(belief, action) {\n return expectation(\n Infer({ model() {\n var state = sample(belief);\n var u = utility(state, action);\n if (state.terminateAfterAction) {\n return u;\n } else {\n var nextState = transition(state, action);\n var nextObservation = observe(nextState);\n var nextBelief = updateBelief(belief, nextObservation, action);\n var nextAction = sample(act(nextBelief));\n return u + expectedUtility(nextBelief, nextAction);\n }\n }}));\n};\n\n// To simulate the agent, we need to transition\n// the state, sample an observation, then\n// compute agent's action (after agent has updated belief).\n\n// *startState* is agent's actual startState (unknown to agent)\n// *priorBelief* is agent's initial belief function\n\nvar simulate = function(startState, priorBelief) {\n\n var sampleSequence = function(state, priorBelief, action) {\n var observation = observe(state);\n var belief = updateBelief(priorBelief, observation, action);\n var action = sample(act(belief));\n var output = [ [state, action] ];\n\n if (state.terminateAfterAction){\n return output;\n } else {\n var nextState = transition(state, action);\n return output.concat(sampleSequence(nextState, belief, action));\n }\n };\n return sampleSequence(startState, priorBelief, 'noAction');\n};\n~~~~\n\n## Applying the POMDP agent model\n\n\n\n### Multi-arm Bandits\n\n[Multi-armed Bandits](https://en.wikipedia.org/wiki/Multi-armed_bandit) are an especially simple class of sequential decision problem. A Bandit problem has a single state and multiple actions (\"arms\"), where each arm has a distribution on rewards/utilities that is initially unknown. The agent has a finite time horizon and must balance exploration (i.e. learn about the reward distribution) with exploitation (obtain reward). \n\nBandits can be modeled as Reinforcement Learning problems, where the agent learns a good policy for an initially unknown MDP. This is the practical way to solve Bandits and the next [chapter](/chapters/3d-reinforcement-learning.html) illustrates this approach. Here we model Bandits as POMDPs and use the code above to find the optimal policy for some toy Bandit problems[^optimal]. (We choose a Bandit example to demonstrate the difficulty of exactly solving even the simplest POMDPs.)\n\n[^optimal]: In the standard Bandit problem, there is a single unknown MDP characterized by the reward distribution of each arm. In a more challenging generalization, the agent faces a sequence of random Bandit problems that are drawn from some prior. If we treat a standard Bandit as a POMDP, we compute the Bayes optimal policy for the single Bandit and by doing so, we implicitly compute the optimal policy for a sequence of Bandits drawn from the same prior. 
This is analogous to finding the optimal policy for an MDP: the optimal policy covers every possible state, including those occurring with tiny probability. Model-free RL, by contrast, will focus on the states that are actually encountered in practice.\n\nIn our examples, the arms are labeled with integers and arm $$i$$ has Bernoulli distributed rewards with parameter $$\\theta_i$$. In the first codebox (below), the true reward distribution, $$(\\theta_0,\\theta_1)$$, is $$(0.7,0.8)$$ but the agent's prior is uniform over $$(0.7,0.8)$$ and $$(0.7,0.2)$$. So the agent's only uncertainty is over $$\\theta_1$$. \n\nRather than implement everything in the codebox, we use the library [webppl-agents](https://github.com/agentmodels/webppl-agents). This includes functions for constructing a Bandit environment (`makeBanditPOMDP`), for constructing a POMDP agent (`makePOMDPAgent`) and for running the agent on the environment (`simulatePOMDP`). This [chapter](/chapters/guide-library.html) explains how to use webppl-agents. The Appendix includes a codebox with a full implementation of a POMDP agent on a Bandit problem. \n\n\n~~~~\n///fold: displayTrajectory\n\n// Takes a trajectory containing states and actions and returns one containing\n// locs and actions, getting rid of 'start' and the final meaningless action.\nvar displayTrajectory = function(trajectory) {\n var getPrizeAction = function(stateAction) {\n var state = stateAction[0];\n var action = stateAction[1];\n return [state.manifestState.loc, action];\n };\n\n var prizesActions = map(getPrizeAction, trajectory);\n var flatPrizesActions = _.flatten(prizesActions);\n var actionsPrizes = flatPrizesActions.slice(1, flatPrizesActions.length - 1);\n\n var printOut = function(n) {\n print('\\n Arm: ' + actionsPrizes[2*n] + ' -- Prize: '\n + actionsPrizes[2*n + 1]);\n };\n return map(printOut, _.range((actionsPrizes.length)*0.5));\n};\n///\n\n\n// 1. Construct Bandit POMDP\n\n// Reward distributions are Bernoulli\nvar getRewardDist = function(theta){\n return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\n// True reward distributions are [.7,.8].\nvar armToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.8)\n};\n\n// But the agent's prior is uniform over [.7,.8] and [.7,.2].\nvar alternateArmToRewardDist = {\n 0: getRewardDist(.7),\n 1: getRewardDist(.2)\n}\n \n// Options for library function for Bandits. Number of trials = horizon.\nvar banditOptions = {\n numberOfArms: 2,\n armToPrizeDist: armToRewardDist, \n numberOfTrials: 11,\n numericalPrizes: true\n};\n\nvar bandit = makeBanditPOMDP(banditOptions);\nvar startState = bandit.startState;\nvar world = bandit.world;\n\n\n// 2. Construct POMDP agent\n\n// Prior as described above and *latentState* is an implementation detail\n// for the libraries implementation of POMDPs\nvar priorBelief = Infer({ model() {\n var armToRewardDist = uniformDraw([armToRewardDist,\n alternateArmToRewardDist]);\n return extend(startState, { latentState: armToRewardDist });\n}});\n\n\nvar utility = function(state, action) {\n var reward = state.manifestState.loc;\n return reward === 'start' ? 0 : reward;\n};\n\nvar params = { \n priorBelief, \n utility,\n alpha: 1000 \n};\n\nvar agent = makePOMDPAgent(params, bandit.world);\n\n\n// 3. 
Simulate agent and return state-action pairs\n\nvar trajectory = simulatePOMDP(startState, world, agent, 'stateAction');\ndisplayTrajectory(trajectory);\n\n~~~~\n\n\nSolving Bandit problems using the simple dynamic programming approach of our POMDP agent does not scale well: runtime blows up as the horizon (\"number of trials\") and the number of arms increase. The codebox below shows how runtime scales as a function of the number of trials. (This takes approximately 20 seconds to run.)\n\n\n\n~~~~\n///fold: Construct world and agent priorBelief as above\n\nvar getRewardDist = function(theta){\n  return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\n// True reward distributions are [.7,.8].\nvar armToRewardDist = {\n  0: getRewardDist(.7),\n  1: getRewardDist(.8)\n};\n\n// But the agent's prior is uniform over [.7,.8] and [.7,.2].\nvar alternateArmToRewardDist = {\n  0: getRewardDist(.7),\n  1: getRewardDist(.2)\n}\n\nvar makeBanditWithNumberOfTrials = function(numberOfTrials) {\n  return makeBanditPOMDP({\n    numberOfTrials,\n    numberOfArms: 2,\n    armToPrizeDist: armToRewardDist,\n    numericalPrizes: true\n  });\n};\n\nvar getPriorBelief = function(numberOfTrials){\n  return Infer({ model() {\n    var armToPrizeDist = uniformDraw([armToRewardDist,\n                                      alternateArmToRewardDist]);\n    return makeBanditStartState(numberOfTrials, armToPrizeDist);\n  }})\n};\n\nvar baseParams = { alpha: 1000 };\n///\n\n// Simulate agent for a given number of Bandit trials\nvar getRuntime = function(numberOfTrials) {\n  var bandit = makeBanditWithNumberOfTrials(numberOfTrials);\n  var world = bandit.world;\n  var startState = bandit.startState;\n  var priorBelief = getPriorBelief(numberOfTrials)\n  var params = extend(baseParams, { priorBelief });\n  var agent = makeBanditAgent(params, bandit, 'belief');\n\n  var f = function() {\n    return simulatePOMDP(startState, world, agent, 'stateAction');\n  };\n\n  return timeit(f).runtimeInMilliseconds.toPrecision(3) * 0.001;\n};\n\n// Runtime as a function of number of trials\nvar numberOfTrialsList = _.range(15).slice(2);\nvar runtimes = map(getRuntime, numberOfTrialsList);\nviz.line(numberOfTrialsList, runtimes);\n\n~~~~\n\n\nScaling is much worse in the number of arms. 
The following may take over a minute to run:\n\n\n\n~~~~\n///fold:\n\nvar getRewardDist = function(theta){\n return Categorical({ vs:[0,1], ps: [1-theta, theta]});\n}\n\nvar makeArmToRewardDist = function(numberOfArms) {\n return map(function(x) { return getRewardDist(0.8); }, _.range(numberOfArms));\n};\n\nvar armToRewardDistSampler = function(numberOfArms) {\n return map(function(x) { return uniformDraw([getRewardDist(0.2),\n getRewardDist(0.8)]); },\n _.range(numberOfArms));\n};\n\nvar getPriorBelief = function(numberOfTrials, numberOfArms) {\n return Infer({ model() {\n var armToRewardDist = armToRewardDistSampler(numberOfArms);\n return makeBanditStartState(numberOfTrials, armToRewardDist);\n }});\n};\n\nvar baseParams = {alpha: 1000};\n///\n\nvar getRuntime = function(numberOfArms) {\n var armToRewardDist = makeArmToRewardDist(numberOfArms);\n var options = {\n numberOfTrials: 5,\n\tarmToPrizeDist: armToRewardDist,\n\tnumberOfArms,\n\tnumericalPrizes: true\n };\n var numberOfTrials = options.numberOfTrials;\n var bandit = makeBanditPOMDP(options);\n var world = bandit.world;\n var startState = bandit.startState;\n var priorBelief = getPriorBelief(numberOfTrials, numberOfArms);\n var params = extend(baseParams, { priorBelief });\n var agent = makeBanditAgent(params, bandit, 'belief');\n\n var f = function() {\n return simulatePOMDP(startState, world, agent, 'stateAction');\n };\n\n return timeit(f).runtimeInMilliseconds.toPrecision(3) * 0.001;\n};\n\n// Runtime as a function of number of arms\nvar numberOfArmsList = [1, 2, 3];\nvar runtimes = map(getRuntime, numberOfArmsList);\nviz.line(numberOfArmsList, runtimes);\n~~~~\n\n\n### Gridworld with observations\n\nA person looking for a place to eat will not be *fully* informed about all local restaurants. This section extends the [Restaurant Choice problem](/chapters/3a-mdp.html) to represent an agent with uncertainty about which restaurants are open. The agent *observes* whether a restaurant is open by moving to one of the grid locations adjacent to the restaurant. If the restaurant is open, the agent can enter and receive utility. \n\n\n\nThe POMDP version of Restaurant Choice is built from the MDP version. States now have the form:\n\n>`{manifestState: { ... }, latentState: { ... }}`\n\nThe `manifestState` contains the features of the world that the agent always observes directly (and so always knows). This includes the remaining time and the agent's location in the grid. The `latentState` contains features that are only observable in certain states. In our examples, `latentState` specifies whether each restaurant is open or closed. The transition function for the POMDP is the same as the MDP except that if a restaurant is closed the agent cannot transition to it.\n\n\n\nThe next two codeboxes use the same POMDP, where all restaurants are open but for Noodle. The first agent prefers the Donut Store and believes (falsely) that Donut South is likely closed. 
The second agent prefers Noodle and believes (falsely) that Noodle is likely open.\n\n\n~~~~\n///fold:\nvar getPriorBelief = function(startManifestState, latentStateSampler){\n  return Infer({ model() {\n    return {\n      manifestState: startManifestState, \n      latentState: latentStateSampler()};\n  }});\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n  ['#', '#', '#', '#',  V , '#'],\n  ['#', '#', '#', ___, ___, ___], \n  ['#', '#', DN , ___, '#', ___],\n  ['#', '#', '#', ___, '#', ___],\n  ['#', '#', '#', ___, ___, ___],\n  ['#', '#', '#', ___, '#',  N ],\n  [___, ___, ___, ___, '#', '#'],\n  [DS , '#', '#', ___, '#', '#']\n];\n\nvar pomdp = makeGridWorldPOMDP({\n  grid,\n  noReverse: true,\n  maxTimeAtRestaurant: 2,\n  start: [3, 1],\n  totalTime: 11\n});\n///\n\nvar utilityTable = {\n  'Donut N': 5,\n  'Donut S': 5,\n  'Veg': 1,\n  'Noodle': 1,\n  'timeCost': -0.1\n};\nvar utility = function(state, action) {\n  var feature = pomdp.feature;\n  var name = feature(state.manifestState).name;\n  if (name) {\n    return utilityTable[name];\n  } else {\n    return utilityTable.timeCost;\n  }\n};\n\nvar latent = {\n  'Donut N': true,\n  'Donut S': true,\n  'Veg': true,\n  'Noodle': false\n};\nvar alternativeLatent = extend(latent, {\n  'Donut S': false,\n  'Noodle': true\n});\n\nvar startState = {\n  manifestState: { \n    loc: [3, 1],\n    terminateAfterAction: false,\n    timeLeft: 11\n  },\n  latentState: latent\n};\n\nvar latentStateSampler = function() {\n  return categorical([0.8, 0.2], [alternativeLatent, latent]);\n};\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentStateSampler);\nvar agent = makePOMDPAgent({ utility, priorBelief, alpha: 100 }, pomdp);\nvar trajectory = simulatePOMDP(startState, pomdp, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nviz.gridworld(pomdp.MDPWorld, { trajectory: manifestStates });\n~~~~\n\nHere is the agent that prefers Noodle and falsely believes that it is open:\n\n\n~~~~\n///fold: Same world, prior, start state, and latent state as previous codebox\nvar getPriorBelief = function(startManifestState, latentStateSampler){\n  return Infer({ model() {\n    return {\n      manifestState: startManifestState, \n      latentState: latentStateSampler()\n    };\n  }});\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n  ['#', '#', '#', '#',  V , '#'],\n  ['#', '#', '#', ___, ___, ___], \n  ['#', '#', DN , ___, '#', ___],\n  ['#', '#', '#', ___, '#', ___],\n  ['#', '#', '#', ___, ___, ___],\n  ['#', '#', '#', ___, '#',  N ],\n  [___, ___, ___, ___, '#', '#'],\n  [DS , '#', '#', ___, '#', '#']\n];\n\nvar pomdp = makeGridWorldPOMDP({\n  grid,\n  noReverse: true,\n  maxTimeAtRestaurant: 2,\n  start: [3, 1],\n  totalTime: 11\n});\n\nvar latent = {\n  'Donut N': true,\n  'Donut S': true,\n  'Veg': true,\n  'Noodle': false\n};\nvar alternativeLatent = extend(latent, {\n  'Donut S': false,\n  'Noodle': true\n});\n\nvar startState = {\n  manifestState: { \n    loc: [3, 1],\n    terminateAfterAction: false,\n    timeLeft: 11\n  },\n  latentState: latent\n};\n\nvar latentSampler = function() {\n  return categorical([0.8, 0.2], [alternativeLatent, latent]);\n};\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentSampler);\n///\n\nvar utilityTable = {\n  'Donut N': 1,\n  'Donut S': 1,\n  'Veg': 3,\n  'Noodle': 5,\n  'timeCost': -0.1\n};\nvar utility = function(state, action) {\n  var feature = 
pomdp.feature;\n var name = feature(state.manifestState).name;\n if (name) {\n return utilityTable[name];\n } else {\n return utilityTable.timeCost;\n }\n};\nvar agent = makePOMDPAgent({ utility, priorBelief, alpha: 100 }, pomdp);\nvar trajectory = simulatePOMDP(startState, pomdp, agent, 'states');\nvar manifestStates = _.map(trajectory, _.property('manifestState'));\n\nviz.gridworld(pomdp.MDPWorld, { trajectory: manifestStates });\n~~~~\n\n\nWhen does it make sense to treat this Restaurant Choice problem as a POMDP? As with Bandits, if the problem we face is a fixed (but initially unknown) MDP, and we get many episodes in which to learn by trial and error, then Reinforcement Learning is a simple and scalable approach. If the MDP varies with every episode (e.g. the hidden state of whether a restaurant is open varies from day to day), then POMDP methods may work better. (Even in the case where the MDP is fixed, if the stakes are very high, it will be best to solve for the optimal POMDP policy.) Finally, if our goal is to model human planning, then POMDP models are worth considering as they are more sample efficient than RL techniques (and humans can often solve planning problems in very few tries). \n\nThe next [chapter](/chapters/3d-reinforcement-learning.html) is on reinforcement learning, an approach which *learns* to solve an initially unknown MDP.\n\n\n\n\n\n\n
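One simple way to experiment with the two gridworld agents above is to vary their prior over the latent state. The sketch below is an illustrative variation (not part of the original examples): it makes the agent almost certain that Donut South is closed and Noodle is open. To try it, paste it over the sampler definition inside either codebox above (`latentStateSampler` in the first, `latentSampler` in the second), since codeboxes do not share state.\n\n~~~~\n// Hypothetical variation: put probability 0.95 (rather than 0.8) on the\n// alternative latent state, i.e. on Donut South being closed and Noodle open\nvar latentStateSampler = function() {\n  return categorical([0.95, 0.05], [alternativeLatent, latent]);\n};\n~~~~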
\n\n\n### Appendix: Complete Implementation of POMDP agent for Bandits\n\nWe apply the POMDP agent to a simplified variant of the Multi-arm Bandit Problem. In this variant, pulling an arm produces a *prize* deterministically. The agent begins with uncertainty about the mapping from arms to prizes and learns by trying the arms. In our example, there are only two arms. The first arm is known to have the prize \"chocolate\" and the second arm either has \"champagne\" or has no prize at all (\"nothing\"). See Figure 2 (below) for details.\n\n>**Figure 2:** Diagram for deterministic Bandit problem used in the codebox below. The boxes represent possible deterministic mappings from arms to prizes. Each prize has a reward/utility $$u$$. On the right are the agent's initial beliefs about the probability of each mapping. The true mapping (i.e. true *latent state*) has a solid outline.\n\nIn our implementation of this problem, the two arms are labeled \"0\" and \"1\" respectively. The *action* of pulling `Arm0` is also labeled \"0\" (and likewise for `Arm1`). After taking action `0`, the agent transitions to a state corresponding to the prize for `Arm0` and then gets to observe this prize. States are Javascript objects that contain a property for counting down the time (as in the MDP case) as well as a `prize` property. States also contain the *latent* mapping from arms to prizes (called `armToPrize`) that determines how an agent transitions on pulling an arm.\n\n~~~~\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Use latent \"armToPrize\" mapping in state to\n// determine which prize agent gets\nvar transition = function(state, action){\n  var newTimeLeft = state.timeLeft - 1;\n  return extend(state, {\n    prize: state.armToPrize[action], \n    timeLeft: newTimeLeft,\n    terminateAfterAction: newTimeLeft == 1\n  });\n};\n\n// After pulling an arm, agent observes associated prize\nvar observe = function(state){\n  return state.prize;\n};\n\n// Starting state specifies the latent state that agent tries to learn\n// (In order that *prize* is defined, we set it to 'start', which\n// has zero utility for the agent). \nvar startState = { \n  prize: 'start',\n  timeLeft: 3, \n  terminateAfterAction:false,\n  armToPrize: { 0: 'chocolate', 1: 'champagne' }\n};\n~~~~\n\nHaving illustrated our implementation of the POMDP agent and the Bandit problem, we put the pieces together and simulate the agent's behavior. The `makeAgent` function is a simplified version of the library function `makeBeliefAgent` used throughout the rest of this tutorial[^makeBelief].\n\nThe Belief-Update Formula is implemented by `updateBelief`. Instead of hand-coding a Bayesian belief update, we simply use WebPPL's built-in inference primitives. This approach means our POMDP agent can do any kind of inference that WebPPL itself can do. For this tutorial, we use the inference function `Enumerate`, which performs exact inference over discrete belief spaces. By changing the inference function, we get a POMDP agent that does approximate inference and simulates its future self as doing approximate inference. This inference could be over discrete or continuous belief spaces. (WebPPL includes Particle Filters, MCMC, and Hamiltonian Monte Carlo for differentiable models.) 
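\n\nFor example, a belief update that uses sampling rather than exact enumeration can be obtained simply by passing a `method` option to `Infer`. The sketch below is illustrative only (it is not part of the library): the sample count is an arbitrary choice, and `transition` and `observe` are assumed to be defined as in the surrounding codeboxes.\n\n~~~~\n// A sketch: the same belief update as in the codebox below, but using\n// rejection sampling instead of exact enumeration (100 samples is arbitrary)\nvar approximateUpdateBelief = function(belief, observation, action){\n  return Infer({ method: 'rejection', samples: 100, model() {\n    var state = sample(belief);\n    var predictedNextState = transition(state, action);\n    var predictedObservation = observe(predictedNextState);\n    condition(_.isEqual(predictedObservation, observation));\n    return predictedNextState;\n  }});\n};\n~~~~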
\n\n[^makeBelief]: One difference between the functions is that `makeAgent` uses the global variables `transition` and `observe`, instead of having a `world` parameter.\n\n~~~~\n///fold: Bandit problem is defined as above\n\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Use latent \"armToPrize\" mapping in state to\n// determine which prize agent gets\nvar transition = function(state, action){\n  var newTimeLeft = state.timeLeft - 1;\n  return extend(state, {\n    prize: state.armToPrize[action], \n    timeLeft: newTimeLeft,\n    terminateAfterAction: newTimeLeft == 1\n  });\n};\n\n// After pulling an arm, agent observes associated prize\nvar observe = function(state){\n  return state.prize;\n};\n\n// Starting state specifies the latent state that agent tries to learn\n// (In order that *prize* is defined, we set it to 'start', which\n// has zero utility for the agent). \nvar startState = { \n  prize: 'start',\n  timeLeft: 3, \n  terminateAfterAction:false,\n  armToPrize: {0:'chocolate', 1:'champagne'}\n};\n///\n\n// Defining the POMDP agent\n\n// Agent params include utility function and initial belief (*priorBelief*)\n\nvar makeAgent = function(params) {\n  var utility = params.utility;\n\n  // Implements *Belief-update formula* in text\n  var updateBelief = function(belief, observation, action){\n    return Infer({ model() {\n      var state = sample(belief);\n      var predictedNextState = transition(state, action);\n      var predictedObservation = observe(predictedNextState);\n      condition(_.isEqual(predictedObservation, observation));\n      return predictedNextState;\n    }});\n  };\n\n  var act = dp.cache(\n    function(belief) {\n      return Infer({ model() {\n        var action = uniformDraw(actions);\n        var eu = expectedUtility(belief, action);\n        factor(1000 * eu);\n        return action;\n      }});\n    });\n\n  var expectedUtility = dp.cache(\n    function(belief, action) {\n      return expectation(\n        Infer({ model() {\n          var state = sample(belief);\n          var u = utility(state, action);\n          if (state.terminateAfterAction) {\n            return u;\n          } else {\n            var nextState = transition(state, action);\n            var nextObservation = observe(nextState);\n            var nextBelief = updateBelief(belief, nextObservation, action);\n            var nextAction = sample(act(nextBelief));\n            return u + expectedUtility(nextBelief, nextAction);\n          }\n        }}));\n    });\n\n  return { params, act, expectedUtility, updateBelief };\n};\n\nvar simulate = function(startState, agent) {\n  var act = agent.act;\n  var updateBelief = agent.updateBelief;\n  var priorBelief = agent.params.priorBelief;\n\n  var sampleSequence = function(state, priorBelief, action) {\n    var observation = observe(state);\n    var belief = ((action === 'noAction') ? 
priorBelief : \n updateBelief(priorBelief, observation, action));\n var action = sample(act(belief));\n var output = [[state, action]];\n\n if (state.terminateAfterAction){\n return output;\n } else {\n var nextState = transition(state, action);\n return output.concat(sampleSequence(nextState, belief, action));\n }\n };\n // Start with agent's prior and a special \"null\" action\n return sampleSequence(startState, priorBelief, 'noAction');\n};\n\n\n\n//-----------\n// Construct the agent\n\nvar prizeToUtility = {\n chocolate: 1, \n nothing: 0, \n champagne: 1.5, \n start: 0\n};\n\nvar utility = function(state, action) {\n return prizeToUtility[state.prize];\n};\n\n\n// Define true startState (including true *armToPrize*) and\n// alternate possibility for startState (see Figure 2)\n\nvar numberTrials = 1;\nvar startState = { \n prize: 'start',\n timeLeft: numberTrials + 1, \n terminateAfterAction: false,\n armToPrize: { 0: 'chocolate', 1: 'champagne' }\n};\n\nvar alternateStartState = extend(startState, {\n armToPrize: { 0: 'chocolate', 1: 'nothing' }\n});\n\n// Agent's prior\nvar priorBelief = Categorical({ \n ps: [.5, .5], \n vs: [startState, alternateStartState]\n});\n\n\nvar params = { utility: utility, priorBelief: priorBelief };\nvar agent = makeAgent(params);\nvar trajectory = simulate(startState, agent);\n\nprint('Number of trials: ' + numberTrials);\nprint('Arms pulled: ' + map(second, trajectory));\n~~~~\n\nYou can change the agent's behavior by varying `numberTrials`, `armToPrize` in `startState` or the agent's prior. Note that the agent's final arm pull is random because the agent only gets utility when *leaving* a state.\n\n\n
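As an illustration of such a variation (a sketch only; the specific numbers are arbitrary choices), the following lines could be appended to the codebox above (codeboxes do not share state), giving the agent more trials and a prior that treats `nothing` as the more likely prize for the second arm:\n\n~~~~\n// Hypothetical variation: 4 trials, and a prior that puts most of its\n// weight on Arm1 having no prize\nvar numberTrials2 = 4;\nvar startState2 = extend(startState, { timeLeft: numberTrials2 + 1 });\nvar alternateStartState2 = extend(startState2, {\n  armToPrize: { 0: 'chocolate', 1: 'nothing' }\n});\nvar priorBelief2 = Categorical({\n  ps: [.2, .8],\n  vs: [startState2, alternateStartState2]\n});\n\nvar agent2 = makeAgent({ utility, priorBelief: priorBelief2 });\nvar trajectory2 = simulate(startState2, agent2);\nprint('Arms pulled: ' + map(second, trajectory2));\n~~~~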
\n\n### Footnotes\n\n", "date_published": "2017-03-19T19:27:27Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "3c-pomdp.md"} +{"id": "1c92e4186308d1ad03fed592fa57db19", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/1-introduction.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Introduction\ndescription: \"Motivating the problem of modeling human planning and inference using rich computational models.\"\nis_section: true\n---\n\nImagine a dataset that records how individuals move through a city. The figure below shows what a datapoint from this set might look like. It depicts an individual, who we'll call Bob, moving along a street and then stopping at a restaurant. This restaurant is one of two nearby branches of a chain of Donut Stores. Two other nearby restaurants are also shown on the map.\n\n![Donut temptation gridworld](/assets/img/ch1_donut_new.png)\n\nGiven Bob's movements alone, what can we infer about his preferences and beliefs? Since Bob spent a long time at the Donut Store, we infer that he bought something there. Since Bob could easily have walked to one of the other nearby eateries, we infer that Bob prefers donuts to noodles or salad.\n\nAssuming Bob likes donuts, why didn't he choose the store closer to his starting point (\"Donut South\")? The cause might be Bob's *beliefs* and *knowledge* rather than his *preferences*. Perhaps Bob doesn't know about \"Donut South\" because it just opened. Or perhaps Bob knows about Donut South but chose Donut North because it is open later.\n\nA different explanation is that Bob *intended* to go to the healthier \"Vegetarian Salad Bar\". However, the most efficient route to the Salad Bar takes him directly past Donut North, and once outside, he found donuts more tempting than salad.\n\nWe have described a variety of inferences about Bob which would explain his behavior. This tutorial develops models for inference that represent these different explanations and allow us to compute which explanations are most plausible. These models can also simulate an agent's behavior in novel scenarios: for example, predicting Bob's behavior if he looked for food in a different part of the city. \n\n\n\n## Agents as programs\n\n### Making rational plans\n\nFormal models of rational agents play an important role in economics refp:rubinstein2012lecture and in the cognitive sciences refp:chater2003rational as models of human or animal behavior. Core components of such models are *expected-utility maximization*, *Bayesian inference*, and *game-theoretic equilibria*. These ideas are also applied in engineering and in artificial intelligence refp:russell1995modern in order to compute optimal solutions to problems and to construct artificial systems that learn and reason optimally. \n\nThis tutorial implements utility-maximizing Bayesian agents as functional probabilistic programs. These programs provide a concise, intuitive translation of the mathematical specification of rational agents as code. The implemented agents explicitly simulate their own future choices via recursion. They update beliefs by exact or approximate Bayesian inference. They reason about other agents by simulating them (which includes simulating the simulations of others). \n\nThe first section of the tutorial implements agent models for sequential decision problems in stochastic environments. 
We introduce a program that solves finite-horizon MDPs, then extend it to POMDPs. These agents behave *optimally*, making rational plans given their knowledge of the world. Human behavior, by contrast, is often *sub-optimal*, whether due to irrational behavior or constrained resources. The programs we use to implement optimal agents can, with slight modification, implement agents with biases (e.g. time inconsistency) and with resource bounds (e.g. bounded look-ahead and Monte Carlo sampling).\n\n\n### Learning preferences from behavior\n\nThe example of Bob was not primarily about *simulating* a rational agent, but rather about the problem of *learning* (or *inferring*) an agent's preferences and beliefs from their choices. This problem is important to both economics and psychology. Predicting preferences from past choices is also a major area of applied machine learning; for example, consider the recommendation systems used by Netflix and Facebook.\n\nOne approach to this problem is to assume the agent is a rational utility-maximizer, to assume the environment is an MDP or POMDP, and to infer the utilities and beliefs and predict the observed behavior. This approach is called \"structural estimation\" in economics refp:aguirregabiria2010dynamic, \"inverse planning\" in cognitive science refp:ullman2009help, and \"inverse reinforcement learning\" (IRL) in machine learning and AI refp:ng2000algorithms. It has been applied to inferring the perceived rewards of education from observed work and education choices, preferences for health outcomes from smoking behavior, and the preferences of a nomadic group over areas of land (see cites in reft:evans2015learning). \n\n[Section IV](/chapters/4-reasoning-about-agents.html) shows how to infer the preferences and beliefs of the agents modeled in earlier chapters. Since the agents are implemented as programs, we can apply probabilistic programming techniques to perform this sort of inference with little additional code. We will make use of both exact Bayesian inference and sampling-based approximations (MCMC and particle filters).\n\n\n## Taster: probabilistic programming\n\nOur models of agents, and the corresponding inferences about agents, all run in \"code boxes\" in the browser, accompanied by animated visualizations of agent behavior. The language of the tutorial is [WebPPL](http://webppl.org), an easy-to-learn probabilistic programming language based on Javascript refp:dippl. As a taster, here are two simple code snippets in WebPPL:\n\n~~~~\n// Using the stochastic function `flip` we build a function that\n// returns 'H' and 'T' with equal probability:\n\nvar coin = function() {\n return flip(.5) ? 'H' : 'T';\n};\n\nvar flips = [coin(), coin(), coin()];\n\nprint('Some coin flips:');\nprint(flips);\n~~~~\n\n~~~~\n// We now use `flip` to define a sampler for the geometric distribution:\n\nvar geometric = function(p) {\n return flip(p) ? 
1 + geometric(p) : 1\n};\n\nvar boundedGeometric = Infer({ \n  model() { return geometric(0.5); },\n  method: 'enumerate', \n  maxExecutions: 20 \n});\n\nprint('Histogram of (bounded) Geometric distribution');\nviz(boundedGeometric);\n~~~~\n\nIn the [next chapter](/chapters/2-webppl.html), we will introduce WebPPL in more detail.\n", "date_published": "2017-04-16T22:22:12Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "1-introduction.md"} +{"id": "7334e96c06582304d08d94c2f873e6c7", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/8-guide-library.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Quick-start guide to the webppl-agents library\ndescription: Create your own MDPs and POMDPs. Create gridworlds and k-armed bandits. Use agents from the library and create your own.\nis_section: true\n---\n\n\n\n### Contents\n\n1. Introduction\n\n2. Creating MDPs\n\n3. Creating Gridworld MDPs\n\n4. Creating your own agents\n\n5. Creating POMDPs\n\n6. Creating k-armed bandits\n\n\n\n\n### Introduction\n\nThis is a quick-start guide to using the `webppl-agents` library. For a comprehensive explanation of the ideas behind the library (e.g. MDPs, POMDPs, hyperbolic discounting) and diverse examples of its use, go to the online textbook [agentmodels.org](http://agentmodels.org).\n\nThe webppl-agents library is built around two basic entities: *agents* and *environments*. These entities are combined by *simulating* an agent interacting with a particular environment. The library includes two standard RL environments as examples (Gridworld and Multi-armed Bandits). Four kinds of agent are included. Many combinations of environment and agent are possible. In addition, it's easy to add your own environments and agents -- as we illustrate below.\n\nNot all environments and agents can be combined. Among environments, we distinguish MDPs (Markov Decision Processes) and POMDPs (Partially Observable Markov Decision Processes). For a POMDP environment, the agent must be a \"POMDP agent\", which means it maintains a belief distribution on the state[^separation].\n\n[^separation]: This separation of POMDPs and MDPs is not necessary from a theoretical perspective, since POMDPs generalize MDPs. However, the separation is convenient in practice; it allows the MDP code to be short and perspicuous and it provides performance advantages.\n\n\n\n### Creating your own MDP environment\n\nWe begin by creating a very simple MDP environment and running two agents from the library on that environment.\n\nMDPs are defined [here](http://agentmodels.org/chapters/3a-mdp.html). For use in the library, MDP environments are Javascript objects with the following methods:\n\n>`{transition: ..., stateToActions: ...}`\n\nThe `transition` method is a function from state-action pairs to states (as in the function $$T$$ in the MDP definition). The `stateToActions` method is a mapping from states to the actions that are allowed in that state. (This is often a constant function).\n\nTo run an agent on an MDP, the agent object must have a `utility` method defined on the MDP's state-action space. This method is the agent's \"reward\" or \"utility\" function (we use the terms interchangeably).\n\n#### Creating the Line MDP environment\n\nOur first MDP environment is a discrete line (or one-dimensional gridworld) where the agent can move left or right (starting from the origin). 
More precisely, the Line MDP is as follows:\n\n- **States:** Points on the integer line (e.g. ..., -1, 0, 1, 2, ...).\n\n- **Actions/transitions:** Actions \"left\", \"right\" and \"stay\" deterministically move the agent one step left, one step right, or keep it in place. We represent the actions as $$[-1,0,1]$$ in the code below.\n\nIn our examples, the agent's `startState` is the origin. The utility is 1 at the origin, 3 at the third state right of the origin (\"state 3\"), and 0 otherwise.\n\nThe transition function must also decrement the time. States are objects with a `terminateAfterAction` property. In the example below, `terminateAfterAction` is set to `true` when the state's `timeLeft` attribute gets down to 1; this causes the MDP to terminate. Here is an example state for the Line MDP (it's also the `startState`):\n\n>`{terminateAfterAction: false, timeLeft:5, loc:0}`\n\n~~~~\n// helper function that decrements time and triggers termination when\n// time elapsed\nvar advanceStateTime = function(state) {\n  var newTimeLeft = state.timeLeft - 1;\n  return extend(state, {\n    timeLeft: newTimeLeft,\n    terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n  });\n};\n\n// constructor for the \"line\" MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n  var stateToActions = function(state) {\n    return [-1, 0, 1];\n  };\n\n  var transition = function(state, action) {\n    var newLoc = state.loc + action;\n    var stateNewLoc = extend(state, {loc: newLoc});\n    return advanceStateTime(stateNewLoc);\n  };\n\n  var world = { stateToActions, transition };\n\n  var startState = {\n    timeLeft: totalTime,\n    terminateAfterAction: false,\n    loc: 0\n  };\n\n  var utility = function(state, action){\n    var table = { 0: 1, 3: 3 };\n    return table[state.loc] ? table[state.loc] : 0;\n  };\n\n  return { world, startState, utility };\n};\n\n// save the MDP constructor for use in other codeboxes\nwpEditor.put('makeLineMDP', makeLineMDP);\n~~~~\n\nTo run an agent on this MDP, we use a `makeAgent` constructor and the library function `simulateMDP`. The constructor for MDP agents is `makeMDPAgent`:\n\n>`makeMDPAgent(params, world)`\n\nAgent constructors always have these same two arguments. The `world` argument is required for the agent's internal simulations of possible transitions. The `params` argument specifies the agent's parameters and whether the agent is optimal or biased.\n\nFor an optimal agent, the parameters are:\n\n>`{utility: , alpha: }`\n\nAn environment (or \"world\") and agent are combined with the `simulateMDP` function:\n\n>`simulateMDP(startState, world, agent, outputType)`\n\nGiven the utility function defined above, the highest utility state is at location 3 (three steps to the right from the origin). So an optimal agent (who doesn't hyperbolically discount) will move to this location and stay there.\n\n~~~~\n///fold: helper function that decrements time and triggers termination when time elapsed\nvar advanceStateTime = function(state) {\n  var newTimeLeft = state.timeLeft - 1;\n  return extend(state, {\n    timeLeft: newTimeLeft,\n    terminateAfterAction: newTimeLeft > 1 ? 
state.terminateAfterAction : true\n  });\n};\n\n// constructor for the \"line\" MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n  var stateToActions = function(state) {\n    return [-1, 0, 1];\n  };\n\n  var transition = function(state, action) {\n    var newLoc = state.loc + action;\n    var stateNewLoc = extend(state, {loc: newLoc});\n    return advanceStateTime(stateNewLoc);\n  };\n\n  var world = { stateToActions, transition };\n\n  var startState = {\n    timeLeft: totalTime,\n    terminateAfterAction: false,\n    loc: 0\n  };\n\n  var utility = function(state, action){\n    var table = { 0: 1, 3: 3 };\n    return table[state.loc] ? table[state.loc] : 0;\n  };\n\n  return { world, startState, utility };\n};\n///\n\n// Construct line MDP environment\nvar totalTime = 5;\nvar lineMDP = makeLineMDP(totalTime);\nvar world = lineMDP.world;\n\n// The lineMDP object also includes a utility function and startState\nvar utility = lineMDP.utility;\nvar startState = lineMDP.startState;\n\n\n// Construct MDP agent\nvar params = { alpha: 1000, utility };\nvar agent = makeMDPAgent(params, world);\n\n// Simulate the agent on the lineMDP with *outputType* set to *states*\nvar trajectory = simulateMDP(startState, world, agent, 'states');\n\n// Display the trajectory of states\nprint(trajectory);\n~~~~\n\nWe described the agent above as \"optimal\" because it does not hyperbolically discount and it is not myopic. However, we can adjust its \"soft-max\" noise by modifying the parameter `alpha` to induce sub-optimal behavior. Moreover, we can change the agent's behavior on this MDP by overwriting the utility method in `params`.\n\nTo construct a time-inconsistent, hyperbolically-discounting MDP agent, we include additional attributes in the `params` argument:\n\n>`{ discount:, sophisticatedOrNaive: }`\n\nThese attributes are explained in the [chapter](/chapters/5a-time-inconsistency.html) on hyperbolic discounting. The discounting agent stays at the origin because it isn't willing to \"delay gratification\" in order to get a larger total reward at location 3.\n\n~~~~\n///fold: helper function that decrements time and triggers termination when time elapsed\nvar advanceStateTime = function(state) {\n  var newTimeLeft = state.timeLeft - 1;\n  return extend(state, {\n    timeLeft: newTimeLeft,\n    terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n  });\n};\n\n// constructor for the \"line\" MDP environment:\n// argument *totalTime* is the time horizon\nvar makeLineMDP = function(totalTime) {\n\n  var stateToActions = function(state) {\n    return [-1, 0, 1];\n  };\n\n  var transition = function(state, action) {\n    var newLoc = state.loc + action;\n    var stateNewLoc = extend(state, {loc: newLoc});\n    return advanceStateTime(stateNewLoc);\n  };\n\n  var world = { stateToActions, transition };\n\n  var startState = {\n    timeLeft: totalTime,\n    terminateAfterAction: false,\n    loc: 0\n  };\n\n  var utility = function(state, action){\n    var table = { 0: 1, 3: 3 };\n    return table[state.loc] ? 
table[state.loc] : 0;\n };\n\n return { world, startState, utility };\n};\n///\n\n// Construct line MDP environment\nvar totalTime = 5;\nvar lineMDP = makeLineMDP(totalTime);\nvar world = lineMDP.world;\n\n// The lineMDP object also includes a utility function and startState\nvar utility = lineMDP.utility;\nvar startState = lineMDP.startState;\n\n// Construct hyperbolic agent\nvar params = {\n alpha: 1000,\n utility,\n discount: 2,\n sophisticatedOrNaive: 'naive'\n};\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\nWe've shown how to create your own MDP and then run different agents on that MDP. You can also create your own MDP agent, as we illustrate below.\n\n>**Exercise:** Try some variations of the Line MDP by modifying the `transition` method in the `makeLineMDP` constructor above. For example, change the underlying graph structure from a line into a loop.\n\n-----------\n\n\n\n### Creating Gridworld MDPs\n\nGridworld is a standard toy environment for reinforcement learning problems. The library contains a constructor for making a gridworld with your choice of dimensions and reward function. There is also a function for displaying gridworlds in the browser.\n\nWe begin by creating a simple gridworld environment (using `makeGridWorldMDP`) and display it using `viz.gridworld`.\n\n~~~~\n// Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n var grid = [\n [___, ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\nviz.gridworld(world, {trajectory: [startState]});\n~~~~\n\nGridworld states have a `loc` attribute for the agent's location (using discrete Cartesian coordinates). The agent is able to move up, down, left and right but is not able to stay put.\n\nHaving created a gridworld, we construct a utility function (where utility depends only on the agent's grid location) and simulate an optimal MDP agent.\n\n~~~~\n///fold: Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n var grid = [\n [___, ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0,0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n///\n\n// `isEqual` is in *underscore* (included in webppl-agents)\nvar utility = function(state, action) {\n return _.isEqual(state.loc, [0, 3]) ? 1 : 0;\n};\n\nvar params = { utility, alpha: 1000 };\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent);\nviz.gridworld(world, {trajectory: trajectory});\n~~~~\n\nYou can create terminal gridworld states by using features with a name. 
These named features can also be used to create a utility function without specifying grid coordinates.\n\n~~~~\nvar makeSimpleGridWorld = function() {\n\n  // '#' indicates a wall, and ' ' indicates a normal cell\n  var ___ = ' ';\n\n  // named features are terminal\n  var G = { name: 'gold' };\n  var S = { name: 'silver' };\n\n  var grid = [\n    [ G , ___, ___],\n    [ S , ___, ___],\n    ['#', '#', ___],\n    ['#', '#', ___],\n    [___, ___, ___]\n  ];\n\n  return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n  loc: [0, 0],\n  timeLeft: 10,\n  terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n  gold: 2,\n  silver: 1.8,\n  timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n\nvar params = { utility, alpha: 1000 };\nvar agent = makeMDPAgent(params, world);\nvar trajectory = simulateMDP(startState, world, agent);\nviz.gridworld(world, { trajectory });\n~~~~\n\nThere are many examples using gridworld in agentmodels.org, starting from this [chapter](/chapters/3b-mdp-gridworld.html).\n\n\n-------\n\n\n\n### Creating your own agents\n\nAs well as creating your own environments, it is straightforward to create your own agents for MDPs and POMDPs. Much of agentmodels.org is a tutorial on creating agents (e.g. optimal agents, myopic agents, etc.). Rather than recapitulate agentmodels.org, this section is brief and focuses on the basic interface that agents need to present.\n\nWe begin by creating an agent that chooses actions uniformly at random. To run an agent on an environment using the `simulateMDP` function, an agent object must have an `act` method and a `params` attribute. The `act` method is a function from states to a distribution on the available actions. The `params` attribute indicates whether the agent is an MDP or POMDP agent.\n\nWe use the simple gridworld environment from the codebox above.\n\n~~~~\n///fold: Build gridworld environment\nvar makeSimpleGridWorld = function() {\n\n  // '#' indicates a wall, and ' ' indicates a normal cell\n  var ___ = ' ';\n\n  // named features are terminal\n  var G = { name: 'gold' };\n  var S = { name: 'silver' };\n\n  var grid = [\n    [ G , ___, ___],\n    [ S , ___, ___],\n    ['#', '#', ___],\n    ['#', '#', ___],\n    [___, ___, ___]\n  ];\n\n  return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n  loc: [0, 0],\n  timeLeft: 10,\n  terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n  gold: 2,\n  silver: 1.8,\n  timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n///\n\nvar actions = ['u', 'd', 'l', 'r'];\n\nvar act = function(state){\n  return Infer({ model(){ return uniformDraw(actions); }});\n};\n\nvar randomAgent = { act, params: {} };\nvar trajectory = simulateMDP(startState, world, randomAgent);\nviz.gridworld(world, { trajectory });\n~~~~\n\nIn gridworld the same actions are available in each state. 
When the actions available depend on the state, the agent's `act` function needs access to the environment's `stateToActions` method.\n\n~~~~\n///fold: Create a constructor for our gridworld\nvar makeSimpleGridWorld = function() {\n\n // '#' indicates a wall, and ' ' indicates a normal cell\n var ___ = ' ';\n\n // named features are terminal\n var G = { name: 'gold' };\n var S = { name: 'silver' };\n\n var grid = [\n [ G , ___, ___],\n [ S , ___, ___],\n ['#', '#', ___],\n ['#', '#', ___],\n [___, ___, ___]\n ];\n\n return makeGridWorldMDP({ grid, transitionNoiseProbability: 0 })\n};\n\nvar simpleGridWorld = makeSimpleGridWorld();\nvar world = simpleGridWorld.world;\n\nvar startState = {\n loc: [0, 0],\n timeLeft: 10,\n terminateAfterAction: false\n};\n\n// The *makeUtilityFunction* method allows you to define\n// a utility function in terms of named features\nvar makeUtilityFunction = simpleGridWorld.makeUtilityFunction;\nvar table = {\n gold: 2,\n silver: 1.8,\n timeCost: -0.5\n};\nvar utility = makeUtilityFunction(table);\n///\n\nvar makeRandomAgent = function(world) {\n var stateToActions = world.stateToActions;\n var act = function(state) {\n return Infer({ model() {\n return uniformDraw(stateToActions(state));\n }});\n };\n return { act, params: {} };\n};\n\nvar randomAgent = makeRandomAgent(world);\nvar trajectory = simulateMDP(startState, world, randomAgent);\n\nviz.gridworld(world, { trajectory });\n~~~~\n\nIn the example above, the agent constructor `makeRandomAgent` takes the environment (`world`) as an argument in order to access `stateToActions`. Agent constructors will typically also use the environment's `transition` method to internally simulate state transitions.\n\n>**Exercise:** Implement an agent who takes the action with highest expected utility under the random policy. (You can do this by making use of the codebox above. Use the `makeRandomAgent` and `simulateMDP` function within a new agent constructor.)\n\nIn addition to writing agents from scratch, you can build on the agents available in the library.\n\n>**Exercise:** Start with the optimal MDP agent found [here](https://github.com/agentmodels/webppl-agents/blob/master/src/agents/makeMDPAgent.wppl#L3). Create a variant of this optimal agent that takes \"epsilon-greedy\" random actions instead of softmax random actions.\n\n--------\n\n\n\n### Creating POMDPs\n\nPOMDPs are introduced in agentmodels.org in this [chapter](/chapters/3c-pomdp.html). This section explains how to create your own POMDPs for use in the library.\n\nAs we explained above, MDPs in webppl-agents are objects with a `transition` method and a `stateToActions` method. POMDPs also have a `transition` method. Instead of `stateToActions`, they have a `beliefToActions` method, which maps a belief distribution over states to a set of available actions. POMDPs also have an `observe` method, which maps states to observations (typically represented as strings).\n\nHere is a simple POMDP based on the \"Line MDP\" example above. The agent moves along the integer line as before. This time the agent is uncertain whether or not there is high reward at location 3. 
The agent can only find out by moving to location 3 and receiving an observation.\n\n~~~~\n// States have the same structure as in MDPs:\n// the transition method needs to decrement\n// the state's *timeLeft* attribute until termination\n\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\n\nvar makeLinePOMDP = function() {\n\n var beliefToActions = function(belief){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var observe = function(state) {\n if (state.loc == 3) {\n return state.treasureAt3 ? 'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { beliefToActions, transition, observe };\n\n};\n~~~~\n\nTo simulate an agent on this POMDP, we need to create a \"POMDP\" agent. POMDP agents have an `act` method which maps *beliefs* (rather than *states*) to distributions on actions. They also have an `updateBelief` method, mapping beliefs and observations to an updated belief.\n\nThis example uses the optimal POMDP agent. To construct a POMDP agent, we need to specify the agent's starting belief distribution on states. Here we assume the agent has a uniform distribution on whether or not there is \"treasure\" at location 3.\n\n~~~~\n///fold:\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n\nvar makeLinePOMDP = function() {\n\n var beliefToActions = function(belief){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.loc + action;\n var stateNewLoc = extend(state, {loc: newLoc});\n return advanceStateTime(stateNewLoc);\n };\n\n var observe = function(state) {\n if (state.loc == 3) {\n return state.treasureAt3 ? 'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { beliefToActions, transition, observe };\n\n};\n///\n\nvar utility = function(state, action) {\n if (state.loc==3 && state.treasureAt3){ return 5; }\n if (state.loc==0){ return 1; }\n return 0;\n};\n\nvar trueStartState = {\n timeLeft: 7,\n terminateAfterAction: false,\n loc: 0,\n treasureAt3: false\n};\n\nvar alternativeStartState = extend(trueStartState, {treasureAt3: true});\nvar possibleStates = [trueStartState, alternativeStartState];\n\nvar priorBelief = Categorical({\n vs: possibleStates,\n ps: [.5, .5]\n});\n\nvar params = {\n alpha: 1000,\n utility,\n priorBelief,\n optimal: true\n};\n\nvar world = makeLinePOMDP();\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(trueStartState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\nIn POMDPs the agent does not directly observe their current state. However, in the Line POMDP (above) the \"location\" part of the agent's state is always known by the agent. The part of the state that is unknown is whether `treasureAt3` is true. So we could factor the state into attributes that are always known (\"manifest\") and parts that are not (\"latent\"). This factoring of the state can speed up the POMDP agent's belief-updating and is used for the POMDP environments in the library. 
The following codebox shows a factored version of the Line POMDP:\n\n~~~~\n///fold:\nvar advanceStateTime = function(state) {\n var newTimeLeft = state.timeLeft - 1;\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: newTimeLeft > 1 ? state.terminateAfterAction : true\n });\n};\n///\n\nvar makeLinePOMDP = function() {\n var manifestStateToActions = function(manifestState){\n return [-1, 0, 1];\n };\n\n var transition = function(state, action) {\n var newLoc = state.manifestState.loc + action;\n var manifestStateNewLoc = extend(state.manifestState,{loc: newLoc});\n var newManifestState = advanceStateTime(manifestStateNewLoc);\n return {\n manifestState: newManifestState,\n latentState: state.latentState\n };\n };\n\n var observe = function(state) {\n if (state.manifestState.loc == 3){\n return state.latentState.treasureAt3 ? 'treasure' : 'no treasure';\n }\n return 'noObservation';\n };\n\n return { manifestStateToActions, transition, observe};\n};\n\n\nvar utility = function(state, action) {\n if (state.manifestState.loc==3 && state.latentState.treasureAt3){ return 5; }\n if (state.manifestState.loc==0){ return 1; }\n return 0;\n};\n\nvar trueStartState = {\n manifestState: {\n timeLeft: 7,\n terminateAfterAction: false,\n loc: 0\n },\n latentState: {\n treasureAt3: false\n }\n};\n\nvar alternativeStartState = extend(trueStartState, {\n latentState: { treasureAt3: true }\n});\nvar possibleStates = [trueStartState, alternativeStartState];\n\nvar priorBelief = Categorical({\n vs: possibleStates,\n ps: [.5, .5]\n});\n\nvar params = {\n alpha: 1000,\n utility,\n priorBelief,\n optimal: true\n};\n\nvar world = makeLinePOMDP();\nvar agent = makePOMDPAgent(params, world);\nvar trajectory = simulatePOMDP(trueStartState, world, agent, 'states');\nprint(trajectory);\n~~~~\n\n\n\n---------\n\n\n### Footnotes\n", "date_published": "2016-12-04T12:14:18Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "8-guide-library.md"} +{"id": "f8263956785bbfdfbafe1c3994d28c97", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/2-webppl.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Probabilistic programming in WebPPL\"\ndescription: \"WebPPL is a functional subset of Javascript with automatic Bayesian inference via MCMC and gradient-based variational inference.\"\nis_section: true\n---\n\n## Introduction\n\nThis chapter introduces the probabilistic programming language WebPPL (pronounced \"web people\"). The models for agents in this tutorial are all implemented in WebPPL and so it's important to understand how the language works.\n\nWe begin with a quick overview of probabilistic programming. If you are new to probabilistic programming, you might want to read an informal introduction (e.g. [here](http://www.pl-enthusiast.net/2014/09/08/probabilistic-programming/) or [here](https://moalquraishi.wordpress.com/2015/03/29/the-state-of-probabilistic-programming/)) or a more technical [survey](https://scholar.google.com/scholar?cluster=16211748064980449900&hl=en&as_sdt=0,5). For a practical introduction to both probabilistic programming and Bayesian modeling, we highly recommend [ProbMods](http://probmods.org), which also uses the WebPPL language. \n\nThe only requirement to run the code for this tutorial is a modern browser (e.g. Chrome, Firefox, Safari). 
If you want to explore the models in detail and to create your own, we recommend running WebPPL from the command line. Installation is simple and is explained [here](http://webppl.org).\n\n\n## WebPPL: a purely functional subset of Javascript\n\nWebPPL includes a subset of Javascript, and follows the syntax of Javascript for this subset.\n\nThis example program uses most of the Javascript syntax that is available in WebPPL:\n\n~~~~\n// Define a function using two external primitives:\n// 1. Javascript's `JSON.stringify` for converting to strings\n// 2. Underscore's _.isFinite for checking if a value is a finite number\nvar coerceToPositiveNumber = function(x) {\n  if (_.isFinite(x) && x > 0) {\n    return x;\n  } else {\n    print('- Input ' + JSON.stringify(x) +\n          ' was not a positive number, returning 1 instead');\n    return 1;\n  }\n};\n\n// Create an array with numbers, an object, and a Boolean\nvar inputs = [2, 3.5, -1, { key: 1 }, true];\n\n// Map the function over the array\nprint('Processing elements in array ' + JSON.stringify(inputs) + '...');\nvar result = map(coerceToPositiveNumber, inputs);\nprint('Result: ' + JSON.stringify(result));\n~~~~\n\nLanguage features with side effects are not allowed in WebPPL. The code that has been commented out uses assignment to update a table. This produces an error in WebPPL.\n\n~~~~\n// Don't do this:\n\n// var table = {};\n// table.key = 1;\n// table.key = table.key + 1;\n// => Syntax error: You tried to assign to a field of table, but you can\n// only assign to fields of globalStore\n\n\n// Instead do this:\n\nvar table = { key: 1 };\nvar tableTwo = { key: table.key + 1 };\nprint(tableTwo);\n\n// Or use the library function `extend`:\n\nvar tableThree = extend(tableTwo, { key: 3 })\nprint(tableThree);\n~~~~\n\nThere are no `for` or `while` loops. Instead, use higher-order functions like WebPPL's built-in `map`, `filter` and `zip`:\n\n~~~~\nvar xs = [1, 2, 3];\n\n// Don't do this:\n\n// for (var i = 0; i < xs.length; i++){\n//   print(xs[i]);\n// }\n\n\n// Instead of for-loop, use `map`:\nmap(print, xs);\n\n\"Done!\"\n~~~~\n\nIt is possible to use normal Javascript functions (which make *internal* use of side effects) in WebPPL. See the [online book](http://dippl.org/chapters/02-webppl.html) on the implementation of WebPPL for details (section \"Using Javascript Libraries\").\n\n\n## WebPPL stochastic primitives\n\n### Sampling from random variables\n\nWebPPL has a large [library](http://docs.webppl.org/en/master/distributions.html) of primitive probability distributions. Try clicking \"Run\" repeatedly to get different i.i.d. random samples:\n\n~~~~\nprint('Fair coins (Bernoulli distribution):');\nprint([flip(0.5), flip(0.5), flip(0.5)]);\n\nprint('Biased coins (Bernoulli distribution):');\nprint([flip(0.9), flip(0.9), flip(0.9)]);\n\nvar coinWithSide = function(){\n  return categorical([.45, .45, .1], ['heads', 'tails', 'side']);\n};\n\nprint('Coins that can land on their edge:')\nprint(repeat(5, coinWithSide)); // draw 5 i.i.d. samples\n~~~~\n\nThere are also continuous random variables:\n\n~~~~\nprint('Two samples from standard Gaussian in 1D: ');\nprint([gaussian(0, 1), gaussian(0, 1)]);\n\nprint('A single sample from a 2D Gaussian: ');\nprint(multivariateGaussian(Vector([0, 0]), Matrix([[1, 0], [0, 10]])));\n~~~~\n\nYou can write your own functions to sample from more complex distributions. This example uses recursion to define a sampler for the Geometric distribution:\n\n~~~~\nvar geometric = function(p) {\n  return flip(p) ? 
1 + geometric(p) : 1\n};\n\ngeometric(0.8);\n~~~~\n\nWhat makes WebPPL different from conventional programming languages is its ability to perform *inference* operations using these primitive probability distributions. Distribution objects in WebPPL have two key features:\n\n1. You can draw *random i.i.d. samples* from a distribution using the special function `sample`. That is, you sample $$x \\sim P$$ where $$P(x)$$ is the distribution.\n\n2. You can compute the probability (or density) the distribution assigns to a value. That is, to compute $$\\log(P(x))$$, you use `dist.score(x)`, where `dist` is the distribution in WebPPL.\n\nThe functions above that generate random samples are defined in the WebPPL library in terms of primitive distributions (e.g. `Bernoulli` for `flip` and `Gaussian` for `gaussian`) and the built-in function `sample`:\n\n~~~~\nvar flip = function(p) {\n var p = (p !== undefined) ? p : 0.5;\n return sample(Bernoulli({ p }));\n};\n\nvar gaussian = function(mu, sigma) {\n return sample(Gaussian({ mu, sigma }));\n};\n\n[flip(), gaussian(1, 1)];\n~~~~\n\nTo create a new distribution, we pass a (potentially stochastic) function with no arguments---a *thunk*---to the function `Infer` that performs *marginalization*. For example, we can use `flip` as an ingredient to construct a Binomial distribution using enumeration:\n\n~~~~\nvar binomial = function() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n return a + b + c;\n};\n\nvar MyBinomial = Infer({ model: binomial });\n\n[sample(MyBinomial), sample(MyBinomial), sample(MyBinomial)];\n~~~~\n\n`Infer` is the *inference operator* that computes (or estimates) the marginal probability of each possible output of the function `binomial`. If no explicit inference method is specified, `Infer` defaults to enumerating each possible value of each random variable in the function body.\n\n### Bayesian inference by conditioning\n\nThe most important use of inference methods is for Bayesian inference. Here, our task is to *infer* the value of some unknown parameter by observing data that depends on the parameter. For example, if flipping three separate coins produce exactly two Heads, what is the probability that the first coin landed Heads? To solve this in WebPPL, we can use `Infer` to enumerate all values for the random variables `a`, `b` and `c`. We use `condition` to constrain the sum of the variables. The result is a distribution representing the posterior distribution on the first variable `a` having value `true` (i.e. \"Heads\").\n\n~~~~\nvar twoHeads = Infer({\n model() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n condition(a + b + c === 2);\n return a;\n }\n});\n\nprint('Probability of first coin being Heads (given exactly two Heads) : ');\nprint(Math.exp(twoHeads.score(true)));\n\nvar moreThanTwoHeads = Infer({\n model() {\n var a = flip(0.5);\n var b = flip(0.5);\n var c = flip(0.5);\n condition(a + b + c >= 2);\n return a;\n }\n});\n\nprint('\\Probability of first coin being Heads (given at least two Heads): ');\nprint(Math.exp(moreThanTwoHeads.score(true)));\n~~~~\n\n### Codeboxes and Plotting\n\nThe codeboxes allow you to modify our examples and to write your own WebPPL code. Code is not shared between boxes. You can use the special function `viz` to plot distributions:\n\n~~~~\nvar appleOrangeDist = Infer({\n model() {\n return flip(0.9) ? 
'apple' : 'orange';\n }\n});\n\nviz(appleOrangeDist);\n~~~~\n\n~~~~\nvar fruitTasteDist = Infer({\n model() {\n return {\n fruit: categorical([0.3, 0.3, 0.4], ['apple', 'banana', 'orange']),\n tasty: flip(0.7)\n };\n }\n});\n\nviz(fruitTasteDist);\n~~~~\n\n~~~~\nvar positionDist = Infer({\n model() {\n return {\n X: gaussian(0, 1),\n Y: gaussian(0, 1)};\n },\n method: 'forward',\n samples: 1000\n});\n\nviz(positionDist);\n~~~~\n\n### Next\n\nIn the [next chapter](/chapters/3-agents-as-programs.html), we will implement rational decision-making using inference functions.\n", "date_published": "2017-03-19T18:54:16Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "2-webppl.md"} +{"id": "d62bdfd1eaa50efa1f628ecdc10da0df", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/4-reasoning-about-agents.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Reasoning about agents\ndescription: Overview of Inverse Reinforcement Learning. Inferring utilities and beliefs from choices in Gridworld and Bandits.\nis_section: true\n---\n\n\n## Introduction\nThe previous chapters have shown how to compute optimal actions for agents in MDPs and POMDPs. In many practical applications, this is the goal. For example, when controlling a robot, the goal is for the robot to act optimally given its utility function. When playing the stock market or poker, the goal is to make money, and one might use an approach based on the POMDP agent model from the [previous chapter](/chapters/3c-pomdp).\n\nIn other settings, however, the goal is to *learn* or *reason about* an agent based on their behavior. For example, in social science or psychology, researchers often seek to learn about people's preferences (denoted $$U$$) and beliefs (denoted $$b$$). The relevant *data* (denoted $$\{a_i\}$$) are usually observations of human actions. In this situation, models of optimal action can be used as *generative models* of human actions. The generative model predicts the behavior *given* preferences and beliefs. That is:\n\n$$\nP( \{a_i\} \vert U, b) =: \text{Generative model of optimal action}\n$$\n\nStatistical inference infers the preferences $$U$$ and beliefs $$b$$ *given* the observed actions $$\{a_i\}$$. That is:\n\n$$\nP( U, b \vert \{a_i\}) =: \text{Invert generative model via statistical inference}\n$$\n\nThis approach, using generative models of sequential decision making, has been used to learn preferences and beliefs about education, work, health, and many other topics[^generative].\n\n[^generative]: The approach in economics closest to the one we outline here (with models of action based on sequential decision making) is called \"Structural Estimation\". Some particular examples are reft:aguirregabiria2010dynamic and reft:darden2010smoking. A related piece of work in AI or computational social science is reft:ermon2014learning.\n\nAgent models are also used as generative models in Machine Learning, under the label \"Inverse Reinforcement Learning\" (IRL). One motivation for learning human preferences and beliefs is to give humans helpful recommendations (e.g. for products they are likely to enjoy). A different goal is to build systems that mimic human expert performance. For some tasks, it is hard for humans to directly specify a utility/reward function that is both correct and that can be tractably optimized. 
An alternative is to *learn* the human's utility function by watching them perform the task. Once learned, the system can use standard RL techniques to optimize the function. This has been applied to building systems to park cars, to fly helicopters, to control human-like bots in videogames, and to play table tennis[^inverse].\n\n[^inverse]: The relevant papers on applications of IRL: parking cars in reft:abbeel2008apprenticeship, flying helicopters in reft:abbeel2010autonomous, controlling videogame bots in reft:lee2010learning, and table tennis in reft:muelling2014learning.\n\nThis chapter provides an array of illustrative examples of learning about agents from their actions. We begin with a concrete example and then provide a general formalization of the inference problem. A virtue of using WebPPL is that doing inference over our existing agent models requires very little extra code.\n\n\n## Learning about an agent from their actions: motivating example\n\nConsider the MDP version of Bob's Restaurant Choice problem. Bob is choosing between restaurants, all restaurants are open (and Bob knows this), and Bob also knows the street layout. Previously, we discussed how to compute optimal behavior *given* Bob's utility function over restaurants. Now we infer Bob's utility function *given* observations of the behavior in the codebox:\n\n~~~~\n///fold: restaurant constants, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar world = makeGridWorldMDP({ grid, start: [3, 1] }).world;\n\nviz.gridworld(world, { trajectory: donutSouthTrajectory });\n~~~~\n\nFrom Bob's actions, we infer that he probably prefers the Donut Store to the other restaurants. An alternative explanation is that Bob cares most about saving time. He might prefer Veg (the Vegetarian Cafe) but his preference is not strong enough to spend extra time getting there.\n\nIn this first example of inference, Bob's preference for saving time is held fixed and we infer (given the actions shown above) Bob's preference for the different restaurants. We model Bob using the MDP agent model from [Chapter 3.1](/chapters/3a-mdp.html). We place a uniform prior over three possible utility functions for Bob: one favoring Donut, one favoring Veg and one favoring Noodle. We compute a Bayesian posterior over these utility functions *given* Bob's observed behavior. 
Since the world is deterministic and the agent's actions are nearly deterministic (softmax parameter $$\alpha$$ set high), we just compare Bob's predicted states under each utility function to the states actually observed. To predict Bob's states for each utility function, we use the function `simulate` from [Chapter 3.1](/chapters/3a-mdp.html).\n\n~~~~\n///fold: restaurant constants, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\nvar makeUtilityFunction = mdp.makeUtilityFunction;\nvar world = mdp.world;\n\nvar startState = donutSouthTrajectory[0][0];\n\nvar utilityTablePrior = function() {\n var baseUtilityTable = {\n 'Donut S': 1,\n 'Donut N': 1,\n 'Veg': 1,\n 'Noodle': 1,\n 'timeCost': -0.04\n };\n return uniformDraw(\n [{ table: extend(baseUtilityTable, { 'Donut N': 2, 'Donut S': 2 }),\n favourite: 'donut' },\n { table: extend(baseUtilityTable, { Veg: 2 }),\n favourite: 'veg' },\n { table: extend(baseUtilityTable, { Noodle: 2 }),\n favourite: 'noodle' }]\n );\n};\n\nvar posterior = Infer({ model() {\n var utilityTableAndFavourite = utilityTablePrior();\n var utilityTable = utilityTableAndFavourite.table;\n var favourite = utilityTableAndFavourite.favourite;\n\n var utility = makeUtilityFunction(utilityTable);\n var params = {\n utility,\n alpha: 2\n };\n var agent = makeMDPAgent(params, world);\n\n var predictedStateAction = simulateMDP(startState, world, agent, 'stateAction');\n condition(_.isEqual(donutSouthTrajectory, predictedStateAction));\n return { favourite };\n}});\n\nviz(posterior);\n~~~~\n\n## Learning about an agent from their actions: formalization\n\nWe will now formalize the kind of inference in the previous example. We begin by considering inference over the utilities and softmax noise parameter for an MDP agent. Later on we'll generalize to POMDP agents and to other agents.\n\nFollowing [Chapter 3.1](/chapters/3a-mdp.html) the MDP agent is defined by a utility function $$U$$ and softmax parameter $$\alpha$$. In order to do inference, we need to know the agent's starting state $$s_0$$ (which might include both their *location* and their *time horizon* $$N$$). The data we condition on is a sequence of state-action pairs:\n\n$$\n(s_0, a_0), (s_1, a_1), \ldots, (s_n, a_n)\n$$\n\nThe index for the final timestep is less than or equal to the time horizon: $$n \leq N$$. We abbreviate this sequence as $$(s,a)_{0:n}$$. 
The joint posterior on the agent's utilities and noise given the observed state-action sequence is:\n\n$$\nP(U,\\alpha | (s,a)_{0:n}) \\propto P( {(s,a)}_{0:n} | U, \\alpha) P(U, \\alpha)\n$$\n\nwhere the likelihood function $$P( {(s,a)}_{0:n} \\vert U, \\alpha )$$ is the MDP agent model (for simplicity we omit information about the starting state). Due to the Markov Assumption for MDPs, the probability of an agent's action in a state is independent of the agent's previous or later actions (given $$U$$ and $$\\alpha$$). This allows us to rewrite the posterior as **Equation (1)**:\n\n$$\nP(U,\\alpha | (s,a)_{0:n}) \\propto P(U, \\alpha) \\prod_{i=0}^n P( a_i | s_i, U, \\alpha)\n$$\n\n\nThe term $$P( a_i \\vert s_i, U, \\alpha)$$ can be rewritten as the softmax choice function (which corresponds to the function `act` in our MDP agent models). This equation holds for the case where we observe a sequence of actions from timestep $$0$$ to $$n \\leq N$$ (with no gaps). This tutorial focuses mostly on this case. It is trivial to extend the equation to observing multiple independently drawn such sequences (as we show below). However, if there are gaps in the sequence or if we observe only the agent's states (not the actions), then we need to marginalize over actions that were unobserved.\n\n\n## Examples of learning about agents in MDPs\n\n### Example: Inference from part of a sequence of actions\n\nThe expression for the joint posterior (above) shows that it is straightforward to do inference on a part of an agent's action sequence. For example, if we know an agent had a time horizon $$N=11$$, we can do inference from only the agent's first few actions.\n\nFor this example we condition on the agent making a single step from $$[3,1]$$ to $$[2,1]$$ by moving left. For an agent with low noise, this already provides very strong evidence about the agent's preferences -- not much is added by seeing the agent go all the way to Donut South.\n\n\n~~~~\n///fold: restaurant constants\nvar ___ = ' ';\nvar DN = { name: 'Donut N' };\nvar DS = { name: 'Donut S' };\nvar V = { name: 'Veg' };\nvar N = { name: 'Noodle' };\n///\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar world = makeGridWorldMDP({ grid }).world;\n\nvar trajectory = [\n {\n loc: [3, 1],\n timeLeft: 11,\n terminateAfterAction: false\n },\n {\n loc: [2, 1],\n timeLeft: 10,\n terminateAfterAction: false\n }\n];\n\nviz.gridworld(world, { trajectory });\n~~~~\n\nOur approach to inference is slightly different than in the example at the start of this chapter. The approach is a direct translation of the expression for the posterior in Equation (1) above. For each observed state-action pair, we compute the likelihood of the agent (with given $$U$$) choosing that action in the state. 
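Concretely, each factor $$P(a_i \vert s_i, U, \alpha)$$ in Equation (1) becomes a single `observe` statement on the agent's softmax action distribution. The following is a minimal sketch of this pattern, not runnable on its own: `act` and `observedTrajectory` are defined as in the full codebox below.

~~~~
// Sketch only: `act` is the agent's (softmax) action distribution and
// `observedTrajectory` is the list of observed [state, action] pairs,
// both defined in the codebox below. This fragment runs inside `Infer`.
map(
  function(stateAction) {
    var state = stateAction[0];
    var action = stateAction[1];
    // observe(dist, val) weights the execution by the probability of `val` under `dist`
    observe(act(state), action);
  },
  observedTrajectory);
~~~~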
In contrast, the simple approach above becomes intractable for long, noisy action sequences -- as it will need to loop over all possible sequences.\n\n\n~~~~\n///fold: create restaurant choice MDP\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\nvar utilityTablePrior = function(){\n var baseUtilityTable = {\n 'Donut S': 1,\n 'Donut N': 1,\n 'Veg': 1,\n 'Noodle': 1,\n 'timeCost': -0.04\n };\n return uniformDraw(\n [{ table: extend(baseUtilityTable, { 'Donut N': 2, 'Donut S': 2 }),\n favourite: 'donut' },\n { table: extend(baseUtilityTable, { 'Veg': 2 }),\n favourite: 'veg' },\n { table: extend(baseUtilityTable, { 'Noodle': 2 }),\n favourite: 'noodle' }]\n );\n};\n\nvar observedTrajectory = [[{\n loc: [3, 1],\n timeLeft: 11,\n terminateAfterAction: false\n}, 'l']];\n\nvar posterior = Infer({ model() {\n var utilityTableAndFavourite = utilityTablePrior();\n var utilityTable = utilityTableAndFavourite.table;\n var utility = makeUtilityFunction(utilityTable);\n var favourite = utilityTableAndFavourite.favourite;\n\n var agent = makeMDPAgent({ utility, alpha: 2 }, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1];\n observe(act(state), action);\n },\n observedTrajectory);\n\n return { favourite };\n}});\n\nviz(posterior);\n~~~~\n\nNote that utility functions where Veg or Noodle are most preferred have almost the same posterior probability. Since they had the same prior, this means that we haven't received evidence about which the agent prefers. Moreover, assuming the agent's `timeCost` is negligible, then no matter where the agent above starts out on the grid, they choose Donut North or South. So we never get any information about whether they prefer the Vegetarian Cafe or Noodle Shop!\n\nActually, this is not quite right. If we wait long enough, the agent's softmax noise would eventually reveal information about which was preferred. However, we still won't be able to *efficiently* learn the agent's preferences by repeatedly watching them choose from a random start point. If there is no softmax noise, then we can make the stronger claim that even in the limit of arbitrarily many repeated i.i.d. observations, the agent's preferences are not *identified* by draws from this space of scenarios.\n\nUnidentifiability is a frequent problem when inferring an agent's beliefs or utilities from realistic datasets. First, agents with low noise reliably avoid inferior states (as in the present example) and so their actions provide little information about the relative utilities among the inferior states. Second, using richer agent models means there are more possible explanations of the same behavior. For example, agents with high softmax noise or with false beliefs might go to a restaurant even if they don't prefer it. 
One general approach to the problem of unidentifiability in IRL is **active learning**. Instead of passively observing the agent's actions, you select a sequence of environments that will be maximally informative about the agent's preferences. For recent work covering both the nature of unidentifiability in IRL as well as the active learning approach, see reft:amin2016towards.\n\n### Example: Inferring The Cost of Time and Softmax Noise\n\nThe previous examples assumed that the agent's `timeCost` (the negative utility of each timestep before the agent reaches a restaurant) and the softmax $$\\alpha$$ were known. We can modify the above example to include them in inference.\n\n~~~~\n// infer_utilities_timeCost_softmax_noise\n///fold: create restaurant choice MDP, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar vegDirectTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"u\"],\n [{\"loc\":[3,6],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5]},\"r\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[3,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":4,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n\n// Priors\n\nvar utilityTablePrior = function() {\n var foodValues = [0, 1, 2];\n var timeCostValues = [-0.1, -0.3, -0.6];\n var donut = uniformDraw(foodValues);\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': uniformDraw(foodValues),\n 'Noodle': uniformDraw(foodValues),\n 'timeCost': uniformDraw(timeCostValues)\n };\n};\n\nvar alphaPrior = function(){\n return uniformDraw([.1, 1, 10, 100]);\n};\n\n\n// Condition on observed trajectory\n\nvar posterior = function(observedTrajectory){\n return Infer({ model() {\n var utilityTable = utilityTablePrior();\n var 
alpha = alphaPrior();\n var params = {\n utility: makeUtilityFunction(utilityTable),\n alpha\n };\n var agent = makeMDPAgent(params, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1]\n observe(act(state), action);\n },\n observedTrajectory);\n\n // Compute whether Donut is preferred to Veg and Noodle\n var donut = utilityTable['Donut N'];\n var donutFavorite = (\n donut > utilityTable.Veg &&\n donut > utilityTable.Noodle);\n\n return {\n donutFavorite,\n alpha: alpha.toString(),\n timeCost: utilityTable.timeCost.toString()\n };\n }});\n};\n\nprint('Prior:');\nvar prior = posterior([]);\nviz.marginals(prior);\n\nprint('Conditioning on one action:');\nvar posterior = posterior(donutSouthTrajectory.slice(0, 1));\nviz.marginals(posterior);\n~~~~\n\n\n\nThe posterior shows that taking a step towards Donut South can now be explained in terms of a high `timeCost`. If the agent has a low value for $$\\alpha$$, this step to the left is fairly likely even if the agent prefers Noodle or Veg. So including softmax noise in the inference makes inferences about other parameters closer to the prior.\n\n>**Exercise:** Suppose the agent is observed going all the way to Veg. What would the posteriors on $$\\alpha$$ and `timeCost` look like? Check your answer by conditioning on the state-action sequence `vegDirectTrajectory`. You will need to modify other parts of the codebox above to make this work.\n\nAs we noted previously, it is simple to extend our approach to inference to conditioning on multiple sequences of actions. Consider the two sequences below:\n\n\n~~~~\n///fold: make restaurant choice MDP, naiveTrajectory, donutSouthTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n 
[{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar world = mdp.world;;\n\nmap(function(trajectory) { viz.gridworld(world, { trajectory }); },\n [naiveTrajectory, donutSouthTrajectory]);\n~~~~\n\nTo perform inference, we just condition on both sequences. (We use concatenation but we could have taken the union of all state-action pairs).\n\n\n~~~~\n///fold: World and agent are exactly as above\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar donutSouthTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"l\"],\n [{\"loc\":[2,1],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"l\"],\n [{\"loc\":[1,1],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[2,1]},\"l\"],\n [{\"loc\":[0,1],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[1,1]},\"d\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[0,1],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[0,0],\"terminateAfterAction\":true,\"timeLeft\":7,\"previousLoc\":[0,0],\"timeAtRestaurant\":1},\"l\"]\n];\n\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar world = mdp.world;\nvar makeUtilityFunction = mdp.makeUtilityFunction;\n\n\n// Priors\n\nvar utilityTablePrior = function() {\n var foodValues = [0, 1, 2];\n var timeCostValues = [-0.1, -0.3, -0.6];\n var donut = uniformDraw(foodValues);\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': uniformDraw(foodValues),\n 'Noodle': uniformDraw(foodValues),\n 'timeCost': uniformDraw(timeCostValues)\n };\n};\n\nvar alphaPrior = function(){\n return uniformDraw([.1, 1, 10, 100]);\n};\n\n\n// Condition on observed trajectory\n\nvar posterior = function(observedTrajectory){\n return Infer({ model() {\n var utilityTable = utilityTablePrior();\n var alpha = alphaPrior();\n var params = {\n utility: makeUtilityFunction(utilityTable),\n alpha\n };\n var agent = makeMDPAgent(params, world);\n var act = agent.act;\n\n // For each observed state-action pair, factor on likelihood of action\n map(\n function(stateAction){\n var state = stateAction[0];\n var action = stateAction[1]\n observe(act(state), action);\n },\n observedTrajectory);\n\n // 
Compute whether Donut is preferred to Veg and Noodle\n var donut = utilityTable['Donut N'];\n var donutFavorite = (\n donut > utilityTable.Veg &&\n donut > utilityTable.Noodle);\n\n return {\n donutFavorite,\n alpha: alpha.toString(),\n timeCost: utilityTable.timeCost.toString()\n };\n }});\n};\n\n///\nprint('Prior:');\nvar prior = posterior([]);\nviz.marginals(prior);\n\nprint('Posterior:');\nvar posterior = posterior(naiveTrajectory.concat(donutSouthTrajectory));\nviz.marginals(posterior);\n~~~~\n\n\n\n## Learning about agents in POMDPs\n\n### Formalization\n\nWe can extend our approach to inference to deal with agents that solve POMDPs. One approach to inference is simply to generate full state-action sequences and compare them to the observed data. As we mentioned above, this approach becomes intractable in cases where noise (in transitions and actions) is high and sequences are long.\n\nInstead, we extend the approach in Equation (1) above. The first thing to notice is that Equation (1) has to be amended for POMDPs. In an MDP, actions are conditionally independent given the agent's parameters $$U$$ and $$\alpha$$ and the state. For any pair of actions $$a_{i}$$ and $$a_j$$ and state $$s_i$$:\n\n$$\nP(a_i \vert a_j, s_i, U,\alpha) = P(a_i \vert s_i, U,\alpha)\n$$\n\nIn a POMDP, actions are only rendered conditionally independent if we also condition on the agent's *belief*. So Equation (1) can only be extended to the case where we know the agent's belief at each timestep. This will be realistic in some applications and not others. It depends on whether the agent's *observations* are part of the data that is conditioned on. If so, the agent's belief can be computed at each timestep (assuming the agent's initial belief is known). If not, we have to marginalize over the possible observations, making for a more complex inference computation.\n\nHere is the extension of Equation (1) to the POMDP case, where we assume access to the agent's observations. Our goal is to compute a posterior on the parameters of the agent. These include $$U$$ and $$\alpha$$ as before but also the agent's initial belief $$b_0$$.\n\nWe observe a sequence of state-observation-action triples:\n\n$$\n(s_0,o_0,a_0), (s_1,o_1,a_1), \ldots, (s_n,o_n,a_n)\n$$\n\nThe index for the final timestep is at most the time horizon: $$n \leq N$$. The joint posterior on the agent's utilities, noise and initial belief given the observed sequence is:\n\n$$\nP(U,\alpha, b_0 | (s,o,a)_{0:n}) \propto P( (s,o,a)_{0:n} | U, \alpha, b_0)P(U, \alpha, b_0)\n$$\n\nTo produce a factorized form of this posterior analogous to Equation (1), we compute the sequence of agent beliefs. This is given by the recursive Bayesian belief update described in [Chapter 3.3](/chapters/3c-pomdp):\n\n$$\nb_i = b_{i-1} \vert s_i, o_i, a_{i-1}\n$$\n\n$$\nb_i(s_i) \propto\nO(s_i,a_{i-1},o_i)\n\sum_{s_{i-1} \in S} { T(s_{i-1}, a_{i-1}, s_i) b_{i-1}(s_{i-1})}\n$$\n\nThe posterior can thus be written as **Equation (2)**: \n\n$$\nP(U, \alpha, b_0 | (s,o,a)_{0:n}) \propto P(U, \alpha, b_0) \prod_{i=0}^n P( a_i | s_i, b_i, U, \alpha)\n$$\n\n\n### Application: Bandits\n\nTo learn the preferences and beliefs of a POMDP agent we translate Equation (2) into WebPPL. In a later [chapter](/chapters/5e-joint-inference.html), we apply this to the Restaurant Choice problem. 
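For reference, the recursive belief update for $$b_i$$ defined above is itself a single marginalization. The following is a schematic sketch only: `transitionDist` and `observationDist` are hypothetical stand-ins for the POMDP's $$T$$ and $$O$$, and the actual code below instead calls the agent's own `updateBelief` function from [Chapter 3.3](/chapters/3c-pomdp).

~~~~
// Schematic sketch of one belief-update step (hypothetical helpers):
// `transitionDist(s, a)` stands in for T and `observationDist(s, a)` for O.
var updateBelief = function(belief, observation, action) {
  return Infer({ model() {
    var previousState = sample(belief);                         // s_{i-1} ~ b_{i-1}
    var state = sample(transitionDist(previousState, action));  // s_i ~ T(s_{i-1}, a_{i-1}, .)
    observe(observationDist(state, action), observation);       // weight by O(s_i, a_{i-1}, o_i)
    return state;
  }});
};
~~~~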
Here we focus on the Bandit problems introduced in the [previous chapter](/chapters/3c-pomdp).\n\nIn the Bandit problems there is an unknown mapping from arms to non-numeric prizes (or distributions on such prizes) and the agent has preferences over these prizes. The agent tries out arms to discover the mapping and exploits the most promising arms. In the *inverse* problem, we get to observe the agent's actions. Unlike the agent, we already know the mapping from arms to prizes. However, we don't know the agent's preferences or the agent's prior about the mapping[^bandit].\n\n[^bandit]: If we did not know the mapping from arms to prizes, the inference problem would not change fundamentally. We get information about this mapping by observing the prizes the agent receives when pulling different arms.\n\nOften the agent's choices admit of multiple explanations. Recall the deterministic example in the previous chapter when (according to the agent's belief) `arm0` had the prize \"chocolate\" and `arm1` had either \"champagne\" or \"nothing\" (see also Figure 2 below). Suppose we observe the agent choosing `arm0` on the first of five trials. If we don't know the agent's utilities or beliefs, then this choice could be explained by either:\n\n(1). the agent's preference for chocolate over champagne, or\n\n(2). the agent's belief that `arm1` is very likely (e.g. 95%) to yield the \"nothing\" prize deterministically\n\nGiven this choice by the agent, we won't be able to identify which of (1) and (2) is true because exploration becomes less valuable every trial (and there are only 5 trials in total).\n\nThe codeboxes below implement this example. The translation of Equation (2) is in the function `factorSequence`. This function iterates through the observed state-observation-action triples, updating the agent's belief at each timestep. It interleaves conditioning on an action (via `factor`) with computing the sequence of belief functions $$b_i$$. The variable names correspond as follows:\n\n- $$b_0$$ is `initialBelief` (an argument to `factorSequence`)\n\n- $$s_i$$ is `state`\n\n- $$b_i$$ is `nextBelief`\n\n- $$a_i$$ is `observedAction`\n\n~~~~\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n~~~~\n\nWe start with a very simple example. 
The agent is observed pulling `arm1` five times. The agent's prior is known and assigns equal weight to `arm1` yielding \"champagne\" and to it yielding \"nothing\". The true prize for `arm1` is \"champagne\" (see Figure 1).\n\n\"diagram\"\n\n> **Figure 1:** Bandit problem where agent's prior is known. (The true state has the bold outline).\n\nFrom the observation, it's obvious that the agent prefers champagne. This is what we infer below:\n\n~~~~\n///fold: inferBeliefsAndPreferences, getMarginal\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n\nvar getMarginal = function(dist, key){\n return Infer({ model() {\n return sample(dist)[key];\n }});\n};\n///\n// true prizes for arms\nvar trueArmToPrizeDist = {\n 0: Delta({ v: 'chocolate' }),\n 1: Delta({ v: 'champagne' })\n};\nvar bandit = makeBanditPOMDP({\n armToPrizeDist: trueArmToPrizeDist,\n numberOfArms: 2,\n numberOfTrials: 5\n});\n\n// simpleAgent always pulls arm 1\nvar simpleAgent = makePOMDPAgent({\n act: function(belief){\n return Infer({ model() { return 1; }});\n },\n updateBelief: function(belief){ return belief; },\n params: { priorBelief: Delta({ v: bandit.startState }) }\n}, bandit.world);\n\nvar observedSequence = simulatePOMDP(bandit.startState, bandit.world, simpleAgent,\n 'stateObservationAction');\n\n// Priors for inference\n\n// We know agent's prior, which is that either arm1 yields\n// nothing or it yields champagne.\nvar priorInitialBelief = Delta({ v: Infer({ model() {\n var armToPrizeDist = uniformDraw([\n trueArmToPrizeDist,\n extend(trueArmToPrizeDist, { 1: Delta({ v: 'nothing' }) })]);\n return makeBanditStartState(5, armToPrizeDist);\n}})});\n\n// Agent either prefers chocolate or champagne.\nvar likesChampagne = {\n nothing: 0,\n champagne: 5,\n chocolate: 3\n};\nvar likesChocolate = {\n nothing: 0,\n champagne: 3,\n chocolate: 5\n};\nvar priorPrizeToUtility = Categorical({\n vs: [likesChampagne, likesChocolate],\n ps: [0.5, 0.5]\n});\nvar baseParams = { alpha: 1000 };\nvar posterior = inferBeliefsAndPreferences(baseParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence);\n\nprint(\"After observing agent choose arm1, what are agent's utilities?\");\nprint('Posterior on agent utilities:');\nviz.table(getMarginal(posterior, 
'prizeToUtility'));\n~~~~\n\n\nIn the codebox above, the agent's preferences are identified by the observations. This won't hold for the next example, which we introduced previously. The agent's utilities for prizes are still unknown and now the agent's prior is also unknown. Either the agent is \"informed\" and knows the truth that `arm1` yields \"champagne\". Or the agent is misinformed and believes `arm1` is likely to yield \"nothing\". These two possibilities are depicted in Figure 2.\n\n\"diagram\"\n\n> **Figure 2:** Bandit where agent's prior is unknown. The two large boxes depict the prior on the agent's initial belief. Each possibility for the agent's initial belief has probability 0.5.\n\nWe observe the agent's first action, which is pulling `arm0`. Our inference approach is the same as above:\n\n~~~~\n///fold:\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n///\nvar trueArmToPrizeDist = {\n 0: Delta({ v: 'chocolate' }),\n 1: Delta({ v: 'champagne' })\n};\nvar bandit = makeBanditPOMDP({\n numberOfArms: 2,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials: 5\n});\n\nvar simpleAgent = makePOMDPAgent({\n // simpleAgent always pulls arm 0\n act: function(belief){\n return Infer({ model() { return 0; }});\n },\n updateBelief: function(belief){ return belief; },\n params: { priorBelief: Delta({ v: bandit.startState }) }\n}, bandit.world);\n\nvar observedSequence = simulatePOMDP(bandit.startState, bandit.world, simpleAgent,\n 'stateObservationAction');\n\n// Agent either knows that arm1 has prize \"champagne\"\n// or agent thinks prize is probably \"nothing\"\n\nvar informedPrior = Delta({ v: bandit.startState });\nvar noChampagnePrior = Infer({ model() {\n var armToPrizeDist = categorical(\n [0.05, 0.95],\n [trueArmToPrizeDist,\n extend(trueArmToPrizeDist, { 1: Delta({ v: 'nothing' }) })]);\n return makeBanditStartState(5, armToPrizeDist);\n}});\n\nvar priorInitialBelief = Categorical({\n vs: [informedPrior, noChampagnePrior],\n ps: [0.5, 0.5]\n});\n\n// We are still uncertain about whether agent prefers chocolate or champagne\nvar likesChampagne = {\n nothing: 0,\n champagne: 5,\n chocolate: 3\n};\nvar likesChocolate = {\n nothing: 0,\n champagne: 3,\n chocolate: 5\n};\n\nvar priorPrizeToUtility = Categorical({\n ps: 
[0.5, 0.5],\n vs: [likesChampagne, likesChocolate]\n});\n\nvar baseParams = {alpha: 1000};\nvar posterior = inferBeliefsAndPreferences(baseParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence);\n\nvar utilityBeliefPosterior = Infer({ model() {\n var utilityBelief = sample(posterior);\n var chocolateUtility = utilityBelief.prizeToUtility.chocolate;\n var likesChocolate = chocolateUtility > 3;\n var isInformed = utilityBelief.priorBelief.support().length === 1;\n return { likesChocolate, isInformed };\n}});\n\nviz.table(utilityBeliefPosterior);\n~~~~\n\nExploration is more valuable if there are more Bandit trials in total. If we observe the agent choosing the arm they already know about (`arm0`) then we get stronger inferences about their preference for chocolate over champagne as the total trials increases.\n\n~~~~\n// TODO simplify the code here or merge with previous example.\n///fold:\nvar inferBeliefsAndPreferences = function(baseAgentParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence) {\n\n return Infer({ model() {\n\n // 1. Sample utilities\n var prizeToUtility = (priorPrizeToUtility ? sample(priorPrizeToUtility)\n : undefined);\n\n // 2. Sample beliefs\n var initialBelief = sample(priorInitialBelief);\n\n // 3. Construct agent given utilities and beliefs\n var newAgentParams = extend(baseAgentParams, { priorBelief: initialBelief });\n var agent = makeBanditAgent(newAgentParams, bandit, 'belief', prizeToUtility);\n var agentAct = agent.act;\n var agentUpdateBelief = agent.updateBelief;\n\n // 4. Condition on observations\n var factorSequence = function(currentBelief, previousAction, timeIndex){\n if (timeIndex < observedSequence.length) {\n var state = observedSequence[timeIndex].state;\n var observation = observedSequence[timeIndex].observation;\n var nextBelief = agentUpdateBelief(currentBelief, observation, previousAction);\n var nextActionDist = agentAct(nextBelief);\n var observedAction = observedSequence[timeIndex].action;\n factor(nextActionDist.score(observedAction));\n factorSequence(nextBelief, observedAction, timeIndex + 1);\n }\n };\n factorSequence(initialBelief,'noAction', 0);\n\n return {\n prizeToUtility,\n priorBelief: initialBelief\n };\n }});\n};\n///\n\nvar probLikesChocolate = function(numberOfTrials){\n var trueArmToPrizeDist = {\n 0: Delta({ v: 'chocolate' }),\n 1: Delta({ v: 'champagne' })\n };\n var bandit = makeBanditPOMDP({\n numberOfArms: 2,\n armToPrizeDist: trueArmToPrizeDist,\n numberOfTrials\n });\n\n var simpleAgent = makePOMDPAgent({\n // simpleAgent always pulls arm 0\n act: function(belief){\n return Infer({ model() { return 0; }});\n },\n updateBelief: function(belief){ return belief; },\n params: { priorBelief: Delta({ v: bandit.startState }) }\n }, bandit.world);\n\n var observedSequence = simulatePOMDP(bandit.startState, bandit.world, simpleAgent,\n 'stateObservationAction');\n\n var baseParams = { alpha: 100 };\n\n var noChampagnePrior = Infer({ model() {\n var armToPrizeDist = (\n flip(0.2) ?\n trueArmToPrizeDist :\n extend(trueArmToPrizeDist, { 1: Delta({ v: 'nothing' }) }));\n return makeBanditStartState(numberOfTrials, armToPrizeDist);\n }});\n var informedPrior = Delta({ v: bandit.startState });\n var priorInitialBelief = Categorical({\n vs: [noChampagnePrior, informedPrior],\n ps: [0.5, 0.5],\n });\n\n var likesChampagne = {\n nothing: 0,\n champagne: 5,\n chocolate: 3\n };\n var likesChocolate = {\n nothing: 0,\n champagne: 3,\n chocolate: 5\n };\n\n var priorPrizeToUtility = 
Categorical({\n vs: [likesChampagne, likesChocolate],\n ps: [0.5, 0.5],\n });\n\n var posterior = inferBeliefsAndPreferences(baseParams, priorPrizeToUtility,\n priorInitialBelief, bandit,\n observedSequence);\n\n var likesChocInformed = {\n prizeToUtility: likesChocolate,\n priorBelief: informedPrior\n };\n var probLikesChocInformed = Math.exp(posterior.score(likesChocInformed));\n var likesChocNoChampagne = {\n prizeToUtility: likesChocolate,\n priorBelief: noChampagnePrior\n };\n var probLikesChocNoChampagne = Math.exp(posterior.score(likesChocNoChampagne));\n return probLikesChocInformed + probLikesChocNoChampagne;\n};\n\nvar lifetimes = [5, 6, 7, 8, 9];\nvar probsLikesChoc = map(probLikesChocolate, lifetimes);\n\nprint('Probability of liking chocolate for lifetimes ' + lifetimes + '\\n'\n + probsLikesChoc);\n\nviz.bar(lifetimes, probsLikesChoc);\n~~~~\n\n\nThis example of inferring an agent's utilities from a Bandit problem may seem contrived. However, there are practical problems that have a similar structure. Consider a domain where $$k$$ **sources** (arms) produce a stream of content, with each piece of content having a **category** (prizes). At each timestep, a human is observed choosing a source. The human has uncertainty about the stochastic mapping from sources to categories. Our goal is to infer the human's beliefs about the sources and their preferences over categories. The sources could be blogs or feeds that tag posts using the same set of tags. Alternatively, the sources could be channels for TV shows or songs. In this kind of application, the same issue of identifiability arises. An agent may choose a source either because they know it produces content in the best categories or because they have a strong prior belief that it does.\n\nIn the next [chapter](/chapters/5-biases-intro.html), we start looking at agents with cognitive bounds and biases.\n\n
\n\n### Footnotes\n", "date_published": "2019-08-24T14:52:08Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "4-reasoning-about-agents.md"} +{"id": "c7e0d637497b77d5816892e32b4974c4", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6b-inference-sampling.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Sampling-based methods\ndescription: A brief review of sampling-based methods. Discuss MCMC over enumeration with caching for inverse planning.\nstatus: stub\nis_section: false\nhidden: true\n---\n", "date_published": "2016-03-09T21:34:02Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6b-inference-sampling.md"} +{"id": "eae3b96985732bb8089867fa0a722db3", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5b-time-inconsistency.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: \"Time inconsistency II\"\ndescription: Formal model of time-inconsistent agent, Gridworld and Procrastination examples.\n\n---\n\n## Formal Model and Implementation of Hyperbolic Discounting\n\n\n### Formal Model of Naive and Sophisticated Hyperbolic Discounters\n\nTo formalize Naive and Sophisticated hyperbolic discounting, we make a small modification to the MDP agent model. The key idea is to add a variable for measuring time, the *delay*, which is distinct from the objective time-index (called `timeLeft` in our implementation). The objective time-index is crucial to planning for finite-horizon MDPs because it determines how far the agent can travel or explore before time is up. By contrast, the delays are *subjective*. They are used by the agent in *evaluating* possible future rewards but they are not an independent feature of the decision problem.\n\nWe use delays because discounting agents have preferences over when they receive rewards. When evaluating future prospects, they need to keep track of how far ahead in time that reward occurs. Naive and Sophisticated agents *evaluate* future rewards in the same way. They differ in how they simulate their future actions.\n\nThe Naive agent at objective time $$t$$ assumes his future self at objective time $$t+c$$ (where $$c>0$$) shares his time preference. So he simulates the $$(t+c)$$-agent as evaluating a reward at time $$t+c$$ with delay $$=c$$ rather than the true delay $$=0$$. The Sophisticated agent correctly models his $$(t+c)$$-agent future self as evaluating an immediate reward with delay $$=0$$.\n\nTo be more concrete, suppose both Naive and Sophisticated have a discount function $$1/(1+kd)$$, where $$d$$ is how much the reward is delayed. When simulating his future self at $$t+c$$, the Naive agent assumes he'll discount immediate gains at rate $$1/(1+kc)$$ and the Sophisticated agent (correctly) assumes a rate of $$1/(1+0)$$. \n\nAdding delays to our model is straightforward. In defining the MDP agent, we presented Bellman-style recursions for the expected utility of state-action pairs. Discounting agents evaluate states and actions differently depending on their *delay* from the present. 
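Before giving the recursion, here is a quick numerical illustration of the Naive/Sophisticated contrast described above. This is a sketch with hypothetical values ($$k=1$$, $$c=3$$) that are not taken from the chapter's examples:

~~~~
// Hypothetical values: discount constant k = 1, future timestep c = 3 steps ahead.
var k = 1;
var c = 3;
var discountFunction = function(d) { return 1 / (1 + k * d); };

// Weight the Naive agent assumes its (t+c)-self gives to a then-immediate reward:
print('Naive simulation (delay c): ' + discountFunction(c));          // 1/(1+kc) = 0.25
// Weight that future self will actually use (as the Sophisticated agent assumes):
print('Sophisticated simulation (delay 0): ' + discountFunction(0));  // 1/(1+0) = 1
~~~~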
So we now define expected utilities of state-action-delay triples:\n\n$$\nEU[s,a,d] = \delta(d)U(s, a) + \mathbb{E}_{s', a'}(EU[s', a',d+1])\n$$\n\nwhere:\n\n- $$\delta \colon \mathbb{N} \to \mathbb{R}$$ is the discount function from the delay to the discount factor. In our examples we have (where $$k>0$$ is the discount constant):\n\n$$\n\delta(d) = \frac{1}{1+kd}\n$$\n\n- $$s' \sim T(s,a)$$ exactly as in the non-discounting case.\n\n- $$a' \sim C(s'; d_P)$$ where $$d_P=0$$ for Sophisticated and $$d_P=d+1$$ for Naive.\n\n\nThe function $$C \colon S \times \mathbb{N} \to A$$ is again the *act* function. For $$C(s'; d+1)$$ we take a softmax over the expected value of each action $$a$$, namely, $$EU[s',a,d+1]$$. The act function now takes a delay argument. We interpret $$C(s';d+1)$$ as \"the softmax action the agent would take in state $$s'$$ given that their rewards occur with a delay $$d+1$$\".\n\nThe Naive agent simulates his future actions by computing $$C(s';d+1)$$; the Sophisticated agent computes the action that will *actually* occur, which is $$C(s';0)$$. So if we want to simulate an environment including a hyperbolic discounter, we can compute the agent's action with $$C(s;0)$$ for every state $$s$$. \n\n\n### Implementing the hyperbolic discounter\n \nAs with the MDP and POMDP agents, our WebPPL implementation directly translates the mathematical formulation of Naive and Sophisticated hyperbolic discounting. The variable names correspond as follows:\n\n- The function $$\delta$$ is named `discountFunction`\n\n- The \"perceived delay\", which controls how the agent's simulated future self evaluates rewards, is $$d_P$$ in the math and `perceivedDelay` below. \n\n- $$s'$$, $$a'$$, $$d+1$$ correspond to `nextState`, `nextAction` and `delay+1` respectively. \n\nThis codebox simplifies the code for the hyperbolic discounter by omitting definitions of `transition`, `utility` and so on:\n\n~~~~\nvar makeAgent = function(params, world) {\n\n var act = dp.cache( \n function(state, delay){\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay); \n factor(params.alpha * eu);\n return action;\n }}); \n });\n \n var expectedUtility = dp.cache(\n function(state, action, delay){\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else { \n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1); \n }}));\n } \n });\n \n return { params, expectedUtility, act };\n};\n~~~~\n\nThe next codebox shows how the Naive agent can end up at Donut North in the Restaurant Choice problem, despite this being dominated for any possible utility function. The Naive agent first moves in the direction of Veg, which initially looks better than Donut South. When right outside Donut North, discounting makes it look better than Veg. To visualize this, we display the agent's expected utility calculations at different steps along its trajectory. The crucial values are the `expectedUtility` of going left at [3,5] when `delay=0` compared with `delay=4`. The function `plannedTrajectories` uses `expectedUtility` to access these values. For each timestep, we plot the agent's position and the expected utility of each action they might perform in the future. 
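To see roughly why the preference reverses, ignore the small `timeCost` and use the default discount constant $$k=1$$. The numbers below are approximate back-of-the-envelope values based on the utility table in the codebox below; the exact values are computed by the code and shown in the visualization. From [3,5], Donut North is one step away and yields rewards of $$10$$ then $$-10$$ on successive timesteps, while Veg is three steps away and yields $$-10$$ then $$20$$:

$$
EU[\text{left at } [3,5], d] \approx \frac{10}{1+(d+1)} - \frac{10}{1+(d+2)}
\qquad
EU[\text{up at } [3,5], d] \approx \frac{-10}{1+(d+3)} + \frac{20}{1+(d+4)}
$$

At $$d=0$$ this gives roughly $$1.7$$ for moving left (towards Donut North) versus $$1.5$$ for moving up (towards Veg), so the agent standing at [3,5] picks Donut North. At $$d=4$$, the delay the Naive agent assigns to this choice when planning from the start state, it gives roughly $$0.2$$ versus $$1.0$$, so the simulated future self is predicted to continue towards Veg.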
\n\n\n~~~~\n///fold: makeAgent, mdp, plannedTrajectories\nvar makeAgent = function(params, world) {\n var defaultParams = {\n alpha: 500, \n discount: 1\n };\n var params = extend(defaultParams, params);\n var stateToActions = world.stateToActions;\n var transition = world.transition;\n var utility = params.utility;\n var paramsDiscountFunction = params.discountFunction;\n\n var discountFunction = (\n paramsDiscountFunction ? \n paramsDiscountFunction : \n function(delay){ return 1/(1 + params.discount*delay); });\n\n var isNaive = params.sophisticatedOrNaive === 'naive';\n\n var act = dp.cache( \n function(state, delay) {\n var delay = delay || 0; // make sure delay is never undefined\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay);\n factor(params.alpha * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action, delay) {\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1);\n }}));\n }\n });\n\n return { params, expectedUtility, act };\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar MAPActionPath = function(state, world, agent, actualTotalTime, statesOrActions) { \n var perceivedTotalTime = state.timeLeft;\n assert.ok(perceivedTotalTime > 1 || state.terminateAfterAction==false,\n 'perceivedTime<=1. 
If=1 then should have state.terminateAfterAction,' +\n ' but then simulate wont work ' + JSON.stringify(state));\n\n var agentAction = agent.act;\n var expectedUtility = agent.expectedUtility;\n var transition = world.transition;\n\n var sampleSequence = function (state, actualTimeLeft) {\n var action = agentAction(state, actualTotalTime-actualTimeLeft).MAP().val;\n var nextState = transition(state, action); \n var out = {states:state, actions:action, both:[state,action]}[statesOrActions];\n if (actualTimeLeft==0 || state.terminateAfterAction){\n return [out];\n } else {\n return [ out ].concat( sampleSequence(nextState, actualTimeLeft-1));\n }\n };\n return sampleSequence(state, actualTotalTime);\n};\n\nvar plannedTrajectory = function(world, agent) {\n var getExpectedUtilities = function(trajectory, agent, actions) { \n var expectedUtility = agent.expectedUtility;\n var v = mapIndexed(function(i, state) {\n return [state, map(function (a) { return expectedUtility(state, a, i); }, actions)];\n }, trajectory );\n return v;\n };\n return function(state) {\n var currentPlan = MAPActionPath(state, world, agent, state.timeLeft, 'states');\n return getExpectedUtilities(currentPlan, agent, world.actions);\n };\n} \n\nvar plannedTrajectories = function(trajectory, world, agent) { \n var getTrajectory = plannedTrajectory(world, agent);\n return map(getTrajectory, trajectory);\n}\n///\n\nvar world = mdp.world;\nvar start = mdp.startState;\n\nvar utilityTable = {\n 'Donut N': [10, -10], // [immediate reward, delayed reward]\n 'Donut S': [10, -10],\n 'Veg': [-10, 20],\n 'Noodle': [0, 0],\n 'timeCost': -.01 // cost of taking a single action \n};\n\nvar restaurantUtility = function(state, action) {\n var feature = world.feature;\n var name = feature(state).name;\n if (name) {\n return utilityTable[name][state.timeAtRestaurant]\n } else {\n return utilityTable.timeCost;\n }\n};\n\nvar runAndGraph = function(agent) { \n var trajectory = simulateMDP(mdp.startState, world, agent);\n var plans = plannedTrajectories(trajectory, world, agent);\n viz.gridworld(world, {\n trajectory, \n dynamicActionExpectedUtilities: plans\n });\n};\n\nvar agent = makeAgent({\n sophisticatedOrNaive: 'naive', \n utility: restaurantUtility\n}, world);\n\nprint('Naive agent: \\n\\n');\nrunAndGraph(agent);\n~~~~\n\nWe run the Sophisticated agent with the same parameters and visualization. \n\n\n~~~~\n///fold: \nvar makeAgent = function(params, world) {\n var defaultParams = {\n alpha: 500, \n discount: 1\n };\n var params = extend(defaultParams, params);\n var stateToActions = world.stateToActions;\n var transition = world.transition;\n var utility = params.utility;\n var paramsDiscountFunction = params.discountFunction;\n\n var discountFunction = (\n paramsDiscountFunction ? \n paramsDiscountFunction : \n function(delay){ return 1/(1+params.discount*delay); });\n\n var isNaive = params.sophisticatedOrNaive === 'naive';\n\n var act = dp.cache( \n function(state, delay) {\n var delay = delay || 0; // make sure delay is never undefined\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action, delay);\n factor(params.alpha * eu);\n return action;\n }});\n });\n\n var expectedUtility = dp.cache(\n function(state, action, delay) {\n var u = discountFunction(delay) * utility(state, action);\n if (state.terminateAfterAction){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action); \n var perceivedDelay = isNaive ? 
delay + 1 : 0;\n var nextAction = sample(act(nextState, perceivedDelay));\n return expectedUtility(nextState, nextAction, delay+1);\n }}));\n }\n });\n\n return { params, expectedUtility, act };\n};\n\nvar ___ = ' '; \nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar world = mdp.world;\n\nvar utilityTable = {\n 'Donut N': [10, -10], // [immediate reward, delayed reward]\n 'Donut S': [10, -10],\n 'Veg': [-10, 20],\n 'Noodle': [0, 0],\n 'timeCost': -.01 // cost of taking a single action \n};\n\nvar restaurantUtility = function(state, action) {\n var feature = world.feature;\n var name = feature(state).name;\n if (name) {\n return utilityTable[name][state.timeAtRestaurant]\n } else {\n return utilityTable.timeCost;\n }\n};\n\nvar MAPActionPath = function(state, world, agent, actualTotalTime, statesOrActions) { \n var perceivedTotalTime = state.timeLeft;\n assert.ok(perceivedTotalTime > 1 || state.terminateAfterAction==false,\n 'perceivedTime<=1. If=1 then should have state.terminateAfterAction,' +\n ' but then simulate wont work ' + JSON.stringify(state));\n\n var agentAction = agent.act;\n var expectedUtility = agent.expectedUtility;\n var transition = world.transition;\n\n var sampleSequence = function (state, actualTimeLeft) {\n var action = agentAction(state, actualTotalTime-actualTimeLeft).MAP().val;\n var nextState = transition(state, action); \n var out = {states:state, actions:action, both:[state,action]}[statesOrActions];\n if (actualTimeLeft==0 || state.terminateAfterAction){\n return [out];\n } else {\n return [ out ].concat( sampleSequence(nextState, actualTimeLeft-1));\n }\n };\n return sampleSequence(state, actualTotalTime);\n};\n\nvar plannedTrajectory = function(world, agent) {\n var getExpectedUtilities = function(trajectory, agent, actions) { \n var expectedUtility = agent.expectedUtility;\n var v = mapIndexed(function(i, state) {\n return [state, map(function (a) { return expectedUtility(state, a, i); }, actions)];\n }, trajectory );\n return v;\n };\n return function(state) {\n var currentPlan = MAPActionPath(state, world, agent, state.timeLeft, 'states');\n return getExpectedUtilities(currentPlan, agent, world.actions);\n };\n};\n\nvar plannedTrajectories = function(trajectory, world, agent) { \n var getTrajectory = plannedTrajectory(world, agent);\n return map(getTrajectory, trajectory);\n};\n\nvar runAndGraph = function(agent) { \n var trajectory = simulateMDP(mdp.startState, world, agent);\n var plans = plannedTrajectories(trajectory, world, agent);\n viz.gridworld(world, {\n trajectory, \n dynamicActionExpectedUtilities: plans\n });\n};\n///\n\nvar agent = makeAgent({\n sophisticatedOrNaive: 'sophisticated', \n utility: restaurantUtility\n}, world);\n\nprint('Sophisticated agent: \\n\\n');\nrunAndGraph(agent);\n~~~~\n\n>**Exercise**: What would an exponential discounter with identical preferences to the agents above do on the Restaurant Choice problem? Implement an exponential discounter in the codebox above by adding a `discountFunction` property to the `params` argument to `makeAgent`. 
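\n\nFor reference, one possible shape for such a `discountFunction` (a minimal sketch, not the only way to approach the exercise) is a geometric function of delay; the base 0.8 below is an arbitrary illustrative value, not one used elsewhere in this chapter. The key property is that the ratio of discount factors at consecutive delays is constant, which is why exponential discounters are time-consistent:\n\n~~~~\n// A minimal sketch for the exercise above. The base 0.8 is an arbitrary\n// illustrative choice; this function could be passed as the `discountFunction`\n// property of `params`.\nvar exponentialDiscount = function(delay) {\n  return Math.pow(0.8, delay);\n};\n\n// For comparison, a hyperbolic discount function with k = 1:\nvar hyperbolicDiscount = function(delay) {\n  return 1 / (1 + delay);\n};\n\n// The ratio between consecutive delays is constant for the exponential\n// function but varies with delay for the hyperbolic function:\nmap(function(delay) {\n  print([delay,\n         exponentialDiscount(delay + 1) / exponentialDiscount(delay),\n         hyperbolicDiscount(delay + 1) / hyperbolicDiscount(delay)]);\n}, [0, 1, 2, 3]);\n~~~~\n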
\n
\n\n--------\n\n\n\n### Example: Procrastinating on a task\n\nCompared to the Restaurant Choice problem, procrastination leads to (systematically biased) behavior that is especially hard to explain with the softmax noise model.\n\n> **The Procrastination Problem**\n>
You have a hard deadline of ten days to complete a task (e.g. write a paper for class, apply for a school or job). Completing the task takes a full day and has a *cost* (it's unpleasant work). After the task is complete you get a *reward* (typically exceeding the cost). There is an incentive to finish early: every day you delay finishing, your reward gets slightly smaller. (Imagine that it's good for your reputation to complete tasks early, or that early applicants are considered first.)\n\nNote that if the task is worth doing at the last minute, then you should do it immediately (because the reward diminishes over time). Yet people often do this kind of task at the last minute -- the worst possible time to do it!\n\nHyperbolic discounting provides an elegant model of this behavior. On Day 1, a hyperbolic discounter will prefer that they complete the task tomorrow rather than today. Moreover, a Naive agent wrongly predicts they will complete the task tomorrow and so puts off the task till Day 2. When Day 2 arrives, the Naive agent reasons in the same way -- telling themselves that they can avoid the work today by putting it off till tomorrow. This continues until the last possible day, when the Naive agent finally completes the task.\n\nIn this problem, the behavior of optimal and time-inconsistent agents with identical preferences (i.e. utility functions) diverges. If the deadline is $$T$$ days from the start, the optimal agent will do the task immediately and the Naive agent will do the task on Day $$T$$. Any problem where a time-inconsistent agent receives exponentially lower reward than an optimal agent contains a close variant of our Procrastination Problem refp:kleinberg2014time [^kleinberg]. \n\n[^kleinberg]: Kleinberg and Oren's paper considers a variant problem where each cost/penalty for waiting is received immediately (rather than being delayed until the time the task is done). In this variant, the agent must eventually complete the task. The authors consider \"semi-myopic\" time-inconsistent agents, i.e. agents who do not discount their next reward, but discount all future rewards by $$\beta < 1$$. They show that in any problem where a semi-myopic agent receives exponentially lower reward than an optimal agent, the problem must contain a copy of their variant of the Procrastination Problem.\n\nWe formalize the Procrastination Problem in terms of a deterministic graph. Suppose the **deadline** is $$T$$ steps from the start. Assume that after $$t < T$$ steps the agent has not yet completed the task. Then the agent can take the action `\"work\"` (which has **work cost** $$-w$$) or the action `\"wait\"` with zero cost. After the `\"work\"` action the agent transitions to the `\"reward\"` state and receives $$+(R - t \epsilon)$$, where $$R$$ is the **reward** for the task and $$\epsilon$$ is how much the reward diminishes for every day of waiting (the **wait cost**). See Figure 3 below. \n\n>**Figure 3:** Transition graph for the Procrastination Problem. States are represented by nodes. Edges are state-transitions and are labeled with the action name and the utility of the state-action pair. Terminal nodes have a bold border and their utility is labeled below.\n\nWe simulate the behavior of hyperbolic discounters on the Procrastination Problem. We vary the discount rate $$k$$ while holding the other parameters fixed. The agent's behavior can be summarized by its final state (`\"wait_state\"` or `\"reward_state\"`) and by how much time elapses before termination. 
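\n\nBefore running the full simulation, a quick arithmetic sketch shows why the Naive agent delays on Day 1. It uses the parameter values from the codebox below (reward $$R = 4.5$$, work cost $$w = 1$$, wait cost $$\epsilon = 0.1$$) with discount rate $$k = 2$$, and assumes (as in the code) that the reward arrives one step after working:\n\n~~~~\n// Illustrative arithmetic only, using the parameter values from the codebox\n// below (reward 4.5, workCost -1, waitCost -0.1) and discount rate k = 2.\nvar k = 2;\nvar discount = function(t) { return 1 / (1 + k * t); };\n\n// Work today: pay the work cost now, receive the reward one step later.\nvar workToday = -1 * discount(0) + 4.5 * discount(1);\n\n// Wait today and work tomorrow (as the Naive agent predicts): pay the work\n// cost at delay 1 and receive the slightly smaller reward at delay 2.\nvar workTomorrow = -1 * discount(1) + (4.5 - 0.1) * discount(2);\n\n// workTomorrow (about 0.55) exceeds workToday (0.5), so the agent puts the\n// task off; the same comparison recurs on each subsequent day.\nprint({ workToday: workToday, workTomorrow: workTomorrow });\n~~~~\n\n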
When $$k$$ is sufficiently high, the agent will not even complete the task on the last day. \n\n\n~~~~\n///fold: makeProcrastinationMDP, makeProcrastinationUtility\nvar makeProcrastinationMDP = function(deadlineTime) {\n var stateLocs = [\"wait_state\", \"reward_state\"];\n var actions = [\"wait\", \"work\", \"relax\"];\n\n var stateToActions = function(state) {\n return (state.loc === \"wait_state\" ? \n [\"wait\", \"work\"] :\n [\"relax\"]);\n };\n\n var advanceTime = function (state) {\n var newTimeLeft = state.timeLeft - 1;\n var terminateAfterAction = (newTimeLeft === 1 || \n state.loc === \"reward_state\");\n return extend(state, {\n timeLeft: newTimeLeft,\n terminateAfterAction: terminateAfterAction\n });\n };\n\n var transition = function(state, action) {\n assert.ok(_.includes(stateLocs, state.loc) && _.includes(actions, action), \n 'procrastinate transition:' + [state.loc,action]);\n \n if (state.loc === \"reward_state\") {\n return advanceTime(state);\n } else if (action === \"wait\") {\n var waitSteps = state.waitSteps + 1;\n return extend(advanceTime(state), { waitSteps });\n } else {\n var newState = extend(state, { loc: \"reward_state\" });\n return advanceTime(newState);\n }\n };\n\n var feature = function(state) {\n return state.loc;\n };\n\n var startState = {\n loc: \"wait_state\",\n waitSteps: 0,\n timeLeft: deadlineTime,\n terminateAfterAction: false\n };\n\n return {\n actions,\n stateToActions,\n transition,\n feature,\n startState\n };\n};\n\n\nvar makeProcrastinationUtility = function(utilityTable) {\n assert.ok(hasProperties(utilityTable, ['waitCost', 'workCost', 'reward']),\n 'makeProcrastinationUtility args');\n var waitCost = utilityTable.waitCost;\n var workCost = utilityTable.workCost;\n var reward = utilityTable.reward;\n\n // NB: you receive the *workCost* when you leave the *wait_state*\n // You then receive the reward when leaving the *reward_state* state\n return function(state, action) {\n if (state.loc === \"reward_state\") {\n return reward + state.waitSteps * waitCost;\n } else if (action === \"work\") {\n return workCost;\n } else {\n return 0;\n }\n };\n};\n///\n\n// Construct Procrastinate world \nvar deadline = 10;\nvar world = makeProcrastinationMDP(deadline);\n\n// Agent params\nvar utilityTable = {\n reward: 4.5,\n waitCost: -0.1,\n workCost: -1\n};\n\nvar params = {\n utility: makeProcrastinationUtility(utilityTable),\n alpha: 1000,\n discount: null,\n sophisticatedOrNaive: 'sophisticated'\n};\n\nvar getLastState = function(discount){\n var agent = makeMDPAgent(extend(params, { discount: discount }), world);\n var states = simulateMDP(world.startState, world, agent, 'states');\n return [last(states).loc, states.length];\n};\n\nmap(function(discount) {\n var lastState = getLastState(discount);\n print('Discount: ' + discount + '. Last state: ' + lastState[0] +\n '. Time: ' + lastState[1] + '\\n')\n}, _.range(8));\n~~~~\n\n\n>**Exercise:**\n\n> 1. Explain how an exponential discounter would behave on this task. Assume their utilities are the same as above and consider different discount rates.\n> 2. Run the codebox above with a Sophisticated agent. Explain the results. \n\n\nNext chapter: [Myopia for rewards and belief updates](/chapters/5c-myopic.html)\n\n
\n\n### Footnotes\n", "date_published": "2019-08-28T09:06:53Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5b-time-inconsistency.md"} +{"id": "9f19984716deb794d2bfcdf32a08282a", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/5e-joint-inference.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Joint inference of biases and preferences II\ndescription: Explaining temptation and pre-commitment using softmax noise and hyperbolic discounting.\n\n---\n\n## Restaurant Choice: Time-inconsistent vs. optimal MDP agents\n\nReturning to the MDP Restaurant Choice problem, we compare a model that assumes an optimal, non-discounting MDP agent to a model that includes both time-inconsistent and optimal agents. We also consider models that expand the set of preferences the agent can have.\n\n\n\n### Assume discounting, infer \"Naive\" or \"Sophisticated\"\n\nBefore making a direct comparison, we demonstrate that we can infer the preferences of time-inconsistent agents from observations of their behavior.\n\nFirst we condition on the path where the agent moves to Donut North. We call this the Naive path because it is distinctive to the Naive hyperbolic discounter (who is tempted by Donut North on the way to Veg):\n\n\n~~~~\n///fold: restaurant choice MDP, naiveTrajectory\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: naiveTrajectory });\n~~~~\n\nFor inference, we specialize the approach in the previous chapter for agents in MDPs that are potentially time inconsistent. So we infer $$\\nu$$ and $$k$$ (the hyperbolic discounting parameters) but not the initial belief state $$b_0$$. 
The function `exampleGetPosterior` is a slightly simplified version of the library function we use below.\n\n\n~~~~\nvar exampleGetPosterior = function(mdp, prior, observedStateAction){\n var world = mdp.world;\n var makeUtilityFunction = mdp.makeUtilityFunction;\n return Infer({ model() {\n\n // Sample parameters from prior\n var priorUtility = prior.priorUtility;\n var utilityTable = priorUtility();\n var priorDiscounting = prior.discounting\n var sophisticatedOrNaive = priorDiscounting().sophisticatedOrNaive;\n\n var priorAlpha = prior.priorAlpha;\n\n // Create agent with those parameters\n var agent = makeMDPAgent({\n utility: makeUtilityFunction(utilityTable),\n alpha: priorAlpha(),\n discount: priorDiscounting().discount,\n sophisticatedOrNaive : sophisticatedOrNaive\n }, world);\n\n var agentAction = agent.act;\n\n // Condition on observed actions\n map(function(stateAction) {\n var state = stateAction[0];\n var action = stateAction[1];\n observe(agentAction(state, 0), action);\n }, observedStateAction);\n\n // return parameters and summary statistics\n var vegMinusDonut = sum(utilityTable['Veg']) - sum(utilityTable['Donut N']);\n\n return {\n utility: utilityTable,\n sophisticatedOrNaive: discounting.sophisticatedOrNaive,\n discount: discounting.discount,\n alpha,\n vegMinusDonut,\n };\n }});\n};\n~~~~\n\nThis inference function allows for inference over the softmax parameter ($$\\alpha$$ or `alpha`) and the discount constant ($$k$$ or `discount`). For this example, we fix these values so that the agent has low noise ($$\\alpha=1000$$) and so $$k=1$$. We also fix the `timeCost` utility to be small and negative and Noodle's utility to be negative. We infer only the agent's utilities and whether they are Naive or Sophisticated.\n\n\n~~~~\n///fold: Call to hyperbolic library function and helper display function\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. 
Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x){\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x){\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPosteriorDataTable,\n sophisticationPriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({donutTempting: x}),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n///\n\n// Prior on agent's utility function: each restaurant has an\n// *immediate* utility and a *delayed* utility (which is received after a\n// delay of 1).\nvar priorUtility = function(){\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function(){\n return {\n discount: 1,\n sophisticatedOrNaive: uniformDraw(['naive', 'sophisticated'])\n };\n};\nvar priorAlpha = function(){ return 1000; };\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, naiveTrajectory);\n\n// To get the prior, we condition on the empty list of observations\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nWe display maximum values and marginal distributions for both the prior and the posterior conditioned on the path shown 
above. To compute the prior, we simply condition on the empty list of observations.\n\nThe first graph shows the distribution over whether the agent is Sophisticated or Naive (labeled `sophisticatedOrNaive`). For the other graphs, we compute summary statistics of the agent's parameters and display the distribution over them. The variable `vegMinusDonut` is the difference in *total* utility between Veg and Donut, ignoring the fact that each restaurant has an *immediate* and *delayed* utility. Inference rules out cases where the total utility is equal (which is most likely in the prior), since the agent would simply go to Donut South in that case. Finally, we introduce a variable `donutTempting`, which is true if the agent prefers Veg to Donut North at the start but reverses this preference when adjacent to Donut North. The prior probability of `donutTempting` is less than $$0.1$$, since it depends on relatively delicate balance of utilities and the discounting behavior. The posterior is closer to $$0.9$$, suggesting (along with the posterior on `sophisticatedOrNaive`) that this is the explanation of the data favored by the model.\n\n--------\n\nUsing the same prior, we condition on the \"Sophisticated\" path (i.e. the path distinctive to the Sophisticated agent who avoids the temptation of Donut North and takes the long route to Veg):\n\n\n~~~~\n///fold:\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: sophisticatedTrajectory });\n~~~~\n\nHere are the results of inference:\n\n\n~~~~\n///fold: Definition of world, prior and inference function is same as above codebox\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', 
___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x){\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPriorDataTable,\n sophisticationPosteriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'\n };\n },\n [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function(){\n return {\n discount: 1,\n sophisticatedOrNaive: uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){ return 1000; };\nvar 
prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nIf the agent goes directly to Veg, then they don't provide information about whether they are Naive or Sophisticated. Using the same prior again, we do inference on this path:\n\n\n~~~~\n///fold:\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar vegDirectTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"u\"],\n [{\"loc\":[3,6],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5]},\"r\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[3,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":4,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\nviz.gridworld(mdp.world, { trajectory: vegDirectTrajectory });\n~~~~\n\nHere are the results of inference:\n\n\n~~~~\n// Definition of world, prior and inference function is same as above codebox\n\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + 
priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(function(x) {\n return {sophisticatedOrNaive: x,\n probability: getPriorProb({sophisticatedOrNaive: x}),\n distribution: 'prior'};\n }, ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(function(x) {\n return {sophisticatedOrNaive: x,\n probability: getPosteriorProb({sophisticatedOrNaive: x}),\n distribution: 'posterior'};\n }, ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPriorDataTable,\n sophisticationPosteriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({vegMinusDonut: x}),\n distribution: 'prior'\n };\n }, [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutPosteriorDataTable = map(function(x){\n return {vegMinusDonut: x,\n probability: getPosteriorProb({vegMinusDonut: x}),\n distribution: 'posterior'};\n }, [-60, -50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50, 60]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, {groupBy: 'distribution'});\n\n\n var donutTemptingPriorDataTable = map(function(x){\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n }, [true, false]);\n\n var donutTemptingPosteriorDataTable = map(function(x){\n return {\n donutTempting: x,\n probability: getPosteriorProb({donutTempting: x}),\n distribution: 'posterior'\n };\n }, [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n var donut = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n var veg = [uniformDraw(utilityValues), uniformDraw(utilityValues)];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: 1,\n sophisticatedOrNaive: uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){return 1000;};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n 
maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar vegDirectTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"u\"],\n [{\"loc\":[3,6],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5]},\"r\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[3,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":4,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar posterior = getPosterior(mdp.world, prior, vegDirectTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\n
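The codeboxes above rely on the library helper `getMarginalObject` to pull out the marginal distribution over a single field of the posterior. As a rough, self-contained sketch of the underlying idea (using a toy joint distribution rather than the restaurant posterior, and returning bare values where the library helper works with singleton objects such as `{sophisticatedOrNaive: 'naive'}`):\n\n~~~~\n// A self-contained sketch of marginalizing one field of a joint distribution.\n// The joint distribution here is a toy stand-in, not the posterior above.\nvar joint = Infer({ model() {\n  return {\n    sophisticatedOrNaive: uniformDraw(['naive', 'sophisticated']),\n    donutTempting: flip(0.1)\n  };\n}});\n\nvar marginalizeField = function(dist, field) {\n  return Infer({ model() {\n    return sample(dist)[field];\n  }});\n};\n\nviz(marginalizeField(joint, 'sophisticatedOrNaive'));\n~~~~\n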
\n\n---------\n\n### Assume non-discounting, infer preferences and softmax\n\nWe want to compare a model that assumes an optimal MDP agent with one that allows for time-inconsistency. We first show the inferences by the model that assumes optimality. This model can only explain the anomalous Naive and Sophisticated paths in terms of softmax noise (lower values for $$\\alpha$$). We display the prior and posteriors for both the Naive and Sophisticated paths.\n\n\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var vegMinusDonutPriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({ alpha: x }),\n distribution: 'prior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPosteriorProb({ alpha: x }),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n 
[{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30, 40];\n // with no discounting, delayed utilities are ommitted\n var donut = [uniformDraw(utilityValues), 0];\n var veg = [uniformDraw(utilityValues), 0];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\n// We assume no discounting (so *sophisticated* has no effect here)\nvar priorDiscounting = function() {\n return {\n discount: 0,\n sophisticatedOrNaive: 'sophisticated'\n };\n};\n\nvar priorAlpha = function(){ return uniformDraw([0.1, 10, 100, 1000]); };\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar world = mdp.world;\n\nprint('Prior and posterior after observing Naive path');\nvar posteriorNaive = getPosterior(world, prior, naiveTrajectory);\ndisplayResults(getPosterior(world, prior, []), posteriorNaive);\n\nprint('Prior and posterior after observing Sophisticated path');\nvar posteriorSophisticated = getPosterior(world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(world, prior, []), posteriorSophisticated);\n~~~~\n\nThe graphs show two important results:\n\n1. For the Naive path, the agent is inferred to prefer Donut, while for the Sophisticated path, Veg is inferred. In both cases, the inference fits with where the agent ends up.\n\n2. High values for $$\\alpha$$ are ruled out in each case, showing that the model explains the behavior in terms of noise.\n\nWhat happens if we observe the agent taking the Naive path *repeatedly*? While noise is needed to explain the agent's path, too much noise is inconsistent with taking an identical path repeatedly. This is confirmed in the results below:\n\n\n~~~~\n///fold: Prior is same as above\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. 
Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x){\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30, 40];\n // with no discounting, delayed utilities are ommitted\n var donut = [uniformDraw(utilityValues), 0];\n var veg = [uniformDraw(utilityValues), 0];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\n// We assume no discounting (so *sophisticated* has no effect here)\nvar priorDiscounting = function(){\n return {\n discount: 0,\n sophisticatedOrNaive: 'sophisticated'\n };\n};\n\nvar priorAlpha = function(){\n return uniformDraw([0.1, 10, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n 
[{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n///\n\nvar numberRepeats = 2; // with 2 repeats, we condition a total of 3 times\nvar posteriorNaive = getPosterior(mdp.world, prior, naiveTrajectory, numberRepeats);\nprint('Prior and posterior after conditioning 3 times on Naive path');\ndisplayResults(getPosterior(mdp.world, prior, []), posteriorNaive);\n~~~~\n\n
\n\n--------\n\n### Model that includes discounting: jointly infer discounting, preferences, softmax noise\n\nOur inference model now has the optimal agent as a special case but also includes time-inconsistent agents. This model jointly infers the discounting behavior, the agent's utilities and the softmax noise.\n\nWe show two different posteriors. The first is after conditioning on the Naive path (as above). In the second, we imagine that we have observed the agent taking the same path on multiple occasions (three times) and we condition on this.\n\n\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var sophisticationPriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPriorProb({ sophisticatedOrNaive: x }),\n distribution: 'prior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticationPosteriorDataTable = map(\n function(x) {\n return {\n sophisticatedOrNaive: x,\n probability: getPosteriorProb({ sophisticatedOrNaive: x }),\n distribution: 'posterior'\n };\n },\n ['naive', 'sophisticated']);\n\n var sophisticatedOrNaiveDataTable = append(sophisticationPosteriorDataTable,\n sophisticationPriorDataTable);\n\n viz.bar(sophisticatedOrNaiveDataTable, { groupBy: 'distribution' });\n\n var vegMinusDonutPriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPriorProb({ vegMinusDonut: x }),\n distribution: 'prior'\n };\n },\n [-10, 0, 10, 20, 30, 40, 50, 60, 70]);\n\n var vegMinusDonutPosteriorDataTable = map(\n function(x) {\n return {\n vegMinusDonut: x,\n probability: getPosteriorProb({ vegMinusDonut: x }),\n distribution: 'posterior'\n };\n },\n [-10, 0, 10, 20, 30, 40, 50, 60, 70]);\n\n var vegMinusDonutDataTable = append(vegMinusDonutPriorDataTable,\n vegMinusDonutPosteriorDataTable);\n\n viz.bar(vegMinusDonutDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({donutTempting: x}),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x){\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var alphaPriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 10, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x){\n return {\n alpha: x,\n probability: 
getPosteriorProb({ alpha: x }),\n distribution: 'posterior'\n };\n },\n [0.1, 10, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n};\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n// Prior on agent's utility function. We fix the delayed utilities\n// to make inference faster\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30];\n var donut = [uniformDraw(utilityValues), -10];\n var veg = [uniformDraw(utilityValues), 20];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: uniformDraw(['naive','sophisticated'])\n };\n};\nvar priorAlpha = function(){\n return uniformDraw([.1, 10, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar world = mdp.world;\n\nvar posterior = getPosterior(world, prior, naiveTrajectory);\nprint('Prior and posterior after observing Naive path');\ndisplayResults(getPosterior(world, prior, []), posterior);\n\nprint('Prior and posterior after observing Naive path three times');\nvar numberRepeats = 2;\ndisplayResults(getPosterior(world, prior, []),\n getPosterior(world, prior, naiveTrajectory, numberRepeats));\n~~~~\n\nConditioning on the Naive path once, the probabilities of the agent being Naive and of `donutTempting` both go up. However, the probability of high softmax noise also goes up. In terms of preferences, we rule out a strong preference for Veg and slightly reduce a preference for Donut. So if the agent were Naive, tempted by Donut and with very low noise, our inference would not place most of the posterior on this explanation. There are two reasons for this. First, this agent is unlikely in the prior. Second, the explanation of the behavior in terms of noise is plausible. (In our Gridworld setup, we don't allow the agent to backtrack to the previous state. This means there are few cases where a softmax noisy agent would behavior differently than a low noise one.). 
Conditioning on the same Naive path three times makes the explanation in terms of noise much less plausible: the agent would have to make the same \"mistake\" three times while making no other mistakes. (The results for the Sophisticated path are similar.)\n\nIn summary, if we observe the agent repeatedly take the Naive path, the \"Optimal Model\" explains this in terms of a preference for Donut and significant softmax noise (explaining why the agent takes Donut North over Donut South). The \"Discounting Model\" is similar to the Optimal Model when it observes the Naive path *once*. However, after observing the path multiple times, it infers that the agent has low noise and an overall preference for Veg.\n\n
\n\n------\n\n### Preferences for the two Donut Store branches can vary\n\nAnother explanation of the Naive path is that the agent has a preference for the \"Donut N\" branch of the Donut Store over the \"Donut S\" branch. Maybe this branch is better run or has more space. If we add this to our set of possible preferences, inference changes significantly.\n\nTo speed up inference, we use a fixed assumption that the agent is Naive. There are three explanations of the agent's path:\n\n1. Softmax noise: measured by $$\\alpha$$\n2. The agent is Naive and tempted by Donut: measured by `discount` and `donutTempting`\n3. The agent prefers Donut N to Donut S: measured by `donutNGreaterDonutS` (i.e. Donut N's utility is greater than Donut S's).\n\nThese three can also be combined to explain the behavior.\n\n\n\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar naiveTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"u\"],\n [{\"loc\":[3,4],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"u\"],\n [{\"loc\":[3,5],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[3,4]},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[3,5],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[2,5],\"terminateAfterAction\":true,\"timeLeft\":6,\"previousLoc\":[2,5],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. 
Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var alphaPriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var discountPriorDataTable = map(\n function(x) {\n return {\n discount: x,\n probability: getPriorProb({ discount: x }),\n distribution: 'prior'\n };\n },\n [0, 1]);\n\n var discountPosteriorDataTable = map(\n function(x) {\n return {\n discount: x,\n probability: getPosteriorProb({ discount: x }),\n distribution: 'posterior'\n };\n },\n [0, 1]);\n\n var discountDataTable = append(discountPriorDataTable,\n discountPosteriorDataTable);\n\n viz.bar(discountDataTable, { groupBy: 'distribution' });\n\n var donutNvsSPriorDataTable = map(\n function(x) {\n return {\n donutNGreaterDonutS: x,\n probability: getPriorProb({ donutNGreaterDonutS: x }),\n distribution: 'prior'\n };\n },\n [false, true]);\n\n var donutNvsSPosteriorDataTable = map(\n function(x) {\n return {\n donutNGreaterDonutS: x,\n probability: getPosteriorProb({ donutNGreaterDonutS: x }),\n distribution: 'posterior'\n };\n },\n [false, true]);\n\n var donutNvsSDataTable = append(donutNvsSPriorDataTable,\n donutNvsSPosteriorDataTable);\n\n viz.bar(donutNvsSDataTable, { groupBy: 'distribution' });\n};\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20];\n return {\n 'Donut N': [uniformDraw(utilityValues), -10],\n 'Donut S': [uniformDraw(utilityValues), -10],\n 'Veg': [20, uniformDraw(utilityValues)],\n 'Noodle': [-10, -10],\n 'timeCost': -.01\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: 'naive'\n };\n};\nvar priorAlpha = function(){\n return 
uniformDraw([.1, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\n// Get world and observations\nvar posterior = getPosterior(mdp.world, prior, naiveTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nThe explanation in terms of Donut North being preferred does well in the posterior. This is because the discounting explanation (even assuming the agent is Naive) is unlikely a priori (due to our simple uniform priors on utilities and discounting). While high noise is more plausible a priori, the noise explanation still needs to posit a low probability series of events.\n\nWe see a similar result if we enrich the set of possible utilities for the Sophisticated path. This time, we allow the `timeCost`, i.e. the cost for taking a single timestep, to be positive. This means the agent prefers to spend as much time as possible moving around before reaching a restaurant. Here are the results:\n\nObserve the sophisticated path with possibly positive timeCost:\n\n\n~~~~\n///fold:\nvar restaurantHyperbolicInfer = getRestaurantHyperbolicInfer();\nvar getPosterior = restaurantHyperbolicInfer.getPosterior;\n\nvar displayResults = function(priorDist, posteriorDist) {\n\n var priorUtility = priorDist.MAP().val.utility;\n print('Prior highest-probability utility for Veg: ' + priorUtility['Veg']\n + '. Donut: ' + priorUtility['Donut N'] + ' \\n');\n\n var posteriorUtility = posteriorDist.MAP().val.utility;\n print('Posterior highest-probability utility for Veg: '\n + posteriorUtility['Veg'] + '. Donut: ' + posteriorUtility['Donut N']\n + ' \\n');\n\n var getPriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(priorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var getPosteriorProb = function(x) {\n var label = _.keys(x)[0];\n var dist = getMarginalObject(posteriorDist, label);\n return Math.exp(dist.score(x));\n };\n\n var alphaPriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPriorProb({alpha: x}),\n distribution: 'prior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaPosteriorDataTable = map(\n function(x) {\n return {\n alpha: x,\n probability: getPosteriorProb({alpha: x}),\n distribution: 'posterior'\n };\n },\n [0.1, 100, 1000]);\n\n var alphaDataTable = append(alphaPriorDataTable,\n alphaPosteriorDataTable);\n\n viz.bar(alphaDataTable, { groupBy: 'distribution' });\n\n var donutTemptingPriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPriorProb({ donutTempting: x }),\n distribution: 'prior'\n };\n },\n [true, false]);\n\n var donutTemptingPosteriorDataTable = map(\n function(x) {\n return {\n donutTempting: x,\n probability: getPosteriorProb({ donutTempting: x }),\n distribution: 'posterior'\n };\n },\n [true, false]);\n\n var donutTemptingDataTable = append(donutTemptingPriorDataTable,\n donutTemptingPosteriorDataTable);\n\n viz.bar(donutTemptingDataTable, { groupBy: 'distribution' });\n\n var discountPriorDataTable = map(\n function(x){\n return {\n discount: x,\n probability: getPriorProb({ discount: x }),\n distribution: 'prior'\n };\n },\n [0, 1]);\n\n var discountPosteriorDataTable = map(\n function(x){\n return {\n discount: x,\n probability: getPosteriorProb({ discount: x }),\n distribution: 'posterior'\n };\n },\n [0, 1]);\n\n var discountDataTable = append(discountPriorDataTable,\n discountPosteriorDataTable);\n\n viz.bar(discountDataTable, { groupBy: 'distribution' });\n\n var 
timeCostPriorDataTable = map(\n function(x) {\n return {\n timeCost: x,\n probability: getPriorProb({ timeCost: x }),\n distribution: 'prior'\n };\n },\n [-0.01, 0.1, 1]);\n\n var timeCostPosteriorDataTable = map(\n function(x) {\n return {\n timeCost: x,\n probability: getPosteriorProb({ timeCost: x }),\n distribution: 'posterior'\n };\n },\n [-0.01, 0.1, 1]);\n\n var timeCostDataTable = append(timeCostPriorDataTable,\n timeCostPosteriorDataTable);\n\n viz.bar(timeCostDataTable, { groupBy: 'distribution' });\n};\n\nvar sophisticatedTrajectory = [\n [{\"loc\":[3,1],\"terminateAfterAction\":false,\"timeLeft\":11},\"u\"],\n [{\"loc\":[3,2],\"terminateAfterAction\":false,\"timeLeft\":10,\"previousLoc\":[3,1]},\"u\"],\n [{\"loc\":[3,3],\"terminateAfterAction\":false,\"timeLeft\":9,\"previousLoc\":[3,2]},\"r\"],\n [{\"loc\":[4,3],\"terminateAfterAction\":false,\"timeLeft\":8,\"previousLoc\":[3,3]},\"r\"],\n [{\"loc\":[5,3],\"terminateAfterAction\":false,\"timeLeft\":7,\"previousLoc\":[4,3]},\"u\"],\n [{\"loc\":[5,4],\"terminateAfterAction\":false,\"timeLeft\":6,\"previousLoc\":[5,3]},\"u\"],\n [{\"loc\":[5,5],\"terminateAfterAction\":false,\"timeLeft\":5,\"previousLoc\":[5,4]},\"u\"],\n [{\"loc\":[5,6],\"terminateAfterAction\":false,\"timeLeft\":4,\"previousLoc\":[5,5]},\"l\"],\n [{\"loc\":[4,6],\"terminateAfterAction\":false,\"timeLeft\":3,\"previousLoc\":[5,6]},\"u\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":false,\"timeLeft\":2,\"previousLoc\":[4,6],\"timeAtRestaurant\":0},\"l\"],\n [{\"loc\":[4,7],\"terminateAfterAction\":true,\"timeLeft\":2,\"previousLoc\":[4,7],\"timeAtRestaurant\":1},\"l\"]\n];\n\nvar ___ = ' ';\nvar DN = { name : 'Donut N' };\nvar DS = { name : 'Donut S' };\nvar V = { name : 'Veg' };\nvar N = { name : 'Noodle' };\n\nvar grid = [\n ['#', '#', '#', '#', V , '#'],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', DN , ___, '#', ___],\n ['#', '#', '#', ___, '#', ___],\n ['#', '#', '#', ___, ___, ___],\n ['#', '#', '#', ___, '#', N ],\n [___, ___, ___, ___, '#', '#'],\n [DS , '#', '#', ___, '#', '#']\n];\n\nvar mdp = makeGridWorldMDP({\n grid,\n noReverse: true,\n maxTimeAtRestaurant: 2,\n start: [3, 1],\n totalTime: 11\n});\n///\n\n\n// Prior on agent's utility function\nvar priorUtility = function() {\n var utilityValues = [-10, 0, 10, 20, 30];\n var donut = [uniformDraw(utilityValues), -10]\n var veg = [uniformDraw(utilityValues), 20];\n return {\n 'Donut N': donut,\n 'Donut S': donut,\n 'Veg': veg,\n 'Noodle': [-10, -10],\n 'timeCost': uniformDraw([-0.01, 0.1, 1])\n };\n};\n\nvar priorDiscounting = function() {\n return {\n discount: uniformDraw([0, 1]),\n sophisticatedOrNaive: 'sophisticated'\n };\n};\nvar priorAlpha = function(){\n return uniformDraw([0.1, 100, 1000]);\n};\nvar prior = {\n utility: priorUtility,\n discounting: priorDiscounting,\n alpha: priorAlpha\n};\n\nvar posterior = getPosterior(mdp.world, prior, sophisticatedTrajectory);\ndisplayResults(getPosterior(mdp.world, prior, []), posterior);\n~~~~\n\nNext chapter: [Multi-agent models](/chapters/7-multi-agent.html)\n", "date_published": "2017-03-19T18:54:16Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "5e-joint-inference.md"} +{"id": "431d0938a4c47c6c60344dfaf570595a", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/meetup-2017.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Modeling Agents & Reinforcement Learning with Probabilistic 
Programming\nhidden: true\n---\n\n## Intro\n\n### Motivation\n\nWhy probabilistic programming?\n- **ML:** predictions based on prior assumptions and data\n- **Deep Learning:** lots of data + very weak assumptions\n- **Rule-based systems:** strong assumptions + little data\n- **Probabilistic programming:** a flexible middle ground\n\nWhy model agents?\n- Build **artificial agents** to automate decision-making\n - Example: stock trading\n- **Model humans** to build helpful ML systems\n - Examples: recommendation systems, dialog systems\n\n### Preview\n\nWhat to get out of this talk:\n- Intuition for programming in a PPL\n- Core PPL concepts\n- Why are PPLs uniquely suited for modeling agents?\n- Idioms for writing agents as PPs\n- How do RL and PP relate?\n\nWhat not to expect:\n- Lots of applications\n- Production-ready systems\n\n## Probabilistic programming basics\n\n### Our language: WebPPL\n\nTry it at [webppl.org](http://webppl.org)\n\n### A functional subset of JavaScript\n\nWhy JS?\n- Fast\n- Rich ecosystem\n- Actually a nice language underneath all the cruft\n- Runs locally via node.js, but also in browser:\n - [SmartPages](https://stuhlmueller.org/smartpages/)\n - [Image inference viz](http://dippl.org/examples/vision.html)\n - [Spaceships](http://dritchie.github.io/web-procmod/)\n - [Agent viz](http://agentmodels.org/chapters/3b-mdp-gridworld.html#hiking-in-gridworld)\n\n~~~~\nvar xs = [1, 2, 3, 4];\n\nvar square = function(x) {\n return x * x;\n};\n\nmap(square, xs);\n~~~~\n\n### Distributions and sampling\n\nDocs: [distributions](http://docs.webppl.org/en/dev/distributions.html)\n\n#### Discrete distributions\n\nExamples: `Bernoulli`, `Categorical`\n\nSampling helpers: `flip`, `categorical`\n\n~~~~\nvar dist = Bernoulli({ p: 0.3 });\n\nvar flip = function(p) {\n return sample(Bernoulli({ p }));\n}\n\nflip(.3)\n~~~~\n\n#### Continuous distributions\n\nExamples: `Gaussian`, `Beta`\n\n~~~~\nvar dist = Gaussian({ \n mu: 1,\n sigma: 0.5\n});\n\nviz(repeat(1000, function() { return sample(dist); }));\n~~~~\n\n#### Building complex distributions out of simple parts\n\nExample: geometric distribution\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n};\n\nviz(repeat(100, function() { return geometric(.5); }));\n~~~~\n\n### Inference\n\n#### Reifying distributions\n\n`Infer` reifies the geometric distribution so that we can compute probabilities:\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n};\n\nvar model = function() {\n return geometric(.5);\n};\n\nvar dist = Infer({\n model,\n maxExecutions: 100\n});\n\nviz(dist);\n\nMath.exp(dist.score(3))\n~~~~\n\n#### Computing conditional distributions\n\nExample: inferring the weight of a geometric distribution\n\n~~~~\nvar geometric = function(p) {\n if (flip(p)) {\n return 0;\n } else {\n return 1 + geometric(p);\n }\n}\n\nvar model = function() {\n var u = uniform(0, 1);\n var x = geometric(u);\n condition(x < 4);\n return u;\n}\n\nvar dist = Infer({\n model,\n method: 'rejection',\n samples: 1000\n})\n\ndist\n~~~~\n\n#### Technical note: three ways to condition\n\n~~~~\nvar model = function() {\n var p = flip(.5) ? 
0.5 : 1;\n var coin = Bernoulli({ p });\n\n var x = sample(coin);\n condition(x === true);\n \n// observe(coin, true);\n \n// factor(coin.score(true));\n \n return { p };\n}\n\nviz.table(Infer({ model }));\n~~~~\n\n#### A slightly less toy example: regression\n\nDocs: [inference algorithms](http://docs.webppl.org/en/master/inference/methods.html)\n\n~~~~\nvar xs = [1, 2, 3, 4, 5];\nvar ys = [2, 4, 6, 8, 10];\n\nvar model = function() {\n var slope = gaussian(0, 10);\n var offset = gaussian(0, 10);\n var f = function(x) {\n var y = slope * x + offset;\n return Gaussian({ mu: y, sigma: .1 })\n };\n map2(function(x, y){\n observe(f(x), y)\n }, xs, ys)\n return { slope, offset };\n}\n\nInfer({\n model,\n method: 'MCMC',\n kernel: {HMC: {steps: 10, stepSize: .01}},\n samples: 2000,\n})\n~~~~\n\n## Agents as probabilistic programs\n\n### Deterministic choices\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar outcome = function(action) {\n if (action === 'italian') {\n return 'pizza';\n } else {\n return 'steak frites';\n }\n};\n\nvar actionDist = Infer({ \n model() {\n var action = uniformDraw(actions);\n condition(outcome(action) === 'pizza');\n return action;\n }\n});\n\nactionDist\n~~~~\n\n### Expected utility\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n var nextStates = ['bad', 'good', 'spectacular'];\n var nextProbs = ((action === 'italian') ? \n [0.2, 0.6, 0.2] : \n [0.05, 0.9, 0.05]);\n return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n var table = { \n bad: -10, \n good: 6, \n spectacular: 8 \n };\n return table[state];\n};\n\nvar expectedUtility = function(action) {\n var utilityDist = Infer({\n model: function() {\n var nextState = transition('initialState', action);\n var u = utility(nextState);\n return u;\n }\n });\n return expectation(utilityDist);\n};\n\nmap(expectedUtility, actions);\n~~~~\n\n### Softmax-optimal decision-making\n\n~~~~\nvar actions = ['italian', 'french'];\n\nvar transition = function(state, action) {\n var nextStates = ['bad', 'good', 'spectacular'];\n var nextProbs = ((action === 'italian') ? 
\n [0.2, 0.6, 0.2] : \n [0.05, 0.9, 0.05]);\n return categorical(nextProbs, nextStates);\n};\n\nvar utility = function(state) {\n var table = { \n bad: -10, \n good: 6, \n spectacular: 8 \n };\n return table[state];\n};\n\nvar alpha = 1;\n\nvar agent = function(state) {\n return Infer({ \n model() {\n\n var action = uniformDraw(actions);\n \n var expectedUtility = function(action) {\n var utilityDist = Infer({\n model: function() {\n var nextState = transition('initialState', action);\n var u = utility(nextState);\n return u;\n }\n });\n return expectation(utilityDist);\n };\n \n var eu = expectedUtility(action);\n \n factor(eu);\n \n return action;\n \n }\n });\n};\n\nagent('initialState');\n~~~~\n\n## Sequential decision problems\n\n- [Restaurant Gridworld](http://agentmodels.org/chapters/3a-mdp.html) (1, last)\n- Structure of expected utility recursion\n- Dynamic programming\n\n\n~~~~\nvar act = function(state) {\n return Infer({ model() {\n var action = uniformDraw(stateToActions(state));\n var eu = expectedUtility(state, action);\n factor(eu);\n return action;\n }});\n};\n\nvar expectedUtility = function(state, action){\n var u = utility(state, action);\n if (isTerminal(state)){\n return u; \n } else {\n return u + expectation(Infer({ model() {\n var nextState = transition(state, action);\n var nextAction = sample(act(nextState));\n return expectedUtility(nextState, nextAction);\n }}));\n }\n};\n~~~~\n\n- [Hiking Gridworld](http://agentmodels.org/chapters/3b-mdp-gridworld.html) (1, 2, 3, last)\n- Expected state-action utilities (Q values)\n- [Temporal inconsistency](http://agentmodels.org/chapters/5b-time-inconsistency.html) in Restaurant Gridworld\n \n\n## Reasoning about agents\n\n- [Learning about preferences from observations](http://agentmodels.org/chapters/4-reasoning-about-agents.html) (1 & 2)\n\n## Multi-agent models\n\n### A simple example: Coordination games\n\n~~~~\nvar locationPrior = function() {\n if (flip(.55)) {\n return 'popular-bar';\n } else {\n return 'unpopular-bar';\n }\n}\n\nvar alice = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n var bobLocation = sample(bob(depth - 1));\n condition(myLocation === bobLocation);\n return myLocation;\n }});\n});\n\nvar bob = dp.cache(function(depth) {\n return Infer({ model() {\n var myLocation = locationPrior();\n if (depth === 0) {\n return myLocation;\n } else {\n var aliceLocation = sample(alice(depth));\n condition(myLocation === aliceLocation);\n return myLocation;\n }\n }});\n});\n\nalice(5)\n~~~~\n\n### Other examples\n\n- [Game playing: tic-tac-toe](http://agentmodels.org/chapters/7-multi-agent.html)\n- [Language understanding](http://agentmodels.org/chapters/7-multi-agent.html)\n\n## Reinforcement learning\n\n### Algorithms vs Models\n\n- Models: encode world knowledge\n - PPLs suited for expressing models\n- Algorithms: encode mechanisms (for inference, optimization)\n - RL is mostly about algorithms\n- But some algorithms can be expressed using PPL components\n\n### Inference vs. 
Optimization\n\n~~~~\nvar k = 3; // number of heads\nvar n = 10; // number of coin flips\n\nvar model = function() {\n  var p = sample(Uniform({ a: 0, b: 1}));\n  var dist = Binomial({ p, n });\n  observe(dist, k);\n  return p;\n};\n\nvar dist = Infer({ \n  model,\n  method: 'MCMC',\n  samples: 100000,\n  burn: 1000\n});\n\nexpectation(dist);\n~~~~\n\n~~~~\nvar k = 3; // number of heads\nvar n = 10; // number of coin flips\n\nvar model = function() {\n  var p = Math.sigmoid(modelParam({ name: 'p' }));\n  var dist = Binomial({ p, n });\n  observe(dist, k);\n  return p;\n};\n\nOptimize({\n  model,\n  steps: 1000,\n  optMethod: { sgd: { stepSize: 0.01 }}\n});\n\nMath.sigmoid(getParams().p);\n~~~~\n\n\n\n### Policy Gradient\n\n~~~~\n///fold:\nvar numArms = 10;\n\nvar meanRewards = map(\n  function(i) {\n    if ((i === 7) || (i === 3)) {\n      return 5;\n    } else {\n      return 0;\n    }\n  },\n  _.range(numArms));\n\nvar blackBox = function(action) {\n  var mu = meanRewards[action];\n  var u = Gaussian({ mu, sigma: 0.01 }).sample();\n  return u;\n};\n///\n\n// actions: [0, 1, 2, ..., 9]\n\n// blackBox: action -> utility\n\nvar agent = function() {\n  var ps = softmax(modelParam({ dims: [numArms, 1], name: 'ps' }));\n  var action = sample(Discrete({ ps }));\n  var utility = blackBox(action);\n  factor(utility);\n  return action;\n};\n\n\nOptimize({ model: agent, steps: 10000 });\n\nvar params = getParams();\nviz.bar(\n  _.range(10),\n  _.flatten(softmax(params.ps[0]).toArray()));\n~~~~\n\n## Conclusion\n\nWhat to get out of this talk, revisited:\n\n- **Intuition for programming in a PPL**\n- **Core PPL concepts**\n  - Distributions & samplers\n  - Inference turns samplers into distributions\n  - `sample` turns distributions into samples\n  - Optimization fits free parameters\n- **Idioms for writing agents as probabilistic programs**\n  - Planning as inference\n  - Sequential planning via recursion into the future\n  - Multi-agent planning via recursion into other agents' minds\n- **Why are PPLs uniquely suited for modeling agents?**\n  - Agents are structured programs\n  - Planning via nested conditional distributions\n- **How do RL and PP relate?**\n  - Algorithms vs models\n  - Policy gradient as a PP\n\nWhere to go from here:\n- [WebPPL](http://webppl.org) (webppl.org)\n- [AgentModels](http://agentmodels.org) (agentmodels.org)\n- andreas@ought.com\n", "date_published": "2017-03-30T16:34:45Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "meetup-2017.md"} +{"id": "5e84ffd2457a2329bd197d1f4b94e5ab", "title": "Modeling Agents with Probabilistic Programs", "url": "https://agentmodels.org/chapters/6-efficient-inference.html", "source": "agentmodels", "source_type": "markdown", "text": "---\nlayout: chapter\ntitle: Efficient inference\ndescription: Difficulty of inference, in particular for POMDPs and inverse planning. Outline of inference strategies.\nstatus: stub\nis_section: true\nhidden: true\n---\n", "date_published": "2016-03-09T21:34:05Z", "authors": ["Owain Evans", "Andreas Stuhlmüller", "John Salvatier", "Daniel Filan"], "summaries": [], "filename": "6-efficient-inference.md"}