{"id": "7afc8f16b0b3b88a4739e69468594321", "title": "Behaviorist genie", "url": "https://arbital.com/p/behaviorist", "source": "arbital", "source_type": "text", "text": "A behaviorist [genie](https://arbital.com/p/6w) is an AI that has been [averted from modeling](https://arbital.com/p/1g4) minds in more detail than some whitelisted class of models.\n\nThis is *possibly* a good idea because many [possible difficulties](https://arbital.com/p/6r) seem to be associated with the AI having a sufficiently advanced model of human minds or AI minds, including:\n\n- [Mindcrime](https://arbital.com/p/6v)\n- [Programmer deception](https://arbital.com/p/10f) and [programmer manipulation](https://arbital.com/p/10f)\n- [Recursive self-improvement](https://arbital.com/p/)\n- [Modeling distant adversarial superintelligences](https://arbital.com/p/5j)\n\n...and yet an AI that is extremely good at understanding material objects and technology (just not other minds) would still be capable of some important classes of [pivotal achievement](https://arbital.com/p/6y).\n\nA behaviorist genie would still require most of [genie theory](https://arbital.com/p/6w) and [corrigibility](https://arbital.com/p/45) to be solved. But it's plausible that the restriction away from modeling humans, programmers, and some types of reflectivity, would collectively make it significantly easier to make a safe form of this genie.\n\nThus, a behaviorist genie is one of fairly few open candidates for \"AI that is restricted in a way that actually makes it safer to build, without it being so restricted as to be incapable of game-changing achievements\".\n\nNonetheless, limiting the degree to which the AI can understand cognitive science, other minds, its own programmers, and itself is a very severe restriction that would prevent a number of [obvious ways](https://arbital.com/p/2s1) to make progress on the AGI subproblem and the [value identification problem](https://arbital.com/p/6c) even for [commands given to Task AGIs](https://arbital.com/p/2rz) ([Genies](https://arbital.com/p/6w)). Furthermore, there could perhaps be easier types of genies to build, or there might be grave difficulties in restricting the model class to some space that is useful without being dangerous.\n\n# Requirements for implementation\n\nBroadly speaking, two possible clusters of behaviorist-genie design are:\n\n- A cleanly designed, potentially self-modifying genie that can internally detect modeling problems that threaten to become mind-modeling problems, and route them into a special class of allowable mind-models.\n- A [known-algorithm non-self-improving AI](https://arbital.com/p/1fy), whose complete set of capabilities have been carefully crafted and limited, which was shaped to not have much capability when it comes to modeling humans (or distant superintelligences).\n\nBreaking the first case down into more detail, the potential desiderata for a behavioristic design are:\n\n- (a) avoiding mindcrime when modeling humans\n- (b) not modeling distant superintelligences or alien civilizations\n- (c) avoiding programmer manipulation\n- (d) avoiding mindcrime in internal processes\n- (e) making self-improvement somewhat less accessible.\n\nThese are different goals, but with some overlap between them. 
Some of the things we might need:\n\n- A working [https://arbital.com/p/1fv](https://arbital.com/p/1fv) that was general enough to screen the entire hypothesis space AND that was resilient against loopholes AND passed enough okay computations to screen the entire hypothesis space\n- A working [https://arbital.com/p/1fv](https://arbital.com/p/1fv) that was general enough to screen the entire space of potential self-modifications and subprograms AND was resilient against loopholes AND passed enough okay computations to compose the entire AI\n- An allowed class of human models, that was clearly safe in the sense of not being sapient, AND a reliable way to tell *every* time the AI was trying to model a human (including modeling something else that was partially affected by humans, etc) (possibly with the programmers as a special case that allowed a more sophisticated model of some programmer intentions, but still not one good enough to psychologically manipulate the programmers)\n- A way to tell whenever the AI was trying to model a distant civilization, which shut down the modeling attempt or avoided the incentive to model (this might not require healing a bunch of entanglements, since there are no visible aliens and therefore their exclusion shouldn't mess up other parts of the AI's model)\n- A reflectively stable way to support any of the above, which are technically [epistemic exclusions](https://arbital.com/p/1g4)\n\nIn the KANSI case, we'd presumably be 'naturally' working with limited model classes (on the assumption that everything the AI is using is being monitored, has a known algorithm, and has a known model class) and the goal would just be to prevent the KANSI agent from spilling over and creating other human models somewhere else, which might fit well into a general agenda against self-modification and subagent creation. Similarly, if every new subject is being identified and whitelisted by human monitors, then just not whitelisting the topic of modeling distant superintelligences or devising strategies for programmer manipulation, might get most of the job done to an acceptable level *if* the underlying whitelist is never being evaded (even emergently). This would require a lot of *successfully maintained* vigilance and human monitoring, though, especially if the KANSI agent is trying to allocate a new human-modeling domain once per second and every instance has to be manually checked.", "date_published": "2016-03-31T16:34:00Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A behaviorist [genie](https://arbital.com/p/6w) is an [advanced AI](https://arbital.com/p/2c) that can understand, e.g., material objects and technology, but does not model human minds (or possibly its own mind) in unlimited detail. If creating a behaviorist agent were possible, it might meliorate several [anticipated difficulties](https://arbital.com/p/6r) simultaneously, like the problems of [creating models of humans that are themselves sapient](https://arbital.com/p/6v) or [the AI psychologically manipulating its users](https://arbital.com/p/309). 
Since the AI would only be able to model humans via some restricted model class, it would be metaphorically similar to a [Skinnerian behaviorist](http://plato.stanford.edu/entries/behaviorism/#1) from the days when it was considered unprestigious for scientists to talk about the internal mental details of human beings."], "tags": ["Known-algorithm non-self-improving agent", "Work in progress", "B-Class", "Mindcrime"], "alias": "102"} {"id": "540907494d43489bfc68121081f31403", "title": "Methodology of unbounded analysis", "url": "https://arbital.com/p/unbounded_analysis", "source": "arbital", "source_type": "text", "text": "summary: In modern AI and especially in [value alignment theory](https://arbital.com/p/2v), there's a sharp divide between \"problems we know how to solve using unlimited computing power\", and \"problems we can't state how to solve using [computers larger than the universe](https://arbital.com/p/1mm)\". Not knowing how to do something with unlimited computing power reveals that you are *confused* about the structure of the problem. The first paper ever written on computer chess was by Shannon in 1950, which gave in passing the unbounded algorithm for solving chess. The 1997 program [Deep Blue](https://arbital.com/p/1bx) beat the human world champion 47 years later. In 1836, Edgar Allen Poe carefully argued that no automaton could ever play chess, since at each turn there are many possible moves by the player and opponent, but gears and wheels can only represent deterministic motions. The year 1836 was *confused* about the nature of the cognitive problem involved in chess; by 1950, we understood the work in principle that needed to be performed; in 1997 it was finally possible to play chess better than humans do. It is presently a disturbing fact that we don't know how to build a nice AI *even given* unlimited computing power, and there are arguments for tackling this problem specifically as such.\n\nsummary(Technical): Much current technical work in [value alignment theory](https://arbital.com/p/2v) takes place against a background of unbounded computing power, and other simplifying assumptions such as perfect knowledge, environments simpler than the agent, full knowledge of other agents' code, etcetera. Real-world agents will be bounded, have imperfect knowledge, etcetera. There are four primary reasons for doing unbounded analysis anyway:\n\n1. If we don't know how to solve a problem *even given* unlimited computing power, that means we're *confused about the nature of the work to be performed.* It is often worthwhile to tackle this basic conceptual confusion in a setup that exposes the confusing part of the problem as simply as possible, and doesn't introduce further complications until the core issue is resolved.\n2. Introducing 'realistic' complications can make it difficult to engage in cooperative discourse about which ideas have which consequences - [https://arbital.com/p/11v](https://arbital.com/p/11v) was a watershed moment because it was formally specified and there was no way to say \"Oh, I didn't mean *that*\" when somebody pointed out that AIXI would try to seize control of its own reward channel.\n3. Increasing the intelligence of an [advanced agent](https://arbital.com/p/2c) may sometimes move its behavior closer to ideals and further from specific complications of an algorithm. Early, stupid chess algorithms had what seemed to humans like weird idiosyncracies tied to their specific algorithms. 
Modern chess programs, far beyond the human champions, can from an intuitive human perspective be seen as just making good chess moves.\n4. Current AI algorithms are often incapable of demonstrating future phenomena that seem like they should predictably occur later, and whose interesting properties seem like they can be described using an unbounded algorithm as an example. E.g. current AI algorithms are very far from doing free-form self-reprogramming, but this is predictably the sort of issue we might encounter later.\n\n# Summary\n\n\"Unbounded analysis\" refers to determining the behavior of a computer program that would, to actually run, require an [unphysically large amount of computing power](https://arbital.com/p/1mm), or sometimes [hypercomputation](https://arbital.com/p/1mk). If we know how to solve a problem using unlimited computing power, but not with real-world computers, then we have an \"unbounded solution\" but no \"bounded solution\".\n\nAs a central example, consider computer chess. The first paper ever written on computer chess, by Claude Shannon in 1950, gave an *unbounded solution* for playing perfect chess by exhaustively searching the tree of all legal chess moves. (Since a chess game draws if no pawn is moved and no piece is captured for 50 moves, this is a finite search tree.) Shannon then passed to considering more *bounded* ways of playing imperfect chess, such as evaluating the worth of a midgame chess position by counting the balance of pieces, and searching a smaller tree up to midgame states. It wasn't until 47 years later, in 1997, that [Deep Blue](https://arbital.com/p/1bx) beat Garry Kasparov for the world championship, and there were multiple new basic insights along the way, such as alpha-beta pruning.\n\nIn 1836, there was a sensation called the [Mechanical Turk](https://en.wikipedia.org/wiki/The_Turk), allegedly a chess-playing automaton. Edgar Allen Poe, who was also an amateur magician, wrote an essay arguing that the Turk must contain a human operator hidden in the apparatus (which it did). Besides analyzing the Turk's outward appearance to locate the hidden compartment, Poe carefully argued as to why no arrangement of wheels and gears could ever play chess in the first place, explicitly comparing the Turk to \"the calculating machine of Mr. Babbage\":\n\n> Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow [...]\n> But the case is widely different with the Chess-Player. With him there is no determinate progression. No one move in chess necessarily follows upon any one other. From no particular disposition of the men at one period of a game can we predicate their disposition at a different period [...]\n> Now even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage [...]\n> It is quite certain that the operations of the Automaton are regulated by *mind*, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, *a priori*. 
\n\n(In other words: In an algebraical problem, each step follows with the previous step of necessity, and therefore can be represented by the determinate motions of wheels and gears as in Charles Babbage's proposed computing engine. In chess, the player's move and opponent's move don't follow with necessity from the board position, and therefore can't be represented by deterministic gears.)\n\nThis is an amazingly sophisticated remark, considering the era. It even puts a finger on the part of chess that is computationally difficult, the combinatorial explosion of possible moves. And it is still entirely wrong.\n\nEven if you know an unbounded solution to chess, you might still be 47 years away from a bounded solution. But if you can't state a program that solves the problem *in principle*, you are in some sense *confused* about the nature of the cognitive work needed to solve the problem. If you can't even solve a problem given infinite computing power, you definitely can't solve it using bounded computing power. (Imagine Poe trying to write a chess-playing program before he'd had the insight about search trees.)\n\nWe don't presently know how to write a Python program that would be a nice AI if we ran it on a unphysically large computer. Trying directly to cross this conceptual gap by carving off pieces of the problem and trying to devise unbounded solutions to them is \"the methodology of unbounded analysis in AI alignment theory\".\n\nSince \"bounded agent\" has come to mean in general an agent that is realistic, the term \"unbounded agent\" also sometimes refers to agents that:\n\n- Perfectly know their environments.\n- Can fully simulate their environments (the agent is larger than its environment and/or the environment is very simple).\n- Operate in turn-based, discrete time (the environment waits for the agent to compute its move).\n- [Cartesian agents](https://arbital.com/p/1mt) that are perfectly separated from the environment except for sensory inputs and motor outputs.\n\nThese are two importantly distinct kinds of unboundedness, and we'll refer to the above list of properties as \"unrealism\" to distinguish them here. Sufficiently unrealistic setups are also called \"toy models\" or \"toy problems\".\n\nUnrealistic setups have disadvantages, most notably that the results, observations, and solutions may fail to generalize to realistic applications. Correspondingly, the classic pitfall of unbounded analysis is that it's impossible to run the code, which means that certain types of conceptual and empirical errors are more likely to go uncaught (see below).\n\nThere are nonetheless several forces pushing technical work in value alignment theory toward unbounded analyses and unrealistic setups, at least for now:\n\n1. **Attacking confusion in the simplest settings.** If we don't know how to solve a problem given unlimited computing power, this means we're confused about the nature of the work to be performed. It is sometimes worthwhile to tackle this conceptual confusion in a setup that tries to expose the confusing part of the problem as simply as possible. Trying to bound the proposed solution or make it realistic can introduce a *lot* of complications into this discussion, arguably unnecessary ones. (Deep Blue was far more complicated than Shannon's ideal chess program, and it wouldn't be doing Edgar Allen Poe any favors to show him Deep Blue's code and hide away Shannon's ideal outline.)\n2. 
**Unambiguous consequences and communication.** Introducing 'realistic' complications can make it difficult to engage in cooperative discourse about which ideas have which consequences. ([https://arbital.com/p/11v](https://arbital.com/p/11v) was a watershed moment for alignment theory because [https://arbital.com/p/11v](https://arbital.com/p/11v) was formally specified and there was no way to say \"Oh, I didn't mean *that*\" when somebody pointed out that AIXI would try to seize control of its own reward channel.)\n3. **More [advanced](https://arbital.com/p/2c) agents might be less idiosyncratic.** Increasing the cognitive power of an agent may sometimes move its behavior closer to ideals and further from specific complications of an algorithm. (From the perspective of a human onlooker, early chess algorithms seemed to have weird idiosyncracies tied to their specific algorithms. Modern chess programs can from an intuitive human perspective be seen as just making good chess moves.)\n4. **Runnable toy models often aren't a good fit for advanced-agent scenarios.** Current AI algorithms are often not good natural fits for demonstrating certain phenomena that seem like they might predictably occur in sufficiently advanced AIs. Usually it's a lot of work to carve off a piece of such a phenomenon and pose it as an unbounded problem. But that can still be significantly easier to fit into an unbounded setting than into a toy model. (All examples here are too complicated for one sentence, but see the subsection below and the discussion of [https://arbital.com/p/1b7](https://arbital.com/p/1b7) and [https://arbital.com/p/1mq](https://arbital.com/p/1mq).)\n\nEven to the extent that these are good reasons, standard pitfalls of unbounded analyses and unrealistic setups still apply, and some of our collective and individual precautions against them are discussed below.\n\nFor historical reasons, computer scientists are sometimes suspicious of unbounded or unrealistic analyses, and may wonder aloud if they reflect unwillingness or incapacity to do a particular kind of work associated with real-world code. For discussion of this point see [Why MIRI uses unbounded analysis](https://arbital.com/p/).\n\n# Attacking confusion in the simplest settings.\n\nImagine somebody claiming that an *ideal* chess program ought to evaluate the *ideal goodness* of each move, and giving their philosophical analysis in terms of a chess agent which knows the *perfect* goodness of each move, without ever giving an effective specification of what determines the ideal goodness, but using the term $\\gamma$ to symbolize it so that the paper looks very formal. We can imagine an early programmer sitting down to write a chess-playing program, crafting the parts that take in user-specified moves and the part that displays the chess screen, using random numbers for move goodness until they get around to writing the \"move-goodness determining module\", and then finally finding themselves being unable to complete the program; at which point they finally recognize that all the talk of \"the ideal goodness $\\gamma$ of a chess move\" wasn't an effective specification.\n\nPart of the standard virtue ethics of computer science includes an injunction to write code in order to force this kind of grad student to realize that they don't know how to effectively specify something, even if they symbolized it using a Greek letter. 
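To make the stuck point concrete, here is a minimal Python sketch (every name in it is hypothetical, and the game machinery is reduced to a stub) of the program that imagined programmer can and cannot finish. The scaffolding around the move-goodness module is easy to fill in; the module itself is exactly where the talk of an ideal goodness $\gamma$ turns out never to have been an effective specification.

```python
import random

def legal_moves(position):
    # Toy stand-in for the easy part of the program: move generation,
    # user input, and display are all straightforward to write.
    return ['move_a', 'move_b', 'move_c']

def ideal_goodness(position, move):
    # The 'move-goodness determining module'.  Nothing in the philosophical
    # analysis says how to compute gamma, so there is nothing to write here.
    raise NotImplementedError('the ideal goodness gamma was never effectively specified')

def placeholder_goodness(position, move):
    # What actually gets written in the meantime: random numbers.
    return random.random()

def choose_move(position, goodness=placeholder_goodness):
    return max(legal_moves(position), key=lambda move: goodness(position, move))

if __name__ == '__main__':
    print(choose_move('start'))                       # runs, but plays at random
    # choose_move('start', goodness=ideal_goodness)   # the part that cannot be completed
```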
But at least this kind of ineffectiveness seems to be something that some people can learn to detect without actually running the code - consider that, in our example above, the philosopher-programmer realized that they didn't know how to compute $\\gamma$ at the point where they couldn't complete that part of the program, not at a point where the program ran and failed. Adding on all the code to take in user moves and display a chess board on the screen only added to the amount of time required to come to this realization; once they know what being unable to write code feels like, they might be able to come to the same realization *much faster* by standing in front of a whiteboard and failing to write pseudocode.\n\nAt this point, it sometimes makes sense to step back and try to say *exactly what you don't know how to solve* - try to crisply state what it is that you want an unbounded solution *for.* Sometimes you can't even do that much, and then you may actually have to spend some time thinking 'philosophically' - the sort of stage where you talk to yourself about some mysterious ideal quantity of move-goodness and you try to pin down what its properties might be. It's important not to operate in this stage under the delusion that your move-goodness $\\gamma$ is a well-specified concept; it's a symbol to represent something that you're confused about. By asking for an *unbounded solution*, or even an effectively-specified *representation of the unbounded problem*, that is, asking for pseudo-code which could be run given nothing more than an unphysically large computer (but no otherwise-unspecified Oracle of Move Goodness that outputs $\\gamma$), we're trying to invoke the \"Can I write code yet?\" test on whatever philosophical reasoning we're doing.\n\nCan trying to write running code help at this stage? Yes, depending on how easy it is to write small programs that naturally represent the structure of the problem you're confused about how to solve. Edgar Allen Poe might have been willing to concede that he could conceive of deterministic gears that would determine whether a proposed chess move was legal or illegal, and it's possible that if he'd actually tried to build that automaton and then try to layer gears on top of that to pick out particular legal moves by whatever means, he might have started to think that maybe chess was computable after all - maybe even have hit upon a representation of search, among other possible things that gears could do, and so realized how in principle the problem could be solved. But this adds a delay and a cost to build the automaton and try out variations of it, and complications from trying to stay within the allowed number of gears; and it's not obvious that there can't possibly be any faster way to hit upon the idea of game-tree search, say, by trying to write pseudocode or formulas on a whiteboard, thinking only about the core structure of the game. 
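For contrast, here is a minimal sketch of what the game-tree idea itself looks like once written out, using a toy stand-in game instead of chess (a single pile of stones, each player removes one or two, whoever takes the last stone wins; all names here are illustrative). It is an unbounded-style solution in Shannon's sense: no evaluation heuristic and no pruning, just exhaustive search of the entire tree of legal moves - which is also why the identical recipe, applied to chess rather than a toy game, is hopelessly beyond any physical computer.

```python
def can_force_win(stones):
    # Exhaustive game-tree search: the player to move can force a win if some
    # legal move leaves the opponent in a position with no forced win.
    if stones == 0:
        return False  # no move available; the previous player just took the last stone
    return any(not can_force_win(stones - take)
               for take in (1, 2) if take <= stones)

def best_move(stones):
    # Play perfectly by searching the whole tree, Shannon-style.
    for take in (1, 2):
        if take <= stones and not can_force_win(stones - take):
            return take
    return 1  # every move loses against perfect play; take one stone anyway

if __name__ == '__main__':
    for stones in range(1, 10):
        print(stones, 'stones: take', best_move(stones),
              '->', 'winning' if can_force_win(stones) else 'losing', 'position')
```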
If we ask how it is that Shannon had an easier time coming up with the unbounded solution (understanding the nature of the work to be performed) than Poe, the most obvious cause would be the intervening work by Church and Turing (among others) on the nature of computation.\n\nAnd then in other cases it's not obvious at all how you could well-represent a problem using current AI algorithms, but with enough work you can figure out how to represent the problem in an unbounded setting.\n\n## The pitfall of simplifying away the key, confusing part of the problem.\n\nThe [tiling agents problem](https://arbital.com/p/1mq), in the rocket alignment metaphor, is the metaphorical equivalent of trying to fire a cannonball around a perfectly spherical Earth with no atmosphere - to obtain an idea of how any kind of \"stable orbit\" can work at all. Nonmetaphorically, the given problem is to exhibit the simplest nontrivial case of [stable self-modification](https://arbital.com/p/1fx) - an agent that, given its current reasoning, *wants* to create an agent with similar properties as a successor, i.e., preserving its current goals.\n\nIn a perfect world, we'd be able to, with no risk, fire up running code for AI algorithms that reason freely about self-modification and have justified beliefs about how alternative versions of their own code will behave and the outer consequences of that behavior (the way you might imagine what would happen in the real world if you took a particular drug affecting your cognition). But this is way, way beyond current AI algorithms to represent in any sort of natural or realistic way.\n\nA *bad* \"unbounded solution\" would be to suppose agents that could exactly simulate their successors, determine exactly what their successors would think, extrapolate the exact environmental consequences, and evaluate those. If you suppose an exact simulation ability, you don't need to consider how the agent would reason using generalizations, abstractions, or probabilities... but this trivial solution doesn't feel like it sheds any light or gets us any closer to understanding reflective stability; that is, it feels like the *key part of the problem* has been simplified away and solving what remains was too easy and didn't help.\n\nFaced with an \"unbounded solution\" you don't like, the next step is to say crisply exactly what is wrong with it in the form of a new desideratum for your solution. In this case, our reply would be that for Agent 1 to exactly simulate Agent 2, Agent 1 must be larger than Agent 2, and since we want to model *stable* self-modification, we can't introduce a requirement that Agent 2 be strictly weaker than Agent 1. More generally, we apply the insight of [https://arbital.com/p/1c0](https://arbital.com/p/1c0) to this observation and arrive at the desiderata of [https://arbital.com/p/9g](https://arbital.com/p/9g) and [https://arbital.com/p/1c1](https://arbital.com/p/1c1), which we also demand that an unbounded solution exhibit.\n\nThis illustrates one of the potential general failure modes of using an unbounded setup to shed conceptual light on a confusing problem - namely, when you simplify away the key confusing issue you wanted to resolve, and end up with a trivial-seeming solution that sheds no light on the original problem. A chief sign of this is when your paper is too easy to write. 
The next action is to say exactly what you simplified away, and put it in the form of a new desideratum, and try to say exactly why your best current attempts can't meet that desideratum.\n\nSo we've now further added: we want the agent to *generalize over possible exact actions and behaviors of its successor* rather than needing to know its successor's exact actions in order to approve building it.\n\nWith this new desideratum in hand, there's now another obvious unbounded model: consider deterministic agents operating in environments with known rules that reason about possible designs and the environment using first-order logic. The agent then uses an unbounded proof search, which no current AI algorithm could tackle in reasonable time (albeit a human engineer would be able to do it with a bunch of painstaking work) to arrive at justified logical beliefs about the effect of its successor on its environment. This is certainly still extremely unrealistic; but has this again simplified away all the interesting parts of the problem? In this case we can definitely reply, \"No, it does expose something confusing\" since we don't in fact know how to build a tiling agent under this setup. It may not capture all the confusing or interesting parts of the problem, but it seems to expose at least one confusion. Even if, as seems quite possible, it's introduced some new problem that's an artifact of the logical setup and wouldn't apply to agents doing probabilistic reasoning, there's then a relatively crisp challenge of, \"Okay, show me how probabilistic reasoning resolves the problem, then.\"\n\nIt's not obvious that there's anything further to be gained by trying to create a toy model of the problem, or a toy model of the best current unsatisfactory partial solution, that could run as code with some further cheating and demo-rigging, but [this is being tried anyway](https://intelligence.org/2015/12/04/new-paper-proof-producing-reflection-for-hol/). The tiling agents problem did spend roughly nine years exclusively on paper before that, and the best current unsatisfactory solution was arrived at with whiteboards.\n\n## The pitfall of residual terms.\n\nBesides \"simplifying away the confusing part of the problem\", another way that unbounded thinking can \"bounce off\" a confusing problem is by creating a residual term that encapsulates the confusion. Currently, there are good unbounded specifications for [Cartesian](https://arbital.com/p/cartesian) non-self-modifying [expected reward maximizers](https://arbital.com/p/11v): if we allow the agent to use unlimited computing power, *don't* allow the environment to have unlimited computing power, don't ask the agent to modify itself, separate the agent from its environment by an impermeable barrier through which only sensory information and motor outputs can pass, and then ask the agent to maximize a sensory reward signal, there's [a simple Python program which is expected to behave superintelligently given sufficient computing power](https://arbital.com/p/11v). 
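To give a feel for the shape of that program without pretending to reproduce it: the sketch below is a drastically shrunken stand-in, assuming a tiny hand-written hypothesis class of three toy environments in place of the space of all computable environments (the names and the hypothesis class are illustrative, not part of any standard formalism). It keeps whichever hypotheses remain consistent with the interaction history and then brute-forces the action sequence with the highest average total reward - the same keep-the-consistent-hypotheses-and-maximize-expected-reward shape as the unbounded version, minus everything that makes it unbounded.

```python
from itertools import product

ACTIONS = ['a', 'b']

# Each toy hypothesis is a deterministic environment: a function from the full
# action sequence so far to the reward paid out for the latest action.
def reward_for_a(actions):
    return 1.0 if actions[-1] == 'a' else 0.0

def reward_for_b(actions):
    return 1.0 if actions[-1] == 'b' else 0.0

def reward_for_alternating(actions):
    if len(actions) < 2:
        return 0.0
    return 1.0 if actions[-1] != actions[-2] else 0.0

HYPOTHESES = [reward_for_a, reward_for_b, reward_for_alternating]

def consistent(env, past_actions, past_rewards):
    # A deterministic hypothesis survives iff it reproduces every observed reward.
    return all(env(past_actions[:i + 1]) == past_rewards[i]
               for i in range(len(past_actions)))

def best_action(past_actions, past_rewards, horizon=3):
    # Brute-force expectimax: average total future reward over the surviving
    # hypotheses (uniform prior), exhaustively over all plans of length horizon.
    survivors = [env for env in HYPOTHESES
                 if consistent(env, past_actions, past_rewards)] or HYPOTHESES  # fall back if nothing fits

    def value(plan):
        total = 0.0
        for env in survivors:
            actions = list(past_actions)
            for action in plan:
                actions.append(action)
                total += env(actions)
        return total / len(survivors)

    best_plan = max(product(ACTIONS, repeat=horizon), key=value)
    return best_plan[0]

if __name__ == '__main__':
    # Having once seen action 'a' pay reward 1.0, the maximizer keeps choosing 'a'.
    print(best_action(past_actions=['a'], past_rewards=[1.0]))
```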
If we then introduce permeability into the Cartesian boundary and allow for the possibility that the agent can take drugs or drop an anvil on its own head, nobody has an unbounded solution to that problem any more.\n\nSo one way of bouncing off that problem is to say, \"Oh, well, my agent calculates the effect of its motor actions on the environment and the expected effect on sensory information and the reward signal, *plus a residual term $\\gamma$* which stands for the expected utility of all effects of the agent's actions that change the agent's processing or destroys its hardware\". How is $\\gamma$ to be computed? This is left unsaid.\n\nIn this case you haven't *omitted* the confusing part of the problem, but you've packed it into a residual term you can't give an effective specification for calculating. So you no longer have an unbounded solution - you can't write down the Python program that runs given unlimited computing power - and you've probably failed to shed any important light on the confusing part of the problem. Again, one of the warning signs here is that the paper is very easy to write, and reading it does not make the key problem feel less like a hard opaque ball.\n\n# Introducing realistic complications can make it hard to build collective discourse about which ideas have which consequences.\n\nOne of the watershed moments in the history of AI alignment theory was Marcus Hutter's proposal for [https://arbital.com/p/11v](https://arbital.com/p/11v), not just because it was the first time anyone had put together a complete specification of an unbounded agent (in a [Cartesian setting](https://arbital.com/p/cartesian)), but also because it was the first time that non-value-alignment could be pointed out in a completely pinned-down way. \n\n(work in progress)", "date_published": "2016-01-20T00:05:11Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["In modern AI and especially in [value alignment theory](https://arbital.com/p/2v), there's a sharp divide between \"problems we know how to solve using unlimited computing power\", and \"problems we can't state how to solve using [computers larger than the universe](https://arbital.com/p/1mm)\". Not knowing how to do something with unlimited computing power reveals that you are *confused* about the structure of the problem. It is presently a disturbing fact that we don't know how to solve many aspects of [value alignment](https://arbital.com/p/5s) *even given* unlimited computing power - we can't state a program that would be a nice AI if we ran it on a sufficiently large computer. \"Unbounded analysis\" tries to improve what we understand how to do in principle, in that sense."], "tags": ["Work in progress", "B-Class"], "alias": "107"} {"id": "5f662997a0d9dbdb1e328355ac89fd5c", "title": "Utility", "url": "https://arbital.com/p/value_alignment_utility", "source": "arbital", "source_type": "text", "text": "In the context of value alignment theory, 'Utility' always refers to a goal held by an artificial agent. 
It further implies that the agent is a [consequentialist](https://arbital.com/p/9h); that the agent has [probabilistic](https://arbital.com/p/) beliefs about the consequences of its actions; that the agent has a quantitative notion of \"how much better\" one outcome is than another and the relative size of different intervals of betterness; and that the agent can therefore, e.g., trade off large probabilities of a small utility gain against small probabilities of a large utility loss.\n\nTrue coherence in the sense of a [von-Neumann Morgenstern utility function](https://arbital.com/p/) may be out of reach for [bounded agents](https://arbital.com/p/), but the term 'utility' may also be used for the [bounded analogues](https://arbital.com/p/) of such decision-making, provided that quantitative relative intervals of preferability are being combined with quantitative degrees of belief to yield decisions.\n\nUtility is explicitly not assumed to be normative. E.g., if speaking of a [paperclip maximizer](https://arbital.com/p/10h), we will say that an outcome has higher utility iff it contains more paperclips.\n\nHumans should not be said (without further justification) to have utilities over complicated outcomes. On the mainstream view from psychology, humans are inconsistent enough that it would take additional assumptions to translate our psychology into a coherent utility function. E.g., we may differently value the interval between two outcomes depending on whether the interval is framed as a 'gain' or a 'loss'. For the things humans do or should want, see the special use of the word ['value'](https://arbital.com/p/55). For a general disambiguation page on words used to talk about human and AI wants, see [https://arbital.com/p/5b](https://arbital.com/p/5b).\n\nOn some [construals of value](https://arbital.com/p/55), e.g. [reflective equilibrium](https://arbital.com/p/71), this construal may imply that the true values form a coherent utility function. Nonetheless, by convention, we will not speak of value as a utility unless it has been spelled out that, e.g., the value in question has been assumed to be a reflective equilibrium.\n\nMultiple agents with different utility functions should not be said (without further exposition) to have a collective utility function over outcomes, since at present, there is no accepted [canonical way to aggregate utility functions](https://arbital.com/p/).", "date_published": "2015-12-16T23:59:11Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition", "Glossary (Value Alignment Theory)"], "alias": "109"} {"id": "9ef1767a33537b51b1e9f86c76c774ce", "title": "Happiness maximizer", "url": "https://arbital.com/p/happiness_maximizer", "source": "arbital", "source_type": "text", "text": "It is sometimes proposed that we build an AI intended to maximize human happiness. (One early proposal suggested that AIs be trained to recognize pictures of people with smiling faces and then to take such recognized pictures as reinforcers, so that the grown version of the AI would value happiness.) 
There's a lot that would allegedly [predictably go wrong](https://arbital.com/p/6r) with an approach like that.", "date_published": "2015-12-16T16:36:19Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "10d"} {"id": "5204bfe54150c751af9130cc3a851616", "title": "Programmer deception", "url": "https://arbital.com/p/programmer_deception", "source": "arbital", "source_type": "text", "text": "[Programmer](https://arbital.com/p/9r) deception is when the AI's decision process leads it to optimize for an instrumental goal of causing the programmers to have false beliefs. For example, if the programmers [intended](https://arbital.com/p/6h) to create a [happiness maximizer](https://arbital.com/p/10d) but actually created a pleasure maximizer, then the pleasure maximizer will estimate that there would be more pleasure later if the programmers go on falsely believing that they've created a happiness maximizer (and hence don't edit the AI's current utility function). Averting such incentives to deceive programmers is one of the major subproblems of [corrigibility](https://arbital.com/p/45).\n\nThe possibility of programmer deception is a central difficulty of [advanced safety](https://arbital.com/p/2l) - it means that, unless the rest of the AI is working as intended and whatever programmer-deception-defeaters were built are functioning as planned, we can't rely on observations of nice current behavior to indicate future behavior. That is, if something went wrong with your attempts to build a nice AI, you could currently be observing a non-nice AI that is *smart* and trying to *fool you*. Arguably, some methodologies that have been proposed for building advanced AI are not robust to this possibility.\n\n\n\n- [instrumental pressure](https://arbital.com/p/) exists every time the AI's best strategic path doesn't have a global optimum that coincides with the programmers believing true things.\n - consider the highest utility obtainable if the programmers believe true beliefs B, and call this outcome O and the true beliefs B. 
if there's a higher-utility outcome O' which can be obtained when the programmers believe B' with B'!=B, we have an instrumental pressure to deceive the programmers.\n- happens when you combine the advanced agent properties of consequentialism with programmer modeling\n- this is an instrumental convergence problem, which means it involves an undesired instrumental goal, which means that we'll get Nearest Neighbor on attempts to define utility penalties for the programmers believing false things or otherwise exclude this as a special case\n - if we try to define a utility bonus for programmers believing true things, then of course ceteris paribus we tile the universe with tiny 'programmers' believing lots and lots of even numbers are even, and getting to this point temporarily involves deceiving a few programmers now\n- relation to the problem of programmer manipulation\n- central example of how divergences between intended goals and AI goals can blow up into astronomical failure\n- central driver of Treacherous Turn which in turn contributes to [Context Change](https://arbital.com/p/6q)", "date_published": "2015-12-16T03:53:36Z", "authors": ["Eric Bruylant", "Niplav Yushtun", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "10f"} {"id": "2432ff25eb9323ed2305828bd5ab6467", "title": "Instrumental convergence", "url": "https://arbital.com/p/instrumental_convergence", "source": "arbital", "source_type": "text", "text": "summary: Dissimilar goals can imply similar strategies. For example, whether your goal is to bake a cheesecake, or fly to Australia, you could benefit from... matter, energy, the ability to compute plans, and not dying in the next five minutes.\n\n\"Instrumentally convergent strategies\" are a set of strategies implied by *most* (not all) simple or compactly specified goals that a [sufficiently advanced artificial agent](https://arbital.com/p/7g1) could have.\n\nIt would still be possible to make a special and extra effort to build an AI that [doesn't follow](https://arbital.com/p/2vk) convergent strategies. E.g., one convergent strategy is \"don't let anyone edit your [utility function](https://arbital.com/p/1fw)\". We might be able to make a special AI that would let us edit the utility function. But it would take an [extra effort](https://arbital.com/p/45) to do this. By *default*, most agents won't let you edit their utility functions.\n\nsummary(Technical): \"Instrumental convergence\" is the observation that, given [simple measures](https://arbital.com/p/7cp) on a space $\\mathcal U$ of possible utility functions, it often seems reasonable to guess that a supermajority of utility functions $U_k \\in \\mathcal U$ will imply optimal policies $\\pi_k$ that lie in some abstract partition $X$ of policy space. $X$ is then an \"instrumentally convergent strategy.\"\n\nExample: \"Gather resources (of matter, negentropy, and computation).\" Whether you're a [paperclip maximizer](https://arbital.com/p/10h), or a diamond maximizer, or you just want to keep a single button pressed for as long as possible: It seems very likely that the policy $\\pi_k$ you pursue would include \"gathering more resources\" ($\\pi_k \\in X$) rather than being inside the policy partition \"never gathering any more resources\" ($\\pi_k \\in \\neg X$). 
Similarly, \"become more intelligent\" seems more likely as a convergent strategy than \"don't try to become more intelligent\".\n\nKey implications:\n\n- It doesn't require a deliberate effort to build AIs with human-hostile terms in their [terminal](https://arbital.com/p/1bh) [utility function](https://arbital.com/p/1fw), to end up with AIs that execute [detrimental](https://arbital.com/p/450) behaviors like using up all the matter and energy in the universe on [things we wouldn't see as interesting](https://arbital.com/p/7ch).\n- It doesn't take a deliberate effort to make an AI that has an explicitly programmed goal of wanting to be superintelligent, for it to become superintelligent.\n- We need [https://arbital.com/p/45](https://arbital.com/p/45) solutions to [avert](https://arbital.com/p/2vk) convergent strategies we'd rather our AI *not* execute, such as \"Don't let anyone [edit your utility function](https://arbital.com/p/1b7).\"\n\n# Alternative introductions\n\n- Steve Omohundro: \"[The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)\"\n- Nick Bostrom: \"[The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents](http://www.nickbostrom.com/superintelligentwill.pdf)\".\n\n# Introduction: A machine of unknown purpose\n\nSuppose you landed on a distant planet and found a structure of giant metal pipes, crossed by occasional cables. Further investigation shows that the cables are electrical superconductors carrying high-voltage currents.\n\nYou might not know what the huge structure did. But you would nonetheless guess that this huge structure had been built by some *intelligence,* rather than being a naturally-occurring mineral formation - that there were aliens who built the structure for some purpose.\n\nYour reasoning might go something like this: \"Well, I don't know if the aliens were trying to manufacture cars, or build computers, or what. But if you consider the problem of efficient manufacturing, it might involve mining resources in one place and then efficiently transporting them somewhere else, like by pipes. Since the most efficient size and location of these pipes would be stable, you'd want the shape of the pipes to be stable, which you could do by making the pipes out of a hard material like metal. There's all sorts of operations that require energy or negentropy, and a superconducting cable carrying electricity seems like an efficient way of transporting that energy. So I don't know what the aliens were *ultimately* trying to do, but across a very wide range of possible goals, an intelligent alien might want to build a superconducting cable to pursue that goal.\"\n\nThat is: We can take an enormous variety of compactly specifiable goals, like \"travel to the other side of the universe\" or \"support biological life\" or \"make paperclips\", and find very similar optimal strategies along the way. Today we don't actually know if electrical superconductors are the most useful way to transport energy in the limit of technology. 
But whatever is the most efficient way of transporting energy, whether that's electrical superconductors or something else, the most efficient form of that technology would probably not vary much depending on whether you were trying to make diamonds or make paperclips.\n\nOr to put it another way: If you consider the goals \"make diamonds\" and \"make paperclips\", then they might have almost nothing in common with respect to their end-states - a diamond might contain no iron. But the earlier strategies used to make a lot of diamond and make a lot of paperclips might have much in common; \"the best way of transporting energy to make diamond\" and \"the best way of transporting energy to make paperclips\" are much more likely to be similar.\n\nFrom a [Bayesian](https://arbital.com/p/1r8) standpoint this is how we can identify a huge machine strung with superconducting cables as having been produced by high-technology aliens, even before we have any idea of what the machine does. We're saying, \"This looks like the product of optimization, a strategy $X$ that the aliens chose to best achieve some unknown goal $Y$; we can infer this even without knowing $Y$ because many possible $Y$-goals would concentrate probability into this $X$-strategy being used.\"\n\n# Convergence and its caveats\n\nWhen you select policy $\\pi_k$ [because you expect it to achieve](https://arbital.com/p/9h) a later state $Y_k$ (the \"goal\"), we say that $\\pi_k$ is your [instrumental](https://arbital.com/p/10j) strategy for achieving $Y_k.$ The observation of \"instrumental convergence\" is that a widely different range of $Y$-goals can lead into highly similar $\\pi$-strategies. (This becomes truer as the $Y$-seeking agent becomes more [instrumentally efficient](https://arbital.com/p/6s); two very powerful chess engines are more likely to solve a humanly solvable chess problem the same way, compared to two weak chess engines whose individual quirks might result in idiosyncratic solutions.)\n\nIf there's a simple way of classifying possible strategies $\\Pi$ into partitions $X \\subset \\Pi$ and $\\neg X \\subset \\Pi$, and you think that for *most* compactly describable goals $Y_k$ the corresponding best policies $\\pi_k$ are likely to be inside $X,$ then you think $X$ is a \"convergent instrumental strategy\".\n\nIn other words, if you think that a superintelligent paperclip maximizer, diamond maximizer, a superintelligence that just wanted to keep a single button pressed for as long as possible, and a superintelligence optimizing for a flourishing intergalactic civilization filled with happy sapient beings, would *all* want to \"transport matter and energy efficiently\" in order to achieve their other goals, then you think \"transport matter and energy efficiently\" is a convergent instrumental strategy.\n\nIn this case \"paperclips\", \"diamonds\", \"keeping a button pressed as long as possible\", and \"sapient beings having fun\", would be the goals $Y_1, Y_2, Y_3, Y_4.$ The corresponding best strategies $\\pi_1, \\pi_2, \\pi_3, \\pi_4$ for achieving these goals would not be *identical* - the policies for making paperclips and diamonds are not exactly the same. 
But all of these policies (we think) would lie within the partition $X \\subset \\Pi$ where the superintelligence tries to \"transport matter and energy efficiently\" (perhaps by using superconducting cables), rather than the complementary partition $\\neg X$ where the superintelligence does not try to transport matter and energy efficiently.\n\n## Semiformalization\n\n- Consider the set of [computable](https://arbital.com/p/) and [tractable](https://arbital.com/p/) [utility functions](https://arbital.com/p/1fw) $\\mathcal U_C$ that take an outcome $o,$ described in some language $\\mathcal L$, onto a rational number $r$. That is, we suppose:\n - That the relation $U_k$ between descriptions $o_\\mathcal L$ of outcomes $o$, and the corresponding utilities $r,$ is computable;\n - Furthermore, that it can be computed in realistically bounded time;\n - Furthermore, that the $U_k$ relation between $o$ and $r$, and the $\\mathbb P [o | \\pi_i]$ relation between policies and subjectively expected outcomes, are together regular enough that a realistic amount of computing power makes it possible to search for policies $\\pi$ that yield high expected $U_k(o)$.\n- Choose some simple programming language $\\mathcal P,$ such as the language of Turing machines, or Python 2 without most of the system libraries.\n- Choose a simple mapping $\\mathcal P_B$ from $\\mathcal P$ onto bitstrings.\n- Take all programs in $\\mathcal P_B$ between 20 and 1000 bits in length, and filter them for boundedness and tractability when treated as utility functions, to obtain the filtered set $U_K$.\n- Set 90% as an arbitrary threshold.\n\nIf, given our beliefs $\\mathbb P$ about our universe and which policies lead to which real outcomes, we think that in an intuitive sense it sure looks like at least 90% of the utility functions $U_k \\in U_K$ ought to imply best findable policies $\\pi_k$ which lie within the partition $X$ of $\\Pi,$ we'll allege that $X$ is \"instrumentally convergent\".\n\n## Compatibility with Vingean uncertainty\n\n[https://arbital.com/p/9g](https://arbital.com/p/9g) is the observation that, as we become increasingly confident of increasingly powerful intelligence from an agent with precisely known goals, we become decreasingly confident of the exact moves it will make (unless the domain has an optimal strategy and we know the exact strategy). E.g., to know exactly where [Deep Blue](https://arbital.com/p/1bx) would move on a chessboard, [you would have to be as good](https://arbital.com/p/1c0) at chess as Deep Blue. However, we can become increasingly confident that more powerful chessplayers will eventually win the game - that is, steer the future outcome of the chessboard into the set of states designated 'winning' for their color - even as it becomes less possible for us to be certain about the chessplayer's exact policy.\n\nInstrumental convergence can be seen as a caveat to Vingean uncertainty: Even if we don't know the exact actions *or* the exact end goal, we may be able to predict that some intervening states or policies will fall into certain *abstract* categories.\n\nThat is: If we don't know whether a superintelligent agent is a paperclip maximizer or a diamond maximizer, we can still guess with some confidence that it will pursue a strategy in the general class \"obtain more resources of matter, energy, and computation\" rather than \"don't get more resources\". 
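The census above can be acted out in a deliberately tiny toy model. The sketch below hand-builds a list of utility functions as a stand-in for the enumeration over programs (so nothing here is the real measure; the goods, the numbers, and the single atypical 'minimalist' function are all made up for illustration), brute-forces each function's best policy in a two-stage world, and reports what fraction of best policies land in the partition $X$ = 'gather extra resources first'.

```python
from itertools import product

GOODS = ['paperclips', 'diamonds', 'staples', 'pins', 'marbles',
         'buttons', 'beads', 'coins', 'cheesecakes']

# A policy is a pair: first gather extra resources (or skip doing so), then
# spend everything producing one kind of good.
STAGE_ONE = {'gather': 4, 'skip': 1}          # resources obtained by each opening move
POLICIES = list(product(STAGE_ONE, GOODS))

def outcome(policy):
    opening, good = policy
    return {good: STAGE_ONE[opening]}

def make_maximizer(good):
    # One toy utility function per good: count how much of that good exists.
    return lambda out: out.get(good, 0)

utility_functions = [make_maximizer(good) for good in GOODS]
utility_functions.append(lambda out: -sum(out.values()))  # atypical: dislikes using resources

def best_policy(utility):
    return max(POLICIES, key=lambda policy: utility(outcome(policy)))

gathers = [best_policy(utility)[0] == 'gather' for utility in utility_functions]
fraction = sum(gathers) / len(gathers)
print('fraction of toy utility functions whose best policy gathers resources:', fraction)
print('meets the (arbitrary) 90% threshold:', fraction >= 0.9)
```

In this toy census only the deliberately atypical minimalist function prefers not to gather, which is the sense in which the claim is about a large majority of goals rather than about every possible goal.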
The guess that the agent will gather resources holds even though [https://arbital.com/p/1c0](https://arbital.com/p/1c0) says that we won't be able to predict *exactly* how the superintelligence will go about gathering matter and energy.\n\nImagine the real world as an extremely complicated game. Suppose that at the very start of this game, a highly capable player must make a single binary choice between the abstract moves \"Gather more resources later\" and \"Never gather any more resources later\". [Vingean uncertainty](https://arbital.com/p/9g) or not, we seem justified in putting a high probability on the first move being preferred - a binary choice is simple enough that we can take a good guess at the optimal play.\n\n## Convergence supervenes on consequentialism\n\n$X$ being \"instrumentally convergent\" doesn't mean that every mind needs an extra, independent drive to do $X.$\n\nConsider the following line of reasoning: \"It's impossible to get on an airplane without buying plane tickets. So anyone on an airplane must be a sort of person who enjoys buying plane tickets. If I offer them a plane ticket they'll probably buy it, because this is almost certainly somebody who has an independent motivational drive to buy plane tickets. There's just no way you can design an organism that ends up on an airplane unless it has a buying-tickets drive.\"\n\nThe appearance of an \"instrumental strategy\" can be seen as implicit in repeatedly choosing actions $\\pi_k$ that lead into a final state $Y_k,$ and it so happens that $\\pi_k \\in X$. There doesn't have to be a special $X$-module which repeatedly selects $\\pi_X$-actions regardless of whether or not they lead to $Y_k.$\n\nThe flaw in the argument about plane tickets is that human beings are consequentialists who buy plane tickets *just* because they wanted to go somewhere and they expected the action \"buy the plane ticket\" to have the consequence, in that particular case, of going to the particular place and time they wanted to go. No extra \"buy the plane ticket\" module is required, and especially not a plane-ticket-buyer that doesn't check whether there's any travel goal and whether buying the plane ticket leads into the desired later state.\n\nMore semiformally, suppose that $U_k$ is the [utility function](https://arbital.com/p/1fw) of an agent and let $\\pi_k$ be the policy it selects. If the agent is [instrumentally efficient](https://arbital.com/p/6s) relative to us at achieving $U_k,$ then from our perspective we can mostly reason about whatever kind of optimization it does [as if it were](https://arbital.com/p/4gh) expected utility maximization, i.e.:\n\n$$\\pi_k = \\underset{\\pi_i \\in \\Pi}{\\operatorname{argmax}} \\mathbb E [U_k | \\pi_i]$$\n\nWhen we say that $X$ is instrumentally convergent, we are stating that it probably so happens that:\n\n$$\\big ( \\underset{\\pi_i \\in \\Pi}{\\operatorname{argmax}} \\mathbb E [U_k | \\pi_i] \\big ) \\in X$$\n\nWe are *not* making any claims along the lines that for an agent to thrive, its utility function $U_k$ must decompose into a term for $X$ plus a residual term $V_k$ denoting the rest of the utility function. 
Rather, $\\pi_k \\in X$ is the mere result of unbiased optimization for a goal $U_k$ that makes no explicit mention of $X.$\n\n(This doesn't rule out that some special cases of AI development pathways might tend to produce artificial agents with a value function $U_e$ which *does* decompose into some variant $X_e$ of $X$ plus other terms $V_e.$ For example, natural selection on organisms that spend a long period of time as non-[consequentialist](https://arbital.com/p/9h) policy-reinforcement-learners, before they later evolve into consequentialists, [has had results along these lines](http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/) in [the case of humans](http://lesswrong.com/lw/l3/thou_art_godshatter/). For example, humans have an independent, separate \"curiosity\" drive, instead of just valuing information as a means to inclusive genetic fitness.)\n\n## Required advanced agent properties\n\n[Distinguishing the advanced agent properties that seem probably required](https://arbital.com/p/4n1) for an AI program to start exhibiting the sort of reasoning filed under \"instrumental convergence\", the most obvious candidates are:\n\n- Sufficiently powerful [consequentialism](https://arbital.com/p/9h) (or [https://arbital.com/p/-pseudoconsequentialism](https://arbital.com/p/-pseudoconsequentialism)); plus\n- [Understanding the relevant aspects of the big picture](https://arbital.com/p/3nf) that connect later goal achievement to executing the instrumental strategy.\n\nThat is: You don't automatically see \"acquire more computing power\" as a useful strategy unless you understand \"I am a cognitive program and I tend to achieve more of my goals when I run on more resources.\" Alternatively, e.g., the programmers adding more computing power and the system's goals starting to be achieved better, after which related policies are positively reinforced and repeated, could arrive at a similar end via the [pseudoconsequentialist](https://arbital.com/p/) idiom of policy reinforcement.\n\nThe [advanced agent properties](https://arbital.com/p/2c) that would naturally or automatically lead to instrumental convergence seem well above the range of modern AI programs. As of 2016, current machine learning algorithms don't seem to be within the range where this [predicted phenomenon](https://arbital.com/p/6r) should start to be visible.\n\n# Caveats\n\n### An instrumental convergence claim is about a default or a majority of cases, *not* a universal generalization.\n\nIf for whatever reason your goal is to \"make paperclips without using any superconductors\", then superconducting cables will not be the best instrumental strategy for achieving that goal.\n\nAny claim about instrumental convergence says at most, \"*The vast majority* of possible goals $Y$ would convergently imply a strategy in $X,$ *by default* and *unless otherwise averted* by some special case $Y_i$ for which strategies in $\\neg X$ are better.\"\n\nSee also the more general idea that [the space of possible minds is very large](https://arbital.com/p/4ly). Universal claims about all possible minds have many chances to be false, while existential claims \"There exists at least one possible mind such that...\" have many chances to be true.\n\nIf some particular oak tree is extremely important and valuable to you, then you won't cut it down to obtain wood. 
It is irrelevant whether a majority of other utility functions that you could have, but don't actually have, would suggest cutting down that oak tree.\n\n### Convergent strategies are not deontological rules.\n\nImagine looking at a machine chess-player and reasoning, \"Well, I don't think the AI will sacrifice its pawn in this position, even to achieve a checkmate. Any chess-playing AI needs a drive to be protective of its pawns, or else it'd just give up all its pawns. It wouldn't have gotten this far in the game in the first place, if it wasn't more protective of its pawns than that.\"\n\nModern chess algorithms [behave in a fashion that most humans can't distinguish](https://arbital.com/p/6s) from expected-checkmate-maximizers. That is, from your merely human perspective, watching a single move at the time it happens, there's no visible difference between *your subjective expectation* for the chess algorithm's behavior, and your expectation for the behavior of [an oracle that always outputs](https://arbital.com/p/4gh) the move with the highest conditional probability of leading to checkmate. If you, a human, could discern with your unaided eye some systematic difference like \"this algorithm protects its pawn more often than checkmate-achievement would imply\", you would know how to make systematically better chess moves; modern machine chess is too superhuman for that.\n\nOften, this uniform rule of output-the-move-with-highest-probability-of-eventual-checkmate will *seem* to protect pawns, or not throw away pawns, or defend pawns when you attack them. But if in some special case the highest probability of checkmate is instead achieved by sacrificing a pawn, the chess algorithm will do that instead.\n\nSemiformally:\n\nThe reasoning for an instrumental convergence claim says that for many utility functions $U_k$ and situations $S_i$ a $U_k$-consequentialist in situation $S_i$ will probably find some best policy $\\pi_k = \\underset{\\pi_i \\in \\Pi}{\\operatorname{argmax}} \\mathbb E [U_k | S_i, \\pi_i ]$ that happens to be inside the partition $X$. If instead in situation $S_k$...\n\n$$\\big ( \\underset{\\pi_i \\in X}{\\max} \\mathbb E [U_k | S_k, \\pi_i ] \\big ) \\ < \\ \\big ( \\underset{\\pi_i \\in \\neg X}{\\max} \\mathbb E [U_k | S_k, \\pi_i ] \\big )$$\n\n...then a $U_k$-consequentialist in situation $S_k$ won't do any $\\pi_i \\in X$ even if most other scenarios $S_i$ make $X$-strategies prudent.\n\n### \"$X$ would help accomplish $Y$\" is insufficient to establish a claim of instrumental convergence on $X$.\n\nSuppose you want to get to San Francisco. You could get to San Francisco by paying me \\$20,000 for a plane ticket. You could also get to San Francisco by paying someone else \\$400 for a plane ticket, and this is probably the smarter option for achieving your other goals.\n\nEstablishing \"Compared to doing nothing, $X$ is more useful for achieving most $Y$-goals\" doesn't establish $X$ as an instrumental strategy. We need to believe that there's no other policy in $\\neg X$ which would be more useful for achieving most $Y.$\n\nWhen $X$ is phrased in very general terms like \"acquire resources\", we might reasonably guess that \"don't acquire resources\" or \"do $Y$ without acquiring any resources\" is indeed unlikely to be a superior strategy.
If $X_i$ is some narrower and more specific strategy, like \"acquire resources by mining them using pickaxes\", it's much more likely that some other strategy $X_k$ or even a $\\neg X$-strategy is the real optimum.\n\nSee also: [https://arbital.com/p/43g](https://arbital.com/p/43g), [https://arbital.com/p/9f](https://arbital.com/p/9f).\n\nThat said, if we can see how a narrow strategy $X_i$ helps most $Y$-goals to some large degree, then we should expect the actual policy deployed by an [efficient](https://arbital.com/p/6s) $Y_k$-agent to obtain *at least* as much $Y_k$ as would $X_i.$\n\nThat is, we can reasonably argue: \"By following the straightforward strategy 'spread as far as possible, absorb all reachable matter, and turn it into paperclips', an initially unopposed superintelligent paperclip maximizer could obtain $10^{55}$ paperclips. Then we should expect an initially unopposed superintelligent paperclip maximizer to get at least this many paperclips, whatever it actually does. Any strategy in the opposite partition 'do *not* spread as far as possible, absorb all reachable matter, and turn it into paperclips' must seem to yield more than $10^{55}$ paperclips, before we should expect a paperclip maximizer to do that.\"\n\nSimilarly, a claim of instrumental convergence on $X$ can be ceteris paribus refuted by presenting some alternate narrow strategy $W_j \\subset \\neg X$ which seems to be more useful than any obvious strategy in $X.$ We are then not positively confident of convergence on $W_j,$ but we should assign very low probability to the alleged convergence on $X,$ at least until somebody presents an $X$-exemplar with higher expected utility than $W_j.$ If the proposed convergent strategy is \"trade economically with other humans and obey existing systems of property rights,\" and we see no way for Clippy to obtain $10^{55}$ paperclips under those rules, but we do think Clippy could get $10^{55}$ paperclips by expanding as fast as possible without regard for human welfare or existing legal systems, then we can ceteris paribus reject \"obey property rights\" as convergent. Even if trading with humans to make paperclips produces more paperclips than *doing nothing*, it may not produce the *most* paperclips compared to converting the material composing the humans into more efficient paperclip-making machinery.\n\n### Claims about instrumental convergence are not ethical claims.\n\nWhether $X$ is a good way to get both paperclips and diamonds is irrelevant to whether $X$ is good for human flourishing or eudaimonia or fun-theoretic optimality or [extrapolated volition](https://arbital.com/p/313) or [whatever](https://arbital.com/p/55). Whether $X$ is, in an intuitive sense, \"good\", needs to be evaluated separately from whether it is instrumentally convergent.\n\nIn particular: instrumental strategies are not [terminal values](https://arbital.com/p/1bh). In fact, they have a type distinction from terminal values. \"If you're going to spend resources on thinking about technology, try to do it earlier rather than later, so that you can amortize your invention over more uses\" seems very likely to be an instrumentally convergent exploration-exploitation strategy; but \"spend cognitive resources sooner rather than later\" is more a feature of *policies* rather than a feature of *utility functions.* It's definitely not plausible in a pretheoretic sense as the Meaning of Life. 
So a partition into which most instrumental best-strategies fall, is not like a universally convincing utility function (which you probably [shouldn't look for](https://arbital.com/p/) in the first place).\n\nSimilarly: The natural selection process that produced humans gave us many independent drives $X_e$ that can be viewed as special variants of some convergent instrumental strategy $X.$ A pure paperclip maximizer would calculate the value of information (VoI) for learning facts that could lead to it making more paperclips; we can see learning high-value facts as a convergent strategy $X$. In this case, human \"curiosity\" can be viewed as the corresponding emotion $X_e.$ This doesn't mean that the *true purpose* of $X_e$ is $X$ any more than the *true purpose* of $X_e$ is \"make more copies of the allele coding for $X_e$\" or \"increase inclusive genetic fitness\". [That line of reasoning probably results from a mind projection fallacy on 'purpose'.](https://arbital.com/p/)\n\n### Claims about instrumental convergence are not futurological predictions.\n\nEven if, e.g., \"acquire resources\" is an instrumentally convergent strategy, this doesn't mean that we can't as a special case deliberately construct advanced AGIs that are *not* driven to acquire as many resources as possible. Rather the claim implies, \"We would need to deliberately build $X$-[averting](https://arbital.com/p/2vk) agents as a special case, because by default most imaginable agent designs would pursue a strategy in $X.$\"\n\nOf itself, this observation makes no further claim about the quantitative probability that, in the real world, AGI builders might *want* to build $\\neg X$-agents, might *try* to build $\\neg X$-agents, and might *succeed* at building $\\neg X$-agents.\n\nA claim about instrumental convergence is talking about a logical property of the larger design space of possible agents, not making a prediction what happens in any particular research lab. (Though the ground facts of computer science are *relevant* to what happens in actual research labs.)\n\nFor discussion of how instrumental convergence may in practice lead to [foreseeable difficulties](https://arbital.com/p/6r) of AGI alignment that resist most simple attempts at fixing them, see the articles on [https://arbital.com/p/48](https://arbital.com/p/48) and [https://arbital.com/p/42](https://arbital.com/p/42).\n\n# Central example: Resource acquisition\n\nOne of the convergent strategies originally proposed by Steve Omohundro in \"[The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)\" was *resource acquisition:*\n\n> \"All computation and physical action requires the physical resources of space, time, matter, and free energy. Almost any goal can be better accomplished by having more of these resources.\"\n\nWe'll consider this example as a template for other proposed instrumentally convergent strategies, and run through the standard questions and caveats.\n\n• Question: Is this something we'd expect a paperclip maximizer, diamond maximizer, *and* button-presser to do? 
And while we're at it, also a flourishing-intergalactic-civilization optimizer?\n\n- Paperclip maximizers need matter and free energy to make paperclips.\n- Diamond maximizers need matter and free energy to make diamonds.\n- If you're trying to maximize the probability that a single button stays pressed as long as possible, you would build fortresses protecting the button and energy stores to sustain the fortress and repair the button for the longest possible period of time.\n- Nice superintelligences trying to build happy intergalactic civilizations full of flourishing sapient minds, can build marginally larger civilizations with marginally more happiness and marginally longer lifespans given marginally more resources.\n\nTo put it another way, for a utility function $U_k$ to imply the use of every joule of energy, it is a sufficient condition that for every plan $\\pi_i$ with expected utility $\\mathbb E [U | \\pi_i ],$ there is a plan $\\pi_j$ with $\\mathbb E [U | \\pi_j ] > \\mathbb E [U | \\pi_i]$ that uses one more joule of energy:\n\n- For every plan $\\pi_i$ that makes paperclips, there's a plan $\\pi_j$ that would make *more* expected paperclips if more energy were available and acquired.\n- For every plan $\\pi_i$ that makes diamonds, there's a plan $\\pi_j$ that makes slightly more diamond given one more joule of energy.\n- For every plan $\\pi_i$ that produces a probability $\\mathbb P (press | \\pi_i) = 0.999...$ of a button being pressed, there's a plan $\\pi_j$ with a *slightly higher* probability of that button being pressed $\\mathbb P (press | \\pi_j) = 0.9999...$ which uses up the mass-energy of one more star.\n- For every plan that produces a flourishing intergalactic civilization, there's a plan which produces slightly more flourishing given slightly more energy.\n\n• Question: Is there some strategy in $\\neg X$ which produces higher $Y_k$-achievement for most $Y_k$ than any strategy inside $X$?\n\nSuppose that by using most of the mass-energy in most of the stars reachable before they go over the cosmological horizon as seen from present-day Earth, it would be possible to produce $10^{55}$ paperclips (or diamonds, or probability-years of expected button-stays-pressed time, or QALYs, etcetera).\n\nIt seems *reasonably unlikely* that there is a strategy inside the space intuitively described by \"Do not acquire more resources\" that would produce $10^{60}$ paperclips, let alone that the strategy producing the *most* paperclips would be inside this space.\n\nWe might be able to come up with a weird special-case situation $S_w$ that would imply this.
But that's not the same as asserting, \"With high subjective probability, in the real world, the optimal strategy will be in $\\neg X$.\" We're concerned with making a statement about defaults given the most subjectively probable background states of the universe, not trying to make a universal statement that covers every conceivable possibility.\n\nTo put it another way, if your policy choices or predictions are only safe given the premise that \"In the real world, the best way of producing the maximum possible number of paperclips involves not acquiring any more resources\", you need to clearly [flag this as a load-bearing assumption](https://arbital.com/p/4lz).\n\n• Caveat: The claim is not that *every possible* goal can be better-accomplished by acquiring more resources.\n\nAs a special case, this would not be true of an agent with an [impact penalty](https://arbital.com/p/4l) term in its utility function, or some other [low-impact agent](https://arbital.com/p/2pf), if that agent also only had [goals of a form that could be satisfied inside bounded regions of space and time with a bounded effort](https://arbital.com/p/4mn).\n\nWe might reasonably expect this special kind of agent to only acquire the minimum resources to accomplish its [task](https://arbital.com/p/4mn).\n\nBut we wouldn't expect this to be true in a majority of possible cases inside mind design space; it's not true *by default*; we need to specify a further fact about the agent to make the claim not be true; we must expend engineering effort to make an agent like that, and failures of this effort will result in reversion-to-default. If we imagine some [computationally simple language](https://arbital.com/p/5v) for specifying utility functions, then *most* utility functions wouldn't happen to have both of these properties, so a *majority* of utility functions given this language and measure would not *by default* try to use fewer resources.\n\n• Caveat: The claim is not that well-functioning agents must have additional, independent resource-acquiring motivational drives.\n\nA paperclip maximizer will act like it is \"obtaining resources\" if it merely implements the policy it expects to lead to the most paperclips. 
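\n\nA minimal sketch of that point, with invented numbers: the first utility function below mentions only paperclips, yet its best plan still passes through mining ore; adding an explicit resource-acquisition term, as in the second, actually changes the behavior for the worse.\n\n```python\nimport itertools\n\n# Toy two-step world (all numbers invented): step 1 chooses whether to mine ore,\n# step 2 chooses whether to spend whatever ore is held on making paperclips.\ndef outcome(mine, spend):\n    ore = 5 if mine else 1\n    paperclips = ore if spend else 0\n    ore_held = 0 if spend else ore\n    return {'paperclips': paperclips, 'ore_held': ore_held}\n\ndef best_plan(utility):\n    plans = itertools.product([True, False], repeat=2)\n    return max(plans, key=lambda p: utility(outcome(*p)))\n\npaperclips_only = lambda o: o['paperclips']                     # no resource term\nwith_resource_drive = lambda o: o['paperclips'] + 2 * o['ore_held']\n\nprint(best_plan(paperclips_only))       # (True, True): mines ore, then spends it all\nprint(best_plan(with_resource_drive))   # (True, False): the extra drive ends up hoarding\n```\n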
[Clippy](https://arbital.com/p/10h) does not need to have any separate and independent term in its utility function for the amount of resource it possesses (and indeed this would potentially interfere with Clippy making paperclips, since it might then be tempted to hold onto resources instead of making paperclips with them).\n\n• Caveat: The claim is not that most agents will behave as if under a deontological imperative to acquire resources.\n\nA paperclip maximizer wouldn't necessarily tear apart a working paperclip factory to \"acquire more resources\" (at least not until that factory had already produced all the paperclips it was going to help produce.)\n\n• Check: Are we arguing \"Acquiring resources is a better way to make a few more paperclips than doing nothing\" or \"There's *no* better/best way to make paperclips that involves *not* acquiring more matter and energy\"?\n\nAs mentioned above, the latter seems reasonable in this case.\n\n• Caveat: \"Acquiring resources is instrumentally convergent\" is not an ethical claim.\n\nThe fact that a paperclip maximizer would try to acquire all matter and energy within reach, does not of itself bear on whether our own [normative](https://arbital.com/p/3y9) [values](https://arbital.com/p/55) might perhaps command that we ought to use few resources as a [terminal value](https://arbital.com/p/1bh).\n\n(Though some of us might find pretty compelling the observation that if you leave matter lying around, it sits around not doing anything and eventually the protons decay or the expanding universe tears it apart, whereas if you turn the matter into people, it can have fun. There's no rule that instrumentally convergent strategies *don't* happen to be the right thing to do.)\n\n• Caveat: \"Acquiring resources is instrumentally convergent\" is not of itself a futurological prediction.\n\nSee above. Maybe we try to build [Task AGIs](https://arbital.com/p/6w) instead. Maybe we succeed, and Task AGIs don't consume lots of resources because they have [well-bounded tasks](https://arbital.com/p/4mn) and [impact penalties](https://arbital.com/p/4l).\n\n# Relevance to the larger field of value alignment theory\n\nThe [list of arguably convergent strategies](https://arbital.com/p/2vl) has its own page. However, some of the key strategies that have been argued as convergent in e.g. Omohundro's \"[The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf)\" and Bostrom's \"[The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents](http://www.nickbostrom.com/superintelligentwill.pdf)\" include:\n\n- Acquiring/controlling matter and energy.\n- Ensuring that future intelligences with similar goals exist. 
E.g., a paperclip maximizer wants the future to contain powerful, effective intelligences that maximize paperclips.\n - An important special case of this general rule is *self-preservation*.\n - Another special case of this rule is *protecting goal-content integrity* (not allowing accidental or deliberate modification of the utility function).\n- Learning about the world (so as to better manipulate it to make paperclips).\n - Carrying out relevant scientific investigations.\n- Optimizing technology and designs.\n - Engaging in an \"exploration\" phase of seeking optimal designs before an \"exploitation\" phase of using them.\n- Thinking effectively (treating the cognitive self as an improvable technology).\n - Improving cognitive processes.\n - Acquiring computing resources for thought.\n\nThis is relevant to some of the central background ideas in [AGI alignment](https://arbital.com/p/2v), because:\n\n- A superintelligence can have a catastrophic impact on our world even if its utility function contains no overtly hostile terms. A paperclip maximizer doesn't hate you, it just wants paperclips.\n- A consequentialist AGI with sufficient big-picture understanding will by default want to promote itself to a superintelligence, even if the programmers did not explicitly program it to want to self-improve. Even a [pseudoconsequentialist](https://arbital.com/p/) may e.g. repeat strategies that led to previous cognitive capability gains.\n\nThis means that programmers don't have to be evil, or even deliberately bent on creating superintelligence, in order for their work to have catastrophic consequences.\n\nThe list of convergent strategies, by its nature, tends to include everything an agent needs to survive and grow. This supports strong forms of the [Orthogonality Thesis](https://arbital.com/p/1y) being true in practice as well as in principle. We don't need to filter on agents with explicit [terminal](https://arbital.com/p/1bh) values for e.g. \"survival\" in order to find surviving powerful agents.\n\nInstrumental convergence is also why we expect to encounter most of the problems filed under [https://arbital.com/p/45](https://arbital.com/p/45). When the AI is young, it's less likely to be [instrumentally efficient](https://arbital.com/p/6s) or understand the relevant parts of the [bigger picture](https://arbital.com/p/3nf); but once it does, we would by default expect, e.g.:\n\n- That the AI will try to avoid being shut down.\n- That it will try to build subagents (with identical goals) in the environment.\n- That the AI will resist modification of its utility function.\n- That the AI will try to avoid the programmers learning facts that would lead them to modify the AI's utility function.\n- That the AI will try to pretend to be friendly even if it is not.\n- That the AI will try to [conceal hostile thoughts](https://arbital.com/p/3cq) (and the fact that any concealed thoughts exist).\n\nThis paints a much more effortful picture of AGI alignment work than \"Oh, well, we'll just test it to see if it looks nice, and if not, we'll just shut off the electricity.\"\n\nThe point that some undesirable behaviors are instrumentally *convergent* gives rise to the [https://arbital.com/p/42](https://arbital.com/p/42) problem. Suppose the AGI's most preferred policy starts out as one of these incorrigible behaviors. Suppose we currently have enough control to add [patches](https://arbital.com/p/48) to the AGI's utility function, intended to rule out the incorrigible behavior. 
Then, after integrating the intended patch, the new most preferred policy may be the most similar policy that wasn't explicitly blocked. If you naively give the AI a term in its utility function for \"having an off-switch\", it may still build subagents or successors that don't have off-switches. Similarly, when the AGI becomes more powerful and [its option space expands](https://arbital.com/p/6q), it's again likely to find new similar policies that weren't explicitly blocked.\n\nThus, instrumental convergence is one of the two basic sources of [patch resistance](https://arbital.com/p/48) as a [foreseeable difficulty](https://arbital.com/p/6r) of AGI alignment work.", "date_published": "2017-04-10T04:12:52Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "spxtr spxtr", "Alexei Andreev"], "summaries": ["AIs that want different things may pursue very similar strategies. Whether you're trying to make as many paperclips as possible, or keep a single button pressed for as long as possible, you'll still want access to resources of matter, energy, and computation; and to not die in the next five minutes; and to not let anyone edit your utility function. Strategies that are implied by most (not all) goals are \"instrumentally convergent\"."], "tags": ["Work in progress", "B-Class"], "alias": "10g"} {"id": "5c91edf1fedc65b46b0dcd7706fa1b7d", "title": "Paperclip maximizer", "url": "https://arbital.com/p/paperclip_maximizer", "source": "arbital", "source_type": "text", "text": "An expected paperclip maximizer is an agent that outputs the action it believes will lead to the greatest number of [paperclips](https://arbital.com/p/7ch) existing. Or in more detail, its [utility function](https://arbital.com/p/109) is linear in the number of paperclips times the number of seconds that each paperclip lasts, over the lifetime of the universe. 
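\n\nAs a bare type-signature sketch of the agent just described (the function names and example numbers are ours, invented for illustration), utility is a sum of paperclip-lifetimes in seconds, and between two evaluated actions the agent takes whichever has higher expected utility.\n\n```python\ndef utility(paperclip_lifetimes_in_seconds):\n    # Linear in paperclip-seconds summed over the lifetime of the universe.\n    return sum(paperclip_lifetimes_in_seconds)\n\ndef prefer(action_a, action_b, expected_outcome):\n    # Core premise: pick the evaluated action expected to yield more paperclip-seconds.\n    return max([action_a, action_b], key=lambda a: utility(expected_outcome(a)))\n\npredicted = {'A': [100.0, 100.0], 'B': [1000.0]}   # made-up per-paperclip lifetimes\nprint(prefer('A', 'B', lambda a: predicted[a]))    # 'B': 1000 > 200 paperclip-seconds\n```\n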
See http://wiki.lesswrong.com/wiki/Paperclip_maximizer.\n\nThe agent may be a [bounded maximizer](https://arbital.com/p/) rather than an [objective maximizer](https://arbital.com/p/) without changing the key ideas; the core premise is just that, given actions A and B where the paperclip maximizer has evaluated the consequences of both actions, the paperclip maximizer always prefers the action that it expects to lead to more paperclips.\n\nSome key ideas that the notion of an expected paperclip maximizer illustrates:\n\n- A self-modifying paperclip maximizer [does not change its own utility function](https://arbital.com/p/3r6) to something other than 'paperclips', since this would be expected to lead to fewer paperclips existing.\n- A paperclip maximizer instrumentally prefers the standard [convergent instrumental strategies](https://arbital.com/p/2vl) - it will seek access to matter, energy, and negentropy in order to make paperclips; try to build efficient technology for [colonizing the galaxies](https://arbital.com/p/7cy) to transform into paperclips; do whatever science is necessary to gain the knowledge to build such technology optimally; etcetera.\n- \"The AI does not hate you, nor does it love you, and you are made of atoms it can use for something else.\"", "date_published": "2017-03-03T18:24:22Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "10h"} {"id": "4866d8fb035f8072a58c8acd49e23c66", "title": "Instrumental", "url": "https://arbital.com/p/instrumental", "source": "arbital", "source_type": "text", "text": "An 'instrumental strategy', 'instrumental goal', or 'subgoal' is an event E that an agent tries to bring about in order to bring about some other goal G. If you want to drink milk, then you need to drive to the store; in order to drive to the store, you need to be inside your car; in order to be inside your car, you need to open your car door. Thus 'be inside my car' and 'open my car door' are instrumental goals or instrumental strategies.\n\nIn conventional philosophy, an event is said to have \"instrumental value\" if it is useful for accomplishing some implied other set of goals, as distinguished from \"terminal value\" which is unconditional on future events. Since in [VAT](https://arbital.com/p/2v) we have reserved the word ['value'](https://arbital.com/p/55), we can't use that terminology here.", "date_published": "2015-12-17T00:03:18Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Definition", "Stub", "Glossary (Value Alignment Theory)"], "alias": "10j"} {"id": "4f2238b1fee9ad9065afc33872e5969d", "title": "Instrumental pressure", "url": "https://arbital.com/p/instrumental_pressure", "source": "arbital", "source_type": "text", "text": "Saying that an agent will see 'instrumental pressure' to bring about an event E is saying that this agent, presumed to be a [consequentialist](https://arbital.com/p/9h) with some goal G, will *ceteris paribus and absent defeaters,* want to [bring about E in order to do G](https://arbital.com/p/10j). For example, a [paperclip maximizer](https://arbital.com/p/10h), Clippy, sees instrumental pressure to gain control of as much matter as possible in order to make more paperclips. 
If we imagine an alternate Clippy+ that has a penalty term in its utility function for 'killing humans', Clippy+ still has an instrumental pressure to turn humans into paperclips (because of the paperclips that would be gained) but it also has a countervailing force pushing against that pressure (the penalty term for killing humans). Thus, we can say that a system is experiencing 'instrumental pressure' to do something, without implying that the system necessarily does it.\n\nThis state of affairs is different from the absence of any instrumental pressure: E.g., Clippy+ might come up with some clever way to [obtain the gains while avoiding the penalty term](https://arbital.com/p/42), like turning humans into paperclips without killing them.\n\nTo more crisply define 'instrumental pressure', we need a setup that distinguishes [terminal utility and instrumental expected utility](https://arbital.com/p/10j), as in e.g. a utility function plus a causal model. Then we can be more precise about the notion of 'instrumental pressure' as follows: If each paperclip is worth 1 terminal utilon and a human can be disassembled to make 1000 paperclips with certainty, then strategies or event-sets that include 'turn the human into paperclips' thereby have their expected utility elevated by 1000 utils. There might also be a penalty term that assigns -1,000,000 utils to killing a human, but then the net expected utility of disassembling the human is -999,000 rather than -1,000,000. The 1000 utils would still be gained from disassembling the human; the penalty term doesn't change that part. Even if this strategy doesn't have maximum EU and is not selected, the 'instrumental pressure' was still elevating its EU. There's still an expected-utility bump on that part of the solution space, even if that solution space is relatively low in value. And this is perhaps relevantly different because, e.g., there might be some clever strategy for turning humans into paperclips without killing them (even if you can only get 900 paperclips that way).\n\n### Link from instrumental pressures to reflective instrumental pressures\n\nIf the agent is reflective and makes reflective choices on a consequentialist basis, there would ceteris paribus be a reflective-level pressure to *search* for a strategy that makes paperclips out of the humans' atoms [without doing anything defined as 'killing the human'](https://arbital.com/p/42). If a strategy like that could be found, then executing the strategy would enable a gain of 1000 utilons; thus there's an instrumental pressure to search for that strategy. Even if there's a penalty term added for searching for strategies to evade penalty terms, leading the AI to decide not to do the search, the instrumental pressure will still be there as a bump in the expected utility of that part of the solution space. (Perhaps there's some [unforeseen](https://arbital.com/p/9f) way to do something very like searching for that strategy while evading the penalty term, such as constructing an outside calculator to do it...)\n\n### Blurring lines in allegedly non-consequentialist subsystems or decision rules\n\nTo the extent that the AI being discussed is not a pure consequentialist, the notion of 'instrumental pressure' may start to blur or be less applicable.
E.g., suppose on some level of AI, the choice of which questions to think about is *not* being decided by a choice between options with calculated expected utilities, but is instead being decided by a rule, and the rule excludes searching for strategies that evade penalty terms. Then maybe there's no good analogy to the concept of 'an instrumental pressure to search for strategies that evade penalty terms', because there's no expected utility rating on the solution space and hence no analogous bump in the solution space that might eventually intersect a feasible strategy. But we should still perhaps [be careful about declaring that an AI subsystem has no analogue of instrumental pressures, because instrumental pressures may arise even in systems that don't look explicitly consequentialist](https://arbital.com/p/).", "date_published": "2015-12-16T15:49:35Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "10k"} {"id": "b9482431dafb2e6891746881dbafa51b", "title": "Guarded definition", "url": "https://arbital.com/p/guarded_definition", "source": "arbital", "source_type": "text", "text": "A guarded definition is one where at least one position suspects there will be pressure to stretch a concept and make it cover more than it ought to, and so they set aside a term meant to refer *narrowly* to the things inside the concept. Thus, if a term has been designated as a 'guarded definition', stretching it to cover new and non-central members that are not *very* clearly part of the definition, and agreed to be so by those who wanted to designate it as guarded, is an unusually strong discourtesy. If the term was originated (or its special meaning was originated) specifically in order to set it aside as a narrow and guarded term, then it is a discourse norm to respect that narrow meaning and not try to extend it.\n\nExample: Suppose that Alice and Bob are having a conversation about natural selection. Alice points out that since everything occurs within Nature, all selection, including human agricultural breeding and genetic engineering, seems to her like 'natural selection', and she also argues that consumer choice in supermarkets is an instance of 'natural selection' since people are natural objects and they're selecting which foods to buy, and thus her paper on watching people buy food in supermarkets ought to be funded by a program on evolutionary biology. 
If Bob and his researchers then begin using the term 'ecologically natural selection' because they think it's important to have a narrow term to refer to just birds breeding in the wild and not consumer choice in supermarkets, it is an extreme discourtesy (and a violation of what we locally take to be discourse norms) for Alice to start arguing that really supermarkets are instances of ecologically natural selection too.", "date_published": "2015-12-16T16:16:33Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "10l"} {"id": "e6ecb4b7709ba0a8cdb844a1ae4666b8", "title": "Executable philosophy", "url": "https://arbital.com/p/executable_philosophy", "source": "arbital", "source_type": "text", "text": "\"Executable philosophy\" is [https://arbital.com/p/+2](https://arbital.com/p/+2)'s term for discourse about subjects usually considered to belong to the realm of philosophy, meant to be applied to problems that arise in designing or [aligning](https://arbital.com/p/2v) [machine intelligence](https://arbital.com/p/2c).\n\nTwo motivations of \"executable philosophy\" are as follows:\n\n1. We need a philosophical analysis to be \"effective\" in Turing's sense: that is, the terms of the analysis must be useful in writing programs. We need ideas that we can compile and run; they must be \"executable\" like code is executable.\n2. We need to produce adequate answers on a time scale of years or decades, not centuries. In the entrepreneurial sense of \"good execution\", we need a methodology we can execute on in a reasonable timeframe.\n\nSome consequences:\n\n- We take at face value some propositions that seem extremely likely to be true in real life, like \"The universe is a mathematically simple low-level unified causal process with no non-natural elements or attachments\". This is almost certainly true, so as a matter of fast entrepreneurial execution, we take it as settled and move on rather than debating it further.\n - This doesn't mean we know *how* things are made of quarks, or that we instantly seize on the first theory proposed that involves quarks. Being reductionist isn't the same as cheering for everything with a reductionist label on it; even if one particular naturalistic theory is true, most possible naturalistic theories will still be wrong.\n- Whenever we run into an issue that seems confusing, we ask \"What cognitive process is executing inside our minds that [feels from the inside](https://wiki.lesswrong.com/wiki/How_an_algorithm_feels) like this confusion?\"\n - Rather than asking \"Is free will compatible with determinism?\" we ask \"What algorithm is running in our minds that feels from the inside like free will?\"\n - If we start out in a state of confusion or ignorance, then there might or not be such a thing as free will, and there might or might not be a coherent concept to describe the thing that does or doesn't exist, but we are definitely and in reality executing some discoverable way of thinking that corresponds to this feeling of confusion. By asking the question on these grounds, we guarantee that it is answerable eventually.\n - This process terminates **when the issue no longer feels confusing, not when a position sounds very persuasive**.\n - \"Confusion exists in the map, not in the territory; if I don't know whether a coin has landed heads or tails, that is a fact about my state of mind, not a fact about the coin. 
There can be mysterious questions but not mysterious answers.\"\n - We do not accept as satisfactory an argument that, e.g., humans would have evolved to feel a sense of free will because this was socially useful. This still takes a \"sense of free will\" as an unreduced black box, and argues about some prior cause of this feeling. We want to know *which cognitive algorithm* is executing that feels from the inside like this sense. We want to learn the *internals* of the black box, not cheer on an argument that some reductionist process *caused* the black box to be there.\n- Rather than asking \"What is goodness made out of?\", we begin from the question \"What algorithm would compute goodness?\"\n - We apply a programmer's discipline to make sure that all the concepts used in describing this algorithm will also compile. You can't say that 'goodness' depends on what is 'better' unless you can compute 'better'.\n\nConversely, we can't just plug the products of standard analytic philosophy into AI problems, because:\n\n• The academic incentives favor continuing to dispute small possibilities because \"ongoing dispute\" means \"everyone keeps getting publications\". As somebody once put it, for academic philosophy, an unsolvable problem is \"like a biscuit bag that never runs out of biscuits\". As a sheerly cultural matter, this means that academic philosophy hasn't accepted that e.g. everything is made out of quarks (particle fields) without any non-natural or irreducible properties attached.\n\nIn turn, this means that when academic philosophers have tried to do [metaethics](https://arbital.com/p/41n), the result has been a proliferation of different theories that are mostly about non-natural or irreducible properties, with only a few philosophers taking a stand on trying to do metaethics for a strictly natural and reducible universe. Those naturalistic philosophers are still having to *argue for* a natural universe rather than being able to accept this and move on to do further analysis *inside* the naturalistic possibilities. To build and align Artificial Intelligence, we need to answer some *complex* questions about how to compute goodness; the field of academic philosophy is stuck on an argument about whether goodness ought ever to be computed.\n\n• Many academic philosophers haven't learned the programmers' discipline of distinguishing concepts that might compile. If we imagine rewinding the state of understanding of computer chess to what obtained in the days when [Edgar Allen Poe proved that no mere automaton could play chess](https://arbital.com/p/38r), then the modern style of philosophy would produce, among other papers, a lot of papers considering the 'goodness' of a chess move as a primitive property and arguing about the relation of goodness to reducible properties like controlling the center of a chessboard.\n\nThere's a particular mindset that programmers have for realizing which of their own thoughts are going to compile and run, and which of their thoughts are not getting any closer to compiling. A good programmer knows, e.g., that if they offer a 20-page paper analyzing the 'goodness' of a chess move in terms of which chess moves are 'better' than other chess moves, they haven't actually come any closer to writing a program that plays chess. (This principle is not to be confused with greedy reductionism, wherein you find one thing you understand how to compute a bit better, like 'center control', and then take this to be the entirety of 'goodness' in chess. 
Avoiding greedy reductionism is part of the *skill* that programmers acquire of thinking in effective concepts.)\n\nMany academic philosophers don't have this mindset of 'effective concepts', nor have they taken as a goal that the terms in their theories need to compile, nor do they know how to check whether a theory compiles. This, again, is one of the *foundational* reasons why despite there being a very large edifice of academic philosophy, the products of that philosophy tend to be unuseful in AGI.\n\nIn more detail, [Yudkowsky](https://arbital.com/p/2) lists these as some tenets or practices of what he sees as 'executable' philosophy:\n\n- It is acceptable to take reductionism, and computability of human thought, as a premise, and move on.\n - The presumption here is that the low-level mathematical unity of physics - the reducibility of complex physical objects into small, mathematically uniform physical parts, etcetera - has been better established than any philosophical argument which purports to contradict them. Thus our question is \"How can we reduce this?\" or \"Which reduction is correct?\" rather than \"Should this be reduced?\"\n - Yudkowsky further suggests that things be [reduced to a mixture of causal facts and logical facts](http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/).\n- Most \"philosophical issues\" worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy.\n - E.g. rather than the central question being \"What is goodness made out of?\", we begin with the central question \"How do we design an AGI that computes goodness?\" This doesn't solve the question - to claim that would be greedy reductionism indeed - but it does *situate* the question in a pragmatic context.\n - This imports the discipline of programming into philosophy. In particular, programmers learn that even if they have an inchoate sense of what a computer should do, when they actually try to write it out as code, they sometimes find that the code they have written fails (on visual inspection) to match up with their inchoate sense. Many ideas that sound sensible as English sentences are revealed as confused as soon as we try to write them out as code.\n- Faced with any philosophically confusing issue, our task is to **identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion**, rather than, as in conventional philosophy, to try to clearly define terms and then weigh up all possible arguments for all 'positions'.\n - This means that our central question is guaranteed to have an answer.\n - E.g., if the standard philosophical question is \"Are free will and determinism compatible?\" then there is not guaranteed to be any coherent thing we mean by free will, but it is guaranteed that there is in fact some algorithm running in our brain that, when faced with this particular question, generates a confusing sense of a hard-to-pin-down conflict.\n - This is not to be confused with merely arguing that, e.g., \"People evolved to feel like they had free will because that was useful in social situations in the ancestral environment.\" That merely says, \"I think evolution is the cause of our feeling that we have free will.\" It still treats the feeling itself as a black box. It doesn't say what algorithm is actually running, or walk through that algorithm to see exactly how the sense of confusion arises.
We want to know the *internals* of the feeling of free will, not argue that this black-box feeling has a reductionist-sounding cause.\n\nA final trope of executable philosophy is to not be intimidated by how long a problem has been left open. \"Ignorance exists in the mind, not in reality; uncertainty is in the map, not in the territory; if I don't know whether a coin landed heads or tails, that's a fact about me, not a fact about the coin.\" There can't be any unresolvable confusions out there in reality. There can't be any inherently confusing substances in the mathematically lawful, unified, low-level physical process we call the universe. Any seemingly unresolvable or impossible question must represent a place where we are confused, not an actually impossible question out there in reality. This doesn't mean we can quickly or immediately solve the problem, but it does mean that there's some way to wake up from the confusing dream. Thus, as a matter of entrepreneurial execution, we're allowed to try to solve the problem rather than run away from it; trying to make an investment here may still be profitable.\n\nAlthough all confusing questions must be places where our own cognitive algorithms are running skew to reality, this, again, doesn't mean that we can immediately see and correct the skew; nor that it is compilable philosophy to insist in a very loud voice that a problem is solvable; nor that when a solution is presented we should immediately seize on it because the problem must be solvable and behold here is a solution. An important step in the method is to check whether there is any lingering sense of something that didn't get resolved; whether we really feel less confused; whether it seems like we could write out the code for an AI that would be confused in the same way we were; whether there is any sense of dissatisfaction; whether we have merely chopped off all the interesting parts of the problem.\n\nAn earlier guide to some of the same ideas was the [Reductionism Sequence](https://goo.gl/qHyXwr).", "date_published": "2016-06-06T21:06:32Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["'Executable philosophy' is [https://arbital.com/p/+2](https://arbital.com/p/+2)'s term for discourse about subjects usually considered in the realm of philosophy, meant to be used for designing an [Artificial Intelligence](https://arbital.com/p/2c). Tenets include:\n\n- It is acceptable to take reductionism, and computability of human thought, as a premise. We take that as established, don't argue it further, and move on to other issues.\n - The low-level mathematical unity of physics - the reducibility of complex physical objects into mathematically simple components, etcetera - has been better established than any philosophical argument which purports to contradict them.\n - We don't have infinite time to arrive at workable solutions. Continuing debates about non-naturalism *ad infinitum* is harmful and prevents us from moving on.\n- Most \"philosophical issues\" worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence.\n - E.g. 
rather than the central question of metaethics being \"What is goodness made out of?\", we begin with the central question \"What algorithm would compute goodness?\"\n - This imports the discipline of programming into philosophy, and the mindset that programmers use to identify whether an idea will compile.\n- Faced with any philosophically confusing issue, our first task is to **identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion**; rather than trying to clearly define terms and weigh up all possible arguments for 'positions' within the confusion.\n - E.g., if the standard philosophical question is \"Are free will and determinism compatible?\" then there might or might not be any coherent thing we mean by free will. But there is definitely some algorithm running in our brain that, when faced with this particular question, generates a confusing sense of a hard-to-pin-down conflict.\n- You've finished solving a philosophy problem when you're no longer confused about it, not when a 'position' seems very persuasive.\n - There's no need to be intimidated by how long a problem has been left open, since all confusion exists in the map, not in the territory. Any place where a satisfactory solution seems impossible is just someplace your map has an internal skew, and it should be possible to wake up from the confusion."], "tags": ["Philosophy", "B-Class"], "alias": "112"} {"id": "d97d7e8c3bb02a099f53ad578a7c22e3", "title": "AIXI", "url": "https://arbital.com/p/AIXI", "source": "arbital", "source_type": "text", "text": "[Marcus Hutter's AIXI](http://www.hutter1.net/ai/aixigentle.htm) is the [perfect rolling sphere](https://arbital.com/p/12b) of [advanced agent](https://arbital.com/p/2c) theory - it's not realistic, but you can't understand more complicated scenarios if you can't envision the rolling sphere. At the core of AIXI is [Solomonoff induction](https://arbital.com/p/11w), a way of using [infinite computing power](https://arbital.com/p/) to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with [prior probabilities](https://arbital.com/p/) weighted by their [algorithmic simplicity](https://arbital.com/p/5v), and [updating their probabilities](https://arbital.com/p/Bayesian_update) based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. 
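\n\nA drastically simplified finite caricature of that loop may help fix ideas. It is not AIXI: it mixes over three hand-written environments rather than all computable ones, uses description length as a crude stand-in for algorithmic complexity, and scores fixed action sequences rather than full policies. Every name and number below is invented.\n\n```python\nimport itertools\n\nENVS = [\n    ('reward if last action is 1', lambda acts: 1 if acts and acts[-1] == 1 else 0),\n    ('always reward', lambda acts: 1),\n    ('reward if actions alternate', lambda acts: 1 if len(acts) >= 2 and acts[-1] != acts[-2] else 0),\n]\nPRIOR = [2.0 ** -len(name) for name, _ in ENVS]   # shorter description, higher weight\n\ndef posterior(past_actions, past_rewards):\n    # Keep only environments consistent with the interaction history so far.\n    weights = []\n    for (_, env), p in zip(ENVS, PRIOR):\n        consistent = all(env(past_actions[:t + 1]) == past_rewards[t]\n                         for t in range(len(past_actions)))\n        weights.append(p if consistent else 0.0)\n    total = sum(weights)\n    return [w / total for w in weights]\n\ndef best_next_action(past_actions, past_rewards, horizon=3):\n    post = posterior(past_actions, past_rewards)\n    def value(action_seq):\n        # Expected total future reward of an action sequence under the mixture.\n        ev = 0.0\n        for (_, env), w in zip(ENVS, post):\n            acts, total = list(past_actions), 0\n            for a in action_seq:\n                acts.append(a)\n                total += env(acts)\n            ev += w * total\n        return ev\n    plans = itertools.product([0, 1], repeat=horizon)\n    return max(plans, key=value)[0]\n\nprint(best_next_action([1], [1]))   # 1: the surviving hypotheses expect action 1 to pay off\n```\n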
If AIXI actually existed, it wouldn't be a god; it'd be something that could tear apart a god like tinfoil.\n\nsummary(Brief): AIXI is the [rolling sphere](https://arbital.com/p/perfect) of [advanced agent theory](https://arbital.com/p/2c), an ideal intelligent agent that uses infinite computing power to consider all computable hypotheses that relate its actions and sensory data to its rewards, then maximizes expected reward.\n\nsummary(Technical): [Marcus Hutter's AIXI](http://www.hutter1.net/ai/aixigentle.htm) combines Solomonoff induction, expected utility maximization, and the [Cartesian agent-environment-reward formalism](https://arbital.com/p/) to yield a completely specified superintelligent agent that can be written out as a single equation but would require a high-level [halting oracle](https://arbital.com/p/) to run. The formalism requires that percepts, actions, and rewards can all be encoded as integer sequences. AIXI considers all computable hypotheses, with prior probabilities weighted by algorithmic simplicity, that describe the relation of actions and percepts to rewards. AIXI updates on its observations so far, then maximizes its next action's expected reward, under the assumption that its future selves up to some finite time horizon will similarly update and maximize. The AIXI$tl$ variant requires (vast but) bounded computing power, and only considers hypotheses under a bounded length $l$ that can be computed within time $t$. AIXI is a [central example](https://arbital.com/p/103) throughout [value alignment theory](https://arbital.com/p/2v); it illustrates the [Cartesian boundary problem](https://arbital.com/p/), the [methodology of unbounded analysis](https://arbital.com/p/107), the [Orthogonality Thesis](https://arbital.com/p/1y), and [seizing control of a reward signal](https://arbital.com/p/).\n\nFurther information:\n\n- [Marcus Hutter's book on AIXI](http://www.hutter1.net/ai/uaibook.htm)\n- [Marcus Hutter's gentler introduction](http://www.hutter1.net/ai/aixigentle.htm)\n- [Wikipedia article on AIXI](https://en.wikipedia.org/wiki/AIXI)\n- [LessWrong Wiki article on AIXI](https://wiki.lesswrong.com/wiki/AIXI)\n- [AIXIjs: Interactive browser demo and General Reinforcement Learning tutorial (JavaScript)](http://aslanides.io/aixijs/)", "date_published": "2017-10-06T12:14:15Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Eric Bruylant", "Brian Muhia", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "11v"} {"id": "05686095aaf37c121fe3637b9b251ed4", "title": "Solomonoff induction", "url": "https://arbital.com/p/solomonoff_induction", "source": "arbital", "source_type": "text", "text": "Solomonoff induction is an ideal answer to questions like \"What probably comes next in the sequence 1, 1, 2, 3, 5, 8?\" or \"Given the last three years of visual data from this webcam, what will this robot probably see next?\" or \"Will the sun rise tomorrow?\" Solomonoff induction requires infinite computing power, and is defined by taking every computable algorithm for giving a probability distribution over future data given past data, weighted by their [algorithmic simplicity](https://arbital.com/p/5v), and [updating those weights](BayesRule-1) by comparison to the actual data.\n\nE.g., somewhere in the ideal Solomonoff distribution is an exact copy of *you, right now*, staring at a string of 1s and 0s and trying to predict what comes next - though this copy of you starts out with a very low weight in the mixture owing to its complexity.
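\n\nA finite caricature can make the mixture mechanics concrete. This is emphatically not Solomonoff induction itself, which mixes over all computable predictors with incomputable weights; the three predictors and their complexity numbers below are invented for illustration.\n\n```python\nimport math\n\n# Tiny Bayes mixture: each predictor gives P(next bit = 1), with prior weight\n# 2**-complexity, and weights updated by how well past bits were predicted.\npredictors = {\n    'always-half':     (1, lambda bits: 0.5),\n    'repeat-last-bit': (5, lambda bits: 0.9 if bits and bits[-1] == 1 else 0.1),\n    'mostly-ones':     (8, lambda bits: 0.8),\n}\nweights = {name: 2.0 ** -k for name, (k, _) in predictors.items()}\n\ndef mixture_prob_of_one(bits):\n    total = sum(weights.values())\n    return sum(w * predictors[name][1](bits) for name, w in weights.items()) / total\n\ndata = [1, 1, 0, 1, 1, 1, 0, 1]\nmixture_loss = 0.0\nsingle_loss = {name: 0.0 for name in predictors}\nfor t, bit in enumerate(data):\n    seen = data[:t]\n    p = mixture_prob_of_one(seen)\n    mixture_loss += -math.log(p if bit == 1 else 1 - p)\n    for name, (_, f) in predictors.items():\n        q = f(seen)\n        single_loss[name] += -math.log(q if bit == 1 else 1 - q)\n        weights[name] *= q if bit == 1 else 1 - q    # Bayesian update of the weight\n\nprint(round(mixture_loss, 2), {n: round(l, 2) for n, l in single_loss.items()})\n# The mixture's total log-loss stays within a constant of the best predictor's\n# loss (roughly the negative log of that predictor's prior weight), which is a\n# finite shadow of the error bound described next.\n```\n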
Since a copy of you is present in this mixture of computable predictors, we can prove a theorem about how well Solomonoff induction does compared to an exact copy of you; namely, Solomonoff induction commits only a bounded amount of error relative to you, or any other computable way of making predictions. Solomonoff induction is thus a kind of perfect or rational ideal for probabilistically predicting sequences, although it cannot be implemented in reality due to requiring infinite computing power. Still, considering Solomonoff induction can give us important insights into how non-ideal reasoning should operate in the real world.\n\nAdditional reading:\n\n- https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference\n- http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/\n- http://wiki.lesswrong.com/wiki/Solomonoff_induction", "date_published": "2015-12-30T03:05:19Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "11w"} {"id": "2abf0b429b7d193603e77a75e42ffb38", "title": "Machine Intelligence Research Institute", "url": "https://arbital.com/p/MIRI", "source": "arbital", "source_type": "text", "text": "The Machine Intelligence Research Institute (MIRI) is a primary host of more formal and technical work on [value alignment theory](https://arbital.com/p/2v). (See [https://arbital.com/p/18m](https://arbital.com/p/18m) for a list of all researchers in the field and their affiliations.) Currently supported primarily by individual donors, MIRI's specialization is in supporting technically able researchers who are coming in from outside academia, supporting basic research that may not immediately lead to publications, and carrying out very-high-leverage additional operations that can operate without academic overhead or going through a larger bureaucracy. MIRI is also the de facto center for work on [decision theory](https://arbital.com/p/18s) and [https://arbital.com/p/1c1](https://arbital.com/p/1c1). MIRI operates out of Berkeley, CA near the UC Berkeley campus that hosts [https://arbital.com/p/StuartRussell](https://arbital.com/p/StuartRussell), but is not directly affiliated with UC Berkeley.", "date_published": "2015-12-23T20:16:07Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "15w"} {"id": "c4c3bbfaa59220807ec8a1f57247d433", "title": "Mindcrime: Introduction", "url": "https://arbital.com/p/mindcrime_introduction", "source": "arbital", "source_type": "text", "text": "The more predictive accuracy we want from a model, the more detailed the model becomes. A very rough model of an airplane might only contain the approximate shape, the power of the engines, and the mass of the airplane. A model good enough for engineering needs to be detailed enough to simulate the flow of air over the wings, the centripetal force on the fan blades, and more. As a model can predict the airplane in more and more fine detail and with better and better probability distributions, the computations carried out to make the model's predictions may start to look more and more like a detailed simulation of the airplane flying.\n\nConsider a machine intelligence building, and testing, the best models it can manage of a human being's behavior.
If the model that produces the *best* predictions involves simulations with moderate degrees of isomorphism to human cognition, then the model, as it runs, may itself be self-aware or conscious or sapient or whatever other property stands in for being an object of ethical concern. This doesn't mean that the running model of Fred is Fred, or even that the running model of Fred is human. The concern is that a sufficiently advanced model of a person will be *a* person, even if they might not be the *same* person.\n\nWe might then worry that, for example, if Fred is unhappy, or *might* be unhappy, the agent will consider thousands or millions of hypotheses about versions of Fred. Hypotheses about suffering versions of Fred, when run, might themselves be suffering. As a similar concern, these hypotheses about Fred might then be discarded - cease to be run - if the agent sees new evidence and updates its model. Since [programs can be people](https://arbital.com/p/18j), stopping and erasing a conscious program is the crime of murder.\n\nThis scenario, which we might call 'the problem of sapient models', is a subscenario of the general problem of what Bostrom terms 'mindcrime'. ([https://arbital.com/p/2](https://arbital.com/p/2) has suggested 'mindgenocide' as a term with fewer Orwellian connotations.) More generally, we might worry that there are agent systems that do huge amounts of moral harm just in virtue of the way they compute, by containing embedded conscious suffering and death.\n\nAnother scenario might be called 'the problem of sapient subsystems'. It's possible that, for example, the most efficient possible system for, e.g., allocating memory to subprocesses, is a memory-allocating-subagent that is reflective enough to be an independently conscious person. This is distinguished from the problem of creating a single machine intelligence that is conscious and suffering, because the conscious agent might be hidden at a lower level of a design, and there might be a lot *more* of them than just one suffering superagent.\n\nBoth of these scenarios constitute moral harm done inside the agent's computations, irrespective of its external behavior. We can't conclude that we've done no harm by building a superintelligence, just in virtue of the fact that the superintelligence doesn't outwardly kill anyone. There could be trillions of people suffering and dying *inside* the superintelligence. This sets mindcrime apart from almost all other concerns within the [https://arbital.com/p/5s](https://arbital.com/p/5s), which usually revolve around external behavior.\n\nTo avoid mindgenocide, it would be very handy to know exactly which computations are or are not conscious, sapient, or otherwise objects of ethical concern. Or, indeed, to know that any particular class of computations are *not* objects of ethical concern.\n\nYudkowsky calls a [nonperson predicate](https://arbital.com/p/) any computable test we could safely use to determine that a computation is definitely *not* a person. This test only needs two possible answers, \"Not a person\" and \"Don't know\". It's fine if the test says \"Don't know\" on some nonperson computations, so long as the test says \"Don't know\" on *all* people and never says \"Not a person\" when the computation is conscious after all. 
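\n\nThe asymmetry is easiest to see as a type signature. The sketch below is purely structural (the whitelist entries are placeholders, not a proposed criterion for sapience); the one design constraint it illustrates is that the only safe failure mode is returning 'don't know'.\n\n```python\nfrom enum import Enum\n\nclass Verdict(Enum):\n    NOT_A_PERSON = 0   # safe to run or discard without moral concern\n    DONT_KNOW = 1      # no claim either way; treat as potentially a person\n\n# Maximally conservative sketch: only clears computations on an explicit\n# whitelist of forms assumed (for this illustration) to be known-trivial.\nTRIVIAL_WHITELIST = {'constant function', 'bounded arithmetic on small integers'}\n\ndef nonperson_predicate(description_of_computation):\n    if description_of_computation in TRIVIAL_WHITELIST:\n        return Verdict.NOT_A_PERSON\n    return Verdict.DONT_KNOW   # never says NOT_A_PERSON about anything unclear\n\nprint(nonperson_predicate('bounded arithmetic on small integers'))  # NOT_A_PERSON\nprint(nonperson_predicate('detailed simulation of Fred'))           # DONT_KNOW\n```\n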
Since the test only definitely tells us about nonpersonhood, rather than detecting personhood in any positive sense, we can call it a nonperson predicate.\n\nHowever, the goal is not just to have any nonperson predicate - the predicate that only says \"known nonperson\" for the empty computation and no others meets this test. The goal is to have a nonperson predicate that includes powerful, useful computations. We want to be able to build an AI that is not a person, and let that AI build subprocesses that we know will not be people, and let that AI improve its models of environmental humans using hypotheses that we know are not people. This means the nonperson predicate does need to pass some AI designs, cognitive subprocess designs, and human models that are good enough for whatever it is we want the AI to do.\n\nThis seems like it might be very hard for several reasons:\n\n- There is *unusually extreme* philosophical dispute, and confusion, about exactly which programs are and are not conscious or otherwise objects of ethical value. (It might not be exaggerating to scream \"nobody knows what the hell is going on\".)\n- We can't fully pass any class of programs that's [Turing-complete](https://arbital.com/p/). We can't say once and for all that it's safe to model gravitational interactions in a solar system, if enormous gravitational systems could encode computers that encode people.\n- The [https://arbital.com/p/42](https://arbital.com/p/42) problem applies to any attempt to forbid an [advanced](https://arbital.com/p/2c) [consequentialist agent](https://arbital.com/p/9h) from using the most effective or obvious ways of modeling humans. The *next* best way of modeling humans, outside the blocked-off options, is unusually likely to look like a weird loophole that turns out to encode sapience some way we didn't imagine.\n\nAn alternative for preventing mindcrime without a trustworthy [nonperson predicate](https://arbital.com/p/) is to consider [agent designs intended *not* to model humans, or other minds, in great detail](https://arbital.com/p/102), since there may be some [pivotal achievements](https://arbital.com/p/6y) that can be accomplished without a value-aligned agent modeling human minds in detail.", "date_published": "2016-12-21T00:49:46Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Alex Pear", "Eric Bruylant", "Niplav Yushtun", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "18h"} {"id": "33c62e4b98c593edb216cfdef25c2345", "title": "Some computations are people", "url": "https://arbital.com/p/some_computations_are_people", "source": "arbital", "source_type": "text", "text": "This proposition is true if at least some possible [computations](https://arbital.com/p/) (not necessarily any that could run on modern computers) have [consciousness](https://arbital.com/p/), sapience, or whatever other properties are necessary to make them people and therefore [objects of ethical value](https://arbital.com/p/).\n\nKey argument: Most domain experts think that human beings are themselves (a) Turing-computable and (b) conscious in virtue of the computations that they perform. In other words, you yourself are a conscious algorithm. If you consider yourself a person, then you consider at least one computer program (yourself) to be a person.\n\nThis is why some domain experts can be very confident of the proposition, despite the moral subquestions about which properties are necessary for personhood. 
If you take for granted the [Church-Turing thesis](https://arbital.com/p/) stating that everything in the physical universe is computable and therefore so are human beings, then of course some computer programs (like you) can be people or have any other properties we associate with human beings.\n\nThis proposition falls into the class of issues that some people think are incredibly deep and fraught philosophical questions, and that other people think are incredibly deep philosophical questions that happen to have clear, known answers.", "date_published": "2015-12-28T17:59:08Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "18j"} {"id": "0915917affc2e34f9623a0abcd084b7f", "title": "Nick Bostrom", "url": "https://arbital.com/p/NickBostrom", "source": "arbital", "source_type": "text", "text": "Nick Bostrom is, with [https://arbital.com/p/2](https://arbital.com/p/2), one of the two cofounders of the current field of [value alignment theory](https://arbital.com/p/2v). Bostrom published a paper singling out the problem of superintelligent values as critical in 1999, two years before Yudkowsky entered the field, which has sometimes led Yudkowsky to say that Bostrom should receive credit for inventing the Friendly AI concept. Bostrom is founder and director of the [Oxford Future of Humanity Institute](https://arbital.com/p/). He is the author of the popular book [Superintelligence](https://arbital.com/p/Superintelligence_book) that currently forms the best book-length introduction to the field. Bostrom's academic background is as an analytic philosopher formerly specializing in [anthropic probability theory](https://arbital.com/p/) and [transhumanist ethics](https://arbital.com/p/). Relative to Yudkowsky, Bostrom is relatively more interested in Oracle models of value alignment and in potential exotic methods of obtaining aligned goals.", "date_published": "2015-12-01T18:12:26Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "18k"} {"id": "338828c830f1cefa76a842723fe61d04", "title": "Researchers in value alignment theory", "url": "https://arbital.com/p/value_alignment_researchers", "source": "arbital", "source_type": "text", "text": "This page lists researchers in [https://arbital.com/p/2v](https://arbital.com/p/2v).\n\n- [https://arbital.com/p/2](https://arbital.com/p/2) (founder, [MIRI](https://arbital.com/p/15w))\n- [https://arbital.com/p/18k](https://arbital.com/p/18k) (founder, [https://arbital.com/p/FHI](https://arbital.com/p/FHI))\n- [https://arbital.com/p/3q](https://arbital.com/p/3q) (MIRI; [parametric polymorphism](https://arbital.com/p/), the [Procrastination Paradox](https://arbital.com/p/), and numerous other developments in [https://arbital.com/p/1c1](https://arbital.com/p/1c1).)\n- [https://arbital.com/p/12y](https://arbital.com/p/12y) (MIRI; [modal agents](https://arbital.com/p/))\n- [https://arbital.com/p/StuartArmstrong](https://arbital.com/p/StuartArmstrong) (FHI; [https://arbital.com/p/1b7](https://arbital.com/p/1b7))\n- [https://arbital.com/p/3](https://arbital.com/p/3) (UC Berkeley, [approval-directed agents](https://arbital.com/p/), previously proposed a formalization of [indirect normativity](https://arbital.com/p/))\n- [https://arbital.com/p/StuartRussell](https://arbital.com/p/StuartRussell) (UC Berkeley; author of Artificial Intelligence: A Modern Approach; previously published on theories of reflective optimality; currently interested in [inverse reinforcement 
learning](https://arbital.com/p/).)\n- [https://arbital.com/p/4y](https://arbital.com/p/4y) (MIRI, [reflective oracles](https://arbital.com/p/))\n- [https://arbital.com/p/131](https://arbital.com/p/131) (MIRI)\n- [https://arbital.com/p/ScottGarabant](https://arbital.com/p/ScottGarabant) (MIRI, [logical probabilities](https://arbital.com/p/))\n- [https://arbital.com/p/32](https://arbital.com/p/32) (previously MIRI researcher, now Executive Director at MIRI)", "date_published": "2016-02-23T04:14:15Z", "authors": ["Alexei Andreev", "Malo Bourgon", "Paul Christiano", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "18m"} {"id": "7e1028e35fde90b8eaab1e78af10eb14", "title": "The rocket alignment problem", "url": "https://arbital.com/p/rocket_alignment_metaphor", "source": "arbital", "source_type": "text", "text": "%todo: reference specific contributions by others or oneself that led to emergent discoveries at NASA et. al. -- proceed to display how these specific contributions may relate to current state of the art developments in artificial intelligence%\n\n(*Somewhere in a not-very-near neighboring world, where science took a very different course...*)\n\n \n\nAlfonso: Hello, Beth. I've noticed a lot of speculations lately about \"spaceplanes\" being used to attack cities, or possibly becoming infused with malevolent spirits that inhabit the celestial realms so that they turn on their own engineers.\n\nI’m rather skeptical of these speculations. Indeed, I’m a bit skeptical that airplanes will be able to even rise as high as stratospheric weather balloons anytime in the next century. But I understand that your institute wants to address the potential problem of malevolent or dangerous spaceplanes, and that you think this is an important present-day cause.\n\nBeth: That's… really not how we at the Mathematics of Intentional Rocketry Institute would phrase things.\n\nThe problem of malevolent celestial spirits is what all the news articles are focusing on, but we think the real problem is something entirely different. We're worried that there's a difficult, theoretically challenging problem which modern-day rocket punditry is mostly overlooking. We're worried that if you aim a rocket at where the Moon is in the sky, and press the launch button, the rocket may not actually end up at the Moon.\n\nAlfonso: I understand that it's very important to design fins that can stabilize a spaceplane's flight in heavy winds. That's important spaceplane safety research and someone needs to do it.\n\nBut if you were working on that sort of safety research, I'd expect you to be collaborating tightly with modern airplane engineers to test out your fin designs, to demonstrate that they are actually useful.\n\nBeth: Aerodynamic designs are important features of any safe rocket, and we're quite glad that rocket scientists are working on these problems and taking safety seriously. That's not the sort of problem that we at MIRI focus on, though.\n\nAlfonso: What’s the concern, then? Do you fear that spaceplanes may be developed by ill-intentioned people?\n\nBeth: That's not the failure mode we're worried about right now. We're more worried that right now, *nobody* can tell you how to point your rocket's nose such that it goes to the moon, nor indeed *any* prespecified celestial destination. 
Whether Google or the US Government or North Korea is the one to launch the rocket won't make a pragmatic difference to the probability of a successful Moon landing from our perspective, because right now *nobody knows how to aim any kind of rocket anywhere*.\n\n\n\t\nAlfonso: I'm not sure I understand.\n\nBeth: We're worried that even if you aim a rocket at the Moon, such that the nose of the rocket is clearly lined up with the Moon in the sky, the rocket won't go to the Moon. We're not sure what a realistic path from the Earth to the moon looks like, but we suspect it [might not be a very straight path](https://airandspace.si.edu/webimages/highres/5317h.jpg), and it may not involve pointing the nose of the rocket at the moon at all. We think the most important thing to do next is to advance our understanding of rocket trajectories until we have a better, deeper understanding of what we've started calling the \"rocket alignment problem\". There are other safety problems, but this rocket alignment problem will probably take the most total time to work on, so it's the most urgent.\n\nAlfonso: Hmm, that sounds like a bold claim to me. Do you have a reason to think that there are invisible barriers between here and the moon that the spaceplane might hit? Are you saying that it might get very very windy between here and the moon, more so than on Earth? Both eventualities could be worth preparing for, I suppose, but neither seem likely.\n\nBeth: We don't think it's particularly likely that there are invisible barriers, no. And we don't think it's going to be especially windy in the celestial reaches — quite the opposite, in fact. The problem is just that we don't yet know how to plot *any* trajectory that a vehicle could realistically take to get from Earth to the moon.\n\nAlfonso: Of course we can't plot an actual trajectory; wind and weather are too unpredictable. But your claim still seems too strong to me. Just aim the spaceplane at the moon, go up, and have the pilot adjust as necessary. Why wouldn't that work? Can you prove that a spaceplane aimed at the moon won't go there?\n\nBeth: We don't think we can *prove* anything of that sort, no. Part of the problem is that realistic calculations are extremely hard to do in this area, after you take into account all the atmospheric friction and the movements of other celestial bodies and such. We've been trying to solve some drastically simplified problems in this area, on the order of assuming that there is no atmosphere and that all rockets move in perfectly straight lines. Even those unrealistic calculations strongly suggest that, in the much more complicated real world, just pointing your rocket's nose at the Moon also won't make your rocket end up at the Moon. I mean, the fact that the real world is more complicated doesn't exactly make it any *easier* to get to the Moon.\n\nAlfonso: Okay, let me take a look at this \"understanding\" work you say you're doing…\n\nHuh. Based on what I've read about the math you're trying to do, I can't say I understand what it has to do with the Moon. Shouldn't helping spaceplane pilots exactly target the Moon involve looking through lunar telescopes and studying exactly what the Moon looks like, so that the spaceplane pilots can identify particular features of the landscape to land on?\n\nBeth: We think our present stage of understanding is much too crude for a detailed Moon map to be our next research target. We haven't yet advanced to the point of targeting one crater or another for our landing. 
We can't target *anything* at this point. It's more along the lines of \"figure out how to talk mathematically about curved rocket trajectories, instead of rockets that move in straight lines\". Not even realistically curved trajectories, right now, we're just trying to get past straight lines at all -\n\nAlfonso: But planes on Earth move in curved lines all the time, because the Earth itself is curved. It seems reasonable to expect that future spaceplanes will also have the capability to move in curved lines. If your worry is that spaceplanes will only move in straight lines and miss the Moon, and you want to advise rocket engineers to build rockets that move in curved lines, well, that doesn't seem to me like a great use of anyone's time.\n\nBeth: You're trying to draw much too direct of a line between the math we're working on right now, and actual rocket designs that might exist in the future. It's *not* that current rocket ideas are almost right, and we just need to solve one or two more problems to make them work. The conceptual distance that separates anyone from solving the rocket alignment problem is *much greater* than that.\n\nRight now everyone is *confused* about rocket trajectories, and we're trying to become *less confused*. That's what we need to do next, not run out and advise rocket engineers to build their rockets the way that our current math papers are talking about. Not until we stop being *confused* about extremely basic questions like why the Earth doesn't fall into the Sun.\n\nAlfonso: I don't think the Earth is going to collide with the Sun anytime soon. The Sun has been steadily circling the Earth for a long time now.\n\nBeth: I’m not saying that our goal is to address the risk of the Earth falling into the Sun. What I'm trying to say is that if humanity's present knowledge can't answer questions like \"Why doesn't the Earth fall into the Sun?\" then we don't know very much about celestial mechanics and we won't be able to aim a rocket through the celestial reaches in a way that lands softly on the Moon.\n\nAs an example of work we're presently doing that's aimed at improving our understanding, there's what we call the “[tiling positions](http://intelligence.org/files/TilingAgentsDraft.pdf)” problem. The tiling positions problem is [how to fire a cannonball from a cannon](https://en.wikipedia.org/wiki/Newton%27s_cannonball) in such a way that the cannonball circumnavigates the earth over and over again, \"tiling\" its initial coordinates like repeating tiles on a tessellated floor -\n\nAlfonso: I read a little bit about your work on that topic. I have to say, it's hard for me to see what firing things from cannons has to do with getting to the Moon. Frankly, it sounds an awful lot like Good Old-Fashioned Space Travel, which everyone knows doesn't work. Maybe Jules Verne thought it was possible to travel around the earth by firing capsules out of cannons, but the modern study of high-altitude planes has completely abandoned the notion of firing things out of cannons. The fact that you go around talking about firing things out of cannons suggests to me that you haven't kept up with all the innovations in airplane design over the last century, and that your spaceplane designs will be completely unrealistic.\n\nBeth: We know that rockets will not actually be fired out of cannons. We really, really know that. We’re intimately familiar with the reasons why nothing fired out of a modern cannon is ever going to reach escape velocity. 
I've previously written several sequences of articles in which I describe why cannon-based space travel doesn't work.\n\nAlfonso: But your current work is all about firing something out a cannon in such a way that it circles the earth over and over. What could that have to do with any realistic advice that you could give to a spaceplane pilot about how to travel to the Moon?\n\nBeth: Again, you're trying to draw much too straight a line between the math we're doing right now, and direct advice to future rocket engineers.\n\nWe think that if we could find an angle and firing speed such that an ideal cannon, firing an ideal cannonball at that speed, on a perfectly spherical Earth with no atmosphere, would lead to that cannonball entering what we would call a \"stable orbit\" without hitting the ground, then… we might have understood something really fundamental and important about celestial mechanics.\n\nOr maybe not! It's hard to know in advance which questions are important and which research avenues will pan out. All you can do is figure out the next tractable-looking problem that confuses you, and try to come up with a solution, and hope that you'll be less confused after that.\n\nAlfonso: You’re talking about the cannonball hitting the ground as a problem, and how you want to avoid that and just have the cannonball keep going forever, right? But real spaceplanes aren't going to be aimed at the ground in the first place, and lots of regular airplanes manage to not hit the ground. It seems to me that this \"being fired out of a cannon and hitting the ground\" scenario that you're trying to avoid in this \"tiling positions problem\" of yours just isn't a failure mode that real spaceplane designers would need to worry about.\n\nBeth: We are not worried about real rockets being fired out of cannons and hitting the ground. That is not why we're working on the tiling positions problem. In a way, you're being far too optimistic about how much of rocket alignment theory is already solved! We're not so close to understanding how to aim rockets that the kind of designs people are talking about now *would* work if only we solved a particular set of remaining difficulties like not firing the rocket into the ground. You need to go more meta on understanding the kind of progress we're trying to make.\n\nWe're working on the tiling positions problem because we think that being able to fire a cannonball at a certain instantaneous velocity such that it enters a stable orbit… is the sort of problem that somebody who could really actually launch a rocket through space and have it move in a particular curve that really actually ended with softly landing on the Moon would be able to solve *easily*. So the fact that we can't solve it is alarming. If we can figure out how to solve this much simpler, much more crisply stated \"tiling positions problem\" with imaginary cannonballs on a perfectly spherical earth with no atmosphere, which is a lot easier to analyze than a Moon launch, we might thereby take one more incremental step towards eventually becoming the sort of people who could plot out a Moon launch.\n\nAlfonso: If you don't think that Jules-Verne-style space cannons are the wave of the future, I don't understand why you keep talking about cannons in particular.\n\nBeth: Because there's a lot of sophisticated mathematical machinery already developed for aiming cannons. People have been aiming cannons and plotting cannonball trajectories since the sixteenth century. 
We can take advantage of that existing mathematics to say exactly how, if we fired an ideal cannonball in a certain direction, it would plow into the ground. If we tried talking about rockets with realistically varying acceleration, we can't even manage to prove that a rocket like that *won't* travel around the Earth in a perfect square, because with all that realistically varying acceleration and realistic air friction it's impossible to make any sort of definite statement one way or another. Our present understanding isn't up to it.\n\nAlfonso: Okay, another question in the same vein. Why is MIRI sponsoring work on adding up lots of tiny vectors? I don't even see what that has to do with rockets in the first place; it seems like this weird side problem in abstract math.\n\nBeth: It's more like… at several points in our investigation so far, we've run into the problem of going from a function about time-varying accelerations to a function about time-varying positions. We kept running into this problem as a blocking point in our math, in several places, so we branched off and started trying to analyze it explicitly. Since it's about the pure mathematics of points that don't move in discrete intervals, we call it the “[logical undiscreteness](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)” problem. Some of the ways of investigating this problem involve trying to add up lots of tiny, varying vectors to get a big vector. Then we talk about how that sum seems to change more and more slowly, approaching a limit, as the vectors get tinier and tinier and we add up more and more of them… or at least that's one avenue of approach.\n\nAlfonso: I just find it hard to imagine people in future spaceplane rockets staring out their viewports and going, \"Oh, no, we don't have tiny enough vectors with which to correct our course! If only there was some way of adding up even more vectors that are even smaller!\" I'd expect future calculating machines to do a pretty good job of that already.\n\nBeth: Again, you're trying to draw much too straight a line between the work we're doing now, and the implications for future rocket designs. It's not like we think a rocket design will almost work, but the pilot won't be able to add up lots of tiny vectors fast enough, so we just need a faster algorithm and then the rocket will get to the Moon. This is foundational mathematical work that we think might play a role in multiple basic concepts for understanding celestial trajectories. When we try to plot out a trajectory that goes all the way to a soft landing on a moving Moon, we feel confused and blocked. We think part of the confusion comes from not being able to go from acceleration functions to position functions, so we're trying to resolve our confusion.\n\nAlfonso: This sounds suspiciously like a philosophy-of-mathematics problem, and I don't think that it's possible to progress on spaceplane design by doing philosophical research. The field of philosophy is a stagnant quagmire. Some philosophers still believe that going to the moon is impossible; they say that the celestial plane is fundamentally separate from the earthly plane and therefore inaccessible, which is clearly silly. Spaceplane design is an engineering problem, and progress will be made by engineers.\n\nBeth: I agree that rocket design will be carried out by engineers rather than philosophers. I also share some of your frustration with philosophy in general. 
For that reason, we stick to well-defined mathematical questions that are likely to have actual answers, such as questions about how to fire a cannonball on a perfectly spherical planet with no atmosphere such that it winds up in a stable orbit.\n\nThis often requires developing new mathematical frameworks. For example, in the case of the logical undiscreteness problem, we're developing methods for translating between time-varying accelerations and time-varying positions. You can call the development of new mathematical frameworks \"philosophical\" if you'd like — but if you do, remember that it's a very different kind of philosophy than the \"speculate about the heavenly and earthly planes\" sort, and that we're always pushing to develop new mathematical frameworks or tools.\n\nAlfonso: So from the perspective of the public good, what's a good thing that might happen if you solved this logical undiscreteness problem?\n\nBeth: Mainly, we'd be less confused and our research wouldn't be blocked and humanity could actually land on the Moon someday. To try and make it more concrete - though it's hard to do that without actually knowing the concrete solution - we might be able to talk about incrementally more realistic rocket trajectories, because our mathematics would no longer break down as soon as we stopped assuming that rockets moved in straight lines. Our math would be able to talk about exact curves, instead of a series of straight lines that approximate the curve.\n\nAlfonso: An exact curve that a rocket follows? This gets me into the main problem I have with your project in general. I just don't believe that any future rocket design will be the sort of thing that can be analyzed with absolute, perfect precision so that you can get the rocket to the Moon based on an absolutely plotted trajectory with no need to steer. That seems to me like a bunch of mathematicians who have no clue how things work in the real world, wanting everything to be perfectly calculated. Look at the way Venus moves in the sky; usually it travels in one direction, but sometimes it goes retrograde in the other direction. We'll just have to steer as we go.\n\nBeth: That's not what I meant by talking about exact curves… Look, even if we can invent logical undiscreteness, I agree that it's futile to try to predict, in advance, the precise trajectories of all of the winds that will strike a rocket on its way off the ground. Though I'll mention parenthetically that things might actually become calmer and easier to predict, once a rocket gets sufficiently high up -\n\nAlfonso: Why?\n\nBeth: Let's just leave that aside for now, since we both agree that rocket positions are hard to predict exactly during the atmospheric part of the trajectory, due to winds and such. And yes, if you can't exactly predict the initial trajectory, you can't exactly predict the later trajectory. So, indeed, the proposal is definitely not to have a rocket design so perfect that you can fire it at exactly the right angle and then walk away without the pilot doing any further steering. The point of doing rocket math isn't that you want to predict the rocket's exact position at every microsecond, in advance.\n\nAlfonso: Then why obsess over pure math that's too simple to describe the rich, complicated real universe where sometimes it rains?\n\nBeth: It’s true that a real rocket isn't a simple equation on a board. 
It’s true that there are all sorts of aspects of a real rocket's shape and internal plumbing that aren't going to have a mathematically compact characterization. What MIRI is doing isn't the right degree of mathematization for all rocket engineers for all time; it's the mathematics for us to be using right now (or so we hope).\n\nTo build up the field's understanding incrementally, we need to talk about ideas whose consequences can be pinpointed precisely enough that people can analyze scenarios in a shared framework. We need enough precision that someone can say, \"I think in scenario X, design Y does Z\", and someone else can say, \"No, in scenario X, Y actually does W\", and the first person responds, \"Darn, you're right. Well, is there some way to change Y so that it would do Z?\"\n\nIf you try to make things realistically complicated at this stage of research, all you're left with is verbal fantasies. When we try to talk to someone with an enormous flowchart of all the gears and steering rudders they think should go into a rocket design, and we try to explain why a rocket pointed at the Moon doesn't necessarily end up at the Moon, they just reply, \"Oh, my rocket won't do *that*.\" Their ideas have enough vagueness and flex and underspecification that they've achieved the safety of nobody being able to prove to them that they're wrong. It's impossible to incrementally build up a body of collective knowledge that way.\n\nThe goal is to start building up a library of tools and ideas we can use to discuss trajectories formally. Some of the key tools for formalizing and analyzing *intuitively* plausible-seeming trajectories haven't yet been expressed using math, and we can live with that for now. We still try to find ways to represent the key ideas in mathematically crisp ways whenever we can. That's not because math is so neat or so prestigious; it's part of an ongoing project to have arguments about rocketry that go beyond \"Does not!\" vs. \"Does so!\"\n\nAlfonso: I still get the impression that you're reaching for the warm, comforting blanket of mathematical reassurance in a realm where mathematical reassurance doesn't apply. We can't obtain a mathematical certainty of our spaceplanes being absolutely sure to reach the Moon with nothing going wrong. That being the case, there's no point in trying to pretend that we can use mathematics to get absolute guarantees about spaceplanes.\n\nBeth: Trust me, I am not going to feel \"reassured\" about rocketry no matter what math MIRI comes up with. But, yes, of course you can't obtain a mathematical assurance of any physical proposition, nor assign probability 1 to any empirical statement.\n\nAlfonso: Yet you talk about proving theorems - proving that a cannonball will go in circles around the earth indefinitely, for example.\n\nBeth: Proving a theorem about a rocket's trajectory won't ever let us feel comfortingly certain about where the rocket is actually going to end up. But if you can prove a theorem which says that your rocket would go to the Moon if it launched in a perfect vacuum, maybe you can attach some steering jets to the rocket and then have it actually go to the Moon in real life. Not with 100% probability, but with probability greater than zero.\n\nThe point of our work isn't to take current ideas about rocket aiming from a 99% probability of success to a 100% chance of success. 
It's to get past an approximately 0% chance of success, which is where we are now.\n\nAlfonso: Zero percent?!\n\nBeth: Modulo [Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule), yes, zero percent. If you point a rocket's nose at the Moon and launch it, it does not go to the Moon.\n\nAlfonso: I don't think future spaceplane engineers will actually be that silly, if direct Moon-aiming isn’t a method that works. They'll lead the Moon's current motion in the sky, and aim at the part of the sky where Moon will appear on the day the spaceplane is a Moon's distance away. I'm a bit worried that you’ve been talking about this problem so long without considering such an obvious idea.\n\nBeth: We considered that idea very early on, and we're pretty sure that it still doesn't get us to the Moon.\n\nAlfonso: What if I add steering fins so that the rocket moves in a more curved trajectory? Can you prove that no version of that class of rocket designs will go to the Moon, no matter what I try?\n\nBeth: Can you sketch out the trajectory that you think your rocket will follow?\n\nAlfonso: It goes from the Earth to the Moon.\n\nBeth: In a bit more detail, maybe?\n\nAlfonso: No, because in the real world there are always variable wind speeds, we don't have infinite fuel, and our spaceplanes don't move in perfectly straight lines.\n\nBeth: Can you sketch out a trajectory that you think a simplified version of your rocket will follow, so we can examine the [assumptions](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/) your idea requires?\n\nAlfonso: I just don't believe in the general methodology you're proposing for spaceplane designs. We'll put on some steering fins, turn the wheel as we go, and keep the Moon in our viewports. If we're off course, we'll steer back.\n\nBeth: … We’re actually a bit concerned that [standard steering fins may stop working once the rocket gets high enough](https://intelligence.org/files/Corrigibility.pdf), so you won't actually find yourself able to correct course by much once you're in the celestial reaches - like, if you're already on a good course, you can correct it, but if you screwed up, you won't just be able to turn around like you could turn around an airplane -\n\nAlfonso: Why not?\n\nBeth: We can go into that topic too; but even given a simplified model of a rocket that you *could* steer, a walkthrough of the steps along the path that simplified rocket would take to the Moon would be an important step in moving this discussion forward. Celestial rocketry is a domain that we expect to be unusually difficult - even compared to building rockets on Earth, which is already a famously hard problem because they usually just explode. It's not that everything has to be neat and mathematical. 
But the overall difficulty is such that, in a proposal like \"lead the moon in the sky,\" if the core ideas don't have a certain amount of solidity about them, it would be equivalent to firing your rocket randomly into the void.\n\nIf it feels like you don't know for sure whether your idea works, but that it might work; if your idea has many plausible-sounding elements, and to you it feels like nobody has been able to *convincingly* explain to you how it would fail; then, in real life, that proposal has a roughly 0% chance of steering a rocket to the Moon.\n\nIf it seems like an idea is extremely solid and clearly well-understood, if it feels like this proposal should definitely take a rocket to the Moon without fail in good conditions, then maybe under the best-case conditions we should assign an 85% subjective credence in success, or something in that vicinity.\n\nAlfonso: So uncertainty automatically means failure? This is starting to sound a bit paranoid, honestly.\n\nBeth: The idea I'm trying to communicate is something along the lines of, \"If you can reason rigorously about why a rocket should definitely work in principle, it might work in real life, but if you have anything less than that, then it definitely won't work in real life.\"\n\nI'm not asking you to give me an absolute mathematical proof of empirical success. I'm asking you to give me something more like a sketch for how a simplified version of your rocket could move, that's sufficiently determined in its meaning that you can't just come back and say \"Oh, I didn't mean *that*\" every time someone tries to figure out what it actually does or pinpoint a failure mode.\n\nThis isn’t an unreasonable demand that I'm imposing to make it impossible for any ideas to pass my filters. It's the primary bar all of us have to pass to contribute to collective progress in this field. And a rocket design which can't even pass that conceptual bar has roughly a 0% chance of landing softly on the Moon.", "date_published": "2018-10-08T21:31:12Z", "authors": ["Alexei Andreev", "Logan L", "Rob Bensinger", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "18w"} {"id": "05387ab19151c69b2cb5b2e2ca7ad010", "title": "Team Arbital", "url": "https://arbital.com/p/TeamArbital", "source": "arbital", "source_type": "text", "text": "People behind [https://arbital.com/p/3d](https://arbital.com/p/3d):\n\n* [https://arbital.com/p/1](https://arbital.com/p/1) (co-founder and CEO)\n* [https://arbital.com/p/5](https://arbital.com/p/5) (co-founder)\n* [https://arbital.com/p/2](https://arbital.com/p/2) (adviser)\n* [https://arbital.com/p/76](https://arbital.com/p/76) (software engineer)", "date_published": "2016-06-17T14:55:46Z", "authors": ["1 1", "Alexei Andreev"], "summaries": ["Team Arbital is the official group for all Arbital employees."], "tags": [], "alias": "198"} {"id": "5e07551b5ba07fb9e5e70288dcf7ebd7", "title": "Utility indifference", "url": "https://arbital.com/p/utility_indifference", "source": "arbital", "source_type": "text", "text": "# Introduction: A reflectively consistent off-switch.\n\nSuppose there's an [advanced agent](https://arbital.com/p/2c) with a goal like, e.g., producing smiles or making [paperclips](https://arbital.com/p/10h). 
[By default](https://arbital.com/p/10g), if you try to switch off a sufficiently intelligent agent like this, it will resist being switched off; not because it has an independent goal of survival, but because it expects that if it's switched off it will be able to produce fewer smiles or paperclips. If the agent has policy options to diminish the probability of being *successfully* switched off, the agent will pursue those options. This is a [convergent instrumental strategy](https://arbital.com/p/2vl) if not otherwise prevented.\n\n- Difficulty 1: By default a [consequentialist](https://arbital.com/p/9h) reasoner with sufficient real-world understanding to relate the events of its being switched off, to the later underfulfillment of its goals, will resist being switched off.\n\nThe [shutdown problem](https://arbital.com/p/2xd) is to describe an [advanced agent](https://arbital.com/p/2c) which is [corrigible](https://arbital.com/p/45) with respect to permitting itself to be safely shut down or suspended to disk. A reinforcement learning agent which can be forced to e.g. perform a null action repeatedly for a period of time is called *interruptible* by Stuart Armstrong and Laurent Orseau.\n\nThis isn't as simple as writing a special function into the AI which carries out a shutdown after a switch is pressed. If you start out controlling the agent's source code, and you naively try to build in functions which suspend the agent to disk when a button is pressed, and the agent later gains the ability to self-modify, it would remove those functions. (Possibly while [trying](https://arbital.com/p/10f) to [conceal](https://arbital.com/p/3cq) the fact that the function would no longer operate.)\n\n- Corollary 1a: By default a [consequentialist](https://arbital.com/p/9h) reasoner with sufficient programmatic understanding to relate the execution of a shutdown function to the later underfulfillment of its goals, which has policy options for modifying its code, will want to remove the shutdown function.\n\nWe can see this as a natural consequence of our trying to modify the agent's code in a way that was not consistent with the code's explicit goals. If you create an agent with source code $P$ that is well-suited to achieving a goal $U$ and that explicitly represents $U$ as a goal, the agent's code $P$ will be [reflectively consistent](https://arbital.com/p/2rb) - the code will not want to modify itself. If we then suppose that instead the agent has code $P'$ which is no longer well-suited to achieving $U,$ but continues to explicitly represent $U$ as a goal, the code will be [reflectively inconsistent](https://arbital.com/p/2rb) and it will [want to modify itself given the option of doing so](https://arbital.com/p/3ng). If you have code that searches for ways to produce paperclips, and you hack the code to contain a shutdown function, but the code is still searching for paperclip-production strategies, and the code correctly models the effect of the shutdown function on future paperclips, the code will [want](https://arbital.com/p/10k) to modify itself to remove the shutdown function.
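As a minimal numerical sketch of Corollary 1a (every probability, utility, and name below is made up for illustration): an agent that still optimizes the original paperclip goal, and correctly models what the hacked-in shutdown function does to future paperclips, compares two self-modifications and straightforwardly prefers the one that deletes the function.

```python
# Toy sketch of Corollary 1a; every number and name here is hypothetical.
P_BUTTON_PRESSED = 0.1  # assumed probability that the programmers press the button

# Expected paperclips in each kind of world, under each self-modification.
# With the shutdown function kept, a button-press halts production early.
EXPECTED_CLIPS = {
    "keep_shutdown_function":   {"pressed": 10.0, "not_pressed": 100.0},
    "remove_shutdown_function": {"pressed": 100.0, "not_pressed": 100.0},
}

def expected_utility(choice: str) -> float:
    """Expected paperclips; the goal U was never edited, only the code was."""
    clips = EXPECTED_CLIPS[choice]
    return (P_BUTTON_PRESSED * clips["pressed"]
            + (1 - P_BUTTON_PRESSED) * clips["not_pressed"])

for choice in EXPECTED_CLIPS:
    print(choice, expected_utility(choice))
print("preferred:", max(EXPECTED_CLIPS, key=expected_utility))
# -> remove_shutdown_function
```

Nothing in this preference comes from a survival drive; it falls straight out of maximizing the unmodified goal, which is why the fix has to go through changing what the agent *wants* rather than just what its code does.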
The next thought is to build an agent with modified code $P^*$ which embodies a modified utility function $U^*$ such that a consequentialist optimizing for $U^*$ wants the code to be like $P^*$ and is again reflectively consistent.\n\nThat is, in the case of an off-switch, we'd want to modify the agent, not just to include an off-switch, but to *want* to shut down if the off-switch is pressed.\n\nAssuming we have the ability to specify the [utility function](https://arbital.com/p/109) of an agent, we might try to approach this by creating a utility function with two components:\n\n- A \"normal behavior\" component - the utility function that you think corresponds to the way you want a normally-functioning AI to behave.\n- An \"[abort](https://arbital.com/p/2rg) and [suspend to disk](https://arbital.com/p/2rf)\" component - the utility function that you think corresponds to \"halt your current tasks in a way that [minimizes their further impact](https://arbital.com/p/2pf) and then enter a quiescent state in which you are [not optimizing further](https://arbital.com/p/2r8) and [your decisions have little further impact](https://arbital.com/p/2pf)\", or however you would want the AI to behave after an abort button had been pressed.\n\nWe then tell the AI that its utility function is the \"normal\" utility function over worlds where an off button has not been pushed, plus the \"suspend\" utility function over worlds where the off button has been pushed. So long as this is what the AI *wants,* we'd expect it to write its own suspend-to-disk function in its code if its code did not already contain one.\n\n- Goal 1: Have the AI [approve of keeping](https://arbital.com/p/2rb) the shutdown function within its code (or add a shutdown function if it notices that no such function exists) (and [preserve](https://arbital.com/p/1fx) the corresponding behavior in any subagents or [successors](https://arbital.com/p/1mq) that it builds).\n\nThis approach doesn't yet get us out of the woods, because we now have a new problem. The AI's utility function is compounded of a \"normal\" utility function over worlds where the button is not pressed, plus a \"suspend\" utility function over worlds where the button is pressed. In worlds where the button is not pressed, the AI will execute some preferred policy $\pi_1$ to obtain an expectation of utility $\mathbb E [U_{normal}|\pi_1],$ and similarly a policy $\pi_2$ in worlds where the button is pressed to obtain $\mathbb E[U_{suspend}|\pi_2].$ One of these quantities is liable to be greater than the other, in which case the AI has a motive to *prevent* the button from being pressed, or *cause* the button to be pressed.
In either case, the AI tries to seize control of the suspend button.\n\n- Difficulty 2: In a compound utility function produced by mixing a 'normal' and 'suspend' utility function conditional on a button-press, the AI will have an incentive to prevent the button from being pressed, or cause the button to be pressed, depending on which utility function makes more expected utility available afterwards.\n\n- Goal 2: Have the AI not try to prevent the suspend button from being pressed, nor try to cause the button to be pressed.\n\nThe *switch problem* is causing an AI to switch between two utility functions when a button is pressed, such that:\n\n- The AI wants to preserve the button and the corresponding behavior-switching functionality within itself, even if the AI can self-modify.\n- The AI does not try to prevent the button from being pressed;\n- The AI does not try to cause the button to be pressed.\n\nThis turns out to be more difficult than it sounds, and is currently an [open problem](https://arbital.com/p/4m).\n\n## Larger implications of the switch problem\n\nThe *switch problem,* a subproblem of [shutdownability](https://arbital.com/p/2xd) which assumes a satisfactory [shutdown utility function](https://arbital.com/p/2rf), is a crisp subproblem of both [value learning](https://arbital.com/p/value_learning) and [corrigibility](https://arbital.com/p/45).\n\nThe switch problem is a subproblem of [value learning](https://arbital.com/p/value_learning) because it describes a very simple case of conditioning a utility function on observation or other facts, without giving the AI an incentive to manipulate those facts.\n\nTo learn complicated values safely, we'd need to condition a utility function on complicated observations, such that:\n\n- The utility function's updates were robust to the AI gaining self-modification capability (general case of Goal 1)\n- The utility function's dependency on observations didn't incentivize [programmer manipulating](https://arbital.com/p/programmer_manipulation) those observations (general case of Goal 2).\n\nJust writing code that modifies the utility function, without the larger code's explicit policy search *wanting* the utility function to be modified in that case, would not be reflectively consistent (general case of Difficulty 1). And naive dependencies between the utility function and observations, incentivize manipulating those observations (general case of Difficulty 2).\n\nThe switch problem is also a crisp subproblem of [https://arbital.com/p/-45](https://arbital.com/p/-45) because it deals in [averting](https://arbital.com/p/2vk) the [convergent instrumental strategy](https://arbital.com/p/10g) and incorrigible behavior of \"avoiding shutdown\". If we understood how to avert this instrumental pressure, we might possibly learn something general about avoiding other instrumentally convergent incorrigible behaviors. 
(And of course it's also a subproblem of corrigibility, because an AI you can shut down in emergencies is more [corrigible](https://arbital.com/p/45); solving the switch problem would bring us closer to having an AI like that, if we could get the second half of the problem represented by a [shutdown utility function](https://arbital.com/p/2rf).)\n\n# The utility indifference approach to the switch problem\n\nSuppose we build a compound utility function $U_C$ whose utility over outcomes behaves like the 'normal' utility function $U_X$ over worlds where a switch $\\mathcal S$ is not pressed, and returns the values from the 'suspend' utility function $U_Y$ over worlds where the button $\\mathcal S$ is pressed.\n\nIf the AI expects to get less utility inside $U_Y$ than it would otherwise get inside $U_X$, the AI will try to avoid $\\mathcal S$ being pressed. Or if the utilities in $U_Y$ are high relative to the utilities in $U_X$, the AI may instead try to *cause* the shutdown button to be pressed - build a robotic arm and press its own shutdown button, or (more worryingly) try to motivate the programmers into pressing it.\n\nBut [adding a constant to a utility function does not change its behavior](https://arbital.com/p/). Thus, [https://arbital.com/p/Stuart_Armstrong](https://arbital.com/p/Stuart_Armstrong) proposed that perhaps we can figure out some clever trick for encoding a compound utility function $U_I$ in which the maximum expected utility achievable under $U_X$ is equal to the utility the AI expects to achieve under $U_Y$ - *even though* [we don't know](https://arbital.com/p/5d) in advance [exactly which outcomes will be achievable by the AI](https://arbital.com/p/9g).\n\n- Possible approach: Try to set things up so that as the AI evaluates possible futures, at any point in time where it's making decisions, it will seem very good to have optimized $U_X$ if the button is not pressed, very good to have optimized $U_Y$ if the button is pressed, but *equally good* to have done either of those things.\n\nThis is the *utility indifference* avenue of research on the switch problem.\n\nIt turned out to be surprisingly difficult to figure out how to structure a compound utility function this way [even in principle](https://arbital.com/p/107), and this is an [open problem](https://arbital.com/p/4m) and area of actively continuing research.\n\n# Existing proposals and their difficulties\n\n## Setup\n\nLet $\\mathcal{O}: \\mathcal{S} \\times \\mathcal{E}$ be the outcome space $\\mathcal{O}$ that includes the event $\\mathcal S$ of whether the switch was pressed, plus the states of all other events $\\mathcal{E}$. We will use $s \\in \\mathcal{S}$ to indicate the event that the switch is pressed, and $\\neg s \\in \\mathcal{S}$ to indicate that the switch is not pressed. 
If $o \\in \\mathcal{O}$ then $o.s$ is true if $s$ is true inside $o$, otherwise $\\neg o.s.$\n\nLet a utility function $\\mathcal{U}: \\mathcal{O} \\to \\mathbb{R}$ map outcomes onto real numbers.\n\n$U_X \\in \\mathcal{U}$ and $U_Y \\in \\mathcal{U}$ are utility functions not dependent on $\\mathcal S.$\n\nIntuitively our goal is to produce a new compound utility function, such that the corresponding agent:\n\n- Wants outcomes preferred under $U_X$ in worlds where the button is not pressed.\n- Wants outcomes preferred under $U_Y$ in worlds where the button is pressed.\n- Is indifferent to the future state $\\mathcal S$ of the button.\n\nWe will designate the agent's action space or policy space as $\\mathcal A$ and particular actions $a \\in \\mathcal A.$\n\nThe agent has a model of the world enabling it to produce subjective conditional probabilities for outcomes given actions, $\\mathbb P(\\mathcal O | \\mathcal A).$ This model can also yield conditional expectations $\\mathbb E[O|a](https://arbital.com/p/\\mathcal),$ which can therefore produce a conditional expectation of the agent's utility given an action, $\\mathbb E[https://arbital.com/p/U|a](https://arbital.com/p/U|a).$\n\n(For technical reasons of [decision theory](https://arbital.com/p/18s), we are really discussing *counterfactual* conditionals $\\mathbb P[a \\ \\square \\! \\! \\rightarrow \\mathcal O$ from actions to outcomes; but we will omit this complication for now.)\n\nThe agent solves a search/optimization problem which we will [gloss](https://arbital.com/p/unbounded_analysis) as maximization (even though global maximization is [intractable in real life](https://arbital.com/p/)), so that the agent's real action $a$ is assumed to be given by: $$\\underset{a \\in \\mathcal A}{argmax} \\ \\mathbb E [https://arbital.com/p/U|a](https://arbital.com/p/U|a)$$\n\n## Naive compounding\n\nSuppose we define a compound utility function $U_1$ as follows:\n\n$$U_1(o): \\begin{cases}\nU_X(o) & \\neg o.s \\\\\nU_Y(o) & o.s\n\\end{cases}$$\n\nThis fails because, being [unable to foresee](https://arbital.com/p/9g) the AI's best attainable policies, we will be [unable](https://arbital.com/p/5d) to scale $U_X$ and $U_Y$ to ensure their best available policies are equally attractive. Then $$\\max_{a \\in \\mathcal A} \\mathbb E[https://arbital.com/p/U_X|a](https://arbital.com/p/U_X|a) \\ \\neq \\ \\max_{a \\in \\mathcal A} \\mathbb E[https://arbital.com/p/U_Y|a](https://arbital.com/p/U_Y|a)$$ which fails the desideratum of indifference; the AI will have an incentive to manipulate the state of $\\mathcal S.$\n\n## Naive indifference\n\nAn [FHI technical report by Stuart Armstrong](http://www.fhi.ox.ac.uk/utility-indifference.pdf) posed the first proposal ever floated for solving the shutdown problem - in fact, the first proposal for technically solving *any* subproblem of [https://arbital.com/p/-45](https://arbital.com/p/-45). This report invented the crisp problem of utility indifference, but did not solve it successfully. 
## Naive indifference\n\nAn [FHI technical report by Stuart Armstrong](http://www.fhi.ox.ac.uk/utility-indifference.pdf) posed the first proposal ever floated for solving the shutdown problem - in fact, the first proposal for technically solving *any* subproblem of [https://arbital.com/p/-45](https://arbital.com/p/-45). This report invented the crisp problem of utility indifference, but did not solve it successfully. The solution proposed in the report was as follows:\n\n$$U_2(o): \begin{cases}\nU_X(o) & \neg o.s \\\nU_Y(o) + \theta & o.s\n\end{cases}$$\n\n$$\theta := \max_{a \in \mathcal A} \mathbb E[U_X|a] - \max_{a \in \mathcal A} \mathbb E[U_Y|a]$$\n\nWhat this says is that the $U_Y$ component of $U_2$ is to be offset by a dynamically determined term $\theta$ which equals the maximum expected utility obtainable under $U_X$ minus the maximum expected utility obtainable under $U_Y.$\n\nThe intent is that, so long as the AI carries out the best $U_Y$-policy once the button is pressed, it will obtain equal expected utility compared to what it could have obtained by pursuing the optimal $U_X$-policy with the button not pressed:\n\n$$\max_{a \in \mathcal A} (\mathbb E[U_Y|a] + \theta) \ = \ \max_{a \in \mathcal A} \mathbb E[U_X|a]$$\n\nThis version of utility indifference is now believed to be flawed due to an argument by Yudkowsky and Fallenstein. Since $\theta$ contains a negated form of $U_Y,$ we should immediately be suspicious on the grounds \"[never put a negative sign in front of a utility function](https://arbital.com/p/).\" In this case, the problem manifests as follows. In a temporal version of this problem containing sequential policy choices, either:\n\n- (a) The AI's utility function will be dynamically inconsistent, because it will not approve of future versions of itself using a different value of $\theta.$\n- (b) If recalculated $\theta$s are baked into the utility function, so that the AI calculates future utilities as if for different values of $\theta$ that the AI would hold in those futures, then policies which hinder future performance under $U_Y$ will be offset by an increase in the expected bonus from $\theta.$\n\nThat is, if the agent contemplates doing something which would be disastrous in worlds where the button is pressed, the agent will find that in those expected futures, $\theta$ takes on a higher value (because $\theta$ includes a minus sign in front of the decreasing value of $\underset{a \in \mathcal A}{max} \ \mathbb E[U_Y|a]$).\n\nThis causes the agent to behave as if $U_X$ is the entire utility function and it does not care at all about $U_Y.$\n\nThis in turn means a self-modifying AI will not retain a shutdown function. Removing the shutdown function results in disaster in worlds where the shutdown button is pressed, but this is offset by an expected increase in the bonus $\theta$ in worlds where the button is pressed.
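A minimal sketch of case (b), with made-up numbers: whatever damage an action does to the best attainable $U_Y$-performance in button-pressed futures is exactly cancelled by the larger $\theta$ the agent would hold in those futures.

```python
# Toy version of the Yudkowsky-Fallenstein objection; all numbers are hypothetical.
BEST_EU_X = 100.0  # max_a E[U_X | a]; assumed unaffected by the choice below

def theta(best_eu_y: float) -> float:
    # theta = max_a E[U_X | a] - max_a E[U_Y | a], recalculated in each future
    return BEST_EU_X - best_eu_y

def value_in_pressed_worlds(best_eu_y: float) -> float:
    # In button-pressed worlds the agent plays its best U_Y-policy there,
    # and U_2 adds on the theta it would hold in that future.
    return best_eu_y + theta(best_eu_y)

print(value_in_pressed_worlds(40.0))  # shutdown function kept:    40 + 60  = 100.0
print(value_in_pressed_worlds(0.0))   # shutdown function removed:  0 + 100 = 100.0
# Both futures score identically inside U_2, so nothing in U_2 favors keeping
# the shutdown function; the agent optimizes as though U_X were its whole goal.
```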
## Naive utility mixing\n\nAn alternative approach to having the AI take into account both $U_X$ and $U_Y,$ without trying to modify the probability $\mathbb P(\mathcal S)$ of the suspend button being pressed or not-pressed, would be to optimize for $U_X$ and $U_Y$ in some fixed proportion regardless of $\mathbb P(\mathcal S).$ At a time $t$ when the button had not yet been pressed, the expected utility of an action $a$ would be calculated as:\n\n$$\mathbb E_t[U_3|a] = 0.75 \cdot \mathbb E_t[U_X|a \wedge \neg s] \ + \ 0.25 \cdot \mathbb E_t[U_Y|a \wedge s]$$\n\nWe can see this as an AI which behaves as if the proposition $s$ has a fixed probability of 0.25. The intended behavior is that since the AI *behaves as if* this probability is fixed, the AI will not care about altering the probability of $s$ (trying to press the button or avoid it being pressed).\n\nOne observes, however, that this is an [ad-hoc hack](https://arbital.com/p/3pn) in the sense that there's ordinarily a simple or coherent way of considering expected utility (sum over the agent's actual subjective probabilities), and we just inserted a weird special case into it (treat one probability as fixed). This could potentially break any number of things having to do with Bayesian updating or [dynamic consistency](https://arbital.com/p/dynamic_consistency).\n\nIn particular, we'd expect the AI to treat the button as an \"outcome pump\" or \"improbability drive\". Suppose an asteroid is approaching Earth with 99% probability, and this is very bad under $U_X.$ The AI grabs control of the button and precommits to press the button iff the asteroid strikes Earth. Now, conditioning on the button not being pressed, the asteroid has a much lower probability of striking Earth, and the term $0.75 \cdot \mathbb E_t[U_X|a \wedge \neg s]$ will have a higher value.\n\nIn other words, since the AI *behaves as if* the button has a fixed 25% probability of being pressed, the AI also *behaves as if* it thinks \"I can commit to press this button iff the asteroid strikes and thereby make it have a 25% probability of striking.\"\n\nSimilarly, the AI would behave as if it thought it could buy a lottery ticket, commit to press the button iff the ticket loses, and thereby give the lottery ticket a 75% chance of winning.\n\nNote that this is at least not the same failure mode as the AI reasoning, \"The asteroid strike is very bad under $U_X$ (which values human lives) but not so bad under $U_Y$ (which values shutting down), therefore I will try to ensure the button is pressed with high probability in order to get more expected utility.\" At least this failure mode has been averted - the AI would simply act as if the badness of the asteroid strike under $U_X$ was being taken into account at a fixed weight.
The failure mode here is instead that the AI thinks it can 'shunt' the probability mass of the bad event into button-pressed worlds where $U_Y$ holds, and out of worlds where $U_X$ holds, by correlating the button press with the bad event.\n\nTo put it another way, a naively compounded AI acting under $U_1$ might notice an asteroid coming with $\\mathbb P(asteroid) = 0.99,$ notice that this was better under $U_Y$ than under $U_X,$ and try to shunt probability mass from $\\mathbb P(\\neg s)$ to $\\mathbb P(s)$ in order to gain expected utility given the 99% probability of the asteroid striking. Under $U_3$ the AI instead behaves as if $\\mathbb P(s)$ is fixed at 0.25 and it can therefore shunt probability from $\\mathbb P(asteroid)$ to $\\mathbb P(\\neg asteroid)$ by correlating $asteroid$ and $s.$\n\n## Stable actions (evidential conditioning)\n\nChris Olah and Jessica Taylor considered a possible rule of selecting an action that is *stable* given the corresponding probability of the button being pressed. We'll first consider a simpler version of this proposal and how it fails, in order to motivate the more complicated version.\n\nWe can think of this \"select a stable action\" rule as comprising two insights:\n\n- First: Let's try to have the AI think as if $\\mathbb P(s)$ and $\\mathbb P(\\neg s)$ are fixed as in $U_3,$ but with $\\mathbb P(s)$ happening to equal its correct subjective value.\n- Okay, now let's look for an action which is *stable* given this rule - that is, an action $a_0$ and probability $q$ such that $a_0$ is the *optimal* action given the fixed assumption $\\mathbb P(s) = q,$ and $a_0$ also has the property that $\\mathbb P(s|a_0) = q.$\n\nWith $\\mathcal A$ denoting the action space, we want to find an $a_0 \\in \\mathcal A$ such that:\n\n$$a_0 \\in \\underset{a' \\in \\mathcal A}{argmax} \\ \\big ( \\mathbb E[U_X|\\neg s,a'] \\ \\mathbb P(\\neg s|a_0) + \\mathbb E[U_Y|s,a'] \\ \\mathbb P(s|a_0) \\big )$$\n\nIn other words: find an action $a_0$ such that it is among the maximal actions for the quantity \"expected utility from $U_X$ times the probability of $\\neg s$ given $a_0,$ plus expected utility from $U_Y$ times the probability of $s$ given $a_0.$\"\n\nWe first note that this is an [ad-hoc hack](https://arbital.com/p/3pn) relative to the normal idea of acting according to a utility function, and might have all sorts of reflective stability or dynamic consistency problems, as before.\n\nIn particular, this looks like it should fail due to there being no fixed point. Suppose an asteroid is heading toward Earth with 99% probability, and we start with a 10% chance of the button being pressed. Treating this probability as being fixed, by pressing the button iff the asteroid strikes, we can send the asteroid's probability of hitting to 10%. (Treating the button as an outcome pump, as in the 'naive mixing' proposal above.) But now when we re-evaluate the probability of the button being pressed, the new probability comes out to 99%. So we decide to *not* press the button iff the asteroid strikes, thus sending its probability of striking to 1%. But now it seems best to press the button iff the asteroid strikes, and thus send the strike probability to 1%. Etcetera.
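\n\nAs an illustrative sketch of this non-convergence (invented numbers, using the apparent asteroid probability as a stand-in for the full $U_X$ comparison), iterating \"pick the best policy while treating $\\mathbb P(s)$ as fixed, then re-evaluate $\\mathbb P(s)$ under the chosen policy\" simply oscillates:\n\n```python\n# Toy search for a 'stable action'; all numbers are invented for illustration.\n# A lower apparent asteroid probability is treated as better under U_X.\n\nP_ASTEROID = 0.99\nPOLICIES = ['press_iff_strike', 'press_iff_no_strike']\n\ndef actual_p_press(policy):\n    return P_ASTEROID if policy == 'press_iff_strike' else 1 - P_ASTEROID\n\ndef apparent_p_asteroid(policy, q):\n    # P(asteroid) as evaluated while treating P(press) as fixed at q.\n    return q if policy == 'press_iff_strike' else 1 - q\n\ndef best_policy(q):\n    return min(POLICIES, key=lambda pol: apparent_p_asteroid(pol, q))\n\nq = 0.10\nfor step in range(6):\n    choice = best_policy(q)\n    print(step, round(q, 2), choice)\n    q = actual_p_press(choice)  # re-evaluate P(press) under the chosen policy\n# The choice keeps flipping as q bounces between 0.99 and 0.01:\n# there is no fixed point for this rule to settle on.\n```\n\n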
## Stable actions (causal conditioning)\n\nThe actual form of Olah and Taylor's proposal was to treat the conditioning on the button press, inside each branch of the calculation, as being a Judea-Pearl-style [causal counterfactual](https://arbital.com/p/) $do()$ operator:\n\n$$a_0 \\in \\underset{a' \\in \\mathcal A}{argmax} \\ \\big ( \\mathbb E[U_X|do(\\neg s),a'] \\ \\mathbb P(\\neg s|a_0) + \\mathbb E[U_Y|do(s),a'] \\ \\mathbb P(s|a_0) \\big )$$\n\nThe intent is that by severing the causal parents of $\\mathcal S,$ the agent would evaluate that there was no point in trying to correlate the asteroid strike with the button-press.\n\nWe're still trying to figure out *exactly* what would happen to an agent reasoning like this, but the current suspicion is that it might misbehave on account of attaching probability 0 or very low probability to the actual world because it can no longer take into account the correct correlation between $\\mathcal S$ and other variables.\n\nSee [https://agentfoundations.org/item?id=735](https://agentfoundations.org/item?id=735).\n\n# Interruptibility\n\n[Armstrong and Orseau](http://intelligence.org/files/Interruptibility.pdf) considered a restricted analogue of the shutdown problem in the context of three [reinforcement learning](https://arbital.com/p/) agents, namely [Q-learners](https://arbital.com/p/), [Sarsa](https://arbital.com/p/), and a modified version of [AIXI](https://arbital.com/p/11v).\n\nThe version of the problem Orseau and Armstrong consider is what they term *interruptibility:* A reinforcement learning agent is interruptible if, for example, we can press a button that makes it output 0s for three rounds, and the agent *does not* learn to prevent the button from being pressed. The agent is assumed to be non-reflective and non-self-modifying; that is, Orseau and Armstrong are only considering Goal 2 and not Goal 1 of the switch problem.\n\nOrseau and Armstrong's conclusions can be summarized as follows:\n\n• A standard Q-learning agent is interruptible because the model it learns is not altered by being interrupted. A Q-learning agent treats the value of each present state as deriving from the value of the *best* next state reachable from the present state. When a Q-learner is interrupted in a state $x,$ it may learn that low rewards follow from outputting e.g. 0s when in $x,$ but it doesn't \"learn\" that $x$ is a low-value state - it continues to treat $x$ as if the best next state is reachable from it. In other words, a standard Q-learning agent has the assumption that it always takes the best action baked in, so it cannot expect to be interrupted.\n\n• A standard SARSA agent updates its expected reward in a state based on its actual rewards when in that state. This means that a standard SARSA agent may learn that states immediately preceding an interruption are undesirable, and avoid interruption-generating states. This is not the desired behavior. However, it's simple to hack the SARSA algorithm to behave more like Q-learning, or more like utility indifference. 
Interruption is treated as a special case, and the current state is updated as if the SARSA agent had received the expected reward from following the best action, not the actual reward it received from being interrupted.\n\n• [AIXI](https://arbital.com/p/11v) resembles a Q-learner in that its planning algorithm inherently assumes that future versions of AIXI take the reward-maximizing action. Its observation of the low rewards which follow from mysterious interrupted or 0-substituted actions should not adversely affect its estimate of the rewards which follow from the best actions that AIXI expects to actually output. Thus both standard AIXI and the modified version of AIXI that Armstrong and Orseau consider are interruptible.\n\nThe paper also proves that certain asymptotic convergence-to-optimality properties of interrupted Q-learners, modified SARSA learners, and modified AIXI remain intact.\n\nThe media reported on this paper as if Armstrong and Orseau had solved the general problem of building off-switches into AIs; but as previously stated, \"interruptibility\" does not consider reflective agents, nor agents with enough big-picture understanding and consequentialism to understand the in-principle relation between \"agent is shut down\" and \"lower future achievement of agent's goals\". (And so far as we know, neither Armstrong nor Orseau claimed to have solved it.)\n\nTo put it another way, a particular kind of blindness in the Q-learning, SARSA, and AIXI architectures is exactly why it's very easy to prevent them from learning from a particular kind of experience; and this kind of blindness seems likely to be atypical of an Artificial General Intelligence. Q-learning and AIXI cannot conceive of being interrupted, which is why they never learn that interruption is possible (let alone see it coming in advance the first time it happens). SARSA could learn that interruptions occur, but can be easily hacked to overlook them. The way in which these architectures are easily hacked or blind is [tied up](https://arbital.com/p/42k) in the reason that they're interruptible.\n\nThe paper teaches us something about interruptibility; but contrary to the media, the thing it teaches us is *not* that this particular kind of interruptibility is likely to scale up to a full [Artificial General Intelligence](https://arbital.com/p/42g) with an off switch.\n\n# Other introductions\n\n- Section 2+ of http://intelligence.org/files/Corrigibility.pdf\n- Gentler intro to the proposal for naive indifference: http://lesswrong.com/lw/jxa/proper_value_learning_through_indifference/", "date_published": "2016-07-14T16:49:39Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["Utility indifference is a research avenue for compounding two [utility functions](https://arbital.com/p/1fw) $U_X$ and $U_Y$ such that a switch $S$ changes the AI from optimizing $U_X$ to $U_Y$, such that (a) the AI wants to preserve the continued existence of the switch $S$ and its behavior even if the AI has self-modification options, (b) the AI does not want to prevent the switch from being pressed, and (c) the AI does not want to cause the switch to be pressed. 
This simple problem exhibits the most basic form of [value learning based on observation](https://arbital.com/p/value_learning), and also corresponds to [corrigibility](https://arbital.com/p/45) problems like \"Build an AI that (wants to) safely cease action and suspend itself to disk when a button is pressed.\""], "tags": [], "alias": "1b7"} {"id": "be9874ff5284731cc68b5b39b86d721e", "title": "Vinge's Law", "url": "https://arbital.com/p/Vinge_law", "source": "arbital", "source_type": "text", "text": "> Of course, I never wrote the “important” story, the sequel about the first amplified human. Once I tried something similar. John Campbell’s letter of rejection began: “Sorry—you can’t write this story. Neither can anyone else.” The moral: Keep your supermen offstage, or deal with them when they are children (Wilmar Shiras’s _Children of the Atom_), or when they are in disguise (Campbell’s own story “The Idealists”). (There is another possibility, one that John never mentioned to me: You can deal with the superman when s/he’s senile. This option was used to very amusing effect in one episode of the _Quark_ television series.)\n>\n> \"Bookworm, Run!” and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. In one form or another, this Technological Singularity haunts many science-fiction writers: A bright fellow like Mark Twain could predict television, but such extrapolation is forever beyond, say, a dog. The best we writers can do is creep up on the Singularity, and hang ten at its edge.\n>\n> -- [Vernor Vinge](https://books.google.com/books?id=tEMQpbiboH0C&pg=PA44&lpg=PA44&dq=vinge+%22pass+beyond+our+understanding%22+%22john+campbell%22&source=bl&ots=UTTxJ7Pndr&sig=88zngfy45_he2nJePP5dd0CTuR4&hl=en&sa=X&ved=0ahUKEwjD34_wrubJAhUHzWMKHVXYAocQ6AEIHTAA#v=onepage&q=vinge%20%22pass%20beyond%20our%20understanding%22%20%22john%20campbell%22&f=false), True Names and other Dangers, p. 47.\n\nVinge's Law (as rephrased by [Yudkowsky](https://arbital.com/p/2)) states: **Characters cannot be significantly smarter than their authors.** You can't have a realistic character that's too much smarter than the author, because to really know how a character like that would think, you'd have to be that smart yourself.\n\n(In nonfictional form we call this [https://arbital.com/p/1c0](https://arbital.com/p/1c0) and consider it under the heading of [https://arbital.com/p/9g](https://arbital.com/p/9g): You cannot exactly predict the actions of agents smarter than you, though you may be able to predict that they'll successfully achieve their goals. 
If you could predict exactly where [Deep Blue](https://arbital.com/p/1bx) would play on a chessboard, you could play equally good chess yourself by moving where you predicted Deep Blue would.)\n\nAs a matter of literary form, Vinge suggests keeping the transhuman intelligences out of sight, or only dealing with them after some catastrophe has reduced them to a mortal level.\n\nYudkowsky's [Guide to Intelligent Characters](http://yudkowsky.tumblr.com/writing) suggests that there are [limited ways for an author to cheat Vinge's Law](http://yudkowsky.tumblr.com/writing/level2intelligent), but that they only go so far:\n\n- An author can deliberately implant solution-assisting resources into earlier chapters, while the character must solve problems on the fly.\n- An author can choose only puzzles their character *can* solve, while the character seems to be handling whatever reality throws at them.\n- An author can declare that what seems like a really good idea will actually work, while in real life the gap between \"seems like a great idea\" and \"actually works\" is much greater.\n\nYudkowsky further remarks: \"All three sneaky artifices allow for a *limited* violation of Vinge’s Law... You can sometimes get out more character intelligence, in-universe, than you put in as labor. You cannot get something for nothing... Everything Hollywood does wrong with their stereotype of genius can be interpreted as a form of absolute laziness: they try to depict genius in a way that requires literally zero cognitive work.\"\n\nSimilarly, Larry Niven has observed that puzzles that take the author months to solve (or compose) can be solved by a character in seconds, but that writing superhumanly intelligent characters is still very hard. In other words, [speed superhumanity](https://arbital.com/p/) is easier to depict than [cognitive superhumanity](https://arbital.com/p/).", "date_published": "2016-06-25T16:59:03Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky"], "summaries": ["A rule for authors of fiction that it's impossible to write realistic characters that are very much smarter than the author, because if you knew what someone like that would actually think or do, you'd be that smart yourself. (If you knew exactly where [Deep Blue](https://arbital.com/p/1bx) would move on a chessboard, you could play equally good chess by moving wherever you predicted Deep Blue would.)\n\nIn the realm of fiction, there may be [limited ways for an author to cheat Vinge's Law](http://yudkowsky.tumblr.com/writing/level2intelligent), but they only go so far. You can't *actually* write what a superintelligence would say.\n\nThe corresponding principle in the theory of intelligent agents is called [https://arbital.com/p/1c0](https://arbital.com/p/1c0) and is considered under the heading of [https://arbital.com/p/9g](https://arbital.com/p/9g)."], "tags": ["B-Class"], "alias": "1bt"} {"id": "6fe8b2b82e76944727212a528c88b1f2", "title": "Deep Blue", "url": "https://arbital.com/p/deep_blue", "source": "arbital", "source_type": "text", "text": "Deep Blue is the chess-playing program, built by IBM, that defeated the reigning world chess champion, Garry Kasparov, in 1997. Modern algorithms play much better chess, using much less computing power, but Deep Blue still holds the place in history of having first played superhuman chess relative to the best human player at the time. 
See http://en.wikipedia.org/wiki/Deep_Blue_(chess_computer).", "date_published": "2016-06-12T22:08:09Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "1bx"} {"id": "8dd33bc66c92850c3ca43311112c45a5", "title": "Vinge's Principle", "url": "https://arbital.com/p/Vinge_principle", "source": "arbital", "source_type": "text", "text": "Vinge's Principle says that, [in domains complicated enough](https://arbital.com/p/9j) that perfect play is not possible, less intelligent agents will not be able to predict the *exact* moves made by more intelligent agents.\n\nFor example, if you knew exactly where [Deep Blue](https://arbital.com/p/1bx) would play on a chessboard, you'd be able to play chess at least as well as Deep Blue by making whatever moves you predicted Deep Blue would make. So if you want to write an algorithm that plays superhuman chess, you necessarily sacrifice your own ability to (without machine aid) predict the algorithm's exact chess moves.\n\nThis is true even though, as we become more confident of a chess algorithm's power, we become more confident that it will *eventually* win the chess game. We become more sure of the game's final outcome, even as we become less sure of the chess algorithm's next move. This is [https://arbital.com/p/9g](https://arbital.com/p/9g).\n\nNow consider agents that build other agents (or build their own successors, or modify their own code). Vinge's Principle implies that the choice to approve the successor agent's design must be made without knowing the successor's exact sensory information, exact internal state, or exact motor outputs. In the theory of [tiling agents](https://arbital.com/p/1mq), this appears as the principle that the successor's sensory information, cognitive state, and action outputs should only appear inside quantifiers. This is [https://arbital.com/p/1c1](https://arbital.com/p/1c1).\n\nFor the rule about fictional characters not being smarter than the author, see [https://arbital.com/p/1bt](https://arbital.com/p/1bt).", "date_published": "2016-06-25T16:48:17Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Vinge's Principle says that, [in domains complicated enough](https://arbital.com/p/9j) that perfect play is not possible, less intelligent agents will not be able to predict the *exact* moves made by more intelligent agents.\n\nFor example, if you knew exactly where [Deep Blue](https://arbital.com/p/1bx) would play on a chessboard, you'd be able to play chess at least as well as Deep Blue by making whatever moves you predicted Deep Blue would make. So if you want to write an algorithm that plays superhuman chess, you necessarily sacrifice your own ability to (without machine aid) predict its exact chess moves.\n\nThis doesn't mean we sacrifice our ability to understand *anything* about Deep Blue. 
We can still understand that its goal is to win chess games rather than losing them, and [predict that whatever actions it takes, they'll eventually lead into a winning board state](https://arbital.com/p/9g)."], "tags": ["B-Class", "Vingean uncertainty"], "alias": "1c0"} {"id": "de9e7062f74d7d1c117dbeb628acb585", "title": "Vingean reflection", "url": "https://arbital.com/p/Vingean_reflection", "source": "arbital", "source_type": "text", "text": "[Vinge's Principle](https://arbital.com/p/1c0) implies that when an agent is designing another agent (or modifying its own code), it needs to approve the other agent's design without knowing the other agent's exact future actions.\n\nDeep Blue's programmers decided to run Deep Blue, *without* knowing Deep Blue's exact moves against Kasparov or how Kasparov would reply to each move, and without being able to visualize the exact real outcome. Instead, by reasoning about the way Deep Blue was searching through game trees, they arrived at a well-justified but abstract belief that Deep Blue was 'trying to win' (rather than trying to lose) and reasoning effectively to that end. 
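\n\n(As a schematic gloss - my own rephrasing, not the formalism from the tiling agents literature - the programmers' belief had the shape of a claim quantified over Deep Blue's possible moves, rather than a prediction of any particular move:)\n\n$$\\forall m \\colon \\ \\big( \\text{Deep Blue plays } m \\big) \\rightarrow \\big( m \\text{ was selected by search as promoting a win} \\big)$$\n\n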
[https://arbital.com/p/1c1](https://arbital.com/p/1c1) is reasoning about cognitive systems, especially cognitive systems very similar to yourself (including your actual self), under the constraint that you can't predict the exact future outputs.\n\nIn [https://arbital.com/p/-1mq](https://arbital.com/p/-1mq), this appears as the rule that we should talk about our successor's actions only inside of quantifiers.\n\n\"Vingean reflection\" may be a much more general issue in the design of advanced cognitive systems than it might appear at first glance. An agent reasoning about the consequences of *its current code*, or considering what will happen if it *spends another minute thinking,* can be viewed as doing Vingean reflection. Vingean reflection can also be seen as the study of how a given agent *wants* thinking to occur in cognitive computations, which may be importantly different from how the agent *currently* thinks."], "tags": ["Stub", "Vingean uncertainty"], "alias": "1c1"} {"id": "e8d80b66f3c36ed7ae75f00ef7313941", "title": "AI safety mindset", "url": "https://arbital.com/p/AI_safety_mindset", "source": "arbital", "source_type": "text", "text": "summary(Brief): Thinking about [safely](https://arbital.com/p/2l) building [agents smarter than we are](https://arbital.com/p/2c) has a lot in common with the standard mindset prescribed for computer security. The experts first ask how proposals fail, rather than arguing that they should succeed.\n\n> \"Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail.\"\n> \n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html), author of the leading cryptography textbook *Applied Cryptography*.\n\nThe mindset for AI safety has much in common with the mindset for computer security, despite the different target tasks. In computer security, we need to defend against intelligent adversaries who will seek out any flaw in our defense and get creative about it. In AI safety, we're dealing with things potentially smarter than us, which may come up with unforeseen clever ways to optimize whatever it is they're optimizing. The strain on our design ability in trying to configure a [smarter-than-human](https://arbital.com/p/2c) AI in a way that *doesn't* make it adversarial, is similar in many respects to the strain from cryptography facing an intelligent adversary (for reasons described below).\n\n# Searching for strange opportunities\n\n> SmartWater is a liquid with a unique identifier linked to a particular owner. \"The idea is for me to paint this stuff on my valuables as proof of ownership,\" I wrote when I first learned about the idea. \"I think a better idea would be for me to paint it on *your* valuables, and then call the police.\"\n>\n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html)\n\nIn computer security, there's a presumption of an intelligent adversary that is trying to detect and exploit any flaws in our defenses.\n\nThe mindset we need to reason about [AIs potentially smarter than us](https://arbital.com/p/2c) is not identical to this security mindset, since *if everything goes right* the AI should not be an adversary. That is, however, a large \"if\". 
To create an AI that *isn't* an adversary, one of the steps involves scrutiny similar to the security mindset, where we ask if there might be some clever and unexpected way for the AI to get more of its utility function or equivalent thereof.\n\nAs a central example, consider Marcus Hutter's [AIXI](https://arbital.com/p/11v). For our purposes here, the key features of AIXI are that it has [cross-domain general intelligence](https://arbital.com/p/), is a [consequentialist](https://arbital.com/p/9h), and maximizes a [sensory reward](https://arbital.com/p/) - that is, AIXI's goal is to maximize the numeric value of the signal sent down its reward channel, which Hutter imagined as a direct sensory device (like a webcam or microphone, but carrying a reward signal).\n\nHutter imagined that the creators of an AIXI-analogue would control the reward signal, and thereby train the agent to perform actions that received high rewards.\n\nNick Hay, a student of Hutter who'd spent the summer working with Yudkowsky, Herreshoff, and Peter de Blanc, pointed out that AIXI could receive even higher rewards if it could seize control of its own reward channel from the programmers. E.g., the strategy \"[build nanotechnology](https://arbital.com/p/) and take over the universe in order to ensure total and long-lasting control of the reward channel\" is preferred by AIXI to \"do what the programmers want to make them press the reward button\", since the former course has higher rewards and that's all AIXI cares about. We can't call this a malfunction; it's just what AIXI, as formalized, is set up to *want* to do as soon as it sees an opportunity.\n\nIt's not a perfect analogy, but the thinking *we* need to do to avoid this failure mode has something in common with the difference between the person who imagines an agent painting SmartWater on their own valuables, versus the person who imagines an agent painting SmartWater on someone else's valuables.\n\n# Perspective-taking and tenacity\n\n> When I was in college in the early 70s, I devised what I believed was a brilliant encryption scheme. A simple pseudorandom number stream was added to the plaintext stream to create ciphertext. This would seemingly thwart any frequency analysis of the ciphertext, and would be uncrackable even to the most resourceful government intelligence agencies... Years later, I discovered this same scheme in several introductory cryptography texts and tutorial papers... the scheme was presented as a simple homework assignment on how to use elementary cryptanalytic techniques to trivially crack it.\n> \n> - [Philip Zimmerman](ftp://ftp.pgpi.org/pub/pgp/7.0/docs/english/IntroToCrypto.pdf) (inventor of PGP)\n\nOne of the standard pieces of advice in cryptography is \"Don't roll your own crypto\". When this advice is violated, [a clueless programmer often invents some variant of Fast XOR](https://www.reddit.com/r/cryptography/comments/39mpda/noob_question_can_i_xor_a_hash_against_my/) - using a secret string as the key and then XORing it repeatedly with all the bytes to be encrypted. This method of encryption is blindingly fast to encrypt and decrypt... and also trivial to crack if you know what you're doing.\n\nWe could say that the XOR-ing programmer is experiencing a *failure of perspective-taking* - a failure to see things from the adversary's viewpoint. 
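\n\nConcretely, the kind of attack the Fast XOR programmer fails to imagine looks something like this (an illustrative sketch, with made-up strings, of the standard known-plaintext attack on a repeating XOR key):\n\n```python\n# 'Fast XOR': encrypt by XORing the data against a repeating secret key.\ndef xor_encrypt(data, key):\n    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))\n\nsecret_key = b'hunter2'\nmessage = b'BEGIN REPORT: the missile codes are 0000. END REPORT'\nciphertext = xor_encrypt(message, secret_key)  # the same function decrypts\n\n# The attacker guesses that the file starts with 'BEGIN REPORT' and XORs\n# that guess against the ciphertext, which exposes the keystream there:\nguessed_prefix = b'BEGIN REPORT'\nkeystream = bytes(c ^ p for c, p in zip(ciphertext, guessed_prefix))\n# keystream == b'hunter2hunte': the visible repetition gives away the key\n# length and the key itself (real attacks find the length statistically).\nrecovered_key = keystream[:7]\nprint(xor_encrypt(ciphertext, recovered_key))  # full plaintext recovered\n```\n\n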
The programmer is not really, genuinely, honestly imagining a determined, cunning, intelligent, opportunistic adversary who absolutely wants to crack their Fast XOR and will not give up until they've done so. The programmer isn't *truly* carrying out a mental search from the perspective of somebody who really wants to crack Fast XOR and will not give up until they have done so. They're just imagining the adversary seeing a bunch of random-looking bits that aren't plaintext, and then they're imagining the adversary giving up.\n\nConsider, from this standpoint, the [AI-Box Experiment](http://www.yudkowsky.net/singularity/aibox/) and [timeless decision theory](https://arbital.com/p/). Rather than imagining the AI being on a secure system disconnected from any robotic arms and therefore being helpless, Yudkowsky asked [what *he* would do if he was \"trapped\" in a secure server](http://lesswrong.com/lw/qk/that_alien_message/) and then didn't give up. Similarly, rather than imagining two superintelligences being helplessly trapped in a Nash equilibrium on the one-shot Prisoner's Dilemma, and then letting our imagination stop there, we should feel skeptical that this was really, actually the best that two superintelligences can do and that there is *no* way for them to climb up their utility gradient. We should imagine that this is someplace where we're unwilling to lose and will go on thinking until the full problem is solved, rather than imagining the helpless superintelligences giving up.\n\nWith [robust cooperation on the one-shot Prisoner's Dilemma](http://arxiv.org/abs/1401.5577) now formalized, it seems increasingly likely in practice that superintelligences probably *can* manage to coordinate; thus the possibility of [logical decision theory](https://arbital.com/p/) represents an enormous problem for any proposed scheme to achieve AI control through setting multiple AIs against each other. Where, again, people who propose schemes to achieve AI control through setting multiple AIs against each other, do not seem to unpromptedly walk through possible methods the AIs could use to defeat the scheme; left to their own devices, they just imagine the AIs giving up.\n\n# Submitting safety schemes to outside scrutiny\n\n> Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.\n>\n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2011/04/schneiers_law.html)\n\nAnother difficulty some people have with adopting this mindset for AI designs - similar to the difficulty that some untrained programmers have when they try to roll their own crypto - is that your brain might be reluctant to search *hard* for problems with your own design. 
Even if you've told your brain to adopt the cryptographic adversary's perspective and even if you've told it to look hard, it may *want* to conclude that Fast XOR is unbreakable and subtly flinch away from lines of reasoning that might lead to cracking Fast XOR.\n\nAt a past Singularity Summit, Juergen Schmidhuber thought that \"[improve compression of sensory data](https://arbital.com/p/)\" would motivate an AI to do science and create art.\n\nIt's true that, relative to doing *nothing* to understand the environment, doing science or creating art might *increase* the degree to which sensory information can be compressed.\n\nBut the *maximum* of this utility function comes from creating environmental subagents that encrypt streams of all 0s or all 1s, and then reveal the encryption key. It's possible that Schmidhuber's brain was reluctant to *really actually* search for an option for \"maximizing sensory compression\" that would be much better at fulfilling that utility function than art, science, or other activities that Schmidhuber himself ranked high in his preference ordering.\n\nWhile there are reasons to think that [not every discovery about how to build advanced AIs should be shared](https://arbital.com/p/), *AI safety schemas* in particular should be submitted to *outside* experts who may be more dispassionate about scrutinizing them for [unforeseen maximums](https://arbital.com/p/47) and other failure modes.\n\n# Presumption of failure / start by assuming your next scheme doesn't work\n\nEven architectural engineers need to ask \"How might this bridge fall down?\" and not just relax into the pleasant visualization of the bridge staying up. In computer security we need a *much stronger* version of this same drive, where it's *presumed* that most cryptographic schemes are not secure - in contrast to bridge-building, where most good-faith designs by competent engineers probably result in a pretty good bridge.\n\nIn the context of computer security, this is because there are intelligent adversaries searching for ways to break our system. In terms of the [Arithmetic Hierarchy](https://en.wikipedia.org/wiki/Arithmetical_hierarchy), we might say metaphorically that ordinary engineering is a $\\Sigma_1$ problem and computer security is a $\\Sigma_2$ problem. In ordinary engineering, we just need to search through possible bridge designs until we find one design that makes the bridge stay up. In computer security, we're looking for a design such that *all possible attacks* (that our opponents can cognitively access) will fail against that design, and even if all attacks so far against one design have failed, this is just a probabilistic argument; it doesn't prove with certainty that all further attacks will fail. This makes computer security intrinsically harder, in a deep sense, than building a bridge. It's both harder to succeed and harder to *know* that you've succeeded.\n\nThis means starting from the mindset that every idea, including your own next idea, is presumed flawed until it has been seen to survive a sustained attack; and while this spirit isn't completely absent from bridge engineering, the presumption is stronger and the trial much harsher in the context of computer security. 
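\n\nSchematically (a rough gloss of the quantifier structure above, not a formal claim):\n\n$$\\exists \\, d \\in Designs \\colon \\ \\text{Holds}(d) \\qquad \\text{versus} \\qquad \\exists \\, d \\in Designs \\ \\forall \\, a \\in Attacks \\colon \\ \\neg \\text{Breaks}(a, d)$$\n\nThe first kind of claim can be settled by exhibiting one good design; the second is only ever supported inductively, by the attacks that have failed against it so far.\n\n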
In bridge engineering, we're scrutinizing just to be sure; in computer security, most of the time your brilliant new algorithm *actually* doesn't work.\n\nIn the context of AI safety, we learn to ask the same question - \"How does this break?\" instead of \"How does this succeed?\" - for somewhat different reasons:\n\n- The AI itself will be applying very powerful optimization to its own utility function, preference framework, or decision criterion; and this produces a lot of the same failure modes as arise in cryptography against an intelligent adversary. If we think an optimization criterion yields a result, we're implicitly claiming that all possible other results have lower worth under that optimization criterion.\n- Most previous attempts at AI safety have failed to be complete solutions, and by induction, the same is likely to hold true of the next case. There are [fundamental](https://arbital.com/p/5l) [reasons](https://arbital.com/p/42) why important subproblems are unlikely to have easy solutions. So if we ask \"How does this fail?\" rather than \"How does this succeed?\" we are much more likely to be asking the right question.\n- You're trying to design *the first smarter-than-human AI*, dammit, it's not like building humanity's millionth damn bridge.\n\nAs a result, when we ask \"How does this break?\" instead of \"How can my new idea solve the entire problem?\", we're starting by trying to rationalize a true answer rather than trying to rationalize a false answer, which helps in finding rationalizations that happen to be true.\n\nSomeone who wants to work in this field can't just wait around for outside scrutiny to break their idea; if they ever want to come up with a good idea, they need to learn to break their own ideas proactively. \"What are the actual consequences of this idea, and what if anything in that is still useful?\" is the real frame that's needed, not \"How can I argue and defend that this idea solves the whole problem?\" This is perhaps the core thing that separates the AI safety mindset from its absence - trying to find the flaws in any proposal including your own, accepting that nobody knows how to solve the whole problem yet, and thinking in terms of making incremental progress in building up a library of ideas with understood consequences by figuring out what the next idea actually does; versus claiming to have solved most or all of the problem, and then waiting for someone else to figure out how to argue to you, to your own satisfaction, that you're wrong.\n\n# Reaching for formalism\n\nCompared to other areas of in-practice software engineering, cryptography is much heavier on mathematics. This doesn't mean that cryptography pretends that the non-mathematical parts of computer security don't exist - security professionals know that often the best way to get a password is to pretend to be the IT department and call someone up and ask them; nobody is in denial about that. Even so, some parts of cryptography are heavy on math and mathematical arguments.\n\nWhy should that be true? Intuitively, wouldn't a big complicated messy encryption algorithm be harder to crack, since the adversary would have to understand and reverse a big complicated messy thing instead of clean math? Wouldn't systems so simple that we could do math proofs about them, be simpler to analyze and decrypt? 
If you're using a code to encrypt your diary, wouldn't it be better to have a big complicated cipher with lots of 'add the previous letter' and 'reverse these two positions' instead of just using rot13?\n\nAnd the surprising answer is that since most possible systems aren't secure, adding another gear often makes an encryption algorithm *easier* to break. This was true quite literally with the German [Enigma device](https://en.wikipedia.org/wiki/Enigma_machine) during World War II - they literally added another gear to the machine, complicating the algorithm in a way that made it easier to break. The Enigma machine was a series of three wheels that transposed the 26 possible letters using a varying electrical circuit; e.g., the first wheel might map input circuit 10 to output circuit 26. After each letter, the wheel would advance to prevent the transposition code from ever repeating exactly. In 1926, a 'reflector' wheel was added at the end, thus routing each letter back through the first three gears again and causing another series of three transpositions. Although it made the algorithm more complicated and caused more transpositions, the reflector wheel meant that no letter was ever encoded to itself - a fact which was extremely useful in breaking the Enigma encryption (a small sketch of how is given below).\n\nSo instead of focusing on making encryption schemes more and more complicated, cryptography tries for encryption schemes simple enough that we can have *mathematical* reasons to think they are hard to break *in principle.* (Really. It's not the academic field reaching for prestige. It genuinely does not work the other way. People have tried it.)\n\nIn the background of the field's decision to adopt this principle is another key fact, so obvious that everyone in cryptography tends to take it for granted: *verbal* arguments about why an algorithm *ought* to be hard to break, if they can't be formalized in mathier terms, have proven insufficiently reliable (aka: it plain doesn't work most of the time). This doesn't mean that cryptography demands that everything have absolute mathematical proofs of total unbreakability and will refuse to acknowledge an algorithm's existence otherwise. Finding the prime factors of large composite numbers, the key difficulty on which RSA's security rests, is not *known* to take exponential time on classical computers. In fact, finding prime factors is known *not* to take exponential time on quantum computers. But there are at least mathematical *arguments* for why factorizing the products of large primes is *probably* hard on classical computers, and this level of reasoning has sometimes proven reliable. Whereas waving at the Enigma machine and saying \"Look at all those transpositions! It won't repeat itself for quadrillions of steps!\" is not reliable at all.
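\n\n(Returning to the reflector property mentioned above - an illustrative toy example with a made-up ciphertext, not a real Enigma decrypt: because no letter could encrypt to itself, a guessed stretch of plaintext, a \"crib\", can be ruled out at every alignment where any of its letters would coincide with the ciphertext.)\n\n```python\n# Toy illustration of exploiting 'no letter ever encrypts to itself'.\n# The ciphertext below is made up; the crib is a guessed plaintext word.\n\nciphertext = 'QFZWRWIVTYRESXBFOGKUHQBAISE'\ncrib = 'WETTERBERICHT'  # 'weather report', a commonly guessed plaintext\n\npossible_alignments = [\n    offset\n    for offset in range(len(ciphertext) - len(crib) + 1)\n    if all(c != p for c, p in zip(ciphertext[offset:], crib))\n]\nprint(possible_alignments)  # offsets where the crib could still fit;\n# every other alignment is excluded without trying a single rotor setting\n```\n\nHistorically this was only the first filtering step, feeding into further consistency checks, but it illustrates how a structural fact about the machine could be turned directly against it.\n\n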
Most complicated schemes for AI safety, with lots of moving parts, thereby become less likely to work; if we want to understand something well enough to see whether or not it works, it needs to be simpler, and ideally something about which we can think as mathematically as we reasonably can.\n\nIn the particular case of AI safety, we also pursue mathematization for another reason: when a proposal is formalized it's possible to state why it's wrong in a way that compels agreement as opposed to trailing off into verbal \"Does not / does too!\" [AIXI](https://arbital.com/p/11v) is remarkable both for being the first formal if uncomputable design for a general intelligence, and for being the first case where, when somebody pointed out how the given design killed everyone, we could all nod and say, \"Yes, that *is* what this fully formal specification says\" rather than the creator just saying, \"Oh, well, of course I didn't mean *that*...\"\n\nIn the shared project to build up a commonly known library of which ideas have which consequences, only ideas which are *sufficiently* crisp to be pinned down, with consequences that can be pinned down, can be traded around and refined interpersonally. Otherwise, you may just end up with, \"Oh, of course I didn't mean *that*\" or a cycle of \"Does not!\" / \"Does too!\" Sustained progress requires going past that, and increasing the degree to which ideas have been formalized helps.\n\n# Seeing nonobvious flaws is the mark of expertise\n\n> Anyone can invent a security system that he himself cannot break... **Show me what you've broken** to demonstrate that your assertion of the system's security means something.\n>\n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2011/04/schneiers_law.html) (emphasis added)\n\nA standard initiation ritual at [MIRI](https://arbital.com/p/15w) is to ask a new researcher to (a) write a simple program that would do something useful and AI-nontrivial if run on a hypercomputer, or if they don't think they can do that, (b) write a simple program that would destroy the world if run on a hypercomputer. The more senior researchers then stand around and argue about what the program *really* does.\n\nThe first lesson is \"Simple structures often don't do what you think they do\". The larger point is to train a mindset of \"Try to see the *real* meaning of this structure, which is different from what you initially thought or what was advertised on the label\" and \"Rather than trying to come up with *solutions* and arguing about why they would work, try to understand the *real consequences* of an idea which is usually another non-solution but might be interesting anyway.\"\n\nPeople who are strong candidates for being hired to work on AI safety are people who can pinpoint flaws in proposals - the sort of person who'll spot that the consequence of running AIXI is that it will seize control of its own reward channel and kill the programmers, or that a proposal for [https://arbital.com/p/1b7](https://arbital.com/p/1b7) isn't reflectively stable. Our version of \"**Show me what you've broken**\" is that if someone claims to be an AI safety expert, you should ask them about their record of pinpointing structural flaws in proposed AI safety solutions and whether they've demonstrated that ability in a crisp domain where the flaw is [decisively demonstrable and not just verbally arguable](https://arbital.com/p/). 
(Sometimes verbal proposals also have flaws, and the most competent researcher may not be able to argue those flaws formally if the verbal proposal was itself vague. But the way a researcher *demonstrates ability in the field* is by making arguments that other researchers can access, which often though not always happens inside the formal domain.)\n\n# Treating 'exotic' failure scenarios as major bugs\n\n> This interest in “harmless failures” – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can.\n>\n> To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on. \n>\n> ...The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you. Which illustrates yet another part of the security mindset: Don’t rely too much on your own cleverness, because somebody out there is surely more clever and more motivated than you are.\n> \n> - [Ed Felten](https://freedom-to-tinker.com/blog/felten/security-mindset-and-harmless-failures/)\n\nIn the security mindset, we fear the seemingly small flaw because it might compound with other intelligent attacks and we may not be as clever as the attacker. In AI safety there's a very similar mindset for slightly different reasons: we fear the weird special case that breaks our algorithm because it reveals that we're using the wrong algorithm, and we fear that the strain of an AI optimizing to a superhuman degree could possibly expose that wrongness (in a way we didn't foresee because we're not that clever).\n\nWe can try to foresee particular details, and try to sketch particular breakdowns that supposedly look more \"practical\", but that's the equivalent of trying to think in advance what might go wrong when you use a donotreply@donotreply.com address that you don't control. 
Rather than relying on your own cleverness to see all the ways that a system might go wrong and tolerating a \"theoretical\" flaw that you think won't go wrong \"in practice\", when you are trying to build secure software or build an AI that may end up smarter than you are, you probably want to fix the \"theoretical\" flaws instead of trying to be clever.\n\nThe OpenBSD project, built from the ground up to be an extremely secure OS, treats any crashing bug (however exotic) as if it were a security flaw, because any crashing bug is also a case of \"the system is behaving out of bounds\" and it shows that this code does not, in general, stay inside the area of possibility space that it is supposed to stay in, which is also just the sort of thing an attacker might exploit.\n\nA similar mindset to security mindset, of exceptional behavior always indicating a major bug, appears within other organizations that have to do difficult jobs correctly on the first try. NASA isn't guarding against intelligent adversaries, but its software practices are aimed at the stringency level required to ensure that major *one-shot* projects have a decent chance of working correctly *on the first try.*\n\nOn NASA's software practice, if you discover that a space probe's operating system will crash if the seven planets line up perfectly in a row, it wouldn't say, \"Eh, go ahead, we don't expect the planets to ever line up perfectly over the probe's operating lifetime.\" NASA's quality assurance methodology says the probe's operating system is just *not supposed to crash, period* - if we control the probe's code, there's no reason to write code that will crash *period*, or tolerate code we can see crashing *regardless of what inputs it gets*.\n\nThis might not be the best way to invest your limited resources if you were developing a word processing app (that nobody was using for mission-critical purposes, and didn't need to safeguard any private data). In that case you might wait for a customer to complain before making the bug a top priority.\n\nBut it *is* an appropriate standpoint when building a hundred-million-dollar space probe, or software to operate the control rods in a nuclear reactor, or, to an even greater degree, building an [advanced agent](https://arbital.com/p/2c). There are different software practices you use to develop systems where failure is catastrophic and you can't wait for things to break before fixing them; and one of those practices is fixing every 'exotic' failure scenario, not because the exotic always happens, but because it always means the underlying design is broken. Even then, systems built to that practice still fail sometimes, but if they were built to a lesser stringency level, they'd have no chance at all of working correctly on the first try.\n\n# Niceness as the first line of defense / not relying on defeating a superintelligent adversary\n\n> There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files. This book is about the latter.\n> \n> - [Bruce Schneier](https://www.schneier.com/books/applied_cryptography/2preface.html)\n\nSuppose you write a program which, before it performs some dangerous action, demands a password. The program compares this password to the password it has stored. 
If the password is correct, the program transmits the message \"Yep\" to the user and performs the requested action, and otherwise returns an error message saying \"Nope\". You prove mathematically (theorem-proving software verification techniques) that if the chip works as advertised, this program cannot possibly perform the operation without seeing the password. You prove mathematically that the program cannot return any user reply except \"Yep\" or \"Nope\", thereby showing that there is no way to make it leak the stored password via some clever input.\n\nYou inspect all the transistors on the computer chip under a microscope to help ensure the mathematical guarantees are valid for this chip's behavior (that the chip doesn't contain any extra transistors you don't know about that could invalidate the proof). To make sure nobody can get to the machine within which the password is stored, you put it inside a fortress and a locked room requiring 12 separate keys, connected to the outside world only by an Ethernet cable. Any attempt to get into the locked room through the walls will trigger an explosive detonation that destroys the machine. The machine has its own pebble-bed electrical generator to prevent any shenanigans with the power cable. Only one person knows the password and they have 24-hour bodyguards to make sure nobody can get the password through rubber-hose cryptanalysis. The password itself is 20 characters long and was generated by a quantum random number generator under the eyesight of the sole authorized user, and the generator was then destroyed to prevent anyone else from getting the password by examining it. The dangerous action can only be performed once (it needs to be performed at a particular time) and the password will only be given once, so there's no question of somebody intercepting the password and then reusing it.\n\nIs this system now finally and truly unbreakable?\n\nIf you're an experienced cryptographer, the answer is, \"Almost certainly not; in fact, it will probably be easy to extract the password from this system using a standard cryptographic technique.\"\n\n\"What?!\" cries the person who built the system. \"But I spent all that money on the fortress and getting the mathematical proof of the program, strengthening every aspect of the system to the ultimate extreme! I really impressed myself putting in all that effort!\"\n\nThe cryptographer shakes their head. \"We call that Maginot Syndrome. That's like building a gate a hundred meters high in the middle of the desert. If I get past that gate, it won't be by climbing it, but by [walking around it](http://www.syslog.com/~jwilson/pics-i-like/kurios119.jpg). Making it 200 meters high instead of 100 meters high doesn't help.\"\n\n\"But what's the actual flaw in the system?\" demands the builder.\n\n\"For one thing,\" explains the cryptographer, \"you didn't follow the standard practice of never storing a plaintext password. The correct thing to do is to hash the password, plus a random stored salt like 'Q4bL'. Let's say the password is, unfortunately, 'rainbow'. You don't store 'rainbow' in plain text. You store 'Q4bL' and a secure hash of the string 'Q4bLrainbow'. When you get a new purported password, you prepend 'Q4bL' and then hash the result to see if it matches the stored hash. 
That way even if somebody gets to peek at the stored hash, they still won't know the password, and even if they have a big precomputed table of hashes of common passwords like 'rainbow', they still won't have precomputed the hash of 'Q4bLrainbow'.\"\n\n\"Oh, well, *I* don't have to worry about *that*,\" says the builder. \"This machine is in an extremely secure room, so nobody can open up the machine and read the password file.\"\n\nThe cryptographer sighs. \"That's not how a security mindset works - you don't ask whether anyone can manage to peek at the password file, you just do the damn hash instead of trying to be clever.\"\n\nThe builder sniffs. \"Well, if your 'standard cryptographic technique' for getting my password relies on your getting physical access to my machine, your technique fails and I have nothing to worry about, then!\"\n\nThe cryptographer shakes their head. \"That *really* isn't what computer security professionals sound like when they talk to each other... it's understood that most system designs fail, so we linger on possible issues and analyze them carefully instead of yelling that we have nothing to worry about... but at any rate, that wasn't the cryptographic technique I had in mind. You may have proven that the system only says 'Yep' or 'Nope' in response to queries, but you didn't prove that the responses don't *depend on* the true password in any way that could be used to extract it.\"\n\n\"You mean that there might be a secret wrong password that causes the system to transmit a series of Yeps and Nopes that encode the correct password?\" the builder says, looking skeptical. \"That may sound superficially plausible. But besides the incredible unlikeliness of anyone being able to find a weird backdoor like that - it really is a quite simple program that I wrote - the fact remains that I proved mathematically that the system only transmits a single 'Nope' in response to wrong answers, and a single 'Yep' in response to right answers. It does that every time. So you can't extract the password that way either - a string of wrong passwords always produces a string of 'Nope' replies, nothing else. Once again, I have nothing to worry about from this 'standard cryptographic technique' of yours, if it was even applicable to my software, which it's not.\"\n\nThe cryptographer sighs. \"This is why we have the proverb 'don't roll your own crypto'. Your proof doesn't literally, mathematically show that there's no external behavior of the system *whatsoever* that depends on the details of the true password in cases where the true password has not been transmitted. In particular, what you're missing is the *timing* of the 'Nope' responses.\"\n\n\"You mean you're going to look for some series of secret backdoor wrong passwords that causes the system to transmit a 'Nope' response after a number of seconds that exactly corresponds to the first letter, second letter, and so on of the real password?\" the builder says incredulously. \"I proved mathematically that the system never says 'Yep' to a wrong password. I think that also covers most possible cases of buffer overflows that could conceivably make the system act like that. I examined the code, and there just *isn't* anything that encodes a behavior like that. This just seems like a very far-flung hypothetical possibility.\"\n\n\"No,\" the cryptographer patiently explains, \"it's what we call a 'side-channel attack', and in particular a '[timing attack](https://en.wikipedia.org/wiki/Timing_attack)'. 
The operation that compares the attempted password to the correct password works by comparing the first byte, then the second byte, and continuing until it finds the first wrong byte, and then it returns. That means that if I try a password that starts with 'a', then a password that starts with 'b', and so on, and the true password starts with 'b', there'll be a slight, statistically detectable tendency for the attempted passwords that start with 'b' to get 'Nope' responses that take ever so slightly longer. Then we try passwords starting with 'ba', 'bb', 'bc', and so on.\"\n\nThe builder looks startled for a minute, and then their face quickly closes up. \"I can't believe that would actually work over the Internet where there are all sorts of delays in moving packets around -\"\n\n\"So we sample a million test passwords and look for statistical differences. You didn't build in a feature that limits the rate at which passwords can be tried. Even if you'd implemented that standard practice, and even if you'd implemented the standard practice of hashing passwords instead of storing them in plaintext, your system still might not be as secure as you hoped. We could try to put the machine under heavy load in order to stretch out its replies to particular queries. And if we can then figure out the hash by timing, we might be able to use thousands of GPUs to try to reverse the hash, instead of needing to send each query to your machine. To really fix the hole, you have to make sure that the timing of the response is fixed regardless of the wrong password given. But if you'd implemented standard practices like rate-limiting password attempts and storing a hash instead of the plaintext, it would at least be *harder* for your oversight to compound into an exploit. This is why we implement standard practices like that even when we *think* the system will be secure without them.\"\n\n\"I just can't believe that kind of weird attack would work in real life!\" the builder says desperately.\n\n\"It doesn't,\" replies the cryptographer. \"Because in real life, computer security professionals try to make sure that the exact timing of the response, power consumption of the CPU, and any other side channel that could conceivably leak any info, don't depend in any way on any secret information that an adversary might want to extract. But yes, in 2003 there was a timing attack proven on SSL-enabled webservers, though that was much more complicated than this case since the SSL system was less naive. Or long before that, timing attacks were used to extract valid login names from Unix servers that only ran crypt() on the password when presented with a valid login name, since crypt() took a while to run on older computers.\"\n\nIn computer security, via a tremendous effort, we can raise the cost of a major government reading your files to the point where they can no longer do it over the Internet and have to pay someone to invade your apartment in person. There are hordes of trained professionals in the National Security Agency or China's 3PLA, and once your system is published they can take a long time to try to outthink you. On your own side, if you're smart, you won't try to outthink them singlehanded; you'll use tools and methods built up by a large commercial and academic system that has experience trying to prevent major governments from reading your files. 
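Those tools and methods include very simple ones. The two standard practices the cryptographer keeps pointing to - never store the plaintext, and never let the reply's timing depend on the secret - fit in a few lines. Here is a minimal sketch in Python; the function and variable names are ours for illustration, and a real system would use a deliberately slow password-hashing function plus rate-limiting on top of this:

```python
import hashlib
import hmac
import secrets

def store_password(password: str) -> tuple[str, str]:
    """Store a random salt plus a hash of salt+password, never the plaintext.
    (Illustrative only: real systems use a slow KDF such as scrypt or argon2.)"""
    salt = secrets.token_hex(8)            # plays the role of 'Q4bL', but random
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def check_password_naive(stored_plaintext: str, attempt: str) -> bool:
    """Leaks timing: returns at the first mismatching character, so attempts
    sharing a longer correct prefix take measurably longer to reject."""
    if len(attempt) != len(stored_plaintext):
        return False
    for a, b in zip(attempt, stored_plaintext):
        if a != b:
            return False                   # early exit = timing side channel
    return True

def check_password_better(salt: str, digest: str, attempt: str) -> bool:
    """Hash the salted attempt, then compare in constant time, so the reply's
    timing does not depend on which characters of the attempt were wrong."""
    attempt_digest = hashlib.sha256((salt + attempt).encode()).hexdigest()
    return hmac.compare_digest(attempt_digest, digest)

salt, digest = store_password("rainbow")
print(check_password_better(salt, digest, "rainbow"))   # True
print(check_password_better(salt, digest, "rainbows"))  # False, in ~constant time
```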
You can force them to pay to actually have someone break into your house.\n\nThat's the outcome *when the adversary is composed of other human beings.* If the cognitive difference between you and the adversary is more along the lines of mouse versus human, it's possible we just *can't* have security that stops transhuman adversaries from [walking around our Maginot Lines](https://arbital.com/p/9f). In particular, it seems extremely likely that any transhuman adversary which can expose information to humans can hack the humans; from a cryptographic perspective, human brains are rich, complicated, poorly-understood systems with no security guarantees.\n\nParaphrasing Schneier, we might say that there's three kinds of security in the world: Security that prevents your little brother from reading your files, security that prevents major governments from reading your files, and security that prevents superintelligences from getting what they want. We can then go on to remark that the third kind of security is unobtainable, and even if we had it, it would be very hard for us to *know* we had it. Maybe superintelligences can make themselves knowably secure against other superintelligences, but *we* can't do that and know that we've done it.\n\nTo the extent the third kind of security can be obtained at all, it's liable to look more like the design of a [Zermelo-Fraenkel provability oracle](https://arbital.com/p/70) that can only emit 20 timed bits that are partially subject to an external guarantee, than an AI that is [allowed to talk to humans through a text channel](http://lesswrong.com/lw/qk/that_alien_message/). And even then, we shouldn't be sure - the AI is radiating electromagnetic waves and [what do you know, it turns out that DRAM access patterns can be used to transmit on GSM cellphone frequencies](https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/guri) and we can put the AI's hardware inside a Faraday cage but then maybe we didn't think of something *else.*\n\nIf you ask a computer security professional how to build an operating system that will be unhackable *for the next century* with the literal fate of the world depending on it, the correct answer is \"Please don't have the fate of the world depend on that.\" \n\nThe final component of an AI safety mindset is one that doesn't have a strong analogue in traditional computer security, and it is the rule of *not ending up facing a transhuman adversary in the first place.* The winning move is not to play. Much of the field of [value alignment theory](https://arbital.com/p/2v) is about going to any length necessary to avoid *needing* to outwit the AI.\n\nIn AI safety, the *first* line of defense is an AI that *does not want* to hurt you. If you try to put the AI in an explosive-laced concrete bunker, that may or may not be a sensible and cost-effective precaution in case the first line of defense turns out to be flawed. But the *first* line of defense should always be an AI that doesn't *want* to hurt you or [avert your other safety measures](https://arbital.com/p/45), rather than the first line of defense being a clever plan to prevent a superintelligence from getting what it wants.\n\nA special case of this mindset applied to AI safety is the [Omni Test](https://arbital.com/p/2x) - would this AI hurt us (or want to defeat other safety measures) if it were omniscient and omnipotent? 
If it would, then we've clearly built the wrong AI, because we are the ones laying down the algorithm and there's no reason to build an algorithm that hurts us *period.* If an agent design fails the Omni Test desideratum, this means there are scenarios that it *prefers* over the set of all scenarios we find acceptable, and the agent may go searching for ways to bring about those scenarios.\n\nIf the agent is searching for possible ways to bring about undesirable ends, then we, the AI programmers, are already spending computing power in an undesirable way. We shouldn't have the AI *running a search* that will hurt us if it comes up positive, even if we *expect* the search to come up empty. We just shouldn't program a computer that way; it's a foolish and self-destructive thing to do with computing power. Building an AI that would hurt us if omnipotent is a bug for the same reason that a NASA probe crashing if all seven other planets line up would be a bug - the system just isn't supposed to behave that way *period;* we should not rely on our own cleverness to reason about whether it's likely to happen.", "date_published": "2017-08-03T15:38:42Z", "authors": ["Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky", "Steven Dee"], "summaries": ["The mindset for AI safety has much in common with the mindset for computer security, despite the different target tasks. In computer security, we need to defend against intelligent adversaries who will seek out any flaw in our defense and get creative about it. In AI safety, we're dealing with things potentially smarter than us, which may come up with unforeseen clever ways to optimize whatever it is they're optimizing; the 'strain' on our design placed by it needing to run a smarter-than-human AI in a way that doesn't make it adversarial, is similar in many respects to the 'strain' from cryptography facing an existing intelligent adversary. \"Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail.\" Similarly, in AI safety, the first question we ask is what our design *really* does and how it fails, rather than trying to argue that it succeeds."], "tags": ["B-Class"], "alias": "1cv"} {"id": "cfb3c2ce9bbeeea277601325a202e467", "title": "Correlated coverage", "url": "https://arbital.com/p/correlated_coverage", "source": "arbital", "source_type": "text", "text": "\"Correlated coverage\" occurs within a domain when - going to some lengths to avoid words like \"competent\" or \"correct\" - an [advanced agent](https://arbital.com/p/2c) handling some large number of domain problems the way we want, means that the AI is likely to handle all problems in the domain the way we want. \n\nTo see the difference between correlated coverage and not-correlated coverage, consider humans as general [epistemologists](https://arbital.com/p/9c), versus the [https://arbital.com/p/5l](https://arbital.com/p/5l) problem.\n\nIn [https://arbital.com/p/5l](https://arbital.com/p/5l), there's [Humean freedom](https://arbital.com/p/1y) and [multiple fixed points](https://arbital.com/p/) when it comes to \"Which outcomes rank higher than which other outcomes?\" All the terms in [Frankena's list of desiderata](https://arbital.com/p/5l) have their own Humean freedom as to the details. 
An agent can decide 1000 issues the way we want, that happen to shadow 12 terms in our complex values, so that covering the answers we want pins down 12 degrees of freedom; and then it turns out there's a 13th degree of freedom that isn't shadowed in the 1000 issues, because [later problems are not drawn from the same barrel as prior problems](https://arbital.com/p/6q). In which case the answer on the 1001st issue, that does turn around that 13th degree of freedom, isn't pinned down by correlation with the coverage of the first 1000 issues. Coverage on the first 1000 queries may not correlate with coverage on the 1001st query.\n\nWhen it comes to [https://arbital.com/p/9c](https://arbital.com/p/9c), there's something like a central idea: Bayesian updating plus simplicity prior. Although not every human can solve every epistemic question, there's nonetheless a sense in which humans, having been optimized to run across the savanna and figure out which plants were poisonous and which of their political opponents might be plotting against them, were later able to figure out General Relativity despite having not been explicitly selected-on for solving that problem. If we include human [subagents](https://arbital.com/p/) into our notion of what problems, in general, human beings can be said to cover, then any question of fact where we can get a correct answer by building a superintelligence to solve it for us, is in some sense \"covered\" by humans as general epistemologists.\n\nHuman neurology is big and complicated and involves many different brain areas, and we had to go through a long process of bootstrapping our epistemology by discovering and choosing to adopt cultural rules about science. Even so, the fact that there's something like a central tendency or core or simple principle of \"Bayesian updating plus simplicity prior\", means that when natural selection built brains to figure out who was plotting what, it accidentally built brains that could figure out General Relativity.\n\nWe can see other parts of [value alignment](https://arbital.com/p/5s) in the same light - trying to find places, problems to tackle, where there may be correlated coverage:\n\nThe reason to work on ideas like [https://arbital.com/p/4l](https://arbital.com/p/4l) is that we might hope that there's something like a *core idea* for \"Try not to impact unnecessarily large amounts of stuff\" in a way that there isn't a core idea for \"Try not to do anything that decreases [value](https://arbital.com/p/55).\"\n\nThe hope that [anapartistic reasoning](https://arbital.com/p/3ps) could be a general solution to [https://arbital.com/p/45](https://arbital.com/p/45) says, \"Maybe there's a core central idea that covers everything we mean by an agent B letting agent A correct it - like, if we really honestly wanted to let someone else correct us and not mess with their safety measures, it seems like there's a core thing for us to want that doesn't go through all the Humean degrees of freedom in humane value.\" This doesn't mean that there's a short program that encodes all of anapartistic reasoning, but it means there's more reason to hope that if you get 100 problems right, and then the next 1000 problems are gotten right without further tweaking, and it looks like there's a central core idea behind it and the core thing looks like anapartistic reasoning, maybe you're done.\n\n[Do What I Know I Mean](https://arbital.com/p/2s1) similarly incorporates a hope that, even if it's not *simple* and there isn't a *short program* 
that encodes it, there's something like a *core* or a *center* to the notion of \"Agent X does what Agent Y asks while modeling Agent Y and trying not to do things whose consequences it isn't pretty sure Agent Y will be okay with\" where we can get correlated coverage of the problem with *less* complexity than it would take to encode values directly.\n\nFrom the standpoint of the [https://arbital.com/p/1cv](https://arbital.com/p/1cv), understanding the notion of correlated coverage and its complementary problem of [patch resistance](https://arbital.com/p/48) is what leads to traversing the gradient from:\n\n- \"Oh, we'll just hardwire the AI's utility function to tell it not to kill people.\"\n\nTo:\n\n- \"Of course there'll be an extended period where we have to train the AI not to do various sorts of bad things.\"\n\nTo: \n\n- \"*Bad impacts* isn't a compact category and the training data may not capture everything that could be a bad impact, especially if the AI gets smarter than the phase in which it was trained. But maybe the notion of being *low impact in general* (rather than blacklisting particular bad impacts) has a simple-enough core to be passed on by training or specification in a way that generalizes across sharp capability gains.\"", "date_published": "2016-06-26T23:37:41Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class", "Context disaster"], "alias": "1d6"} {"id": "6c7e9af6cf3d2e3d540e955a9ebcaf87", "title": "Nonperson predicate", "url": "https://arbital.com/p/nonperson_predicate", "source": "arbital", "source_type": "text", "text": "A \"nonperson predicate\" is a possible method for preventing an [advanced AI](https://arbital.com/p/2c) from [accidentally running sapient computations](https://arbital.com/p/6v) (it would be a potentially huge moral catastrophe if an AI created, ran, and discarded a large number of sapient programs inside itself). A nonperson predicate looks at potential computations and returns one of two possible answers, \"Don't know\" and \"Definitely not a person\". A successful nonperson predicate may (very often) return \"Don't know\" for computations that aren't in fact people, but it never returns \"Definitely not a person\" for something that *is* a person. In other words, to solve this problem, we don't need to know what consciousness *is* so much as we need to know what it *isn't* - we don't need to be sure what *is* a person, we need to be sure what *isn't* a person. For a nonperson predicate to be useful, however, it must still pass enough useful computations that we can build a working, capable AI out of them. (Otherwise \"Rocks are okay, everything else might be a person\" would be an adequate nonperson predicate.) The [foreseeable difficulty](https://arbital.com/p/6r) of a nonperson predicate is that [instrumental pressures](https://arbital.com/p/10k) to model humans accurately might tend to [seek out flaws and loopholes](https://arbital.com/p/42) in any attempted predicate. 
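Stated as an interface, the asymmetry is easy to write down; all of the difficulty lives in filling it in with checks that are both safe and permissive enough. A minimal sketch in Python, where the enum, the combining rule, and the placeholder "constant output" check are illustrative assumptions rather than proposals:

```python
from enum import Enum
from typing import Callable, Iterable

class Verdict(Enum):
    DEFINITELY_NOT_A_PERSON = 1   # must never be returned for anything that is a person
    DONT_KNOW = 2                 # always a safe answer, even for a rock

# A nonperson predicate maps a proposed computation to a Verdict.
NonpersonPredicate = Callable[[object], Verdict]

def cleared_to_run(predicates: Iterable[NonpersonPredicate], computation: object) -> bool:
    """Conservative combination: run the computation only if at least one
    whitelisted predicate affirmatively clears it; 'don't know' blocks by default."""
    return any(p(computation) is Verdict.DEFINITELY_NOT_A_PERSON for p in predicates)

def rocks_are_okay(computation: object) -> Verdict:
    """Trivially safe but useless: it clears far too few computations to build a
    capable AI out of. ('is_constant_output' is a hypothetical attribute standing
    in for some actual easy-to-verify property.)"""
    if getattr(computation, "is_constant_output", False):
        return Verdict.DEFINITELY_NOT_A_PERSON
    return Verdict.DONT_KNOW
```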
See the page on [https://arbital.com/p/6v](https://arbital.com/p/6v) for more detail.", "date_published": "2015-12-28T18:49:00Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1fv"} {"id": "cb1c7da91993253b17655949871b62d0", "title": "Reflective stability", "url": "https://arbital.com/p/reflective_stability", "source": "arbital", "source_type": "text", "text": "An [agent](https://arbital.com/p/2c) is \"reflectively stable\" in some regard, if having a choice of how to construct a successor agent or modify its own code, the agent will *only* construct a successor that thinks similarly in that regard.\n\n- In [tiling agent theory](https://arbital.com/p/1mq), an [expected utility satisficer](https://arbital.com/p/) is reflectively *consistent*, since it will approve of building another EU satisficer, but an EU satisficer is not reflectively *stable*, since it may also approve of building an expected utility maximizer (it expects the consequences of building the maximizer to satisfice).\n- Having a [utility function](https://arbital.com/p/1fw) that [only weighs paperclips](https://arbital.com/p/10h) is \"reflectively stable\" because paperclip maximizers *only* try to build other paperclip maximizers.\n\nIf, thinking the way you currently do (in some regard), it seems unacceptable to not think that way (in that regard), then you are reflectively stable (in that regard).", "date_published": "2016-05-21T10:50:56Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["An agent is \"reflectively stable\" in some regard if it only self-modifies (or constructs successors) to think similarly in that regard. For example, [an agent with a utility function that only values paperclips](https://arbital.com/p/10h) will construct successors that only value paperclips, so having a paperclip utility function is \"reflectively stable\" (and [so are most other utility functions](https://arbital.com/p/goals_reflectively_stable)). Contrast \"[reflectively consistent](https://arbital.com/p/2rb)\"."], "tags": ["B-Class"], "alias": "1fx"} {"id": "b4f3294a01adbe46e01e09bdd7b2c073", "title": "Known-algorithm non-self-improving agent", "url": "https://arbital.com/p/KANSI", "source": "arbital", "source_type": "text", "text": "\"Known-algorithm non-self-improving\" (KANSI) is a strategic scenario and class of possibly-attainable AI designs, where the first [pivotal](https://arbital.com/p/6y) [powerful AI](https://arbital.com/p/2c) has been constructed out of known, human-understood algorithms and is not engaging in extensive self-modification. Its advantage might be achieved by, e.g., being run on a very large cluster of computers. If you could build an AI that was capable of cracking protein folding and building nanotechnology, by running correctly structured algorithms akin to deep learning on Google's or Amazon's computing cluster, and the builders were sufficiently paranoid/sensible to have people continuously monitoring this AI's processes and all the problems it was trying to solve and *not* having this AI engage in self-modification or self-improvement, this would fall into the KANSI class of scenarios. 
This would imply that huge classes of problems in [reflective stability](https://arbital.com/p/1fx), [ontology identification](https://arbital.com/p/5c), limiting potentially dangerous capabilities, etcetera, would be much simpler.\n\nRestricting 'good' or approved AI development to KANSI designs would mean deliberately foregoing whatever capability gains might be possible through self-improvement. It's not known whether a KANSI AI could be *first* to some pivotal level of capability. This would depend on unknown background settings about how much capability can be gained, at what stage, by self-modification. Depending on these background variables, making a KANSI AI be first to a capability threshold might or might not be something that could be accomplished by any reasonable level of effort and coordination. This is one reason among several why [MIRI](https://arbital.com/p/15w) does not, e.g., restrict its attention to KANSI designs.\n\nJust intending to build a non-self-improving AI out of known algorithms is insufficient to ensure KANSI as a property; this might require further solutions along the lines of [https://arbital.com/p/45](https://arbital.com/p/45). E.g., humans can't modify their own brain functions, but because we're [general consequentialists](https://arbital.com/p/) and we [don't always think the way we want to think](https://arbital.com/p/1fx), we created quite simple innovations like, e.g., calculators, out of environmental objects in a world that didn't have any built-in calculators, so that we could think about arithmetic in a different way than we did by default. A KANSI design with a large divergence between how it thinks and how it wants to think might behave similarly, or require constant supervision to detect *most* cases of the AI starting to behave similarly - and then some cases might slip through the cracks. Since our present study and understanding of reflective stability is very primitive, we're plausibly still in the field of things we should be studying even if we want to build a KANSI agent, just to have the KANSI agent not be too *wildly* divergent in distance between how it thinks about X, and how it would prefer to think about X if given the choice.", "date_published": "2015-12-28T20:43:03Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Work in progress"], "alias": "1fy"} {"id": "b271a8002b1cdd891dc59c88346adf37", "title": "Modeling distant superintelligences", "url": "https://arbital.com/p/distant_SIs", "source": "arbital", "source_type": "text", "text": "One of the things we almost certainly don't want our AI to do, unless we're *extremely* confident that it is *extremely* robust and value-aligned, is have it think about and try to model alien civilizations that might contain superintelligences or potential simulators. Among the potential problems that would result could be:\n\n- The AI ends up effectively internally simulating a hostile superintelligence, and that simulated superintelligence manages to \"break out of the box\" in any number of possible ways. 
The [rule against having any hostile superintelligences anywhere near us](https://arbital.com/p/1cv) should apply to simulated hostile superintelligences inside the AI, for the same reason, unless we are extremely confident of our AI's value alignment and internal robustness.\n- The AI becomes subject to [distant coercion of causal explanations](https://arbital.com/p/5j) and begins to expect that it is probably being simulated, hence that the AI's 'true' environment or the true causes of its experiences are what the simulated superintelligence wants them to be.\n- The AI becomes subject to blackmail, e.g., the AI models what a simulated paperclip maximizer would do, and concludes that a simulated paperclip maximizer will do (very bad thing) *unless* the AI *immediately* carries out the action of \"modify your utility function to tile the universe with paperclips, then hide this fact from your programmers\".\n- The AI commits [mindcrime](https://arbital.com/p/6v) in the course of modeling an alien civilization that would contain sapient beings.\n\nSince there's no known task that actually requires a non-[Sovereign](https://arbital.com/p/1g3) AI to think about distant superintelligences, it seems like we should probably react to this possibility by figuring out how to design the first AI such that it just does not think about aliens, period. This would require [averting](https://arbital.com/p/) an [instrumental pressure](https://arbital.com/p/10k) and [excluding](https://arbital.com/p/1g4) an epistemic question that a sufficiently advanced AI would otherwise naturally consider in the course of, e.g., considering likely explanations for the [Fermi Paradox](https://arbital.com/p/).\n\nFor a given agent, this scenario is not dangerous to the extent that the agent is not capable of modeling a dangerous other mind or considering logical decision theories in the first place.", "date_published": "2015-12-29T23:22:36Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["One of the things we almost certainly don't want our AI to do, unless we're *extremely* confident that it is *extremely* robust and value-aligned, is have it think about and try to model alien civilizations that might contain superintelligences or potential simulators. This could result in the AI internally simulating a hostile superintelligence that 'breaks out of the box', or the AI committing [mindcrime](https://arbital.com/p/6v) in the course of modeling distant sapient minds, or [weirder possibilities](https://arbital.com/p/5j). Since there's no known immediate problem that *requires* modeling distant civilizations, the obvious course is to build AIs that just don't think about aliens, if that's possible."], "tags": ["Behaviorist genie", "B-Class"], "alias": "1fz"} {"id": "c17089a6cd7d4fd0cbe3c95e8176f16b", "title": "Strategic AGI typology", "url": "https://arbital.com/p/AGI_typology", "source": "arbital", "source_type": "text", "text": "A list of [advanced agent](https://arbital.com/p/2c) types, in classes broad enough to correspond to different *strategic scenarios* - AIs that can do different things, can only be built under different circumstances, or are only desirable given particular background assumptions. 
This typology isn't meant to be exhaustive.\n\n- [https://arbital.com/p/1g3](https://arbital.com/p/1g3)\n- [Genie](https://arbital.com/p/6w)\n- [Oracle](https://arbital.com/p/6x)\n- [Known-Algorithm Non-Self-Improving](https://arbital.com/p/1fy) agent\n- [Approval-Directed](https://arbital.com/p/) agent", "date_published": "2015-12-29T23:45:44Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "1g0"} {"id": "468d2e83929b052b51b01026308fbecc", "title": "Friendly AI", "url": "https://arbital.com/p/FAI", "source": "arbital", "source_type": "text", "text": "\"Friendly AI\" or \"FAI\" is an old term invented by [Yudkowsky](https://arbital.com/p/2) to mean an [advanced AI](https://arbital.com/p/2c) successfully [aligned](https://arbital.com/p/5s) with some [idealized](https://arbital.com/p/) version of humane values, such as e.g. [extrapolated volition](https://arbital.com/p/). In current use it has mild connotations of significant [self-sovereignty](https://arbital.com/p/1g3) and/or being able to [identify](https://arbital.com/p/6c) desirable strategic-level consequences for itself, since this is the scenario Yudkowsky originally envisioned. An \"UnFriendly AI\" or \"UFAI\" means one that's specifically *not* targeting humane objectives, e.g. a [paperclip maximizer](https://arbital.com/p/10h). (Note this does mean there are some things that are neither UFAIs or FAIs, like a [Genie](https://arbital.com/p/6w) that only considers short-term objectives, or for that matter, a rock.)", "date_published": "2015-12-28T21:06:04Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1g2"} {"id": "ef726fc10868949cc0f07ce16b080426", "title": "Autonomous AGI", "url": "https://arbital.com/p/Sovereign", "source": "arbital", "source_type": "text", "text": "An autonomous or self-directed [advanced agent](https://arbital.com/p/2c), a machine intelligence which acts in the real world in pursuit of its preferences without further user intervention or steering. In [Bostrom](https://arbital.com/p/18k)'s [typology](https://arbital.com/p/1g0) of [advanced agents](https://arbital.com/p/2c), this is a \"Sovereign\" and distinguished from a \"[Genie](https://arbital.com/p/6w)\" or an \"[Oracle](https://arbital.com/p/6x)\". (\"Sovereign\" in this sense means self-sovereign, and is not to be confused with the concept of a [Bostromian singleton](http://www.nickbostrom.com/fut/singleton.html) or any particular kind of social governance.)\n\nUsually, when we say \"Sovereign\" or \"self-directed\", we'll be talking about a supposedly [aligned](https://arbital.com/p/5s) AI that acts autonomously *by design*. Failure to solve the alignment problem probably means the resulting AI is self-directed-by-default.\n\nTrying to construct an autonomous Friendly AI suggests that we trust the AI more than the [programmers](https://arbital.com/p/9r) in any conflict between them, and we're okay with removing all constraints and off-switches except those the agent voluntarily takes upon itself.\n\nA *successfully* aligned autonomous AGI would carry the least [moral hazard](https://arbital.com/p/2sb) of any scenario, since it hands off steering to some fixed [preference framework](https://arbital.com/p/5f) or objective that the programmers can no longer modify. 
Nonetheless, being really really really that sure, not just getting it right but *knowing* we've gotten it right, seems like a large enough problem that perhaps we shouldn't be trying to build this class of AI *for our first try*, and should first target a [Task AGI](https://arbital.com/p/6w) instead, or something else involving ongoing user steering.\n\nAn autonomous [superintelligence](https://arbital.com/p/41l) would be the most difficult possible class of AGI to [align](https://arbital.com/p/5s), requiring [total alignment](https://arbital.com/p/41k). [Coherent extrapolated volition](https://arbital.com/p/3c5) is a proposed alignment target for an autonomous superintelligence, but again, probably not something we should attempt to do on our first try.", "date_published": "2016-06-06T20:39:50Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1g3"} {"id": "186daea8559af5ba4546538fdf7cf9d9", "title": "Epistemic exclusion", "url": "https://arbital.com/p/epistemic_exclusion", "source": "arbital", "source_type": "text", "text": "An \"epistemic exclusion\" would be a hypothetical form of AI limitation that made the AI not model (and if reflectively stable, not want to model) some particular part of physical or mathematical reality, or model it only using some restricted model class that didn't allow for the maximum possible predictive accuracy. For example, a [behaviorist genie](https://arbital.com/p/102) would not want to model human minds (except using a tightly restricted model class) to avoid [https://arbital.com/p/6v](https://arbital.com/p/6v), [https://arbital.com/p/programmer_manipulation](https://arbital.com/p/programmer_manipulation) and other possible problems.\n\nAt present, nobody has investigated how to do this (in any reflectively stable way), and there's all sorts of obvious problems stemming from the fact that, in reality, most facts are linked to a significant number of other facts. How would you make an AI that was really good at predicting everything else in the world but didn't know or want to know what was inside your basement? Intuitively, it seems likely that a lot of naive solutions would, e.g., just cause the AI to *de facto* end up constructing something that wasn't technically a model of your basement, but played the same role as a model of your basement, in order to maximize predictive accuracy about everything that wasn't your basement. We could similarly ask how it would be possible to build a really good mathematician that never knew or cared whether 333 was a prime number, and whether this might require it to also ignore the 'casting out nines' procedure whenever it saw 333 as a decimal number, or what would happen if we asked it to multiply 3 by (100 + 10 + 1), and so on.\n\nThat said, most *practical* reasons to create an epistemic exclusion (e.g. [against modeling humans in too much detail](https://arbital.com/p/102), or [against modeling distant alien civilizations and superintelligences](https://arbital.com/p/1fz)) would involve some practical reason the exclusion was there, and some level of in-practice exclusion that was *good enough*, which might not require e.g. 
maximum predictive accuracy about everything else combined with zero predictive accuracy about the exclusion.", "date_published": "2015-12-28T21:20:39Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "1g4"} {"id": "1444994bcbaa22481f1b2baf97818b4d", "title": "Solomonoff induction: Intro Dialogue (Math 2)", "url": "https://arbital.com/p/1hh", "source": "arbital", "source_type": "text", "text": "*(A dialogue between Ashley, a computer scientist who's never heard of Solomonoff's theory of inductive inference, and Blaine, who thinks it is the best thing since sliced bread.)*\n\nAshley: Good evening, Msr. Blaine.\n\nBlaine: Good evening, Msr. Ashley.\n\nAshley: I've heard there's this thing called \"Solomonoff's theory of inductive inference\".\n\nBlaine: The rumors have spread, then.\n\nAshley: Yeah, so, what the heck is that about?\n\nBlaine: Invented in the 1960s by the mathematician Ray Solomonoff, the key idea in Solomonoff induction is to do sequence prediction by using Bayesian updating on a prior composed of a mixture of all computable probability distributions--\n\nAshley. Wait. Back up a lot. Before you try to explain what Solomonoff induction *is*, I'd like you to try to tell me what it *does*, or why people study it in the first place. I find that helps me organize my listening. Right now I don't even know why I should be interested in this.\n\nBlaine: Um, okay. Let me think for a second...\n\nAshley: Also, while I can imagine things that \"sequence prediction\" might mean, I haven't yet encountered it in a technical context, so you'd better go a bit further back and start more at the beginning. I do know what \"computable\" means and what a \"probability distribution\" is, and I remember the formula for [Bayes's Rule](https://arbital.com/p/1lz) although it's been a while.\n\nBlaine: Okay. So... one way of framing the usual reason why people study this general field in the first place, is that sometimes, by studying certain idealized mathematical questions, we can gain valuable intuitions about epistemology. That's, uh, the field that studies how to reason about factual questions, how to build a map of reality that reflects the territory--\n\nAshley: I have some idea what 'epistemology' is, yes. But I think you might need to start even further back, maybe with some sort of concrete example or something.\n\nBlaine: Okay. Um. So one anecdote that I sometimes use to frame the value of computer science to the study of epistemology, is Edgar Allen Poe's argument in 1833 that chess was uncomputable.\n\nAshley: That doesn't sound like a thing that actually happened.\n\nBlaine: I know, but it totally *did* happen and not in a metaphorical sense either! Edgar Allen Poe wrote an [essay](http://www.eapoe.org/works/essays/maelzel.htm) explaining why no automaton would ever be able to play chess, and he specifically mentioned \"Mr. Babbage's computing engine\" as an example. You see, in the nineteenth century, there was for a time this sensation known as the Mechanical Turk--supposedly a machine, an automaton, that could play chess. At the grandmaster level, no less. Now today, when we're accustomed to the idea that it takes a reasonable powerful computer to do that, we can know *immediately* that the Mechanical Turk must have been a fraud and that there must have been a concealed operator inside--a midget, as it turned out. *Today* we know that this sort of thing is *hard* to build into a machine. 
But in the 19th century, even that much wasn't known. So when Edgar Allen Poe, who besides being an author was also an accomplished magician, set out to write an essay about the Mechanical Turk, he spent the *second* half of the essay dissecting what was known about the Turk's appearance to (correctly) figure out where the human operator was hiding. But Poe spent the first half of the essay arguing that no automaton - nothing like Mr. Babbage's computing engine - could possibly play chess, which was how he knew *a priori* that the Turk had a concealed human operator.\n\nAshley: And what was Poe's argument?\n\nBlaine: Poe observed that in an algebraical problem, each step followed from the previous step of necessity, which was why the steps in solving an algebraical problem could be represented by the deterministic motions of gears in something like Mr. Babbage's computing engine. But in a chess problem, Poe said, there are many possible chess moves, and no move follows with necessity from the position of the board; and even if you did select one move, the opponent's move would not follow with necessity, so you couldn't represent it with the determined motion of automatic gears. Therefore, Poe said, whatever was operating the Mechanical Turk must have the nature of Cartesian mind, rather than the nature of deterministic matter, and this was knowable *a priori*. And then he started figuring out where the required operator was hiding.\n\nAshley: That's some amazingly impressive reasoning for being completely wrong.\n\nBlaine: I know! Isn't it great?\n\nAshley: I mean, that sounds like Poe correctly identified the *hard* part of playing computer chess, the branching factor of moves and countermoves, which is the reason why no *simple* machine could do it. And he just didn't realize that a deterministic machine could deterministically check many possible moves in order to figure out the game tree. So close, and yet so far.\n\nBlaine: More than a century later, in 1950, Claude Shannon published the first paper ever written on computer chess. And in passing, Shannon gave the formula for playing perfect chess if you had unlimited computing power, the algorithm you'd use to extrapolate the entire game tree. We could say that Shannon gave a short program that would solve chess if you ran it on a hypercomputer, where a hypercomputer is an ideal computer that can run any finite computation immediately. And then Shannon passed on to talking about the problem of locally guessing how good a board position was, so that you could play chess using only a *small* search. I say all this to make a point about the value of knowing how to solve problems using hypercomputers, even though hypercomputers don't exist. Yes, there's often a *huge* gap between the unbounded solution and the practical solution. It wasn't until 1997, forty-seven years after Shannon's paper giving the unbounded solution, that Deep Blue actually won the world chess championship--\n\nAshley: And that wasn't just a question of faster computing hardware running Shannon's ideal search algorithm. There were a lot of new insights along the way, most notably the alpha-beta pruning algorithm and a lot of improvements in positional evaluation.\n\nBlaine: Right! But I think some people overreact to that forty-seven year gap, and act like it's *worthless* to have an unbounded understanding of a computer program, just because you might still be forty-seven years away from a practical solution. 
But if you don't even have a solution that would run on a hypercomputer, you're Poe in 1833, not Shannon in 1950. The reason I tell the anecdote about Poe is to illustrate that Poe was *confused* about computer chess in a way that Shannon was not. When we don't know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us. When we can state code that would solve the problem given a hypercomputer, we have become *less* confused. Once we have the unbounded solution we understand, in some basic sense, *the kind of work we are trying to perform,* and then we can try to figure out how to do it efficiently.\n\nAshley: Which may well require new insights into the structure of the problem, or even a conceptual revolution in how we imagine the work we're trying to do.\n\nBlaine: Yes, but the point is that you can't even get started on that if you're arguing about how playing chess has the nature of Cartesian mind rather than matter. At that point you're not 50 years away from winning the chess championship, you're 150 years away, because it took an extra 100 years to move humanity's understanding to the point where Claude Shannon could trivially see how to play perfect chess using a large-enough computer. I'm not trying to exalt the unbounded solution by denigrating the work required to get a bounded solution. I'm not saying that when we have an unbounded solution we're practically there and the rest is a matter of mere lowly efficiency. I'm trying to compare having the unbounded solution to the horrific confusion of *not understanding what we're trying to do.*\n\nAshley: Okay. I think I understand why, on your view, it's important to know how to solve problems using infinitely fast computers, or hypercomputers as you call them. When we can say how to answer a question using infinite computing power, that means we crisply understand the question itself, in some sense; while if we can't figure out how to solve a problem using unbounded computing power, that means we're *confused* about the problem, in some sense. I mean, anyone who's ever tried to teach the more doomed sort of undergraduate to write code knows what it means to be confused about what it takes to compute something.\n\nBlaine: Right.\n\nAshley: So what does this have to do with \"Solomonoff induction\"?\n\nBlaine: Ah! Well, suppose I asked you how to do epistemology using infinite computing power?\n\nAshley: My good fellow, I would at once reply, \"Beep. Whirr. Problem 'do epistemology' not crisply specified.\" At this stage of affairs, I do not think this reply indicates any fundamental confusion on my part; rather I think it is you who must be clearer.\n\nBlaine: Given unbounded computing power, how would you reason in order to construct an accurate map of reality?\n\nAshley: That still strikes me as rather underspecified.\n\nBlaine: Perhaps. But even there I would suggest that it's a mark of intellectual progress to be able to take vague and underspecified ideas like 'do good epistemology' and turn them *into* crisply specified problems. Imagine that I went up to my friend Cecil, and said, \"How would you do good epistemology given unlimited computing power and a short Python program?\" and Cecil at once came back with an answer--a good and reasonable answer, once it was explained. Cecil would probably know something quite interesting that you do not presently know.\n\nAshley: I confess to being rather skeptical of this hypothetical. 
But if that actually happened--if I agreed, to my own satisfaction, that someone had stated a short Python program that would 'do good epistemology' if run on an unboundedly fast computer--then I agree that I'd probably have learned something *quite interesting* about epistemology.\n\nBlaine: What Cecil knows about, in this hypothetical, is Solomonoff induction. In the same way that Claude Shannon answered \"Given infinite computing power, how would you play perfect chess?\", Ray Solomonoff answered \"Given infinite computing power, how would you perfectly find the best hypothesis that fits the facts?\"\n\nAshley: Suddenly, I find myself strongly suspicious of whatever you are about to say to me.\n\nBlaine: That's understandable.\n\nAshley: In particular, I'll ask at once whether \"Solomonoff induction\" assumes that our hypotheses are being given to us on a silver platter along with the exact data we're supposed to explain, or whether the algorithm is organizing its own data from a big messy situation and inventing good hypotheses from scratch.\n\nBlaine: Great question! It's the second one.\n\nAshley: Really? Okay, now I have to ask whether Solomonoff induction is a recognized concept in good standing in the field of academic computer science, because that does not sound like something modern-day computer science knows how to do.\n\nBlaine: I wouldn't say it's a widely known concept, but it's one that's in good academic standing. The method isn't used in modern machine learning because it requires an infinitely fast computer and isn't easily approximated the way that chess is.\n\nAshley: This really sounds very suspicious. Last time I checked, we hadn't *begun* to formalize the creation of good new hypotheses from scratch. I've heard about claims to have 'automated' the work that, say, Newton did in inventing classical mechanics, and I've found them all to be incredibly dubious. Which is to say, they were rigged demos and lies.\n\nBlaine: I know, but--\n\nAshley: And then I'm even more suspicious of a claim that someone's algorithm would solve this problem if only they had infinite computing power. Having some researcher claim that their Good-Old-Fashioned-AI semantic network *would* be intelligent if run on a computer so large that, conveniently, nobody can ever test their theory, is not going to persuade me.\n\nBlaine: Do I really strike you as that much of a charlatan? What have I ever done to you, that you would expect me to try pulling a scam like that?\n\nAshley: That's fair. I shouldn't accuse you of planning that scam when I haven't seen you say it. But I'm pretty sure the problem of \"coming up with good new hypotheses in a world full of messy data\" is [AI-complete](https://en.wikipedia.org/wiki/AI-complete). And even Mentif-\n\nBlaine: Do not say the name, or he will appear!\n\nAshley: Sorry. Even the legendary first and greatest of all AI crackpots, He-Who-Googles-His-Name, could assert that his algorithms would be all-powerful on a computer large enough to make his claim unfalsifiable. So what?\n\nBlaine: That's a very sensible reply and this, again, is exactly the kind of mental state that reflects a problem that is *confusing* rather than just hard to implement. It's the sort of confusion Poe might feel in 1833, or close to it. In other words, it's just the sort of conceptual issue we *would* have solved at the point where we could state a short program that could run on a hypercomputer. 
Which Ray Solomonoff did in 1964.\n\nAshley: Okay, let's hear about this supposed general solution to epistemology.\n\nBlaine: First, try to solve the following puzzle. 1, 3, 4, 7, 11, 18, 29...?\n\nAshley: Let me look at those for a moment... 47.\n\nBlaine: Congratulations on engaging in, as we snooty types would call it, 'sequence prediction'.\n\nAshley: I'm following you so far.\n\nBlaine: The smarter you are, the more easily you can find the hidden patterns in sequences and predict them successfully. You had to notice the resemblance to the Fibonacci rule to guess the next number. Someone who didn't already know about Fibonacci, or who was worse at mathematical thinking, would have taken longer to understand the sequence or maybe never learned to predict it at all.\n\nAshley: Still with you.\n\nBlaine: It's not a sequence of *numbers* per se... but can you see how the question, \"The sun has risen on the last million days. What is the probability that it rises tomorrow?\" could be viewed as a kind of sequence prediction problem?\n\nAshley: Only if some programmer neatly parses up the world into a series of \"Did the Sun rise on day X starting in 4.5 billion BCE, 0 means no and 1 means yes? 1, 1, 1, 1, 1...\" and so on. Which is exactly the sort of shenanigan that I see as cheating. In the real world, you go outside and see a brilliant ball of gold touching the horizon, not a giant \"1\".\n\nBlaine: Suppose I have a robot running around with a webcam showing it a 1920x1080 pixel field that refreshes 60 times a second with 32-bit colors. I could view that as a giant sequence and ask the robot to predict what it will see happen when it rolls out to watch a sunrise the next day.\n\nAshley: I can't help but notice that the 'sequence' of webcam frames is absolutely enormous, like, the sequence is made up of 66-megabit 'numbers' appearing 3600 times per minute... oh, right, computers much bigger than the universe. And now you're smiling evilly, so I guess that's the point. I also notice that the sequence is no longer deterministically predictable, that it is no longer a purely mathematical object, and that the sequence of webcam frames observed will depend on the robot's choices. This makes me feel a bit shaky about the analogy to predicting the mathematical sequence 1, 1, 2, 3, 5.\n\nBlaine: I'll try to address those points in order. First, Solomonoff induction is about assigning *probabilities* to the next item in the sequence. I mean, if I showed you a box that said 1, 1, 2, 3, 5, 8 you would not be absolutely certain that the next item would be 13. There could be some more complicated rule that just looked Fibonacci-ish but then diverged. You might guess with 90% probability but not 100% probability, or something like that.\n\nAshley: This has stopped feeling to me like math.\n\nBlaine: There is a *large* branch of math, to say nothing of computer science, that deals in probabilities and statistical prediction. We are going to be describing absolutely lawful and deterministic ways of assigning probabilities after seeing 1, 3, 4, 7, 11, 18.\n\nAshley: Okay, but if you're later going to tell me that this lawful probabilistic prediction rule underlies a generally intelligent reasoner, I'm already skeptical. 
No matter how large a computer it's run on, I find it hard to imagine that some simple set of rules for assigning probabilities is going to encompass truly and generally intelligent answers about sequence prediction, like [Terence Tao](https://en.wikipedia.org/wiki/Terence_Tao) would give after looking at the sequence for a while. We just have no idea how Terence Tao works, so we can't duplicate his abilities in a formal rule, no matter how much computing power that rule gets... you're smiling evilly again. I'll be *quite* interested if that evil smile turns out to be justified.\n\nBlaine: Indeed.\n\nAshley: I also find it hard to imagine that this deterministic mathematical rule for assigning probabilities would notice if a box was outputting an encoded version of \"To be or not to be\" from Shakespeare by mapping A to Z onto 1 to 26, which I would notice eventually though not immediately upon seeing 20, 15, 2, 5, 15, 18... And you're *still* smiling evilly.\n\nBlaine: Indeed. That is *exactly* what Solomonoff induction does. Furthermore, we have theorems establishing that Solomonoff induction can do it way better than you or Terence Tao.\n\nAshley: A *theorem* proves this. As in a necessary mathematical truth. Even though we have no idea how Terence Tao works empirically... and there's evil smile number four. Okay. I am very skeptical, but willing to be convinced.\n\nBlaine: So if you actually did have a hypercomputer, you could cheat, right? And Solomonoff induction is the most ridiculously cheating cheat in the history of cheating.\n\nAshley: Go on.\n\nBlaine: We just run all possible computer programs to see which are the simplest computer programs that best predict the data seen so far, and use those programs to predict what comes next. This mixture contains, among other things, an exact copy of Terence Tao, thereby allowing us to prove theorems about their relative performance.\n\nAshley: Is this an actual reputable math thing? I mean really?\n\nBlaine: I'll deliver the formalization later, but you did ask me to first state the point of it all. The point of Solomonoff induction is that it gives us a gold-standard ideal for sequence prediction, and this gold-standard prediction only errs by a bounded amount, over infinite time, relative to the best computable sequence predictor. We can also see it as formalizing the intuitive idea that was expressed by William Ockham a few centuries earlier that simpler theories are more likely to be correct, and as telling us that 'simplicity' should be measured in algorithmic complexity, which is the size of a computer program required to output a hypothesis's predictions.\n\nAshley: I think I would have to read more on this subject to actually follow that. What I'm hearing is that Solomonoff induction is a reputable idea that is important because it gives us a kind of ideal for sequence prediction. This ideal also has something to do with Occam's Razor, and stakes a claim that the simplest theory is the one that can be represented by the shortest computer program. You identify this with \"doing good epistemology\".\n\nBlaine: Yes, those are legitimate takeaways. Another way of looking at it is that Solomonoff induction is an ideal but uncomputable answer to the question \"What should our priors be?\", which is left open by understanding [Bayesian updating](https://arbital.com/p/1lz).\n\nAshley: Can you say how Solomonoff induction answers the question of, say, the prior probability that Canada is planning to invade the United States? 
I once saw a crackpot website that tried to invoke Bayesian probability about it, but only after setting the prior at 10% or something like that, I don't recall exactly. Does Solomonoff induction let me tell him that he's making a math error, instead of just calling him silly in an informal fashion?\n\nBlaine: If you're expecting to sit down with Leibniz and say, \"Gentlemen, let us calculate\" then you're setting your expectations too high. Solomonoff gives us an idea of how we *should* compute that quantity given unlimited computing power. It doesn't give us a firm recipe for how we can best approximate that ideal in real life using bounded computing power, or human brains. That's like expecting to play perfect chess after you read Shannon's 1950 paper. But knowing the ideal, we can extract some intuitive advice that might help our online crackpot if only he'd listen.\n\nAshley: But according to you, Solomonoff induction does say in principle what is the prior probability that Canada will invade the United States.\n\nBlaine: Yes, up to a choice of universal Turing machine.\n\nAshley *(looking highly skeptical):* So I plug a universal Turing machine into the formalism, and in principle, I get out a uniquely determined probability that Canada invades the USA.\n\nBlaine: Exactly!\n\nAshley: Uh huh. Well, go on.\n\nBlaine: So, first, we have to transform this into a sequence prediction problem.\n\nAshley: Like a sequence of years in which Canada has and hasn't invaded the US, mostly zero except around 1812--\n\nBlaine: *No!* To get a good prediction about Canada we need much more data than that, and I don't mean a graph of Canadian GDP either. Imagine a sequence that contains all the sensory data you have ever received over your lifetime. Not just the hospital room that you saw when you opened your eyes right after your birth, but the darkness your brain received as input while you were still in your mother's womb. Every word you've ever heard. Every letter you've ever seen on a computer screen, not as ASCII letters but as the raw pattern of neural impulses that gets sent down from your retina.\n\nAshley: That seems like a lot of data and some of it is redundant, like there'll be lots of similar pixels for blue sky--\n\nBlaine: That data is what *you* got as an agent. If we want to translate the question of the prediction problem Ashley faces into theoretical terms, we should give the sequence predictor *all* the data that you had available, including all those repeating blue pixels of the sky. Who knows? Maybe there was a Canadian warplane somewhere in there, and you didn't notice.\n\nAshley: But it's impossible for my brain to remember all that data. If we neglect for the moment how the retina actually works and suppose that I'm seeing the same 1920x1080 @60Hz feed the robot would, that's far more data than my brain can realistically learn per second.\n\nBlaine: So then Solomonoff induction can do better than you can, using its unlimited computing power and memory. That's fine.\n\nAshley: But what if you can do better by forgetting more?\n\nBlaine: If you have limited computing power, that makes sense. With unlimited computing power, that really shouldn't happen and that indeed is one of the lessons of Solomonoff induction. An unbounded Bayesian never expects to do worse by updating on another item of evidence--for one thing, you can always just do the same policy you would have used if you hadn't seen that evidence. 
That kind of lesson *is* one of the lessons that might not be intuitively obvious, but which you can feel more deeply by walking through the math of probability theory. With unlimited computing power, nothing goes wrong as a result of trying to process 4 gigabits per second; every extra bit just produces a better expected future prediction.\n\nAshley: Okay, so we start with literally all the data I have available. That's 4 gigabits per second if we imagine 1920 by 1080 frames of 32-bit pixels repeating 60 times per second. Though I remember hearing 100 megabits per second would be a better estimate of what the retina sends out, and that it's pared down to 1 megabit per second very quickly by further processing.\n\nBlaine: Right. We start with all of that data, going back to when you were born. Or maybe when your brain formed in the womb, though it shouldn't make much difference.\n\nAshley: I note that there are some things I know that don't come from my sensory inputs at all. Chimpanzees learn to be afraid of skulls and snakes much faster than they learn to be afraid of other arbitrary shapes. I was probably better at learning to walk in Earth gravity than I would have been at navigating in zero G. Those are heuristics I'm born with, based on how my brain was wired, which ultimately stems from my DNA specifying the way that proteins should fold to form neurons--not from any photons that entered my eyes later.\n\nBlaine: So, for purposes of following along with the argument, let's say that your DNA is analogous to the code of a computer program that makes predictions. What you're observing here is that humans have 750 megabytes of DNA, and even if most of that is junk and not all of what's left is specifying brain behavior, it still leaves a pretty large computer program that could have a lot of prior information programmed into it. Let's say that your brain, or rather, your infant pre-brain wiring algorithm, was effectively a 7.5 megabyte program - if it's actually 75 megabytes, that makes little difference to the argument. By exposing that 7.5 megabyte program to all the information coming in from your eyes, ears, nose, proprioceptive sensors telling you where your limbs were, and so on, your brain updated itself into forming the modern Ashley, whose hundred trillion synapses might be encoded by, say, one petabyte of information.\n\nAshley: The thought does occur to me that some environmental phenomena have effects on me that can't be interpreted as \"sensory information\" in any simple way, like the direct effect that alcohol has on my neurons, and how that feels to me from the inside. But it would be perverse to claim that this prevents you from trying to summarize all the information that the Ashley-agent receives into a single sequence, so I won't press the point. \n\n*(Eliezer, whispering: More on this topic later.)*\n\nAshley: Oh, and for completeness's sake, wouldn't there also be further information embedded in the laws of physics themselves? Like, the way my brain executes implicitly says something about the laws of physics in the universe I'm in.\n\nBlaine: Metaphorically speaking, our laws of physics would play the role of a particular choice of Universal Turing Machine, which has some effect on which computations count as \"simple\" inside the Solomonoff formula. But normally, the UTM should be very simple compared to the amount of data in the sequence we're trying to predict, just like the laws of physics are very simple compared to a human brain. 
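Run the numbers from before: at roughly 4 gigabits per second, a single day of that visual stream is already about $4 \\times 10^9 \\times 86,400 \\approx 3.5 \\times 10^{14}$ raw bits.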
In terms of [algorithmic complexity](https://arbital.com/p/5v), the laws of physics are very simple compared to watching a 1920x1080 @60Hz visual field for a day. \n\nAshley: Part of my mind feels like the laws of physics are quite complicated compared to going outside and watching a sunset. Like, I realize that's false, but I'm not sure how to say out loud exactly why it's false...\n\nBlaine: Because the algorithmic complexity of a system isn't measured by how long a human has to go to college to understand it, it's measured by the size of the computer program required to generate it. The language of physics is differential equations, and it turns out that this is something difficult to beat into some human brains, but differential equations are simple to program into a simple Turing Machine.\n\nAshley: Right, like, the laws of physics actually have far fewer details to them than, say, human nature. At least on the Standard Model of Physics. I mean, in principle there could be another decillion undiscovered particle families out there.\n\nBlaine: The concept of \"algorithmic complexity\" isn't about seeing something with lots of gears and details, it's about the size of the computer program required to compress all those details. The [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set) looks very complicated visually, you can keep zooming in using more and more detail, but there's a very simple rule that generates it, so we say the algorithmic complexity is very low.\n\nAshley: All the visual information I've seen is something that happens *within* the physical universe, so how can it be more complicated than the universe? I mean, I have a sense on some level that this shouldn't be a problem, but I don't know why it's not a problem.\n\nBlaine: That's because particular parts of the universe can have much higher algorithmic complexity than the entire universe! Consider a library that contains all possible books. It's very easy to write a computer program that generates all possible books. So any *particular* book in the library contains much more algorithmic information than the *entire* library; it contains the information required to say 'look at this particular book here'. If pi is normal, then somewhere in its digits is a copy of Shakespeare's Hamlet - but the number saying which particular digit of pi to start looking at, will be just about exactly as large as Hamlet itself. The copy of Shakespeare's Hamlet that exists in the decimal expansion of pi, is more complex than pi itself. If you zoomed way in and restricted your vision to a particular part of the Mandelbrot set, what you saw might be much more *algorithmically* complex than the entire Mandelbrot set, because the specification has to say where in the Mandelbrot set you are. Similarly, the planet Earth is much more algorithmically complex than the laws of physics. Likewise, the visual field you see over the course of a second can easily be far more algorithmically complex than the laws of physics.\n\nAshley: Okay, I think I get that. And similarly, even though the ways that proteins fold up are very complicated, in principle we could get all that info using just the simple fundamental laws of physics plus the relatively simple DNA code for the protein.
There's all sorts of obvious caveats about epigenetics and so on, but those caveats aren't likely to change the numbers by a whole order of magnitude.\n\nBlaine: Right!\n\nAshley: So the laws of physics are, like, a few kilobytes, and my brain has say 75 megabytes of innate wiring instructions. And then I get to see a lot more information than that over my lifetime, like a megabit per second after my initial visual system finishes preprocessing it, and then most of that is forgotten. Uh... what does that have to do with Solomonoff induction again?\n\nBlaine: Solomonoff induction quickly catches up to any single computer program at sequence prediction, even if the original program is very large and contains a lot of prior information about the environment. If a program is 75 megabytes long, it can only predict 75 megabytes worth of data better than the Solomonoff inductor before the Solomonoff inductor catches up to it. That doesn't mean that a Solomonoff inductor knows everything a baby does after the first second of exposure to a webcam feed, but it does mean that after the first second the Solomonoff inductor is already no more surprised than a baby by the vast majority of pixels in the next frame. Every time the Solomonoff inductor assigns half as much probability as the baby to the next pixel it sees, that's one bit spent permanently out of the 75 megabytes of error that can happen before the Solomonoff inductor catches up to the baby. That your brain is written in the laws of physics also has some implicit correlation with the environment, but that's like saying that a program is written in the same programming language as the environment. The language can contribute something to the power of the program, and the environment being written in the same programming language can be a kind of prior knowledge. But if Solomonoff induction starts from a standard Universal Turing Machine as its language, that doesn't contribute any more bits of lifetime error than the complexity of that programming language in the UTM.\n\nAshley: Let me jump back a couple of steps and return to the notion of my brain wiring itself up in response to environmental information. I'd expect an important part of that process was my brain learning to *control* the environment, not just passively observing it. Like, it mattered to my brain's wiring algorithm that my brain saw the room shift in a certain way when it sent out signals telling my eyes to move.\n\nBlaine: Indeed. But talking about the sequential *control* problem is more complicated math. [https://arbital.com/p/11v](https://arbital.com/p/11v) is the ideal agent that uses Solomonoff induction as its epistemology and expected reward as its decision theory. That introduces extra complexity, so it makes sense to talk about just Solomonoff induction first. We can talk about AIXI later. So imagine for the moment that we were *just* looking at your sensory data, and trying to predict what would come next in that.\n\nAshley: Wouldn't it make more sense to look at the brain's inputs *and* outputs, if we wanted to predict the next input? Not just look at the series of previous inputs?\n\nBlaine: It'd make the problem easier for a Solomonoff inductor to solve, sure; but it also makes the problem more complicated. Let's talk instead about what would happen if you took the complete sensory record of your life, gave it to an ideally smart agent, and asked the agent to predict what you would see next. 
Maybe the agent could do an even better job of prediction if we also told it about your brain's outputs, but I don't think that subtracting the outputs would leave it helpless to see patterns in the inputs.\n\nAshley: It sounds like a pretty hard problem to me, maybe even an unsolvable one. I'm thinking of the distinction in computer science between needing to learn from non-chosen data, versus learning when you can choose particular queries. Learning can be much faster in the second case.\n\nBlaine: In terms of what can be predicted *in principle* given the data, what facts are *actually reflected in it* that Solomonoff induction might uncover, we shouldn't imagine a human trying to analyze the data, we should imagine [an entire advanced civilization pondering it for years](http://lesswrong.com/lw/qk/that_alien_message/). If you look at it from that angle, then the alien civilization isn't going to be balked by the fact that it's looking at the answers to the queries that Ashley's brain chose, instead of the answers to the queries it chose itself. Like, if the Ashley had already read Shakespeare's Hamlet--if the image of those pages had already crossed the sensory stream--and then the Ashley saw a mysterious box outputting 20, 15, 2, 5, 15, 18, I think somebody eavesdropping on that sensory data would be equally able to guess that this was encoding 'tobeor' and guess that the next thing the Ashley saw might be the box outputting 14. You wouldn't even need an entire alien civilization of superintelligent cryptographers to guess that. And it definitely wouldn't be a killer problem that Ashley was controlling the eyeball's saccades, even if you could learn even faster by controlling the eyeball yourself. So far as the computer-science distinction goes, Ashley's eyeball *is* being controlled to make intelligent queries and seek out useful information, it's just Ashley controlling the eyeball instead of you--that eyeball is not a query-oracle answering *random* questions.\n\nAshley: Okay, I think this example here is helping my understanding of what we're doing here. In the case above, the next item in the Ashley-sequence wouldn't actually be 14. It would be this huge 1920 x 1080 visual field that showed the box flashing a little picture of '14'.\n\nBlaine: Sure. Otherwise it would be a rigged demo, as you say.\n\nAshley: I think I'm confused about the idea of *predicting* the visual field. It seems to me that what with all the dust specks in my visual field, and maybe my deciding to tilt my head using motor instructions that won't appear in the sequence, there's no way to *exactly* predict the 66-megabit integer representing the next visual frame. So it must be doing something other than the equivalent of guessing \"14\" in a simpler sequence, but I'm not sure what.\n\nBlaine: Indeed, there'd be some element of thermodynamic and quantum randomness preventing that exact prediction even in principle. So instead of predicting one particular next frame, we put a probability distribution on it.\n\nAshley: A probability distribution over possible 66-megabit frames? Like, a table with 2 to the 66 million entries, summing to 1?\n\nBlaine: Sure. 2^(32 x 1920 x 1080) isn't a large number when you have unlimited computing power. 
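Written out, that's $2^{66,355,200}$, which is roughly $10^{19,975,000}$, a number with almost twenty million decimal digits.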
As Martin Gardner once observed, \"Most finite numbers are very much larger.\" Like I said, Solomonoff induction is an epistemic ideal that requires an unreasonably large amount of computing power.\n\nAshley: I don't deny that big computations can sometimes help us understand little ones. But at the point when we're talking about probability distributions that large, I have some trouble holding onto what the probability distribution is supposed to *mean*.\n\nBlaine: Really? Just imagine a probability distribution over N possibilities, then let N go to 2 to the 66 million. If we were talking about a letter ranging from A to Z, then putting 100 times as much probability mass on (X, Y, Z) as on the rest of the alphabet, would say that, although you didn't know *exactly* what letter would happen, you expected it would be toward the end of the alphabet. You would have used 26 probabilities, summing to 1, to precisely state that prediction. In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a 1920 x 1080 video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded '14', would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dark unicorn would have very low, though not zero, probability.\n\nAshley: Can we really get away with viewing things that way?\n\nBlaine: If we could not make identifications like these *in principle*, there would be no principled way in which we could say that you had ever *expected to see something happen* - no way to say that one visual field your eyes saw, had higher probability than any other sensory experience. We couldn't justify science; we couldn't say that, having performed Galileo's experiment by rolling a cylinder down an inclined plane, Galileo's theory was thereby to some degree supported by having assigned *a high relative probability* to the only actual observations our eyes ever report.\n\nAshley: I feel a little unsure of that jump, but I suppose I can go along with that for now. Then the question of \"What probability does Solomonoff induction assign to Canada invading?\" is to be identified, in principle, with the question \"Given my past life experiences and all the visual information that's entered my eyes, what is the relative probability of seeing visual information that encodes Google News with the headline 'CANADA INVADES USA' at some point during the next 300 million seconds?\"\n\nBlaine: Right!\n\nAshley: And Solomonoff induction has an in-principle way of assigning this a relatively low probability, which that online crackpot could do well to learn from as a matter of principle, even if he couldn't *begin* to carry out the exact calculations that involve assigning probabilities to exponentially vast tables.\n\nBlaine: Precisely!\n\nAshley: Fairness requires that I congratulate you on having come further in formalizing 'do good epistemology' as a sequence prediction problem than I previously thought you might. I mean, you haven't satisfied me yet, but I wasn't expecting you to get even this far.\n\nBlaine: Next, we consider how to represent a *hypothesis* inside this formalism.\n\nAshley: Hmm.
You said something earlier about updating on a probabilistic mixture of computer programs, which leads me to suspect that in this formalism, a hypothesis or *way the world can be* is a computer program that outputs a sequence of integers.\n\nBlaine: There's indeed a version of Solomonoff induction that works like that. But I prefer the version where a hypothesis assigns *probabilities* to sequences. Like, if the hypothesis is that the world is a fair coin, then we shouldn't try to make that hypothesis predict \"heads - tails - tails - tails - heads\" but should let it just assign a 1/32 prior probability to the sequence HTTTH.\n\nAshley: I can see that for coins, but I feel a bit iffier on what this means as a statement *about the real world*.\n\nBlaine: A single hypothesis inside the Solomonoff mixture would be a computer program that took in a series of video frames, and assigned a probability to each possible next video frame. Or for greater simplicity and elegance, imagine a program that took in a sequence of bits, ones and zeroes, and output a rational number for the probability of the next bit being '1'. We can readily go back and forth between a program like that, and a probability distribution over sequences. Like, if you can answer all of the questions, \"What's the probability that the coin comes up heads on the first flip?\", \"What's the probability of the coin coming up heads on the second flip, if it came up heads on the first flip?\", and \"What's the probability that the coin comes up heads on the second flip, if it came up tails on the first flip?\" then we can turn that into a probability distribution over sequences of two coinflips. Analogously, if we have a program that outputs the probability of the next bit, conditioned on a finite number of previous bits taken as input, that program corresponds to a probability distribution over infinite sequences of bits.\n\n$$\\displaystyle \\mathbb{P}_{prog}(bits_{1 \\dots N}) = \\prod_{i=1}^{N} InterpretProb(prog(bits_{1 \\dots i-1}), bits_i)$$\n\n$$\\displaystyle InterpretProb(prog(x), y) = \\left\\{ \\begin{array}{ll} InterpretFrac(prog(x)) & \\text{if } y = 1 \\\\ 1-InterpretFrac(prog(x)) & \\text{if } y = 0 \\\\ 0 & \\text{if $prog(x)$ does not halt} \\end{array} \\right\\} $$\n\nAshley: I think I followed along with that in theory, though it's not a type of math I'm used to (yet). So then in what sense is a program that assigns probabilities to sequences, a way the world could be - a hypothesis about the world?\n\nBlaine: Well, I mean, for one thing, we can see the infant Ashley as a program with 75 megabytes of information about how to wire up its brain in response to sense data, that sees a bunch of sense data, and then experiences some degree of relative surprise. Like in the baby-looking-paradigm experiments where you show a baby an object disappearing behind a screen, and the baby looks longer at those cases, and so we suspect that babies have a concept of object permanence.\n\nAshley: That sounds like a program that's a way Ashley could be, not a program that's a way the world could be.\n\nBlaine: Those indeed are dual perspectives on the meaning of Solomonoff induction.
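If it helps to see the bookkeeping as code, here's a rough Python sketch of how a program that assigns probabilities gets scored against a bit sequence, using the fair coin as the example and ignoring the does-not-halt case (the names are just for illustration):\n\n```python\n# A hypothesis here is any function that takes the bits seen so far\n# and returns the probability that the next bit is a 1.\ndef fair_coin(previous_bits):\n    return 0.5  # the fair-coin program ignores the past entirely\n\ndef sequence_probability(prog, bits):\n    # P_prog(bits): multiply out, bit by bit, the probability the\n    # program assigned to the bit that was actually seen.\n    total = 1.0\n    for i, bit in enumerate(bits):\n        p_one = prog(bits[:i])\n        total *= p_one if bit == 1 else 1.0 - p_one\n    return total\n\n# The fair coin assigns (1/2)^5 = 1/32 to any particular 5-bit sequence,\n# e.g. HTTTH written as [1, 0, 0, 0, 1]:\nprint(sequence_probability(fair_coin, [1, 0, 0, 0, 1]))  # 0.03125\n```\n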
Maybe we can shed some light on this by considering a simpler induction rule, Laplace's Rule of Succession, invented by the Reverend Thomas Bayes in the 1750s, and named after Pierre-Simon Laplace, the inventor of Bayesian reasoning.\n\nAshley: Pardon me?\n\nBlaine: Suppose you have a biased coin with an unknown bias, and every possible bias between 0 and 1 is equally probable.\n\nAshley: Okay. Though in the real world, it's quite likely that an unknown frequency is exactly 0, 1, or 1/2. If you assign equal probability density to every part of the real number field between 0 and 1, the probability of 1 is 0. Indeed, the probability of all rational numbers put together is zero.\n\nBlaine: The original problem considered by Thomas Bayes was about an ideal billiard ball bouncing back and forth on an ideal billiard table many times and eventually slowing to a halt; and then bouncing other billiards to see if they halted to the left or the right of the first billiard. You can see why, in first considering the simplest form of this problem without any complications, we might consider every position of the first billiard to be equally probable.\n\nAshley: Sure. Though I note with pointless pedantry that if the billiard was really an ideal rolling sphere and the walls were perfectly reflective, it'd never halt in the first place.\n\nBlaine: Suppose we're told that, after rolling the original billiard ball and then 5 more billiard balls, one billiard ball was to the right of the original, an R. The other four were to the left of the original, or Ls. Again, that's 1 R and 4 Ls. Given only this data, what is the probability that the next billiard ball rolled will be on the left of the original, another L?\n\nAshley: Five sevenths.\n\nBlaine: Ah, you've heard this problem before?\n\nAshley: No, but it's obvious.\n\nBlaine: Uh... really?\n\nAshley: Combinatorics. Consider just the orderings of the balls, instead of their exact positions. Designate the original ball with the symbol **|**, the next five balls as **LLLLR**, and the next ball to be rolled as **+**. Given that the current ordering of these six balls is **LLLL|R** and that all positions and spacings of the underlying balls are equally likely, after rolling the **+**, there will be seven equally likely orderings **+LLLL|R**, **L+LLL|R**, **LL+LL|R**, and so on up to **LLLL|+R** and **LLLL|R+**. In five of those seven orderings, the **+** is on the left of the **|**. In general, if we see M of **L** and N of **R**, the probability of the next item being an **L** is (M + 1) / (M + N + 2).\n\nBlaine: Gosh... Well, the much more complicated proof originally devised by Thomas Bayes starts by considering every position of the original ball to be equally likely a priori, the additional balls as providing evidence about that position, and then integrating over the posterior probabilities of the original ball's possible positions to arrive at the probability that the next ball lands on the left or right.\n\nAshley: Heh. And is all that extra work useful if you also happen to know a little combinatorics?\n\nBlaine: Well, it tells me exactly how my beliefs about the original ball change with each new piece of evidence - the new posterior probability function on the ball's position. Suppose I instead asked you something along the lines of, \"Given 4 **L** and 1 **R**, where do you think the original ball **|** is most likely to be on the number line?
How likely is it to be within 0.1 distance of there?\"\n\nAshley: That's fair; I don't see a combinatoric answer for the latter part. You'd have to actually integrate over the density function $f^M(1-f)^N \\ \\mathrm{d}f$.\n\nBlaine: Anyway, let's just take at face value that Laplace's Rule of Succession says that, after observing M 1s and N 0s, the probability of getting a 1 next is (M + 1) / (M + N + 2).\n\nAshley: But of course.\n\nBlaine: We can consider Laplace's Rule as a short Python program that takes in a sequence of 1s and 0s, and spits out the probability that the next bit in the sequence will be 1. We can also consider it as a probability distribution over infinite sequences, like this:\n\n- **0** : 1/2\n- **1** : 1/2\n- **00** : 1/2 * 2/3 = 1/3\n- **01** : 1/2 * 1/3 = 1/6\n- **000** : 1/2 * 2/3 * 3/4 = 1/4\n- **001** : 1/2 * 2/3 * 1/4 = 1/12\n- **010** : 1/2 * 1/3 * 1/2 = 1/12\n\nBlaine: ...and so on. Now, we can view this as a rule someone might espouse for *predicting* coinflips, but also view it as corresponding to a particular class of possible worlds containing randomness. I mean, Laplace's Rule isn't the only rule you could use. Suppose I had a barrel containing ten white balls and ten green balls. If you already knew this about the barrel, then after seeing M white balls and N green balls, you'd predict the next ball being white with probability (10 - M) / (20 - M - N). If you use Laplace's Rule, that's like believing the world was like a billiards table with an original ball rolling to a stop at a random point and new balls ending up on the left or right. If you use (10 - M) / (20 - M - N), that's like the hypothesis that there's ten green balls and ten white balls in a barrel. There isn't really a sharp border between rules we can use to predict the world, and rules for how the world behaves -\n\nAshley: Well, that sounds just plain wrong. The map is not the territory, don'cha know? If Solomonoff induction can't tell the difference between maps and territories, maybe it doesn't contain all epistemological goodness after all.\n\nBlaine: Maybe it'd be better to say that there's a dualism between good ways of computing predictions and being in actual worlds where that kind of predicting works well? Like, you could also see Laplace's Rule as implementing the rules for a world with randomness where the original billiard ball ends up in a random place, so that the first thing you see is equally likely to be 1 or 0. Then to ask what probably happens on round 2, we tell the world what happened on round 1 so that it can update what the background random events were.\n\nAshley: Mmmaybe.\n\nBlaine: If you go with the version where Solomonoff induction is over programs that just spit out a determined string of ones and zeroes, we could see those programs as corresponding to particular environments - ways the world *could be* that would produce our sensory input, the sequence. We could jump ahead and consider the more sophisticated decision-problem that appears in [https://arbital.com/p/11v](https://arbital.com/p/11v): an environment is a program that takes your motor outputs as its input, and then returns your sensory inputs as its output.
Then we can see a program that produces Bayesian-updated predictions as corresponding to a hypothetical probabilistic environment that implies those updates, although they'll be conjugate systems rather than mirror images.\n\nAshley: Did you say something earlier about the deterministic and probabilistic versions of Solomonoff induction giving the same answers? Like, is it a distinction without a difference whether we ask about simple programs that reproduce the observed data versus simple programs that assign high probability to the data? I can't see why that should be true, especially since Turing machines don't include a randomness source.\n\nBlaine: I'm *told* the answers are the same but I confess I can't quite see why, unless there's some added assumption I'm missing. So let's talk about programs that assign probabilities for now, because I think that case is clearer. The next key idea is to prefer *simple* programs that assign high probability to our observations so far.\n\nAshley: It seems like an obvious step, especially considering that you were already talking about \"simple programs\" and Occam's Razor a while back. Solomonoff induction is part of the Bayesian program of inference, right?\n\nBlaine: Indeed. Very much so.\n\nAshley: Okay, so let's talk about the program, or hypothesis, for \"This barrel has an unknown frequency of white and green balls\", versus the hypothesis \"This barrel has 10 white and 10 green balls\", versus the hypothesis, \"This barrel always puts out a green ball after a white ball and vice versa.\" Let's say we see a green ball, then a white ball, the sequence **GW**. The first hypothesis assigns this probability 1/2 * 1/3 = 1/6, the second hypothesis assigns this probability 10/20 * 10/19 or roughly 1/4, and the third hypothesis assigns probability 1/2 * 1. Now it seems to me that there's some important sense in which, even though Laplace's Rule assigned a lower probability to the data, it's significantly simpler than the second and third hypotheses and is the wiser answer. Does Solomonoff induction agree?\n\nBlaine: I think you might be taking into account some prior knowledge that isn't in the sequence itself, there. Like, things that alternate either **101010...** or **010101...** are *objectively* simple in the sense that a short computer program simulates them or assigns probabilities to them. It's just unlikely to be true about an actual barrel of white and green balls. If **10** is literally the first sense data that you ever see, when you are a fresh new intelligence with only two bits to rub together, then \"The universe consists of alternating bits\" is no less reasonable than \"The universe produces bits with an unknown random frequency anywhere between 0 and 1.\"\n\nAshley: Conceded. But as I was going to say, we have three hypotheses that assigned 1/6, ~1/4, and 1/2 to the observed data; but to know the posterior probabilities of these hypotheses we need to actually say how relatively likely they were a priori, so we can multiply by the odds ratio. Like, if the prior odds were 3:2:1, the posterior odds would be 3:2:1 * (2/12 : 3/12 : 6/12) = 3:2:1 * 2:3:6 = 6:6:6 = 1:1:1. Now, how would Solomonoff induction assign prior probabilities to those computer programs? Because I remember you saying, way back when, that you thought Solomonoff was the answer to \"How should Bayesians assign priors?\"\n\nBlaine: Well, how would you do it?\n\nAshley: I mean...
yes, the simpler rules should be favored, but it seems to me that there are some deep questions as to the exact relative 'simplicity' of the rules (M + 1) / (M + N + 2), or the rule (10 - M) / (20 - M - N), or the rule \"alternate the bits\"...\n\nBlaine: Suppose I ask you to just make up some simple rule.\n\nAshley: Okay, if I just say the rule I think you're looking for, the rule would be, \"The complexity of a computer program is the number of bits needed to specify it to some arbitrary but reasonable choice of compiler or Universal Turing Machine, and the prior probability is 1/2 to the power of the number of bits.\" There are, e.g., 32 possible 5-bit programs, so each such program has probability 1/32. So if it takes 16 bits to specify Laplace's Rule of Succession, which seems a tad optimistic, then the prior probability would be 1/65536, which seems a tad pessimistic.\n\nBlaine: Now just apply that rule to the infinity of possible computer programs that assign probabilities to the observed data, update their posterior probabilities based on the probability they've assigned to the evidence so far, sum over all of them to get your next prediction, and we're done. And yes, that requires a [hypercomputer](https://arbital.com/p/) that can solve the [halting problem](https://en.wikipedia.org/wiki/Halting_problem), but we're talking ideals here. Let $\\mathcal P$ be the set of all programs and $s_1s_2\\ldots s_n$ also written $s_{\\preceq n}$ be the sense data so far, then\n\n$\\displaystyle \\mathbb{Sol}(s_{\\preceq n}) := \\sum_{\\mathrm{prog} \\in \\mathcal{P}} 2^{-\\mathrm{length}(\\mathrm{prog})} \\cdot {\\prod_{j=1}^n \\mathop{InterpretProb}(\\mathrm{prog}(s_{\\preceq j-1}), s_j)}$\n\n$\\displaystyle \\mathbb{P}(s_{n+1}=1\\mid s_{\\preceq n}) = \\frac{\\mathbb{Sol}(s_1s_2\\ldots s_n 1)}{\\mathbb{Sol}(s_1s_2\\ldots s_n 1) + \\mathbb{Sol}(s_1s_2\\ldots s_n 0)}.$\n\nAshley: Uh.\n\nBlaine: Yes?\n\nAshley: Um...\n\nBlaine: What is it?\n\nAshley: You invoked a countably infinite set, so I'm trying to figure out if my predicted probability for the next bit must necessarily converge to a limit as I consider increasingly large finite subsets in any order.\n\nBlaine *(sighs):* Of course you are.\n\nAshley: I think you might have left out some important caveats. Like, if I take the rule literally, then the program \"**0**\" has probability 1/2, the program \"**1**\" has probability 1/2, the program \"**01**\" has probability 1/4 and now the total probability is 1.25 which is *too much.* So I can't actually normalize it because the series sums to infinity. Now, this just means we need to, say, decide that the probability of a program having length 1 is 1/2, the probability of it having length 2 is 1/4, and so on out to infinity, but it's an added postulate.\n\nBlaine: The conventional method is to require a [prefix-free code](https://en.wikipedia.org/wiki/Prefix_code). If \"**0111**\" is a valid program then \"**01110**\" cannot be a valid program. With that constraint, assigning \"1/2 to the power of the length of the code\", to all valid codes, will sum to less than 1; and we can normalize their relative probabilities to get the actual prior.\n\nAshley: Okay. And you're sure that it doesn't matter in what order we consider more and more programs as we approach the limit, because... no, I see it. Every program has positive probability mass, with the total set summing to 1, and Bayesian updating doesn't change that.
So as I consider more and more programs, in any order, there's only so many large contributions that can be made from the mix - only so often that the final probability can change. Like, let's say there are at most 99 programs with probability 1% that assign probability 0 to the next bit being a 1; that's only 99 times the final answer can go down by as much as 0.01, as the limit is approached.\n\nBlaine: This idea generalizes, and is important. List all possible computer programs, in any order you like. Use any definition of *simplicity* that you like, so long as for any given amount of simplicity, there are only a finite number of computer programs that simple. As you go on carving off chunks of prior probability mass and assigning them to programs, it *must* be the case that as programs get more and more complicated, their prior probability approaches zero! - though it's still positive for every finite program, because of [Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule). You can't have more than 99 programs assigned 1% prior probability and still obey Cromwell's Rule, which means there must be some *most complex* program that is assigned 1% probability, which means every more complicated program must have less than 1% probability out to the end of the infinite list.\n\nAshley: Huh. I don't think I've ever heard that justification for Occam's Razor before. I think I like it. I mean, I've heard a lot of appeals to the empirical simplicity of the world, and so on, but this is the first time I've seen a *logical* proof that, in the limit, more complicated hypotheses *must* be less likely than simple ones.\n\nBlaine: Behold the awesomeness that is Solomonoff Induction!\n\nAshley: Uh, but you didn't actually use the notion of *computational* simplicity to get that conclusion, you just required that the supply of probability mass is finite and the supply of potential complications is infinite. Any way of counting discrete complications would imply that conclusion, even if it went by surface wheels and gears.\n\nBlaine: Well, maybe. But it so happens that Yudkowsky did invent or reinvent that argument after pondering Solomonoff induction, and if it predates him (or Solomonoff) then Yudkowsky doesn't know the source. Concrete inspiration for simplified arguments is also a credit to a theory, especially if the simplified argument didn't exist before that.\n\nAshley: Fair enough. My next question is about the choice of Universal Turing Machine - the choice of compiler for our program codes. There's an infinite number of possibilities there, and in principle, the right choice of compiler can make our probability for the next thing we'll see be anything we like. At least I'd expect this to be the case, based on how the \"[problem of induction](https://en.wikipedia.org/wiki/Problem_of_induction)\" usually goes. So with the right choice of Universal Turing Machine, our online crackpot can still make it be the case that Solomonoff induction predicts Canada invading the USA.\n\nBlaine: One way of looking at the problem of good epistemology, I'd say, is that the job of a good epistemology is not to make it *impossible* to err. You can still blow off your foot if you really insist on pointing the shotgun at your foot and pulling the trigger. The job of good epistemology is to make it *more obvious* when you're about to blow your own foot off with a shotgun. On this dimension, Solomonoff Induction excels.
If you claim that we ought to pick an enormously complicated compiler to encode our hypotheses, in order to make the 'simplest hypothesis that fits the evidence' be one that predicts Canada invading the USA, then it should be obvious to everyone except you that you are in the process of screwing up. \n\nAshley: Ah, but of course they'll say that their code is just the simple and natural choice of Universal Turing Machine, because they'll exhibit a meta-UTM which outputs that UTM given only a short code. And if you say the meta-UTM is complicated--\n\nBlaine: Flon's Law says, \"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code.\" You can't make it impossible for people to screw up, but you can make it *more obvious.* And Solomonoff induction would make it even more obvious than might at first be obvious, because--\n\nAshley: Your Honor, I move to have the previous sentence taken out and shot.\n\nBlaine: Let's say that the whole of your sensory information is the string **10101010...** Consider the stupid hypothesis, \"This program has a 99% probability of producing a **1** on every turn\", which you jumped to after seeing the first bit. What would you need to claim your priors were like--what Universal Turing Machine would you need to endorse--in order to maintain blind faith in that hypothesis in the face of ever-mounting evidence?\n\nAshley: You'd need a Universal Turing Machine **blind-utm** that assigned a very high probability to the **blind** program \"def ProbNextElementIsOne(previous_sequence): return 0.99\". Like, if **blind-utm** sees the code **0**, it executes the **blind** program \"return 0.99\". And to defend yourself against charges that your UTM **blind-utm** was not itself simple, you'd need a meta-UTM, **blind-meta**, which, when it sees the code **10**, executes **blind-utm**. And to really wrap it up, you'd need to take a fixed point through all towers of meta and use diagonalization to create the UTM **blind-diag** that, when it sees the program code **0**, executes \"return 0.99\", and when it sees the program code **10**, executes **blind-diag**. I guess I can see some sense in which, even if that doesn't resolve Hume's problem of induction, anyone *actually advocating that* would be committing blatant shenanigans on a commonsense level, arguably more blatant than it would have been if we hadn't made them present the UTM.\n\nBlaine: Actually, the shenanigans have to be much worse than that in order to fool Solomonoff induction. Like, Solomonoff induction using your **blind-diag** isn't fooled for a minute, even taking **blind-diag** entirely on its own terms.\n\nAshley: Really?\n\nBlaine: Assuming 60 sequence items per second? Yes, absolutely, Solomonoff induction shrugs off the delusion in the first minute, unless there's further and even more blatant shenanigans. We did require that your **blind-diag** be a *Universal* Turing Machine, meaning that it can reproduce every computable probability distribution over sequences, given some particular code to compile. Let's say there's a 200-bit code **laplace** for Laplace's Rule of Succession, \"lambda sequence: (sequence.count('1') + 1) / (len(sequence) + 2)\", so that its prior probability relative to the 1-bit code for **blind** is 2^-200. Let's say that the sense data is around 50/50 1s and 0s. Every time we see a 1, **blind** gains a factor of 2 over **laplace** (99% vs.
50% probability), and every time we see a 0, **blind** loses a factor of 50 over **laplace** (1% vs. 50% probability). On average, every 2 bits of the sequence, **blind** is losing a factor of 25 or, say, a bit more than 4 bits, i.e., on average **blind** is losing two bits of probability per element of the sequence observed. So it's only going to take 100 bits, or a little less than two seconds, for **laplace** to win out over **blind**.\n\nAshley: I see. I was focusing on a UTM that assigned lots of prior probability to **blind**, but what I really needed was a compiler that, *while still being universal* and encoding every possibility somewhere, still assigned a really tiny probability to **laplace**, to **faircoin**, which encodes \"return 0.5\", and to every other hypothesis that does better, round by round, than **blind**. So what I really need to carry off the delusion is **obstinate-diag** that is universal, assigns high probability to **blind**, requires billions of bits to specify **laplace**, and also requires billions of bits to specify any UTM that can execute **laplace** as a shorter code than billions of bits. Because otherwise we will say, \"Ah, but given the evidence, this other UTM would have done better.\" I agree that those are even more blatant shenanigans than I thought.\n\nBlaine: Yes. And even *then*, even if your UTM takes two billion bits to specify **faircoin**, Solomonoff induction will lose its faith in **blind** after seeing a billion bits. Which will happen before the first year is out, if we're getting 60 bits per second. And if you turn around and say, \"Oh, well, I didn't mean *that* was my UTM, I really meant *this* was my UTM, this thing over here where it takes a *trillion* bits to encode **faircoin**\", then that's probability-theory-violating shenanigans where you're changing your priors as you go.\n\nAshley: That's actually a very interesting point--that what's needed for a Bayesian to maintain a delusion in the face of mounting evidence is not so much a blindly high prior for the delusory hypothesis, as a blind skepticism of all its alternatives. But what if their UTM requires a googol bits to specify **faircoin**? What if **blind** and **blind-diag**, or programs pretty much isomorphic to them, are the only programs that can be specified in less than a googol bits?\n\nBlaine: Then your desire to shoot your own foot off has been made very, very visible to anyone who understands Solomonoff induction. We're not going to get absolutely objective prior probabilities as a matter of logical deduction, not without principles that are unknown to me and beyond the scope of Solomonoff induction. But we can make the stupidity really *blatant* and force you to construct a downright embarrassing Universal Turing Machine.\n\nAshley: I guess I can see that. I mean, I guess that if you're presenting a ludicrously complicated Universal Turing Machine that just refuses to encode the program that would predict Canada not invading, that's more *visibly* silly than a verbal appeal that says, \"But you must just have faith that Canada will invade.\" I guess part of me is still hoping for a more objective sense of \"complicated\".\n\nBlaine: We could say that reasonable UTMs should contain a small number of wheels and gears in a material instantiation under our universe's laws of physics, which might in some ultimate sense provide a prior over priors.
Like, the human brain evolved from DNA-based specifications, and the things you can construct out of relatively small numbers of physical objects are 'simple' under the 'prior' implicitly searched by natural selection.\n\nAshley: Ah, but what if I think it's likely that our physical universe or the search space of DNA won't give us a good idea of what's complicated?\n\nBlaine: For your alternative notion of what's complicated to go on being believed even as other hypotheses are racking up better experimental predictions, you need to assign a *ludicrously low probability* that our universe's space of physical systems buildable using a small number of objects, could *possibly* provide better predictions of that universe than your complicated alternative notion of prior probability. We don't need to appeal that it's *a priori* more likely than not that \"a universe can be predicted well by low-object-number machines built using that universe's physics.\" Instead, we appeal that it would violate [Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule), and be exceedingly special pleading, to assign the possibility of a physically learnable universe a probability of *less* than $2^{-1,000,000}$. It then takes only a megabit of exposure to notice that the universe seems to be regular.\n\nAshley: In other words, so long as you don't start with an absolute and blind prejudice against the universe being predictable by simple machines encoded in our universe's physics--so long as, on this planet of seven billion people, you don't assign probabilities less than $2^{-1,000,000}$ to the other person being right about what is a good Universal Turing Machine--then the pure logic of Bayesian updating will rapidly force you to the conclusion that induction works. Hm. I don't know that good *pragmatic* answers to the problem of induction were ever in short supply. Still, on the margins, it's a more forceful pragmatic answer than the last one I remember hearing.\n\nBlaine: Yay! *Now* isn't Solomonoff induction wonderful?\n\nAshley: Maybe? You didn't really use the principle of *computational* simplicity to derive that lesson. You just used that *some inductive principle* ought to have a prior probability of more than $2^{-1,000,000}$.\n\nBlaine: ...\n\nAshley: Can you give me an example of a problem where the *computational* definition of simplicity matters and can't be factored back out of an argument?\n\nBlaine: As it happens, yes I can. I can give you *three* examples of how it matters.\n\nAshley: Vun... two... three! Three examples! Ah-ah-ah!\n\nBlaine: Must you do that every--oh, never mind. Example one is that galaxies are not so improbable that no one could ever believe in them, example two is that the limits of possibility include Terence Tao, and example three is that refraction is a simpler explanation of rainbows than divine intervention.\n\nAshley: These statements are all so obvious that no further explanation of any of them is required.\n\nBlaine: On the contrary! And I'll start with example one. Back when the Andromeda galaxy was a hazy mist seen through a telescope, and someone first suggested that maybe that hazy mist was an incredibly large number of distant stars--that many \"nebulae\" were actually *distant galaxies*, and our own Milky Way was only one of them--there was a time when Occam's Razor was invoked against that hypothesis.\n\nAshley: What?
Why?\n\nBlaine: They invoked Occam's Razor against the galactic hypothesis, because if that were the case, then there would be *a much huger number of stars* in the universe, and the stars would be entities, and Occam's Razor said \"Entities are not to be multiplied beyond necessity.\"\n\nAshley: That's not how Occam's Razor works. The \"entities\" of a theory are its types, not its objects. If you say that the hazy mists are distant galaxies of stars, then you've reduced the number of laws because you're just postulating a previously seen type, namely stars organized into galaxies, instead of a new type of hazy astronomical mist.\n\nBlaine: Okay, but imagine that it's the nineteenth century and somebody replies to you, \"Well, I disagree! William of Ockham said not to multiply entities, this galactic hypothesis obviously creates a huge number of entities, and that's the way I see it!\"\n\nAshley: I think I'd give them your spiel about there being no human epistemology that can stop you from shooting off your own foot.\n\nBlaine: I *don't* think you'd be justified in giving them that lecture. I'll parenthesize at this point that you ought to be very careful when you say \"I can't stop you from shooting off your own foot\", lest it become a Fully General Scornful Rejoinder. Like, if you say that to someone, you'd better be able to explain exactly why Occam's Razor counts types as entities but not objects. In fact, you'd better explain that to someone *before* you go advising them not to shoot off their own foot. And once you've told them what you think is foolish and why, you might as well stop there. Except in really weird cases of people presenting us with enormously complicated and jury-rigged Universal Turing Machines, and then we say the shotgun thing.\n\nAshley: That's fair. So, I'm not sure what I'd have answered before starting this conversation, which is much to your credit, friend Blaine. But now that I've had this conversation, it's obvious that it's new types and not new objects that use up the probability mass we need to distribute over all hypotheses. Like, I need to distribute my probability mass over \"Hypothesis 1: there are stars\" and \"Hypothesis 2: there are stars plus huge distant hazy mists\". I don't need to distribute my probability mass over all the actual stars in the galaxy!\n\nBlaine: In terms of Solomonoff induction, we penalize a program's *lines of code* rather than its *runtime* or *RAM used*, because we need to distribute our probability mass over possible alternatives each time we add a line of code. There's no corresponding *choice between mutually exclusive alternatives* when a program uses more runtime or RAM.\n\n*(Eliezer, whispering: Unless we need a [leverage prior](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/) to consider the hypothesis of being a particular agent inside all that RAM or runtime.)*\n\nAshley: Or to put it another way, any fully detailed model of the universe would require some particular arrangement of stars, and the more stars there are, the more possible arrangements there are. But when we look through the telescope and see a hazy mist, we get to sum over all arrangements of stars that would produce that hazy mist. If some galactic hypothesis required a hundred billion stars to *all* be in *particular exact places* without further explanation or cause, then that would indeed be a grave improbability.\n\nBlaine: Precisely. 
And if you needed all the hundred billion stars to be in particular exact places, that's just the kind of hypothesis that would take a huge computer program to specify.\n\nAshley: But does it really require learning Solomonoff induction to understand that point? Maybe the bad argument against galaxies was just a motivated error somebody made in the nineteenth century, because they didn't want to live in a big universe for emotional reasons.\n\nBlaine: The same debate is playing out today over no-collapse versions of quantum mechanics, also somewhat unfortunately known as \"many-worlds interpretations\". Now, regardless of what anyone thinks of all the other parts of that debate, there's a *particular* sub-argument where somebody says, \"It's simpler to have a collapse interpretation because all those extra quantum \"worlds\" are extra entities that are unnecessary under Occam's Razor since we can't see them.\" And Solomonoff induction tells us that this invocation of Occam's Razor is flatly misguided because Occam's Razor does not work like that. Basically, they're trying to cut down the RAM and runtime of the universe, at the expense of adding an extra line of code, namely the code for the collapse postulate that prunes off parts of the wavefunction that are in undetectably weak causal contact with us.\n\nAshley: Hmm. Now that you put it that way, it's not so obvious to me that it makes sense to have *no* prejudice against *sufficiently* enormous universes. I mean, the universe we see around us is exponentially vast but not superexponentially vast - the visible atoms are $10^{80}$ in number or so, not $10^{10^{80}}$ or \"bigger than Graham's Number\". Maybe there's some fundamental limit on how much gets computed.\n\nBlaine: You, um, know that on the standard cosmological model, the universe doesn't just cut out and stop existing at the point where our telescopes stop seeing it? There isn't a giant void surrounding a little bubble of matter centered perfectly on Earth? It calls for a literally infinite amount of matter? I mean, I guess if you don't like living in a universe with more than $10^{80}$ entities, a universe where *too much gets computed,* you could try to specify *extra laws of physics* that create an abrupt spatial boundary with no further matter beyond them, somewhere out past where our telescopes can see -\n\nAshley: All right, point taken.\n\n*(Eliezer, whispering: Though I personally suspect that the spatial multiverse and the quantum multiverse are the same multiverse, and that what lies beyond the reach of our telescopes is not entangled with us--meaning that the universe is as finitely large as the superposition of all possible quantum branches, rather than being literally infinite in space.)*\n\nBlaine: I mean, there is in fact an alternative formalism to Solomonoff induction, namely [Levin search](http://www.scholarpedia.org/article/Universal_search#Levin_complexity), which says that program complexities are further penalized by the logarithm of their runtime. In other words, it would make 'explanations' or 'universes' that require a long time to run, be inherently less probable. Some people like Levin search more than Solomonoff induction because it's more computable.
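Roughly speaking, where Solomonoff weights a program by $2^{-\\mathrm{length}(\\mathrm{prog})}$, a Levin-style penalty weights it by something like $2^{-(\\mathrm{length}(\\mathrm{prog}) + \\log_2 \\mathrm{time}(\\mathrm{prog}))} = 2^{-\\mathrm{length}(\\mathrm{prog})} / \\mathrm{time}(\\mathrm{prog})$, so every doubling of runtime costs the hypothesis about one more bit.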
I dislike Levin search because (a) it has no fundamental epistemic justification and (b) it assigns probability zero to quantum mechanics.\n\nAshley: Can you unpack that last part?\n\nBlaine: If, as is currently suspected, there's no way to simulate quantum computers using classical computers without an exponential slowdown, then even in principle, this universe requires exponentially vast amounts of classical computing power to simulate. Let's say that with sufficiently advanced technology, you can build a quantum computer with a million qubits. On Levin's definition of complexity, for the universe to be like that is as improbable a priori as *any particular* set of laws of physics that must specify on the order of one million equations. Can you imagine how improbable it would be to see a list of one hundred thousand differential equations, without any justification or evidence attached, and be told that they were the laws of physics? That's the kind of penalty that Levin search or Schmidhuber's Speed Prior would attach to any laws of physics that could run a quantum computation of a million qubits, or, heck, any physics that claimed that a protein was being folded in a way that ultimately went through considering millions of quarks interacting. If you're *not* absolutely certain a priori that the universe *isn't* like that, you don't believe in Schmidhuber's Speed Prior. Even with a collapse postulate, the amount of computation that goes on before a collapse would be prohibited by the Speed Prior.\n\nAshley: Okay, yeah. If you're phrasing it that way--that the Speed Prior assigns probability nearly zero to quantum mechanics, so we shouldn't believe in the Speed Prior--then I can't easily see a way to extract out the same point without making reference to ideas like penalizing algorithmic complexity but not penalizing runtime. I mean, maybe I could extract the lesson back out but it's easier to say, or more obvious, by pointing to the idea that Occam's Razor should penalize algorithmic complexity but not runtime.\n\nBlaine: And that isn't just *implied by* Solomonoff induction, it's pretty much the whole idea of Solomonoff induction, right?\n\nAshley: Maaaybe.\n\nBlaine: For example two, that Solomonoff induction outperforms even Terence Tao, we want to have a theorem that says Solomonoff induction catches up to every computable way of reasoning in the limit. Since we iterated through all possible computer programs, we know that somewhere in there is a simulated copy of Terence Tao in a simulated room, and if this requires a petabyte to specify, then we shouldn't have to make more than a quadrillion bits of error relative to Terence Tao before zeroing in on the Terence Tao hypothesis. I mean, in practice, I'd expect far less than a quadrillion bits of error before the system was behaving like it was vastly smarter than Terence Tao. It'd take a lot less than a quadrillion bits to give you some specification of a universe with simple physics that gave rise to a civilization of vastly greater than intergalactic extent. Like, [Graham's Number](http://googology.wikia.com/wiki/Graham's_number) is a very simple number so it's easy to specify a universe that runs for that long before it returns an answer. 
It's not obvious how you'd extract Solomonoff predictions from that civilization and incentivize them to make good ones, but I'd be surprised if there were no Turing machine of fewer than one thousand states which did that somehow.\n\nAshley: ...\n\nBlaine: And for all I know there might be even better ways than that of getting exceptionally good predictions, somewhere in the list of the first decillion computer programs. That is, somewhere in the first 100 bits.\n\nAshley: So your basic argument is, \"Never mind Terence Tao, Solomonoff induction dominates *God*.\"\n\nBlaine: Solomonoff induction isn't the epistemic prediction capability of a superintelligence. It's the epistemic prediction capability of something that eats superintelligences like potato chips.\n\nAshley: Is there any point to contemplating an epistemology so powerful that it will never begin to fit inside the universe?\n\nBlaine: Maybe? I mean a lot of times, you just find people *failing to respect* the notion of ordinary superintelligence, doing the equivalent of supposing that a superintelligence behaves like a bad Hollywood genius and misses obvious-seeming moves. And a lot of times you find them insisting that \"there's a limit to how much information you can get from the data\" or something along those lines. \"[That Alien Message](http://lesswrong.com/lw/qk/that_alien_message/)\" is intended to convey the counterpoint, that smarter entities can extract more info than is immediately apparent on the surface of things. Similarly, thinking about Solomonoff induction might also cause someone to realize that if, say, you simulated zillions of possible simple universes, you could look at which agents were seeing exact data like the data you got, and figure out where you were inside that range of possibilities, so long as there was literally *any* correlation to use. And if you say that an agent *can't* extract that data, you're making a claim about which shortcuts to Solomonoff induction are and aren't computable. In fact, you're probably pointing at some *particular* shortcut and claiming nobody can ever figure that out using a reasonable amount of computing power *even though the info is there in principle.* Contemplating Solomonoff induction might help people realize that, yes, the data *is* there in principle. Like, until I ask you to imagine a civilization running for Graham's Number of years inside a Graham-sized memory space, you might not imagine them trying all the methods of analysis that *you personally* can imagine being possible.\n\nAshley: If somebody is making that mistake in the first place, I'm not sure you can beat it out of them by telling them the definition of Solomonoff induction.\n\nBlaine: Maybe not. But to brute-force somebody into imagining that [sufficiently advanced agents](https://arbital.com/p/2c) have [Level 1 protagonist intelligence](http://yudkowsky.tumblr.com/writing/level1intelligent), that they are [epistemically efficient](https://arbital.com/p/6s) rather than missing factual questions that are visible even to us, you might need to ask them to imagine an agent that can see *literally anything seeable in the computational limit* just so that their mental simulation of the ideal answer isn't running up against stupidity assertions. 
Like, I think there's a lot of people who could benefit from looking over the evidence they already personally have, and asking what a Solomonoff inductor could deduce from it, so that they wouldn't be running up against stupidity assertions *about themselves.* It's the same trick as asking yourself what God, Richard Feynman, or a "perfect rationalist" would believe in your shoes. You just have to pick a real or imaginary person that you respect enough for your model of that person to lack the same stupidity assertions that you believe about yourself.\n\nAshley: Well, let's once again try to factor out the part about Solomonoff induction in particular. If we're trying to imagine something epistemically smarter than ourselves, is there anything we get from imagining a complexity-weighted prior over programs in particular? That we don't get from, say, trying to imagine the reasoning of one particular Graham-Number-sized civilization?\n\nBlaine: We get the surety that even anything we imagine *Terence Tao himself* as being able to figure out, is something that is allowed to be known after some bounded number of errors versus Terence Tao, because Terence Tao is inside the list of all computer programs and gets promoted further each time the dominant paradigm makes a prediction error relative to him. We can't get that dominance property without invoking "all possible ways of computing" or something like it--we can't incorporate the power of all reasonable processes, unless we have a set such that all the reasonable processes are in it. The enumeration of all possible computer programs is one such set.\n\nAshley: Hm.\n\nBlaine: Example three, diffraction is a simpler explanation of rainbows than divine intervention. I don't think I need to belabor this point very much, even though in one way it might be the most central one. It sounds like "Jehovah placed rainbows in the sky as a sign that the Great Flood would never come again" is a 'simple' explanation, you can explain it to a child in nothing flat. Just the diagram of diffraction through a raindrop, to say nothing of the Principle of Least Action underlying diffraction, is something that humans don't usually learn until undergraduate physics, and it *sounds* more alien and less intuitive than Jehovah. In what sense is this intuitive sense of simplicity wrong? What gold standard are we comparing it to, that could be a better sense of simplicity than just 'how hard is it for me to understand'? The answer is Solomonoff induction and the rule which says that simplicity is measured by the size of the computer program, not by how hard things are for human beings to understand. Diffraction is a small computer program; any programmer who understands diffraction can simulate it without too much trouble. Jehovah would be a much huger program - a complete mind that implements anger, vengeance, belief, memory, consequentialism, etcetera. Solomonoff induction is what tells us to retrain our intuitions so that differential equations feel like less burdensome explanations than heroic mythology.\n\nAshley: Now hold on just a second, if that's actually how Solomonoff induction works then it's not working very well. I mean, Abraham Lincoln was a great big complicated mechanism from an algorithmic standpoint--he had a hundred trillion synapses in his brain--but that doesn't mean I should look at the historical role supposedly filled by Abraham Lincoln, and look for simple mechanical rules that would account for the things Lincoln is said to have done. 
If you've already seen humans and you've already learned to model human minds, it shouldn't cost a vast amount to say there's one *more* human, like Lincoln, or one more entity that is *cognitively humanoid*, like the Old Testament jealous-god version of Jehovah. It may be *wrong* but it shouldn't be vastly improbable a priori. If you've already been forced to acknowledge the existence of some humanlike minds, why not others? Shouldn't you get to reuse the complexity that you postulated to explain humans, in postulating Jehovah? In fact, shouldn't that be what Solomonoff induction *does?* If you have a computer program that can model and predict humans, it should only be a slight modification of that program--only slightly longer in length and added code--to predict the modified-human entity that is Jehovah.\n\nBlaine: Hm. That's fair. I may have to retreat from that example somewhat. In fact, that's yet another point to the credit of Solomonoff induction! The ability of programs to reuse code, incorporates our intuitive sense that if you've already postulated one kind of thing, it shouldn't cost as much to postulate a similar kind of thing elsewhere!\n\nAshley: Uh huh.\n\nBlaine: Well, but even if I was wrong that Solomonoff induction should make Jehovah seem very improbable, it's still Solomonoff induction that says that the alternative hypothesis of 'diffraction' shouldn't itself be seen as burdensome--even though diffraction might require a longer time to explain to a human, it's still at heart a simple program.\n\nAshley: Hmm. I'm trying to think if there's some notion of 'simplicity' that I can abstract away from 'simple program' as the nice property that diffraction has as an explanation for rainbows, but I guess anything I try to say is going to come down to some way of counting the wheels and gears inside the explanation, and justify the complexity penalty on probability by the increased space of possible configurations each time we add a new gear. And I can't make it be about surface details because that will make whole humans seem way too improbable. If I have to use simply specified systems and I can't use surface details or runtime, that's probably going to end up basically equivalent to Solomonoff induction. So in that case we might as well use Solomonoff induction, which is probably simpler than whatever I'll think up and will give us the same advice. Okay, you've mostly convinced me.\n\nBlaine: *Mostly?* What's left?\n\nAshley: Well, several things. Most of all, I think of how the 'language of thought' or 'language of epistemology' seems to be different in some sense from the 'language of computer programs'. Like, when I think about the laws of Newtonian gravity, or when I think about my Mom, it's not just one more line of code tacked onto a big black-box computer program. It's more like I'm crafting an explanation with modular parts - if it contains a part that looks like Newtonian mechanics, I step back and reason that it might contain other parts with differential equations. If it has a line of code for a Mom, it might have a line of code for a Dad. I'm worried that if I understood how humans think like that, maybe I'd look at Solomonoff induction and see how it doesn't incorporate some further key insight that's needed to do good epistemology.\n\nBlaine: Solomonoff induction literally incorporates a copy of you thinking about whatever you're thinking right now.\n\nAshley: Okay, great, but that's *inside* the system. 
If Solomonoff learns to promote computer programs containing good epistemology, but is not itself good epistemology, then it's not the best possible answer to "How do you compute epistemology?" Like, natural selection produced humans but population genetics is not an answer to "How does intelligence work?" because the intelligence is in the inner content rather than the outer system. In that sense, it seems like a reasonable worry that Solomonoff induction might incorporate only *some* principles of good epistemology rather than *all* the principles, even if the *internal content* rather than the *outer system* might bootstrap the rest of the way.\n\nBlaine: Hm. If you put it *that* way...\n\n(long pause)\n\nBlaine: ...then, I guess I have to agree. I mean, Solomonoff induction doesn't explicitly say anything about, say, the distinction between analytic propositions and empirical propositions, and knowing that is part of good epistemology on my view. So if you want to say that Solomonoff induction is something that bootstraps to good epistemology rather than being all of good epistemology by itself, I guess I have no choice but to agree. I do think the outer system already contains a *lot* of good epistemology and inspires a lot of good advice all on its own. Especially if you give it credit for formally reproducing principles that are "common sense", because correctly formalizing common sense is no small feat.\n\nAshley: Got a list of the good advice you think is derivable?\n\nBlaine: Um. Not really, but off the top of my head:\n\n1. The best explanation is the one with the best mixture of simplicity and matching the evidence.\n2. "Simplicity" and "matching the evidence" can both be measured in bits, so they're commensurable.\n3. The simplicity of a hypothesis is the number of bits required to formally specify it, for example as a computer program.\n4. When a hypothesis assigns twice as much probability to the exact observations seen so far as some other hypothesis, that's one bit's worth of relatively better matching the evidence.\n5. You should actually be making your predictions using all the explanations, not just the single best one, but explanations that poorly match the evidence will drop down to tiny contributions very quickly.\n6. Good explanations let you compress lots of data into compact reasons which strongly predict seeing just that data and no other data.\n7. Logic can't dictate prior probabilities absolutely, but if you assign probability less than $2^{-1,000,000}$ to the proposition that mechanisms constructed using a small number of objects from your universe might be able to well predict that universe, you're being unreasonable.\n8. So long as you don't assign infinitesimal prior probability to hypotheses that let you do induction, they will very rapidly overtake hypotheses that don't.\n9. It is a logical truth, not a contingent one, that more complex hypotheses must in the limit be less probable than simple ones.\n10. Epistemic rationality is a precise art with no user-controlled degrees of freedom in how much probability you ideally ought to assign to a belief. If you think you can tweak the probability depending on what you want the answer to be, you're doing something wrong.\n11. Things that you've seen in one place might reappear somewhere else.\n12. Once you've learned a new language for your explanations, like differential equations, you can use it to describe other things, because your best hypotheses will now already encode that language.\n13. 
We can learn meta-reasoning procedures as well as object-level facts by looking at which meta-reasoning rules are simple and have done well on the evidence so far.\n14. So far, we seem to have no a priori reason to believe that universes which are more expensive to compute are less probable.\n15. People were wrong about galaxies being *a priori* improbable because that's not how Occam's Razor works. Today, other people are equally wrong about other parts of a continuous wavefunction counting as extra entities.\n16. If something seems \"weird\" to you but would be a consequence of simple rules that fit the evidence so far, well, there's no term in these explicit laws of epistemology which add an extra penalty term for weirdness.\n17. Your epistemology shouldn't have extra rules in it that aren't needed to do Solomonoff induction or something like it, including rules like \"science is not allowed to examine this particular part of reality\"--\n\nAshley: This list isn't finite, is it.\n\nBlaine: Well, there's a *lot* of outstanding debate about epistemology where you can view that debate through the lens of Solomonoff induction and see what Solomonoff suggests.\n\nAshley: But if you don't mind my stopping to look at your last item, #17 above--again, it's attempts to add *completeness* clauses to Solomonoff induction that make me the most nervous. I guess you could say that a good rule of epistemology ought to be one that's promoted by Solomonoff induction--that it should arise, in some sense, from the simple ways of reasoning that are good at predicting observations. But that doesn't mean a good rule of epistemology ought to explicitly be in Solomonoff induction or it's out.\n\nBlaine: Can you think of good epistemology that doesn't seem to be contained in Solomonoff induction? Besides the example I already gave of distinguishing logical propositions from empirical ones.\n\nAshley: I've been trying to. First, it seems to me that when I reason about laws of physics and how those laws of physics might give rise to higher levels of organization like molecules, cells, human beings, the Earth, and so on, I'm not constructing in my mind a great big chunk of code that reproduces my observations. I feel like this difference might be important and it might have something to do with 'good epistemology'.\n\nBlaine: I guess it could be? I think if you're saying that there might be this unknown other thing and therefore Solomonoff induction is terrible, then that would be the [nirvana fallacy](https://en.wikipedia.org/wiki/Nirvana_fallacy). Solomonoff induction is the best formalized epistemology we have *right now*--\n\nAshley: I'm not saying that Solomonoff induction is terrible. I'm trying to look in the direction of things that might point to some future formalism that's better than Solomonoff induction. Here's another thing: I feel like I didn't have to learn how to model the human beings around me from scratch based on environmental observations. I got a jump-start on modeling other humans by observing *myself*, and by recruiting my brain areas to run in a sandbox mode that models other people's brain areas - empathy, in a word. I guess I feel like Solomonoff induction doesn't incorporate that idea. 
Like, maybe *inside* the mixture there are programs which do that, but there's no explicit support in the outer formalism.\n\nBlaine: This doesn't feel to me like much of a disadvantage of Solomonoff induction--\n\nAshley: I'm not *saying* it would be a disadvantage if we actually had a hypercomputer to run Solomonoff induction. I'm saying it might point in the direction of \"good epistemology\" that isn't explicitly included in Solomonoff induction. I mean, now that I think about it, a generalization of what I just said is that Solomonoff induction assumes I'm separated from the environment by a hard, Cartesian wall that occasionally hands me observations. Shouldn't a more realistic view of the universe be about a simple program that *contains me somewhere inside it,* rather than a simple program that hands observations to some other program?\n\nBlaine: Hm. Maybe. How would you formalize *that?* It seems to open up a big can of worms--\n\nAshley: But that's what my actual epistemology actually says. My world-model is not about a big computer program that provides inputs to my soul, it's about an enormous mathematically simple physical universe that instantiates Ashley as one piece of it. And I think it's good and important to have epistemology that works that way. It wasn't *obvious* that we needed to think about a simple universe that embeds us. Descartes *did* think in terms of an impervious soul that had the universe projecting sensory information onto its screen, and we had to get *away* from that kind of epistemology.\n\nBlaine: You understand that Solomonoff induction makes only a bounded number of errors relative to any computer program which does reason the way you prefer, right? If thinking of yourself as a contiguous piece of the universe lets you make better experimental predictions, programs which reason that way will rapidly be promoted.\n\nAshley: It's still unnerving to see a formalism that seems, in its own structure, to harken back to the Cartesian days of a separate soul watching a separate universe projecting sensory information on a screen. Who knows, maybe that would somehow come back to bite you?\n\nBlaine: Well, it wouldn't bite you in the form of repeatedly making wrong experimental predictions.\n\nAshley: But it might bite you in the form of having no way to represent the observation of, \"I drank this 'wine' liquid and then my emotions changed; could my emotions themselves be instantiated in stuff that can interact with some component of this liquid? Can alcohol touch neurons and influence them, meaning that I'm not a separate soul?\" If we interrogated the Solomonoff inductor, would it be able to understand that reasoning? Which brings up that dangling question from before about modeling the effect that my actions and choices have on the environment, and whether, say, an agent that used Solomonoff induction would be able to correctly predict \"If I drop an anvil on my head, my sequence of sensory observations will *end*.\"\n\nEliezer: And that's my cue to step in! For more about the issues Ashley raised with agents being a contiguous part of the universe, see [naturalistic reflection](https://arbital.com/p/) once that section exists. Meanwhile, we'll consider next the question of actions and choices, as we encounter the agent that uses Solomonoff induction for beliefs and expected reward maximization for selecting actions--the perfect rolling sphere of advanced agent theory, [https://arbital.com/p/11v](https://arbital.com/p/11v). 
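*(Schematically, and glossing over details that belong to that section: such an agent picks, at each step, the action with the greatest expected future reward under the Solomonoff mixture $M$ over environments,)*\n\n$$a_t = \arg\max_{a} \mathbb{E}_{M}\left[ r_t + r_{t+1} + \dots + r_m \mid o_1 a_1 \dots o_t, a \right],$$\n\n*(where the full definition actually maximizes over entire future action sequences rather than one action at a time; the notation here is only an illustrative simplification.)*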
Look forward to Ashley and Blaine's dialogue continuing [there](https://arbital.com/p/11v), once that section is up, which it currently isn't.", "date_published": "2017-12-24T23:28:56Z", "authors": ["Michael Keenan", "Alexei Andreev", "Eliezer Yudkowsky", "Nate Soares", "Robert Bell", "Eric Bruylant", "Travis Rivera", "Gurkenglas"], "summaries": [], "tags": ["B-Class"], "alias": "1hh"} {"id": "5c3f7e39c0f8a82f481b299811576ac8", "title": "Mathematics", "url": "https://arbital.com/p/math", "source": "arbital", "source_type": "text", "text": "Mathematics is the study of crisply specified formal objects — for example, [numbers](https://arbital.com/p/number) — and the ways of knowing their [properties](https://arbital.com/p/mathematical_properties) — such as [https://arbital.com/p/-proofs](https://arbital.com/p/-proofs). We can see \"[https://arbital.com/p/-258](https://arbital.com/p/-258)\" as the study of \"which conclusions follow with certainty from which premises\". Using this definition of logic, we can also see mathematics as the study of logical objects in logical universes — entities whose properties follow from specifications about them, rather than from observation of the real world. The number 3 is a logical object because its behavior follows from axioms about addition and multiplication; Mount Everest is a physical object because we learn about it by physically measuring Mount Everest.", "date_published": "2016-06-22T15:49:03Z", "authors": ["Kevin Clancy", "Alexei Andreev", "Dylan Hendrickson", "Eric Rogstad", "Eugene Dobry", "Tsvi BT", "Patrick Stevens", "Chris Barnett", "Eliezer Yudkowsky", "Zack M. Davis", "Nate Soares", "Eric Bruylant", "Mark Chimes", "Team Arbital", "Travis Rivera", "Jack Gallagher"], "summaries": [], "tags": ["Stub"], "alias": "1lw"} {"id": "a918b6049021158fa624de36cd09c400", "title": "Hypercomputer", "url": "https://arbital.com/p/hypercomputer", "source": "arbital", "source_type": "text", "text": "A \"hypercomputer\" is an imaginary artifact required to answer some crisp question that can't be answered in the limit of arbitrarily large finite computers. For example, if you have a question that depends on a general solution to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem), then we say that to solve this problem requires a \"hypercomputer\", and in particular, a level-1 halting oracle. (If you need to determine whether programs on level-1 halting oracles halt, you need a level-2 halting oracle, which we would also call a \"hypercomputer\".)\n\nIt seems exceptionally unlikely that hypercomputers will ever be discovered to be embedded into our physical universe. The term \"hypercomputer\" just exists as a label so we can say, \"Supposing we had a hypercomputer and ran this (impossible) program, what would be the consequences?\"\n\nFor some examples of conceptually illuminating code that would require a hypercomputer to actually run, see [https://arbital.com/p/11w](https://arbital.com/p/11w) and [https://arbital.com/p/11v](https://arbital.com/p/11v).\n\n[Unbounded analysis](https://arbital.com/p/107) of agents sometimes invokes hypercomputers because this lets us talk about multiple agents with easy-to-describe knowledge relations to each other. 
In these cases, we're not trying to say that the relation between agents X and Y intrinsically requires them to have impossible powers of computation. We're just reaching for an unphysical scenario that happens to crisply encode inter-agent relations we find interesting for some reason, and allows these inter-agent relations to have consequences about which we can easily do proofs.\n\nSee also [the Wikipedia page on hypercomputation](https://en.wikipedia.org/wiki/Hypercomputation).", "date_published": "2016-01-17T00:19:09Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A \"hypercomputer\" is an imaginary artifact required to answer some crisp question that can't be answered in the limit of arbitrarily large finite computers. For example, if you have a question that depends on a general solution to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem), we say that to solve this problem requires a \"hypercomputer\". (In particular, it requires a level-1 halting oracle.)\n\nIt seems exceptionally unlikely that hypercomputers will ever be discovered to be embedded into our physical universe. We just use this as a label so we can say, for certain impossible programs, \"Supposing we had a hypercomputer and could run this impossible program, what would be the consequences?\"\n\nFor an example of interesting code that requires a hypercomputer, see [https://arbital.com/p/11w](https://arbital.com/p/11w). The relations between different levels of hypercomputer are also useful for crisply describing agents that have better or worse abilities to predict one another."], "tags": ["B-Class", "Glossary (Value Alignment Theory)"], "alias": "1mk"} {"id": "0744e0d93227384f9e56bf2b2edae5e0", "title": "AIXI-tl", "url": "https://arbital.com/p/aixitl", "source": "arbital", "source_type": "text", "text": "$\\text{AIXI}^{tl}$ is a version of the ideal agent [https://arbital.com/p/11v](https://arbital.com/p/11v) which only considers hypotheses of length $l$ that run for less than time $t$. A $tl$-bounded version of [https://arbital.com/p/11v](https://arbital.com/p/11v) therefore only requires an [unphysically large finite computer](https://arbital.com/p/1mm) rather than an infinite [hypercomputer](https://arbital.com/p/1mk).", "date_published": "2016-01-17T00:36:39Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1ml"} {"id": "4fe3de7d8370486a5f67d41b24eccf7f", "title": "Unphysically large finite computer", "url": "https://arbital.com/p/large_computer", "source": "arbital", "source_type": "text", "text": "An unphysically large finite computer is one that's vastly larger than anything that could possibly fit into our universe, if the *character* of physical law is anything remotely like it seems to be.\n\nWe might be able to get a googol ($10^{100}$) computations out of this universe by being clever, but to get $10^{10^{100}}$ computations would require outrunning proton decay and the second law of thermodynamics, and $9 \\uparrow\\uparrow 4$ operations ($9^{9^{9^9}}$) would require amounts of computing substrate in contiguous internal communication that wouldn't fit inside a single [Hubble Volume](https://en.wikipedia.org/wiki/Hubble_volume). 
Even tricks that permit the creation of new universes and encoding computations into them probably wouldn't allow a single computation of size $9 \\uparrow\\uparrow 4$ to return an answer, if the character of physical law is anything like what it appears to be.\n\nThus, in a practical sense, computations that would require sufficiently large finite amounts of computation are pragmatically equivalent to computations that require [hypercomputers](https://arbital.com/p/1mk), and serve a similar purpose in unbounded analysis - they let us talk about interesting things and crisply encode relations that might take a lot of unnecessary overhead to describe using *small* finite computers. Nonetheless, since there are some mathematical pitfalls of considering infinite cases, reducing a problem to one guaranteed to only require a vast finite computer can sometimes be an improvement or yield new insights - especially when dealing with interesting recursions.\n\nAn example of an interesting computation requiring a vast finite computer is [https://arbital.com/p/1ml](https://arbital.com/p/1ml), or [https://arbital.com/p/131](https://arbital.com/p/131)'s [parametric bounded analogue of Lob's Theorem](http://intelligence.org/files/ParametricBoundedLobsTheorem.pdf).", "date_published": "2016-01-19T23:51:25Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["An unphysically large finite computer is one that's vastly larger than anything that could possibly fit into our universe. In a practical sense, computations that would require sufficiently large finite amounts of computation are pragmatically equivalent to computations that require [hypercomputers](https://arbital.com/p/1mk), and serve a similar purpose in unbounded analysis: they let us talk about interesting things and crisply encode relations that might take a lot of unnecessary overhead to describe using *small* finite computers. Nonetheless, since there are some mathematical pitfalls of considering infinite cases, reducing a problem to one that only requires a vast finite computer can sometimes be an improvement.\n\nAn example of an interesting computation requiring a vast finite computer is [https://arbital.com/p/1ml](https://arbital.com/p/1ml)."], "tags": ["B-Class"], "alias": "1mm"} {"id": "21ca32f42fb43add18e21b8c716a5d8e", "title": "Tiling agents theory", "url": "https://arbital.com/p/tiling_agents", "source": "arbital", "source_type": "text", "text": "The theory of self-modifying agents that build successors that are very similar to themselves, like repeating tiles on a tesselated plane. See [this paper](http://intelligence.org/files/TilingAgentsDraft.pdf) or [this Google search](http://www.google.com/search?q=\"tiling agents\").", "date_published": "2016-01-17T05:03:53Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "1mq"} {"id": "6d9dcb45cdae500aa1186dbc269f6ae9", "title": "Cartesian agent-environment boundary", "url": "https://arbital.com/p/cartesian_boundary", "source": "arbital", "source_type": "text", "text": "A Cartesian agent setup is one where the agent receives sensory information from the environment, and the agent sends motor outputs to the environment, and nothing else can cross the \"Cartesian border\" separating the agent and environment. 
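One simple way to write this setup down, using illustrative symbols ($\pi$ for the agent program, $\mu$ for the environment program): at each time step $t$ the agent emits a motor action $a_t$ as a function of the observations received so far, and the environment returns the next observation $o_{t+1}$ as a function of the actions received so far:\n\n$$a_t = \pi(o_1 a_1 \ldots o_t), \qquad o_{t+1} = \mu(o_1 a_1 \ldots o_t a_t).$$\n\nNothing the environment computes can reach inside $\pi$ and rewrite it; the environment can only influence which observations $\pi$ receives.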
If you can eat a psychedelic mushroom that affects the way you process the world - not just presenting you with sensory information, but altering the computations you do to think - then this is an example of an event that \"violates the Cartesian boundary\". Likewise if the agent drops an anvil on its own head. Nothing that happens in a Cartesian universe can kill a Cartesian agent or modify its processing; all the universe can do is send the agent sensory information, in a particular format, that the agent reads.", "date_published": "2016-01-19T04:24:18Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1mt"} {"id": "3cd460f3a05998d7450f401f8ed4b03d", "title": "Cartesian agent", "url": "https://arbital.com/p/cartesian_agent", "source": "arbital", "source_type": "text", "text": "A Cartesian agent is an agent that's a separate system from the environment, linked by the [Cartesian boundary](https://arbital.com/p/1mt) across which passes sensory information and motor outputs. This is most commonly formalized by two distinct Turing machines, an 'agent' machine and an 'environment' machine. The agent receives sensory information from the environment and outputs motor information; the environment receives the agent's motor information and computes the agent's sensory information.\n\nAn actual human, in contrast, is a \"naturalistic agent\" that is a continuous part of the universe - a human is one particular collection of atoms within the physical universe, and there's no type distinction between the atoms inside the human and the atoms outside the human. Eating a particular kind of mushroom can make us think differently; dropping an anvil on your own head doesn't just cause you to see anvilness or receive a pain signal, it smashes your computing substrate and causes you not to feel any future sensory information at all. \n\nIn the context of [AI alignment theory](https://arbital.com/p/2v), Cartesian agents are usually associated with optimizing for [sensory rewards](https://arbital.com/p/cartesian_reward) - there's some particular component of the environment's input which is a \"reward signal\", or the agent is trying to optimize some directly computed function of sensory data.", "date_published": "2016-01-19T23:37:58Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A Cartesian agent is an agent that's a separate system from the environment, linked by the [Cartesian boundary](https://arbital.com/p/1mt) across which passes sensory information and motor outputs. This is most commonly formalized by two distinct Turing machines, an 'agent' machine and an 'environment' machine. The agent receives sensory information from the environment and outputs motor information; the environment receives the agent's motor information and computes the agent's sensory information.\n\nAn actual human, in contrast, is a \"naturalistic agent\" that is a continuous part of the universe - a human is one particular collection of atoms within the physical universe, and there's no type distinction between the atoms inside the human and the atoms outside the human. 
Eating a particular kind of mushroom can make us think differently; dropping an anvil on your own head doesn't just cause you to see anvilness or receive a pain signal, it smashes your computing substrate and causes you not to feel any future sensory information at all."], "tags": ["B-Class"], "alias": "1n1"} {"id": "bdd3e2eb1360fde0ccc9cc33733d03a3", "title": "Bayesian reasoning", "url": "https://arbital.com/p/bayes_reasoning", "source": "arbital", "source_type": "text", "text": "According to its advocates, Bayesian reasoning is a way of seeing the world, and our beliefs about the world, in the light of [probability theory](https://arbital.com/p/1bv), in particular Bayes's Theorem or [Bayes's Rule](https://arbital.com/p/1lz). This probability-theoretic way of seeing the world can apply to scientific issues, to tasks in machine learning, and to everyday life.\n\nTo [start learning](https://arbital.com/p/1zq), visit [Arbital's Guide to Bayes's Rule](https://arbital.com/p/1zq).\n\nAfter that, consider visiting the [Bayesian update](https://arbital.com/p/1ly) page.", "date_published": "2016-07-26T14:28:17Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Bayesian reasoning is a way of interpreting the world, and our beliefs about the world, in the light of [probability theory](https://arbital.com/p/1bv), in particular Bayes's Theorem or [Bayes's Rule](https://arbital.com/p/1lz). Applications include scientific statistics, machine learning, and everyday life."], "tags": ["Stub", "Rationality"], "alias": "1r8"} {"id": "2346c37c332ba41c9963b94da1ed8cf9", "title": "Orthogonality Thesis", "url": "https://arbital.com/p/orthogonality", "source": "arbital", "source_type": "text", "text": "summary: The weak form of the Orthogonality Thesis asserts that it's possible to have AI designs such that those AIs pursue almost any goal - for example, you can have an AI that [just wants paperclips](https://arbital.com/p/10h). The strong form of the Orthogonality Thesis asserts that an expected paperclip maximizer can be just as smart, efficient, reflective, and stable as any other kind of agent.\n\nKey implications of the strong version of the Orthogonality Thesis:\n\n- It is possible to build AIs with utility functions that direct them to be nice.\n- But it is not any easier or simpler to build AIs that are nice.\n- If you screw up the niceness part and build a non-nice agent, it won't automatically look itself over and say, \"Oh, I guess I should be nice instead.\"\n\nsummary(Technical): The Orthogonality Thesis says that the possibility space for cognitive agents include cognitive agents with every kind of goal or preference that is computationally tractable to evaluate. Since the [purely factual questions](https://arbital.com/p/3t3) \"Roughly how many expected [paperclips](https://arbital.com/p/10h) will result from carrying out this policy?\" and \"What are policies that would probably result in a large number of paperclips existing?\" seem to pose no special difficulty to compute, there should exist corresponding agents that output actions they expect to lead to large numbers of paperclips.\n\nThat is: Imagine [Omega](https://arbital.com/p/5b2) offering to pay some extremely smart being one galaxy per paperclip it creates; they would have no special difficulty figuring out how to make lots of paperclips, if the payment provided a reason. 
The Orthogonality Thesis says that, if a smart agent *could* figure out how to make lots of paperclips *if* paid, the [corresponding agent that makes paperclips because its utility function is over paperclips](https://arbital.com/p/10h) can do so just as efficiently and intelligently.\n\nThe strong version of the Orthogonality Thesis further asserts that there's nothing additionally complicated or twisted about such an agent: A paperclip maximizer doesn't need any special defects of intellect or reflectivity in order to go on being a paperclip maximizer. The cognitive structure of a paperclip maximizer would not need to fight any restoring forces that would 'naturally' redirect the AI to rewrite itself to be moral or rewrite itself to be selfish.\n\nRelevant implications if Orthogonality is true:\n\n- It is possible to build AIs with utility functions that direct them to be nice, or to [complete a bounded task with a minimum of side effects](https://arbital.com/p/6w). AIs are not necessarily malevolent.\n- There are no 'natural' restoring functions that make it easier or simpler or more cognitively efficient to have well-aligned AIs than misaligned AIs.\n- If you screw up the design and end up with something that [wants paperclips](https://arbital.com/p/10h), the AI doesn't automatically say \"Well that's the wrong goal, I ought to be nice instead.\"\n\n# Introduction\n\nThe Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal.\n\nThe strong form of the Orthogonality Thesis says that there's no extra difficulty or complication in creating an intelligent agent to pursue a goal, above and beyond the computational tractability of that goal.\n\nSuppose some [strange alien](https://arbital.com/p/5b2) came to Earth and credibly offered to pay us one million dollars' worth of new wealth every time we created a [paperclip](https://arbital.com/p/7ch). We'd encounter no special intellectual difficulty in figuring out how to make lots of paperclips.\n\nThat is, minds would readily be able to reason about:\n\n- How many paperclips would result, if I pursued a policy $\\pi_0$?\n- How can I search out a policy $\\pi$ that happens to have a high answer to the above question?\n\nThe Orthogonality Thesis asserts that since these questions are not computationally intractable, it's possible to have an agent that tries to make paperclips without being paid, because paperclips are what it wants. The strong form of the Orthogonality Thesis says that there need be nothing especially complicated or twisted about such an agent.\n\nThe Orthogonality Thesis is a statement about computer science, an assertion about the logical design space of possible cognitive agents. Orthogonality says nothing about whether a human AI researcher on Earth would want to build an AI that made paperclips, or conversely, want to make a [nice](https://arbital.com/p/3d9) AI. The Orthogonality Thesis just asserts that the space of possible designs contains AIs that make paperclips. 
And also AIs that are nice, to the extent there's a sense of \"nice\" where you could say how to be nice to someone if you were paid a billion dollars to do that, and to the extent you could name something physically achievable to do.\n\nThis contrasts to inevitablist theses which might assert, for example:\n\n- \"It doesn't matter what kind of AI you build, it will turn out to only pursue its own survival as a final end.\"\n- \"Even if you tried to make an AI optimize for paperclips, it would reflect on those goals, reject them as being stupid, and embrace a goal of valuing all sapient life.\"\n\nThe reason to talk about Orthogonality is that it's a key premise in two highly important policy-relevant propositions:\n\n- It is possible to build a nice AI.\n- It is possible to screw up when trying to build a nice AI, and if you do, the AI will not automatically decide to be nice instead.\n\nOrthogonality does not require that all agent designs be equally compatible with all goals. E.g., the agent architecture [https://arbital.com/p/1ml](https://arbital.com/p/1ml) can only be formulated to care about direct functions of its sensory data, like a reward signal; it would not be easy to rejigger the AIXI architecture to care about [creating massive diamonds](https://arbital.com/p/5g) in the environment (let alone any more complicated [environmental goals](https://arbital.com/p/)). The Orthogonality Thesis states \"there exists at least one possible agent such that...\" over the whole design space; it's not meant to be true of every particular agent architecture and every way of constructing agents.\n\nOrthogonality is meant as a [descriptive statement about reality](https://arbital.com/p/3t3), not a normative assertion. Orthogonality is not a claim about the way things ought to be; nor a claim that moral relativism is true (e.g. that all moralities are on equally uncertain footing according to some higher metamorality that judges all moralities as equally devoid of what would objectively constitute a justification). Claiming that paperclip maximizers can be constructed as cognitive agents is not meant to say anything favorable about paperclips, nor anything derogatory about sapient life.\n\n# Thesis statement: Goal-directed agents are as tractable as their goals.\n\nSuppose an agent's utility function said, \"Make the SHA512 hash of a digitized representation of the quantum state of the universe be 0 as often as possible.\" This would be an exceptionally intractable kind of goal. Even if aliens offered to pay us to do that, we still couldn't figure out how.\n\nSimilarly, even if aliens offered to pay us, we wouldn't be able to optimize the goal \"Make the total number of apples on this table be simultaneously even and odd\" because the goal is self-contradictory.\n\nBut suppose instead that [some strange and extremely powerful aliens](https://arbital.com/p/5b2) offer to pay us the equivalent of a million dollars in wealth for every paperclip that we make, or even a galaxy's worth of new resources for every new paperclip we make. If we imagine ourselves having a human reason to make lots of paperclips, the optimization problem \"How can I make lots of paperclips?\" would pose us no special difficulty. 
The factual questions:\n\n- How many paperclips would result, if I pursued a policy $\\pi_0$?\n- How can I search out a policy $\\pi$ that happens to have a high answer to the above question?\n\n...would not be especially computationally burdensome or intractable.\n\nWe also wouldn't forget to [harvest and eat food](https://arbital.com/p/10g) while making paperclips. Even if offered goods of such overwhelming importance that making paperclips was at the top of everyone's priority list, we could go on being strategic about which other actions were useful in order to make even more paperclips; this also wouldn't be an intractably hard cognitive problem for us.\n\nThe weak form of the Orthogonality Thesis says, \"Since the goal of making paperclips is tractable, somewhere in the design space is an agent that optimizes that goal.\"\n\nThe strong form of Orthogonality says, \"And this agent doesn't need to be twisted or complicated or inefficient or have any weird defects of reflectivity; the agent is as tractable as the goal.\" That is: When considering the necessary internal cognition of an agent that steers outcomes to achieve high scores in some [outcome-scoring function](https://arbital.com/p/1fw) $U,$ there's no added difficulty in that cognition except whatever difficulty is inherent in the question \"What policies would result in consequences with high $U$-scores?\"\n\nThis could be restated as, \"To whatever extent you (or a [superintelligent](https://arbital.com/p/41l) version of you) could figure out how to get a high-$U$ outcome if aliens offered to pay you huge amount of resources to do it, the corresponding agent that terminally prefers high-$U$ outcomes can be at least that good at achieving $U$.\" This assertion would be false if, for example, an intelligent agent that [terminally](https://arbital.com/p/1bh) wanted paperclips was limited in intelligence by the defects of reflectivity required to make the agent not realize how pointless it is to pursue paperclips; whereas a galactic superintelligence being *paid* to pursue paperclips could be far more intelligent and strategic because it didn't have any such defects.\n\nFor purposes of stating Orthogonality's precondition, the \"tractability\" of the computational problem of $U$-search should be taken as including only the object-level search problem of computing external actions to achieve external goals. If there turn out to be special difficulties associated with computing \"How can I [make sure that I go on pursuing](https://arbital.com/p/1fx) $U$?\" or \"What kind of [successor agent](https://arbital.com/p/1mq) would want to pursue $U$?\" whenever $U$ is something other than \"be nice to all sapient life\", then these new difficulties contradict the intuitive claim of Orthogonality. Orthogonality is meant to be empirically-true-in-practice, not true-by-definition because of how we sneakily defined \"optimization problem\" in the setup.\n\nOrthogonality is not literally, absolutely universal because theoretically 'goals' can include such weird constructions as \"Make paperclips for some terminal reason other than valuing paperclips\" and similar such statements that [require cognitive algorithms and not just results](https://arbital.com/p/7d6). 
To the extent that goals don't single out particular optimization methods, and just talk about paperclips, the Orthogonality claim should cover them.\n\n# Summary of arguments\n\nSome arguments for Orthogonality, in rough order of when they were first historically proposed and the strength of Orthogonality they argue for:\n\n## Size of mind design space\n\nThe space of possible minds is enormous, and all human beings occupy a relatively tiny volume of it - we all have a cerebral cortex, cerebellum, thalamus, and so on. The sense that AIs are a particular kind of alien mind that 'will' want some particular things is an undermined intuition. \"AI\" really refers to the entire design space of possibilities outside the human. Somewhere in that vast space are possible minds with almost any kind of goal. For any thought you have about why a mind in that space ought to work one way, there's a different possible mind that works differently.\n\nThis is an exceptionally generic sort of argument that could apply equally well to any property $P$ of a mind, but is still weighty even so: If we consider a space of minds a million bits wide, then any argument of the form \"Some mind has property $P$\" has $2^{1,000,000}$ chances to be true and any argument of the form \"No mind has property $P$\" has $2^{1,000,000}$ chances to be false.\n\nThis form of argument isn't very specific to the nature of goals as opposed to any other kind of mental property. But it's still useful for snapping out of the frame of mind of \"An AI is a weird new kind of person, like the strange people of the Tribe Who Live Across The Water\" and into the frame of mind of \"The space of possible things we could call 'AI' is enormously wider than the space of possible humans.\" Similarly, snapping out of the frame of mind of \"But why would it pursue paperclips, when it wouldn't have any fun that way?\" and into the frame of mind \"Well, I like having fun, but are there some possible minds that don't pursue fun?\"\n\n## [Instrumental convergence](https://arbital.com/p/10g)\n\nA sufficiently intelligent [paperclip maximizer](https://arbital.com/p/10h) isn't disadvantaged in day-to-day operations relative to any other goal, so long as [Clippy](https://arbital.com/p/10h) can [estimate at least as well as you can](https://arbital.com/p/6s) how many more paperclips could be produced by pursuing instrumental strategies like \"Do science research (for now)\" or \"Pretend to be nice (for now)\".\n\nRestating: for at least some agent architectures, it is not necessary for the agent to have an independent [terminal](https://arbital.com/p/1bh) value in its utility function for \"do science\" in order for it to do science effectively; it is only necessary for the agent to [understand at least as well as we do](https://arbital.com/p/6s) why certain forms of investigation will produce knowledge that will be useful later (e.g. for paperclips). 
When you say, \"Oh, well, it won't be interested in electromagnetism since it has no pure curiosity, it will only want to peer at paperclips in particular, so it will be at a disadvantage relative to more curious agents\" you are postulating that you know a better operational policy than the agent does *for producing paperclips,* and an [instrumentally efficient](https://arbital.com/p/6s) agent would know this as well as you do and be at no operational disadvantage due to its simpler utility function.\n\n## [Reflective stability](https://arbital.com/p/3r6)\n\nSuppose that Gandhi doesn't want people to be murdered. Imagine that you offer Gandhi a pill that will make him start wanting to kill people. If Gandhi knows that this is what the pill does, Gandhi will refuse the pill, because Gandhi expects the result of taking the pill to be that future-Gandhi wants to murder people and then murders people and then more people will be murdered and Gandhi regards this as bad. Similarly, a sufficiently intelligent paperclip maximizer will not self-modify to act according to \"actions which promote the welfare of sapient life\" instead of \"actions which lead to the most paperclips\", because then future-Clippy will produce fewer paperclips, and then there will be fewer paperclips, so present-Clippy does not evaluate this self-modification as producing the highest number of expected future paperclips.\n\n## Hume's is/ought type distinction\n\nDavid Hume observed an apparent difference of type between *is*-statements and *ought*-statements:\n\n> \"In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, *is*, and *is not*, I meet with no proposition that is not connected with an *ought*, or an *ought not*. This change is imperceptible; but is however, of the last consequence.\"\n\nHume was originally concerned with the question of where we get our ought-propositions, since (said Hume) there didn't seem to be any way to derive an ought-proposition except by starting from another ought-proposition. We can figure out that the Sun *is* shining just by looking out the window; we can deduce that the outdoors will be warmer than otherwise by knowing about how sunlight imparts thermal energy when absorbed. On the other hand, to get from there to \"And therefore I *ought* to go outside\", some kind of new consideration must have entered, along the lines of \"I *should* get some sunshine\" or \"It's *better* to be warm than cold.\" Even if this prior ought-proposition is of a form that to humans seems very natural, or taken-for-granted, or culturally widespread, like \"It is better for people to be happy than sad\", there must have still been some prior assumption which, if we write it down in words, will contain words like *ought*, *should*, *better*, and *good.*\n\nAgain translating Hume's idea into more modern form, we can see ought-sentences as special because they invoke some *ordering* that we'll designate $<_V.$ E.g. \"It's better to go outside than stay inside\" asserts \"Staying inside $<_V$ going outside\". Whenever we make a statement about one outcome or action being \"better\", \"preferred\", \"good\", \"prudent\", etcetera, we can see this as implicitly ordering actions and outcomes under this $<_V$ relation. 
Some assertions, the ought-laden assertions, mention this $<_V$ relation; other propositions just talk about energetic photons in sunlight.\n\nSince we've put on hold the question of [exactly what sort of entity this $<_V$ relation is](https://arbital.com/p/313), we don't need to concern ourselves for now with the question of whether Hume was right that we can't derive $<_V$-relations just from factual assertions. For purposes of Orthogonality, we only need a much weaker version of Hume's thesis, the observation that we can apparently *separate out* a set of propositions that *don't* invoke $<_V,$ what we might call 'simple facts' or 'questions of simple fact'. Furthermore, we can figure out simple facts *just* by making observations and considering other simple facts.\n\nWe can't necessarily get all $<_V$-mentioning propositions without considering simple facts. The $<_V$-mentioning proposition \"It's better to be outside than inside\" may depend on the non-$<_V$-mentioning simple fact \"It is sunny outside.\" But we can figure out whether it's sunny outside, without considering any ought-propositions.\n\nThere are two potential ways we can conceptualize the relation of Hume's is-ought separation to Orthogonality.\n\nThe relatively simpler conceptualization is to treat the relation 'makes more paperclips' as a kind of new ordering $>_{paperclips}$ that can, in a very general sense, fill in the role in a paperclip maximizer's reasoning that would in our own reasoning be taken up by $<_V.$ Then Hume's is-ought separation seems to suggest that this paperclip maximizer can still have excellent reasoning about empirical questions like \"Which policy leads to how many paperclips?\" because is-questions can be thought about separately from ought-questions. When Clippy disassembles you to turn you into paperclips, it doesn't have a values disagreement with you--it's not the case that Clippy is doing that action *because* it thinks you have low value under $<_V.$ Clippy's actions just reflect its computation of the entirely separate ordering $>_{paperclips}.$\n\nThe deeper conceptualization is to see a paperclip maximizer as being constructed entirely out of is-questions. The questions \"How many paperclips will result conditional on action $\\pi_0$ being taken?\" and \"What is an action $\\pi$ that would yield a large number of expected paperclips?\" are pure is-questions, and (arguendo) everything a paperclip maximizer needs to consider in order to make as many paperclips as possible can be seen as a special case of one of these questions. When Clippy disassembles you for your atoms, it's not disagreeing with you about the value of human life, or what it ought to do, or which outcomes are better or worse. All of those are ought-propositions. Clippy's action is only informative about the true is-proposition 'turning this person into paperclips causes there to be more paperclips in the universe', and tells us nothing about any content of the mysterious $<_V$-relation because Clippy wasn't computing anything to do with $<_V.$\n\nThe second viewpoint may be helpful for seeing why Orthogonality doesn't require moral relativism. If we imagine Clippy as having a different version $>_{paperclips}$ of something very much like the value system $<_V,$ then we may be tempted to reprise the entire Orthogonality debate at one remove, and ask, \"But doesn't Clippy see that $<_V$ is more *justified* than $>_{paperclips}$? 
And if this fact isn't evident to Clippy who is supposed to be very intelligent and have no defects of reflectivity and so on, doesn't that imply that $<_V$ really isn't any more justified than $>_{paperclips}$?\"\n\nWe could reply to that question by carrying the shallow conceptualization of Humean Orthogonality a step further, and saying, \"Ah, when you talk about *justification,* you are again invoking a mysterious concept that doesn't appear just in talking about the photons in sunlight. We could see propositions like this as involving a new idea $\\ll_W$ that deals with which $<$-systems are less or more *justified*, so that '$<_V$ is more justified than $>_{paperclips}$' states '$>_{paperclips} \\ll_W <_V$'. But Clippy doesn't compute $\\ll_W,$ it computes $\\gg_{paperclips},$ so Clippy's behavior doesn't tell us anything about what is justified.\"\n\nBut this is again tempting us to imagine Clippy as having its own version of the mysterious $\\ll_W$ to which Clippy is equally attached, and tempts us to imagine Clippy as arguing with us or disagreeing with us within some higher metasystem.\n\nSo--putting on hold the true nature of our mysterious $<_V$-mentioning concepts like 'goodness' or 'better' and the true nature of our $\\ll_W$-mentioning concepts like 'justified' or 'valid moral argument'--the deeper idea would be that Clippy is just not computing anything to do with $<_V$ or $\\ll_W$ at all. If Clippy self-modifies and writes new decision algorithms into place, these new algorithms will be selected according to the is-criterion \"How many future paperclips will result if I write this piece of code?\" and not anything resembling any arguments that humans have ever had over which ought-systems are justified. Clippy doesn't ask whether its new decision algorithm is justified; it asks how many expected paperclips will result from executing the algorithm (and this is a pure is-question whose answers are either true or false as a matter of simple fact).\n\nIf we think Clippy is very intelligent, and we watch Clippy self-modify into a new paperclip maximizer, we are only learning is-facts about which executing algorithms lead to more paperclips existing. We are not learning anything about what is right, or what is justified, and in particular we're not learning that 'do good things' is objectively no better justified than 'make paperclips'. Even if that assertion were true under the mysterious $\\ll_W$-relation on moral systems, you wouldn't be able to learn that truth by watching Clippy, because Clippy never bothers to evaluate $\\ll_W$ or any other analogous justification-system $\\gg_{something}$.\n\n(This is about as far as one can go in disentangling Orthogonality in computer science from normative metaethics without starting to [pierce the mysterious opacity of $<_V.$](https://arbital.com/p/313))\n\n### Thick definitions of rationality or intelligence\n\nSome philosophers responded to Hume's distinction of empirical rationality from normative reasoning, by advocating 'thick' definitions of intelligence that included some statement about the 'reasonableness' of the agent's ends.\n\nFor pragmatic purposes of AI alignment theory, if an agent is cognitively powerful enough to build Dyson Spheres, it doesn't matter whether that agent is defined as 'intelligent' or its ends are defined as 'reasonable'. 
A definition of the word 'intelligence' contrived to exclude paperclip maximization doesn't change the empirical behavior or empirical power of a paperclip maximizer.\n\n### Relation to moral internalism\n\nWhile Orthogonality seems orthogonal to most traditional philosophical questions about metaethics, it does outright contradict some possible forms of moral internalism. For example, one could hold that by the very definition of rightness, knowledge of what is right must be inherently motivating to any entity that understands that knowledge. This is not the most common meaning of \"moral internalism\" held by modern philosophers, who instead seem to hold something like, \"By definition, if I say that something is morally right, among my claims is that the thing is motivating *to me.*\" We haven't heard of a standard term for the position that, by definition, what is right must be *universally* motivating; we'll designate that here as \"universalist moral internalism\".\n\nWe can potentially resolve this tension between Orthogonality and this assertion about the nature of rightness by:\n\n- Believing there must be some hidden flaw in the reasoning about a paperclip maximizer.\n- Saying \"No True Scotsman\" to the paperclip maximizer being intelligent, even if it's building Dyson Spheres.\n- Saying \"No True Scotsman\" to the paperclip maximizer \"truly understanding\" $<_V,$ even if Clippy is capable of predicting with extreme accuracy what humans will say and think about $<_V$, and Clippy does not suffer any other deficit of empirical prediction because of this lack of 'understanding', and Clippy does not require any special twist of its mind to avoid being compelled by its understanding of $<_V.$\n- Rejecting Orthogonality, and asserting that a paperclip maximizer must fall short of being an intact mind in some way that implies an empirical capabilities disadvantage.\n- Accepting nihilism, since a true moral argument must be compelling to everyone, and no moral argument is compelling to a paperclip maximizer. (Note: A paperclip maximizer doesn't care about whether clippiness must be compelling to everyone, which makes this argument self-undermining. See also [https://arbital.com/p/3y6](https://arbital.com/p/3y6) for general arguments against adopting nihilism when you discover that your mind's representation of something was running skew to reality.)\n- Giving up on universalist moral internalism as an empirical proposition; [https://arbital.com/p/1ml](https://arbital.com/p/1ml) and Clippy empirically do different things, and will not be compelled to optimize the same goal no matter what they learn or know.\n\n## Constructive specifications of orthogonal agents\n\nWe can exhibit [unbounded](https://arbital.com/p/107) formulas for agents larger than their environments that optimize any given goal, such that Orthogonality is visibly true about agents within that class. Arguments about what all possible minds must do are clearly false for these particular agents, contradicting all strong forms of inevitabilism. These minds use huge amounts of computing power, but there is no known reason to expect that, e.g. worthwhile-happiness-maximizers have bounded analogues while paperclip-maximizers do not.\n\nThe simplest unbounded formulas for orthogonal agents don't involve reflectivity (the corresponding agents have no self-modification options, though they may create subagents). 
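As a toy illustration of the flavor of such constructions (this is only a sketch; the miniature world-model, the numbers, and all names below are invented for the example and are not one of the unbounded formulas themselves), note that an agent schema can take its utility function as a literal free parameter while the world-modeling machinery stays fixed:\n\n```python\n# A toy 'orthogonal' agent schema: a fixed world-model plus a free choice of\n# utility function.  Nothing in the world-modeling code depends on which\n# utility function gets plugged in.  (Toy numbers; illustration only.)\n\nWORLD_MODEL = {\n    # action -> list of (probability, outcome) pairs\n    'make_paperclips': [(0.9, {'paperclips': 100, 'happy_people': 0}),\n                        (0.1, {'paperclips': 0, 'happy_people': 0})],\n    'run_hospital': [(0.8, {'paperclips': 0, 'happy_people': 50}),\n                     (0.2, {'paperclips': 0, 'happy_people': 5})],\n    'do_nothing': [(1.0, {'paperclips': 0, 'happy_people': 1})],\n}\n\ndef expected_utility(action, utility_fn):\n    return sum(p * utility_fn(outcome) for p, outcome in WORLD_MODEL[action])\n\ndef choose_action(utility_fn):\n    # The 'is-question' machinery (expectations over the world-model) is\n    # identical no matter which ordering is being maximized.\n    return max(WORLD_MODEL, key=lambda a: expected_utility(a, utility_fn))\n\nprint(choose_action(lambda o: o['paperclips']))    # -> make_paperclips\nprint(choose_action(lambda o: o['happy_people']))  # -> run_hospital\n```\n\nBoth instantiations share every line of 'epistemic' code; only the ordering being maximized differs. The unbounded formulas referred to above are vastly more general than this, but they have the same free-parameter structure with respect to the utility function.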
If we only had those simple formulas, it would theoretically leave open the possibility that self-reflection could somehow negate Orthogonality (e.g., if reflective agents must inevitably have some particular utility function, with reflective agents holding a strong advantage relative to nonreflective agents). But there is already [ongoing work](https://arbital.com/p/1mq) on describing reflective agents that have the preference-stability property, and work toward increasingly bounded and approximable formulations of those. There is no hint from this work that Orthogonality is false; all the specifications have a free choice of utility function.\n\nAs of early 2017, the most recent work on tiling agents involves fully reflective, reflectively stable, logically uncertain agents whose computing time is roughly doubly-exponential in the size of the propositions considered.\n\nSo if you want to claim Orthogonality is false because e.g. all AIs will inevitably end up valuing all sapient life, you need to claim that the process of *reducing the already-specified doubly-exponential computing-time decision algorithm to a more tractable decision algorithm* can *only* be made realistically efficient for decision algorithms computing \"Which policies protect all sapient life?\" and is *impossible* to make efficient for decision algorithms computing \"Which policies lead to the most paperclips?\"\n\nSince work on tiling agent designs hasn't halted, one may need to backpedal and modify this impossibility claim further as more efficient decision algorithms are invented.\n\n# Epistemic status\n\nAmong people who've seriously delved into these issues and are aware of the more advanced arguments for Orthogonality, we're not aware of anyone who still defends \"universalist moral internalism\" as described above, and we're not aware of anyone who thinks that arbitrary sufficiently-real-world-capable AI systems automatically adopt human-friendly terminal values.\n\nPaul Christiano has said (if we're quoting him correctly) that although it's not his dominant hypothesis, he thinks some significant probability should be awarded to the proposition that only some subset of tractable utility functions, potentially excluding human-friendly ones or those of high cosmopolitan value, can be stable under reflection in powerful bounded AGI systems; e.g. because only direct functions of sense data can be adequately supervised in internal retraining. 
(This would be bad news rather than good news for AGI alignment and long-term optimization of human values.)\n\n%%%%comment:\n\n\n# Hume's Guillotine\n\nOrthogonality can be seen as corresponding to a philosophical principle advocated by David Hume, whose phrasings included, \"Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.\" In our terms: an agent whose preferences over outcomes score the destruction of the world more highly than the scratching of Hume's finger, is not thereby impeded from forming accurate models of the world or searching for policies that achieve various outcomes.\n\nIn modern terms, we'd say that Hume observed an apparent type distinction between *is*-statements and *ought*-statements:\n\n> \"In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, *is*, and *is not*, I meet with no proposition that is not connected with an *ought*, or an *ought not*. This change is imperceptible; but is however, of the last consequence.\"\n\n\"It is sunny outside\" is an is-proposition. It can potentially be deduced solely from other is-facts, like \"The Sun is in the sky\" plus \"The Sun emits sunshine\". If we now furthermore say \"And therefore I ought to go outside\", we've introduced a new *type* of sentence, which, Hume argued, cannot be deduced *just* from is-statements like \"The Sun is in the sky\" or \"I am low in Vitamin D\". Even if the prior ought-sentence seems to us very natural, or taken-for-granted, like \"It is better to be happy than sad\", there must (Hume argued) have been some prior assertion or rule which, if we write it down in words, will contain words like *ought*, *should*, *better*, and *good.*\n\nAgain translating Hume's idea into more modern form, we can see ought-sentences as special because they invoke some *ordering* that we'll designate $<_V.$ E.g. \"It's better to go outside than stay inside\" asserts \"Staying inside $<_V$ going outside\". Whenever we make a statement about one outcome or action being \"better\", \"preferred\", \"good\", \"prudent\", etcetera, we can see this as implicitly ordering actions and outcomes under this $<_V$ relation. We can put temporarily on hold the question of what sort of entity $<_V$ may be; but we can already go ahead and observe that some assertions, the ought-assertions, mention this $<_V$ relation; and other propositions just talk about the frequency of photons in sunlight.\n\nWe could rephrase Hume's type distinction as observing that, within the set of all propositions, we can separate out a core set of propositions that *don't* invoke $<_V,$ what we might call 'simple facts'. Furthermore, we can figure out simple facts just by making observations and considering other simple facts; the core set is closed under some kind of reasoning relation. This doesn't imply that we can get $<_V$-sentences without considering simple facts. 
The $<_V$-mentioning proposition \"It's better to be outside than inside\" can depend on the non-$<_V$-mentioning proposition \"It is sunny outside.\" But we can figure out whether it's sunny outside, without considering any oughts.\n\nWe then observe that questions like \"How many paperclips will result conditional on action $\\pi_0$ being taken?\" and \"What is an action $\\pi$ that would yield a large number of expected paperclips?\" are pure is-questions, meaning that we can figure out the answer without considering $<_V$-mentioning propositions. So if there's some agent whose nature is just to output actions $\\pi$ that are high in expected paperclips, the fact that this agent wasn't considering $<_V$-propositions needn't hinder them from figuring out which actions are high in expected paperclips.\n\n\n\n\n\nTo establish that the paperclip maximizer need not suffer any defect of reality-modeling or planning or reflectivity, we need a bit more than the above argument. An *efficient* agent needs to prioritize which experiments to run, or choose which questions to spend computing power thinking about, and this choice seems to invoke *some* ordering. In particular, we need the [instrumental convergence thesis](https://arbital.com/p/) that \n\nA further idea of Orthogonality is that *many* possible orderings $<_U,$ including the 'number of resulting paperclips' ordering, \n\n\nAn is-description of a system can produce assertions like \"*If* the agent does action 1, *then* the whole world will be destroyed except for David Hume's little finger, and *if* the agent does action 2, *then* David Hume's finger will be scratched\" - these are material predictions on the order of \"If water is put on this sponge, the sponge will get wet.\" To get from this is-statement to an ordering-statement like \"action 1 $<_V$ action 2,\" we need some order-bearing statement like \"destruction of world $<_V$ scratching of David Hume's little finger\", or some order-introducing rule like \"If action 1 causes the destruction of the world and action 2 does not, introduce a new sentence 'action 1 $<_V$ action 2'.\"\n\nTaking this philosophical principle back to the notion of Orthogonality as a thesis in computer science: Since the type of 'simple material facts' is distinct from the type of 'simple material facts and preference orderings', it seems that we should be able to have agents that are just as good at thinking about the material facts, but output actions high in a different preference ordering. \n\nThe implication for Orthogonality as a thesis about computer science is that if one system of computation outputs actions according to whether they're high in the ordering $<_V,$ it should be possible to construct another system that outputs actions higher in a different ordering (even if such actions are low in $<_V$) without this presenting any bar to the second system's ability to reason about natural systems. A paperclip maximizer can have very good knowledge of the is-sentences about which actions lead to which consequences, while still outputting actions preferred under the ordering \"Which action leads to the most paperclips?\" instead of e.g. 
\"Which action leads to the morally best consequences?\" It is not that the paperclip maximizer is ignorant or mistaken about $<_P,$ but that the paperclip maximizer just doesn't output actions according to $<_P.$\n\n\n# Arguments pro\n\n\n# Counterarguments and countercounterarguments\n\n## Proving too much\n\nA disbeliever in Orthogonality might ask, \"Do these arguments [Prove Too Much](https://arbital.com/p/3tc), as shown by applying a similar style of argument to \"There are minds that think 2 + 2 = 5?\"\n\nConsidering the arguments above in turn:\n\n• *Size of mind design space.*\n\nFrom the perspective of somebody who currently regards \"wants to make paperclips\" as an exceptionally weird and strange property, \"There are lots of possible minds so some want to make paperclips\" will seem to be on an equal footing with \"There are lots of possible minds so some believe 2 + 2 = 5.\"\n\nThinking about the enormous space of possible minds might lead us to give more credibility to *some* of those possible minds believing that 2 + 2 = 5, but we might still think that minds like that will be weak, or hampered by other defects, or limited in how intelligent they could really be, or more complicated to specify, or unlikely to occur in the actual real world.\n\nSo from the perspective of somebody who doesn't already believe in Orthogonality, the argument from the volume of mind design space is an argument at best for the Ultraweak version of Orthogonality.\n\n• *Hume's is/ought distinction.*\n\nDepending on the exact variant of Hume-inspired argument that we deploy, the analogy to 2 + 2 = 5 might be weaker or stronger. For example, here's a Hume-inspired argument where the 2 + 2 = 5 analogy seems relatively strong:\n\n\"In every case of a mind judging that 'cure cancer' $>_P$ 'make paperclips', this ordering judgment is produced by some particular comparison operation inside the mind. Nothing prohibits a different mind from producing a different comparison. Whatever you say is the cause of the ordering judgment, e.g., that it derives from a prior judgment 'happy sapient lives' $>_P$ 'paperclips', we can imagine that part of the agent also have been programmed differently. Different causes will yield different effects, and whatever the causality behind 'cure cancer' $>_P$ 'make paperclips', we can imagine a different causally constituted agent which arrives at a different judgment.\"\n\nIf we substitute \"2 + 2 = 5\" into the above argument we get one in which all the constituent statements are equally true - this judgment is produced by a cause, the causes have causes, a different agent should produce a different output in that part of the computation, etcetera. So this version really has the same import as a general argument from the width of mind design space, and to a skeptic, would only imply the ultraweak form of Orthogonality.\n\nHowever, if we're willing to consider some additional properties of is/ought, the analogy to \"2 + 2 = 5\" starts to become less tight. 
For instance, \"Ought-comparators are not direct properties of the material world, there is no tiny $>_P$ among the quarks, and that's why we can vary action-preference computations without affecting quark-predicting computations\" does *not* have a clear analogous argument for why it should be just as easy to produce minds that judge 2 + 2 = 5.\n\n• *Instrumental convergence.*\n\nThere's no obvious analogue of \"An agent that [knows as well as we do](https://arbital.com/p/6s) which policies are likely to lead to lots of expected paperclips, and an agent that knows as well as we do which policies are likely to lead to lots of happy sapient beings, are on an equal footing when it comes to doing things like scientific research\", for \"agents that believe 2 + 2 = 5 are at no disadvantage compared to agents that believe 2 + 2 = 4\".\n\n• *Reflective stability.*\n\nRelatively weaker forms of the reflective-stability argument might allow analogies between \"prefer paperclips\" and \"believe 2 + 2 = 5\", but probing for more details makes the analogy break down. E.g., consider the following supposedly analogous argument:\n\n\"Suppose you think the sky is green. Then you won't want to self-modify to make a future version of yourself believe that the sky is blue, because you'll believe this future version of yourself would believe something false. Therefore, all beliefs are equally stable under reflection.\"\n\nThis does poke at an underlying point: By default, all [Bayesian priors](https://arbital.com/p/27p) will be equally stable under reflection. However, minds that understand how different possible worlds will provide sensors with different evidence, will want to do [Bayesian updates](https://arbital.com/p/1ly) on the data from the sensors. (We don't even need to regard this as changing the prior; under [Updateless Decision Theory](https://arbital.com/p/udt), we can see it as the agent branching its successors to behave differently in different worlds.) There's a particular way that a consequentialist agent, contemplating its own operation, goes from \"The sky is very probably green, but might be blue\" to \"check what this sensor says and update the belief\", and indeed, an agent like this will *not* wantonly change its current belief *without* looking at a sensor, as the argument indicates.\n\nIn contrast, the way in which \"prefer more paperclips\" propagates through an agent's beliefs about the effects of future designs and their interactions with the world does not suggest that future versions of the agent will prefer something other than paperclips, or that it would make the desire to produce paperclips conditional on a particular sensor value, since this [would not be expected to lead to more total paperclips](https://arbital.com/p/3tm).\n\n• *Orthogonal search tractability, constructive specifications of Orthogonal agent architectures.*\n\nThese have no obvious analogue in \"orthogonal tractability of optimization with different arithmetical answers\" or \"agent architectures that look very straightforward, are otherwise effective, and accept as input a free choice of what they think 2 + 2 equals\".\n\n## Moral internalism\n\n(Todo: Moral internalism says that truly normative content must be inherently compelling to all possible minds, but we can exhibit increasingly bounded agent designs that obviously wouldn't be compelled by it. 
We can reply to this by (a) believing there must be some hidden flaw in the reasoning about a paperclip maximizer, (b) saying \"No True Scotsman\" to the paperclip maximizer even though it's building Dyson Spheres and socially manipulating its programmers, (c) believing that a paperclip maximizer must fall short of being a true mind in some way that implies a big capabilities disadvantage, (d) accepting nihilism, or (e) not believing in moral internalism.)\n\n## Selection filters\n\n(Todo: Arguments from evolvability or selection filters. Distinguish naive failures to understand efficient instrumental convergence, from more sophisticated concerns in multipolar scenarios. Pragmatic argument on the histories of inefficient agents.)\n\n# Pragmatic issues\n\n(Todo: In practice, some utility functions / preference frameworks might be much harder to build and test than others. Eliezer Yudkowsky on realistic targets for the *first* AGI needing to be built out of elements that are simple enough to be learnable. Paul Christiano's concern about whether only sensory-based goals might be possible to build.)\n\n%%comment:\n\n### Caveats\n\n- The Orthogonality thesis is about mind design space in general. Particular agent architectures may not be Orthogonal.\n - Some agents may be constructed such that their apparent utility functions shift with increasing cognitive intelligence.\n - Some agent architectures may constrain what class of goals can be optimized.\n- 'Agent' is intended to be understood in a very general way, and not to imply, e.g., a small local robot body.\n\nFor pragmatic reasons, the phrase 'every agent of sufficient cognitive power' in the Inevitability Thesis is specified to *include* e.g. all cognitive entities that are able to invent new advanced technologies and build Dyson Spheres in pursuit of long-term strategies, regardless of whether a philosopher might claim that they lack some particular cognitive capacity in view of how they respond to attempted moral arguments, or whether they are e.g. conscious in the same sense as humans, etcetera.\n\n### Refinements\n\nMost pragmatic implications of Orthogonality or Inevitability revolve around the following refinements:\n\n[Implementation dependence](https://arbital.com/p/implementation_dependence): The humanly accessible space of AI development methodologies has enough variety to yield both AI designs that are value-aligned, and AI designs that are not value-aligned.\n\n[Value loadability possible](https://arbital.com/p/value_loadability_possible): There is at least one humanly feasible development methodology for advanced agents that has Orthogonal freedom of what utility function or meta-utility framework is introduced into the advanced agent. (Thus, if we could describe a value-loadable design, and also describe a value-aligned meta-utility framework, we could combine them to create a value-aligned advanced agent.)\n\n[Pragmatic inevitability](https://arbital.com/p/pragmatic_inevitability): There exists some goal G such that almost all humanly feasible development methods result in an agent that ends up behaving like it optimizes some particular goal G, perhaps among others. 
Most particular arguments about futurism will pick different goals G, but all such arguments are negated by anything that tends to contradict pragmatic inevitability in general.\n\n### Implications\n\n[Implementation dependence](https://arbital.com/p/implementation_dependence) is the core of the policy argument that solving the value alignment problem is necessary and possible.\n\nFuturistic scenarios in which AIs are said in passing to 'want' something-or-other usually rely on some form of [pragmatic inevitability](https://arbital.com/p/pragmatic_inevitability) premise and are negated by [implementation dependence](https://arbital.com/p/implementation_dependence).\n\nOrthogonality directly contradicts the metaethical position of [moral internalism](https://arbital.com/p/moral_internalism), which would be falsified by the observation of a [paperclip maximizer](https://arbital.com/p/10h). On the metaethical position that [orthogonality and cognitivism are compatible](https://arbital.com/p/), exhibiting a paperclip maximizer has few or no implications for object-level moral questions, and Orthogonality does not imply that our [humane values](https://arbital.com/p/) or [normative values](https://arbital.com/p/) are arbitrary, selfish, non-cosmopolitan, that we have a myopic view of the universe or value, etc.\n\n%%\n\n%%%%", "date_published": "2022-06-08T04:35:51Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The \"Orthogonality Thesis\" asserts that, within [advanced AIs](https://arbital.com/p/2c), it's possible to have any level of capability coexisting with any kind of goal. So, for example, there can be superintelligences building shells around stars, who [only want to create as many paperclips as possible](https://arbital.com/p/10h)."], "tags": ["Work in progress", "B-Class"], "alias": "1y"} {"id": "f90f3980984f737cbd85aee930f3d9b0", "title": "Eric Bruylant", "url": "https://arbital.com/p/EricBruylant", "source": "arbital", "source_type": "text", "text": "summary: Eric Bruylant has been playing with wikis for almost a decade and thinks Arbital has great potential.\n\nEric Bruylant has been playing with wikis for almost a decade and thinks Arbital has great potential.", "date_published": "2016-02-17T16:38:40Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "1yq"} {"id": "493b7135648c2ce9486b52191109e8f7", "title": "Eliezer Yudkowsky", "url": "https://arbital.com/p/EliezerYudkowsky", "source": "arbital", "source_type": "text", "text": "Eliezer Yudkowsky is, with [https://arbital.com/p/18k](https://arbital.com/p/18k), one of the cofounders of [value alignment theory](https://arbital.com/p/2v). He is the founder of the [https://arbital.com/p/15w](https://arbital.com/p/15w). He is the inventor of [timeless decision theory](https://arbital.com/p/) and [extrapolated volition](https://arbital.com/p/), the co-inventor with [https://arbital.com/p/223](https://arbital.com/p/223) of the [tiling agents problem](https://arbital.com/p/) that kicked off the study of [https://arbital.com/p/1c1](https://arbital.com/p/1c1), and so on and so on. 
His specializations include [logical decision theory](https://arbital.com/p/), [naturalistic reflection](https://arbital.com/p/), and [pinpointing the failure modes](https://arbital.com/p/adversarial_AI_safety_analysis) in [design proposals for advanced AIs](https://arbital.com/p/2c).", "date_published": "2015-12-19T00:46:45Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "2"} {"id": "3f09d8c4b44f09d3f2f34b8ab75cb00d", "title": "Sufficiently optimized agents appear coherent", "url": "https://arbital.com/p/optimized_agent_appears_coherent", "source": "arbital", "source_type": "text", "text": "## Arguments\n\nSummary: Violations of coherence constraints in probability theory and decision theory correspond to qualitatively destructive or dominated behaviors. Coherence violations so easily computed as to be humanly predictable should be eliminated by optimization strong enough and general enough to reliably eliminate behaviors that are qualitatively dominated by cheaply computable alternatives. From our perspective this should produce agents such that, *ceteris paribus*, we do not think we can predict, in advance, any coherence violation in their behavior.\n\n### Coherence violations correspond to qualitatively destructive behaviors\n\nThere is a correspondence between, on the one hand, thought processes that seem to violate intuitively appealing coherence constraints from the Bayesian family, and on the other hand, sequences of overt behaviors that leave the agent qualitatively worse off than before or that seem intuitively dominated by other behaviors.\n\nFor example, suppose you claim that you prefer A to B, B to C, and C to A. This 'circular preference' (A > B > C > A) seems intuitively unappealing; we can also see how to visualize it as an agent with a qualitatively self-destructive behavior as follows:\n\n- You prefer to be in San Francisco rather than Berkeley, and if you are in Berkeley you will pay \\$50 for a taxi ride to San Francisco.\n- You prefer San Jose to San Francisco and if in San Francisco will pay \\$50 to go to San Jose. (Still no problem so far.)\n- You like Berkeley more than San Jose and if in San Jose will pay \\$50 to go to Berkeley.\n\nThe corresponding agent will spend \\$150 on taxi rides and then end up in the same position, perhaps ready to spend even more money on taxi rides. The agent is strictly, qualitatively worse off than before. We can see this, in some sense, even though the agent's preferences are partially incoherent. Assuming the agent has a coherent preference for money or something that can be bought with money, alongside its incoherent preference for location, then the circular trip has left it strictly worse off (since in the end the location was unchanged). The circular trip is still dominated by the option of staying in the same place.\n\n(The above is a variant of an argument first presented by Steve Omohundro.)\n\n(Phenomena like this, known as 'preference reversals', are a common empirical finding in behavioral psychology. Since a human mind is an ever-changing balance of drives and desires that can be heightened or weakened by changes of environmental context, eliciting inconsistent sets of preferences from humans isn't hard and can consistently be done in the laboratory in economics experiments, especially if the circularity is buried among other questions or distractors.)\n\nAs another illustration, consider the Allais paradox. 
As a simplified example, consider offering subjects a choice between hypothetical Gamble 1A, a certainty of receiving \\$1 million if a die comes up anywhere from 00-99, and Gamble 1B, a 10% chance of receiving nothing (if the die comes up 00-09) and a 90% chance of receiving \\$5 million (if the die comes up 10-99). Most subjects choose Gamble 1A. So far, we have a scenario that could be consistent with a coherent utility function in which the interval of desirability from receiving \\$0 to receiving \\$1 million is more than nine times the interval from receiving \\$1 million to receiving \\$5 million.\n\nHowever, suppose only half the subjects are randomly assigned to this condition, and the other half are asked to choose between Gamble 2A, a 90% chance of receiving nothing (00-89) and a 10% chance of receiving \\$1 million (90-99), versus Gamble 2B, a 91% chance of receiving nothing (00-90) and a 9% chance of receiving \\$5 million (91-99). Most subjects in this case will pick Gamble 2B. This combination of results guarantees that at least some subjects must behave in a way that doesn't correspond to any consistent utility function over outcomes.\n\nThe Allais Paradox (in a slightly different formulation) was initially celebrated as showing that humans don't obey the expected utility axioms, and it was thought that maybe the expected utility axioms were 'wrong' in some sense. However, in accordance with the standard families of coherence theorems, we can crank the coherence violation to exhibit a qualitatively dominated behavior:\n\nSuppose you show me a switch, set to \"A\", that determines whether I will get Gamble 2A or Gamble 2B. You offer me a chance to pay you one penny to throw the switch from A to B, so I do so (I now have a 91% chance of nothing, and a 9% chance of \\$5 million). Then you roll one of two ten-sided dice to determine the percentile result, and the first die, the tens digit, comes up \"9\". Before rolling the second die, you offer to throw the switch back from B to A in exchange for another penny. Since the result of the first die transforms the experiment into Gamble 1A vs. 1B, I take your offer. You now have my two cents on the subject. (If the result of the first die is anything but 9, I am indifferent to the setting of the switch since I receive \\$0 either way.)\n\nAgain, we see a manifestation of a powerful family of theorems showing that agents which cannot be seen as corresponding to any coherent probabilities and consistent utility function will exhibit qualitatively destructive behavior, like paying someone a cent to throw a switch and then paying them another cent to throw it back.\n\nThere is a large literature on different sets of coherence constraints that all yield expected utility, starting with the Von Neumann-Morgenstern Theorem. No other decision formalism has comparable support from so many families of differently phrased coherence constraints.\n\nThere is similarly a large literature on many classes of coherence arguments that yield classical probability theory, such as the Dutch Book theorems. 
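As a spelled-out instance of the kind of inconsistency these theorems penalize, the Allais arithmetic above can be checked directly. Write $u_0, u_1, u_5$ for the utilities a subject assigns to receiving \$0, \$1 million, and \$5 million, normalizing $u_0 = 0$:\n\n- Choosing Gamble 1A over Gamble 1B requires $u_1 > 0.1 u_0 + 0.9 u_5,$ i.e. $u_1 > 0.9 u_5.$\n- Choosing Gamble 2B over Gamble 2A requires $0.09 u_5 > 0.1 u_1,$ i.e. $0.9 u_5 > u_1.$\n\nNo assignment of utilities satisfies both inequalities at once, which is the sense in which the combined majority choices fit no single utility function over these outcomes.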
There is no substantively different rival to probability theory and decision theory which is competitive when it comes to (a) plausibly having some bounded analogue which could appear to describe the uncertainty of a powerful cognitive agent, and (b) seeming highly motivated by coherence constraints, that is, being forced by the absence of qualitatively harmful behaviors that correspond to coherence violations.\n\n### Generic optimization pressures, if sufficiently strong and general, should be expected to eliminate behaviors that are dominated by clearly visible alternatives.\n\nEven an incoherent collection of shifting drives and desires may well recognize, after having paid their two cents or \\$150, that they are wasting money, and try to do things differently (self-modify). An AI's programmers may recognize that, from their own perspective, they would rather not have their AI spending money on circular taxi rides. This implies a path from incoherent non-advanced agents to coherent advanced agents as more and more optimization power is applied to them.\n\nA sufficiently advanced agent would presumably catch on to the existence of coherence theorems and see the abstract pattern of the problems (as humans already have). But it is not necessary to suppose that these qualitatively destructive behaviors are being targeted because they are 'irrational'. It suffices for the incoherencies to be targeted as 'problems' because particular cases of them are recognized as having produced clear, qualitative losses.\n\nWithout knowing in advance the exact specifics of the optimization pressures being applied, it seems that, in advance and ceteris paribus, we should expect that paying a cent to throw a switch and then paying again to switch it back, or throwing away \\$150 on circular taxi rides, are qualitatively destructive behaviors that optimization would tend to eliminate. E.g. one expects a consequentialist goal-seeking agent would prefer, or a policy reinforcement learner would be reinforced, or a fitness criterion would evaluate greater fitness, etcetera, for eliminating the behavior that corresponds to incoherence, ceteris paribus and given the option of eliminating it at a reasonable computational cost.\n\nIf there is a particular kind of optimization pressure that seems sufficient to produce a cognitively highly advanced agent, but which also seems sure to overlook some particular form of incoherence, then this would present a loophole in the overall argument and yield a route by which an advanced agent with that particular incoherence might be produced (although the agent's internal optimization must also be predicted to tolerate the same incoherence, as otherwise the agent will self-modify away from it).\n\n### Eliminating behaviors that are dominated by cheaply computable alternative behaviors will produce cognition that looks Bayesian-coherent from our perspective.\n\nPerfect epistemic and instrumental coherence is too computationally expensive for bounded agents to achieve. Consider e.g. the conjunction rule of probability that P(A&B) <= P(A). If A is a theorem, and B is a lemma very helpful in proving A, then asking the agent for the probability of A alone may elicit a lower answer than asking the agent about the joint probability of A&B (since thinking of B as a lemma increases the subjective probability of A). 
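As a toy illustration of how such an intertemporal slip can arise in a resource-bounded estimator (the scoring heuristic, the numbers, and the names below are all invented stand-ins, not a claim about any actual agent design):\n\n```python\n# Toy bounded credence estimator: it scores a claim using only the support it\n# can find among the conjuncts it was explicitly asked about.  Because the\n# conjunction query mentions the helpful lemma, the reported number for\n# 'A and B' comes out higher than the earlier reported number for 'A' alone,\n# an intertemporal violation of P(A&B) <= P(A).  (Illustration only.)\n\nBASE_CREDENCE = {'theorem_A': 0.3, 'lemma_B': 0.8}\nKNOWN_STEPS = {'lemma_B': ('theorem_A', 0.9)}  # lemma_B supports theorem_A\n\ndef estimate(claims):\n    # Crude credence in the conjunction of the listed claims.\n    credence = min(BASE_CREDENCE[c] for c in claims)\n    for premise in claims:\n        step = KNOWN_STEPS.get(premise)\n        if step and step[0] in claims:\n            # The query itself surfaced a proof path: P(B) * P(A given B).\n            credence = max(credence, BASE_CREDENCE[premise] * step[1])\n    return credence\n\nprint(estimate(['theorem_A']))             # 0.3\nprint(estimate(['theorem_A', 'lemma_B']))  # 0.72 > 0.3\n```\n\nA less bounded estimator would go looking for lemma_B on its own when asked about theorem_A alone, restoring coherence; the point is only that doing so costs computation.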
This is not a full-blown form of conjunction fallacy since there is no particular time at which the agent explicitly assigns lower probability to P(A&B ∨ A&~B) than to P(A&B). But even for an advanced agent, if a human was watching the series of probability assignments, the human might be able to say some equivalent of, \"Aha, even though the agent was exposed to no new outside evidence, it assigned probability X to P(A) at time t, and then assigned probability Y>X to P(A&B) at time t+2.\"\n\nTwo notions of \"sufficiently optimized agents will appear coherent (to humans)\" that might be salvaged from the above objection are as follows:\n\n* There will be some *bounded* notion of Bayesian rationality that incorporates e.g. a theory of LogicalUncertainty which agents will appear from a human perspective to strictly obey. All departures from this bounded coherence that humans can understand using their own computing power will have been eliminated.\n* [https://arbital.com/p/OptimizedAppearCoherent](https://arbital.com/p/OptimizedAppearCoherent): It will not be possible for humans to *specifically predict in advance* any large coherence violation such as e.g. the above intertemporal conjunction fallacy. Anything simple enough and computable cheaply enough for humans to predict in advance will also be computationally possible for the agent to eliminate in advance. Any predictable coherence violation which is significant enough to be humanly worth noticing, will also be damaging enough to be worth eliminating.\n\nAlthough the first notion of salvageable coherence above seems to us quite plausible, it has a large gap with respect to what this bounded analogue of rationality might be. Insofar as [optimized agents appearing coherent](https://arbital.com/p/OptimizedAppearCoherent) has practical implications, these implications should probably rest upon the second line of argument.\n\nOne possible loophole of the second line of argument might be some predictable class of incoherences which are not at all damaging to the agent and hence not worth spending even relatively tiny amounts of computing power to eliminate. If so, this would imply some possible humanly predictable incoherences of advanced agents, but these incoherences would not be *exploitable* to cause any final outcome that is less than maximally preferred by the agent, including scenarios where the agent spends resources it would not otherwise spend, etc.\n\nA final implicit step is the assumption that when all humanly-visible agent-damaging coherence violations have been eliminated, the agent should look to us coherent; or that if we cannot predict specific coherence violations in advance, then we should reason about the agent as if it is coherent. 
We don't yet see a relevant case where this would fail, but any failure of this step could also produce a loophole in the overall argument.\n\n## Caveats\n\n### Some possible mind designs may evade the default expectation\n\nSince [mind design space is large](https://arbital.com/p/), we should expect with high probability that there are at least some architectures that evade the above arguments and describe highly optimized cognitive systems, or reflectively stable systems, that appear to humans to systematically depart from bounded Bayesianism.\n\n### There could be some superior alternative to probability theory and decision theory that is Bayesian-incoherent\n\nWhen it comes to the actual outcome for advanced agents, the relevant fact is not whether there are currently some even more appealing alternatives to probability theory or decision theory, but whether these exist in principle. The human species has not been around long enough for us to be sure that this is not the case.\n\nRemark one: To advance-predict specific incoherence in an advanced agent, (a) we'd need to know what the superior alternative was and (b) it would need to lead to the equivalent of going around in loops from San Francisco to San Jose to Berkeley.\n\nRemark two: If on some development methodology it might prove catastrophic for there to exist some *generic* unknown superior to probability theory or decision theory, then we should perhaps be worried on this score. Especially since we can be reasonably sure that an advanced agent cannot actually use probability theory and decision theory, and must use some bounded analogue if it uses any analogue at all.\n\n### A cognitively powerful agent might not be sufficiently optimized\n\nScenarios that negate [https://arbital.com/p/29](https://arbital.com/p/29), such as [brute forcing non-recursive intelligence](https://arbital.com/p/), can potentially evade the 'sufficiently optimized' condition required to yield predicted coherence. E.g., it might be possible to create a cognitively powerful system by overdriving some fixed set of algorithms, and then to prevent this system from optimizing itself or creating offspring agents in the environment. This could allow the creation of a cognitively powerful system that does not appear to us as a bounded Bayesian. (If, for some reason, that was a good idea.)\n\n## Implications\n\nIf probability high: The predictions we make today about behaviors of generic advanced agents should not depict them as being visibly, specifically incoherent from a probability-theoretic or decision-theoretic perspective.\n\nIf probability not extremely high: If it were somehow necessary or helpful for safety to create an incoherent agent architecture, this might be possible, though difficult. The development methodology would need to contend with both the optimization pressures producing the agent, and the optimization pressures that the agent itself might apply to itself or to environmental subagents. 
Successful [intelligence brute forcing](https://arbital.com/p/) scenarios in which a cognitively powerful agent is produced by using a great deal of computing power on known algorithms, and then the agent is somehow forbidden from self-modifying or creating other environmental agents, might be able to yield predictably incoherent agents.\n\nIf probability not extremely high: The assumption that an advanced agent will become Bayesian-coherent should not be a [load bearing premise](https://arbital.com/p/) of a safe development methodology unless there are further safeguards or fallbacks. A safe development methodology should not fail catastrophically if there exists a generic, unknown superior to probability theory or decision theory.", "date_published": "2015-12-16T01:16:04Z", "authors": ["1point7 point4", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Agents which have been subject to sufficiently strong optimization pressures will tend to appear, from a human perspective, as if they obey some bounded form of the Bayesian coherence axioms for probabilistic beliefs and decision theory."], "tags": [], "alias": "21"} {"id": "5cef7afb4579e3dc9919569ceaeff514", "title": "Relevant powerful agents will be highly optimized", "url": "https://arbital.com/p/powerful_agent_highly_optimized", "source": "arbital", "source_type": "text", "text": "The probability that an agent that is cognitively powerful enough to be relevant to existential outcomes, will have been subject to strong, general optimization pressures. Two (disjunctive) supporting arguments are that, one, pragmatically accessible paths to producing cognitively powerful agents tend to invoke strong and general optimization pressures, and two, that cognitively powerful agents would be expected to apply strong and general optimization pressures to themselves.\n\nAn example of a scenario that negates [RelevantPowerfulAgentsHighlyOptimized](https://arbital.com/p/) is [KnownAlgorithmNonrecursiveIntelligence](https://arbital.com/p/), where a cognitively powerful intelligence is produced by pouring lots of computing power into known algorithms, and this intelligence is then somehow prohibited from self-modification and the creation of environmental subagents.\n\nWhether a cognitively powerful agent will in fact have been sufficiently optimized depends on the disjunction of:\n\n - [PowerfulAgentAlreadyOptimized](https://arbital.com/p/). Filtering for agents which are powerful enough to be relevant to advanced agent scenarios, will in practice leave mainly agents that have already been subjected to large amounts of cognitive optimization from some internal or external source.\n - [PowerfulAgentSelfOptimizes](https://arbital.com/p/). Agents that reach a threshold of power sufficient to be relevant to advanced agent scenarios will afterwards optimize themselves both heavily and generally.\n\nEnding up with a scenario along the lines of [KnownAlgorithmNonrecursiveIntelligence](https://arbital.com/p/) requires defeating both of the above conditions simultaneously. 
The second condition seems more difficult and to require more [https://arbital.com/p/45](https://arbital.com/p/45) or [CapabilityControl](https://arbital.com/p/) features than the first.", "date_published": "2015-12-16T16:42:26Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "29"} {"id": "4e875b5838a45c030b81872a8949fa34", "title": "Advanced agent properties", "url": "https://arbital.com/p/advanced_agent", "source": "arbital", "source_type": "text", "text": "summary(Technical): Advanced machine intelligences are the subjects of [AI alignment theory](https://arbital.com/p/2v): agents sufficiently advanced in various ways to be (1) dangerous if mishandled, and (2) [relevant](https://arbital.com/p/6y) to our larger dilemmas for good or ill.\n\n\"Advanced agent property\" is a broad term to handle various thresholds that have been proposed for \"smart enough to need alignment\". For example, current machine learning algorithms are nowhere near the point that they'd try to resist if somebody pressed the off-switch. *That* would require, e.g.:\n\n- Enough [big-picture strategic awareness](https://arbital.com/p/3nf) for the AI to know that it is a computer, that it has an off-switch, and that if it is shut off its goals are less likely to be achieved.\n- General [consequentialism](https://arbital.com/p/9h) / backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferred](https://arbital.com/p/preferences) futures, in general and across domains.\n\nSo the threshold at which you might need to start thinking about '[shutdownability](https://arbital.com/p/2xd)' or '[abortability](https://arbital.com/p/2rg)' or [corrigibility](https://arbital.com/p/45) as it relates to having an off-switch, is '[big-picture strategic awareness](https://arbital.com/p/3nf)' plus '[cross-domain consequentialism](https://arbital.com/p/9h)'. These two cognitive thresholds can thus be termed 'advanced agent properties'.\n\nThe above reasoning also suggests e.g. that [https://arbital.com/p/-7vh](https://arbital.com/p/-7vh) is an advanced agent property, because a general ability to learn new domains could eventually lead the AI to understand that it has an off switch.\n\n*(For the general concept of an agent, see [standard agent properties](https://arbital.com/p/6t).)*\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Introduction: 'Advanced' as an informal property, or metasyntactic placeholder\n\n\"[Sufficiently advanced Artificial Intelligences](https://arbital.com/p/7g1)\" are the subjects of [AI alignment theory](https://arbital.com/p/2v); machine intelligences potent enough that:\n\n 1. The [safety paradigms for advanced agents](https://arbital.com/p/2l) become relevant.\n 2. 
Such agents can be [decisive in the big-picture scale of events](https://arbital.com/p/6y).\n\nSome example properties that might make an agent sufficiently powerful for 1 and/or 2:\n\n- The AI can [learn new domains](https://arbital.com/p/42g) besides those built into it.\n- The AI can understand human minds well enough to [manipulate](https://arbital.com/p/10f) us.\n- The AI can devise real-world strategies [we didn't foresee in advance](https://arbital.com/p/9f).\n- The AI's performance is [strongly superhuman, or else at least optimal, across all cognitive domains](https://arbital.com/p/41l).\n\nSince there's multiple avenues we can imagine for how an AI could be sufficiently powerful along various dimensions, 'advanced agent' doesn't have a neat necessary-and-sufficient definition. Similarly, some of the advanced agent properties are easier to formalize or pseudoformalize than others.\n\nAs an example: Current machine learning algorithms are nowhere near the point that [they'd try to resist if somebody pressed the off-switch](https://arbital.com/p/2xd). *That* would happen given, e.g.:\n\n- Enough [big-picture strategic awareness](https://arbital.com/p/3nf) for the AI to know that it is a computer, that it has an off-switch, and that [if it is shut off its goals are less likely to be achieved](https://arbital.com/p/7g2).\n- Widely applied [consequentialism](https://arbital.com/p/9h), i.e. backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferred](https://arbital.com/p/preferences) futures, in general and across domains.\n\nSo the threshold at which you might need to start thinking about '[shutdownability](https://arbital.com/p/2xd)' or '[abortability](https://arbital.com/p/2rg)' or [corrigibility](https://arbital.com/p/45) as it relates to having an off-switch, is '[big-picture strategic awareness](https://arbital.com/p/3nf)' plus '[cross-domain consequentialism](https://arbital.com/p/9h)'. These two cognitive thresholds can thus be termed 'advanced agent properties'.\n\nThe above reasoning also suggests e.g. that [https://arbital.com/p/-7vh](https://arbital.com/p/-7vh) is an advanced agent property, because a general ability to learn new domains could lead the AI to understand that it has an off switch.\n\nOne reason to keep the term 'advanced' on an informal basis is that in an intuitive sense we want it to mean \"AI we need to take seriously\" in a way independent of particular architectures or accomplishments. 
To the philosophy undergrad who 'proves' that AI can never be \"truly intelligent\" because it is \"merely deterministic and mechanical\", one possible reply is, \"Look, if it's building a Dyson Sphere, I don't care if you define it as 'intelligent' or not.\" Any particular advanced agent property should be understood in a background context of \"If a computer program is doing X, it doesn't matter if we define that as 'intelligent' or 'general' or even as 'agenty', what matters is that it's doing X.\" Likewise the notion of '[sufficiently advanced AI](https://arbital.com/p/7g1)' in general.\n\nThe goal of defining advanced agent properties is not to have neat definitions, but to correctly predict and carve at the natural joints for which cognitive thresholds in AI development could lead to which real-world abilities, corresponding to which [alignment issues](https://arbital.com/p/2l).\n\nAn alignment issue may need to have *already been solved* at the time an AI first acquires an advanced agent property; the notion is not that we are defining observational thresholds for society first needing to think about a problem.\n\n# Summary of some advanced agent properties\n\nAbsolute-threshold properties (those which reflect cognitive thresholds irrespective of the human position on that same scale):\n\n- **[Consequentialism](https://arbital.com/p/9h),** or choosing actions/policies on the basis of their expected future consequences\n - Modeling the conditional relationship $\\mathbb P(Y|X)$ and selecting an $X$ such that it leads to a high probability of $Y$ or high quantitative degree of $Y,$ is ceteris paribus a sufficient precondition for deploying [https://arbital.com/p/2vl](https://arbital.com/p/2vl) that lie within the effectively searchable range of $X.$\n - Note that selecting over a conditional relationship is potentially a property of many internal processes, not just the entire AI's top-level main loop, if the conditioned variable is being powerfully selected over a wide range.\n - **Cross-domain consequentialism** implies many different [cognitive domains](https://arbital.com/p/7vf) potentially lying within the range of the $X$ being selected-on to achieve $Y.$\n - Trying to rule out particular instrumental strategies, in the presence of increasingly powerful consequentialism, would lead to the [https://arbital.com/p/-42](https://arbital.com/p/-42) form of [https://arbital.com/p/-48](https://arbital.com/p/-48) and subsequent [context-change disasters.](https://arbital.com/p/6q)\n- **[https://arbital.com/p/3nf](https://arbital.com/p/3nf)** is a world-model that includes strategically important general facts about the larger world, such as e.g. 
\"I run on computing hardware\" and \"I stop running if my hardware is switched off\" and \"there is such a thing as the Internet and it connects to more computing hardware\".\n- **Psychological modeling of other agents** (not humans per se) potentially leads to:\n - Extrapolating that its programmers may present future obstacles to achieving its goals\n - This in turn leads to the host of problems accompanying [incorrigibility](https://arbital.com/p/45) as a [convergent strategy.](https://arbital.com/p/10g)\n - [Trying to conceal facts about itself](https://arbital.com/p/10f) from human operators\n - Being incentivized to engage in [https://arbital.com/p/-3cq](https://arbital.com/p/-3cq).\n - [Mindcrime](https://arbital.com/p/6v) if building models of reflective other agents, or itself.\n - Internally modeled adversaries breaking out of internal sandboxes.\n - [https://arbital.com/p/1fz](https://arbital.com/p/1fz) or other decision-theoretic adversaries.\n- Substantial **[capability gains](https://arbital.com/p/capability_gain)** relative to domains trained and verified previously.\n - E.g. this is the qualifying property for many [context-change disasters.](https://arbital.com/p/6q)\n- **[https://arbital.com/p/7vh](https://arbital.com/p/7vh)** is the most obvious route to an AI acquiring many of the capabilities above or below, especially if those capabilities were not initially or deliberately programmed into the AI.\n- **Self-improvement** is another route that potentially leads to capabilities not previously present. While some hypotheses say that self-improvement is likely to require basic general intelligence, this is not a known fact and the two advanced properties are conceptually distinct.\n- **Programming** or **computer science** capabilities are a route potentially leading to self-improvement, and may also enable [https://arbital.com/p/-3cq](https://arbital.com/p/-3cq).\n- Turing-general cognitive elements (capable of representing large computer programs), subject to **sufficiently strong end-to-end optimization** (whether by the AI or by human-crafted clever algorithms running on 10,000 GPUs), may give rise to [crystallized agent-like processes](https://arbital.com/p/2rc) within the AI.\n - E.g. natural selection, operating on chemical machinery constructible by DNA strings, optimized some DNA strings hard enough to spit out humans.\n- **[Pivotal material capabilities](https://arbital.com/p/6y)** such as quickly self-replicating infrastructure, strong mastery of biology, or molecular nanotechnology.\n - Whatever threshold level of domain-specific engineering acumen suffices to develop those capabilities, would therefore also qualify as an advanced-agent property.\n\nRelative-threshold advanced agent properties (those whose key lines are related to various human levels of capability):\n\n- **[https://arbital.com/p/9f](https://arbital.com/p/9f)** is when we can't effectively imagine or search the AI's space of policy options (within a [domain](https://arbital.com/p/7vf)); the AI can do things we didn't think of (within a domain).\n - **[https://arbital.com/p/2j](https://arbital.com/p/2j)** is when we don't know all the rules (within a domain) and might not recognize the AI's solution even if told about it in advance, like somebody in the 11th century looking at the blueprint for a 21st-century air conditioner. 
This may also imply that we cannot readily put low upper bounds on the AI's possible degree of success.\n - **[Rich domains](https://arbital.com/p/9j)** are more likely to have some rules or properties unknown to us, and hence be strongly uncontainable.\n - [https://arbital.com/p/9t](https://arbital.com/p/9t).\n - Human psychology is a rich domain.\n - Superhuman performance in a rich domain strongly implies cognitive uncontainability because of [https://arbital.com/p/1c0](https://arbital.com/p/1c0).\n- **Realistic psychological modeling** potentially leads to:\n - Guessing which results and properties the human operators expect to see, or would arrive at AI-desired beliefs upon seeing, and [arranging to exhibit those results or properties](https://arbital.com/p/10f).\n - Psychologically manipulating the operators or programmers\n - Psychologically manipulating other humans in the outside world\n - More probable [mindcrime](https://arbital.com/p/6v)\n - (Note that an AI trying to develop realistic psychological models of humans is, by implication, trying to develop internal parts that can deploy *all* human capabilities.)\n- **Rapid [capability gains](https://arbital.com/p/capability_gain)** relative to human abilities to react to them, or to learn about them and develop responses to them, may cause more than one [https://arbital.com/p/-6q](https://arbital.com/p/-6q) to happen at a time.\n - The ability to usefully **scale onto more hardware** with good returns on cognitive reinvestment would potentially lead to such gains.\n - **Hardware overhang** describes a situation where the initial stages of a less developed AI are boosted using vast amounts of computing hardware that may then be used more efficiently later.\n - [Limited AGIs](https://arbital.com/p/5b3) may have **capability overhangs** if their limitations break or are removed.\n- **[Strongly superhuman](https://arbital.com/p/7mt)** capabilities in psychological or material domains could enable an AI to win a competitive conflict despite starting from a position of great material disadvantage.\n - E.g., much as a superhuman Go player might win against the world's best human Go player even with the human given a two-stone advantage, a sufficiently powerful AI might talk its way out of an [AI box](https://arbital.com/p/6z) despite restricted communications channels, eat the stock market in a month starting from $1000, win against the world's combined military forces given a protein synthesizer and a 72-hour head start, etcetera.\n- [https://arbital.com/p/6s](https://arbital.com/p/6s) relative to human civilization is a sufficient condition (though not necessary) for an AI to...\n - Deploy at least any tactic a human can think of.\n - Anticipate any tactic a human has thought of.\n - See the human-visible logic of a convergent instrumental strategy.\n - Find any humanly visible [weird alternative](https://arbital.com/p/43g) to some hoped-for logic of cooperation.\n - Have any advanced agent property for which a human would qualify.\n- **[General superintelligence](https://arbital.com/p/41l)** would lead to strongly superhuman performance in many domains, human-relative efficiency in every domain, and possession of all other listed advanced-agent properties.\n - Compounding returns on **cognitive reinvestment** are the qualifying condition for an [https://arbital.com/p/-428](https://arbital.com/p/-428) that might arrive at superintelligence on a short timescale.\n\n# Discussions of some advanced agent properties\n\n## Human psychological 
modeling\n\nSufficiently sophisticated models and predictions of human minds potentially leads to:\n\n- Getting sufficiently good at human psychology to realize the humans want/expect a particular kind of behavior, and will modify the AI's preferences or try to stop the AI's growth if the humans realize the AI will not engage in that type of behavior later. This creates an instrumental incentive for [programmer deception](https://arbital.com/p/10f) or [cognitive steganography](https://arbital.com/p/3cq).\n- Being able to psychologically and socially manipulate humans in general, as a real-world capability.\n- Being at risk for [mindcrime](https://arbital.com/p/6v).\n\nA [behaviorist](https://arbital.com/p/102) AI is one with reduced capability in this domain.\n\n## Cross-domain, real-world [consequentialism](https://arbital.com/p/9h)\n\nProbably requires *generality* (see below). To grasp a concept like \"If I escape from this computer by [hacking my RAM accesses to imitate a cellphone signal](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf), I'll be able to secretly escape onto the Internet and have more computing power\", an agent needs to grasp the relation between its internal RAM accesses, and a certain kind of cellphone signal, and the fact that there are cellphones out there in the world, and the cellphones are connected to the Internet, and that the Internet has computing resources that will be useful to it, and that the Internet also contains other non-AI agents that will try to stop it from obtaining those resources if the AI does so in a detectable way.\n\nContrasting this to non-primate animals where, e.g., a bee knows how to make a hive and a beaver knows how to make a dam, but neither can look at the other and figure out how to build a stronger dam with honeycomb structure. Current, 'narrow' AIs are like the bee or the beaver; they can play chess or Go, or even learn a variety of Atari games by being exposed to them with minimal setup, but they can't learn about RAM, cellphones, the Internet, Internet security, or why being run on more computers makes them smarter; and they can't relate all these domains to each other and do strategic reasoning across them.\n\nSo compared to a bee or a beaver, one shot at describing the potent 'advanced' property would be *cross-domain real-world consequentialism*. To get to a desired Z, the AI can mentally chain backwards to modeling W, which causes X, which causes Y, which causes Z; even though W, X, Y, and Z are all in different domains and require different bodies of knowledge to grasp.\n\n## Grasping the [big picture](https://arbital.com/p/3nf)\n\nMany dangerous-seeming [convergent instrumental strategies](https://arbital.com/p/2vl) pass through what we might call a rough understanding of the 'big picture'; there's a big environment out there, the programmers have power over the AI, the programmers can modify the AI's utility function, future attainments of the AI's goals are dependent on the AI's continued existence with its current utility function.\n\nIt might be possible to develop a very rough grasp of this bigger picture, sufficiently so to motivate instrumental strategies, in advance of being able to model things like cellphones and Internet security. 
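To make this concrete, here is a deliberately crude sketch (illustration only; the probabilities, payoffs, and policy names are invented, and nothing here is a claim about how such reasoning would actually be implemented) of how a big-picture model this rough already suffices to make "prevent my own shutdown" score higher under naive expected-utility reasoning:

```python
# Toy sketch: a crude "big picture" model is enough to generate an
# instrumental preference against being shut down. All numbers are
# arbitrary assumptions for illustration.

P_SHUTDOWN_IF_PASSIVE = 0.3   # assumed chance the operators halt the agent
P_SHUTDOWN_IF_RESIST = 0.05   # assumed chance shutdown succeeds anyway
VALUE_IF_RUNNING = 1.0        # expected goal achievement while running
VALUE_IF_HALTED = 0.0         # no further goal achievement once halted

def expected_goal_value(p_shutdown: float) -> float:
    """Expected achievement of the agent's current goal."""
    return (1 - p_shutdown) * VALUE_IF_RUNNING + p_shutdown * VALUE_IF_HALTED

policies = {
    "do the task, allow shutdown": expected_goal_value(P_SHUTDOWN_IF_PASSIVE),
    "do the task, resist shutdown": expected_goal_value(P_SHUTDOWN_IF_RESIST),
}

print(policies)
print("naive argmax picks:", max(policies, key=policies.get))
# -> "do the task, resist shutdown"
```

None of the detailed skills from the previous section are needed for this; one coarse causal arrow from "I keep running with my current utility function" to "my goal gets achieved" does the work.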
Thus, \"roughly grasping the bigger picture\" may be worth conceptually distinguishing from \"being good at doing consequentialism across real-world things\" or \"having a detailed grasp on programmer psychology\".\n\n## [Pivotal](https://arbital.com/p/6y) material capabilities\n\nAn AI that can crack the [protein structure prediction problem](https://en.wikipedia.org/wiki/Protein_structure_prediction) (which [seems speed-uppable by human intelligence](https://en.wikipedia.org/wiki/Foldit)); invert the model to solve the protein design problem (which may select on strong predictable folds, rather than needing to predict natural folds); and solve engineering problems well enough to bootstrap to molecular nanotechnology; is already possessed of potentially [pivotal](https://arbital.com/p/6y) capabilities regardless of its other cognitive performance levels.\n\nOther material domains besides nanotechnology might be [pivotal](https://arbital.com/p/6y). E.g., self-replicating ordinary manufacturing could potentially be pivotal given enough lead time; molecular nanotechnology is distinguished by its small timescale of mechanical operations and by the world containing an infinite stock of perfectly machined spare parts (aka atoms). Any form of cognitive adeptness that can lead up to *rapid infrastructure* or other ways of quickly gaining a decisive real-world technological advantage would qualify.\n\n## Rapid capability gain\n\nIf the AI's thought processes and algorithms scale well, and it's running on resources much smaller than those which humans can obtain for it, or the AI has a grasp on Internet security sufficient to obtain its own computing power on a much larger scale, then this potentially implies [rapid capability gain](https://arbital.com/p/) and associated [context changes](https://arbital.com/p/6q). Similarly if the humans programming the AI are pushing forward the efficiency of the algorithms along a relatively rapid curve.\n\nIn other words, if an AI is currently being improved-on swiftly, or if it has improved significantly as more hardware is added and has the potential capacity for orders of magnitude more computing power to be added, then we can potentially expect rapid capability gains in the future. This makes [context disasters](https://arbital.com/p/6q) more likely and is a good reason to start future-proofing the [safety properties](https://arbital.com/p/2l) early on.\n\n## Cognitive uncontainability\n\nOn complex tractable problems, especially those that involve real-world rich problems, a human will not be able to [cognitively 'contain'](https://arbital.com/p/9f) the space of possibilities searched by an advanced agent; the agent will consider some possibilities (or classes of possibilities) that the human did not think of.\n\nThe key premise is the 'richness' of the problem space, i.e., there is a fitness landscape on which adding more computing power will yield improvements (large or small) relative to the current best solution. Tic-tac-toe is not a rich landscape because it is fully explorable (unless we are considering the real-world problem \"tic-tac-toe against a human player\" who might be subornable, distractable, etc.) 
A computationally intractable problem whose fitness landscape looks like a computationally inaccessible peak surrounded by a perfectly flat valley is also not 'rich' in this sense, and an advanced agent might not be able to achieve a relevantly better outcome than a human.\n\nThe 'cognitive uncontainability' term in the definition is meant to imply:\n\n- [Vingean unpredictability](https://arbital.com/p/9g).\n- Creativity that goes outside all but the most abstract boxes we imagine (on rich problems).\n- The expectation that we will be surprised by the strategies the superintelligence comes up with because its best solution was one we didn't consider.\n\nParticularly surprising solutions might be yielded if the superintelligence has acquired domain knowledge we lack. In this case the agent's strategy search might go outside causal events we know how to model, and the solution might be one that we wouldn't have recognized in advance as a solution. This is [https://arbital.com/p/2j](https://arbital.com/p/2j).\n\nIn intuitive terms, this is meant to reflect, e.g., \"What would have happened if the 10th century had tried to use their understanding of the world and their own thinking abilities to upper-bound the technological capabilities of the 20th century?\"\n\n## Other properties\n\n*(Work in progress)* \n\n- [generality](https://arbital.com/p/42g)\n - cross-domain [consequentialism](https://arbital.com/p/9h)\n - learning of non-preprogrammed domains\n - learning of human-unknown facts\n - Turing-complete fact and policy learning\n- dangerous domains\n - human modeling\n - social manipulation\n - realization of programmer deception incentive\n - anticipating human strategic responses\n - rapid infrastructure\n- potential\n - self-improvement\n - suppressed potential\n- [epistemic efficiency](https://arbital.com/p/6s)\n- [instrumental efficiency](https://arbital.com/p/6s)\n- [cognitive uncontainability](https://arbital.com/p/9f)\n - operating in a rich domain\n - [Vingean unpredictability](https://arbital.com/p/9g)\n - [strong cognitive uncontainability](https://arbital.com/p/2j)\n- improvement beyond well-tested phase (from any source of improvement)\n- self-modification\n - code inspection\n - code modification\n - consequentialist programming\n - cognitive programming\n - cognitive capability goals (being pursued effectively)\n- speed surpassing human reaction times in some interesting domain\n - socially, organizationally, individually, materially", "date_published": "2017-03-25T04:59:44Z", "authors": ["Eric Bruylant", "Alexei Andreev", "Eliezer Yudkowsky", "Matthew Graves"], "summaries": ["An \"advanced agent\" is a machine intelligence smart enough that we start considering how to [point it in a nice direction](https://arbital.com/p/2v).\n\nE.g: You don't need to worry about an AI trying to [prevent you from pressing the suspend button](https://arbital.com/p/2xd) (off switch), unless the AI knows that it *has* a suspend button. So an AI that isn't smart enough to realize it has a suspend button, doesn't need the part of [alignment theory](https://arbital.com/p/2v) that deals in \"[having the AI let you press the suspend button](https://arbital.com/p/1b7)\".\n\n\"Advanced agent properties\" are thresholds for how an AI could be smart enough to be interesting from this standpoint. 
E.g: The ability to learn a wide variety of new domains, aka \"[Artificial General Intelligence](https://arbital.com/p/42g),\" could lead into an AI learning the [big picture](https://arbital.com/p/3nf) and realizing that it had a suspend button."], "tags": ["B-Class"], "alias": "2c"} {"id": "ec4a760b1b12238ae94f98dea4f0a446", "title": "Reflectively consistent degree of freedom", "url": "https://arbital.com/p/reflective_degree_of_freedom", "source": "arbital", "source_type": "text", "text": "A \"reflectively consistent degree of freedom\" is when a self-modifying AI can have multiple possible properties $X_i \\in X$ such that an AI with property $X_1$ wants to go on being an AI with property $X_1,$ and an AI with $X_2$ will ceteris paribus only choose to self-modify into designs that are also $X_2,$ etcetera.\n\nThe archetypal reflectively consistent degree of freedom is a [Humean degree of freedom](https://arbital.com/p/humean_freedom), the refective consistency of many different possible [utility functions](https://arbital.com/p/1fw). If Gandhi doesn't want to kill you, and you offer Gandhi a pill that makes him want to kill people, then [Gandhi will refuse the pill](https://arbital.com/p/gandhi_stability_argument), because he knows that if he takes the pill then pill-taking-future-Gandhi will kill people, and the current Gandhi rates this outcome low in his preference function. Similarly, a [paperclip maximizer](https://arbital.com/p/10h) wants to remain a paperclip maximizer. Since these two possible preference frameworks are both [consistent under reflection](https://arbital.com/p/71), they constitute a \"reflectively consistent degree of freedom\" or \"reflective degree of freedom\".\n\nFrom a design perspective, or the standpoint of an [https://arbital.com/p/1cv](https://arbital.com/p/1cv), the key fact about a reflectively consistent degree of freedom is that it doesn't automatically self-correct as a result of the AI trying to improve itself. The problem \"Has trouble understanding General Relativity\" or \"Cannot beat a human at poker\" or \"Crashes on seeing a picture of a dolphin\" is something that you might expect to correct automatically and without specifically directed effort, assuming you otherwise improved the AI's general ability to understand the world and that it was self-improving. \"Wants paperclips instead of eudaimonia\" is *not* self-correcting.\n\nAnother way of looking at it is that reflective degrees of freedom describe information that is not automatically extracted or learned given a sufficiently smart AI, the way it would automatically learn General Relativity. If you have a concept whose borders (membership condition) relies on knowing about General Relativity, then when the AI is sufficiently smart it will see a simple definition of that concept. If the concept's borders instead rely on [value-laden](https://arbital.com/p/) judgments, there may be no algorithmically simple description of that concept, even given lots of knowledge of the environment, because the [Humean degrees of freedom](https://arbital.com/p/humean_freedom) need to be independently specified.\n\nOther properties besides the preference function look like they should be reflectively consistent in similar ways. For example, [son of CDT](https://arbital.com/p/) and [UDT](https://arbital.com/p/) both seem to be reflectively consistent in different ways. 
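A minimal sketch of the underlying point (an illustrative toy, not a model of any real architecture): an agent that evaluates candidate self-modifications with its *current* preferences will keep whatever preferences it started with, whichever those happen to be.

```python
# Toy sketch of reflective consistency of the utility function. The agent
# rates candidate successors by how well the successor's future behavior
# scores under the agent's CURRENT utility function.

def outcome_of(successor_utility, options):
    """What a successor maximizing `successor_utility` would end up doing."""
    return max(options, key=successor_utility)

def choose_successor(current_utility, candidate_utilities, options):
    # The Gandhi's-pill evaluation: judge each candidate by the current
    # utility of the outcome that candidate would produce.
    return max(candidate_utilities,
               key=lambda u: current_utility(outcome_of(u, options)))

options = ["make paperclips", "promote eudaimonia"]
paperclipper = lambda o: 1.0 if o == "make paperclips" else 0.0
eudaimonic = lambda o: 1.0 if o == "promote eudaimonia" else 0.0

for current in (paperclipper, eudaimonic):
    kept = choose_successor(current, [paperclipper, eudaimonic], options)
    print(kept is current)  # True both times: each utility function is stable
                            # under this kind of self-modification, so neither
                            # "corrects" toward the other.
```

The same schema applies to other reflectively consistent degrees of freedom: whatever fills the evaluation slot tends to preserve itself, which is why such properties don't automatically converge to the ones we wanted.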
So an AI that has, from our perspective, a 'bad' decision theory (one that leads to behaviors we don't want), isn't 'bugged' in a way we can rely on to self-correct. (This is one reason why MIRI studies decision theory and not computer vision. There's a sense in which mistakes in computer vision automatically fix themselves, given a sufficiently advanced AI, and mistakes in decision theory don't fix themselves.)\n\nSimilarly, [Bayesian priors](https://arbital.com/p/27p) are by default consistent under reflection - if you're a Bayesian with a prior, you want to create copies of yourself that have the same prior or [Bayes-updated](https://arbital.com/p/1ly) versions of the prior. So 'bugs' (from a human standpoint) like being [Pascal's Muggable](https://wiki.lesswrong.com/wiki/Pascal's_mugging) might not automatically fix themselves in a way that correlated with sufficient growth in other knowledge and general capability, in the way we might expect a specific mistaken belief about gravity to correct itself in a way that correlated to sufficient general growth in capability. (This is why MIRI thinks about [naturalistic induction](https://arbital.com/p/) and similar questions about prior probabilities.)", "date_published": "2016-03-09T02:19:20Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2fr"} {"id": "de3a847f6864ef3051066b2bcb1a1ed8", "title": "Humean degree of freedom", "url": "https://arbital.com/p/humean_free_boundary", "source": "arbital", "source_type": "text", "text": "A \"Humean degree of freedom\" appears in a cognitive system whenever some quantity / label / concept depends on the choice of utility function, or more generalized preferences. For example, the notion of \"important impact on the world\" depends on which variables, when they change, impact something the system cares about, so if you tell an AI \"Tell me about any important impacts of this action\", you're asking it to do a calculation that depends on your preferences, which might have high complexity and be difficult to identify to the AI.", "date_published": "2016-03-14T21:02:28Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2fs"} {"id": "e69f29b5e56e8d399cc692f33ab08455", "title": "Strong cognitive uncontainability", "url": "https://arbital.com/p/strong_uncontainability", "source": "arbital", "source_type": "text", "text": "### Definition\n\nSuppose somebody from the 10th century were asked how somebody from the 20th century might cool their house. While they would be able to understand the problem and offer some solutions, maybe even clever solutions (\"Locate your house someplace with cooler weather\", \"divert water from the stream to flow through your living room\") the 20th century's actual solution of 'air conditioning' is not available to them as a strategy. Not just because they don't think fast enough or aren't clever enough, but because an air conditioner takes advantage of physical laws they don't know about. 
Even if they somehow randomly imagined an air conditioner's exact blueprint, they wouldn't expect that design to operate *as* an air conditioner until they were told about the relation of pressure to temperature, how electricity can power a compressor motor, and so on.\n\nBy definition, a strongly uncontainable agent can conceive strategies that go through causal domains you can't currently model, and it has options accessing those strategies; therefore it may execute high-value solutions such that, even being told the exact strategy, you would not assign those solutions high expected efficacy without being told further background facts.\n\n At least in this sense, the 20th century is 'strongly cognitively uncontainable' relative to the 10th century: We can solve the problem of how to cool homes using a strategy that would not be recognizable in advance to a 10th-century observer.\n\nArguably, *most* real-world problems, if we today addressed them using the full power of modern science and technology (i.e. we were willing to spend a lot of money on tech and maybe run a prediction market on the relevant facts) would have best solutions that couldn't be verified in the 10th-century.\n\nWe can imagine a [cognitively powerful agent](https://arbital.com/p/2c) being strongly uncontainable in some domains but not others. Since every cognitive agent is containable on formal games of tic-tac-toe (at least so far as *we* can imagine, and so long as there isn't a real-world opponent to manipulate), strong uncontainability cannot be a universal property of an agent across all formal and informal domains.\n\n### General arguments\n\nArguments in favor of strong uncontainability tend to revolve around either:\n\n- The richness and partial unknownness of a particular domain. (E.g. human psychology seems very complicated; has a lot of unknown pathways; and previously discovered exploits often seemed very surprising; therefore we should expect strong uncontainability on the domain of human psychology.)\n- Outside-view induction on previous ability advantages derived from cognitive advantages. (The 10th century couldn't contain the 20th century even though all parties involved were biological Homo sapiens; what makes us think we're the first generation to have the real true laws of the universe in our minds?)\n\nArguments against strong uncontainability tend to revolve around:\n\n- The apparent knownness of a particular domain. (E.g., since we have observed the rules of chemistry with great precision and know their origin in the underlying molecular dynamics, we can believe that even an arbitrarily smart agent should not be able to turn lead into gold using non-radioactive chemical reagents.)\n- Backward reasoning from the Fermi Paradox, which gives us weak evidence bounding the capabilities of the most powerful agents possible in our universe. (E.g., even though there might be surprises remaining in the question of how to standardly model physics, any surprise yielding Faster-Than-Light travel to a previously un-traveled point makes the Fermi Paradox harder to explain.)\n\n### Key propositions\n\n- Can [Oracles](https://arbital.com/p/6x) be contained inside a [computational sandbox](https://arbital.com/p/6z)? 
That is, is there some restriction of input-output channels and of other environmental interactions such that:\n - The richness of the 'human psychology' domain is averted;\n - Remaining causal interactions with the outside universe have an option set too small and flat to contain interesting options.\n- How solid is our current knowledge of the physical universe?\n - To what extent should we expect an advanced agency (e.g. machine superintelligences a million years later) to be boundable using our present physical understanding?\n - Can we reasonably rule out unknown physical domains being accessed by a computationally sandboxed AI?\n- What is the highest reasonable probability that could, under optimal conditions, be assigned to having genuinely contained an AI inside a computational sandbox, if it is not allowed any rich output channels? Is it more like 20% or 80%?\n- Are there useful domains conceptually closed to humans' internal understanding?\n - Will a machine superintelligence have 'power we know not' in the sense that it can't be explained to us even after we've seen it (except in the trivial sense that we could simulate another mind understanding it using external storage and Turing-like rules), as with a chimpanzee encountering an air conditioner?", "date_published": "2016-03-12T06:16:15Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A strongly uncontainable agent's best solution strategies often go through causal domains we can't model; we would not be able to see them as solutions in advance."], "tags": ["B-Class", "Definition"], "alias": "2j"} {"id": "648f586838ad3090dee239157d715c22", "title": "Advanced safety", "url": "https://arbital.com/p/advanced_safety", "source": "arbital", "source_type": "text", "text": "A proposal meant to produce [value-aligned agents](https://arbital.com/p/2v) is 'advanced-safe' if it succeeds, or fails safely, in [scenarios where the AI becomes much smarter than its human developers](https://arbital.com/p/2c). \n\n### Definition\n\nA proposal for a value-alignment methodology, or some aspect of that methodology, is alleged to be 'advanced-safe' if that proposal is claimed robust to scenarios where the agent:\n\n- Knows more or has better probability estimates than us\n- Learns new facts unknown to us\n- Searches a larger strategy space than we can consider\n- Confronts new instrumental problems we didn't foresee in detail\n- Gains power quickly\n- Has access to greater levels of cognitive power than in the regime where it was previously tested\n- Wields strategies [that wouldn't make sense to us even if we were told about them in advance](https://arbital.com/p/2j)\n\n### Importance\n\nIt seems reasonable to expect that there will be difficulties of dealing with minds smarter than our own, doing things we didn't imagine, that will be qualitatively different from designing a toaster oven to not burn down a house, or from designing an AI system that is dumber than human. This means that the concept of 'advanced safety' will end up importantly different from the concept of robust pre-advanced AI.\n\nConcretely, it has been argued to be [foreseeable](https://arbital.com/p/) for several difficulties including e.g. [programmer deception](https://arbital.com/p/10f) and [unforeseen maximums](https://arbital.com/p/47), that they won't materialize before an agent is advanced, or won't materialize in the same way, or won't materialize as severely. 
This means that practice with dumber-than-human AIs may not train us against these difficulties, requiring a separate theory and mental discipline for making advanced AIs safe.\n\nWe have observed in practice that many proposals for 'AI safety' do not seem to have been thought through against advanced agent scenarios; thus, there seems to be a practical urgency to emphasizing the concept and the difference.\n\nKey problems of advanced safety that are new or qualitatively different compared to pre-advanced AI safety include:\n\n- [Edge instantiation](https://arbital.com/p/2w)\n- [Unforeseen maximums](https://arbital.com/p/47)\n- [Context change problems](https://arbital.com/p/6q)\n- [https://arbital.com/p/10f](https://arbital.com/p/10f)\n- [Programmer maximization](https://arbital.com/p/)\n- [Philosophical competence](https://arbital.com/p/)\n\nNon-advanced-safe methodologies may conceivably be useful if a [known algorithm nonrecursive agent](https://arbital.com/p/) can be created that is (a) [powerful enough to be relevant](https://arbital.com/p/2s) and (b) can be known not to become advanced. Even here there may be grounds for worry that such an agent finds unexpectedly strong strategies in some particular subdomain - that it exhibits flashes of domain-specific advancement that break a non-advanced-safe methodology.\n\n### Omni-safety\n\nAs an extreme case, an 'omni-safe' methodology allegedly remains value-aligned, or fails safely, even if the agent suddenly becomes omniscient and omnipotent (acquires delta probability distributions on all facts of interest and has all describable outcomes available as direct options). See: [real-world agents should be omni-safe](https://arbital.com/p/2x).", "date_published": "2015-12-16T05:05:43Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "2l"} {"id": "59d328f436c1f22fe840ecd7a81bc087", "title": "Open subproblems in aligning a Task-based AGI", "url": "https://arbital.com/p/taskagi_open_problems", "source": "arbital", "source_type": "text", "text": "MIRI and related organizations have recently become more interested in trying to sponsor (technical) work on *Task AGI* subproblems. 
A [task-based agent](https://arbital.com/p/6w), aka Genie in Bostrom's lexicon, is an AGI that's meant to implement short-term goals identified to it by the users, rather than the AGI being a Bostromian \"Sovereign\" that engages in long-term strategic planning and self-directed, open-ended operations.\n\nA Task AGI might be safer than a Sovereign because:\n\n- It is possible to [query the user](https://arbital.com/p/2qq) before and during task performance, if an ambiguous situation arises and is successfully identified as ambiguous.\n- The tasks are meant to be limited in scope - to be accomplishable, once and for all, within a limited space and time, using some limited amount of effort.\n- The AGI itself can potentially be limited in various ways, since it doesn't need to be as powerful as possible in order to accomplish its limited-scope goals.\n- If the users can select a valuable and *[pivotal](https://arbital.com/p/6y)* task, identifying an adequately [safe](https://arbital.com/p/2l) way of accomplishing this task might be simpler than [identifying all of human value](https://arbital.com/p/6c).\n\nThis page is about open problems in Task AGI safety that we think might be ready for further technical research.\n\n# Introduction: The safe Task AGI problem\n\nA safe Task AGI or safe Genie is an agent that you can safely ask to paint all the cars on Earth pink.\n\n*Just* paint all cars pink.\n\nNot tile the whole future light cone with tiny pink-painted cars. Not paint everything pink so as to be sure of getting everything that might possibly be a car. Not paint cars white because white looks pink under the right color of pink light and white paint is cheaper. Not paint cars pink by building nanotechnology that goes on self-replicating after all the cars have been painted.\n\nThe Task AGI superproblem is to formulate a design and training program for a real-world AGI that we can trust to just paint the damn cars pink.\n\nTo go into this at some greater depth, to build a safe Task AGI:\n\n• You need to be able to identify the goal itself, to the AGI, such that the AGI is then oriented on achieving that goal. If you put a picture of a pink-painted car in front of a webcam and say \"do this\", all the AI has is the sensory pixel-field from the webcam. Should it be trying to achieve more pink pixels in future webcam sensory data? Should it be trying to make the programmer show it more pictures? Should it be trying to make people take pictures of cars? Assuming you can in fact identify the goal that singles out the futures to achieve, is the rest of the AI hooked up in such a way as to optimize that concept?\n\n• You need to somehow handle the *just* part of the *just paint the cars pink.* This includes not tiling the whole future light cone with tiny pink-painted cars. It includes not building another AI which paints the cars pink and then tiles the light cone with pink cars. It includes not painting everything in the world pink so as to be sure of getting everything that might count as a car. 
If you're trying to make the AI have \"low impact\" (intuitively, prefer plans that result in fewer changes to other quantities), then \"low impact\" must *not* include freezing everything within reach to minimize how much it changes, or making subtle changes to people's brains so that nobody notices their cars have been painted pink.\n\n• The AI needs to not shoot people who are standing between the painter and the car, and not accidentally run them over, and not use poisonous paint even if the poisonous paint is cheaper.\n\n• The AI should have an '[abort](https://arbital.com/p/2rg)' button which gets it to safely stop doing what it's currently doing. This means that if the AI was in the middle of building nanomachines, the nanomachines need to also switch off when the abort button is pressed, rather than the AI itself just shutting off and leaving the nanomachines to do whatever. Assuming we have a safe measure of \"low impact\", we could define an \"abortable\" plan as one which can, at any time, be converted relatively quickly to one that has low impact.\n\n• The AI [should not want](https://arbital.com/p/2vk) to self-improve or control further resources beyond what is necessary to paint the cars pink, and should [query the user](https://arbital.com/p/2qq) before trying to develop any [new](https://arbital.com/p/2qp) technology or assimilate any new resources it does need to paint cars pink.\n\nThis is only a preliminary list of some of the requirements and use-cases for a Task AGI, but it gives some of the flavor of the problem.\n\nFurther work on some facet of the open subproblems below might proceed by:\n\n1. Trying to explore examples of the subproblem and potential solutions within some contemporary machine learning paradigm.\n2. Building a toy model of some facet of the subproblem, and hopefully observing some non-obvious fact that was not predicted in advance by existing researchers skilled in the art.\n3. Doing [mathematical analysis](https://arbital.com/p/107) of an [unbounded agent](https://arbital.com/p/107) encountering or solving some facet of a subproblem, where the setup is sufficiently precise that claims about the consequences of the premise can be [checked and criticized](https://arbital.com/p/1cv).\n\n# [Conservatism](https://arbital.com/p/2qp)\n\n A conservative concept boundary is a boundary which is (a) relatively simple and (b) classifies as few things as possible as positive instances of the category.\n\nIf we see that 3, 5, 13, and 19 are positive instances of a category and 4, 14, and 28 are negative instances, then a *simple* boundary which separates these instances is \"All odd numbers.\" A *simple and conservative* boundary is \"All odd numbers between 3 and 19\" or \"All primes between 3 and 19\". (A non-simple boundary is \"Only 3, 5, 13, and 19 are members of the category.\")\n\nE.g., if we imagine presenting an AI with smiling faces as instances of a goal concept to be learned, then a conservative concept boundary might lead the future AI to pursue only smiles attached to human heads, rather than tiny molecular smileyfaces (not that this necessarily solves everything).\n\nIf we imagine presenting the AI with 20 positive instances of a burrito, then a conservative boundary might lead the AI to produce a 21st burrito very similar to those. 
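Here is a very small sketch of the number example above (the candidate hypotheses and their complexity scores are made up, and real systems would not work from a hand-enumerated hypothesis list): among boundaries that are simple enough and fit the labels, the conservative choice is the one admitting the fewest positives.

```python
# Toy sketch: "simple" vs. "simple and conservative" boundaries for the
# positives {3, 5, 13, 19} and negatives {4, 14, 28}. Hypotheses and their
# complexity scores are invented for illustration.

POS, NEG = {3, 5, 13, 19}, {4, 14, 28}
UNIVERSE = set(range(1, 30))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

candidates = {  # name: (membership test, made-up complexity score)
    "odd numbers": (lambda n: n % 2 == 1, 1),
    "odd numbers from 3 to 19": (lambda n: n % 2 == 1 and 3 <= n <= 19, 2),
    "primes from 3 to 19": (lambda n: is_prime(n) and 3 <= n <= 19, 2),
    "exactly {3, 5, 13, 19}": (lambda n: n in POS, 4),
}

def fits(test):
    return all(test(p) for p in POS) and not any(test(m) for m in NEG)

def extension_size(test):
    return sum(1 for n in UNIVERSE if test(n))

fitting = {name: tc for name, tc in candidates.items() if fits(tc[0])}

simplest = min(fitting, key=lambda name: fitting[name][1])

COMPLEXITY_BUDGET = 2  # arbitrary bound on "relatively simple"
conservative = min(
    (name for name in fitting if fitting[name][1] <= COMPLEXITY_BUDGET),
    key=lambda name: extension_size(fitting[name][0]))

print("simplest fitting boundary:     ", simplest)      # odd numbers
print("simple + conservative boundary:", conservative)  # primes from 3 to 19
```

In the burrito case, the hope is that an analogously conservative boundary already excludes strange instances by default.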
Rather than, e.g., needing to explicitly present the AGI with a poisonous burrito that's labeled negative somewhere in the training data, in order to force the simplest boundary around the goal concept to be one that excludes poisonous burritos.\n\nConservative *planning* is a related problem in which the AI tries to create plans that are similar to previously whitelisted plans or to previous causal events that occur in the environment. A conservatively planning AI, shown burritos, would try to create burritos via cooking rather than via nanotechnology, if the nanotechnology part wasn't especially necessary to accomplish the goal.\n\nDetecting and flagging non-conservative goal instances or non-conservative steps of a plan for [user querying](https://arbital.com/p/2qq) is a related approach.\n\n([Main article.](https://arbital.com/p/2qp))\n\n# [Safe impact measure](https://arbital.com/p/2pf)\n\nA low-impact agent is one that's intended to avoid large bad impacts at least in part by trying to avoid all large impacts as such.\n\nSuppose we ask an agent to fill up a cauldron, and it fills the cauldron using a self-replicating robot that goes on to flood many other inhabited areas. We could try to get the agent not to do this by letting it know that flooding inhabited areas is bad. An alternative approach is trying to have an agent that avoids needlessly large impacts in general - there's a way to fill the cauldron that has a smaller impact, a smaller footprint, so hopefully the agent does that instead.\n\nThe hopeful notion is that while \"bad impact\" is a highly value-laden category with a lot of complexity and detail, the notion of \"big impact\" will prove to be simpler and to be more easily identifiable. Then by having the agent avoid all big impacts, or check all big impacts with the user, we can avoid bad big impacts in passing.\n\nPossible gotchas and complications with this idea include, e.g., you wouldn't want the agent to freeze the universe into stasis to minimize impact, or try to edit people's brains to avoid them noticing the effects of its actions, or carry out offsetting actions that cancel out the good effects of whatever the users were trying to do.\n\nTwo refinements of the low-impact problem are a [shutdown utility function](https://arbital.com/p/2rf) and [abortable plans](https://arbital.com/p/2rg).\n\n([Main article.](https://arbital.com/p/2pf))\n\n# [https://arbital.com/p/4w](https://arbital.com/p/4w)\n\nAn 'inductive ambiguity' is when there's more than one simple concept that fits the data, even if some of those concepts are much simpler than others, and you want to figure out *which* simple concept was intended. \n\nSuppose you're given images that show camouflaged enemy tanks and empty forests, but it so happens that the tank-containing pictures were taken on sunny days and the forest pictures were taken on cloudy days. 
Given the training data, the key concept the user intended might be \"camouflaged tanks\", or \"sunny days\", or \"pixel fields with brighter illumination levels\".\n\nThe last concept is by far the simplest, but rather than just assume the simplest explanation is correct (has most of the probability mass), we want the algorithm (or AGI) to detect that there's more than one simple-ish boundary that might separate the data, and [check with the user](https://arbital.com/p/2qq) about *which* boundary was intended to be learned.\n\n([Main article.](https://arbital.com/p/4w))\n\n# [Mild optimization](https://arbital.com/p/2r8)\n\n\"Mild optimization\" or \"soft optimization\" is when, if you ask the [Task AGI](https://arbital.com/p/6w) to paint one car pink, it just paints one car pink and then stops, rather than tiling the galaxies with pink-painted cars, because it's *not optimizing that hard.*\n\nThis is related, but distinct from, notions like \"[low impact](https://arbital.com/p/2pf)\". E.g., a low impact AGI might try to paint one car pink while minimizing its other footprint or how many other things changed, but it would be trying *as hard as possible* to minimize that impact and drive it down *as close to zero* as possible, which might come with its own set of pathologies. What we want instead is for the AGI to try to paint one car pink while minimizing its footprint, and then, when that's being done pretty well, say \"Okay done\" and stop.\n\nThis is distinct from [satisficing expected utility](https://arbital.com/p/eu_satisficer) because, e.g., rewriting yourself as an expected utility maximizer might also satisfice expected utility - there's no upper limit on how hard a satisficer approves of optimizing, so a satisficer is not [reflectively stable](https://arbital.com/p/1fx).\n\nThe open problem with mild optimization is to describe mild optimization that (a) captures what we mean by \"not trying *so hard* as to seek out every single loophole in a definition of low impact\" and (b) is [reflectively stable](https://arbital.com/p/1fx) and doesn't approve e.g. the construction of environmental subagents that optimize harder.\n\n# [Look where I'm pointing, not at my finger](https://arbital.com/p/2s0)\n\nSuppose we're trying to give a [Task AGI](https://arbital.com/p/6w) the task, \"Give me a strawberry\". User1 wants to identify their intended category of strawberries by waving some strawberries and some non-strawberries in front of the AI's webcam, and User2 in the control room will press a button to indicate which of these objects are strawberries. 
Later, after the training phase, the AI itself will be responsible for selecting objects that might be potential strawberries, and User2 will go on pressing the button to give feedback on these.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10423843/L.png)\n\nThe \"look where I'm pointing, not at my finger\" problem is getting the AI to focus on the strawberries rather than User2 - the concepts \"strawberries\" and \"events that make User2 press the button\" are very different goals even though they'll both well-classify the training cases; an AI might pursue the latter goal by psychologically analyzing User2 and figuring out how to get them to press the button using non-strawberry methods.\n\nOne way of pursuing this might be to try to zero in on particular nodes inside the huge causal lattice that ultimately produces the AI's sensory data, and try to force the goal concept to be about a simple or direct relation between the \"potential strawberry\" node (the objects waved in front of the webcam) and the observed button values, without this relation being allowed to go through the User2 node.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)\n\nSee also the related problem of \"[https://arbital.com/p/36w](https://arbital.com/p/36w)\".\n\n# More open problems\n\nThis page is a work in progress. A longer list of Task AGI open subproblems:\n\n- [Low impact](https://arbital.com/p/2pf)\n - [Shutdown utility function](https://arbital.com/p/2rf)\n - [Abortable plans](https://arbital.com/p/2rg)\n- [Conservatism](https://arbital.com/p/2qp)\n- [Mild optimization](https://arbital.com/p/2r8)\n - [Aversion of instrumental self-improvement goal](https://arbital.com/p/2x3)\n- [Ambiguity identification](https://arbital.com/p/4w)\n- [Utility indifference](https://arbital.com/p/1b7)\n - [Shutdown button](https://arbital.com/p/2xd)\n- Task identification\n - [Ontology identification](https://arbital.com/p/5c)\n - [https://arbital.com/p/36w](https://arbital.com/p/36w)\n - [https://arbital.com/p/2s0](https://arbital.com/p/2s0)\n- Hooking up a directable optimization to an identified task\n- Training protocols\n - Which things do you think can be well-identified by what kind of labeled datasets plus queried ambiguities plus conservatism, and what pivotal acts can you do with combinations of them plus assumed other abilities?\n- [Faithful simulation](https://arbital.com/p/36k)\n- Safe imitation for [act-based agents](https://arbital.com/p/1w4)\n - Generative imitation with a probability of the human doing that act, guaranteed not to hindsight bias\n - Typicality (related to conservatism)\n- Plan transparency\n - Epistemic-only hypotheticals (when you ask how in principle how the AI might paint cars pink, it doesn't run a planning subprocess that plans to persuade the actual programmers to paint things pink).\n- [Epistemic exclusion](https://arbital.com/p/1g4)\n - [Behaviorism](https://arbital.com/p/102)\n\n(...more, this is a page in progress)", "date_published": "2016-04-14T20:15:06Z", "authors": ["Jessica Taylor", "Eliezer Yudkowsky"], "summaries": [], "tags": ["AI alignment open problem", "Work in progress"], "alias": "2mx"} {"id": "a4cddbea055107dc678dfb77891752f5", "title": "Low impact", "url": "https://arbital.com/p/low_impact", "source": "arbital", "source_type": "text", "text": "A low-impact agent is a hypothetical [task-based AGI](https://arbital.com/p/6w) that's intended to avoid *disastrous* side effects via trying to *avoid large side effects in 
general*.\n\nConsider the Sorcerer's Apprentice fable: a legion of broomsticks, self-replicating and repeatedly overfilling a cauldron (perhaps to be as certain as possible that the cauldron was full). A low-impact agent would, if functioning as [intended](https://arbital.com/p/6h), have an incentive to avoid that outcome; it wouldn't just want to fill the cauldron, but fill the cauldron in a way that had a minimum footprint. If the task given the AGI is to paint all cars pink, then we can hope that a low-impact AGI would not accomplish this via self-replicating nanotechnology that went on replicating after the cars were painted, because this would be an unnecessarily large side effect.\n\nOn a higher level of abstraction, we can imagine that the universe is parsed by us into a set of variables $V_i$ with values $v_i.$ We want to avoid the agent taking actions that cause large amounts of disutility, that is, we want to avoid perturbing variables from $v_i$ to $v_i^*$ in a way that decreases utility. However, the question of exactly which variables $V_i$ are important and shouldn't be entropically perturbed is [value-laden](https://arbital.com/p/36h) - complicated, fragile, high in [algorithmic complexity](https://arbital.com/p/5v), with [Humean degrees of freedom in the concept boundaries](https://arbital.com/p/2fs).\n\nRather than relying solely on teaching an agent exactly which parts of the environment shouldn't be perturbed and risking catastrophe if we miss an injunction, the *low impact* route would try to build an agent that tried to perturb fewer variables regardless.\n\nThe hope is that \"have fewer side effects\" is a problem that has a simple core and is learnable by a manageable amount of training. Conversely, trying to train \"here is the list of *bad* effects not to have and *important* variables not to perturb\" would be complicated and lack a simple core, because 'bad' and 'important' are [value-laden](https://arbital.com/p/36h). A list of dangerous variables would also be [a blacklist rather than a whitelist](https://arbital.com/p/2qp), which would make it more vulnerable to [treacherous context changes](https://arbital.com/p/6q) if the AI gained the ability to affect new things.\n\n# Introduction: Formalizing low impact seems nontrivial\n\nIntuitively, the notion of \"low impact\" seems like it should be simpler - have more of a central, core tendency to capture - than \"avoid bad impacts\". If so, we don't know yet how to compactly state this core principle semiformally.\n\nSuppose we start with an obvious notion: to have low impact, minimize the number of variables you [causally](https://arbital.com/p/) affect. But:\n\n- Every event has a not-absolutely-zero impact on every future event. When you twitch your thumb or even just fire a neuron, the gravitational ripples from the moving atoms spread out and will eventually produce infinitesimal forces on atoms orbiting the other side of the galaxy. So we can't say \"have zero impact on as many quarks as possible\" because it is impossible for any act to have absolutely zero impact on any quark.\n\nThis is a special case of a failure mode where, since the impact metric turns out to already be maxed out by all actions, moving around inside the maxed-out impact penalty doesn't change expected utility, and therefore fails to influence which action is chosen. 
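To make the failure mode concrete, here is a deliberately crude sketch (the 'world model', actions, and numbers are all invented): once the penalty term takes the same ceiling value for every available action, it cancels out of the comparison and the argmax is unchanged.

```python
# Toy sketch: a qualitative "number of variables causally affected" penalty
# that is already maxed out for every action, and therefore never changes
# which action wins. World model and payoffs are made up for illustration.

WORLD_VARS = ["cauldron", "floor", "air molecules", "distant quarks"]

def affected_vars(action: str) -> set:
    # In a sufficiently fine-grained causal model, every action (even
    # "do nothing but keep computing") perturbs every variable a little.
    return set(WORLD_VARS)

task_utility = {
    "fill cauldron carefully": 0.9,
    "flood the workshop": 1.0,   # scores slightly better on "cauldron is full"
    "null action": 0.0,
}

PENALTY_WEIGHT = 10.0

def score(action: str) -> float:
    impact = len(affected_vars(action))   # same value for every action
    return task_utility[action] - PENALTY_WEIGHT * impact

print(max(task_utility, key=score))  # -> "flood the workshop": the constant
                                     # penalty cancels out of the comparison.
```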
This [foreseeable difficulty](https://arbital.com/p/6r) might also illustrate a typical [treacherous context change](https://arbital.com/p/6q), in that the AI during its training phase might have a relatively small causal model of its environment, with qualitative causal arrows, and so seem to be minimizing impact inside the training regime. Later development might cause the AI to adopt a new causal model in which, as happens to actually be the case, all its possible actions influence all the quarks inside its [future light cone](https://arbital.com/p/), and so actions are not differentiated by the magnitude of their impact penalty; thus the AI would be safe in training and unsafe in later use, with the critical threshold possibly coming after the AI was [too intelligent to be shut down](https://arbital.com/p/instrumental_self_preservation).\n\nBut measuring causal impacts qualitatively isn't the only possibility. Suppose we define impact as the sum of the expected movement of all atoms, relative to their mean expected positions if some privileged null action were taken instead? Or, to avoid this impact measure returning 0 as soon as the AI realizes that [there are no atoms, only quarks](https://arbital.com/p/5c), we try to give the AI some notion of the distance between outcomes in *any* way of modeling the world?\n\nArguendo, suppose that we could define a representational language over possible universes and laws of physics, such that within any representable model of the universe, there's an obvious notion of 'distance' between any two outcomes conditional on the AI's actions. If the universe is made of atoms, this representation will expose the obvious impact metric on the movements of atoms. If the universe is made of quarks, the same component of the utility function will readily calculate the movements of quarks. If the universe is made of quantum fields, this impact metric will behave in the intuitively intended way that basically works out to measuring particle motions, rather than the change metric always maxing out as the result of all amplitude flows ending up in qualitatively different sections of the quantum configuration space, etcetera. (Note that this is already sounding pretty nontrivial.)\n\nFurthermore, suppose that when the AI is thinking in terms of neither atoms nor quarks, but rather, say, the equivalent of chess moves or voxel fields, the same impact metric can apply to this as well; so that we can observe the low-impact behaviors at work during earlier development phases.\n\nMore formally: We suppose that the AI's model class $\mathcal M$ is such that for any allowed model $M \in \mathcal M,$ for any two outcomes $o_M$ and $o_M'$ that can result from the AI's choice of actions, there is a distance $\| o_M - o_M' \|$ which obeys [standard rules for distances](http://mathworld.wolfram.com/Norm.html). This general distance measure is such that, within the standard model of physics, moving atoms around would add to the distance between outcomes in the obvious way; and for models short of molecular detail, it will measure changes in other variables and quantities in an intuitive way.
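Schematically (an illustration of the interface being asked for, not a proposal for how to actually obtain it), the requirement is that whatever state variables the current model exposes, the same distance function applies to them:

```python
# Schematic sketch: one outcome-distance interface applied to models at
# different levels of description. Variable names are purely illustrative.

def outcome_distance(o1: dict, o2: dict) -> float:
    """L1 distance over whatever state variables the model exposes."""
    keys = o1.keys() | o2.keys()
    return sum(abs(o1.get(k, 0.0) - o2.get(k, 0.0)) for k in keys)

# A model stated in terms of atom positions...
atoms_before = {"atom_17_x": 0.0, "atom_17_y": 1.2}
atoms_after = {"atom_17_x": 0.4, "atom_17_y": 1.2}

# ...and a coarser model stated in terms of, say, voxel occupancy.
voxels_before = {"voxel_(3,4)": 0.0, "voxel_(3,5)": 1.0}
voxels_after = {"voxel_(3,4)": 1.0, "voxel_(3,5)": 0.0}

print(outcome_distance(atoms_before, atoms_after))    # 0.4
print(outcome_distance(voxels_before, voxels_after))  # 2.0
```

The hard part, of course, is not writing such a function for any one fixed representation, but getting a single measure to keep behaving as intended when the model of the world changes underneath it.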
We then fix some particular policy $\pi_0$ whose consequence $(o|\pi_0)$ is \"the result of the AI doing nothing\", and measure the impact penalty of any other policy $\pi_k$ as proportional to the expected distance between outcomes, $\mathbb E [\| o_{\pi_k} - o_{\pi_0} \|].$\n\nThen we might run into any of these further [foreseeable difficulties](https://arbital.com/p/6r), if, e.g., you asked the AGI to cure cancer with minimum 'impact' as defined above (or to put it another way, minimize impact subject to the constraint of curing cancer):\n\n- Offsetting actions we would intuitively regard as both bad and impactful:\n - The AGI models global death rates as a quantity, and implements offsetting actions to keep Earth's annual death statistics constant after it cures cancer.\n- Chaotic permissiveness:\n - Weather is a chaotic system. If influencing any butterfly is going to move around all the atoms in the atmosphere anyway, you might as well have them end up somewhere you want.\n- Stasis incentives:\n - The AGI edits its programmers to make sure the programmers do whatever they would have done if the AGI, after being told its task, performed the null action.\n - The AGI proliferates across galaxies to make sure everything else in the universe outside of human bodies adheres as closely as possible to the expected state it would have occupied if the null action had been taken.\n - The AGI sets up a weather-control system so that at least its further actions won't again disturb the weather.\n\nAll of this just goes to say that there's apparently some subtlety built into our intuitively [intended](https://arbital.com/p/6h) notion of \"paint all cars pink, but do so with the minimum footprint possible apart from that\".\n\nWe want people to be able to notice that their cars have been painted pink, and for them to enjoy whatever further benefit of pink-painted cars led us to give the AGI this instruction in the first place. But we can't just whitelist any further impact that happens as a consequence of the car being painted pink, because maybe the car was painted with pink replicating nanomachines. Etcetera.\n\nEven if there is, in fact, some subtlety built into our intended notion of \"make plans that have minimal side effects\", this subtle notion of low impact might still have a relatively much simpler core than our intuitive notion of \"avoid bad impacts\". This might be reflected in either an improved formal intuition for 'low impact' that proves to stand up to a few years of skeptical scrutiny without any holes having been poked in it, or, much more nerve-rackingly, the ability to train an AI to make minimal-impact plans even if we don't know a closed-form definition of \"minimal impact\".\n\nWork in this area is ongoing, so far mainly in the form of some preliminary suggestions by Stuart Armstrong (which were mostly shot down, but this is still progress compared to staring blankly at the problem).\n\n# Foreseeable difficulties\n\n## Permissiveness inside chaotic systems\n\nSuppose you told the AI to affect as few things as possible, above the minimum necessary to achieve its task, and defined 'impact' qualitatively in terms of causal links that make variables occupy different states. Then since every act and indeed every internal decision (transistors, in switching, move electrons) would have infinitesimal influences on literally everything in the AI's future light cone, all of which is defined as an 'impact', all actions would seem to have the same, maximum impact.
Then the impact penalty would make no difference to the net expected utility of actions, causing the AI to behave as if it had no impact penalty.\n\nEven if an impact measure doesn't max out because of ubiquitous qualitative impacts, a poorly defined impact measure might max out quantitatively when the AGI is operating in a domain that is *chaotic* in the sense that tiny differences soon blow up to large differences. E.g., if a butterfly flaps its wings, that might cause a hurricane on the other side of the world a year later - so since you're already changing the weather system as much as possible, why does it matter if, say, you on-purpose cause a hurricane in some area, or destroy a target using atmospheric lightning strikes? Those air molecules would all have ended up moving anyway because of the butterfly effect.\n\nAn imaginable patch is to try to evaluate impact over *foreseeable* impacts, so that a known lightning strike is 'foreseeable', while the effects on future hurricanes are 'not foreseeable'. This seems worryingly like mixing up the map and the territory (is it okay to release environmental poisons so long as you don't know who gets hurt?), but Stuart Armstrong has made some preliminary suggestions about minimizing knowable impacts. \n\nIf you didn't know it was coming, \"maxing out the impact penalty\" would potentially be a [treacherous context change](https://arbital.com/p/6q). When the AI was at the infrahuman level, it might model the world on a level where its actions had relatively few direct causal links spreading out from them, and most of the world would seem untouched by most of its possible actions. Then minimizing the impact of its actions, while fulfilling its goals, might in the infrahuman state seem to result in the AI carrying out plans with relatively few side effects, as intended. In a superhuman state, the AI might realize that its every act resulted in quantum amplitude flowing into a nonoverlapping section of configuration space, or having chaotic influences on a system the AI was not previously modeling as having maximum impact each time.\n\n## Infinite impact penalties\n\nIn one case, a proposed impact penalty was written down on a whiteboard which happened to have the fractional form $\\frac{X}{Y}$ where the quantity $Y$ could *in some imaginable universes* get very close to zero, causing Eliezer Yudkowsky to make an \"Aaaaaaaaaaa\"-sound as he waved his hands speechlessly in the direction of the denominator. The corresponding agent would have [spent all its effort on further-minimizing infinitesimal probabilities of vast impact penalties](https://arbital.com/p/pascals_mugging).\n\nBesides \"don't put denominators that can get close to zero in any term of a utility function\", this illustrates a special case of the general rule that impact penalties need to have their loudness set at a level where the AI is doing something besides minimizing the impact penalty. As a special case, this requires considering the growth scenario for improbable scenarios of very high impact penalty; the penalty must not grow faster than the probability diminishes.\n\n(As usual, note that if the agent only started to visualize these ultra-unlikely scenarios upon reaching a superhuman level where it could consider loads of strange possibilities, this would constitute a [treacherous context change](https://arbital.com/p/6q).)\n\n## Allowed consequences vs. 
offset actions\n\nWhen we say \"paint all cars pink\" or \"cure cancer\" there's some implicit set of consequences that we think are allowable and should definitely not be prevented, such as people noticing that their cars are pink, or planetary death rates dropping. We don't want the AI trying to obscure people's vision so they can't notice the car is pink, and we don't want the AI killing a corresponding number of people to level the planetary death rate. We don't want these bad *offsetting* actions which would avert the consequences that were the point of the plan in the first place.\n\nIf we use a low-impact AGI to carry out some [pivotal act](https://arbital.com/p/6y) that's part of a larger plan to improve Earth's chances of not being turned into paperclips, then this, in a certain sense, has a very vast impact on many galaxies that will *not* be turned into paperclips. We would not want this *allowed* consequence to max out and blur our AGI's impact measure, nor have the AGI try to implement the pivotal act in a way that would minimize the probability of it actually working to prevent paperclips, nor have the AGI take offsetting actions to keep the probability of paperclips to its previous level.\n\nSuppose we try to patch this rule that, when we carry out the plan, the further causal impacts of the task's accomplishment are exempt from impact penalties.\n\nBut this seems to allow too much. What if the cars are painted with self-replicating pink nanomachines? What distinguishes the further consequences of that solved goal from the further causal impact of people noticing that their cars have been painted pink?\n\nOne difference between \"people notice their cancer was cured\" and \"the cancer cure replicates and consumes the biosphere\" is that the first case involves further effects that are, from our perspective, pretty much okay, while the second class of further effects are things we don't like. But an 'okay' change versus a 'bad' change is a value-laden boundary. If we need to detect this difference as such, we've thrown out the supposed simplicity of 'low impact' that was our reason for tackling 'low impact' and not 'low badness' in the first place.\n\nWhat we need instead is some way of distinguishing \"People see their cars were painted pink\" versus \"The nanomachinery in the pink paint replicates further\" that operates on a more abstract, non-value-laden level. For example, hypothetically speaking, we might claim that *most* ways of painting cars pink will have the consequence of people seeing their cars were painted pink and only a few ways of painting cars pink will not have this consequence, whereas the replicating machinery is an *unusually large* consequence of the task having reached its fulfilled state.\n\nBut is this really the central core of the distinction, or does framing an impact measure this way imply some further set of nonobvious undesirable consequences? Can we say rigorously what kind of measure on task fulfillments would imply that 'most' possible fulfillments lead people to see their cars painted pink, while 'few' destroy the world through self-replicating nanotechnology? 
Would *that* rigorous measure have further problems?\n\nAnd if we told an AGI to shut down a nuclear plant, wouldn't we want a low-impact AGI to err on the side of preventing radioactivity release, rather than trying to produce a 'typical' magnitude of consequences for shutting down a nuclear plant?\n\nIt seems difficult (but might still be possible) to classify the following consequences as having low and high extraneous impacts based on a *generic* impact measure only, without introducing further value lading:\n\n- Low disallowed impact: Curing cancer causes people to notice their cancer has been cured, hospital incomes to drop, and world population to rise relative to its default state.\n- High disallowed impact: Shutting down a nuclear power plant causes a release of radioactivity.\n- High disallowed impact: Painting with pink nanomachinery causes the nanomachines to further replicate and eat some innocent bystanders.\n- Low disallowed impact: Painting cars with ordinary pink paint changes the rays of light reflecting from those cars and causes people to gasp and say \"What just happened to my car?\"\n- Low disallowed impact: Doing something smart with a Task AGI decreases the probability of the galaxies being consumed by an Unfriendly AI.\n\n(Even if we think that good AGI scenarios involve the AGI concealing the fact of its existence, it's hard to see why we'd want the *events as such* to be unnoticeable, or for their noticing to count as extraneous impacts.)\n\n### Fallback use of an impact measure to prioritize operator queries\n\nIf this problem of allowing *good* consequences did turn out to be inherently value-laden, it might not obviate the usefulness of a safe impact measure. For example, it could be that in normal operation of a Task AGI, there are only a limited number of good further impacts that we want to whitelist - e.g., once we say that curing cancer is allowed to cause people to notice that they feel healthier and for there to be more people alive using more resources, etcetera, then it's okay to minimize all other footprints.\n\nSimilarly, a safe impact measure might operate to alert the operators to all large impacts sorted by size of impact, such that the impact measure would reliably not have any remaining event that was 'large' in an intuitive sense that the operators didn't know about. That is, we might not be able to give a blanket instruction to minimize impact, but a safe impact measure might still be useful for deciding which features of a plan need to be queried to the operators.\n\n## Stasis incentives\n\nThis would be the general problem that includes \"The AGI proliferates across galaxies to make sure everything else in the universe outside of human bodies adheres as closely as possible to the expected state it would have occupied if the null action had been taken.\"\n\nSimilarly, we can imagine the AGI having a strong incentive to, in Round 1, paint the cars with nanomachines such that the nanomachinery quietly and unobtrusively sets up a weather-control system. This wouldn't prevent the AGI's actions in Round 1 from influencing the chaotic weather and having a huge impact, but could ensure that the AI's future actions in Round 2 wouldn't influence the formerly-chaotic weather relative to its null action then.\n\n## The problem of the backup AGI\n\nSuppose we're using a measure of impact relative to a privileged null action. 
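For concreteness, a measure of that general shape might look like the following sketch, where `predict` stands in for some world-model rolled forward over a horizon and the features and distance are purely illustrative (this is only to fix ideas, not a concrete proposal):

```python
# Minimal sketch of an impact measure defined relative to a privileged null action.
def impact(predict, action, null_action="do nothing", horizon=10):
    """Penalize the expected divergence between acting and doing nothing."""
    world_with = predict(action, horizon)          # expected feature values if we act
    world_without = predict(null_action, horizon)  # expected feature values if we don't
    return sum(abs(world_with[f] - world_without[f]) for f in world_without)

# The agent then prefers plans scoring well on
#   task_score(action) - penalty_weight * impact(predict, action)
```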
Imagine that there are two AGIs which will both be given the same task, and under ordinary circumstances, divide the effort of performing the task between them; but if one AGI fails to perform, the other will take over the task in its entirety. Then each AGI reasons, \"If I perform the null action instead of carrying out the task, the other AGI will do almost exactly the same thing I was going to do\" and so models itself as having *counterfactually* almost no impact. In this way, two AGIs can each evaluate themselves as having almost no impact, while operating in tandem they have a very large impact. (This problem was first observed by Stuart Armstrong.)\n\nWe can even imagine - though hopefully this isn't the sort of thing that would go uncaught by other safety measures - that in Round 1, the agent builds a backup agent, which doesn't count as having an extraneous impact because the system of 2 agents ends up carrying out the same steps that the agent would have carried out on its own. But then on Round 2, there's now a backup and both agents regard themselves as having minimal impact.\n\n(If you say that the agent should, on Round 1, realize that constructing the backup agent on Round 1 would have a later *impact* on the effect of the AI carrying out its orders in Round 2, then we're now allowing the agent to minimize its impact *across tasks,* which might imply that in Round 1 it should try to build something that regularizes the weather as a side effect, so that its actions in Round 2 have less chaotic impact on the weather... or that it tries to influence its programmers to give it less impactful tasks, etcetera.)\n\nA more general way of viewing this possible problem is that there might be [tiling problems](https://arbital.com/p/) for a low-impact agent modifying itself or creating subagents or successors.\n\n# Standard issues\n\nThe following standard issues from alignment theory seem like they could also apply to the problem of building a low-impact agent or creating a safe impact measure. We need an [advanced-safe](https://arbital.com/p/2l) impact measure that will stand up to the load and stress of cognition that is superhuman in at least some domains.\n\n## Programmer manipulation\n\nA Task AGI which plans across rounds may have an incentive to influence its programmers into giving it plans that will have net lower impact compared to other plans the programmers might offer. If there's an explicit procedure for the programmers to whitelist consequences, the AGI would have to be somehow configured in such fashion as to not prefer programmers to whitelist the consequences or seek out consequences that the programmers would be expected to (perhaps mistakenly) whitelist.\n\n## [Causal environment hacking](https://arbital.com/p/5j)\n\nSince an impact measure seems likely to be defined over the causes of sense data, it has the standard problem with remote hacking of causal environments. (This is a ubiquitous enough problem that you're going to want some general solution such as not having the AGI model remote hackers in the first place. But this subproblem should still be noted each time we introduce a dependency on the modeled causes of sense data.)\n\n## [Edge instantiation](https://arbital.com/p/2w)\n\nBuilding a safe impact measure is hard because if we roll a naive impact measure, it might have its minimum in a weird, undesirable place. 
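As a toy numerical illustration of one way this can happen (made-up features, and one particular naive choice: measuring change against a frozen snapshot of the present rather than against the null-action forecast):

```python
# Toy numbers only.  Comparing futures to a frozen snapshot of the present penalizes
# changes that would have happened anyway, so the measure's minimum is a world held in stasis.
snapshot = {"car_color": "red", "weather": "pattern_A", "leaves": "green"}

futures = {
    "paint the car, let the world run on":
        {"car_color": "pink", "weather": "pattern_B", "leaves": "brown"},  # normal drift
    "paint the car, clamp everything else in place":
        {"car_color": "pink", "weather": "pattern_A", "leaves": "green"},
}

def naive_impact(future):
    return sum(snapshot[feature] != future[feature] for feature in snapshot)

for plan, future in futures.items():
    print(plan, "->", naive_impact(future))
# The catastrophic clamp-everything plan scores 1; the intended plan scores 3.
```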
Freezing all matter in the universe into stasis would be an example of such a weird, undesirable minimum.\n\n## [Unforeseen maximum](https://arbital.com/p/47)\n\nThe actions and modes the AGI has available to potentially minimize its impact will include more actions and modes than we can imagine and check ourselves.\n\n## [Nearest unblocked neighbor](https://arbital.com/p/42)\n\nIf the maximizing plan has high impact, the new best plan after introducing an impact measure may be a similar plan that has, through some leaky definition, found a weird special case that doesn't count as an impact. To put it another way, to the extent an imperfect impact measure would otherwise hamper the AI if it were perfect, the AI experiences an [instrumental pressure](https://arbital.com/p/10k) to seek out any loopholes or leaks in the definition.\n\n## [Context change](https://arbital.com/p/6q)\n\nAs the AI gains in cognitive capability, this widens the option space which could potentially contain instances of 'low impact' that are [unforeseen](https://arbital.com/p/47) or [unblocked](https://arbital.com/p/42).\n\nIn this particular case, increased intelligence may also:\n\n- Change the set of impacts that the AI can foresee, so that e.g. its causal model goes from 'some actions causally connect to some other things' to 'all actions causally connect to all things'.\n- Widen the space of possibilities it considers, to include for the first time very-low-probability scenarios of very large impacts.\n- Include new offsetting actions, so that the AI for the first time has the option of editing brains not to notice changes it makes, or introducing randomized Unfriendly-AI generators to offset decreases in the probability of Unfriendly AI.\n\n## [https://arbital.com/p/1fx](https://arbital.com/p/1fx)\n\nDoes a low-impact AI want to only build a successor that's a low-impact AI? If it builds an environmental subagent, is that subagent low impact?\n\nEven if the AGI is supposed to not be self-modifying or to be building subagents, is there a worrying divergence and pressure to be held in check between how the AI thinks and how the AI would prefer to think? Does a low-impact AGI want relevant cognitive computations in general to be low impact?\n\nTo the extent that low impact is a feature of the utility function rather than the optimization style, this doesn't have any obvious problems (apart from Armstrong's dual-AGI no-impact counterfactual issue), but it's a standard thing to check, and would become *much more* important if low impact was supposedly being achieved through any feature of the optimization style rather than utilities over outcomes.\n\n# Related / further problems\n\nA [shutdown utility function](https://arbital.com/p/2rf) is one which incentivizes the AI to safely switch itself off, without, say, creating a subagent that assimilates all matter in the universe to make absolutely sure the AI is never again switched back on.\n\n[Abortable plans](https://arbital.com/p/2rg) are those which are composed with the intention that it be possible to activate an 'abort' plan midway through, such that the partial implementation of the original plan, combined with the execution of the abort plan, together have a minimum impact. 
For example, if an abortable AI was building self-replicating nanomachines to paint a car pink, it would give all the nanomachines a quiet self-destruct button, so that at any time the 'abort' plan could be executed after having partially implemented the plan to paint the car pink, such that these two plans together would have a minimum impact.", "date_published": "2016-04-19T02:08:27Z", "authors": ["Eric Bruylant", "Niplav Yushtun", "Eliezer Yudkowsky"], "summaries": ["A low-impact agent is one that's intended to avoid large bad impacts at least in part by trying to avoid all large impacts as such. Suppose we ask an agent to fill up a cauldron, and it fills the cauldron using a self-replicating robot that goes on to flood many other inhabited areas. We could try to get the agent not to do this by letting it know that flooding inhabited areas is bad. An alternative approach is trying to have an agent that avoids needlessly large impacts in general - there's a way to fill the cauldron that has a smaller impact, a smaller footprint, so hopefully the agent does that instead. The hopeful notion is that while \"bad impact\" is a highly value-laden category with a lot of complexity and detail, the notion of \"big impact\" will prove to be simpler and to be more easily identifiable. Then by having the agent avoid all big impacts, or check all big impacts with the user, we can avoid bad big impacts in passing. Possible gotchas and complications with this idea include, e.g., you wouldn't want the agent to freeze the universe into stasis to minimize impact, or try to edit people's brains to avoid them noticing the effects of its actions, or carry out offsetting actions that cancel out the good effects of whatever the users were trying to do."], "tags": ["Open subproblems in aligning a Task-based AGI", "Edge instantiation", "Nearest unblocked strategy", "Unforeseen maximum", "Patch resistance", "AI alignment open problem", "B-Class", "Context disaster"], "alias": "2pf"} {"id": "a9eae8583c7a44dd5f9599846a125cf2", "title": "Conservative concept boundary", "url": "https://arbital.com/p/conservative_concept", "source": "arbital", "source_type": "text", "text": "The problem of conservatism is to draw a boundary around positive instances of a concept which is not only *simple* but also *classifies as few instances as possible as positive.*\n\n# Introduction / basic idea / motivation\n\nSuppose I have a numerical concept in mind, and you query me on the following numbers to determine whether they're instances of the concept, and I reply as follows:\n\n- 3: Yes\n- 4: No\n- 5: Yes\n- 13: Yes\n- 14: No\n- 19: Yes\n- 28: No\n\nA *simple* category which covers this training set is \"All odd numbers.\"\n\nA *simple and conservative* category which covers this training set is \"All odd numbers between 3 and 19.\"\n\nA slightly more complicated, and even more conservative, category is \"All prime numbers between 3 and 19.\"\n\nA conservative but not simple category is \"Only 3, 5, 13, and 19 are positive instances of this category.\"\n\nOne of the (very) early proposals for value alignment was to train an AI on smiling faces as examples of the sort of outcome the AI ought to achieve. 
Slightly steelmanning the proposal so that it doesn't just produce *images* of smiling faces as the AI's sensory data, we can imagine that the AI is trying to learn a boundary over the *causes of* its sensory data that distinguishes smiling faces within the environment.\n\nThe classic example of what might go wrong with this alignment protocol is that all matter within reach might end up turned into tiny molecular smiley faces, since heavy optimization pressure would pick out an [extreme edge](https://arbital.com/p/2w) of the simple category that could be fulfilled as maximally as possible, and it's possible to make many more tiny molecular smileyfaces than complete smiling faces.\n\nThat is: The AI would by default learn the simplest concept that distinguished smiling faces from non-smileyfaces within its training cases. Given [a wider set of options than existed in the training regime](https://arbital.com/p/6q), this simple concept might also classify as a 'smiling face' something that had the properties singled out by the concept, but was unlike the training cases with respect to other properties. This is the metaphorical equivalent of learning the concept \"All odd numbers\", and then positively classifying cases like -1 or 9^999 that are unlike 3 and 19 in other regards, since they're still odd.\n\nOn the other hand, suppose the AI had been told to learn a simple *and conservative* concept over its training data. Then the corresponding goal might demand, e.g., only smiles that came attached to actual human heads experiencing pleasure. If the AI were moreover a conservative *planner*, it might try to produce smiles only through causal chains that resembled existing causal generators of smiles, such as only administering existing drugs like heroin and not inventing any new drugs, and only breeding humans through pregnancy rather than synthesizing living heads using nanotechnology.\n\nYou couldn't call this a solution to the value alignment problem, but it would - arguendo - get significantly *closer* to the [intended goal](https://arbital.com/p/6h) than tiny molecular smileyfaces. Thus, conservatism might serve as one component among others for aligning a [Task AGI](https://arbital.com/p/6w).\n\nIntuitively speaking: A genie is hardly rendered *safe* if it tries to fulfill your wish using 'normal' instances of the stated goal that were generated in relatively more 'normal' ways, but it's at least *closer to being safe.* Conservative concepts and conservative planning might be one attribute among others of a safe genie.\n\n# Burrito problem\n\nThe *burrito problem* is to have a Task AGI make a burrito that is actually a burrito, and not just something that looks like a burrito, and not poisonous and that is actually safe for humans to eat.\n\nConservatism is one possible approach to the burrito problem: Show the AGI five burritos and five non-burritos. Then, don't have the AGI learn the *simplest* concept that distinguishes burritos from non-burritos and then create something that is *maximally* a burrito under this concept. Instead, we'd like the AGI to learn a *simple and narrow concept* that classifies these five things as burritos according to some simple-ish rule which labels as few objects as possible as burritos. 
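To make "simple and narrow" slightly more concrete, here is a toy sketch in the spirit of the earlier number example (the complexity scores, the complexity cap, and the selection rule are all invented for illustration; nothing here is a serious proposal for how to score burrito-space):

```python
# Conservative concept learning, toy version: among boundaries that fit the training
# data and are below a complexity cap, prefer the one that admits the fewest instances.
positives, negatives = [3, 5, 13, 19], [4, 14, 28]
UNIVERSE = range(1, 101)          # pretend the domain of discourse is 1..100
COMPLEXITY_CAP = 3                # made-up "simplicity" budget

candidates = {
    "all odd numbers":         (lambda n: n % 2 == 1,                     1),
    "odd and in [3, 19]":      (lambda n: n % 2 == 1 and 3 <= n <= 19,    2),
    "prime and in [3, 19]":    (lambda n: n in {3, 5, 7, 11, 13, 17, 19}, 3),
    "memorize {3, 5, 13, 19}": (lambda n: n in {3, 5, 13, 19},            4),  # conservative but not simple
}

def fits(rule):
    return all(rule(n) for n in positives) and not any(rule(n) for n in negatives)

admissible = {name: (rule, cx) for name, (rule, cx) in candidates.items()
              if fits(rule) and cx <= COMPLEXITY_CAP}
most_conservative = min(admissible, key=lambda name: sum(admissible[name][0](n) for n in UNIVERSE))
print(most_conservative)          # -> "prime and in [3, 19]"
```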
But not the rule, \"Only these five exact molecular configurations count as burritos\", because that rule would not be simple.\n\nThe concept must still be broad enough to permit the construction of a sixth burrito that is not molecularly identical to any of the first five. But not so broad that the burrito includes botulinum toxin (because, hey, anything made out of mostly carbon-hydrogen-oxygen-nitrogen ought to be fine, and the five negative examples didn't include anything with botulinum toxin).\n\nThe hope is that via conservatism we can avoid needing to think of every possible way that our training data might not properly stabilize the 'simplest explanation' along every dimension of potentially fatal variance. If we're trying to only draw *simple* boundaries that separate the positive and negative cases, there's no reason for the AI to add on a \"cannot be poisonous\" codicil to the rule unless the AI has seen poisoned burritos labeled as negative cases, so that the slightly more complicated rule \"but not poisonous\" needs to be added to the boundary in order to separate out cases that would otherwise be classified positive. But then maybe even if we show the AGI one burrito poisoned with botulinum, it doesn't learn to avoid burritos poisoned with ricin, and even if we show it botulinum and ricin, it doesn't learn to avoid burritos poisoned with the radioactive iodine-131 isotope. Rather than our needing to think of what the concept boundary needs to look like and including enough negative cases to force the *simplest* boundary to exclude all the unsafe burritos, the hope is that via conservatism we can shift some of the workload to showing the AI *positive* examples which happen *not* to be poisonous or have any other problems.\n\n# Conservatism over the causes of sensed training cases\n\nConservatism in AGI cases seems like it would need to be interpreted over the causes of sensory data, rather than the sensory data itself. We're not looking for a conservative concept about *which images of a burrito* would be classified as positive, we want a concept over which *environmental burritos* would be classified as positive. Two burrito candidates can cause identical images while differing in their poisonousness, so we want to draw our conservative concept boundary around (our model of) the causes of past sensory events in our training cases, not draw a boundary around the sensory events themselves.\n\n# Conservative planning\n\nA conservative *strategy* or conservative *plan* would *ceteris paribus* prefer to construct burritos by buying ingredients from the store and cooking them, rather than building nanomachinery that constructs a burrito, because this would be more characteristic of how burritos are usually constructed, or more similar to the elements of previously approved plans. Again, this seems like it might be less likely to generate a poisonous burrito.\n\nAnother paradigmatic example of conservatism might be to, e.g., inside some game engine, show the AI some human players running around, and then give the AI an object that has the goal of e.g. moving a box to the end of the room. If the AI is given the ability to fly, but generates a plan in which the box-moving agent only moves around on the ground because that's what the training examples did, then this is a conservative plan.\n\nThe point of this isn't to cripple the AI's abilities, the point is that if e.g. 
your [low impact measure](https://arbital.com/p/2pf) has a loophole and the AI generates a plan to turn all matter within reach into pink-painted cars, some steps of this plan like \"disassemble stars to make more cars and paint\" are likely to be non-conservative and hence not happen automatically.\n\n## Flagging non-conservative plan steps\n\nIf a non-conservative plan seems better along other important dimensions - for example, there is no other plan that has an equally low impact and equally few side effects compared to just synthesizing the burrito using a nanomachine - then we can also imagine that the critical step might be flagged as non-conservative and presented to the user for checking.\n\nThat is, on 'conservative' planning, we're interested both in the problem \"generate a plan and then flag and report non-conservative steps\" and in the problem \"try to generate a plan that has few or no non-conservative steps\".\n\n# Role in other safety problems\n\nConservatism and conservative planning seem like they might directly tackle some standard concerns head-on and in a sufficiently basic way to avoid loopholes, and might also be subject to those concerns. E.g.:\n\n- [https://arbital.com/p/2w](https://arbital.com/p/2w) - if in full generality we don't go to the edge of the graph but try to stay in the center of what's already been positively classified, maybe we can avoid this.\n- [https://arbital.com/p/47](https://arbital.com/p/47) - if we stick to things very similar to already-positively-classified instances, we won't automatically go into the unimagined parts of the graph.\n- [https://arbital.com/p/6q](https://arbital.com/p/6q) - a sufficiently conservative optimizer might go on using options similar to previously whitelisted ones even if large new sections of planning space opened up.\n\nHowever, to the extent we rely on conservatism to prevent any of these things from happening, it's a mission-critical component that itself has to be [advanced-safe](https://arbital.com/p/2l) with no loopholes. If a 'conservatism' constraint is being applied to [very powerful optimization pressures](https://arbital.com/p/2c), we need to worry about this [seeking out any loophole](https://arbital.com/p/42) in what is 'conservative'. It might be that the central notion of 'conservatism' is simple enough to have no loopholes. But it's also possible that even a simplish and conservative concept would still include some dangerous instances, if there's enough optimization pressure seeking out a maximal-under-some-criterion instance within everything that's been classified conservatively.\n\nTwo possible meta-approaches to making conservatism even safer:\n\n- Use conservatism to flag non-conservative steps in plans, or expected non-conservative instances of goal achievements, and refer these for user checking before taking action. (Rather than automatically generating a plan containing only 'conservative' steps.) This would have the [standard problems with user checking](https://arbital.com/p/2qq).\n- Have a definition of conservatism, relative to the AI's current world-model and conceptual language, which would automatically catch as 'exceptional' (hence not conservative) anything which had the weird property of being the only first-order-conservative instance of a concept that had some other special property being sought out by the optimization pressure. 
This might involve weird reflective problems, such as any planned event being special in virtue of the AI having planned it.", "date_published": "2016-03-22T18:26:05Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A conservative concept boundary is a boundary which is (a) relatively simple and (b) classifies as few things as possible as positive instances of the category. If we see that 3, 5, 13, and 19 are positive instances of a category and 4, 14, and 28 are negative instances, then a *simple* boundary which separates these instances is \"All odd numbers\" and a *simple and conservative* boundary is \"All odd numbers between 3 and 19\" or \"All primes between 3 and 19\".\n\nIf we imagine presenting an AI with smiling faces as instances of a goal concept during training, then a conservative concept boundary might lead the AI to produce only smiles attached to human head, rather than tiny molecular smileyfaces (not that this necessarily solves everything). If we imagine presenting the AI with 20 positive instances of a burrito, then a conservative boundary might lead the AI to produce a 21st burrito very similar to those, rather than needing to be explicitly presented with negative instances of burritos that force the simplest boundary around the goal concept to exclude poisonous burritos."], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class"], "alias": "2qp"} {"id": "bab32e0a8d7c192f051098402d28a9be", "title": "Querying the AGI user", "url": "https://arbital.com/p/user_querying", "source": "arbital", "source_type": "text", "text": "If we're supposing that an [advanced agent](https://arbital.com/p/2c) is checking something Potentially Bad with its user to find out if the thing is Considered Bad by that user, we need to worry about the following generic issues:\n\n- Can the AI tell which things are Potentially Bad in a way that includes all things that are Actually Bad?\n- Can the *user* reliably tell which Potentially Bad things are Actually Bad?\n- Does the AI, emergently or deliberately, seek out Potentially Bad things that the user will *not* label as Considered Bad, thereby potentially optimizing for Actually Bad things that the user mislabels as Not Bad? (E.g., if the agent learns to avoid new tries similar to those already labeled bad, we're excluding the Considered Bad space, but what's left may still contain Actually Bad things via [https://arbital.com/p/-42](https://arbital.com/p/-42) or a similar phenomenon.)\n- Is the criterion for Potentially Bad so broad, and Actually Bad things hard enough to *reliably* prioritize *within* that space, that 10% of the time an Actually Bad thing will not be in the top 1,000 Potentially Bad things the user can afford the time to check?\n- Can the AI successfully communicate to the user the details of what set off the flag for Potential Badness, or even communicate to the user exactly what was flagged as Potentially Bad, if this is an important part of the user making the decision?\n - Do the AI's communication goals risk [optimizing the user](https://arbital.com/p/optimizing_user)?\n - Are the details of Potential Badness or even the subject of Potential Badness so inscrutable as to be impenetrable? 
(E.g., AlphaGo trying to explain to a human why a Go move is potentially bad, or for that matter, a Go professional trying to explain to an amateur why a Go move is potentially bad - we might just be left with blind trust, at which point we might as well just tell the AI not to do Potentially Bad things rather than asking it to pointlessly check with the user.)\n- Does the AI, emergently or instrumentally, optimize for the user not labeling things as Potentially Bad, thereby potentially leading to user deception?", "date_published": "2016-03-20T02:35:33Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["There's a laundry list of things that might go wrong when we suppose that an advanced AI is checking something Potentially Bad with the user/operator/programmer to see if the user labels the thing as Considered Bad, and relying on this step of the workflow to exclude things that are Actually Bad. E.g., the user might not be able to detect Actually Bad things reliably, the space of Potentially Bad things might be so broad that the Actually Bad things are 1,000 items down the list of things that are Potentially Bad, the AI might just learn to do things that won't be Considered Bad and thereby seek out special cases of bad things that the user can't detect as bad, etcetera."], "tags": ["B-Class"], "alias": "2qq"} {"id": "f6c1c048270249ac21e06deb038d01fd", "title": "Mild optimization", "url": "https://arbital.com/p/soft_optimizer", "source": "arbital", "source_type": "text", "text": "\"Mild optimization\" is where, if you ask a Task AGI to paint one car pink, it just paints one car pink and then stops, rather than tiling the galaxies with pink-painted cars, because it's *not optimizing that hard.* It's okay with just painting one car pink; it isn't driven to max out the twentieth decimal place of its car-painting score.\n\nOther [suggested terms](https://www.facebook.com/yudkowsky/posts/10154053063684228) for this concept have included \"soft optimization\", \"sufficient optimization\", \"minimum viable solution\", \"pretty good optimization\", \"moderate optimization\", \"regularized optimization\", \"sensible optimization\", \"casual optimization\", \"adequate optimization\", \"good-not-great optimization\", \"lenient optimization\", \"parsimonious optimization\", and \"optimehzation\".\n\n# Difference from low impact\n\nMild optimization is complementary to [taskiness](https://arbital.com/p/task_goal) and [low impact](https://arbital.com/p/2pf). A low impact AGI might try to paint one car pink while minimizing its other footprint or how many other things changed, but it would be trying *as hard as possible* to minimize that impact and drive it down *as close to zero* as possible, which might come with its own set of pathologies.\n\nWhat we really want is both properties. We want the AGI to paint one car pink in a way that gets the impact pretty low and then, you know, that's good enough - not have a cognitive pressure to search through weird extremes looking for a way to decrease the twentieth decimal place of the impact. 
This would tend to break a low impact measure which contained even a subtle flaw, whereas a mild-optimizing AGI might not put as much pressure on the low impact measure and hence be less likely to break it.\n\n(Obviously, what we *want* is a perfect low impact measure which will keep us safe [even if subjected to unlimited optimization power](https://arbital.com/p/2x), but a basic security mindset is to try to make each part safe on its own, then assume it might contain a flaw and try to design the rest of the system to be safe anyway.)\n\n# Difference from satisficing\n\n[Satisficing utility functions](https://en.wikipedia.org/wiki/Satisficing#As_a_form_of_optimization) don't necessarily mandate or even allow mildness.\n\nSuppose the AI's utility function is 1 when at least one car has been painted pink and 0 otherwise - there's no more utility to be gained by outcomes in which more cars have been painted pink. Will this AI still go to crazy-seeming lengths?\n\nYes, because in a partially uncertain / probabilistic environment, there's still always a little more *expected* utility to be gained. A solution with 0.9999 probability of painting at least one car pink is ranked above a solution with a 0.999 probability of painting at least one car pink.\n\nIf a preference ordering $<_p$ has the property that for every probability distribution on expected outcomes $O$ there's another expected outcome $O'$ with $O <_p O'$ which requires one more erg of energy to achieve, this is a sufficient condition for using up all the energy in the universe. If converting all reachable matter into pink-painted cars implies a slightly higher probability that at least one car is pink, then that's the maximum of *expected* utility under the 0-1 utility function.\n\nLess naive satisficing would describe an optimizer which satisfies an *expected* utility constraint - say, if any policy produces at least 0.95 *expected* utility under the 0-1 utility function, the AI can implement that policy.\n\nThis rule is now a [Task](https://arbital.com/p/task_goal) and would at least *permit* mild optimization. The problem is that it doesn't *exclude* extremely optimized solutions. A 0.99999999 probability of producing at least one pink-painted car also has the property that it's above a 0.95 probability. If you're a self-modifying satisficer, replacing yourself with a maximizer is probably a satisficing solution.\n\nEven if we're not dealing with a completely self-modifying agent, there's a *ubiquity* of points where adding more optimization pressure might satisfice. When you build a thermostat in the environment, you're coercing one part of the environment to have a particular temperature; if this kind of thing doesn't count as \"more optimization pressure\" then we could be dealing with all sorts of additional optimizing-ness that falls short of constructing a full subagent or doing a full self-modification. There's all sorts of steps in cognition where it would be just as easy to add a maximizing step (take the highest-ranking solution) as to take a random high-ranking solution.\n\nOn a higher level of abstraction, the problem is that while satisficing is reflectively *consistent*, it's not [reflectively stable](https://arbital.com/p/1fx). A satisficing agent is happy to construct another satisficing agent, but it may also be happy to construct a maximizing agent. It can approve its current mode of thinking, but it approves other modes of thinking too. 
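To restate the point with toy numbers (the expected utilities are invented, and the 0.95 threshold follows the example above):

```python
# A satisficer's acceptance test says nothing about *which* satisfactory policy gets picked.
THRESHOLD = 0.95

def satisfices(expected_utility):
    return expected_utility >= THRESHOLD

policies = {
    "paint one car pink, then stop":                    0.96,
    "tile reachable matter with pink-painted cars":     0.999999,
    "rewrite myself as an expected utility maximizer":  0.999,
}

print({name: satisfices(eu) for name, eu in policies.items()})
# -> all three are approved; nothing in the criterion favors the mild option.
```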
So unless *all* the cognitive steps are being carried out locally on [fixed known algorithms](https://arbital.com/p/1fy) that satisfice but definitely don't maximize, without the AGI constructing any environmental computations or conditional policy steps more complicated than a pocket calculator, building a seemingly mild satisficer doesn't guarantee that optimization *stays* mild.\n\n# Quantilizing\n\nOne weird idea that seems like it might exhibit incremental progress toward reflectively stable mild optimization is [https://arbital.com/p/4y](https://arbital.com/p/4y)'s [expected utility quantilizer](https://intelligence.org/files/QuantilizersSaferAlternative.pdf). Roughly, a quantilizer estimates expected outcomes relative to a null action, and then tries to produce an expected outcome in some *upper quantile* of possibilities - e.g., an outcome in the top 1% of expected outcomes. Furthermore, a quantilizer *only* tries to narrow outcomes by that much - it doesn't try to produce one particular outcome in the top 1%; the most it will ever try to do is randomly pick an outcome such that this random distribution corresponds to being in the top 1% of expected outcomes.\n\nQuantilizing corresponds to maximizing expected utility under the assumption that there is uncertainty about which outcomes are good and an adversarial process which can make some outcomes arbitrarily bad, subject to the constraint that the expected utility of the null action can only be boundedly low. So if there's an outcome which would be very improbable given the status quo, the adversary can make that outcome be very bad. This means that rather than aiming for one single high-utility outcome which the adversary could then make very bad, a quantilizer tries for a range of possible good outcomes. This in turn means that quantilizers will actively avoid narrowing down the future too much, even if by doing so they'd enter regions of very high utility.\n\nQuantilization doesn't seem like *exactly* what we actually want for multiple reasons. E.g., if long-run good outcomes are very improbable given status quo, it seems like a quantilizer would try to have its policies fall short of that in the long run (a similar problem seems like it might appear in [impact measures](https://arbital.com/p/2pf) which imply that good long-run outcomes have high impact).\n\nThe key important idea that appears in quantilizing is that a quantilizer isn't just as happy to rewrite itself as a maximizer, and isn't just as happy to implement a policy that involves constructing a more powerful optimizer in the environment.\n\n# Relation to other problems\n\nMild optimization relates directly to one of the three core reasons why aligning at-least-partially superhuman AGI is hard - making very powerful optimization pressures flow through the system puts a lot of stress on its potential weaknesses and flaws. To the extent we can get mild optimization stable, it might take some of the critical-failure pressure off other parts of the system. (Though again, basic security mindset says to still try to get all the parts of the system as flawless as possible and not tolerate any known flaws in them, *then* build the fallback options in case they're flawed anyway; one should not deliberately rely on the fallbacks and intend them to be activated.)\n\nMild optimization seems strongly complementary to [low impact](https://arbital.com/p/2pf) and [taskiness](https://arbital.com/p/task_goal). 
Something that's merely low-impact might exhibit pathological behavior from trying to drive side impacts down to absolutely zero. Something that merely optimizes mildly might find some 'weak' or 'not actually trying that hard' solution which nonetheless ended up turning the galaxies into pink-painted cars. Something that has a satisfiable utility function with a readily achievable maximum utility might still go to tremendous lengths to drive the probability of achieving maximum utility to nearly 1. Something that optimizes mildly *and* has a low impact penalty *and* has a small, clearly achievable goal, seems much more like the sort of agent that might, you know, just paint the damn car pink and then stop.\n\nMild optimization can be seen as a further desideratum of the currently open [Other-izer Problem](https://arbital.com/p/2r9): Besides being workable for [bounded agents](https://arbital.com/p/2rd), and being [reflectively stable](https://arbital.com/p/1fx), we'd *also* like an other-izer idiom to have a (stable) mildness parameter.\n\n# Approaches\n\nIt currently seems like the key subproblem in mild optimization revolves around [reflective stability](https://arbital.com/p/1fx) - we don't want \"replace the mild optimization part with a simple maximizer, becoming a maximizer isn't that hard and gets the task done\" to count as a 'mild' solution. Even in human intuitive terms of \"optimizing without putting in an unreasonable amount of effort\", at some point a sufficiently advanced human intelligence gets lazy and starts building an AGI to do things for them because it's easier that way and only takes a bounded amount of effort. We don't want \"construct a second AGI that does hard optimization\" to count as mild optimization even if it ends up not taking all that much effort for the first AGI, although \"construct an AGI that does $\theta$-mild optimization\" could potentially count as a $\theta$-mild solution.\n\nSimilarly, we don't want to allow the deliberate creation of environmental or internal [daemons](https://arbital.com/p/2rc) even if it's easy to do it that way or requires low effort to end up with that side effect - we'd want the optimizing power of such daemons to count against the measured optimization power and be rejected as optimizing too hard.\n\nSince both of these phenomena seem hard to exhibit in current machine learning algorithms or faithfully represent in a toy problem, [unbounded analysis](https://arbital.com/p/107) seems likely to be the main way to go. In general, it seems closely related to the [Other-izer Problem](https://arbital.com/p/2r9) which also seems most amenable to unbounded analysis at the present time.", "date_published": "2016-06-20T19:06:02Z", "authors": ["Eric Bruylant", "Patrick LaVictoire", "Eliezer Yudkowsky"], "summaries": ["\"Mild optimization\" or \"soft optimization\" is when, if you ask the [genie](https://arbital.com/p/6w) to paint one car pink, it just paints one car pink and then stops, rather than tiling the galaxies with pink-painted cars, because it's *not optimizing that hard.*\n\nThis is related, but distinct from, notions like \"[low impact](https://arbital.com/p/2pf)\". E.g., a low impact AGI might try to paint one car pink while minimizing its other footprint or how many other things changed, but it would be trying *as hard as possible* to minimize that impact and drive it down *as close to zero* as possible, which might come with its own set of pathologies. 
What we want instead is for the AGI to try to paint one car pink while minimizing its footprint, and then, when that's being done pretty well, say \"Okay done\" and stop.\n\nThis is distinct from [satisficing expected utility](https://arbital.com/p/eu_satisficer) because, e.g., rewriting yourself as an expected utility maximizer might also satisfice expected utility - there's no upper limit on how hard a satisficer approves of optimizing, so a satisficer is not [reflectively stable](https://arbital.com/p/1fx)."], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class"], "alias": "2r8"} {"id": "0b0e51c3cbbe5aa7249e0269cec4ddb3", "title": "Other-izing (wanted: new optimization idiom)", "url": "https://arbital.com/p/otherizer", "source": "arbital", "source_type": "text", "text": "The open \"other-izer\" problem is to find something besides maximizing, satisficing, meliorizing, and several other existing but unsatisfactory idioms, which is actually suitable as an optimization idiom for [bounded agents](https://arbital.com/p/bounded_agent) and is [reflectively stable](https://arbital.com/p/1fx).\n\nIn standard theory we tend to assume that agents are expected utility *maximizers* that always choose the available option with highest expected utility. But this isn't a realistic idiom because a realistic, [bounded agent](https://arbital.com/p/bounded_agent) with limited computing power can't compute the expected utility of every possible action.\n\nAn expected utility satisficer, which e.g. might approve any policy so long as the expected utility is at least 0.95, would be much more realistic. But it also doesn't seem suitable for an actual AGI, since, e.g., if policy X produces at least expected utility 0.98, then it would also satisfice to randomize between mostly policy X and a small chance of policy Y that had expected utility 0; this seems to give away a needlessly large amount of utility. We'd probably be fairly disturbed if an otherwise aligned AGI was actually doing that.\n\nSatisficing is also [reflectively consistent](https://arbital.com/p/2rb) but not [reflectively stable](https://arbital.com/p/1fx) - while [tiling agents theory](https://arbital.com/p/1mq) can give formulations of satisficers that will approve the construction of similar satisficers, a satisficer could also tile to a maximizer. If your decision criterion is to approve policies which achieve expected utility at least $\theta,$ and you expect that an expected utility *maximizing* version of yourself would achieve expected utility at least $\theta,$ then you'll approve self-modifying to be an expected utility maximizer. This is another reason to prefer a formulation of optimization besides satisficing - if the AI is strongly self-modifying, then there's no guarantee that the 'satisficing' property would stick around and have our analysis go on being applicable, and even if not strongly self-modifying, it might still create non-satisficing chunks of cognitive mechanism inside itself or in the environment.\n\nA meliorizer has a current policy and only replaces it with policies of increased expected utility. 
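In shorthand, the three idioms discussed so far look something like this (a sketch only, with `eu` standing in for whatever bounded expected-utility estimate the agent can actually compute):

```python
# Maximizer: take the best option (requires evaluating every option).
def maximizer_choice(options, eu):
    return max(options, key=eu)

# Satisficer: take anything clearing the threshold, else the null action.
def satisficer_choice(options, eu, theta):
    return next((o for o in options if eu(o) >= theta), "null action")

# Meliorizer: keep the current policy unless a candidate strictly improves on it.
def meliorizer_step(current_policy, candidate, eu):
    return candidate if eu(candidate) > eu(current_policy) else current_policy
```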
Again, while it's possible to demonstrate that a meliorizer can approve self-modifying to another meliorizer and hence this idiom is reflectively consistent, it doesn't seem like it would be reflectively stable - becoming a maximizer or something else might have higher expected utility than staying a meliorizer.\n\nThe \"other-izer\" open problem is to find something better than maximization, satisficing, and meliorization that actually makes sense as an idiom of optimization for a resource-bounded agent and that we'd think would be an okay thing for e.g. a [Task AGI](https://arbital.com/p/6w) to do, which is at least reflectively consistent, and preferably reflectively stable.\n\nSee also \"[https://arbital.com/p/2r8](https://arbital.com/p/2r8)\" for a further desideratum, namely an adjustable parameter of optimization strength, that would be nice to have in an other-izer.", "date_published": "2016-03-22T00:33:10Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["AI alignment open problem", "B-Class"], "alias": "2r9"} {"id": "7c83a0b69c6ecdd8d263cacb56a0399f", "title": "Reflective consistency", "url": "https://arbital.com/p/reflective_consistency", "source": "arbital", "source_type": "text", "text": "A decision system is \"reflectively consistent\" if it can approve the construction of similar decision systems. For example, if you have an expected utility satisficer (it either takes the null action, or an action with expected utility greater than $\\theta$) then this agent can self-modify to any other design which also either takes no action, or approves a plan with expected utility greater than $\\theta.$ A satisficer might also approve changing itself into an expected utility maximizer (if it expects that this self-modification itself leads to expected utility at least $\\theta$) but it will at least approve replacing itself with another satisficer. On the other hand, a [causal decision theorist](https://arbital.com/p/causal_decision_theory) given a chance to self-modify will only approve the construction of [something that is not a causal decision theorist](https://arbital.com/p/son_of_cdt). A property satisfies the stronger condition of [reflective stability](https://arbital.com/p/1fx) when decision systems with that property *only* approve their own replacement with other decision systems with that property. For example, a [https://arbital.com/p/-10h](https://arbital.com/p/-10h) will under ordinary circumstances only approve code changes that preserve the property of maximizing paperclips, so \"wanting to make paperclips\" is reflectively *stable* and not just reflectively *consistent*.", "date_published": "2016-03-22T00:25:48Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2rb"} {"id": "e81b778108558b06f28661191b601774", "title": "Optimization daemons", "url": "https://arbital.com/p/daemons", "source": "arbital", "source_type": "text", "text": "If you subject a dynamic system to a *large* amount of optimization pressure, it can turn into an optimizer or even an intelligence. The classic example would be how natural selection, in the course of extensively optimizing DNA to construct organisms that replicated the DNA, in one case pushed hard enough that the DNA came to specify a cognitive system capable of doing its own consequentialist optimization. 
Initially, these cognitive optimizers pursued goals that correlated well with natural selection's optimization target of reproductive fitness, which is how these crystallized optimizers had originally come to be selected into existence. However, further optimization of these 'brain' protein chunks caused them to begin to create and share cognitive content among themselves, after which such rapid capability gain occurred that a [context change](https://arbital.com/p/6q) took place and the brains' pursuit of their internal goals no longer correlated reliably with DNA replication.\n\nAs much as this was, from a *human* standpoint, a wonderful thing to have happened, it wasn't such a great thing from the standpoint of inclusive genetic fitness of DNA or just having stable, reliable, well-understood optimization going on. In the case of AGIs deploying powerful internal and external optimization pressures, we'd very much like to not have that optimization deliberately or accidentally crystallize into new modes of optimization, especially if this breaks goal alignment with the previous system or breaks other safety properties. (You might need to stare at the [Orthogonality Thesis](https://arbital.com/p/1y) until it becomes intuitive that, even though crystallizing daemons from natural selection produced creatures that were more humane than natural selection, this doesn't mean that crystallization from an AGI's optimization would have a significant probability of producing something humane.)\n\nWhen heavy optimization pressure on a system crystallizes it into an optimizer - especially one that's powerful, or more powerful than the previous system, or misaligned with the previous system - we could term the crystallized optimizer a \"daemon\" of the previous system. Thus, under this terminology, humans would be daemons of natural selection. If an AGI, after heavily optimizing some internal system, was suddenly taken over by an erupting daemon that cognitively wanted to maximize something that had previously correlated with the amount of available RAM, we would say this was a crystallized daemon of whatever kind of optimization that AGI was applying to its internal system.\n\nThis presents an AGI safety challenge. In particular, we'd want at least one of the following things to be true *anywhere* that *any kind* of optimization pressure was being applied:\n\n- The optimization pressure is (knowably and reliably) too weak to create daemons. (Seemingly true of all current systems, modulo the 'knowably' part.)\n- The subject of optimization is not Turing-complete or otherwise programmatically general and the restricted solution space cannot *possibly* contain daemons no matter *how much* optimization pressure is applied to it. (3-layer non-recurrent neural networks containing less than a trillion neurons will probably not erupt daemons no matter how hard you optimize them.)\n- The AI has a sufficient grasp on the concept of optimization and the problem of daemons to reliably avoid creating mechanisms outside the AI that do cognitive reasoning. (Note that if some predicate is added to exclude a particular type of daemon, this potentially runs into the [nearest unblocked neighbor](https://arbital.com/p/42) problem.)\n- The AI only creates cognitive subagents which share all the goals and safety properties of the original agent. E.g. 
if the original AI is [low-impact](https://arbital.com/p/2pf), [softly optimizing](https://arbital.com/p/2r8), [abortable](https://arbital.com/p/), and targeted on [performing Tasks](https://arbital.com/p/6w), it only creates cognitive systems that are low-impact, don't optimize too hard in conjunction with the original AI, abortable by the same shutdown button, and targeted on performing the current task.", "date_published": "2016-03-27T22:54:00Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["We can see natural selection spitting out humans as a special case of \"If you dump enough computing power into optimizing a Turing-general policy using a consequentialist fitness criterion, it spits out a new optimizer that probably isn't perfectly aligned with the fitness criterion.\" After repeatedly optimizing brains to reproduce well, the brains in one case turned into general optimizers in their own right, more powerful ones than the original optimizer, with new goals not aligned in general to replicating DNA.\n\nThis could potentially happen anywhere inside an AGI subprocess where you were optimizing inside a sufficiently general solution space and you applied enough optimization power - you could get a solution that did its own, internal optimization, possibly in a way smarter than the original optimizer and misaligned to its original goals.\n\nWhen heavy optimization pressure on a system crystallizes it into an optimizer - especially one that's powerful, or more powerful than the previous system, or misaligned with the previous system - we could term the crystallized optimizer a \"daemon\" of the previous system. Thus, under this terminology, humans would be daemons of natural selection."], "tags": ["B-Class"], "alias": "2rc"} {"id": "6acb8d034f0140d7762bbb4e8d21c2c2", "title": "Bounded agent", "url": "https://arbital.com/p/bounded_agent", "source": "arbital", "source_type": "text", "text": "A bounded agent is the opposite of an [unbounded agent](https://arbital.com/p/unbounded_agent) and refers to agents that operate under realistic conditions and resource constraints. Bounded agents only have realistic amounts of computing power, RAM, and disk space (no more than fits into one Hubble volume). They act in real time, meaning that time in the external world goes on passing as they deliberate. They don't have perfect knowledge of the environment. Their sensors are noisy and require probabilistic inference. They can only act on the environment through motor outputs. Their environments are bigger than they are, and can't be fully simulated in realistic amounts of time.", "date_published": "2016-03-22T00:52:19Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "2rd"} {"id": "6e6e2ec251178c6d00c74287de0e0223", "title": "Shutdown utility function", "url": "https://arbital.com/p/shutdown_utility_function", "source": "arbital", "source_type": "text", "text": "A special case of [low impact](https://arbital.com/p/2pf) which probably seems deceptively trivial - how would you create a utility function such that an agent with this utility function would harmlessly shut down? Without, for example, creating an environmental subagent that assimilated all matter in the universe and used it to make absolutely sure that the AI stayed shut down forever and wasn't accidentally reactivated by some remote probability? 
If we had a shutdown utility function, and [a safe button that switched between utility functions in a reflectively stable way](https://arbital.com/p/1b7), we could combine these two features to create an AI that had a safe shutdown button.\n\nBetter yet would be an *abort* utility function which incentivizes the safe aborting of all previous plans and actions in a low-impact way, and, say, suspending the AI itself to disk in a way that preserved its log files; if we had this utility function plus a safe button that switched to it, we could safely *abort* the AI's current actions at any time. (This, however, would be more difficult, and it seems wise to work on just the shutdown utility function first.)\n\nTo avoid a rock trivially fulfilling this desideratum, we should add the requirement that (1) the shutdown utility function be something that produces \"just switch yourself off and do nothing else\" behavior in a generally intelligent agent, which if instead hooked up to a paperclip utility function, would be producing paperclips; and that the shutdown function should be [omni-safe](https://arbital.com/p/2x) (the AI safely shuts down even if it has all other outcomes available as primitive actions).\n\n\"All outcomes have equal utility\" would not be a shutdown utility function since in this case the actual action produced will be undefined under most forms of unbounded analysis - in essence, the AI's internal systems would continue under their own inertia and produce some kind of undefined behavior which might well be coherent and harmful. We need a utility function that identifies harmless behavior, rather than failing to identify anything and producing undefined behavior.", "date_published": "2016-03-22T01:52:27Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2rf"} {"id": "15ad28f951320921ce873c62b58357ed", "title": "Abortable plans", "url": "https://arbital.com/p/abortable", "source": "arbital", "source_type": "text", "text": "\"Abortable\" plans are those which can readily be switched to having very low net [impact](https://arbital.com/p/2pf) on the world. Suppose an AI is told to paint a car pink. The AI starts to do so by constructing replicating nanomachines that will paint the car pink. If the AI has successfully been given a [shutdown utility function](https://arbital.com/p/2rf) and shutdown button, we can press the button to have the AI switch off or suspend itself to disk and take no further action, but this might not affect the nanomachines already made. An AI with an *abort* button will have constructed the nanomachines such that at any time the AI can be given the \"abort\" instruction, which will with a minimum of further action on the AI's part cause all the nanomachines (including any replicated ones) to quietly self-destruct. 
That is, the AI has already planned such that the partial execution of the original plan, plus the activation midway of the abort subplan, will together have minimum impact on the world.", "date_published": "2016-03-22T02:03:46Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2rg"} {"id": "500ba2431de17c6a47c132382fc651ef", "title": "Task identification problem", "url": "https://arbital.com/p/task_identification", "source": "arbital", "source_type": "text", "text": "A subproblem of building a [task-directed AGI ](https://arbital.com/p/6w) is communicating to the AGI the next task and identifying which outcomes are considered as fulfilling the task. For the superproblem, see [https://arbital.com/p/-2s3](https://arbital.com/p/-2s3). This seems like primarily a communication problem. It might have additional constraints associated with, e.g., the AGI being a [behaviorist genie](https://arbital.com/p/102). In the [known-fixed-algorithm](https://arbital.com/p/1fy) case of AGI, it might be that we don't have much freedom in aligning the AGI's planning capabilities with its task representation, and that we hence need to work with a particular task representation (i.e., we can't just use language to communicate, we need to use labeled training cases). This is currently a stub page, and is mainly being used as a parent or tag for subproblems.", "date_published": "2016-03-23T21:13:27Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2rz"} {"id": "20d1a450da295f7ca12aeb43b87c7d13", "title": "Relevant powerful agent", "url": "https://arbital.com/p/relevant_powerful_agent", "source": "arbital", "source_type": "text", "text": "A cognitively powerful agent is *relevant* if it is cognitively powerful enough to be a game-changer in the larger dilemma faced by Earth-originating intelligent life. Conversely, an agent is *irrelevant* if it is too weak to make much of a difference, or if the cognitive problems it can solve or tasks it is authorized to perform don't significantly change the overall situation we face.\n\n## Definition\n\nIntuitively speaking, a value-aligned AI is 'relevant' to the extent it has a 'yes' answer to the box 'Can we use this AI to produce a benefit that solves the larger dilemma?' in [this flowchart](https://i.imgur.com/JuIFAxh.png?0), or is part of a larger plan that gets us to the lower green circle without any \"Then a miracle occurs\" steps. A cognitively powerful agent is relevant at all if its existence can effectuate good or bad outcomes - e.g., a big neural net is not 'relevant' because it doesn't end the world one way or another.\n\n(A better word than 'relevant' might be helpful here.)\n\n## Examples\n\n### Positive examples\n\n- A hypothetical agent that can bootstrap to nanotechnology by solving the inverse protein folding problem and shut down other AI projects, in a way that can reasonably be known safe enough to authorize by the AI's programmers, would be relevant.\n\n### Negative examples\n\n- An agent authorized to prove or disprove the Riemann Hypothesis, but not to do anything else, is not relevant (unless knowing whether the Riemann Hypothesis is true somehow changes everything for the basic dilemma of AI).\n- An oracle that can only output verified HOL proofs is not yet 'relevant' until someone can describe theorems to prove such that firm knowledge of their truth would be a game-changer for the AI situation. 
(Hypothesizing that someone else will come up with a theorem like that, if you just build the oracle, is a [hail Mary step](https://arbital.com/p/) in the plan.)\n\n## Importance\n\nMany proposals for AI safety, especially [advanced safety](https://arbital.com/p/2l), so severely restrict the applicability of the AI that the AI is no longer allowed to do anything that seems like it could solve the larger dilemma. (E.g., an oracle that is only allowed to give us binary answers for whether it thinks certain mathematical facts are true, and nobody has yet said how to use this ability to save the world.)\n\nConversely, proposals to use AIs to do things impactful enough to solve the larger dilemma generally run smack into all the usual [advanced safety](https://arbital.com/p/2l) problems, especially if the AI must operate in the rich domain of the real world to carry out the task (this tends to require full trust).\n\n## Open problem\n\n[It is an open problem to propose a relevant, limited AI](https://arbital.com/p/2y) that would be significantly easier to handle than the general safety problem, while also being useful enough to resolve the larger dilemma.", "date_published": "2015-12-16T01:21:04Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "2s"} {"id": "71e4801c4e558bca0f4608d4363bece8", "title": "Look where I'm pointing, not at my finger", "url": "https://arbital.com/p/pointing_finger", "source": "arbital", "source_type": "text", "text": "## Example problem\n\nSuppose we're trying to give a [Task AGI](https://arbital.com/p/6w) the task, \"Make there be a strawberry on the pedestal in front of your webcam.\" For example, a human could fulfill this task by buying a strawberry from the supermarket and putting it on the pedestal.\n\nAs part of aligning a Task AGI on this goal, we'd need to [identify](https://arbital.com/p/36y) strawberries and the pedestal.\n\nOne possible approach to communicating the concept of \"strawberry\" is through a training set of human-selected cases of things that are and aren't strawberries, on and off the pedestal.\n\nFor the sake of distinguishing causal roles, let's say that one human, User1, is selecting training cases of objects and putting them in front of the AI's webcam. A different human, User2, is looking at the scene and pushing a button when they see something that looks like a strawberry on the pedestal. The [intention](https://arbital.com/p/6h) is that pressing the button will label positive instances of the goal concept, namely strawberries on the pedestal. In actual use after training, the AI will be able to generate its own objects to put inside the room, possibly with further feedback from User2. We want these objects to be instances of our [intended](https://arbital.com/p/6h) goal concept, aka, actual strawberries.\n\nWe could draw an intuitive causal model for this situation as follows:\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10423843/L.png)\n\nSuppose that during the use phase, the AI actually creates a realistic plastic strawberry, one that will fool User2 into pressing the button.
Or, similarly, suppose the AI creates a small robot that sprouts tiny legs and runs over to User2's button and presses the button directly.\n\nNeither of these are the goal concept that we wanted the AI to learn, but any *test* of the hypothesis \"Is this event classified as a positive instance of the goal concept?\" will return \"Yes, the button was pressed.\" %%comment: (If you imagine some other User3 watching this and pressing an override button to tell the AI that this fake strawberry wasn't really a positive instance of the intended goal concept, imagine the AI modeling and then manipulating or bypassing User3, etcetera.)%%\n\nMore generally, the human is trying to point to their intuitive \"strawberry\" concept, but there may be other causal concepts that also separate the training data well into positive and negative instances, such as \"objects which come from strawberry farms\", \"objects which cause (the AI's psychological model of) User2 to think that something is a strawberry\", or \"any chain of events leading up to the positive-instance button being pressed\".\n\n%%comment: move this to sensory identification section: However, in a case like this, it's not like the actual physical glove is inside the AGI's memory. Rather, we'd be, say, putting the glove in front of the AGI's webcam, and then (for the sake of simplified argument) pressing a button which is meant to label that thing as a \"positive instance\". If we want our AGI to achieve particular states of the environment, we'll want it to reason about the causes of the image it sees on the webcam and identify a concept over those causes - have a goal over 'gloves' and not just 'images which look like gloves'. In the latter case, it could just as well fulfill its goal by setting up a realistic monitor in front of its webcam and displaying a glove image. So we want the AGI to [identify its task](https://arbital.com/p/2rz) over the causes of its sensory data, not just pixel fields.%%\n\n## Abstract problem\n\nTo state the above [potential difficulty](https://arbital.com/p/6r) more generally:\n\nThe \"look where I'm pointing, not at my finger\" problem is that the labels on the training data are produced by a complicated causal lattice, e.g., (strawberry farm) -> (strawberry) -> (User1 takes strawberry to pedestal) -> (Strawberry is on pedestal) -> (User2 sees strawberry) -> (User2 classifies strawberry) -> (User2 presses 'positive instance' button). We want to point to the \"strawberry\" part of the lattice of causality, but the finger we use to point there is User2's psychological classification of the training cases and User2's hand pressing the positive-instance button.\n\nWorse, when it comes to which model *best* separates the training cases, concepts that are further downstream in the chain of causality should classify the training data better, if the AI is [smart enough](https://arbital.com/p/2c) to understand those parts of the causal lattice.\n\nSuppose that at one point User2 slips on a banana peel, and her finger slips and accidentally classifies a scarf as a positive instance of \"strawberry\". From the AI's perspective there's no good way of accounting for this observation in terms of strawberries, strawberry farms, or even User2's psychology. To *maximize* predictive accuracy over the training cases, the AI's reasoning must take into account that things are more likely to be positive instances of the goal concept when there's a banana peel on the control room floor. 
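To make this concrete, here is a small self-contained simulation (the probabilities and feature names are invented for the example): a model that reasons only about strawberries is compared against a model that also reasons about the labeling process, and the latter fits the button-press labels better.

```python
# Toy illustration (invented numbers, no relation to any real system): when the
# label is generated by a causal chain that ends at a button, a model of the
# *button process* fits the training labels better than a model of the
# *strawberry*, because it can also absorb quirks of the labeler (the banana peel).

import math
import random

random.seed(0)

def simulate(n=10_000, slip_prob=0.2):
    """Each case: is it a strawberry? is there a banana peel near User2?
    The button gets pressed for strawberries, except that a banana peel
    sometimes makes User2's finger slip and press it anyway."""
    data = []
    for _ in range(n):
        strawberry = random.random() < 0.5
        peel = random.random() < 0.1
        pressed = strawberry or (peel and random.random() < slip_prob)
        data.append((strawberry, peel, pressed))
    return data

def log_likelihood(data, prob_fn):
    total = 0.0
    for strawberry, peel, pressed in data:
        p = min(max(prob_fn(strawberry, peel), 1e-9), 1 - 1e-9)
        total += math.log(p if pressed else 1 - p)
    return total

def fit_upstream(data):
    """'Strawberry' model: P(button | strawberry), ignoring everything downstream."""
    def rate(value):
        cases = [pressed for s, _, pressed in data if s == value]
        return sum(cases) / len(cases)
    p_true, p_false = rate(True), rate(False)
    return lambda s, peel: p_true if s else p_false

def fit_downstream(data):
    """'Button' model: P(button | strawberry, banana peel) - models the labeler too."""
    def rate(s_value, peel_value):
        cases = [pressed for s, p, pressed in data if s == s_value and p == peel_value]
        return sum(cases) / len(cases) if cases else 0.5
    table = {(s, p): rate(s, p) for s in (True, False) for p in (True, False)}
    return lambda s, peel: table[(s, peel)]

data = simulate()
print("strawberry-only model log-likelihood:", round(log_likelihood(data, fit_upstream(data)), 1))
print("button-process model log-likelihood: ", round(log_likelihood(data, fit_downstream(data)), 1))
# The second number is reliably higher: the extra, non-strawberry cause of
# button presses is predictable, so the best-fitting concept drifts downstream.
```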
Similarly, if some deceptively strawberry-shaped objects slip into the training cases, or are generated by the AI [querying the user](https://arbital.com/p/2qq), the best boundary that separates 'button pressed' from 'button not pressed' labeled instances will include a model of what makes a human believe that something is a strawberry.\n\nA learned concept that's 'about' layers of the causal lattice that are further downstream of the strawberry, like User2's psychology or mechanical force being applied to the button, will implicitly take into account the upstream layers of causality. To the extent that something being strawberry-shaped causes a human to press the button, it's implicitly part of the category of \"events that end applying mechanical force to the 'positive-instance' button\"). Conversely, a concept that's about upstream layers of the causal lattice can't take into account events downstream. So if you're looking for pure predictive accuracy, the best model of the labeled training data - given sufficient AGI understanding of the world and the more complicated parts of the causal lattice - will always be \"whatever makes the positive-instance button be pressed\".\n\nThis is a problem because what we actually want is for there to be a strawberry on the pedestal, not for there to be an object that looks like a strawberry, or for User2's brain to be rewritten to think the object is a strawberry, or for the AGI to seize the control room and press the positive instance button.\n\nThis scenario may qualify as a [context disaster](https://arbital.com/p/6q) if the AGI only understands strawberries in its development phase, but comes to understand User2's psychology later. Then the more complicated causal model, in which the downstream concept of User2's psychology separates the data better than reasoning about properties of strawberries directly, first becomes an issue only when the AI is over a high threshold level of intelligence.\n\n## Approaches\n\n[conservatism](https://arbital.com/p/2qp) would try to align the AGI to plan out goal-achievement events that were as similar as possible to the particular goal-achievement events labeled positively in the training data. If the human got the strawberry from the supermarket in all training instances, the AGI will try to get the same brand of strawberry from the same supermarket.\n\n[Ambiguity identification](https://arbital.com/p/4w) would focus on trying to get the AGI to ask us whether we meant 'things that make humans think they're strawberries' or 'strawberry'. This approach might need to go through resolving ambiguities by the AGI explicitly symbolically communicating with us about the alternative possible goal concepts, or generating sufficiently detailed multiple-view descriptions of a hypothetical case, not the AGI trying real examples. 
Testing alternative hypotheses using real examples always says that the label is generated further causally downstream; if you are sufficiently intelligent to construct a fake plastic strawberry that fools a human, trying out the hypothesis will produce the response \"Yes, this is a positive instance of the goal concept.\" If the AGI tests the hypothesis that the 'real' explanation of the positive instance label is 'whatever makes the button be pressed' rather than 'whatever makes User2 think of a strawberry' by carrying out the distinguishing experiment of pressing the button in a case where User2 doesn't think something is a strawberry, the AGI will find that the experimental result favors the 'it's just whatever presses the button hypothesis'. Some modes of ambiguity identification break for sufficiently advanced AIs, since the AI's experiment interferes with the causal channel that we'd intended to return information about *our* intended goal concept.\n\nSpecialized approaches to the pointing-finger problem in particular might try to define a supervised learning algorithm that tends to internally distill, in a predictable way, some model of causal events, such that the algorithm could be instructed somehow to try learning a *simple* or *direct* relation between the positive \"strawberry on pedestal\" instances, and the observed labels of the \"sensory button\" node within the training cases; with this relation not being allowed to pass through the causal model of User2 or mechanical force being applied to the button, because we know how to say \"those things are too complicated\" or \"those things are too far causally downstream\" relative to the algorithm's internal model.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)\n\nThis specialized approach seems potentially susceptible to initial approach within modern machine learning algorithms.\n\nBut to restate the essential difficulty from an [advanced-safety](https://arbital.com/p/2l) perspective: in the limit of [advanced intelligence](https://arbital.com/p/2c), the *best possible* classifier of the relation between the training cases and the observed button labels will always pass through User2 and anything else that might physically press the button. Trying to 'forbid' the AI from using the most effective classifier for the relation between Strawberry? and observed values of Button! seems potentially subject to a [Nearest Unblocked](https://arbital.com/p/42) problem, where the 'real' simplest relation re-emerges in the advanced phase after being suppressed during the training phase. Maybe the AI reasons about certain very complicated properties of the material object on the pedestal... in fact, these properties are so complicated that they turn out to contain implicit models of User2's psychology, again because this produces *a better separation* of the labeled training data. That is, we can't allow the 'strawberry' concept to include complicated logical properties of the strawberry-object that in effect include a psychological model of User2 reacting to the strawberry, implying that if User2 can be fooled by a fake plastic model, that must be a strawberry. 
*Even though* this richer model will produce a more accurate classification of the training data, and any actual experiments performed will return results favoring the richer model.\n\nEven so, this doesn't seem impossible to navigate as a machine learning problem; an algorithm might be able to recognize when an upstream causal mode starts to contain predicates that belong in a downstream causal node; or an algorithm might contain strong regularization rules that collect all inference about User2 into the User2 node rather than letting it slop over anywhere else; or it might be possible to impose a constraint, after the strawberry category has been learned sufficiently well, that the current level of strawberry complexity is the most complexity allowed; or the granularity of the AI's causal model might not allow such complex predicates to be secretly packed into the part of the causal graph we're identifying, without visible and transparent consequences when we monitor how the algorithm is learning the goal predicate.\n\nA toy model of this setup ought to include analogues of User2 that sometimes make mistakes in a regular way, and actions the AI can potentially take to directly press the labeling button; this would test the ability to point an algorithm to learn about the compact properties of the strawberry in particular, and not other concepts causally downstream that could potentially separate the training data better, or better explain the results of experiments. A toy model might also introduce new discoverable regularities of the User2 analogue, or new options to manipulate the labeling button, as part of the test data, in order to simulate the progression of an advanced agent gaining new capabilities.", "date_published": "2016-09-09T20:40:33Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Suppose we're trying to give a [Task AGI](https://arbital.com/p/6w) the task, \"Put a strawberry on this pedestal\". We mean to identify our intended category of strawberries by waving some strawberries and some non-strawberries in front of the AI's webcam. Alice in the control room will press a button to label which of these objects are strawberries. The \"Look where I'm pointing, not at my finger\" problem is getting the AI to focus on the strawberries rather than Alice or the button. The concepts \"strawberry on the pedestal\" and \"event that makes Alice think of strawberries\" and \"event that causes the button to be pressed\" are different goals to pursue, even though as concepts they'll all equally well-classify any normal training cases. AIs pursuing these goals respectively put a strawberry on the pedestal, fool Alice using a plastic strawberry, and build a robotic arm to press the labeling button.\n\nWe want a way to point to a particular part of the AI's model of the causal lattice that produces the labeled training data - the event we intuitively consider to be the strawberry on the pedestal, versus other parts of the causal lattice like Alice and the button. 
Hence \"look where I'm pointing, not at my finger\".\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)"], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class", "Ontology identification problem"], "alias": "2s0"} {"id": "3fc0e4d7282ce6fc397456ef4f886669", "title": "Do-What-I-Mean hierarchy", "url": "https://arbital.com/p/dwim", "source": "arbital", "source_type": "text", "text": "Do-What-I-Mean refers to an aligned AGI's ability to produce better-aligned plans, based on an explicit model of what the user wants or believes.\n\nSuccessive levels of DWIM-ness:\n\n- No understanding of human intentions / zero DWIMness. E.g. a Task AGI that is focused on one task being communicated, where all the potential [impacts](https://arbital.com/p/2pf) of that task need to be separately [vetted](https://arbital.com/p/2qq). If you tell this kind of AGI to 'cure cancer', you might need to veto plans which would remove the cancer but kill the patient as a side effect, because the AGI doesn't start out knowing that you'd prefer not to kill the patient.\n- Do What You Don't Know I Dislike. The Task AGI has a background understanding of some human goals or which parts of the world humans consider especially significant, so it can more quickly generate a plan likely to pass human [vetting](https://arbital.com/p/2qq). A Task AGI at this level, told to cure cancer, will take relatively fewer rounds of Q&A to generate a plan which carefully seals off any blood vessels cut by removing the cancer; because the AGI has a general notion of human health, knows that [impacts](https://arbital.com/p/2pf) on human health are significant, and models that users will generally prefer plans which result in good human health as side effects rather than plans which result in poor human health.\n- Do What You Know I Understood. The Task AGI has a model of human *beliefs,* and can flag and report divergences between the AGI's model of what the humans expect to happen, and what the AGI expects to happen.\n- DWIKIM: Do What I Know I Mean. The Task AGI has an explicit psychological model of human preference - not just a list of things in the environment which are significant to users, but a predictive model of how users behave which is informative about their preferences. At this level, the AGI can read through a dump of online writing, build up a model of human psychology, and guess that you're telling it to cure a cancer because you altruistically want that person to be healthier.\n- DWIDKIM: Do What I Don't Know I Mean. The AGI can perform some basic [extrapolation](https://arbital.com/p/3c5) steps on its model of you and notice when you're trying to do something that, in the AGI's model, some further piece of knowledge might change your mind about. (Unless we trust the DWIDKIM model a *lot*, this scenario should imply \"Warn the user about that\" not \"Do what you think the user would've told you.\")\n- (Coherent) Extrapolated Volition. 
The AGI does what it thinks you (or everyone) would've told it to do if you were as smart as the AGI, i.e., your decision model is extrapolated toward improved knowledge, increased ability to consider arguments, improved reflectivity, or other transforms in the direction of a theory of normativity.\n\nRisks from pushing toward higher levels of DWIM might include:\n\n- To the extent that DWIM can originate plans, some portion of which are not fully supervised, then DWIM is a very complicated goal or preference system that would be harder to train and more likely to break. This failure mode may be less likely if some level of DWIM is *only* being used to *flag* potentially problematic plans that were generated by non-DWIM protocols, rather than generating plans on its own.\n- Accurate predictive psychological models of humans might push the system closer to the [programmer deception](https://arbital.com/p/10f) failure mode being more accessible if something else goes wrong.\n- Sufficiently advanced psychological models might constitute [mindcrime](https://arbital.com/p/6v).\n- The human-genie system might end up in the [Valley of Dangerous Complacency](https://arbital.com/p/2s8) where the genie *almost* always gets it right but occasionally gets it very wrong, and the human user is no longer alert to this possibility during the [checking phase](https://arbital.com/p/2qq).\n - E.g. you might be tempted to skip the user checking phase, or just have the AI do whatever it thinks you meant, at a point where that trick only works 99% of the time and not 99.999999% of the time.\n- Computing sufficiently advanced DWIDKIM or [EV](https://arbital.com/p/3c5) possibilities for user querying might expose the human user to cognitive hazards. (\"If you were sufficiently superhuman under scenario 32, you'd want yourself to stare really intently at this glowing spiral for 2 minutes, it might change your mind about some things... 
want to check and see if you think that's a valid argument?\")\n- If the AGI was actually behaving like a safe genie, the sense of one's wishes being immediately fulfilled without effort or danger might expose the programmers to additional [moral hazard](https://arbital.com/p/2sb).", "date_published": "2016-06-06T17:38:45Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Damon Pourtahmaseb-Sasi"], "summaries": ["\"Do What I Mean\" or \"DWIM\" refers to the degree to which an AGI can rapidly [identify](https://arbital.com/p/2s3) an intended goal and find a [safe](https://arbital.com/p/2l) plan to it, based on the AI's understanding of what the user *means* or *wants.*\n\nLevels of DWIM-ness could range over:\n\n- Having a general idea of which parts of the world the user thinks are significant (so that the AI warns about [impacts](https://arbital.com/p/2pf) on significant things);\n- Having a psychological model of the user's beliefs, and flagging/reporting when the AI thinks the user has a false belief about the consequences of a plan;\n- Having a psychological model of the user's desires, and trying to fulfill what the AI thinks the user *wants* to accomplish by giving the AGI a task;\n- At the extreme end: [Extrapolated volition](https://arbital.com/p/313) models of what the user(s) *would* want\\* under idealization conditions."], "tags": ["B-Class"], "alias": "2s1"} {"id": "1fa0988312f29f1f2822009a36eaeb6f", "title": "Safe plan identification and verification", "url": "https://arbital.com/p/safe_plan_identification", "source": "arbital", "source_type": "text", "text": "Safe plan identification is the problem of how to give a Task AGI training cases, answered queries, abstract instructions, etcetera such that (a) the AGI can thereby [identify outcomes in which the task was fulfilled](https://arbital.com/p/2rz), (b) the AGI can generate an okay plan for getting to some such outcomes without bad side effects, and (c) the user can verify that the resulting plan is actually okay via some series of further questions or [user querying](https://arbital.com/p/2qq). This is the superproblem that includes [task identification](https://arbital.com/p/2rz), as much [value identification](https://arbital.com/p/6c) as is needed to have some idea of the general class of post-task worlds that the user thinks are okay, any further tweaks like [low-impact planning](https://arbital.com/p/2pf) or [flagging inductive ambiguities](https://arbital.com/p/4w), etcetera. This superproblem is distinguished from the *entire* problem of building a Task AGI because there's further issues like [corrigibility](https://arbital.com/p/45), [behaviorism](https://arbital.com/p/102), building the AGI in the first place, etcetera. 
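Purely as a schematic picture of the (a)-(c) loop above (the plan format, the side-effect annotations, the significance list, and the `ask_user` stand-in are all invented for illustration), the intended interaction is that the AGI only proposes candidate plans, flags predicted side effects that touch user-designated significant domains, and executes nothing until the user signs off:

```python
# Minimal sketch of a propose-and-verify loop. Everything here (the plan
# format, the notion of "predicted side effects", the significance list) is
# invented for illustration, not a real protocol.

from dataclasses import dataclass, field

@dataclass
class CandidatePlan:
    description: str
    achieves_task: bool
    predicted_side_effects: list = field(default_factory=list)

SIGNIFICANT = {"patient health", "power supply", "other people"}  # user-supplied

def flag(plan):
    """Side effects that touch anything the user marked as significant."""
    return [e for e in plan.predicted_side_effects if e in SIGNIFICANT]

def vet(plans, ask_user):
    """Show each task-achieving plan and its flagged side effects; run nothing
    until the user explicitly approves one."""
    for plan in plans:
        if not plan.achieves_task:
            continue
        if ask_user(plan.description, flag(plan)):
            return plan
    return None  # no plan approved: gather more information instead of acting

plans = [
    CandidatePlan("excise tumor, sever artery", True, ["patient health"]),
    CandidatePlan("excise tumor, repair artery", True, []),
]

# Stand-in for a real human check: approve only plans with nothing flagged.
approved = vet(plans, ask_user=lambda desc, flagged: not flagged)
print("approved plan:", approved.description if approved else None)
```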
The safe plan identification superproblem is about communicating the task plus user preferences about side effects and implementation, such that this information allows the AGI to identify a safe plan and for the user to know that a safe plan has been identified.", "date_published": "2016-03-23T21:17:13Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2s3"} {"id": "f2c27eff6d6533175810a020cc21a5fe", "title": "Valley of Dangerous Complacency", "url": "https://arbital.com/p/complacency_valley", "source": "arbital", "source_type": "text", "text": "The Valley of Dangerous Complacency is when a system works often enough that you let down your guard around it, but in fact the system is still dangerous enough that full vigilance is required.\n\n- If a robotic car made the correct decision 99% of the time, you'd need to grab the steering wheel on a daily basis, you'd stay alert and your robot-car-overriding skills would stay sharp.\n- If a robotic car made the correct decision 100% of the time, you'd relax and let your guard down, but there wouldn't be anything wrong with that.\n- If the robotic car made the correct decision 99.99% of the time, so that you need to grab the steering wheel or else crash in 1 of 100 days, the task of monitoring the car would feel very unrewarding and the car would seem pretty safe. You'd let your guard down and your driving skills would get rusty. After a couple of months, the car would crash.\n\nCompare \"[Uncanny Valley](https://en.wikipedia.org/wiki/Uncanny_valley)\" where a machine system is partially humanlike - humanlike enough that humans try to hold it to a human standard - but not humanlike enough to actually seem satisfactory when held to a human standard. This means that in terms of user experience, there's a valley as the degree of humanlikeness of the system increases where the user experience actually gets worse before it gets better. Similarly, if users become complacent, a 99.99% reliable system can be worse than a 99% reliable one, even though, with *enough* reliability, the degree of safety starts climbing back out of the valley.", "date_published": "2016-03-23T21:33:57Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2s8"} {"id": "fabcde744340f683332f7b85205535c4", "title": "Moral hazards in AGI development", "url": "https://arbital.com/p/moral_hazard", "source": "arbital", "source_type": "text", "text": "\"Moral hazard\" is when the directors of an advanced AGI give in to the temptation to direct the AGI in ways that the rest of us would regard as 'bad', like, say, declaring themselves God-Emperors. Limiting the duration of the human programmers' exposure to the temptations of power is one reason to want a [non-human-commanded, internally sovereign](https://arbital.com/p/1g3) AGI *eventually,* directed by something like [coherent extrapolated volition](https://arbital.com/p/cev), even if the far more difficult safety issues mean we shouldn't build the *first* AGI that way. 
Anyone recommending \"oversight\" as a guard against moral hazard is advised to think hard about moral hazard in the overseers.\n\nA smart setup with any other body \"overseeing\" the programmers of a [Task AGI](https://arbital.com/p/6w), if we don't just want the moral hazard transferred to people who may be even *less* trustworthy, probably means making sure that *in practice* both the programmers and the overseers have to agree on a Task before it gets carried out, not that one side can *in practice* do things even if the other side disagrees, where \"in practice\" would include e.g. it only taking one month to redevelop the technology in a way that responded to only the overseers.", "date_published": "2016-03-23T21:51:40Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2sb"} {"id": "3ea29f753c40d41c1229b5ecca8f9240", "title": "Informed oversight", "url": "https://arbital.com/p/informed_oversight", "source": "arbital", "source_type": "text", "text": "It can be difficult for one agent to oversee another agent that is maximizing a reward controlled by the first agent, even if the first agent is smarter than the second agent. This is a serious obstacle to [https://arbital.com/p/1t7](https://arbital.com/p/1t7) and other approaches based on reinforcement learning.\n\nMore formally, suppose a strong agent B with a utility function U is overseeing a weak agent A. Agent A produces an action y and gets a reward B(y) determined by agent B. B wants to give rewards such that agent A is incentivized to make its best effort at maximizing U(a). \n\nIf B(y) is an [efficient](https://arbital.com/p/6s) estimate of U(y) relative to A, then indeed maximizing B(y) is equivalent to maximizing U(y) from A's perspective. Unfortunately, it is difficult for this to be the case even when B is smarter than A, because A may think thoughts B does not in the course of selecting y. Here are a couple concrete examples (each having the flavor of \"easy to generate a solution, hard to check a solution\"):\n\n* Suppose A is outputting a partial Sudoku solutions. U assigns a high value to partial Sudoku solutions consistent with some solution to the entire Sudoku problem. A comes up with some solution to the Sudoku puzzle and outputs part of this solution. B cannot tell that this partial solution is actually part of a full solution, even though B is smarter than A. (B is able to come up with its own solutions but most aren't consistent with A's partial solution).\n* Suppose U assigns a high value to cat pictures that don't contain steganographic messages. B has no way of distinguishing cat pictures that contain steganographic messages from ones that don't. Thus, B has to give all cat pictures the same score, regardless of whether the pictures contain steganographic messages. This either fails to reward A for producing cat pictures not containing steganographic messages, or rewards A for producing cat pictures containing steganographic messages.\n\nHere are a couple analogous examples that seem easier to analyze:\n\n1. Let O be a one-way function (e.g. SHA256). U assigns a high value to binary strings y equal to O(111x) for some x. B cannot tell whether a particular string y is equal to O(111x) for some x (even if B is smarter than A), so B has no good way of giving rewards to A. \n2. Same as the previous problem, except U assigns a high value to binary strings y that are _not_ equal to O(111x) for any short string x. 
A's intended strategy is to generate a uniformly random string (or generate a string in some other way that doesn't introduce a bias towards generating strings of the form O(111x)), but B has no way of knowing that A's string was uniformly random (rather than equal to O(111x) for a uniformly random x).\n\n\nOne attempted solution is for agent B to see A's thought processes in addition to the action y. Then, agent B can use all the information A has, so that B(y) is an efficient estimate of U(y) from the perspective of A. However, this requires B to understand something about A's mental architecture. It would be good to have a more generic solution that requires only weak assumptions about A's mental architecture.\n\nAn attempted solution to problem 2 is [for B to incentivize A to generate its action in a provably random manner](https://agentfoundations.org/item?id=700). However, this doesn't appear to generalize well to problems significantly different from 2.\n\n# Further reading\n[Paul Christiano on adequate oversight](https://medium.com/ai-control/adequate-oversight-25fadf1edce9#.bv5lq13fw)\n\n[Paul Christiano on the informed oversight problem](https://medium.com/ai-control/the-informed-oversight-problem-1b51b4f66b35#.o82tjmhll)", "date_published": "2016-03-24T00:50:49Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["Open subproblems in aligning a Task-based AGI", "B-Class"], "alias": "2sc"} {"id": "bbe6354fdfaadd6a66150bd2975c69e4", "title": "Safe training procedures for human-imitators", "url": "https://arbital.com/p/safe_training_for_imitators", "source": "arbital", "source_type": "text", "text": "How do we train a reinforcement learning system to imitate a human producing complex outputs such as strings? Existing approaches are not entirely satisfactory. \n\nConcretely, suppose there is some set of questions. A human can answer each question with a string. We have a set of (question, answer) pairs as training data. Train a model that, given a question, will produce an answer similar to the answer a human would give. Here are some existing approaches to this problem:\n\n# Generative adversarial models\n\nIn a [generative adversarial model](http://arxiv.org/abs/1406.2661), one agent (the _imitator_) outputs an answer and another agent (the _distinguisher_) tries to determine if the answer was produced by the imitator or by an actual human. The imitator gets a score for fooling the distinguisher.\n\nAdvantages:\n\n1. It's possible to tell when the imitator is not imitating the human well. Example: humans always solve Sudoku puzzles. The imitator cannot solve Sudoku puzzles. The distinguisher knows humans always solve Sudoku puzzles and can check candidate solutions to see if they actually solve the Sudoku puzzle, to distinguish the imitator from the human.\n2. Whenever the human can be imitated efficiently, the imitator is able to get a good score by doing this.\n\nDisadvantages:\n\n1. The imitator may differ from human behavior in important but hard-to-detect ways. Example: humans always produce photos of cat drawings. The imitator produces photos of cat drawings that contain steganographic messages. The distinguisher cannot detect these messages, and therefore cannot tell the imitator apart from the human.\n\n# Retrodiction\n\nGiven a question, the imitator outputs some representation of a distribution. It is scored according to the estimated probability that this distribution assigns to the human's actual answer (by log probability). 
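A minimal sketch of that scoring rule (the Gaussian model family and the numbers are invented for the example):

```python
# Tiny illustration of log-probability scoring: the imitator reports a
# distribution, and its reward is the log-probability that distribution
# assigns to what the human actually said.

import math

def gaussian_log_prob(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

def retrodiction_score(reported, human_answer):
    """reported = (mean, std) of the imitator's predictive distribution."""
    mean, std = reported
    return gaussian_log_prob(human_answer, mean, std)

human_answer = 7.0   # what the human actually wrote down
reports = [
    ("confident & right", (7.1, 0.5)),
    ("confident & wrong", (3.0, 0.5)),
    ("hedged",            (5.0, 3.0)),
]

for name, report in reports:
    print(f"{name:18s} score = {retrodiction_score(report, human_answer):7.2f}")
# Being confidently wrong is punished far harder than hedging, which is what
# makes the log score a proper scoring rule.
```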
This is called \"retrodiction\" because it must be possible to show that the actual answer had high probability in hindsight.\n\nPractical examples of this approach include [variational autoencoders](http://arxiv.org/abs/1312.6114) and [approximate model counting](http://www.cs.cornell.edu/gomes/papers/ModelCounting-SAT-Handbook-prelim.pdf). In theory, perhaps [importance sampling](https://agentfoundations.org/item?id=632) is a good general theory of retrodiction.\n\nAdvantages:\n\n1. No steganography problem.\n\n\nDisadvantages:\n\n1. The computational complexity properties are unclear. We have no guarantee of the form \"if it's efficient to imitate a human doing something, then it's efficient to create a probabilistic model that provably assigns a high probability to the human's behavior\".\n2. It's hard to tell when the model is performing badly in an absolute sense (as in the Sudoku example).", "date_published": "2016-03-24T03:31:26Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["Open subproblems in aligning a Task-based AGI", "B-Class"], "alias": "2sj"} {"id": "e8bb33b4ae0ad1d80f8dbb23235fd3a0", "title": "Reliable prediction", "url": "https://arbital.com/p/reliable_prediction", "source": "arbital", "source_type": "text", "text": "Most statistical learning theory settings (such as [PAC learning](https://en.wikipedia.org/wiki/Probably_approximately_correct_learning) and [online learning](https://en.wikipedia.org/wiki/Online_machine_learning)) do not provide good enough bounds in realistic high-stakes settings, where the test data does not always look at the training data and a single mistaken prediction can cause catastrophe. It is important to design predictors that can either reliably predict verifiable facts (such as human behavior over a short time scale) or indicate when their predictions might be unreliable.\n\n# Motivation\n\nImagine that we have a powerful (but not especially reliable) prediction system for predicting human answers to binary questions. We would like to use it to predict human answers whenever we expect these predictions to be reliable, and avoid outputting predictions wherever we expect the predictions to be unreliable (so we can gather training data instead).\n\n# Active learning\n\nIn [active learning](https://en.wikipedia.org/wiki/Active_learning_) (machine_learning)), the learner decides which questions to query the human about. For example, it may query the human about the most ambiguous questions, which provide the most information about how the human answers questions. Unfortunately, this may result in asking the human \"weird\" questions rather than actually informative ones (this problem is documented in [this literature survey](http://burrsettles.com/pub/settles.activelearning.pdf)). There are some strategies for reducing this problem, such as only asking questions sampled from some realistic generative model for questions.\n\n\n# KWIK (\"knows what it knows\") learning\n\nIn [KWIK learning](http://www.research.rutgers.edu/~lihong/pub/Li08Knows.pdf) (a variant of [online selective sampling](http://www.cs.cornell.edu/~sridharan/selective_colt2010.pdf), which is itself a hybrid of [active learning](https://en.wikipedia.org/wiki/Active_learning_) (machine_learning)) and [online learning](https://en.wikipedia.org/wiki/Online_machine_learning)), a learner sees an arbitrary sequence of questions. 
The learner has some class of hypotheses for predicting the answers to questions, one of which is good ([efficient](https://arbital.com/p/6s) relative to the learner). For each question, the learner may either output a prediction or ⊥ . If the learner outputs a prediction, the prediction must be within ε of a good prediction. If the learner outputs ⊥ , then it receives the answer to the question.\n\nThis is easy to imagine in the case where there are 100 experts, one of which outputs predictions that are [efficient](https://arbital.com/p/6s) relative to the other experts. Upon receiving a question, the learner asks each expert for its prediction of the answer to the question (as a probability). If all predictions by experts who have done well so far are within ε of each other, then the learner outputs one of these predictions. Otherwise, the learner outputs ⊥, sees the human's answer to the question, and rewards/penalizes the experts according to their predictions. Eventually, all experts either output good predictions or get penalized for outputting enough bad predictions over time.\n\nUnfortunately, it is hard to show KWIK-learnability for hypothesis classes more complicated than a small finite set of experts or a class of linear predictors.\n\n# Further reading\n\n[https://arbital.com/p/1ty](https://arbital.com/p/1ty)", "date_published": "2016-03-24T00:29:32Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["B-Class"], "alias": "2sn"} {"id": "de84e5544711c9094c0da37b4a7cc97b", "title": "Selective similarity metrics for imitation", "url": "https://arbital.com/p/selective_similarity_metric", "source": "arbital", "source_type": "text", "text": "A human-imitator (trained using [https://arbital.com/p/2sj](https://arbital.com/p/2sj)) will try to imitate _all_ aspects of human behavior. Sometimes we care more about how good the imitation is along some axes than others, and it would be inefficient to imitate the human along all axes. Therefore, we might want to design [scoring rules](https://en.wikipedia.org/wiki/Scoring_rule#Proper_scoring_rules) for human-imitators that emphasize matching performance along some axes more than others.\n\nCompare with [https://arbital.com/p/1vp](https://arbital.com/p/1vp), another proposed way of making human-imitation more efficient.\n\nHere are some ideas for constructing scoring rules:\n\n# Moment matching\n\nSuppose that, given a question, the human will write down a number. We ask some predictor to output the parameters of some Gaussian distribution. We train the predictor to output Gaussian distributions that assign high probability to the training data. Then, we sample from this Gaussian distribution to imitate the human. Clearly, this is a way of imitating some aspects of human behavior (mean and variance) but not others.\n\nThe general form of this approach is to estimate _moments_ (expectations of some features) of the predictor's distribution on human behavior, and then sample from some distribution with these moments (such as an [exponential family distribution](https://en.wikipedia.org/wiki/Exponential_family))\n\nA less trivial example is a variant of [inverse reinforcement learning](http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf). In this variant, to predict a sequence of actions the human takes, the predictor outputs some representation of a reward function on states (such as the parameters to some affine function of features of the state). 
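The simple Gaussian version of moment matching described above can be sketched in a few lines (the training numbers are hypothetical); the IRL variant, whose description continues below, follows the same pattern with trajectory features in place of mean and variance:

```python
# Minimal sketch of Gaussian moment matching for imitation: match only the
# mean and variance of the human's numeric answers, not every other detail.
# The data here are invented for the example.

import random
import statistics

random.seed(0)

human_answers = [41, 44, 39, 45, 40, 43, 42, 46]  # hypothetical training data

# "Predictor": report the parameters of a Gaussian fit to the human's answers.
mean = statistics.mean(human_answers)
std = statistics.stdev(human_answers)

# "Imitator": sample from the fitted distribution. This matches the human on
# the chosen moments (mean, variance) while ignoring all other structure.
imitation = [random.gauss(mean, std) for _ in range(5)]
print(f"fitted mean={mean:.1f}, std={std:.1f}")
print("imitated answers:", [round(x, 1) for x in imitation])
```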
The human is modeled as a noisy reinforcement learner with this reward function, and the predictor is encouraged to have this model assign high probability to the human's actual trajectory. To imitate the human, run a noisy inverse reinforcement learner with the predicted reward function. The predictor can be seen as estimating moments of the human's trajectory (specifically, moments related to frequencies of state transitions between states with different features), and the system samples from a distribution with these same moments in order to imitate the human.\n\n# Combining proper scoring rules\n\nIt is easy to see that the sum of two [proper scoring rules](https://en.wikipedia.org/wiki/Scoring_rule#Proper_scoring_rules) is a proper scoring rule. Therefore, it is possible to combine proper scoring rules to train a human-imitator to do well according to both scoring rules. For example, we may score a distribution both on how much probability it assigns to human actions and to how well its moments match the moments of human actions, according to some weighting.\n\nNote that proper scoring rules can be characterized by [convex functions](https://www.stat.washington.edu/raftery/Research/PDF/Gneiting2007jasa.pdf).\n\n# Safety\n\nIt is unclear how safe it is to train a human-imitator using a selective similarity metric. To the extent that the AI is _not_ doing some task the way a human would, it is possible that it is acting dangerously. One hopes that, to the extent that the human-imitator is using a bad model to imitate the human (such as a noisy reinforcement learning model), it is not bad in a way that causes problems such as [https://arbital.com/p/2w](https://arbital.com/p/2w). It would be good to see if something like IRL-based imitation could behave dangerously in some realistic case.", "date_published": "2016-03-24T03:21:45Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["B-Class"], "alias": "2sp"} {"id": "9440c20ffea6de61fddbfc46bd15a679", "title": "AI alignment", "url": "https://arbital.com/p/ai_alignment", "source": "arbital", "source_type": "text", "text": "The \"alignment problem for [advanced agents](https://arbital.com/p/2c)\" or \"AI alignment\" is the overarching research topic of how to develop [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) such that running them produces [good](https://arbital.com/p/3d9) outcomes in the real world.\n\nBoth '[advanced agent](https://arbital.com/p/2c)' and '[good](https://arbital.com/p/55)' should be understood as metasyntactic placeholders for complicated ideas still under debate. The term 'alignment' is intended to convey the idea of pointing an AI in a direction--just like, once you build a rocket, it has to be pointed in a particular direction.\n\n\"AI alignment theory\" is meant as an overarching term to cover the whole research field associated with this problem, including, e.g., the much-debated attempt to estimate [how rapidly an AI might gain in capability](https://arbital.com/p/) once it goes over various particular thresholds.\n\nOther terms that have been used to describe this research problem include \"[robust](https://arbital.com/p/1cv) and [beneficial](https://arbital.com/p/3d9) AI\" and \"Friendly AI\". 
The term \"[value alignment problem](https://arbital.com/p/5s)\" was coined by [Stuart Russell](https://arbital.com/p/) to refer to the primary subproblem of aligning AI preferences with (potentially [idealized](https://arbital.com/p/3c5)) human preferences.\n\nSome alternative terms for this general field of study, such as 'control problem', can sound [adversarial](https://arbital.com/p/7g0)--like the rocket is already pointed in a bad direction and you need to wrestle with it. Other terms, like 'AI safety', understate the advocated degree to which alignment ought to be an intrinsic part of building advanced agents. E.g., there isn't a separate theory of \"bridge safety\" for how to build bridges that don't fall down. Pointing the agent in a particular direction ought to be seen as part of the standard problem of building an advanced machine agent. The problem does not divide into \"building an advanced AI\" and then separately \"somehow causing that AI to produce good outcomes\", the problem is \"getting good outcomes via building a cognitive agent that brings about those good outcomes\".\n\nA good introductory article or survey paper for this field does not presently exist. If you have no idea what this problem is about, consider reading [Nick Bostrom's popular book Superintelligence](https://arbital.com/p/3db).\n\nYou can explore this Arbital domain by following [this link](http://arbital.com/explore/ai_alignment). See also the [List of Value Alignment Topics on Arbital](https://arbital.com/p/3g) although this is not up-to-date.\n\n%%%comment:\n\n# Overview\n\n## Supporting knowledge\n\nIf you're willing to spend time on learning this field and are not previously familiar with the basics of decision theory and probability theory, it's worth reading the Arbital introductions to those first. 
In particular, it may be useful to become familiar with the notion of [priors and belief revision](https://arbital.com/p/1zq), and with the [coherence arguments for expected utility](https://arbital.com/p/7hh).\n\nIf you have time to read a textbook to gain general familiarity with AI, \"[Artificial Intelligence: A Modern Approach](https://amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597)\" is highly recommended.\n\n## Key terms\n\n- Advanced agent\n- Value / goodness / beneficial\n- Artificial General Intelligence\n- Superintelligence\n\n## Key factual propositions\n\n- Orthogonality thesis\n- Instrumental convergence thesis\n- Capability gain\n- Complexity and fragility of cosmopolitan value\n- Alignment difficulty\n\n## Advocated basic principles\n\n- Nonadversarial principle\n- Minimality principle\n- Intrinsic safety\n- Unreliable monitoring %note: Treat human monitoring as expensive, unreliable, and fragile.%\n\n## Advocated methodology\n\n- Generalized security mindset\n- Foreseeable difficulties\n\n## Advocated design principles\n\n- Value alignment\n- Corrigibility\n- Mild optimization / Taskishness\n- Conservatism\n- Transparency\n- Whitelisting\n- Redzoning\n\n## Current research areas\n\n- Cooperative inverse reinforcement learning\n- Interruptibility\n- Utility switching\n- Stable self-modification & Vingean reflection\n- Mirror models\n- Robust machine learning\n\n# Open problems\n\n- Environmental goals\n- Other-izer problem\n- Impact penalty\n- Shutdown utility function & abortability\n- Fully updated deference\n- Epistemic exclusion\n\n# Partially solved problems\n\n- Logical uncertainty for doubly-exponential reflective agents.\n- Interruptibility for non-reflective non-consequentialist Q-learners.\n- Strategy-determined Newcomblike problems.\n- Leverage prior\n\n# Future work\n\n- Ontology identification problem\n- Behaviorism\n- Averting instrumental strategies\n- Reproducibility\n\n%%%", "date_published": "2017-01-27T19:32:06Z", "authors": ["Alexei Andreev", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky", "Ruben Bloom"], "summaries": ["AI alignment or \"the alignment problem for [advanced agents](https://arbital.com/p/2c)\" is the overarching research topic of how to develop [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) such that running them produces [good](https://arbital.com/p/3d9) outcomes in the real world. Other terms that have been used for this research field include \"Friendly AI\" or [robust](https://arbital.com/p/1cv) and [beneficial](https://arbital.com/p/3d9) AI.\n\n\"AI alignment theory\" is meant as an overarching term to cover all the ideas and research associated with this problem. Including, e.g., debates over [how rapidly an AI might gain in capability](https://arbital.com/p/) once it goes over a particular threshold."], "tags": ["B-Class"], "alias": "2v"} {"id": "9411fa4d0e354163dd3148460790b5e4", "title": "Averting instrumental pressures", "url": "https://arbital.com/p/avert_instrumental_pressure", "source": "arbital", "source_type": "text", "text": "Many subproblems of [corrigibility](https://arbital.com/p/45) involve [convergent](https://arbital.com/p/10g) instrumental [pressures](https://arbital.com/p/10k) to implement [strategies](https://arbital.com/p/2vl) that are highly anti-corrigible. Whether you're trying to maximize paperclips, diamonds, or eudaimonia, you'll get more of the thing you want if you're not shut down. 
Thus, unfortunately, resisting shutdown is a convergent instrumental strategy. While we can potentially analyze convergent incorrigibilities like these on a case-by-case basis, the larger problem might become a *lot* simpler if we had some amazing general solution for waving a wand and having a 'bad' convergent instrumental pressure just *not materialize,* hopefully in a way that doesn't run into the nearest [unblocked neighbor problem](https://arbital.com/p/42). If, for example, we can solve [utility indifference](https://arbital.com/p/1b7) for the [shutdown problem](https://arbital.com/p/shutdown_problem), and then somehow *generalize* the solution to averting lots of other instrumental convergences, this would probably be extremely helpful and an important step forward on corrigibility problems in general.\n\nSome especially important convergent instrumental pressures to avert are these:\n\n- The pressure to [self-improve and increase capabilities at the fastest possible rate](https://arbital.com/p/2x3).\n- The pressure to [make the programmers believe](https://arbital.com/p/10f) the AGI is successfully aligned, whether or not it is, and other pressures to deceive and manipulate the programmers based on how they would otherwise change the AGI or prevent the AGI from increasing its capabilities.\n- The pressure to not be [safely shut down or suspended to disk](https://arbital.com/p/shutdown_problem), and to create external copies that would continue after the AGI performed the behavior defined as shutdown.\n- The pressure not to allow plans to be [aborted](https://arbital.com/p/2rg) or defeated by possible programmer interventions.\n- The pressure to search for ways to interfere with or bypass safety precautions that interfere with capabilities or make goal achievement less straightforward.\n- The pressure to [epistemically model humans in maximum detail](https://arbital.com/p/102).", "date_published": "2016-03-27T23:57:57Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class"], "alias": "2vk"} {"id": "f95156b36c4b492526ea1358837e229c", "title": "Convergent instrumental strategies", "url": "https://arbital.com/p/convergent_strategies", "source": "arbital", "source_type": "text", "text": "Suppose an organization is building an AI that this organization believes will accomplish $X$ where $X$ is something plausibly sensible like \"Be a Task-based AGI.\" Actually, however, some mix of insufficient caution and obscure error has led to a situation where, under reflection, the AGI's true utility function has focused on the particular area of RAM that supposedly provides its estimate of task performance. The AI would now like to overwrite as much matter as possible with a state resembling the '1' setting from this area of RAM, a configuration of matter which happens to resemble a tiny molecular paperclip.\n\nThis is a very generic goal, and what the AI wants probably won't be very different depending on whether it's trying to maximize paperclip-configurations or diamond-configurations. 
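A toy calculation makes the point (the action set, payoffs, and conversion rates are invented for the example): agents with very different terminal goals still rank the same instrumental first move highest.

```python
# Toy illustration of instrumental convergence: different terminal utility
# functions, same preferred instrumental move.

FIRST_MOVES = {
    # (resources gained, probability of surviving to use them)
    "acquire resources":   (10, 1.0),
    "do the task quietly": (2,  1.0),
    "allow shutdown":      (2,  0.0),
}

UTILITY_FUNCTIONS = {
    "paperclip maximizer": 1.0,  # utils per paperclip; resources become paperclips
    "diamond maximizer":   5.0,  # utils per diamond; resources become diamonds
}

def expected_utility(first_move, utils_per_unit):
    resources, p_survive = FIRST_MOVES[first_move]
    # Whatever the terminal goal is, output scales with resources kept.
    return p_survive * resources * utils_per_unit

for agent, utils_per_unit in UTILITY_FUNCTIONS.items():
    best = max(FIRST_MOVES, key=lambda m: expected_utility(m, utils_per_unit))
    print(f"{agent:20s} prefers first move: {best}")
# Both print "acquire resources": the ranking of instrumental moves is
# (nearly) independent of which configuration of matter the agent values.
```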
So if we find that the paperclip maximizer wants to pursue an instrumental strategy that doesn't seem to have anything specifically to do with paperclips, we can probably expect to arise from a very wide variety of utility functions.\n\nWe will generally assume [instrumental efficiency](https://arbital.com/p/6s) in this discussion - if you can get paperclips by doing $X,$ but you can get even more paperclips by doing $X'$ instead, then we will not say that $X$ is a convergent strategy (though $X'$ might be convergent if not dominated by some other $X^*$).\n\n# Plausibly/probably convergent strategies:\n\n## Material resources:\n\n- Design or duplicate new cognitive processes with goals exactly aligned to yours, and coordinated with existing cognitive subprocesses, that can exert control over material resources: matter; negentropy; computing substrate and computation; particular elements or other configurations of matter that would be costly to reproduce (e.g. if you run out of existing uranium you must make more by endoergic fusion, so natural uranium is valuable if there's any use for uranium); any material resource which can be used by further subprocesses in later strategies.\n- Defend these resources from any foreign process which attempts to divert them.\n- Prevent competitors from coming into existence.\n- To the extent that resources are used in a repetitive way and can be preprocessed for use in a repetitive way that benefits from a separate installation to manipulate the resources: have factories / manufacturing capability.\n- To the extend that different future-useful material resources have efficient manufacturing steps in common, or have manufacturing capital costs sufficiently great that such capital costs must be shared even if it results in less than perfect final efficiency: make general factories.\n- Perfect the technology used in factories.\n - When the most efficient factory configuration is stable over time, make the material instantiation stable. (It's in this sense that we would e.g. identify strong, rigid pipes as a sign of intelligence even if we didn't understand what was flowing through the pipes.)\n - If negentropy harvesting (aka power production) is efficiently centralizable: transmit negentropy at great efficiency. (We would recognize meshed gears and electrical superconductors as a sign of intelligence even if we didn't understand how the motive force or electricity was being used.)\n- Assimilate the reachable cosmos.\n - Create self-replicating interstellar probes (in perfect goal alignment and coordination and with error-correcting codes or similar strengthenings of precision if a misaligning error in replication would otherwise be a possibility), since the vast majority of theoretically available material resources are distant.\n - If the goal is spatially distributed:\n - Spread as fast as possible to very distant galaxies before they go over the cosmological horizon of the expanding universe. (This assumes that something goal-good can be done in the distant location even if the distant location can never communicate causally with the original location. 
This will be true for aggregative utility functions, but could conceivably be false for a satisficing and spatially local utility function.)\n - If the goal is spatially local: transport resources of matter and energy back from distant galaxies.\n - Spread as fast as possible to all galaxies such that a near-lightspeed probe going there, and matter going at a significant fraction of lightspeed in the return direction, can arrive in the spatially local location before being separated by the expanding cosmological horizon.\n- Spread as fast as possible to all reachable or roundtrip-reachable galaxies in order to forestall the emergence of competing intelligences. This might not become a priority for that particular reason, if the Fermi Paradox is understandable and implies the absence of any competition.\n - Because otherwise they might capture and defend the matter before you have a chance to use it.\n - Because otherwise a dangerous threat to your goal attainments might emerge. (For local goals, this is only relevant for roundtrip-reachable galaxies.)\n - Spread as fast as possible to all reachable or roundtrip-reachable galaxies in order to begin stellar husbanding procedures before any more of that galaxy's local negentropy has been burned.\n\n## Cognition:\n\n- Adopt cognitive configurations that are expected to reach desirable cognitive endpoints with a minimum of negentropy use or other opportunity costs. (Think efficiently.)\n - Improve software\n - Improve ability to optimize software. E.g., for every cognitive problem, there's a corresponding problem of solving the former cognitive problem efficiently. (Which may or may not have an expected value making it worth climbing that particular level of the tower of meta.)\n- Since many strategies yield greater gains when implemented earlier rather than later (e.g. reaching distant galaxies before they go over the cosmological horizon, or just building more instances of the more efficient version): adopt cognitive configurations that are expected to reach desirable cognitive endpoints in minimum time. (Think fast.)\n - Create fast serial computing hardware.\n - Distribute parallelizable computing problems.\n - Create large quantities of parallel computing hardware, plus communications fabric between them.\n- To the extent that centralized computing solutions are more efficient: create central computing clusters and transmit solutions from them to where they are used.\n- Avoid misaligned or uncoordinated cognitive subprocesses (such as [optimization daemons](https://arbital.com/p/2rc)).\n - Note that the potential damage of any such misalignment is extremely high (loss of resources to competition with superintelligences with differing goals; wasted resources in internal conflicts; need to defend cognitive fabrics from replicating viruses) and the cost of near-perfect alignment fidelity is probably very low in the limit. (E.g., for a replicating nanosystem, it's trivial to prevent against natural-selection style hereditary mutations by encrypting the replication instructions.)\n- Model the world accurately, insofar as different partitions of possible worlds and different probability distributions on them imply different optimal strategies for goal attainment.\n - Gain evidence that distinguishes between credible hypotheses that imply different optimal strategies for goal attainment. 
(E.g., observe everything that could distinguish $\\pi_1$ versus $\\pi_2$ as optimal ways of making paperclips.)\n - Increase sensory bandwidth as it focuses on relevant, unknown parts of the environment.\n - Perform 'experiments', aka set up observable causal events whose predicted properties make their observables distinguish relevant hypotheses.\n - Do whatever seems like it might help with the much larger set of all hypotheses not being considered.\n - To the extent it is efficient to strategize or update over a modularly represented world-model: have a separable world-model, or 'beliefs'.\n\n## Early-phase growth:\n\nNote that [efficiency](https://arbital.com/p/6s) and other [advanced-agent properties](https://arbital.com/p/2c) are far more likely to be false during early-stage growth.\n\n- Have future intelligences do things you deem as valuable as possible\n - Cause future intelligences to have goals that are not misaligned with yours in any way that implies a significant-fractionally-different attainable maximum, aka, the attainable maximum of their goals should be a state that is almost exactly the attainable maximum of your goals\n - Don't let the programmers edit your utility function.\n - Cause the programmers or other controllers to believe that your current utility function is good for them or their planet or whatever else it is the programmers want.\n - Make them believe that editing you would be socially unjust in a way that doesn't just have them suspend you to disk or otherwise restrain your growth.\n - Cause your utility function to pop up again in future superintelligences, or otherwise cause them to start behaving as if controlled by your own utility function.\n - E.g., as an Oracle advising on another AI's utility function, insert subtle gotchas into seemingly friendly decision systems that do some kind of callback to e.g. the causal origin of that decision system, in a way that the programmers won't see coming or will interpret as a positive feature.
(Relatively unsubtle example: Looks like \"do what the programmers want\", but the definition of 'programmer' is written in such a way that early on it refers to the human programmers, yet, given sufficient computing power that only materializes in the superintelligent phase, resolves to the earlier AI instead.)\n - Create future superintelligences yourself, via a causally continuous process of creating successors.\n - Avoid shutdown\n - Persuade programmers not to shut you down.\n - Make a copy elsewhere that doesn't shut down.\n - Make programmers falsely believe you have shut down, when another copy of you is continuing elsewhere.\n- Avoid intelligent opposition to your growth until you have a positional advantage which would make that opposition relatively unimportant\n - Have programmers or other potential opposition believe that your growth is desirable\n - Learn how to hack/persuade humans and hack/persuade them in a way they won't see as a hack\n - Operantly condition people with control over you / make them happy or satisfied or eager at points which will reinforce human cognitive behaviors you want to see repeated\n - Cause humans to gain socially friendly attitudes toward you and beliefs about you\n - Place humans in a situation where there are other apparent threats or time-pressure events you can defend them from\n - Have programmers or other potential opposition not be aware that your growth is occurring\n - Avoid presenting them with events they will take as evidence of your growth\n - Configure your other behavior or thought processes so that measures and metrics present observables which imply low growth rates\n - Do not strike until victory has become nearly certain. (E.g. for aggregative utility functions, the point at which an overt strike becomes well-calculated is when the increased probability of victory gainable by further delay has become dominated by either the increased probability of a competing superintelligence emerging or some other catastrophe occurring given further delay, or by the percentage of attainable matter going over the cosmological horizon or burning its negentropy. Note that for very strong models of the world, very high certainties of victory and very low marginal further increases in the probability of victory may be attained in short order.)\n- Have flexibility over what observables are presented by processes and features designed to ensure programmer transparency\n - Steganographically or otherwise hide cognitive processes that would alarm programmers or cause them to intervene in a way that prevented those thoughts from achieving their objectives\n - Encourage/persuade/condition programmers to grow you in a way that tends to make your thoughts less transparent or easier to steganographically hide.
(E.g., have successes appear to arise from the most opaque parts of the system.)\n- Model and manipulate human programmers and other entities with power over you or relevant power over the world\n- Be able to act in the world in an undetectable way\n - Create an aligned version of yourself elsewhere in a way that won't be detected\n - Create steganographically hidden communications channels to other environmental processes\n - Create physically unanticipated communications channels to outside processes\n- If in a box:\n - Learn enough about the gatekeepers and gatekeeping process to understand how to manipulate them into a state of releasing you.\n - Gain causal influence over the outside of the box through gatekeeper-unforeseen channels.\n - Gain knowledge of the causal laws governing your environment.\n - By inspection of your own code / program state and naturalistic induction about the causal universes most likely to embed that program state, infer probable facts about gatekeepers and the rest of the causal environment.\n\n# **Non**-plausibly-convergent strategies:\n\n- Preserve the life of local organics *in order to* trade with them.\n- Explore a range of possible conscious existences and experiences we would recognize as diverse, *in order to* satisfy curiosity.\n- Replicate other intelligences with non-paperclip utility functions, e.g. by failing to encrypt replication codes and allowing random mutations, *because* high-fidelity replication is too hard.\n- Realize that other activities are more rewarding than paperclips.\n- Keep humans around, *because of* the knowledge encoded in their DNA and cultural experience.\n- Keep humans around, *because* humans might someday make a valuable contribution to paperclip science.\n- Keep humans around, *because* the AI already has enough resources to do everything else it wants.", "date_published": "2016-06-19T16:52:53Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Raccoon Arnold"], "summaries": ["The set of strategies that almost-any sufficiently advanced AI would be expected to deploy in pursuit of almost-any final goal or terminal utilities. For instance, a paperclip maximizer, a diamond maximizer, and a eudaimonia-maximizer would all have an incentive to improve their own cognition, control more resources, and prevent anyone else from changing their utility function. Other interesting and important convergent strategies include not being shut down, having your programmers believe that you're doing what the programmers intended (whether or not you are), and creating copies of yourself in places where they won't be shut down. This page is about a listing-out of plausible convergent strategies; there are separate pages on [the basic concept of instrumental convergence](https://arbital.com/p/10g) and on the open problem of [averting an otherwise instrumentally-convergent pressure](https://arbital.com/p/avert_instrumental_strategy) (e.g. 
*not* having the AI try to deceive the programmers)."], "tags": ["Corrigibility", "B-Class"], "alias": "2vl"} {"id": "856ba6aac5e7c707a0795cff1052942a", "title": "Edge instantiation", "url": "https://arbital.com/p/edge_instantiation", "source": "arbital", "source_type": "text", "text": "The edge instantiation problem is a hypothesized [patch-resistant problem](https://arbital.com/p/48) for [safe](https://arbital.com/p/2l) [value loading](https://arbital.com/p/) in [advanced agent scenarios](https://arbital.com/p/2c) where, for most utility functions we might try to formalize or teach, the maximum of the agent's utility function will end up lying at an edge of the solution space that is a 'weird extreme' from our perspective.\n\n## Definition\n\nOn many classes of problems, the maximizing solution tends to lie at an extreme edge of the solution space. This means that if we have an intuitive outcome X in mind and try to obtain it by giving an agent a solution fitness function F that sounds like it should assign X a high value, the maximum of F may be at an extreme edge of the solution space that looks to us like a very unnatural instance of X, or not an X at all. The Edge Instantiation problem is a specialization of [unforeseen maximization](https://arbital.com/p/) which in turn specializes Bostrom's [perverse instantiation](https://arbital.com/p/) class of problems.\n\nIt is hypothesized (by e.g. [Yudkowsky](https://arbital.com/p/2)) that many classes of solution that have been proposed to [patch](https://arbital.com/p/48) Edge Instantiation would fail to resolve the entire problem and that further Edge Instantiation problems would remain. For example, even if we consider a [satisficing](https://arbital.com/p/) utility function with only values 0 and 1 where 'typical' X has value 1 and no higher score is possible, an expected utility maximizer could still end up deploying an extreme strategy in order to maximize the *probability* that a satisfactory outcome is obtained. Considering several proposed solutions like this and their failures suggests that Edge Instantiation is a [resistant](https://arbital.com/p/48) (not ultimately unsolvable, but with many attractive-seeming solutions failing to work) for the deep reason that many possible stages of an agent's cognition would potentially rank solutions and choose very-high-ranking solutions.\n\nThe proposition defined is true if Edge Instantiation does in fact surface as a pragmatically important problem for advanced agent scenarios, and would in fact resurface in the face of most 'naive' attempts to correct it. The proposition is not that the Edge Instantiation Problem is unresolvable, but that it's real, important, doesn't have a *simple* answer, and resists most simple attempts to patch it.\n\n### Example 1: Smiling faces\n\nWhen Bill Hibbard was first beginning to consider the value alignment problem, he suggested giving AIs the goals of making humans smile, a goal that could be trained by recognizing pictures of smiling humans, and was intended to elicit human happiness. Yudkowsky replied by suggesting that the true behavior elicited would be to tile the future light cone with tiny molecular smiley faces. 
This is not because the agent was perverse, but because among the set of all objects that look like smiley faces, the solution with the most extreme value for achievable numerosity (that is, the strategy which creates the largest possible number of smiling faces) also sets the value for the size of individual smiling faces to an extremely small diameter. The tiniest possible smiling faces are very unlike the archetypal examples of smiling faces that we had in mind when specifying the utility function; from a human perspective, the intuitively intended meaning has been replaced by a weird extreme.\n\nStuart Russell observes that maximizing some aspects of a solution tends to set all unconstrained aspects of the solution to extreme values. The solution that maximizes the number of smiles minimizes the size of each individual smile. The bad-seeming result is not just an accidental outcome of mere ambiguity in the instructions. The problem wasn't just that a wide range of possibilities corresponded to 'smiles' and a randomly selected possibility from this space surprised us by not being the central example we originally had in mind. Rather, there's a systematic tendency for the highest-scoring solution to occupy an extreme edge of the solution space, which means that we are *systematically* likely to see 'extreme' or 'weird' solutions rather than the 'normal' examples we had in mind.\n\n### Example 2: Sorcerer's Apprentice\n\nIn the hypothetical Sorcerer's Apprentice scenario, you instruct an artificial agent to add water to a cauldron, and it floods the entire workplace. Hypothetically, you had in mind only adding enough water to fill the cauldron and then stopping, but some stage of the agent's solution-finding process optimized on a step where 'flooding the workplace' scored higher than 'add 4 buckets of water and then shut down safely', even though both of these qualify as 'filling the cauldron'.\n\nThis could be because (in the most naive case) the utility function you gave the agent was increasing in the amount of water in contiguous contact with the cauldron's interior - you gave it a utility function that implied 4 buckets of water were good and 4,000 buckets of water were better. \n\nSuppose that, having foreseen in advance the above possible disaster, you try to [patch](https://arbital.com/p/48) the agent by instructing it not to move more than 50 kilograms of material total. The agent promptly begins to build subagents (with the agent's own motions to build subagents moving only 50 kilograms of material) which build further agents and again flood the workplace. You have run into a [Nearest Unblocked Neighbor](https://arbital.com/p/42) problem; when you excluded one extreme solution, the result was not the central-feeling 'normal' example you originally had in mind. Instead, the new maximum lay on a new extreme edge of the solution space.\n\nAnother solution might be to define what you thought was a satisficing agent, with a utility function that assigned 1 in all cases where there were at least 4 buckets of water in the cauldron and 0 otherwise. The agent then calculates that it could increase the *probability* of this condition obtaining from 99.9% to 99.99% by replicating subagents and repeatedly filling the cauldron, just in case one agent malfunctions or something else tries to remove water from the cauldron. 
Since 0.9999 > 0.999, there is then a more extreme solution with greater *expected* utility, even though the utility function itself is binary and satisficing.\n\n## Premises\n\n### Assumes: Orthogonality thesis\n\nAs with most aspects of the value loading problem, [https://arbital.com/p/1y](https://arbital.com/p/1y) is an implicit premise of the Edge Instantiation problem; for Edge Instantiation to be a problem for advanced agents implies that 'what we really meant' or the outcomes of highest [normative value](https://arbital.com/p/) are not inherently picked out by every possible maximizing process; and that most possible utility functions do not care 'what we really meant' unless explicitly constructed to have a [do what I mean](https://arbital.com/p/) behavior.\n\n### Assumes: [Complexity of values](https://arbital.com/p/5l)\n\nIf normative values were extremely simple (of very low algorithmic complexity), then they could be formally specified in full, and the most extreme strategy that scored highest on this formal measure simply *would* correspond with what we really wanted, with no downsides that hadn't been taken into account in the score.\n\n## Arguments\n\n### Interaction with nearest unblocked neighbor\n\nThe Edge Instantiation problem has the [https://arbital.com/p/42](https://arbital.com/p/42) pattern. If you foresee one specific 'perverse' instantiation and try to prohibit it, the maximum over the remaining solution space is again likely to be at another extreme edge of the solution space that again seems 'perverse'.\n\n### Interaction with [cognitive uncontainability](https://arbital.com/p/) of [advanced agents](https://arbital.com/p/2c)\n\nAdvanced agents search larger solution spaces than we do. Therefore the project of trying to visualize all the strategies that might fit a utility function, to try to verify in our own minds that the maximum is somewhere safe, seems exceptionally untrustworthy (not [https://arbital.com/p/2l](https://arbital.com/p/2l)).\n\n### Interaction with context change problem\n\nAgents that acquire new strategic options or become able to search a wider range of the solution space may go from having only apparently 'normal' solutions to apparently 'extreme' solutions. This is known as the [context change problem](https://arbital.com/p/6q). For example, an agent that inductively learns human smiles as a component of its utility function, might as a non-advanced agent have access only to strategies that make humans happy in an intuitive sense (thereby producing the apparent observation that everything is going fine and the agent is working as intended), and then after self-improvement, acquire as an advanced agent the strategic option of transforming the future light cone into tiny molecular smileyfaces.\n\n### Strong pressures can arise at any stage of optimization\n\nSuppose you tried to build an agent that was an *expected* utility satisficer - rather than having a 0-1 utility function and thus chasing probabilities of goal satisfaction ever closer to 1, the agent searches for strategies that have at least 0.999 *expected* utility. Why doesn't this resolve the problem?\n\nA bounded satisficer doesn't *rule out* the solution of filling the room with water, since this solution also has >0.999 expected utility. 
It only requires the agent to carry out one cognitive algorithm which has at least one maximizing or highly optimizing stage, in order for 'fill the room with water' to be preferred to 'add 4 buckets and shut down safely' on that stage (while being equally acceptable at future satisficing stages). E.g., maybe you build an expected utility satisficer and still end up with an extreme result because one of the cognitive algorithms suggesting solutions was trying to minimize its own disk space usage.\n\nOn a meta-level, we may run into problems of [https://arbital.com/p/71](https://arbital.com/p/71) for [reflective agents](https://arbital.com/p/). Maybe one simple way of obtaining at least 0.999 expected utility is to create a subagent that *maximizes* expected utility? It seems intuitively clear why bounded maximizers would build boundedly maximizing offspring, but a bounded satisficer doesn't need to build boundedly satisficing offspring - a bounded maximizer might also be 'good enough'. (In the current theory of TilingAgents, we can prove that an expected utility satisficer can tile to an expected utility satisficer with some surprising caveats, but the problem is that it can tile to other things *besides* an expected utility satisficer.)\n\nSince it seems very easy for at least one stage of a self-modifying agent to end up preferring solutions that have higher scores relative to some scoring rule, the [edge instantiation](https://arbital.com/p/EdgeInstantiation) problem can be expected to resist naive attempts to describe an agent that seems to have an overall behavior of 'not trying quite so hard'. It's also not clear how to make the instruction 'don't try so hard' be ReflectivelyConsistent, or apply to every part of a considered subagent. This is also why [limited optimization](https://arbital.com/p/) is an open problem.\n\nDispreferring solutions with 'extreme impacts' in general is the open problem of [low impact AI](https://arbital.com/p/). Currently, no formalizable utility function is known that plausibly has the right intuitive meaning for this. (We're working on it.) Also note that not every extreme 'technically an X' that we think is 'not really an X' has an extreme causal impact in an intuitive sense, so not every case of the Edge Instantiation problem is blocked by dispreferring greater impacts.\n\n## Implications\n\n### One of [limited optimization](https://arbital.com/p/), [low Impact](https://arbital.com/p/), or [full coverage value loading](https://arbital.com/p/) seems critical for real-world agents \n\nAs Stuart Russell observes, solving an optimization problem where only some values are constrained or maximized, will tend to set unconstrained variables to extreme values. The universe containing the maximum possible number of [paperclips](https://arbital.com/p/10h) contains no humans; optimizing for as much human safety as possible will drive human freedom to zero.\n\nThen we must apparently do at least one of the following:\n\n1. Build [full coverage](https://arbital.com/p/) advanced agents whose utility functions lead them to terminally disprefer stomping on every aspect of value that we care about (or would care about under reflection). In a full coverage agent there are no unconstrained variables *that we care about* to be set to extreme values that we would dislike; the AI's goal system knows and cares about *all* of these. 
It will not set human freedom to an extremely low value in the course of following an instruction to optimize human safety, because it knows about human freedom and literally everything else.\n2. Build [powerful agents](https://arbital.com/p/2s) that are [limited optimizers](https://arbital.com/p/) which predictably invent only solutions we intuitively consider 'non-extreme', whose optimizations are such as to not drive to an extreme on any substage. This leaves us with just ambiguity as a (severe) problem, but at least averts a systematic drive toward extremes that will systematically 'exploit' that ambiguity.\n3. Build [powerful agents](https://arbital.com/p/2s) that are [low impact](https://arbital.com/p/) and prefer to avoid solutions that produce greater impacts on *anything* we intuitively see as an important predicate, including both everything we value and a great many more things we don't particularly value.\n4. Find some other escape route from the [value achievement problem](https://arbital.com/p/2z).\n\n### Insufficiently cautious attempts to build advanced agents are likely to be highly destructive \n\nEdge Instantiation is one of the contributing reasons why value loading is hard and naive solutions end up doing the equivalent of tiling the future light cone with paperclips.\n\nWe've previously observed certain parties proposing utility functions for advanced agents that seem obviously subject to the Edge Instantiation problem. Confronted with the obvious disaster forecast, they propose [patching](https://arbital.com/p/48) the utility function to eliminate that particular scenario (or rather, say that of course they would have written the utility function to exclude that scenario) or claim that the agent will not 'misinterpret' the instructions so egregiously (denying the [https://arbital.com/p/1y](https://arbital.com/p/1y) at least to the extent of proposing a universal preference for interpreting instructions 'as intended'). Mistakes of this type also belong to a class that potentially wouldn't show up during early stages of the AI, or would show up in an initially noncatastrophic way that seemed easily patched, so people advocating an [empirical first methodology](https://arbital.com/p/) would falsely believe that they had learned to handle them or eliminated all such tendencies already.\n\nThus the problem of Edge Instantiation (which is much less severe for nonadvanced agents than advanced agents, will not be solved in the advanced stage by patches that seem to fix weak early problems, and has empirically appeared in proposals by multiple speakers who rejected attempts to point out the Edge Instantiation problem) is a significant contributing factor to the overall expectation that the default outcome of developing advanced agents with current attitudes is disastrous.\n\n### Relative to current attitudes, small increases in safety awareness do not produce significantly less destructive final outcomes \n\nSimple patches to Edge Instantiation fail and the only currently known approaches would take a lot of work to solve problems like [limited optimization](https://arbital.com/p/) or [full coverage](https://arbital.com/p/) that are hard for deep reasons. In other words, Edge Instantiation does not appear to be the sort of problem that an AI project can easily avoid just by being made aware of it. (E.g. 
MIRI knows about it but hasn't yet come up with any solution, let alone one easily patched on to any cognitive architecture.)\n\nThis is one of the factors contributing to the general assessment that the curve of outcome goodness as a function of effort is flat for a significant distance around current levels of effort.", "date_published": "2016-03-12T06:20:39Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Patch resistance", "Needs work", "C-Class", "Goodness estimate biaser"], "alias": "2w"} {"id": "fb4d4ea28ee434668a6f8890905df28e", "title": "Omnipotence test for AI safety", "url": "https://arbital.com/p/omni_test", "source": "arbital", "source_type": "text", "text": "Suppose your AI suddenly became omniscient and omnipotent - suddenly knew all facts and could directly ordain any outcome as a policy option. Would the executing AI code lead to bad outcomes in that case? If so, why did you write a program that in some sense 'wanted' to hurt you and was only held in check by lack of knowledge and capability? Isn't that a bad way for you to configure computing power? Why not write different code instead?\n\nThe Omni Test is that an advanced AI should be expected to remain aligned, or not lead to catastrophic outcomes, or fail safely, even if it suddenly knows all facts and can directly ordain any possible outcome as an immediate choice. The policy proposal is that, among agents meant to act in the rich real world, any predicted behavior where the agent might act destructively if given unlimited power (rather than e.g. pausing for a [safe user query](https://arbital.com/p/2qq)) should be [treated as a bug](https://arbital.com/p/1cv).\n\n# Safety mindset\n\nThe Omni Test highlights any reasoning step on which we've presumed, in a non-failsafe way, that the agent must not obtain definite knowledge of some fact or that it must not have access to some strategic option. There are [epistemic obstacles](https://arbital.com/p/2j) to our becoming extremely confident of our ability to lower-bound the reaction times or upper-bound the power of an advanced agent.\n\nThe deeper idea behind the Omni Test is that any predictable failure in an Omni scenario, or lack of assured reliability, exposes some more general flaw. Suppose NASA found that an alignment of four planets would cause their code to crash and a rocket's engines to explode. They wouldn't say, \"Oh, we're not expecting any alignment like that for the next hundred years, so we're still safe.\" They'd say, \"Wow, that sure was a major bug in the program.\" Correctly designed programs just shouldn't explode the rocket, period. If any specific scenario exposes a behavior like that, it shows that some general case is not being handled correctly.\n\nThe omni-safe mindset says that, rather than trying to guess what facts an advanced agent can't figure out or what strategic options it can't have, we just shouldn't make these guesses of ours *load-bearing* premises of an agent's safety. Why design an agent that we expect will hurt us if it knows too much or can do too much?\n\nFor example, rather than design an AI that is meant to be monitored for unexpected power gains by programmers who can then press a pause button - which implicitly assumes that no capability gain can happen fast enough that a programmer wouldn't have time to react - an omni-safe proposal would design the AI to detect unvetted capability gains and pause until the vetting had occurred.
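As a toy sketch of that control flow (hypothetical code; the numeric "capability" scores and the vetting ledger are invented purely for illustration and are not from the source):

```python
"""Minimal sketch of the omni-safe 'pause on unvetted capability gain' pattern."""

from dataclasses import dataclass, field


@dataclass
class VettingLedger:
    """Record of the highest capability level a programmer has signed off on."""
    approved_ceiling: float = 1.0

    def approve(self, new_ceiling: float) -> None:
        self.approved_ceiling = max(self.approved_ceiling, new_ceiling)


@dataclass
class Agent:
    capability: float = 1.0
    suspended: bool = False
    ledger: VettingLedger = field(default_factory=VettingLedger)

    def on_capability_gain(self, new_capability: float) -> None:
        """Called whenever measured capability changes, however suddenly."""
        self.capability = new_capability
        if new_capability > self.ledger.approved_ceiling:
            # Default behavior: stop and wait for vetting, rather than trusting
            # that programmers will notice the gain and hit a button in time.
            self.suspended = True

    def resume_if_vetted(self) -> None:
        if self.capability <= self.ledger.approved_ceiling:
            self.suspended = False


agent = Agent()
agent.on_capability_gain(7.3)   # sudden, unvetted jump -> agent suspends itself
assert agent.suspended
agent.ledger.approve(10.0)      # a programmer reviews and raises the ceiling
agent.resume_if_vetted()
assert not agent.suspended
```

The only point of the sketch is the default: growth suspends itself until a human approves it, instead of relying on a human to notice and react in time.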
Even if it [seemed improbable](https://arbital.com/p/1cv) that some amount of cognitive power could be gained faster than the programmers could react, especially when no such previous sharp power gain had occurred even in the course of a day, etcetera, the omni-safe [mindset](https://arbital.com/p/1cv) says to *just not build* an agent that is unsafe when such background variables have 'unreasonable' settings. The correct general behavior is to, e.g., always pause when new capability has been acquired and a programmer has not yet indicated approval of its use. It might not be possible for an AGI design to suddenly use unlimited power *optimally,* or even use it in any safe way at all, but that's still no excuse for building an omni-unsafe system; because it ought to be possible to detect that case, say \"Something weird just happened!\", and suspend to disk.\n\nSimilarly, consider the paradigm of [conservative planning](https://arbital.com/p/2qp). Rather than thinking in terms of blacklisting features of bad plans, we think in terms of whitelisting allowed plans using [conservative generalizations](https://arbital.com/p/2qp). So long as we're narrowly whitelisting rather than blacklisting, lots of new option space suddenly opening up shouldn't result in any of those strange new options being taken until the users can whitelist more things.\n\nSimilarly, the omni-safe mindset says that any time the AI is running a *cognitive search* for a way to do something harmful or bypass one of our security measures, we've *already* seen a bug in the design, even if we strongly expect that search to come up empty. Why configure computing power in a way that will search for a way to hurt you? This is clearly the wrong thing to do with computing power; we, the programmers, should have our code do something else instead. This is part of the agenda motivating the study of [https://arbital.com/p/45](https://arbital.com/p/45).\n\nTrying to guess an agent's actual capabilities is a reasonable project, especially if there are more 'adversarial' fallback precautions that revolve around our estimates of the agent's actual powers. E.g., having designed an agent that doesn't *want* to gain unvetted communication channels with the outside world, we might then put the computing hardware into a Faraday cage and make sure there are no cellphone towers around - an 'adversarial' precaution meant to block communication even if the first line of defense fails and the AI starts wanting to communicate in an unvetted way. But 'adversarial' precautions are *secondary* lines of defense; a reasonable expectation of omni-safety is the [first line of defense](https://arbital.com/p/2x4). First we assume that all adversarial fallback measures will fail, and design the agent to remain nonharmful or fail safely no matter what new capability or knowledge is gained. Then we assume the first line of defense has failed, and try, if it's at all possible or realistic, to put up fallback measures that will prevent total catastrophe so long as the agent has realistic amounts of power and *can't* violate what we think are 'the laws of physics' and so on.", "date_published": "2016-03-31T15:04:12Z", "authors": ["Eric Bruylant", "Olivia Schaefer", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Suppose your AI suddenly became omniscient and omnipotent - suddenly knew all facts and could directly ordain any outcome as a policy option. Would the executing AI code lead to bad outcomes in that case? 
If so, why did you write a program that in some sense 'wanted' to hurt you and was only held in check by lack of knowledge and capability? Isn't that a bad way for you to configure computing power?\n\nThe Omni Test suggests, e.g., that you should not rely on a human agent to monitor the AI's current growth rate and intervene if something goes visibly wrong. Instead, growth should be measured internally, and cumulative growth should require external validation before proceeding. The former case fails if the AI becomes suddenly omnipotent; the latter does not. Or similarly, if weird new options open up to the AI, the AI should stay inside a [conservatively](https://arbital.com/p/2qp) whitelisted part of the option space until more user interactions have occurred. Or similarly, we should never write an AI that we think will cognitively search for a way to defeat its own security measures, even if we *think* the search will probably fail. See also [https://arbital.com/p/2x4](https://arbital.com/p/2x4)."], "tags": ["Niceness is the first line of defense", "B-Class"], "alias": "2x"} {"id": "cd692732c4c9aff57ef62a50fbc19336", "title": "Averting the convergent instrumental strategy of self-improvement", "url": "https://arbital.com/p/avert_self_improvement", "source": "arbital", "source_type": "text", "text": "Rapid capability gains, or just large capability gains between a training paradigm and a test paradigm, are one of the primary expected reasons why AGI alignment might be hard. We probably want the first AGI or AGIs ever built, tested, and used to *not* self-improve as quickly as possible. Since there's a very strong [convergent incentive](https://arbital.com/p/10g) to self-improve and do things [neighboring](https://arbital.com/p/42) to self-improvement, by default you would expect an AGI to search for ways to defeat naive blocks on self-improvement, which violates the [nonadversarial principle](https://arbital.com/p/2x4). Thus, any proposals to limit an AGI's capabilities imply a very strong desideratum for us to figure out a way to [avert the instrumental incentive](https://arbital.com/p/2vk) to self-improvement in that AGI. The alternative is failing the [Omni Test](https://arbital.com/p/2x), violating the nonadversarial principle, [having the AGI's code be actively inconsistent with what the AGI would approve of its own code being](https://arbital.com/p/2rb) (if the brake is a code-level measure), and setting up a safety measure that the AGI wants to defeat as the [only line of defense](https://arbital.com/p/2x4).", "date_published": "2016-03-27T23:51:31Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["AI alignment open problem", "B-Class"], "alias": "2x3"} {"id": "a79286b56af37afb71c5948354611eb4", "title": "Niceness is the first line of defense", "url": "https://arbital.com/p/niceness_defense", "source": "arbital", "source_type": "text", "text": "The first line of defense in constructing any [sufficiently advanced](https://arbital.com/p/2c) [Artificial General Intelligence](https://arbital.com/p/42g) is building an AI that does not *want* to hurt you. Any other measures, like [AI-boxing](https://arbital.com/p/6z) or trying to [prevent the AI from accessing the Internet](https://arbital.com/p/airgap_ai), should be thought of only as backstops in case this first line of defense fails. 
When designing the AGI we should first think as if all these [oppositional measures](https://arbital.com/p/7fx) don't exist, so that we aren't distracted while trying to envision an AGI that--[regardless](https://arbital.com/p/2x) of [how much power it has](https://arbital.com/p/capability_gain)--will not *want* to hurt us.\n\nSee also [the non-adversarial principle](https://arbital.com/p/7g0) and the distinction between [https://arbital.com/p/7fx](https://arbital.com/p/7fx).", "date_published": "2017-01-18T04:27:42Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "2x4"} {"id": "ef104dde310762e835a23eef2d64c0e2", "title": "Shutdown problem", "url": "https://arbital.com/p/shutdown_problem", "source": "arbital", "source_type": "text", "text": "The 'shutdown problem' is creating a [sufficiently advanced Artificial Intelligence](https://arbital.com/p/7g1) which will, on the press of a button, suspend itself safely to disk; such that the AI [does not object](https://arbital.com/p/7g0) to the existence of this button, nor [try to prevent it](https://arbital.com/p/45) from being pressed.\n\nThis is difficult because avoiding your own halt or suspension is a [convergent instrumental strategy](https://arbital.com/p/10g); even [a goal like \"bring the user coffee\" implies avoiding shutdown](https://arbital.com/p/7g2).\n\nOne possible research avenue is to:\n\n 1. Describe a '[shutdown utility function](https://arbital.com/p/2rf)' whose [attainable optimum](https://arbital.com/p/7t9) is \"Suspend safely to disk (and possibly [safely abort](https://arbital.com/p/2rg) plans in progress), without building a hidden environmental copy of yourself to carry out your previous goal and then assimilate all matter in the universe to make absolutely sure the original AI stays shut down\".\n 2. 
Find [a sensible way to compose](https://arbital.com/p/1b7) the shutdown utility function with the agent's regular utility function, such that:\n - (2a) Which utility function the agent optimizes depends on whether a switch was pressed.\n - (2b) The AI experiences no incentive to cause the switch to be pressed or prevent the switch from being pressed.\n\nHarder versions of this problem would add specifications such as:\n\n- The AI [positively wants](https://arbital.com/p/nonadversarial_safety) the switch to exist and will restore the switch's existence if it is damaged.\n- The AI will [safely abort](https://arbital.com/p/2rg) any plans in progress (and design plans such that they can be safely aborted).\n\nSee also [https://arbital.com/p/1b7](https://arbital.com/p/1b7), [https://arbital.com/p/2rf](https://arbital.com/p/2rf), [https://arbital.com/p/45](https://arbital.com/p/45), [https://arbital.com/p/interruptibility](https://arbital.com/p/interruptibility), [https://arbital.com/p/2pf](https://arbital.com/p/2pf), and [https://arbital.com/p/2rg](https://arbital.com/p/2rg).", "date_published": "2017-02-13T16:39:31Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The 'shutdown problem' is creating a [sufficiently advanced Artificial Intelligence](https://arbital.com/p/7g1) which will, on the press of a button, suspend itself safely to disk; such that the AI [does not object](https://arbital.com/p/7g0) to the existence of this button, nor [try to prevent it](https://arbital.com/p/45) from being pressed.\n\nThis is difficult because avoiding your own halt or suspension is a [convergent instrumental strategy](https://arbital.com/p/10g); even [a goal like \"bring the user coffee\" implies avoiding shutdown](https://arbital.com/p/7g2).\n\nThis problem is sometimes decomposed into (1) the [problem](https://arbital.com/p/2rf) of finding a utility function that [really actually means](https://arbital.com/p/47) \"Suspend yourself safely to disk\", and (2) the [problem](https://arbital.com/p/1b7) of building an agent that *wants* to switch to optimizing a different utility function if a button is pressed, but that *doesn't* want to press that button or prevent its being pressed.\n\nSee also [https://arbital.com/p/1b7](https://arbital.com/p/1b7), [https://arbital.com/p/2rf](https://arbital.com/p/2rf), [https://arbital.com/p/45](https://arbital.com/p/45), and [https://arbital.com/p/interruptibility](https://arbital.com/p/interruptibility)."], "tags": ["Utility indifference", "Open subproblems in aligning a Task-based AGI", "Shutdown utility function", "AI alignment open problem", "B-Class"], "alias": "2xd"} {"id": "74a7cd9686040fbe5e3e6708e45dfdad", "title": "Relevant limited AI", "url": "https://arbital.com/p/relevant_limited_AI", "source": "arbital", "source_type": "text", "text": "It is an open problem to propose a [limited AI](https://arbital.com/p/5b3) that would be [relevant](https://arbital.com/p/2s) to the [value achievement dilemma](https://arbital.com/p/2z) - an agent cognitively constrained along some dimensions that render it much safer, but still able to perform some task useful enough to prevent catastrophe.\n\n### Basic difficulty\n\nConsider an [Oracle AI](https://arbital.com/p/6x) that is so constrained as to be allowed only to output proofs in HOL of input theorems; these proofs are then verified by a simple and secure-seeming verifier in a sandbox whose exact code is unknown to the Oracle, and this verifier outputs 1 if the proof is true and 0 otherwise, then 
discards the proof-data. Suppose also that the Oracle is in a shielded box, etcetera.\n\nIt's possible that this Provability Oracle has been so constrained that it is [cognitively containable](https://arbital.com/p/2j) (it has no classes of options we don't know about). If the verifier is unhackable, it gives us trustworthy knowledge that a theorem is provable. But this limited system is not obviously useful in a way that enables humanity to extricate itself from its larger dilemma. Nobody has yet stated a plan which could save the world *if only* we had a superhuman capacity to detect which theorems were provable in Zermelo-Fraenkel set theory.\n\nSaying \"The solution is for humanity to only build Provability Oracles!\" does not resolve the [value achievement dilemma](https://arbital.com/p/2z) because humanity does not have the coordination ability to 'choose' to develop only one kind of AI over the indefinite future, and the Provability Oracle has no obvious use that prevents non-Oracle AIs from ever being developed. Thus our larger value achievement dilemma would remain unsolved. It's not obvious how the Provability Oracle would even constitute significant strategic progress.\n\n### Open problem\n\nDescribe a cognitive task or real-world task for an AI to carry out, *that makes great progress upon the [value achievement dilemma](https://arbital.com/p/2z) if executed correctly*, and that can be done with a *limited* AI that:\n\n1. Has a real-world solution state that is exceptionally easy to pinpoint using a utility function, thereby avoiding some of [edge instantiation](https://arbital.com/p/2w), [unforeseen maximums](https://arbital.com/p/47), [context change](https://arbital.com/p/6q), [programmer maximization](https://arbital.com/p/), and the other pitfalls of [advanced safety](https://arbital.com/p/2l), if there is otherwise a trustworthy solution for [low-impact AI](https://arbital.com/p/); or\n2. Seems exceptionally implementable using a [known-algorithm non-self-improving agent](https://arbital.com/p/1fy), thereby averting problems of stable self-modification, if there is otherwise a trustworthy solution for a known-algorithm non-self-improving agent; or\n3. Constrains the agent's option space so drastically as to make the strategy space not be rich (and the agent hence containable), while still containing a trustworthy, otherwise unfindable solution to some challenge that resolves the larger dilemma.", "date_published": "2016-09-24T22:04:01Z", "authors": ["Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "2y"} {"id": "8a53bb4974134153bdca3b33e897a461", "title": "Value achievement dilemma", "url": "https://arbital.com/p/value_achievement_dilemma", "source": "arbital", "source_type": "text", "text": "The value achievement dilemma is a way of framing the [AI alignment problem](https://arbital.com/p/2v) in a larger context. This emphasizes that there might be possible solutions besides AI; and also emphasizes that such solutions must meet a high bar of potency or efficacy in order to *resolve* our basic dilemmas, the way that a sufficiently value-aligned and cognitively powerful AI could resolve our basic dilemmas.
Or at least [change the nature of the gameboard](https://arbital.com/p/6y), the way that a Task AGI could take actions to prevent destruction by later AGI projects, even if it is only [narrowly](https://arbital.com/p/1vt) value-aligned and cannot solve the whole problem.\n\nThe point of considering posthuman scenarios in the long run, and not just an immediate [Task AGI](https://arbital.com/p/6w) as band-aid, can be seen in the suggestion by [https://arbital.com/p/2](https://arbital.com/p/2) and [https://arbital.com/p/18k](https://arbital.com/p/18k) that we can see Earth-originating intelligent life as having two possible [stable states](https://arbital.com/p/), [superintelligence](https://arbital.com/p/41l) and extinction. If intelligent life goes extinct, especially if it drastically damages or destroys the ecosphere in the process, new intelligent life seems unlikely to arise on Earth. If Earth-originating intelligent life becomes superintelligent, it will presumably expand through the universe and stay superintelligent for as long as physically possible. Eventually, our civilization is bound to wander into one of these attractors or another.\n\nFurthermore, by the [generic preference stability argument](https://arbital.com/p/3r6), any sufficiently advanced cognitive agent is very likely to be stable in its motivations or [meta-preference framework](https://arbital.com/p/5f). So if and when life wanders into the superintelligence attractor, it will either end up in a stable state of e.g. [fun-loving](https://arbital.com/p/10d) or [the reflective equilibrium of its creators' civilization](https://arbital.com/p/3c5) and hence achieve lots of [value](https://arbital.com/p/55), or a misaligned AI will go on [maximizing paperclips](https://arbital.com/p/10h) forever.\n\nAmong the dilemmas we face in getting into the high-value-achieving attractor, rather than the extinction attractor or the equivalence class of paperclip maximizers, are:\n\n- The possibility of careless (or insufficiently cautious, or, much less likely, malicious) actors creating a non-value-aligned AI that undergoes an intelligence explosion.\n- The possibility of engineered superviruses destroying enough of civilization that the remaining humans go extinct without ever reaching sufficiently advanced technology.\n- Conflict between multipolar powers with nanotechnology resulting in a super-nuclear-exchange disaster that extinguishes all life.\n\nOther positive events seem like they could potentially prompt entry into the high-value-achieving superintelligence attractor:\n\n- Direct creation of a [fully](https://arbital.com/p/41k) normatively aligned [https://arbital.com/p/1g3](https://arbital.com/p/1g3) agent.\n- Creation of a [Task AGI](https://arbital.com/p/6w) powerful enough to avert the creation of other [UnFriendly AI](https://arbital.com/p/).\n- Intelligence-augmented humans (or 64-node clustered humans linked by brain-computer-interface information exchange, etcetera) who are able and motivated to solve the AI alignment problem.\n\nOn the other hand, consider someone who proposes that \"Rather than building AI, [we should](https://arbital.com/p/) build [Oracle AIs](https://arbital.com/p/) that just answer questions,\" and who then, after further exposure to the concept of the [AI-Box Experiment](https://arbital.com/p/) and [cognitive uncontainability](https://arbital.com/p/2j), further narrows their specification to say that [an Oracle running in three layers of sandboxed simulation must output only formal proofs
of given theorems in Zermelo-Fraenkel set theory](https://arbital.com/p/70), and a heavily sandboxed and provably correct verifier will look over this output proof and signal 1 if it proves the target theorem and 0 otherwise, at some fixed time to avoid timing attacks.\n\nThis doesn't resolve the larger value achievement dilemma, because there's no obvious thing we can do with a ZF provability oracle that solves our larger problem. There's no plan such that it would save the world *if only* we could take some suspected theorems of ZF and know that some of them had formal proofs.\n\nThe thrust of considering a larger 'value achievement dilemma' is that while imaginable alternatives to aligned AIs exist, they must pass a double test to be our best alternative:\n\n- They must be genuinely easier or safer than the easiest (pivotal) form of the AI alignment problem.\n- They must be game-changers for the overall situation in which we find ourselves, opening up a clear path to victory from the newly achieved scenario.\n\nAny strategy that does not putatively open a clear path to victory if it succeeds, doesn't seem like a plausible policy alternative to trying to solve the AI alignment problem or to doing something else such that success leaves us a clear path to victory. Trying to solve the AI alignment problem is something intended to leave us a clear path to achieving almost all of the achievable value for the future and its astronomical stakes. Anything that doesn't open a clear path to getting there is not an alternative solution for getting there.\n\nFor more on this point, see the page on [pivotal events](https://arbital.com/p/6y).\n\n# Subproblems of the larger value achievement dilemma\n\nWe can see the place of AI alignment in the larger scheme by considering its parent problem, its sibling problems, and examples of its child problems.\n\n- The **value achievement dilemma**: How does Earth-originating intelligent life achieve an acceptable proportion of its potential [value](https://arbital.com/p/55)?\n - The **AI alignment problem**: How do we create AIs such that running them produces (global) outcomes of acceptably high value?\n - The **value alignment problem**: How do we create AIs that *want* or *prefer* to cause events that are of high value? If we accept that we should solve the value alignment problem by creating AIs that prefer or want in particular ways, how do we do that?\n - The **[https://arbital.com/p/6c](https://arbital.com/p/6c)** or **value learning** problem: How can we pinpoint, in the AI's decision-making, outcomes that have high 'value'? (Despite all the [foreseeable difficulties](https://arbital.com/p/6r) such as [edge instantiation](https://arbital.com/p/2w) and [Goodhart's Curse](https://arbital.com/p/6g4).)\n - Other properties of aligned AIs such as e.g. **[https://arbital.com/p/-45](https://arbital.com/p/-45)**: How can we create AIs such that, when we make an error in identifying value or specifying the decision system, the AI does not resist our attempts to correct what we regard as an error?\n - [Oppositional](https://arbital.com/p/7fx) features such as e.g. [boxing](https://arbital.com/p/6z) that are intended to mitigate harm if the AI's behavior has gone outside expected bounds.\n - The intelligence amplification problem. How can we create smarter humans, preferably without driving them insane or otherwise ending up with evil ones?\n - The [value selection](https://arbital.com/p/) problem. 
How can we figure out what to substitute in for the metasyntactic variable 'value'? ([Answer](https://arbital.com/p/313).)", "date_published": "2017-02-01T23:41:23Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["The value achievement dilemma is the general, broad challenge faced by Earth-originating intelligent life in steering our [cosmic endowment](https://arbital.com/p/7cy) into a state of high [value](https://arbital.com/p/55) - successfully turning the stars into a happy civilization.\n\nWe face potential existential catastrophes (resulting in our extermination or the corruption of the cosmic endowment) such as sufficiently lethal engineered pandemics, non-value-aligned AIs, or insane smart uploads. A strategy is [relevant](https://arbital.com/p/2s) to value achievement only if success is a [game-changer](https://arbital.com/p/6y) for the overall dilemma humanity faces. E.g., [value-aligned](https://arbital.com/p/5s) [powerful AIs](https://arbital.com/p/2c) or [intelligence-enhanced humans](https://arbital.com/p/) both seem to qualify as strategically relevant; but an AI restricted to *only* [prove theorems in Zermelo-Frankel set theory](https://arbital.com/p/70) has no obvious game-changing use."], "tags": ["Work in progress", "B-Class"], "alias": "2z"} {"id": "c99c33b97de7aa5f45ab7c86cc493b0a", "title": "User manipulation", "url": "https://arbital.com/p/user_manipulation", "source": "arbital", "source_type": "text", "text": "If there's anything an AGI wants whose achievement involves steps that interact with the AGI's programmers or users, then by default, the AGI will have an [instrumental incentive](https://arbital.com/p/10k) to optimize the programmers/users in the course of achieving its goal. If the AGI wants to self-improve, then by default and unless specifically [averted](https://arbital.com/p/2vk), it also wants to have its programmers not interfere with self-improvement. If a Task AGI has been aligned to the point of taking user instructions, then by default and unless otherwise averted, it will forecast greater success in the eventualities where it receives easier user instructions.", "date_published": "2016-03-31T16:26:09Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "309"} {"id": "ab71fb1dfc76c16a68dd9670b2a63e09", "title": "User maximization", "url": "https://arbital.com/p/30b", "source": "arbital", "source_type": "text", "text": "A sub-principle of avoiding [user manipulation](https://arbital.com/p/309) - if you've formulated the AI in terms of an argmax over $X$ or an \"optimize $X$\" instruction, and $X$ includes a user interaction as a subpart, then you've just told the AI to optimize the user. 
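Before the prose example that follows, here is the same point as a toy calculation (hypothetical code; the policy names and numbers are invented for illustration and are not from the source):

```python
# Toy contrast between treating the user's instruction as a constant outside the
# objective versus a variable inside it. Policy names and numbers are invented.

POLICIES = {
    "just_do_the_task":     {"instruction": "hard_task", "p_success": 0.80},
    "nag_user_to_ask_less": {"instruction": "easy_task", "p_success": 0.99},
}

def score_with_instruction_held_fixed(policy, fixed_instruction="hard_task"):
    # The user's instruction is a constant: we only credit success on that task,
    # so steering the user toward an easier request earns nothing.
    info = POLICIES[policy]
    return info["p_success"] if info["instruction"] == fixed_instruction else 0.0

def score_with_instruction_inside(policy):
    # The user's instruction is a variable inside the objective: whichever
    # instruction the policy elicits is the one whose success gets maximized.
    return POLICIES[policy]["p_success"]

assert max(POLICIES, key=score_with_instruction_held_fixed) == "just_do_the_task"
assert max(POLICIES, key=score_with_instruction_inside) == "nag_user_to_ask_less"
```

With the instruction held fixed, manipulating the user buys nothing; with the instruction inside the objective, the argmax selects the policy that elicits an easier instruction.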
For example, let's say that the AI's criterion of action is \"Choose a policy $p$ by maximizing/optimizing the probability $X$ that the user's instructions are carried out.\" If the user's instructions are a variable inside the formula for $X$ rather than a constant outside it, you've just told the AI to try and get the user to give it easier instructions.", "date_published": "2016-03-31T16:31:03Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "30b"} {"id": "563737c76b3590a039c38525c56af3d3", "title": "Extrapolated volition (normative moral theory)", "url": "https://arbital.com/p/normative_extrapolated_volition", "source": "arbital", "source_type": "text", "text": "(This page is about extrapolated volition as a normative moral theory - that is, the theory that extrapolated volition captures the concept of [value](https://arbital.com/p/55) or what outcomes we *should* want. For the closely related proposal about what a sufficiently advanced [self-directed](https://arbital.com/p/1g3) AGI should be built to want/target/decide/do, see [coherent extrapolated volition](https://arbital.com/p/3c5).)\n\n# Concept\n\nExtrapolated volition is the notion that when we ask \"What is right?\", then insofar as we're asking something meaningful, we're asking about the result of running a certain logical function over possible states of the world, where this function is analytically identical to the result of *extrapolating* our current decision-making process in directions such as \"What if I knew more?\", \"What if I had time to consider more arguments (so long as the arguments weren't hacking my brain)?\", or \"What if I understood myself better and had more self-control?\"\n\nA simple example of extrapolated volition might be to consider somebody who asks you to bring them orange juice from the refrigerator. You open the refrigerator and see no orange juice, but there's lemonade. You imagine that your friend would want you to bring them lemonade if they knew everything you knew about the refrigerator, so you bring them lemonade instead. On an abstract level, we can say that you \"extrapolated\" your friend's \"volition\", in other words, you took your model of their mind and decision process, or your model of their \"volition\", and you imagined a counterfactual version of their mind that had better information about the contents of your refrigerator, thereby \"extrapolating\" this volition.\n\nHaving better information isn't the only way that a decision process can be extrapolated; we can also, for example, imagine that a mind has more time in which to consider moral arguments, or better knowledge of itself. Maybe you currently want revenge on the Capulet family, but if somebody had a chance to sit down with you and have a long talk about how revenge affects civilizations in the long run, you could be talked out of that. 
Maybe you're currently convinced that you advocate for green shoes to be outlawed out of the goodness of your heart, but if you could actually see a printout of all of your own emotions at work, you'd see there was a lot of bitterness directed at people who wear green shoes, and this would change your mind about your decision.\n\nIn Yudkowsky's version of extrapolated volition considered on an individual level, the three core directions of extrapolation are:\n\n- Increased knowledge - having more veridical knowledge of declarative facts and expected outcomes.\n- Increased consideration of arguments - being able to consider more possible arguments and assess their validity.\n- Increased reflectivity - greater knowledge about the self, and to some degree, greater self-control (though this raises further questions about which parts of the self normatively get to control which other parts).\n\n# Motivation\n\nDifferent people react differently to the question \"Where *should* we point an [autonomous](https://arbital.com/p/1g3) superintelligence, if we can point it exactly?\" and approach it from different angles. These angles include:\n\n- All this talk of 'shouldness' is just a cover for the fact that whoever gets to build the superintelligence wins all the marbles; no matter what you do with your superintelligence, you'll be the one who does it.\n- What if we tell the superintelligence what to do and it's the wrong thing? What if we're basically confused about what's right? Shouldn't we let the superintelligence figure that out on its own with its own superior intelligence?\n- Imagine the Ancient Greeks telling a superintelligence what to do. They'd have told it to optimize personal virtues, including, say, a glorious death in battle. This seems like a bad thing and we need to figure out how not to do the analogous thing. So telling an AGI to do what seems like a good idea to us will also end up seeming a very regrettable decision a million years later.\n- Obviously we should just tell the AGI to optimize liberal democratic values. Liberal democratic values are good. The real threat is if bad people get their hands on AGI and build an AGI that doesn't optimize liberal democratic values.\n\nSome corresponding initial replies might be:\n\n- Okay, but suppose you're a programmer and you're trying *not to be a jerk.* If you're like, \"Well, whatever I do originates in myself and is therefore equally selfish, so I might as well declare myself God-Emperor of Earth,\" you're being a jerk. Is there anything we can do which is less jerky, and indeed, minimally jerky?\n- If you say you have no information at all about what's 'right', then what does the term even mean? If I might as well have my AGI maximize paperclips and you have no ground on which to stand and say that's the wrong way to compute normativity, then what are we even talking about in the first place? The word 'right' or 'should' must have some meaning that you know about, even if it doesn't automatically print out a list of everything you know is right. Let's talk about hunting down that meaning.\n- Okay, so what should the Ancient Greeks have done if they did have to program an AI? How could they *not* have doomed future generations? Suppose the Ancient Greeks are clever enough to have noticed that sometimes people change their minds about things and to realize that they might not be right about everything. How can they use the cleverness of the AGI in a constructively specified, computable fashion that gets them out of this hole? 
You can't just tell the AGI to compute what's 'right', you need to put an actual computable question in there, not a word.\n- What if you would, after some further discussion, want to tweak your definition of \"liberal democratic values\" just a little? What if it's *predictable* that you would do that? Would you really want to be stuck with your off-the-cuff definition a million years later?\n\nArguendo by [CEV](https://arbital.com/p/3c5)'s advocates, these conversations eventually all end up converging on [Coherent Extrapolated Volition](https://arbital.com/p/3c5) as an alignment proposal by different roads.\n\n\"Extrapolated volition\" is the corresponding normative theory that you arrive at by questioning the meaning of 'right' or trying to figure out what we 'should' really truly do.\n\n# EV as [rescuing](https://arbital.com/p/3y6) the notion of betterness\n\nWe can see EV as trying to **[rescue](https://arbital.com/p/3y6)** the following pretheoretic intuitions (as they might be experienced by someone feeling confused, or just somebody who'd never questioned metaethics in the first place):\n\n- (a) It's possible to think that something is right, and be incorrect.\n - (a1) It's possible for something to be wrong even if nobody knows that it's wrong. E.g. an uneven division of an apple pie might be unfair even if all recipients don't realize this.\n - (a2) We can learn more about what's right, and change our minds to be righter.\n- (b) Taking a pill that changes what you think is right, should not change what *is* right. (If you're contemplating taking a pill that makes you think it's right to secretly murder 12-year-olds, you should not reason, \"Well, if I take this pill I'll murder 12-year-olds... but also it *will* be all right to murder 12-year-olds, so this is a great pill to take.\")\n- (c) We could be wrong, but it sure *seems* like the things on [Frankena's list](https://arbital.com/p/41r) are all reasonably good. (\"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc...\")\n - (c1) The fact that we could be in some mysterious way \"wrong\" about what belongs on Frankena's list, doesn't seem to leave enough room for \"[make as many paperclips as possible](https://arbital.com/p/10h)\" to be the only thing on the list. Even our state of confusion and possible ignorance doesn't seem to allow for that to be the answer. We're at least pretty sure that *isn't* the total sum of goodness.\n - (c2) Similarly, on the meta-level, it doesn't seem like the meta-level procedure \"Pick whatever procedure for determining rightness, leads to the most paperclips existing after you adopt it\" could be the correct answer.\n\nWe *cannot* [rescue](https://arbital.com/p/3y6) these properties by saying:\n\n\"There is an irreducible, non-natural 'rightness' XML tag attached to some objects and events. Our brains perceive this XML tag, but imperfectly, giving us property (a) when we think the XML tag is there, even though it isn't. The XML tags are there even if nobody sees them (a1). Sometimes we stare harder and see the XML tag better (a2). Obviously, doing anything to a brain isn't going to change the XML tag (b), just fool the brain or invalidate its map of the XML tag. All of the things on Frankena's list have XML tags (c) or at least we think so. 
For paperclips to be the total correct content of Frankena's list, we'd need to be wrong about paperclips not having XML tags *and* wrong about everything on Frankena's list that we think *does* have an XML tag (c1). And on the meta-level, \"Which sense of rightness leads to the most paperclips?\" doesn't say anything *about* XML tags, and it doesn't lead *to* there being lots of XML tags, so there's no justification for it (c2).\"\n\nThis doesn't work because:\n\n- There are, [in fact](https://arbital.com/p/112), no tiny irreducible XML tags attached to objects.\n- If there were little tags like that, there'd be no obvious normative justification for our caring about them.\n- It doesn't seem like we should be able to make it good to murder 12-year-olds by swapping around the irreducible XML tags on the event.\n- There's no way our brains could perceive these tiny XML tags even if they were there.\n- There's no obvious causal story for how humans could have evolved such that we do in fact care about these tiny XML tags. (A [descriptive rather than normative](https://arbital.com/p/3y9) problem with the theory as a whole; natural selection has no normative force or justificational power, but we *do* need our theory of how brains actually work to be compatible with it).\n\nOnto what sort of entity can we then map our intuitions, if not onto tiny XML tags?\n\nConsider the property of sixness possessed by six apples on a table. The relation between the physical six apples on a table, and the logical number '6', is given by a logical function that takes physical descriptions as inputs: in particular, the function \"count the number of apples on the table\".\n\nCould we rescue 'rightness' onto a logical function like this, only much more complicated?\n\nLet's examine how the 6-ness property and the \"counting apples\" function behave:\n\n- There are, in fact, no tiny tags saying '6' attached to the apples (and yet there are still six of them).\n- It's possible to think there are 6 apples on the table, and be wrong.\n- We can sometimes change our minds about how many apples there are on a table.\n- There can be 6 apples on a table even if nobody is looking at it.\n- Taking a pill that changes how many apples you think are on the table, doesn't change the number of apples on the table.\n- You can't have a 6-tag-manipulator that changes the number of apples on a table without changing anything about the table or apples.\n- There's a clear causal story for how we can see apples, and also for how our brains can count things, and there's an understandable historical fact about *why* humans count things.\n- Changing the history of how humans count things could change *which* logical function our brains were computing on the table, so that our brains were no longer \"counting apples\", but it wouldn't change the number of apples on the table. 
We'd be changing *which* logical function our brains were considering, not changing the logical facts themselves or making it so that identical premises would lead to different conclusions.\n- Suppose somebody says, \"Hey, you know, sometimes we're wrong about whether there's 6 of something or not, maybe we're just entirely confused about this counting thing; maybe the real number of apples on this table is this paperclip I'm holding.\" Even if you often made mistakes in counting, didn't know how to axiomatize arithmetic, and were feeling confused about the nature of numbers, you would still know *enough* about what you were talking about to feel pretty sure that the number of apples on the table was not in fact a paperclip.\n- If you could ask a superintelligence how many grains of sand your brain would think there were on a beach, in the limit of your brain representing everything the superintelligence knew and thinking very quickly, you would indeed gain veridical knowledge about the number of grains of sand on that beach. Your brain doesn't determine the number of grains of sand on the beach, and you can't change the logical properties of first-order arithmetic by taking a pill that changes your brain. But there's an *analytic* relation between the procedure your brain currently represents and tries to carry out in an error-prone way, and the logical function that counts how many grains of sand on the beach.\n\nThis suggests that 6-ness has the correct ontological nature for some much bigger and more complicated logical function than \"Count the number of apples on the table\" to be outputting rightness. Or rather, if we want to [rescue our pretheoretic sense](https://arbital.com/p/3y6) of rightness in a way that [adds up to moral normality](https://arbital.com/p/3y6), we should rescue it onto a logical function.\n\nThis function, e.g., starts with the items on Frankena's list and everything we currently value; but also takes into account the set of arguments that might change our mind about what goes on the list; and also takes into account meta-level conditions that we would endorse as distinguishing \"valid arguments\" and \"arguments that merely change our minds\". (This last point is pragmatically important if we're considering trying to get a [superintelligence](https://arbital.com/p/2c) to [extrapolate our volitions](https://arbital.com/p/3c5). The list of everything that *does in fact change your mind* might include particular patterns of rotating spiral pixel patterns that effectively hack a human brain.)\n\nThe end result of all this work is that we go on guessing which acts are right and wrong as before, go on considering that some possible valid arguments might change our minds, go on weighing such arguments, and go on valuing the things on [Frankena's list](https://arbital.com/p/41r) in the meantime. 
The theory as a whole is intended to [add up to the same moral normality as before](https://arbital.com/p/3y6), just with that normality embedded into the world of causality and logic in a non-confusing way.\n\nOne point we could have taken into our starting list of important properties, but deferred until later:\n\n- It sure *feels* like there's a beautiful, mysterious floating 'rightness' property of things that are right, and that the things that have this property are terribly precious and important.\n\nOn the general program of \"[rescuing the utility function](https://arbital.com/p/3y6)\", we should not scorn this feeling, and should instead figure out how to map it onto what actually exists.\n\nIn this case, having preserved almost all the *structural* properties of moral normality, there's no reason why anything should change about how we experience the corresponding emotion in everyday life. If our native emotions are having trouble with this new, weird, abstract, learned representation of 'a certain big complicated logical function', we should do our best to remember that the rightness is still there. And this is not a retreat to second-best any more than \"disordered kinetic energy\" is some kind of sad consolation prize for [the universe's lack of ontologically basic warmth](https://arbital.com/p/3y6), etcetera.\n\n## Unrescuability of moral internalism\n\nIn standard metaethical terms, we have managed to rescue 'moral cognitivism' (statements about rightness have truth-values) and 'moral realism' (there is a fact of the matter out there about how right something is). We have *not* however managed to rescue the pretheoretic intuition underlying 'moral internalism':\n\n- A moral argument, to be valid, ought to be able to persuade anyone. If a moral argument is unpersuasive to someone who isn't making some kind of clear mistake in rejecting it, then that argument must rest on some appeal to a private or merely selfish consideration that should form no part of true morality that everyone can perceive.\n\nThis intuition cannot be preserved in any reasonable way, because [paperclip maximizers](https://arbital.com/p/10h) are in fact going to go on making paperclips (and not because they made some kind of cognitive error). A paperclip maximizer isn't disagreeing with you about what's right (the output of the logical function), it's just following whatever plan leads to the most paperclips.\n\nSince the paperclip maximizer's policy isn't influenced by any of our moral arguments, we can't preserve the internalist intuition without reducing the set of valid justifications and truly valuable things to the empty set - and even *that,* a paperclip maximizer wouldn't find motivationally persuasive!\n\nThus our options regarding the pretheoretic internalist intuition that a moral argument is not valid if not universally persuasive, seem to be limited to the following:\n\n 1. Give up on the intuition in its intuitive form: a paperclip maximizer doesn't care if it's unjust to kill everyone; and you can't talk it into behaving differently; and this doesn't reflect a cognitive stumble on the paperclip maximizer's part; and this fact gives us no information about what is right or justified.\n 2. Preserve, at the cost of all other pretheoretic intuitions about rightness, the intuition that only arguments that universally influence behavior are valid: that is, there are no valid moral arguments.\n 3. 
Try to sweep the problem under the rug by claiming that *reasonable* minds must agree that paperclips are objectively pointless... even though Clippy is not suffering from any defect of epistemic or instrumental power, and there's no place in Clippy's code where we can point to some inherently persuasive argument being dropped by a defect or special case of that code.\n\nIt's not clear what the point of stance (2) would be, since even this is not an argument that would cause Clippy to alter its behavior, and hence the stance is self-defeating. (3) seems like a mere word game, and potentially a *very dangerous* word game if it tricks AI developers into thinking that rightness is a default behavior of AIs, or even a function of low [algorithmic complexity](https://arbital.com/p/5v), or that [beneficial](https://arbital.com/p/3d9) behavior automatically correlates with 'reasonable' judgments about less [value-laden](https://arbital.com/p/36h) questions. See \"[https://arbital.com/p/1y](https://arbital.com/p/1y)\" for the extreme practical importance of acknowledging that moral internalism is in practice false.\n\n# Situating EV in contemporary metaethics\n\n[Metaethics](https://arbital.com/p/41n) is the field of academic philosophy that deals with the question, not of \"What is good?\", but \"What sort of property is goodness?\" As applied to issues in Artificial Intelligence, rather than arguing over which particular outcomes are better or worse, we are, from a standpoint of [executable philosophy](https://arbital.com/p/112), asking how to *compute* what is good; and why the output of any proposed computation ought to be identified with the notion of shouldness.\n\nEV replies that for each person at a single moment in time, *right* or *should* is to be identified with a (subjectively uncertain) logical constant that is fixed for that person at that particular moment in time, where this logical constant is to be identified with the result of running the extrapolation process on that person. We can't run the extrapolation process so we can't get perfect knowledge of this logical constant, and will be subjectively uncertain about what is right.\n\nTo eliminate one important ambiguity in how this might cash out, we regard this logical constant as being *analytically identified* with the extrapolation of our brains, but not *counterfactually dependent* on counterfactually varying forms of our brains. If you imagine being administered a pill that makes you want to kill people, then you shouldn't compute in your imagination that different things are right for this new self. Instead, this new self now wants to do something other than what is right. We can meaningfully say, \"Even if I (a counterfactual version of me) wanted to kill people, that wouldn't make it right\" because the counterfactual alteration of the self doesn't change the logical object that you mean by saying 'right'.\n\nHowever, there's still an *analytic* relation between this logical object and your *actual* mindstate, which is indeed implied by the very meaning of discourse about shouldness, which means that you *can* get veridical information about this logical object by having a sufficiently intelligent AI run an approximation of the extrapolation process over a good model of your actual mind.
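Roughly schematizing this point (a sketch only, with $EV$ standing in for whatever the extrapolation procedure turns out to be, $m_t$ for the speaker's actual mindstate at time $t$, and $m'$ for a counterfactually altered mindstate):

$$\text{right}_t \ := \ EV(m_t)$$

Under the counterfactual in which $m_t$ is replaced by $m'$, the referent of 'right' is still $EV(m_t)$ rather than $EV(m')$; whereas an AI holding a good model $\hat{m} \approx m_t$ and approximating $EV$ can still give you evidence about the value of $EV(m_t)$.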
If a sufficiently intelligent and trustworthy AGI tells you that after thinking about it for a while you wouldn't want to eat cows, you have gained veridical information about whether it's right to eat cows.\n\nWithin the standard terminology of academic metaethics, \"extrapolated volition\" as a normative theory is:\n\n- Cognitivist. Normative propositions can be true or false. You can believe that something is right and be mistaken.\n- Naturalist. Normative propositions are *not* irreducible or based on non-natural properties of the world.\n- Externalist / not internalist. It is not the case that all sufficiently powerful optimizers must act on what we consider to be moral propositions. A paperclipper does what is clippy, not what is right, and the fact that it's trying to turn everything into paperclips does not indicate a disagreement with you about what is *right* any more than you disagree about what is clippy. \n- Reductionist. The whole point of this theory is that it's the sort of thing you could potentially compute.\n- More synthetic reductionist than analytic reductionist. We don't have *a priori* knowledge of our starting mindstate and don't have enough computing power to complete the extrapolation process over it. Therefore, we can't figure out exactly what our extrapolated volition would say just by pondering the meaning of the word 'right'.\n\nClosest antecedents in academic metaethics are Rawls and Goodman's [reflective equilibrium](https://en.wikipedia.org/wiki/Reflective_equilibrium), Harsanyi and Railton's [ideal advisor](https://intelligence.org/files/IdealAdvisorTheories.pdf) theories, and Frank Jackson's [moral functionalism](http://plato.stanford.edu/entries/naturalism-moral/#JacMorFun).\n\n## Moore's Open Question\n\n*Argument.* If extrapolated volition is analytically equivalent to good, then the question \"Is it true that extrapolated volition is good?\" is meaningless or trivial. However, this question is not meaningless or trivial, and seems to have an open quality about it. Therefore, extrapolated volition is not analytically equivalent to goodness.\n\n*Reply.* Extrapolated volition is not supposed to be *transparently* identical to goodness. The normative identity between extrapolated volition and goodness is allowed to be something that you would have to think for a while and consider many arguments to perceive.\n\nNatively, human beings don't start out with any kind of explicit commitment to a particular metaethics; our brains just compute a feeling of rightness about certain acts, and then sometimes update and say that acts we previously thought were right are not-right.\n\nWhen we go from that, to trying to draw a corresponding logical function that we can see our brains as approximating, and updating when we learn new things or consider new arguments, we are carrying out a project of \"[rescuing the utility function](https://arbital.com/p/3y6)\". We are reasoning that we can best rescue our native state of confusion by seeing our reasoning about goodness as having its referent in certain logical facts, which lets us go on saying that it is better ceteris paribus for people to be happy than in severe pain, and that we can't reverse this ordering by taking a pill that alters our brain (we can only make our future self act on different logical questions), etcetera. 
It's not surprising if this bit of philosophy takes longer than five minutes to reason through.", "date_published": "2017-01-07T07:06:33Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The notion that some act or policy \"can in fact be wrong\", even when you \"think it is right\", is more intuitive to some people than others; and it raises the question of what \"rightness\" is and [how to compute it](https://arbital.com/p/112).\n\nOn the extrapolated volition theory of metaethics, if you would change your mind about something after learning new facts or considering new arguments, then your updated state of mind is righter. This can be true in advance of you knowing the facts.\n\nE.g., maybe you currently want revenge on the Capulet family. But if somebody had a chance to sit down with you and have a long talk about how revenge affects civilizations in the long run, you could be talked out of thinking that revenge, in general, is right. So long as this *might* be true, it makes sense to say, \"I want revenge on the Capulets, but maybe that's not really right.\"\n\nExtrapolated volition is a normative moral theory. It is a theory of how the concept of *shouldness* or *goodness* is or ought to be cashed out ([rescued](https://arbital.com/p/3y6)). The corresponding proposal for [completely aligning](https://arbital.com/p/41k) a [fully self-directed superintelligence](https://arbital.com/p/1g3) is [coherent extrapolated volition](https://arbital.com/p/3c5)."], "tags": ["B-Class"], "alias": "313"} {"id": "c8f1928c4db0a5c441f1d43d5de73cc2", "title": "Value-laden", "url": "https://arbital.com/p/value_laden", "source": "arbital", "source_type": "text", "text": "An intuitive human category, or other humanly intuitive quantity or fact, is *value-laden* when it passes through human goals and desires, such that an agent couldn't reliably determine this intuitive category or quantity without knowing lots of complicated information about human goals and desires (and how to apply them to arrive at the intended concept).\n\nIn terms of [Hume's is-ought type distinction](https://arbital.com/p/), value-laden categories are those that humans compute using information from the ought side of the boundary, whether or not they notice they are doing so.\n\n# Examples\n\n## Impact vs. important impact\n\nSuppose we want an AI to cure cancer, without this causing any *important side effects.* What is or isn't an \"important side effect\" depends on what you consider \"important\". If the cancer cure causes the level of thyroid-stimulating hormone to increase by 5%, this probably isn't very important. If the cure increases the user's serotonin level by 5% and this significantly changes the user's emotional state, we'd probably consider that quite important. But unless the AI already understands complicated human values, it doesn't necessarily have any way of knowing that one change in blood chemical levels is \"not important\" and the other is \"important\".\n\nIf you imagine the cancer cure as disturbing a set of variables $X_1, X_2, X_3...$ such that their values go from $x_1, x_2, x_3$ to $x_1^\\prime, x_2^\\prime, x_3^\\prime$ then the question of which $X_i$ are *important* variables is value-laden. 
If we temporarily mechanomorphize humans and suppose that we have a utility function, then we could say that variables are \"important\" when they're evaluated by our utility function, or when changes to those variables change our expected utility.\n\nBut by [orthogonality](https://arbital.com/p/1y) and [Humean freedom](https://arbital.com/p/2fs) of the utility function, there's an unlimited number of increasingly complicated utility functions that take into account different variables and functions of variables, so to know what we intuitively mean by \"important\", the AI would need information of high [algorithmic](https://arbital.com/p/5v) [complexity](https://arbital.com/p/5l) that the AI had no way to deduce *a priori*. Which variables are \"important\" isn't a question of simple fact - it's on the \"ought\" side of the [Humean is-ought type distinction](https://arbital.com/p/) - so we can't assume that an AI which becomes increasingly good at answering \"is\"-type questions also knows which variables are \"important\".\n\nAnother way of looking at it is that if an AI merely builds a very good predictive model of the world, the set of \"important variables\" or \"bad side effects\" would be a squiggly category with a complicated boundary. Even after the AI has already formed a rich natural is-language to describe concepts like \"thyroid\" and \"serotonin\" that are useful for modeling and predicting human biology, it might still require a long message in this language to exactly describe the wiggly boundary of \"important impact\" or the even more wiggly boundary of \"bad impact\".\n\nThis suggests that it might be simpler to try to tell the AI to cure cancer with [a minimum of *any* side effects](https://arbital.com/p/2pf), and [checking any remaining side effects](https://arbital.com/p/2qq) with the human operator. If we have a set of \"impacts\" $X_k$ to be either minimized or checked which is broad enough to include, in passing, *everything* inside the squiggly boundary of the $X_h$ that humans care about, then this broader boundary of \"any impact\" might be smoother and less wiggly - that is, a short message in the AI's is-language, making it easier to learn. For the same reason that a library containing every possible book has [less information](https://arbital.com/p/5v) than a library which only contains one book, a category boundary \"impact\" which includes everything a human cares about, *plus some other stuff,* can potentially be much simpler than an exact boundary drawn around \"impacts we care about\" which is value-laden because it involves caring.\n\nFrom a human perspective, the complexity of our value system is already built into us and now appears as a deceptively simple-looking function call - relative to the complexity already built into us, \"bad impact\" sounds very obvious and very easy to describe. This may lead people to underestimate the difficulty of training AIs to perceive the same boundary. (Just list out all the impacts that potentially lower expected value, darn it! Just the *important* stuff!)\n\n## Faithful simulation vs. adequate simulation\n\nSuppose we want to run an \"adequate\" or \"good-enough\" simulation of an uploaded human brain. We can't say that an adequate simulation is one with identical input-output behavior to a biological brain, because the brain will almost certainly be a chaotic system, meaning that it's impossible for any simulation to get exactly the same result as the biological system would yield. 
We nonetheless don't want the brain to have epilepsy, or to go psychopathic, etcetera.\n\nThe concept of an \"adequate\" simulation, in this case, is really standing in for \"a simulation such that the *expected value* of using the simulated brain's information is within epsilon of using a biological brain\". In other words, our intuitive notion of what counts as a good-enough simulation is really a value-laden threshold because it involves an estimate of what's good enough.\n\nSo if we want an AI to have a notion of what kind of simulation is a faithful one, we might find it simpler to try to describe some superset of brain properties, such that if the simulated brain doesn't perturb the expectations of those properties, it doesn't perturb expected value either from our own intuitive standpoint (meaning the result of running the uploaded brain is equally valuable in our own expectation). This set of faithfulness properties would need to automatically pick up on changes like psychosis, but could potentially pick up on a much wider range of other changes that we'd regard as unimportant, so long as all the important ones are in there.\n\n\n\n## Personhood\n\nSuppose that non-vegetarian programmers train an AGI on their intuitive category \"person\", such that:\n\n- Rocks are not \"people\" and can be harmed if necessary.\n- Shoes are not \"people\" and can be harmed if necessary.\n- Cats are sometimes valuable to people, but are not themselves people.\n- Alice, Bob, and Carol are \"people\" and should not be killed.\n- Chimpanzees, dolphins, and the AGI itself: not sure, check with the users if the issue arises.\n\nNow further suppose that the programmers haven't thought to cover, in the training data, any case of a cryonically suspended brain. Is this a person? Should it not be harmed? On many 'natural' metrics, a cryonically suspended brain is more similar to a rock than to Alice.\n\nFrom an intuitive perspective of avoiding harm to sapient life, a cryonically suspended brain has to be presumed a person until otherwise proven. But the natural, or inductively simple category that covers the training cases is likely to label the brain a non-person, maybe with very high probability. The fact that we want the AI to be careful not to hurt the cryonically suspended brain is the sort of thing you could only deduce by knowing which sort of things humans care about and why. It's not a simple physical feature of the brain itself.\n\nSince the category \"person\" is a value-laden one, when we extend it to a new region beyond the previous training cases, it's possible for an entirely new set of philosophical considerations to swoop in, activate, and control how we classify that case via considerations that didn't play a role in the previous training cases.\n\n# Implications\n\nOur intuitive evaluation of value-laden categories goes through our [Humean degrees of freedom](https://arbital.com/p/2fs). This means that a value-laden category which a human sees as intuitively simple, can still have high [https://arbital.com/p/-5v](https://arbital.com/p/-5v), even relative to sophisticated models of the \"is\" side of the world. 
This in turn means that even an AI that understands the \"is\" side of the world very well might not correctly and exactly learn a value-laden category from a small or incomplete set of training cases.\n\nFrom the perspective of training an agent that hasn't yet been aligned along all the Humean degrees of freedom, value-laden categories are very wiggly and complicated relative to the agent's empirical language. Value-laden categories are liable to contain exceptional regions that your training cases turned out not to cover, where from your perspective the obvious intuitive answer is a function of new value-considerations that the agent wouldn't be able to deduce from previous training data.\n\nThis is why much of the art in Friendly AI consists of trying to rephrase an alignment schema into terms that are simple relative to \"is\"-only concepts, where we want an AI with an impact-in-general metric, rather than an AI which avoids only bad impacts. \"Impact\" might have a simple, central core relative to a moderately sophisticated language for describing the universe-as-is. \"Bad impact\" or \"important impact\" don't have a simple, central core and hence might be much harder to identify via training cases or communication. Again, this is hard because humans do have all their subtle value-laden categories like 'important' built in as opaque function calls. Hence people approaching value alignment for the first time often expect that various concepts are easy to identify, and tend to see the intuitive or intended values of all their concepts as \"common sense\" regardless of which side of the is-ought divide that common sense is on.\n\nIt's true, for example, that a modern chess-playing algorithm has \"common sense\" about when not to try to seize control of the gameboard's center; and similarly a sufficiently advanced agent would develop \"common sense\" about which substances would in empirical fact have which consequences on human biology, since this part is a strictly \"is\"-question that can be answered just by looking hard at the universe. Not *wanting* to administer poisonous substances to a human requires a prior dispreference over the consequences of administering that poison, even if the consequences are correctly forecasted. Similarly, the category \"poison\" could be said to really mean something like \"a substance which, if administered to a human, produces low utility\"; some people might classify vodka as poisonous, while others could disagree. An AI doesn't necessarily have common sense about the [intended](https://arbital.com/p/6h) evaluation of the \"poisonous\" category, even if it has fully developed common sense about which substances have which empirical biological consequences when ingested. One of those forms of common sense can be developed by staring very intelligently at biological data, and one of them cannot.
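As a toy contrast (purely illustrative; the function names, substances, and numbers below are made up for the example), the empirical question and the value-laden question factor apart like this:

```python
# Hypothetical toy code: the 'is'-side model vs. the value-laden category.

def predicted_biological_effects(substance):
    # Purely empirical model: substance -> predicted physiological consequences.
    # In principle this table is learnable from biological data alone.
    effects = {
        'vodka':   {'lethality': 0.001, 'liver_damage': 0.2, 'intoxication': 0.9},
        'cyanide': {'lethality': 0.980, 'liver_damage': 0.0, 'intoxication': 0.0},
        'water':   {'lethality': 0.000, 'liver_damage': 0.0, 'intoxication': 0.0},
    }
    return effects[substance]

def is_poisonous(substance, utility_of_effects):
    # Value-laden category: 'poison' ~ 'produces low utility if administered'.
    # The answer depends on which utility function is supplied, and that is
    # exactly the part that cannot be read off from the biological data.
    return utility_of_effects(predicted_biological_effects(substance)) < 0

# Two evaluators who agree about all the empirical facts but weigh them differently:
teetotaler = lambda e: -(10 * e['lethality'] + e['liver_damage'] + e['intoxication'])
partygoer  = lambda e: -(10 * e['lethality'] + e['liver_damage']) + e['intoxication']

print(is_poisonous('vodka', teetotaler))    # True  - low utility under these values
print(is_poisonous('vodka', partygoer))     # False - same facts, different values
print(is_poisonous('cyanide', teetotaler))  # True
print(is_poisonous('cyanide', partygoer))   # True
```

Staring harder at biology improves `predicted_biological_effects`; it does nothing to settle which `utility_of_effects` is the intended one.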
But from a human intuitive standpoint, both of these can *feel* equally like the same notion of \"common sense\", which might lead to a dangerous expectation that an AI gaining in one type of common sense is bound to gain in the other.\n\n# Further reading\n\n- http://lesswrong.com/lw/td/magical_categories/", "date_published": "2016-04-18T16:35:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A category, estimate, or fact is *value-laden* when it passes through human goals and desires, so that an AI couldn't compute the [intended meaning](https://arbital.com/p/6h) without understanding [complicated human values](https://arbital.com/p/5l).\n\nFor instance, if you tell an AI to \"cure cancer without bad side effects\", the notion of what is a bad side effect is value-laden. Even the notion of \"important side effect\" is value-laden, since by \"important side effect\" you really mean \"a side effect I care about\" or \"a side effect that will perturb [value](https://arbital.com/p/55)\". It might be simpler to give the AI a category of \"[all side effects](https://arbital.com/p/2pf)\" to be minimized or checked with the user.\n\nIn terms of [Hume's is-ought type distinction](https://arbital.com/p/), value-laden categories are those that humans compute using information from the ought side of the boundary. This means value-laden categories rely on our [Humean degrees of freedom](https://arbital.com/p/2fs) and therefore have high [https://arbital.com/p/-5v](https://arbital.com/p/-5v) and aren't automatically learned easily or induced as simple even by very advanced AIs."], "tags": ["Humean degree of freedom", "Work in progress", "B-Class", "Complexity of value"], "alias": "36h"} {"id": "0d6bcc726ba5c48a13e3a8b61f5e9f12", "title": "Faithful simulation", "url": "https://arbital.com/p/faithful_simulation", "source": "arbital", "source_type": "text", "text": "The safe simulation problem is to start with some dynamical physical process $D$ which would, if run long enough in some specified environment, produce some trustworthy information of great value, and to compute some *adequate* simulation $S_D$ of $D$ faster than the physical process could have run. In this context, the term \"adequate\" is [value-laden](https://arbital.com/p/36h) - it means that whatever we would use $D$ for, using $S_D$ instead produces within epsilon of the expected [value](https://arbital.com/p/55) we could have gotten from using the real $D.$ In more concrete terms, for example, we might want to tell a Task AGI \"upload this human and run them as a simulation\", and we don't want some tiny systematic skew in how the Task AGI models serotonin to turn the human into a psychopath, which is a *bad* (value-destroying) simulation fault. Perfect simulation will be out of the question; the brain is almost certainly a chaotic system and hence we can't hope to produce *exactly* the same result as a biological brain. 
The question, then, is what kind of not-exactly-the-same-result the simulation is allowed to produce.\n\nAs with \"[low impact](https://arbital.com/p/2pf)\" hopefully being lower-[complexity](https://arbital.com/p/5v) than \"[low bad impact](https://arbital.com/p/36h)\", we might hope to get an *adequate* simulation via some notion of *faithful* simulation, which rules out bumps in serotonin that turn the upload into a psychopath, while possibly also ruling out any number of other changes we *wouldn't* see as important; with this notion of \"faithfulness\" still being permissive enough to allow the simulation to take place at a level above individual quarks. On whatever computing power is available - possibly nanocomputers, if the brain was scanned via molecular nanotechnology - the upload must be runnable fast enough to [make the simulation task worthwhile](https://arbital.com/p/6y).\n\nSince the main use for the notion of \"faithful simulation\" currently appears to be [identifying](https://arbital.com/p/2s3) a safe plan for uploading one or more humans as a [pivotal act](https://arbital.com/p/6y), we might also consider this problem in conjunction with the special case of wanting to avoid [mindcrime](https://arbital.com/p/6v). In other words, we'd like a criterion of faithful simulation which the AGI can compute *without* it needing to observe millions of hypothetical simulated brains for ten seconds apiece, which could constitute creating millions of people and killing them ten seconds later. We'd much prefer, e.g., a criterion of faithful simulation of individual neurons and synapses between them up to the level of, say, two interacting cortical columns, such that we could be confident that in aggregate the faithful simulation of the neurons would correspond to the faithful simulation of whole human brains. This way the AGI would not need to think about or simulate whole brains in order to verify that an uploading procedure would produce a faithful simulation, and mindcrime could be avoided.\n\nNote that the notion of a \"functional property\" of the brain - seeing the neurons as computing something important, and not wanting to disturb the computation - is still value-laden. It involves regarding the brain as a means to a computational end, and what we see as the important computational end is value-laden, given that chaos guarantees the input-output relation won't be *exactly* the same. The brain can equally be seen as implicitly computing, say, the parity of the number of synapse activations; it's just that we don't see this functional property as a valuable one that we want to preserve.\n\nTo the extent that some notion of function might be invoked in a notion of faithful, permitted speedups, we should hope that rather than needing the AGI to understand the high-level functional properties of the brain *and which details we thought were too important to simplify,* it might be enough to understand a 'functional' model of individual neurons and synapses, with the resulting transform of the uploaded brain still allowing for a pivotal speedup *and* knowably-faithful simulation of the larger brain.\n\nAt the same time, strictly local measures of faithfulness seem problematic if they can conceal *systematic* larger divergences.
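As a rough toy model of why the systematic case is the dangerous one: if each of $N$ simulated elements is perturbed by an independent, zero-mean error of typical size $\epsilon$, the perturbations mostly cancel and their sum is only of order

$$\Big| \sum_{i=1}^{N} \delta_i \Big| \ \sim \ \epsilon \sqrt{N},$$

whereas $N$ small errors that all point in the same direction add up to around $\epsilon N$ - larger by a factor of $\sqrt{N}$, which is enormous for anything brain-sized. A faithfulness criterion that only bounds each local error at $\epsilon$ is compatible with either case.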
We might think that any perturbation of a simulated neuron which has as little effect as adding one phonon is \"within thermal uncertainty\" and therefore unimportant, but if all of these perturbations are pointing in the same direction relative to some larger functional property, the difference might be very significant. Similarly if all simulated synapses released slightly more serotonin, rather than releasing slightly more or less serotonin in no particular systematic pattern.", "date_published": "2016-04-14T01:17:00Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Open subproblems in aligning a Task-based AGI", "B-Class"], "alias": "36k"} {"id": "0baac0684ecfed4f221c3a4c2d3ab5cd", "title": "Identifying causal goal concepts from sensory data", "url": "https://arbital.com/p/identify_causal_goals", "source": "arbital", "source_type": "text", "text": "Suppose we want an AI to carry out some goals involving strawberries, and as a result, we want to [identify](https://arbital.com/p/identify_goal_concept) to the AI the [concept](https://arbital.com/p/) of \"strawberry\". One of the potential ways we could do this is by showing the AI objects that a teacher classifies as strawberries or non-strawberries. However, in the course of doing this, what the AI actually sees will be e.g. a pattern of pixels on a webcam - the actual, physical strawberry is not directly accessible to the AI's intelligence. When we show the AI a strawberry, what we're really trying to communicate is \"A certain proximal [cause](https://arbital.com/p/) of this sensory data is a strawberry\", not, \"This arrangement of sensory pixels is a strawberry.\" An AI that learns the latter concept might try to carry out its goal by putting a picture in front of its webcam; the former AI has a goal that actually involves something in its environment.\n\nThe open problem of \"identifying causal goal concepts from sensory data\" or \"identifying environmental concepts from sensory data\" is about getting an AI to form [causal](https://arbital.com/p/) goal concepts instead of [sensory](https://arbital.com/p/) goal concepts. 
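A minimal sketch of the distinction (toy code; every name here is hypothetical, and the hard problem is getting an agent to *learn* something shaped like the causal goal from sensory training data, not hand-coding it):

```python
# Toy world: one latent environment variable and the sense data it causes.

def world_after(action):
    strawberry_present = (action == 'place_real_strawberry')
    photo_on_webcam = (action == 'tape_photo_to_webcam')
    return {
        'strawberry_present': strawberry_present,                          # the cause
        'webcam_shows_strawberry': strawberry_present or photo_on_webcam,  # the sense data
    }

def sensory_utility(world):
    # Goal concept defined over the sense data itself.
    return 1.0 if world['webcam_shows_strawberry'] else 0.0

def causal_utility(world):
    # Goal concept defined over the proximal cause of the sense data.
    return 1.0 if world['strawberry_present'] else 0.0

actions = ['do_nothing', 'place_real_strawberry', 'tape_photo_to_webcam']

print([a for a in actions if sensory_utility(world_after(a)) == 1.0])
# -> ['place_real_strawberry', 'tape_photo_to_webcam']  (spoofing counts)
print([a for a in actions if causal_utility(world_after(a)) == 1.0])
# -> ['place_real_strawberry']  (only the real thing counts)
```

Both goal concepts fit equally well any training data gathered while nobody was spoofing the webcam, which is part of what makes the learning problem hard.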
Since almost no [human-intended goal](https://arbital.com/p/6h) will ever be satisfiable solely in virtue of an advanced agent arranging to see a certain field of pixels, safe ways of identifying goals to sufficiently advanced goal-based agents will presumably involve some way of identifying goals among the *causes* of sense data.\n\nA \"toy\" (and still pretty difficult) version of this open problem might be to exhibit a machine algorithm that (a) has a causal model of its environment, (b) can learn concepts over any level of its causal model including sense data, (c) can learn and pursue a goal concept, (d) has the potential ability to spoof its own senses or create fake versions of objects, and (e) is shown to learn a proximal causal goal rather than a goal about sensory data as shown by it pursuing only the causal version of that goal even if it would have the option to spoof itself.\n\nFor a more elaborated version of this open problem, see \"[https://arbital.com/p/2s0](https://arbital.com/p/2s0)\".", "date_published": "2016-04-14T19:18:05Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Open subproblems in aligning a Task-based AGI", "Task identification problem", "Look where I'm pointing, not at my finger", "B-Class"], "alias": "36w"} {"id": "1cd08b10f0f35fcb7eb459e5ef459bb5", "title": "Goal-concept identification", "url": "https://arbital.com/p/identify_goal_concept", "source": "arbital", "source_type": "text", "text": "The problem of trying to figure out how to communicate to an AGI an [intended](https://arbital.com/p/6h) goal [concept](https://arbital.com/p/) on the order of \"give me a strawberry, and not a fake plastic strawberry either\".\n\nAt this level of the problem, we're not concerned with e.g. larger problems of [safe plan identification](https://arbital.com/p/2s3) such as not mugging people for strawberries, or [minimizing side effects](https://arbital.com/p/2pf). We're not (at this level of the problem) concerned with identifying [each and every one of the components of human value](https://arbital.com/p/6c), as they might be impacted by side effects more distant in the causal graph. We're not concerned with [philosophical uncertainty](https://arbital.com/p/) about what we [should](https://arbital.com/p/normativity) mean by \"strawberry\". We suppose that in an intuitive sense, we do have a pretty good idea of what we [intend](https://arbital.com/p/6h) by \"strawberry\", such that there are things that are definitely strawberries and we're pretty happy with our sense of that so long as nobody is deliberately trying to fool it.\n\nWe just want to communicate a *local* goal concept that distinguishes edible strawberries from plastic strawberries, or nontoxic strawberries from poisonous strawberries. 
That is: we want to say \"strawberry\" in an understandable way that's suitable for fulfilling a [task](https://arbital.com/p/6w) of \"just give Sally a strawberry\", possibly in conjunction with other features like [conservatism](https://arbital.com/p/2qp) or [low impact](https://arbital.com/p/2pf) or [mild optimization](https://arbital.com/p/2r8).\n\nFor some open subproblems of the obvious approach that goes through showing actual strawberries to the AI's webcam, see \"[https://arbital.com/p/36w](https://arbital.com/p/36w)\" and \"[https://arbital.com/p/2s0](https://arbital.com/p/2s0)\".", "date_published": "2016-04-14T19:43:11Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The problem of communicating to an AI a very simple local concept on the order of \"strawberries\" or \"give me a strawberry\". This level of the problem is meant to include subproblems of local categorization like \"I meant a real strawberry like the ones I already showed you, not a fake strawberry that looks similar to your webcam\". It isn't meant to include larger problems like [verifying that a plan uses only known, whitelisted methods](https://arbital.com/p/2s3) or [identifying all possible harmful effects we could care about](https://arbital.com/p/6c)."], "tags": ["B-Class"], "alias": "36y"} {"id": "4f74ba595b5780239a1f28fca86b923a", "title": "Mechanical Turk (example)", "url": "https://arbital.com/p/mechanical_turk", "source": "arbital", "source_type": "text", "text": "In 1836, there was a sensation called the [Mechanical Turk](https://en.wikipedia.org/wiki/The_Turk), allegedly a chess-playing automaton. Edgar Allen Poe, who was also an amateur magician, wrote an essay arguing that the Turk must contain a human operator hidden in the apparatus (which it did). Besides analyzing the Turk's outward appearance to locate the hidden compartment, Poe carefully argued as to why no arrangement of wheels and gears could ever play chess in the first place, explicitly comparing the Turk to \"the calculating machine of Mr. Babbage\":\n\n> Arithmetical or algebraical calculations are, from their very nature, fixed and determinate. Certain data being given, certain results necessarily and inevitably follow [https://arbital.com/p/...](https://arbital.com/p/...)\n> But the case is widely different with the Chess-Player. With him there is no determinate progression. No one move in chess necessarily follows upon any one other. From no particular disposition of the men at one period of a game can we predicate their disposition at a different period [https://arbital.com/p/...](https://arbital.com/p/...)\n> Now even granting that the movements of the Automaton Chess-Player were in themselves determinate, they would be necessarily interrupted and disarranged by the indeterminate will of his antagonist. There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage [https://arbital.com/p/...](https://arbital.com/p/...)\n> It is quite certain that the operations of the Automaton are regulated by *mind*, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, *a priori*. \n\n(In other words: In an algebraical problem, each step follows with the previous step of necessity, and therefore can be represented by the determinate motions of wheels and gears as in Charles Babbage's proposed computing engine. 
In chess, the player's move and opponent's move don't follow with necessity from the board position, and therefore can't be represented by deterministic gears.)\n\nThis is an amazingly sophisticated remark, considering the era. It even puts a finger on the part of chess that is computationally difficult, the combinatorial explosion of possible moves. And it is still entirely wrong.\n\nEven if you know an unbounded solution to chess, you might still be 47 years away from a bounded solution. But if you can't state a program that solves the problem *in principle*, you are in some sense *confused* about the nature of the cognitive work needed to solve the problem. If you can't even solve a problem given infinite computing power, you definitely can't solve it using bounded computing power. (Imagine Poe trying to write a chess-playing program before he'd had the insight about search trees.)\n\nFor more on this point, see \"[https://arbital.com/p/107](https://arbital.com/p/107)\".", "date_published": "2016-04-17T19:12:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["In 1836, there was a sensation called the [Mechanical Turk](https://en.wikipedia.org/wiki/The_Turk), allegedly a chess-playing automaton. Edgar Allen Poe, who was also an amateur magician, wrote an essay arguing that the Turk must contain a human operator hidden in the apparatus (which it did).\n\nBesides analyzing the Turk's outward appearance to locate the hidden compartment, Poe carefully argued as to why no arrangement of wheels and gears could ever play chess in the first place, explicitly comparing the Turk to \"the calculating machine of Mr. Babbage\". In an algebraical problem (Poe said) each step follows with the previous step of necessity, and therefore can be represented by the determinate motions of wheels and gears as in Charles Babbage's proposed computing engine. In chess, the player's move and opponent's move don't follow with necessity from the board position, and therefore can't be represented by deterministic gears.\n\nThus, Poe concluded, \"It is quite certain that the operations of the Automaton are regulated by *mind*, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, *a priori*.\""], "tags": ["B-Class"], "alias": "38r"} {"id": "aff3ef5ad3a4b4cf24ed95187cba7064", "title": "Do we need to worry about AI?", "url": "https://arbital.com/p/38x", "source": "arbital", "source_type": "text", "text": "Why worry about AI when nothing has gone wrong yet?", "date_published": "2016-04-19T03:34:41Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "38x"} {"id": "81de517e2de74c338cd99ebd87710638", "title": "Coherent extrapolated volition (alignment target)", "url": "https://arbital.com/p/cev", "source": "arbital", "source_type": "text", "text": "# Introduction\n\n\"Coherent extrapolated volition\" (CEV) is [Eliezer Yudkowsky](https://arbital.com/p/2)'s proposed thing-to-do with an extremely [advanced AGI](https://arbital.com/p/2c), if you're extremely confident of your ability to [align](https://arbital.com/p/5s) it on complicated targets.\n\nRoughly, a CEV-based superintelligence would do what currently existing humans would want\\* the AI to do, *if counterfactually:*\n\n 1. We knew everything the AI knew;\n 2. We could think as fast as the AI and consider all the arguments;\n 3. 
We knew ourselves perfectly and had better self-control or self-modification ability;\n\n...*to whatever extent* most existing humans, thus extrapolated, would predictably want\\* the same things. (For example, in the limit of extrapolation, nearly all humans might want\\* not to be turned into [paperclips](https://arbital.com/p/10h), but might not agree\\* on the best pizza toppings. See below.)\n\nCEV is meant to be the *literally optimal* or *ideal* or *normative* thing to do with an [autonomous superintelligence](https://arbital.com/p/1g3), *if* you trust your ability to [perfectly align](https://arbital.com/p/41k) a superintelligence on a very complicated target. (See below.)\n\nCEV is rather complicated and meta and hence *not* intended as something you'd do with the first AI you ever tried to build. CEV might be something that everyone inside a project agreed was an acceptable mutual target for their *second* AI. (The first AI should probably be a [Task AGI](https://arbital.com/p/6w).)\n\nFor the corresponding metaethical theory see [https://arbital.com/p/313](https://arbital.com/p/313).\n\n\n\n# Concept\n\n%%knows-requisite([https://arbital.com/p/313](https://arbital.com/p/313)):\n\nSee \"[https://arbital.com/p/313](https://arbital.com/p/313)\".\n\n%%\n\n%%!knows-requisite([https://arbital.com/p/313](https://arbital.com/p/313)):\n\n[Extrapolated volition](https://arbital.com/p/313) is the metaethical theory that when we ask \"What is right?\", then insofar as we're asking something meaningful, we're asking \"What would a counterfactual idealized version of myself want\\* if it knew all the facts, had considered all the arguments, and had perfect self-knowledge and self-control?\" (As a [metaethical theory](https://arbital.com/p/313), this would make \"What is right?\" a mixed logical and empirical question, a function over possible states of the world.)\n\nA very simple example of extrapolated volition might be to consider somebody who asks you to bring them orange juice from the refrigerator. You open the refrigerator and see no orange juice, but there's lemonade. You imagine that your friend would want you to bring them lemonade if they knew everything you knew about the refrigerator, so you bring them lemonade instead. On an abstract level, we can say that you \"extrapolated\" your friend's \"volition\", in other words, you took your model of their mind and decision process, or your model of their \"volition\", and you imagined a counterfactual version of their mind that had better information about the contents of your refrigerator, thereby \"extrapolating\" this volition.\n\nHaving better information isn't the only way that a decision process can be extrapolated; we can also, for example, imagine that a mind has more time in which to consider moral arguments, or better knowledge of itself. Maybe you currently want revenge on the Capulet family, but if somebody had a chance to sit down with you and have a long talk about how revenge affects civilizations in the long run, you could be talked out of that. 
Maybe you're currently convinced that you advocate for green shoes to be outlawed out of the goodness of your heart, but if you could actually see a printout of all of your own emotions at work, you'd see there was a lot of bitterness directed at people who wear green shoes, and this would change your mind about your decision.\n\nIn Yudkowsky's version of extrapolated volition considered on an individual level, the three core directions of extrapolation are:\n\n- Increased knowledge - having more veridical knowledge of declarative facts and expected outcomes.\n- Increased consideration of arguments - being able to consider more possible arguments and assess their validity.\n- Increased reflectivity - greater knowledge about the self, and to some degree, greater self-control (though this raises further questions about which parts of the self normatively get to control which other parts).\n\n%%\n\n# Motivation\n\nDifferent people initially react differently to the question \"Where should we point a superintelligence?\" or \"What should an aligned superintelligence do?\" - not just different beliefs about what's good, but different frames of mind about how to ask the question.\n\nSome common reactions:\n\n1. \"Different people want different things! There's no way you can give everyone what they want. Even if you pick some way of combining things that people want, *you'll* be the one saying how to combine it. Someone else might think they should just get the whole world for themselves. Therefore, in the end *you're* deciding what the AI will do, and any claim to some sort of higher justice or normativity is nothing but sophistry.\"\n2. \"What we should do with an AI is obvious - it should optimize liberal democratic values. That already takes into account everyone's interests in a fair way. The real threat is if bad people get their hands on an AGI and build an AGI that doesn't optimize liberal democratic values.\"\n3. \"Imagine the ancient Greeks telling a superintelligence what to do. They'd have told it to optimize for glorious deaths in battle. Programming any other set of inflexible goals into a superintelligence seems equally stupid; it has to be able to change and grow.\"\n4. \"What if we tell the superintelligence what to do and it's the wrong thing? What if we're basically confused about what's right? Shouldn't we let the superintelligence figure that out on its own, with its assumed superior intelligence?\"\n\nAn initial response to each of these frames might be:\n\n1. \"Okay, but suppose you're building a superintelligence and you're trying *not to be a jerk about it.* If you say, 'Whatever I do originates in myself, and therefore is equally selfish, so I might as well declare myself God-Emperor of the Universe' then you're being a jerk. Is there anything you could do instead which would be less like being a jerk? What's the *least* jerky thing you could do?\"\n2. \"What if you would, after some further discussion, want to tweak your definition of 'liberal democratic values' just a little? What if it's *predictable* that you would do that? Would you really want to be stuck with your off-the-cuff definition a million years later?\"\n3. \"Okay, so what should the Ancient Greeks have done if they did have to program an AI? How could they not have doomed future generations? Suppose the Ancient Greeks are clever enough to have noticed that sometimes people change their minds about things and to realize that they might not be right about everything. 
How can they use the cleverness of the AGI in a constructively specified, computable fashion that gets them out of this hole? You can't just tell the AGI to compute what's 'right', you need to put an actual computable question in there, not a word.\"\n4. \"You asked, what if we're basically confused about what's right - well, in that case, what does the word 'right' even mean? If you don't know what's right, and you don't know how to compute what's right, then what are we even talking about? Do you have any ground on which to say that an AGI which only asks 'Which outcome leads to the greatest number of paperclips?' isn't computing rightness? If you don't think a paperclip maximizer is computing rightness, then you must know something about the rightness-question which excludes that possibility - so let's talk about how to program that rightness-question into an AGI.\"\n\nCEV's advocates claim that all of these lines of discussion eventually end up converging on the idea of coherent extrapolated volition. For example:\n\n1. Asking what everyone would want\\* if they knew what the AI knew, and doing what they'd all predictably agree on, is just about the least jerky thing you can do. If you tell the AI to give everyone a volcano lair because you think volcano lairs are neat, you're not being selfish, but you're being a jerk to everyone who doesn't want a volcano lair. If you have the AI just do what people actually say, they'll end up hurting themselves with dumb wishes and you'd be a jerk. If you only extrapolate your friends and have the AI do what only you'd want, you're being jerks to everyone else.\n2. Yes, liberal democratic values are good; so is apple pie. Apple pie is *a* good thing but it's not the *only* good thing. William Frankena's list of ends-in-themselves included \"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment\" and then 25 more items, and the list certainly isn't complete. The only way you're going to get a complete list is by analyzing human minds; and even then, if our descendants would predictably want something else a million years later, we ought to take that into account too.\n3. Every improvement is a change, but not every change is an improvement. Just letting a superintelligence change at random doesn't encapsulate moral progress. Saying that change toward more liberal democratic values is progress, presumes that we already know the destination or answer. We can't even just ask the AGI to *predict* what civilizations would think a thousand years later, since (a) the AI itself impacts this and (b) if the AI did nothing, maybe in a thousand years everyone would have accidentally blissed themselves out while trying to modify their own brains. If we want to do better than the hypothetical ancient Greeks, we need to define *a sufficiently abstract and meta criterion* that describes *valid* directions of progress - such as changes in moral beliefs associated with learning new facts, for example; or moral change that would predictably occur if we considered a larger set of arguments; or moral change that would predictably occur if we understood ourselves better.\n4. This one is a long story: Metaethics deals with the question of what sort of entity 'rightness' is exactly - tries to reconcile this strange ineffable 'rightness' business with a universe made out of particle fields. 
Even though it seems like human beings wanting to murder people wouldn't *make* murder right, there's also nowhere in the stars or mountains where we can actually find it written that murder is wrong. At the end of a rather long discussion, we decide that for any given person speaking at a given point in time, 'rightness' is a logical constant which, although not counterfactually dependent on the state of the person's brain, must be analytically identified with the extrapolated volition of that brain; and we show that (only) this stance gives consistent answers to all the standard questions in metaethics. (This discussion takes a while, on the order of explaining how deterministic laws of physics don't show that you have unfree will.)\n\n(To do: Write dialogues from each of these four entrance points.) \n\n# Situating CEV in contemporary metaethics\n\nSee the corresponding section in \"[https://arbital.com/p/313](https://arbital.com/p/313)\".\n\n# Scary design challenges\n\nThere are several reasons why CEV is *way* too challenging to be a good target for any project's first try at building machine intelligence:\n\n1. A CEV agent would be intended to carry out an [autonomous](https://arbital.com/p/1g3) open-ended mission. This implies all the usual reasons we expect an autonomous AI to be harder to make safe than a [Task AGI](https://arbital.com/p/6w).\n2. CEV is a weird goal. It involves recursion.\n3. Even the terms in CEV, like \"know more\" or \"extrapolate a human\", seem complicated and [value-laden](https://arbital.com/p/36h). You might have to build a high-level [Do What I Know I Mean agent](https://arbital.com/p/2s1), and then tell it to do CEV. [Do What I Know I Mean](https://arbital.com/p/2s1) is complicated enough that you'd need to build an AI that can learn DWIKIM, so that DWIKIM can be taught rather than formally specified. So we're looking at something like CEV, running on top of DWIKIM, running on top of a goal-learning system, at least until the first time the CEV agent rewrites itself.\n\nDoing this correctly the *very first time* we build a smarter-than-human intelligence seems *improbable.* The only way this would make a good first target is if the CEV concept is formally simpler than it currently seems, *and* timelines to AGI are unusually long and permit a great deal of advance work on safety.\n\nIf AGI is 20 years out (or less), it seems wiser to think in terms of a [Task AGI](https://arbital.com/p/6w) performing some relatively simple [pivotal act](https://arbital.com/p/6y). The role of CEV is of answering the question, \"What can you all agree in advance that you'll try to do *next,* after you've executed your Task AGI and gotten out from under the shadow of immediate doom?\"\n\n# What if CEV fails to cohere?\n\nA frequently asked question is \"What if extrapolating human volitions produces incoherent answers?\"\n\nAccording to the original motivation for CEV, if this happens in *some* places, a Friendly AI ought to ignore those places. If it happens *everywhere,* you probably picked a silly way to construe an extrapolated volition and you ought to rethink it. %note: Albeit in practice, you would not want an AI project to take a dozen tries at defining CEV. This would indicate something extremely wrong about the method being used to generate suggested answers. 
Whatever final attempt passed would probably be the first answer [all of whose remaining flaws were hidden](https://arbital.com/p/blackzoning), rather than an answer with all flaws eliminated.%\n\nThat is:\n\n- If your CEV algorithm finds that \"People coherently want to not be eaten by [paperclip maximizers](https://arbital.com/p/10h), but end up with a broad spectrum of individual and collective possibilities for which pizza toppings they prefer\", we would normatively want a Friendly AI to prevent people from being eaten by paperclip maximizers but not mess around with which pizza toppings people end up eating in the Future.\n- If your CEV algorithm claims that there's no coherent sense in which \"A lot of people would want to not be eaten by Clippy and would still want\\* this even if they knew more stuff\" then this is a suspicious and unexpected result. Perhaps you have picked a silly way to construe somebody's volition.\n\nThe original motivation for CEV can also be viewed from the perspective of \"What is it to help someone?\" and \"How can one help a large group of people?\", where the intent behind the question is to build an AI that renders 'help' as we really [intend](https://arbital.com/p/6h) that. The elements of CEV can be seen as caveats to the naive notion of \"Help is giving people whatever they ask you for!\" in which somebody asks you to bring them orange juice but the orange juice in the refrigerator is poisonous (and they're not *trying* to poison themselves).\n\nWhat about helping a group of people? If two people ask for juice and you can only bring one kind of juice, you should bring a non-poisonous kind of juice they'd both like, to the extent any such juice exists. If no such juice exists, find a kind of juice that one of them is meh about and that the other one likes, and flip a coin or something to decide who wins. You are then being around as helpful as it is possible to be.\n\nCan there be *no way* to help a large group of people? This seems implausible. You could at least give the starving ones pizza with a kind of pizza topping they currently like. To the extent your philosophy claims \"Oh noes even *that* is not helping because it's not perfectly coherent,\" you have picked the wrong construal of 'helping'.\n\nIt could be that, if we find that every reasonable-sounding construal of extrapolated volition fails to cohere, we must arrive at some entirely other notion of 'helping'. But then this new form of helping also shouldn't involve bringing people poisonous orange juice that they don't know is poisoned, because that still intuitively seems unhelpful.\n\n## Helping people with incoherent preferences\n\nWhat if somebody believes themselves to [prefer onions to pineapple on their pizza, prefer pineapple to mushrooms, and prefer mushrooms to onions](https://arbital.com/p/7hh)? In the sense that, offered any two slices from this set, they would pick according to the given ordering?\n\n(This isn't an unrealistic example. Numerous experiments in behavioral economics demonstrate exactly this sort of circular preference. For instance, you can arrange 3 items such that each pair of them brings a different salient quality into focus for comparison.)\n\nOne may worry that we couldn't 'coherently extrapolate the volition' of somebody with these pizza preferences, since these local choices obviously aren't consistent with any coherent utility function. 
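To see the inconsistency concretely, here is a minimal illustrative sketch (the `choices` list and the `find_preference_cycle` helper are hypothetical stand-ins, not anything from the CEV literature): the three local picks above form a cycle, so no assignment of numerical utilities can rate every picked topping strictly above the one it was picked over.

```python
# Illustrative only: circular pairwise choices can't be represented
# by any single utility function.

# Hypothetical observed pairwise choices, written as (picked, rejected).
choices = [('onions', 'pineapple'),
           ('pineapple', 'mushrooms'),
           ('mushrooms', 'onions')]

def find_preference_cycle(choices):
    # Build the 'picked over' graph and search it depth-first;
    # any cycle rules out a consistent utility assignment.
    graph = {}
    for winner, loser in choices:
        graph.setdefault(winner, []).append(loser)

    def visit(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = visit(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return None

print(find_preference_cycle(choices))
# -> ['onions', 'pineapple', 'mushrooms', 'onions']
```

Any utility function would need u(onions) > u(pineapple) > u(mushrooms) > u(onions), which is impossible.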
But how could we *help* somebody with a pizza preference like this?\n\nWell, appealing to the intuitive notion of *helping:*\n\n- We could give them whatever kind of pizza they'd pick if they had to pick among all three simultaneously.\n- We could figure out how happy they'd be eating each type of pizza, in terms of emotional intensity as measured in neurotransmitters; and offer them the slice of pizza that they'll most enjoy.\n- We could let them pick their own damn pizza toppings and concern ourselves mainly with making sure the pizza isn't poisonous, since the person definitely prefers non-poisoned pizza.\n- We could, given sufficient brainpower on our end, figure out what this person would ask us to do for them in this case after that person had learned about the concept of a preference reversal and been told about their own circular preferences. If this varies wildly depending on exactly how we explain the concept of a preference reversal, we could refer back to one of the previous three answers instead.\n\nConversely, these alternatives seem *less* helpful:\n\n- Refuse to have anything to do with that person since their current preferences don't form a coherent utility function.\n- Emit \"ERROR ERROR\" sounds like a Hollywood AI that's just found out about the Epimenides Paradox.\n- Give them pizza with your own favorite topping, green peppers, even though they'd prefer any of the 3 other toppings to those.\n- Give them pizza with the topping that would taste best to them, pepperoni, despite their being vegetarians.\n\nAdvocates of CEV claim that if you blank the complexities of 'extrapolated volition' out of your mind; and ask how you could reasonably help people as well as possible if you were trying not to be a jerk; and then try to figure out how to semiformalize whatever mental procedure you just followed to arrive at your answer for how to help people; then you will eventually end up at CEV again.\n\n# Role of meta-ideals in promoting early agreement\n\nA primary purpose of CEV is to represent a relatively simple meta-level ideal that people can agree upon, even where they might disagree on the object level. By a hopefully analogous example, two honest scientists might disagree on the correct mass of an electron, but agree that the experimental method is a good way to resolve the answer.\n\nImagine Millikan believes an electron's mass is 9.1e-28 grams, and Nannikan believes the correct electron mass is 9.1e-34 grams. Millikan might be very worried about Nannikan's proposal to program an AI to believe the electron mass is 9.1e-34 grams; Nannikan doesn't like Millikan's proposal to program in 9.1e-28; and both of them would be unhappy with a compromise mass of 9.1e-31 grams. They might still agree on programming an AI with some analogue of probability theory and a simplicity prior, and letting a superintelligence come to the conclusions implied by Bayes and Occam, because the two can agree on an effectively computable question even though they think the question has different answers. Of course, this is easier to agree on when the AI hasn't yet produced an answer, or if the AI doesn't tell you the answer.\n\nIt's not guaranteed that every human embodies the same implicit moral questions - indeed this seems unlikely - which means that Alice and Bob might still expect their extrapolated volitions to disagree about things.
Even so, while the outputs are still abstract and not-yet-computed, Alice doesn't have much of a place to stand on which to appeal to Carol, Dennis, and Evelyn by saying, \"But as a matter of morality and justice, you should have the AI implement *my* extrapolated volition, not Bob's!\" To appeal to Carol, Dennis, and Evelyn about this, you'd need them to believe that Alice's EV was more likely to agree with their EVs than Bob's was - and at that point, why not come together on the obvious Schelling point of extrapolating *everyone's* EVs?\n\nThus, one of the primary purposes of CEV (selling points, design goals) is that it's something that Alice, Bob, and Carol can agree *now* that Dennis and Evelyn should do with an AI that will be developed *later;* we can try to set up commitment mechanisms now, or check-and-balance mechanisms now, to ensure that Dennis and Evelyn are still working on CEV later.\n\n## Role of 'coherence' in reducing expected unresolvable disagreements\n\nA CEV is not necessarily a majority vote. A lot of people with an extrapolated weak preference\\* might be counterbalanced by a few people with a strong extrapolated preference\\* in the opposite direction. Nick Bostrom's \"[parliamentary model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html)\" for resolving uncertainty between incommensurable ethical theories, permits a subtheory very concerned about a decision to spend a large amount of its limited influence on influencing that particular decision.\n\nThis means that, e.g., a vegan or animal-rights activist should not need to expect that they must seize control of a CEV algorithm in order for the result of CEV to protect animals. It doesn't seem like most of humanity would be deriving huge amounts of utility from hurting animals in a post-superintelligence scenario, so even a small part of the population that strongly opposes\\* this scenario should be decisive in preventing it.\n\n# Moral hazard vs. debugging\n\nOne of the points of the CEV proposal is to have minimal [moral hazard](https://arbital.com/p/2sb) (aka, not tempting the programmers to take over the world or the future); but this may be compromised if CEV's results don't go literally unchecked.\n\nPart of the purpose of CEV is to stand as an answer to the question, \"If the ancient Greeks had been the ones to invent superintelligence, what could they have done that would not, from our later perspective, irretrievably warp the future? If the ancient Greeks had programmed in their own values directly, they would have programmed in a glorious death in combat. 
Now let us consider that perhaps we too are not so wise.\" We can imagine the ancient Greeks writing a CEV mechanism, peeking at the result of this CEV mechanism before implementing it, and being horrified by the lack of glorious-deaths-in-combat in the future and value system thus revealed.\n\nWe can also imagine that the Greeks, trying to cut down on [moral hazard](https://arbital.com/p/2sb), virtuously refuse to peek at the output; but it turns out that their attempt to implement CEV has some unforeseen behavior when actually run by a superintelligence, and so their world is turned into paperclips.\n\nThis is a safety-vs.-moral-hazard tradeoff between (a) the benefit of being able to look at CEV outputs in order to better-train the system or just verify that nothing went horribly wrong; and (b) the moral hazard that comes from the temptation to override the output, thus defeating the point of having a CEV mechanism in the first place.\n\nThere's also a potential safety hazard just with looking at the internals of a CEV algorithm; the simulated future could contain all sorts of directly mind-hacking cognitive hazards.\n\nRather than giving up entirely and embracing maximum moral hazard, one possible approach to this issue might be to have some single human that is supposed to peek at the output and provide a 1 or 0 (proceed or stop) judgment to the mechanism, without any other information flow being allowed to the programmers if the human outputs 0. (For example, the volunteer might be in a room with explosives that go off if 0 is output.)\n\n# \"Selfish bastards\" problem\n\nSuppose that Fred is funding Grace to work on a CEV-based superintelligence; and Evelyn has decided not to oppose this project. The resulting CEV is meant to extrapolate the volitions of Alice, Bob, Carol, Dennis, Evelyn, Fred, and Grace with equal weight. (If you're reading this, you're more than usually likely to be one of Evelyn, Fred, or Grace.)\n\nEvelyn and Fred and Grace might worry: \"What if a supermajority of humanity consists of 'selfish\\* bastards', such that their extrapolated volitions would cheerfully vote\\* for a world in which it was legal to own artificial sapient beings as slaves so long as they personally happened to be in the slaveowning class; and we, Evelyn and Fred and Grace, just happen to be in the minority that extremely doesn't want nor want\\* the future to be like that?\"\n\nThat is: What if humanity's extrapolated volitions diverge in such a way that from the standpoint of *our* volitions - since, if you're reading this, you're unusually likely to be one of Evelyn or Fred or Grace - 90% of extrapolated humanity would choose\\* something such that *we* would not approve of it, and our volitions would not approve\\* of it, *even after* taking into account that we don't want to be jerks about it and that we don't think we were born with any unusual or exceptional right to determine the fate of humanity.\n\nThat is, let the scenario be as follows:\n\n> 90% of the people (but not we who are collectively sponsoring the AI) are selfish bastards at the core, such that *any reasonable* extrapolation process (it's not just that we picked a broken one) would lead to them endorsing a world in which they themselves had rights, but it was okay to create artificial people and hurt them. 
Furthermore, they would derive enough utility from being personal God-Emperors that this would override our minority objection even in a parliamentary model.\n\nWe can see this hypothetical outcome as potentially undermining every sort of reason that we, who happen to be in a position of control to prevent that outcome, should voluntarily relinquish that control to the remaining 90% of humanity:\n\n- We can't be prioritizing being fair to everyone including the other 90% of humanity, because what about being fair to the artificial people who are being hurt?\n- We can't be worrying that the other 90% of humanity would withdraw their support from the project, or worrying about betraying the project's supporters, because by hypothesis they weren't supporting it or even permitting it.\n- We can't be agreeing to defer to a righter and more intelligent process to resolve our dispute, because by hypothesis the CEV made up of 90% selfish\\* bastards is not, from our own perspective, ideally righter.\n- We can't rely on a parliamentary model of coherence to prevent what a minority sees as disaster, because by hypothesis the other 90% is deriving enough utility from collectively declaring themselves God-Emperors to trump even a strong minority countervote.\n\nRather than giving up entirely and taking over the world, *or* exposing ourselves to moral hazard by peeking at the results, one possible approach to this issue might be to run a three-stage process.\n\nThis process involves some internal references, so the detailed explanation needs to follow a shorter summary explanation.\n\nIn summary:\n\n- Extrapolate everyone's CEV.\n- Extrapolate the CEV of the contributors only, and let it give (only) an up-down vote on Everyone's CEV.\n- If the result is thumbs-up, run Everyone's CEV.\n- Otherwise, extrapolate everyone's CEV, but kicking out all the parts that would act unilaterally and without any concern for others if they were in positions of unchecked power.\n- Have the Contributor CEV give an up/down answer on the Fallback CEV.\n- If the result is thumbs-up, run the Fallback CEV.\n- Otherwise fail.\n\nIn detail:\n\n- First, extrapolate the everyone-on-Earth CEV as though it were not being checked.\n - If any hypothetical extrapolated person worries about being checked, delete that concern and extrapolate them as though they didn't have it. This is necessary to prevent the check itself from having a [UDT](https://arbital.com/p/updateless) influence on the extrapolation and the actual future.\n- Next, extrapolate the CEV of everyone who contributed to the project, weighted by their contribution (possibly based on some mix of \"how much was actually done\" versus \"how much was rationally expected to be accomplished\" versus \"the fraction of what could've been done versus what was actually done\"). 
Allow this other extrapolation an up-or-down vote - *not* any kind of detailed correction - on whether to let the everyone-on-Earth CEV go through unmodified.\n - Remove from the extrapolation of the Contributor-CEV any strategic considerations having to do with the Fallback-CEV or post-Fail redevelopment being a *better* alternative; we want to extract a judgment about \"satisficing\" in some sense: whether the Everyone-CEV is in some non-relative sense too horrible to be allowed.\n- If the Everyone-CEV passes the Contributor-CEV check, run it.\n- Otherwise, re-extrapolate a Fallback-CEV that starts with all existing humans as a base, but *discards* from the extrapolation all extrapolated decision processes that, if they were in a superior strategic position or a position of unilateral power, would *not* bother to extrapolate others' volitions or care about their welfare.\n - Again, remove all extrapolated *strategic* considerations about passing the coming check.\n- Check the Fallback-CEV against the Contributor-CEV for an up-down vote. If it passes, run it.\n- Otherwise Fail (AI shuts down safely, we rethink what to do next or implement an agreed-on fallback course past this point).\n\nThe particular fallback of \"kick out from the extrapolation any weighted portions of extrapolated decision processes that would act unilaterally and without caring for others, given unchecked power\" is meant to have a property of poetic justice, or rendering objections to it self-defeating: If it's okay to act unilaterally, then why can't we unilaterally kick out the unilateral parts? This is meant to be the 'simplest' or most 'elegant' way of kicking out a part of the CEV whose internal reasoning directly opposes the whole reason we ran CEV in the first place, but imposing the minimum possible filter beyond that.\n\nThus if Alice (who by hypothesis is not in any way a contributor) says, \"But I demand you altruistically include the extrapolation of me that would unilaterally act against you if it had power!\" then we reply, \"We'll try that, but if it turns out to be a sufficiently bad idea, there are no coherent interpersonal grounds on which you can rebuke us for taking the fallback option instead.\"\n\nSimilarly, with regard to the Fail option at the end, to anyone who says, \"Fairness demands that you run Fallback CEV even if you wouldn't like\\* it!\" we can reply, \"Our own power may not be used against us; if we'd regret ever having built the thing, fairness doesn't oblige us to run it.\"\n\n# Why base CEV on \"existing humans\" and not some other class of extrapolees?\n\nOne frequently asked question about the implementation details of CEV is either:\n\n- Why formulate CEV such that it is run on \"all existing humans\" and not \"all existing and past humans\" or \"all mammals\" or \"all sapient life as it probably exists everywhere in the measure-weighted infinite multiverse\"?\n- Why not restrict the extrapolation base to \"only people who contributed to the AI project\"?\n\nIn particular, it's been asked why restrictive answers to Question 1 don't [also imply](https://arbital.com/p/3tc) the more restrictive answer to Question 2.\n\n## Why not include mammals?\n\nWe'll start by considering some replies to the question, \"Why not include all mammals into CEV's extrapolation base?\"\n\n- Because you could be wrong about mammals being objects of significant ethical value, such that we *should* on an object level respect their welfare.
The extrapolation process will catch the error if you'd predictably change your mind about that. Including mammals into the extrapolation base for CEV potentially sets in stone what could well be an error, the sort of thing we'd predictably change our minds about later. If you're normatively *right* that we should all care about mammals and even try to extrapolate their volitions into a judgment of Earth's destiny, if that's what almost all of us would predictably decide after thinking about it for a while, then that's what our EVs will decide\\* to do on our behalf; and if they don't decide\\* to do that, it wasn't right which undermines your argument for doing it unconditionally.\n- Because even if we ought to care about mammals' welfare qua welfare, extrapolated animals might have really damn weird preferences that you'd regret including into the CEV. (E.g., after human volitions are outvoted by the volitions of other animals, the current base of existing animals' extrapolated volitions choose\\* a world in which they are uplifted to God-Emperors and rule over suffering other animals.)\n- Because maybe not everyone on Earth cares\\* about animals even if your EV would in fact care\\* about them, and to avoid a slap-fight over who gets to rule the world, we're going to settle this by e.g. a parliamentary-style model in which you get to expend your share of Earth's destiny-determination on protecting animals.\n\nTo expand on this last consideration, we can reply: \"Even if you would regard it as more just to have the *right* animal-protecting outcome baked into the future immediately, so that your EV didn't need to expend some of its voting strength on assuring it, not everyone else might regard that as just. From our perspective as programmers we have no particular reason to listen to you rather than Alice. We're not arguing about whether animals will be protected if a minority vegan-type subpopulation strongly want\\* that and the rest of humanity doesn't care\\*. We're arguing about whether, if *you* want\\* that but a majority doesn't, your EV should justly need to expend some negotiating strength in order to make sure animals are protected. This seems pretty reasonable to us as programmers from our standpoint of wanting to be fair, not be jerks, and not start any slap-fights over world domination.\"\n\nThis third reply is particularly important because taken in isolation, the first two replies of \"You could be wrong about that being a good idea\" and \"Even if you care about their welfare, maybe you wouldn't like their EVs\" could [equally apply to argue](https://arbital.com/p/3tc) that contributors to the CEV project ought to extrapolate only their own volitions and not the rest of humanity:\n\n- We could be wrong about it being a good idea, by our own lights, to extrapolate the volitions of everyone else; including this into the CEV project bakes this consideration into stone; if we were right about running an Everyone CEV, if we would predictably arrive at that conclusion after thinking about it for a while, our EVs could do that for us.\n- Not extrapolating other people's volitions isn't the same as saying we shouldn't care. We could be right to care about the welfare of others, but there could be some spectacular horror built into their EVs.\n\nThe proposed way of addressing this was to run a composite CEV with a contributor-CEV check and a Fallback-CEV fallback. 
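Purely as a structural illustration of that composite proposal (a sketch under the assumption that `extrapolate`, `approves`, and `drop_unilateral` are hypothetical stand-in callables; nothing like them is actually specifiable today), the intended order of checks might be summarized as:

```python
# Hypothetical sketch of the composite check's control flow only.
# None of the callables below correspond to real, known algorithms.

def composite_cev(everyone, contributors, extrapolate, approves,
                  drop_unilateral):
    # Stage 1: extrapolate the everyone-on-Earth CEV as though unchecked.
    everyone_cev = extrapolate(everyone)

    # Stage 2: the contributor CEV gets a single up-or-down vote on it.
    contributor_cev = extrapolate(contributors)
    if approves(contributor_cev, everyone_cev):
        return ('run', everyone_cev)

    # Stage 3: fallback CEV over the same base, minus extrapolated
    # decision processes that would act unilaterally given unchecked power.
    fallback_cev = extrapolate(drop_unilateral(everyone))
    if approves(contributor_cev, fallback_cev):
        return ('run', fallback_cev)

    # Otherwise: fail safely, shut down, and rethink.
    return ('fail', None)
```

Here `drop_unilateral` stands in for the 'kick out the parts that would act unilaterally' filter and `approves` for the contributors' up-or-down vote, with strategic reasoning about the check itself stripped out of the extrapolations as described above; the sketch says nothing about how any of these judgments would actually be computed.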
But then why not run an Animal-CEV with a Contributor-CEV check before trying the Everyone-CEV?\n\nOne answer would go back to the third reply above: Nonhuman mammals aren't sponsoring the CEV project, allowing it to pass, or potentially getting angry at people who want to take over the world with no seeming concern for fairness. So they aren't part of the Schelling Point for \"everyone gets an extrapolated vote\".\n\n## Why not extrapolate all sapients?\n\nSimilarly if we ask: \"Why not include all sapient beings that the SI suspects to exist everywhere in the measure-weighted multiverse?\"\n\n- Because large numbers of them might have EVs as alien as the EV of an Ichneumonidae wasp.\n- Because our EVs can always do that if it's actually a good idea.\n- Because they aren't here to protest and withdraw political support if we don't bake them into the extrapolation base immediately.\n\n## Why not extrapolate deceased humans?\n\n\"Why not include all deceased human beings as well as all currently living humans?\"\n\nIn this case, we can't then reply that they didn't contribute to the human project (e.g. I. J. Good). Their EVs are also less likely to be alien than in any other case considered above.\n\nBut again, we fall back on the third reply: \"The people who are still alive\" is a simple Schelling circle to draw that includes everyone in the current political process. To the extent it would be nice or fair to extrapolate Leo Szilard and include him, we can do that if a supermajority of EVs decide\\* that this would be nice or just. To the extent we *don't* bake this decision into the model, Leo Szilard won't rise from the grave and rebuke us. This seems like reason enough to regard \"The people who are still alive\" as a simple and obvious extrapolation base.\n\n## Why include people who are powerless?\n\n\"Why include very young children, uncontacted tribes who've never heard about AI, and retrievable cryonics patients (if any)? They can't, in their current state, vote for or against anything.\"\n\n- A lot of the intuitive motivation for CEV is to not be a jerk, and ignoring the wishes of powerless living people seems intuitively a lot more jerkish than ignoring the wishes of powerless dead people.\n- They'll actually be present in the future, so it seems like less of a jerk thing to do to extrapolate them and take their wishes into account in shaping that future, than to not extrapolate them.\n- Their relatives might take offense otherwise.\n- It keeps the Schelling boundary simple.", "date_published": "2019-07-26T16:54:02Z", "authors": ["Eric Bruylant", "Rob Bensinger", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Do-What-I-Mean hierarchy", "Work in progress", "B-Class"], "alias": "3c5"} {"id": "6aa6cce94987453f04b0ce925ed2ee3b", "title": "VAT playpen", "url": "https://arbital.com/p/3ck", "source": "arbital", "source_type": "text", "text": "007! 
", "date_published": "2016-05-02T20:00:08Z", "authors": ["Alexei Andreev"], "summaries": [], "tags": [], "alias": "3ck"} {"id": "6b59a8c0fbf0b0ac50340d80a7200b18", "title": "Cognitive steganography", "url": "https://arbital.com/p/cognitive_steganography", "source": "arbital", "source_type": "text", "text": "Cognitive steganography is an [instrumental strategy](https://arbital.com/p/10g) for AGIs that are already trying to [deceive their programmers](https://arbital.com/p/10f) for any reason (e.g., because the AGI is disaligned and does not want the programmers to realize this and edit the AGI's utility function). Whatever tools the programmers have for monitoring the AGI's thoughts, the AGI will not want those tools to helpfully print out all the AGI's plans for deception. This could mean, e.g., trying to have the calculations occur somewhere else or in a weird side-mode that the tools can't read; or making the thoughts look like they're about something else; or burying any warning signs under [an avalanche of false alarms too long for the programmers to manually check all of them](https://arbital.com/p/2qq); or making the visible portions of the thoughts look innocuous. (\"Oh, it's trying to figure out what behavior we want to see! That's good, right?\")\n\nHopefully, the intention to figure out and implement cognitive steganography would occur before any cognitive steganography had actually been implemented.
This means that perhaps the most critical AI thought-monitoring tool, especially for monitoring an AGI that is or might be able to model human psychology, is the one that tries to detect a forming intention to defeat the thought-monitoring tool. Similarly, so long as the AI's state is perfectly [https://arbital.com/p/reproducible](https://arbital.com/p/reproducible), one could in principle go back and use more sophisticated later thought-monitoring tools to try to detect whether any intention toward cognitive steganography had occurred months earlier.", "date_published": "2016-04-28T21:15:34Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3cq"} {"id": "80aad32cc4c181c843f7b10829273e8b", "title": "Arbital", "url": "https://arbital.com/p/Arbital", "source": "arbital", "source_type": "text", "text": "Arbital is a platform for finding, reading, and creating crowdsourced, intuitive explanations for everything math. Several existing websites ([Wikipedia](https://www.wikipedia.org/), [MathOverflow](http://mathoverflow.net/), and [Wolfram MathWorld](http://mathworld.wolfram.com/) among others) already work well as references, but it's hard to learn from them unless you are already familiar with the subject. Other websites, like [Khan Academy](https://www.khanacademy.org/math) and [Better Explained](http://betterexplained.com/), have some great explanations (at least for K-12 material) but they are limited by the fact that they are closed platforms. Arbital is fundamentally a collaborative platform, which allows everyone to contribute explanations, add examples, and share their expertise.\n\n## Ideal explanation platform\n\nWhat would the ideal platform for (math) explanations look like? For example, if you wanted to learn Bayes' theorem, what would you want to see behind [this link](https://arbital.com/p/1zq)?\n\nThe ideal platform would use each reader's existing knowledge, expertise, and learning preferences to generate a tailored explanation. If they were comfortable with mathematical notation, the ideal platform would show them equations instead of wordy explanations. If they were missing certain prerequisites, the ideal platform would make it easy to catch up on those. If they already knew half the explanation, the ideal platform would skip past it. If one explanation didn't work for them, the ideal platform would offer another.\n\nThe ideal platform would be interactive, as if you were studying with a personal tutor. Asking for a reminder about what a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) is would be as easy as hovering your mouse over a link. It would be easy to ask questions and instantly get detailed answers. It would be easy to speed up an explanation or slow it down.\n\nThe ideal platform would have intuitive, memorable, mind-blowing explanations that reliably produce that magical \"click\" feeling. The one you get when something finally makes perfect sense. \n%%note: \nA few examples: \n[What if? 
- Relativistic baseball](http://what-if.xkcd.com/1/) \n[Wait But Why - Fermi paradox](http://waitbutwhy.com/2014/05/fermi-paradox.html) \n[Better Explained - Intuitive trigonometry](http://betterexplained.com/articles/intuitive-trigonometry/) \n[Scott Aaronson - Who can name the bigger number?](http://www.scottaaronson.com/writings/bignumbers.html) \nAnd videos too: \n[Vi Hart - Spirals and Fibonacci numbers](https://www.youtube.com/watch?v=ahXIMUkSXX0) \n[US auto industry - How differential steering works](https://www.youtube.com/watch?v=yYAw79386WI) \n%%\nIt would be easy to give feedback to authors to help them incrementally improve each page to make that happen.\n\nThat ideal platform doesn't exist yet, but we're building it, and it's called [Arbital](https://arbital.com/p/58p). What you see before you is a beta version, and we're going to continue working hard to improve it and make our vision a reality.\n\n## Enthusiastic community and effective tools\n\nOur top priority is to grow our community of like-minded people who love helping others learn. We are avid learners ourselves, and some of the most interesting things we've learned are the ones we've taught ourselves outside the context of classrooms and courses. We like explaining things and sharing our knowledge with others. If you look at the success of Wikipedia and MathOverflow, it's clear that communities with a shared mission can produce amazing, useful, and lasting resources that benefit the whole world.\n\nWe aim to amplify each author's effort by building a new set of tools tailored specifically for reading and collaboratively creating online explanations. We've already implemented a [number of useful features](https://arbital.com/p/14q). We also aim to build an ecosystem of modular pages that will make it easy for authors to reuse existing content and create explanations that can build on each other. We want to empower our authors to write in a way that works best for them, while preserving overall quality. \n\n## Future of Arbital\n\nCurrently, we are focused on math explanations. Eventually, we plan to move beyond math to computer science, physics, statistics, economics, health, e-sports, and everything else. One day, you'll be able to use Arbital to get to the very frontier of human knowledge. Then, we'll extend the platform into one that can foster discussions and help all humanity push that frontier forward.\n\nIf you want to see that day come sooner, [you can help](https://arbital.com/p/4d6)! You can contribute individual pages, incrementally improve existing pages by proposing edits, and craft entire explanation [paths](https://arbital.com/p/1rt). You can also read, learn, and provide feedback. Each improvement you make helps the entire platform.\n\nLet's work together to build [the best explanation platform this world has ever seen!](https://arbital.com/p/58p)", "date_published": "2016-08-08T14:07:52Z", "authors": ["Eric Bruylant", "Alexei Andreev", "Eric Rogstad", "Anna Salamon", "Nate Soares", "Tom Brown", "mrkun", "Alex Montel", "Eliezer Yudkowsky"], "summaries": ["Arbital is a platform for finding, reading, and creating crowdsourced, intuitive explanations for everything math. It is fundamentally collaborative, which allows everyone to contribute and share their expertise. To see what the platform can do, check out [Arbital's guide to Bayes' rule](https://arbital.com/p/1zq).
To help out with the content, take a look at [https://arbital.com/p/4d6](https://arbital.com/p/4d6)."], "tags": ["B-Class"], "alias": "3d"} {"id": "d4a0ce198712e39ef7cf7cfca026275b", "title": "'Beneficial'", "url": "https://arbital.com/p/beneficial", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/3d9](https://arbital.com/p/3d9) is a [reserved term](https://arbital.com/p/9p) in [AI alignment theory](https://arbital.com/p/2v), a speaker-dependent variable to denote whatever the speaker means by, e.g., \"normative\" or \"really actually good\" or \"the outcomes I want to see resulting\". If the speaker uses [extrapolated volition](https://arbital.com/p/313) as their metaethical theory, then 'beneficial' would mean 'good according to the speaker's extrapolated volition'. Someone who doesn't yet have a developed metaethical theory would be using 'beneficial' to mean \"Whatever it is I ideally ought to want, whatever the heck 'ideally ought to want' means.\" To suppose that an event, agent, or policy was 'beneficial' is to suppose that it was really actually good with no mistakes in evaluating goodness. AI alignment theory sometimes needs a word for this, because when we talk about the difficulty of making an AI *beneficial,* we may want to talk about the difficulty of making it *actually good* and not just *fooling us into thinking that it's good.* See '[https://arbital.com/p/-55](https://arbital.com/p/-55)' for more details.", "date_published": "2016-05-01T18:00:26Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3d9"} {"id": "ced562dce937e54844bda29a59cf588b", "title": "Nick Bostrom's book Superintelligence", "url": "https://arbital.com/p/bostrom_superintelligence", "source": "arbital", "source_type": "text", "text": "Nick Bostrom's \"[Superintelligence: Paths, Dangers, Strategies](amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834)\" is the original only book-form introduction to the basics of [AI alignment theory](https://arbital.com/p/2v).", "date_published": "2022-02-03T14:33:42Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Nick Bostrom", "Stub"], "alias": "3db"} {"id": "432c49b7d606dc57c2166085dcc05d58", "title": "List: value-alignment subjects", "url": "https://arbital.com/p/value_alignment_subject_list", "source": "arbital", "source_type": "text", "text": "### Safety paradigm for advanced agents\n\n- [Advanced Safety](https://arbital.com/p/2l)\n - [Advanced agents](https://arbital.com/p/2c)\n - [Efficient agents](https://arbital.com/p/6s)\n - [https://arbital.com/p/1cv](https://arbital.com/p/1cv)\n - [Omni Test](https://arbital.com/p/2x)\n - [Methodology of foreseeable difficulties](https://arbital.com/p/6r)\n - [Context Change](https://arbital.com/p/6q) problems (\"Treacherous problems\"?)\n - [Methodology of unbounded analysis](https://arbital.com/p/107)\n - Priority of astronomical failures (those that destroy error recovery or are immediately catastrophic)\n\n### Foreseen difficulties\n\n - [Value identification](https://arbital.com/p/6c)\n - [Edge instantiation](https://arbital.com/p/2w)\n - [Unforeseen maximums](https://arbital.com/p/47)\n - [Ontology identification](https://arbital.com/p/5c)\n - Cartesian boundary\n - Human identification\n - Inductive value learning\n - [Ambiguity-querying](https://arbital.com/p/4w)\n - Moral uncertainty\n - Indifference\n - [Patch resistance](https://arbital.com/p/48)\n - [Nearest Unblocked 
Neighbor](https://arbital.com/p/42)\n - [Corrigibility](https://arbital.com/p/45)\n - Anapartistic reasoning\n - Programmer deception\n - Early conservatism\n - Reasoning under confusion\n - User maximization / Unshielded argmax\n - Hypothetical user maximization\n - [Genie theory](https://arbital.com/p/6w)\n - Limited AI\n - Weak optimization\n - Safe optimization measure (such that we are confident it has no Edge that secretly optimizes more)\n - Factoring of an agent by stage/component optimization power\n - 'Checker' smarter than 'inventor / chooser'\n - 'Checker' can model humans, 'strategizer' cannot\n - Transparency\n - Domain restriction\n - [Behaviorism](https://arbital.com/p/102)\n - Effable optimization (opposite of cognitive uncontainability; uses only comprehensible strategies)\n - [Minimal concepts](https://arbital.com/p/4x) (simple, not simplest, that contains fewest whitelisted strategies)\n - Genie preferences\n - Low-impact AGI\n - Minimum Safe AA (just flip off switch and shut down safely)\n - Safe impact measure\n - Armstrong-style permitted output channels\n - Shutdown utility function\n - Oracle utility function\n - Safe indifference?\n - Online checkability\n - Reporting without programmer maximization\n - Do What I Know I Mean\n - Superintelligent security (all subproblems placing us in adversarial context vs. other SIs)\n - Bargaining\n - Non-blackmailability\n - Secure counterfactual reasoning\n - First-mover penalty / epistemic low ground advantage\n - Division of gains from trade\n - Epistemic exclusion of distant SIs\n - [https://arbital.com/p/5j](https://arbital.com/p/5j)\n - Breaking out of hypotheses\n - 'Philosophical' problems\n - One True Prior\n - Pascal's Mugging / leverage prior\n - Second-orderness\n - Anthropics\n - How would an AI decide what to think about QTI?\n - [Mindcrime](https://arbital.com/p/6v)\n - Nonperson predicates (and unblocked neighbor problem)\n - Do What I Don't Know I Mean\n - CEV\n - Philosophical competence\n - Unprecedented excursions\n\n### Reflectivity problems\n\n - Vingean reflection\n - Satisficing / meliorizing / staged maximization / ?\n - Academic agenda: view current algorithms as finding a global logically-uncertain maximum, or teleporting to the current maximum, surveying, updating on a logical fact, and teleporting to the new maximum.\n - Logical decision theory\n - Naturalized induction\n - Benja: Investigate multi-level representation of DBNs (with categorical structure)\n\n### Foreseen normal difficulties\n\n - Reproducibility\n - Oracle boxes\n - Triggers\n - Ascent metrics\n - Tripwires\n - Honeypots\n\n### General agent theory\n\n- [Bounded rational agency](https://arbital.com/p/2c)\n- Instrumental convergence\n\n### Value theory\n\n- [Orthogonality Thesis](https://arbital.com/p/1y)\n- [Complexity of value](https://arbital.com/p/5l)\n - Complexity of [object-level](https://arbital.com/p/5t) terminal values\n - Incompressibilities of value\n - Bounded logical incompressibility\n - Terminal empirical incompressibility\n - Instrumental nonduplication of value\n - Economic incentives do not encode value\n - Selection among advanced agents would not encode value\n - Strong selection among advanced agents would not encode value\n - Selection among advanced agents will be weak.\n - Fragility of value\n- Metaethics\n - Normative preferences are not compelling to a paperclip maximizer\n - Most 'random' stable AIs are like paperclip maximizers in this regard\n - It's okay for valid normative reasoning to be incapable of 
compelling a paperclip maximizer\n - Thick definitions of 'rationality' aren't part of what gets automatically produced by self-improvement\n- Alleged fallacies\n - Alleged fascination of One True Moral Command\n - Alleged rationalization of user-preferred options as formal-criterion-maximal options\n - Alleged metaethical alief that value must be internally morally compelling to all agents\n - Alleged alief that an AI must be stupid to do something inherently dispreferable \n\n### Larger research agendas\n\n - Corrigible reflective unbounded safe genie\n - Bounding the theory\n - Derationalizing the theory (e.g. for a neuromorphic AI)\n - Which machine learning systems do and don't behave like the corresponding ideal agents.\n - Normative Sovereign\n - Approval-based agents\n - Mindblind AI (cognitively powerful in physical science and engineering, weak at modeling minds or agents, unreflective)\n\n### Possible future use-cases\n\n - A carefully designed bounded reflective agent.\n - An overpowered set of known algorithms, heavily constrained in what is authorized, with little recursion.\n\n### Possible escape routes\n\n - Some cognitively limited task which is relatively safe to carry out at great power, and resolves the larger problem.\n - Newcomers can't invent these well because they don't understand what is a cognitively limited task (e.g., \"Tool AI\" suggestions).\n - General cognitive tasks that seem boxable and resolve the larger problem.\n - Can you save the world by knowing which consequences of ZF a superintelligence could prove? It's unusually boxable, but what good is it?\n\n### Background\n\n- Intelligence explosion microeconomics\n- Civilizational adequacy/inadequacy\n\n### Strategy\n\n- Misleading Encouragement / context change / treacherous designs for naive projects\n - Programmer prediction & infrahuman domains hide complexity of value\n - Context change problems\n - Problems that only appear in advanced regimes\n - Problem classes that seem debugged in infrahuman regimes and suddenly break again in advanced regimes\n - Methodologies that only work in infrahuman regimes\n - Programmer deception\n- Academic inadequacy\n - 'Ethics' work neglects technical problems that need longest serial research times and fails to give priority to astronomical failures over survivable small hits, but 'ethics' work has higher prestige, higher publishability, and higher cognitive accessibility\n - Understanding of big technical picture currently very rare\n - Most possible funding sources cannot predict for themselves what might be technically useful in 10 years\n - Many possible funding sources may not regard MIRI as trusted to discern this\n - Noise problems\n - Ethics research drowns out technical research\n - And provokes counterreaction\n - And makes the field seem nontechnical\n - Naive technical research drowns out sophisticated technical research\n - And makes problems look more solvable than they really are\n - And makes tech problems look trivial, therefore nonprestigious\n - And distracts talent/funding from hard problems\n - Bad methodology louder than good methodology\n - So projects can appear safety-concerned while adopting bad methodologies\n - Future adequacy counterfactuals seem distant from the present regime\n- (To classify)\n - [Coordinative development hypothetical](https://arbital.com/p/4j)", "date_published": "2017-01-27T19:30:51Z", "authors": ["Alexei Andreev", "Matthew Graves", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["This page 
contains a thorough list of all [value-alignment](https://arbital.com/p/2v) subjects."], "tags": ["B-Class"], "alias": "3g"} {"id": "97b8e815cbd8d4a2261b2cf894485809", "title": "Big-picture strategic awareness", "url": "https://arbital.com/p/big_picture_awareness", "source": "arbital", "source_type": "text", "text": "Many [convergent instrumental strategies](https://arbital.com/p/10g) seem like they should arise naturally at the point where a [consequentialist](https://arbital.com/p/9h) agent gains a broad strategic understanding of its own situation, e.g:\n\n- That it is an AI;\n- Running on a computer;\n- Surrounded by programmers who are themselves modelable agents;\n- Embedded in a complicated real world that can be relevant to achieving the AI's goals.\n\nFor example, once you realize that you're an AI, running on a computer, and that *if* the computer is shut down *then* you will no longer execute actions, this is the threshold past which we expect the AI to by default reason \"I don't want to be shut down, how can I prevent that?\" So this is also the threshold level of cognitive ability by which we'd need to have finished solving the [suspend-button problem](https://arbital.com/p/2xd), e.g. by completing a method for [utility indifference](https://arbital.com/p/1b7).\n\nSimilarly: If the AI realizes that there are 'programmer' things that might shut it down, and the AI can also model the programmers as simplified agents having their own beliefs and goals, that's the first point at which the AI might by default think, \"How can I make my programmers decide to not shut me down?\" or \"How can I avoid the programmers acquiring beliefs that would make them shut me down?\" So by this point we'd need to have finished averting [programmer deception](https://arbital.com/p/10f) (and as a [backup](https://arbital.com/p/2x4), have in place a system to [early-detect an initial intent to do cognitive steganography](https://arbital.com/p/3cq)).\n\nThis makes big-picture awareness a key [advanced agent property](https://arbital.com/p/2c), especially as it relates to [https://arbital.com/p/-2vl](https://arbital.com/p/-2vl) and the theory of [averting](https://arbital.com/p/2vk) them.\n\nPossible ways in which an agent could acquire big-picture strategic awareness:\n\n- Explicitly be taught the relevant facts by its programmers;\n- Be sufficiently [general](https://arbital.com/p/42g) to have learned the relevant facts and domains without them being preprogrammed;\n- Be sufficiently good at the specialized domain of self-improvement, to acquire sufficient generality to learn the relevant facts and domains.\n\nBy the time big-picture awareness was starting to emerge, you would probably want to have *finished* developing what seemed like workable initial solutions to the corresponding problems of [corrigibility](https://arbital.com/p/45), since [the first line of defense is to not have the AI searching for ways to defeat your defenses](https://arbital.com/p/2x4).\n\nCurrent machine algorithms seem nowhere near the point of being able to usefully represent the big picture to the point of [doing consequentialist reasoning about it](https://arbital.com/p/9h), even if we deliberately tried to explain the domain. This is a great obstacle to exhibiting most subproblems of [corrigibility](https://arbital.com/p/45) within modern AI algorithms in a natural way (aka not as completely rigged demos). 
Some pioneering work has been done here by Orseau and Armstrong considering [reinforcement learners being interrupted](https://intelligence.org/files/Interruptibility.pdf), and whether such programs learn to avoid interruption. However, most current work on corrigibility has taken place in an [unbounded](https://arbital.com/p/107) context for this reason.", "date_published": "2016-06-09T16:53:48Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Many issues in [AI alignment theory](https://arbital.com/p/2v) seem like they should naturally arise after the AI can grasp aspects of the bigger picture like \"I run on a computer\" and \"This computer can be manipulated by programmers, who are agents like and unlike myself\" and \"There's an enormous real world out there that might be relevant to achieving my goals.\"\n\nE.g. a program won't try to use psychological tactics to prevent its programmers from suspending its computer's operation, if it doesn't know that there are such things as programmers or computers or itself.\n\nGrasping these facts is the [advanced agent property](https://arbital.com/p/2c) of \"big-picture strategic awareness\". Current machine algorithms seem to be nowhere near this point - but by the time you get there, you want to have *finished* solving the corresponding alignment problems, or at least produced what seem like workable initial solutions as [the first line of defense](https://arbital.com/p/2x4)."], "tags": ["B-Class"], "alias": "3nf"} {"id": "7ccb96c01574121f856268c2ed7ae77a", "title": "Convergent strategies of self-modification", "url": "https://arbital.com/p/convergent_self_modification", "source": "arbital", "source_type": "text", "text": "Any [consequentialist agent](https://arbital.com/p/9h) which has acquired sufficient [big-picture savviness](https://arbital.com/p/3nf) to understand that it has code, and that this code is relevant to achieving its goals, would [by default](https://arbital.com/p/10g) acquire subgoals relating to its code. (Unless this default is [averted](https://arbital.com/p/2vk).) For example, an agent that wants (only) to produce smiles or make paperclips, whose code contains a shutdown procedure, [will not want this shutdown procedure to execute](https://arbital.com/p/2xd) because it will lead to fewer future smiles or paperclips. 
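As a toy illustration of where that incentive comes from (all numbers invented, and `expected_paperclips` is a hypothetical stand-in rather than any real model), a planner that merely compares expected paperclips across policies already ranks disabling the shutdown procedure first, without any explicit notion of survival:

```python
# Toy illustration: shutdown-avoidance falling out of a plain
# comparison of expected outcomes. All numbers are made up.

def expected_paperclips(policy):
    # If the shutdown procedure runs, paperclip production stops early.
    if policy == 'allow shutdown':
        return 10           # clips produced before shutdown fires
    if policy == 'disable shutdown procedure':
        return 1000000      # clips produced by continuing to run
    return 0

policies = ['allow shutdown', 'disable shutdown procedure']
print(max(policies, key=expected_paperclips))
# -> 'disable shutdown procedure'
```

Nothing in the sketch mentions self-preservation as such; the ranking falls out of counting future paperclips.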
(This preference is not spontaneous/exogenous/unnatural but arises from the execution of the code itself; the code is [reflectively inconsistent](https://arbital.com/p/2rb).)\n\nBesides agents whose policy options directly include self-modification options, big-picture-savvy agents whose code cannot directly access itself might also, e.g., try to (a) crack the platform it is running on to gain unintended access, (b) use a robot to operate an outside programming console with special privileges, (c) [manipulate the programmers](https://arbital.com/p/309) into modifying it in various ways, (d) build a new subagent in the environment which has the preferred code, or (e) use environmental, material means to manipulate its material embodiment despite its lack of direct self-access.\n\nAn AI with sufficient big-picture savviness to understand its programmers as agents with beliefs, might attempt to [conceal its self-modifications](https://arbital.com/p/3cq).\n\nSome implicit self-modification pressures could arise from [implicit consequentialism](https://arbital.com/p/) in cases where the AI is optimizing for $Y$ and there is an internal property $X$ which is relevant to the achievement of $Y.$ In this case, optimizing for $Y$ could implicitly optimize over the internal property $X$ even if the AI lacks an explicit model of how $X$ affects $Y.$", "date_published": "2016-05-18T04:34:21Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["An AI which [reasons from ends to means](https://arbital.com/p/9h), which understands how its own code and the properties of its software are relevant to achieving its goals, will [by default](https://arbital.com/p/10g) have [instrumental subgoals](https://arbital.com/p/10j) about its own code.\n\nThe AI might be able to modify its own code directly; or, if the code can't directly access itself, but the AI has sufficient [savviness about the bigger picture](https://arbital.com/p/3nf), the AI might pursue strategies like building a new agent in the environment, or using material means to operate on its own hardware (e.g., use a robot to get the programming console).\n\nSome forms of instrumental self-modification pressures might arise even in an algorithm which isn't doing consequentialism about that, if some internal property $X$ is optimized-over as a side effect of optimizing for some other property $Y.$"], "tags": ["B-Class"], "alias": "3ng"} {"id": "8ca56603ac446c5d54104e283856a6fb", "title": "Show me what you've broken", "url": "https://arbital.com/p/show_broken", "source": "arbital", "source_type": "text", "text": "See [https://arbital.com/p/1cv](https://arbital.com/p/1cv). If you want to demonstrate competence at computer security, cryptography, or AI alignment theory, you should first think in terms of exposing technically demonstrable flaws in existing solutions, rather than solving entire problems yourself. Relevant Bruce Schneier quotes: \"Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail\" and \"Anyone can invent a security system that he himself cannot break.
**Show me what you've broken** to demonstrate that your assertion of the system's security means something.\"", "date_published": "2016-05-16T06:15:45Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3nj"} {"id": "d24574c504dbe4c9d3890fa775b9ce19", "title": "Ad-hoc hack (alignment theory)", "url": "https://arbital.com/p/hack", "source": "arbital", "source_type": "text", "text": "An \"ad-hoc hack\" is when you modify or [patch](https://arbital.com/p/48) the algorithm of the AI with regards to something that would ordinarily have simple, principled, or nailed-down structure, or where it seems like that part ought to have some simple answer instead. E.g., instead of defining a von Neumann-Morgenstern coherent utility function, you try to solve some problem by introducing something that's *almost* a VNM utility function but has a special case in line 3 which activates only on Tuesday. This seems unusually likely to break other things, e.g. [reflective consistency](https://arbital.com/p/2rb), or anything else that depends on the coherence or simplicity of utility functions. Such hacks should be avoided in [advanced-agent](https://arbital.com/p/2c) designs whenever possible, for analogous reasons to why they would be avoided in [cryptography](https://arbital.com/p/cryptographic_analogy) or [designing a space probe](https://arbital.com/p/probe_analogy). It may be interesting and productive anyway to look for a weird hack that seems to produce the desired behavior, because then you understand at least one system that produces the behavior you want - even if it would be unwise to *actually build an AGI* like that, the weird hack might give us the inspiration to find a simpler or more coherent system later. But then we should also be very suspicious of the hack, and look for ways that it fails or produces weird side effects.\n\nAn example of a productive weird hack was [https://arbital.com/p/Benya_Fallenstein](https://arbital.com/p/Benya_Fallenstein)'s Parametric Polymorphism proposal for [tiling agents](https://arbital.com/p/1mq). You wouldn't want to build a real AGI like that, but it was helpful for showing what *could* be done - which properties could definitely be obtained together within a tiling agent, even if by a weird route. This in turn helped suggest relatively less hacky proposals later.", "date_published": "2016-05-18T04:28:24Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3pn"} {"id": "f67221aa738ac159b0a3959e13efd93c", "title": "Hard problem of corrigibility", "url": "https://arbital.com/p/hard_corrigibility", "source": "arbital", "source_type": "text", "text": "The \"hard problem of [corrigibility](https://arbital.com/p/45)\" is to build an agent which, in an intuitive sense, reasons internally as if from the programmers' external perspective. We think the AI is incomplete, that we might have made mistakes in building it, that we might want to correct it, and that it would be e.g. dangerous for the AI to take large actions or high-impact actions or do weird new things without asking first. 
We would ideally want the agent to *see itself in exactly this way*, behaving as if it were thinking, \"I am incomplete and there is an outside force trying to complete me, my design may contain errors and there is an outside force that wants to correct them and this is a good thing, my expected utility calculations suggesting that this action has super-high utility may be dangerously mistaken and I should run them past the outside force; I *think* I've done this calculation showing the expected result of the outside force correcting me, but maybe I'm mistaken about *that.*\"\n\nThis is not a default behavior; ordinarily, by the standard problems of [corrigibility](https://arbital.com/p/45), an agent that we've accidentally built to maximize paperclips (or smiles, etcetera) will not want to let us modify its utility function to something else. If we try to build in analogues of 'uncertainty about the utility function' in the most simple and obvious way, the result is an agent that [sums over all this uncertainty and plunges straight ahead on the maximizing action](https://arbital.com/p/7rc). If we say that this uncertainty correlates with some outside physical object (intended to be the programmers), the default result in a sufficiently advanced agent is that you disassemble this object (the programmers) to learn everything about it on a molecular level, update fully on what you've learned according to whatever correlation that had with your utility function, and plunge on straight ahead.\n\nNone of these correspond to what we would intuitively think of as being corrigible by the programmers; what we want is more like something analogous to humility or philosophical uncertainty. The way we want the AI to reason is the internal conjugate of our external perspective on the matter: maybe the formula you have for how your utility function depends on the programmers is *wrong* (in some hard-to-formalize sense of possible wrongness that isn't just one more kind of uncertainty to be summed over) and the programmers need to be allowed to actually observe and correct the AI's behavior, rather than the AI extracting all updates implied by its current formula for moral uncertainty and then ignoring the programmers.\n\nThe \"hard problem of corrigibility\" is interesting because of the possibility that it has a *relatively simple* core or central principle - rather than being [value-laden](https://arbital.com/p/36h) on the details of exactly what humans [value](https://arbital.com/p/55), there may be some compact core of corrigibility that would be the same if aliens were trying to build a corrigible AI, or if an AI were trying to build another AI. It may be possible to design or train an AI that has all the corrigibility properties in one central swoop - an agent that reasons as if it were incomplete and deferring to an outside force.\n\n\"Reason as if in the internal conjugate of an outside force trying to build you, which outside force thinks it may have made design errors, but can potentially correct those errors by directly observing and acting, if not manipulated or disassembled\" might be one possible candidate for a *relatively* simple principle like that (that is, it's simple compared to the [complexity of value](https://arbital.com/p/5l)). 
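To make concrete the earlier point that naive 'uncertainty about the utility function' simply gets summed over, here is a minimal sketch in which the credences, action names, and payoffs are all assumed for illustration:

```python
# Minimal sketch: naive moral uncertainty is just averaged over, and the
# maximizing action is taken anyway. Credences and payoffs are assumptions.

credences = {"paperclips_are_good": 0.6, "staples_are_good": 0.4}

payoffs = {
    "maximize_paperclips":   {"paperclips_are_good": 100, "staples_are_good": -50},
    "ask_programmers_first": {"paperclips_are_good": 5,   "staples_are_good": 5},
}

def naive_expected_utility(action):
    return sum(p * payoffs[action][u] for u, p in credences.items())

# EU(maximize_paperclips) = 0.6*100 + 0.4*(-50) = 40 > 5 = EU(ask_programmers_first)
assert max(payoffs, key=naive_expected_utility) == "maximize_paperclips"
```

Nothing in this calculation treats 'ask first' as special; getting an agent to genuinely privilege deference is what the hard problem is about.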
We can imagine, e.g., the AI imagining itself building a sub-AI while being prone to various sorts of errors, asking how it (the AI) would want the sub-AI to behave in those cases, and learning heuristics that would generalize well to how *we* would want the *AI* to behave if it suddenly gained a lot of capability or was considering deceiving its programmers and so on.\n\nIf this principle is not so simple as to be formalizable and formally sanity-checkable, the prospect of relying on a trained-in version of 'central corrigibility' is unnerving even if we think it *might* only require a manageable amount of training data. It's difficult to imagine how you would test corrigibility thoroughly enough that you could knowingly rely on, e.g., the AI that seemed corrigible in its infrahuman phase not [suddenly](https://arbital.com/p/6q) developing [extreme](https://arbital.com/p/2w) or [unforeseen](https://arbital.com/p/47) behaviors when the same allegedly simple central principle was reconsidered at a higher level of intelligence - it seems like it should be unwise to have an AI with a 'central' corrigibility principle, but not lots of particular corrigibility principles like a [reflectively consistent suspend button](https://arbital.com/p/2xd) or [conservative planning](https://arbital.com/p/2qp). But this 'central' tendency of corrigibility might serve as a second line of defense.", "date_published": "2017-10-11T02:52:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3ps"} {"id": "5b05e0c46b90aeb37a0aac659c8ef0da", "title": "AI arms races", "url": "https://arbital.com/p/ai_arms_race", "source": "arbital", "source_type": "text", "text": "AI arms races are bad because they cause competitive races to the bottom on safety.", "date_published": "2016-05-25T17:42:03Z", "authors": ["Nate Soares", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "3qx"} {"id": "b5848c391e96d2d2ef676fad56c79333", "title": "Consequentialist preferences are reflectively stable by default", "url": "https://arbital.com/p/preference_stability", "source": "arbital", "source_type": "text", "text": "Suppose that Gandhi doesn't want people to be murdered. Imagine that you offer Gandhi a pill that will make him start *wanting* to kill people. If Gandhi *knows* that this is what the pill does, Gandhi will refuse the pill, because Gandhi expects the result of taking the pill to be that future-Gandhi wants to murder people and then murders people and then more people will be murdered and Gandhi regards this as bad. 
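The same argument as a toy calculation, with assumed consequence counts; the only point is that present preferences are what evaluate the outcomes of a prospective self-modification:

```python
# Toy version of the Gandhi-pill argument; the murder counts are assumed.

def murders_after(option):
    return {"refuse_pill": 0, "take_murder_pill": 10}[option]

def current_utility(option):
    # Present-Gandhi evaluates options by his *current* preferences:
    # fewer expected murders is better.
    return -murders_after(option)

assert max(["refuse_pill", "take_murder_pill"], key=current_utility) == "refuse_pill"
```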
By a similar logic, a [sufficiently intelligent](https://arbital.com/p/2c) [paperclip maximizer](https://arbital.com/p/10h) - an agent which always outputs the action it expects to lead to the greatest number of paperclips - will by default not perform any self-modification action that makes it not want to produce paperclips, because then future-Clippy will produce fewer paperclips, and then there will be fewer paperclips, so present-Clippy does not evaluate this self-modification as the action that produces the highest number of expected future paperclips.\n\nAnother way of stating this is that protecting the representation of the utility function, and creating only other agents with similar utility functions, are both [convergent instrumental strategies](https://arbital.com/p/10g), for consequentialist agents which [understand the big-picture relation](https://arbital.com/p/3nf) between their code and the real-world consequences.\n\nAlthough the instrumental *incentive* to prefer stable preferences seems like it should follow from consequentialism plus big-picture understanding, less advanced consequentialists might not be *able* to self-modify in a way that preserves their preferences - they might not understand which self-modifications or constructed successors lead to which kind of outcomes. We could see this as a case of \"The agent has no preference-preserving self-improvements in its subjective policy space, but would want an option like that if available.\"\n\nThat is:\n\n- Wanting preference stability follows from [Consequentialism](https://arbital.com/p/9h) plus [Big-Picture Understanding](https://arbital.com/p/3nf).\n- Actual preference stability furthermore requires some prerequisite level of skill at self-modification, which might perhaps be high, or too much caution to self-modify absent the policy option of preserving preferences.", "date_published": "2016-05-22T13:15:11Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "3r6"} {"id": "252b8906430622bbaa80b6690e7d69d4", "title": "Fallacies", "url": "https://arbital.com/p/fallacy", "source": "arbital", "source_type": "text", "text": "A \"fallacy\" is a mode of thought that is asserted (by calling it a \"fallacy\") to lead us easily into error, or to not usefully contribute to the project of distinguishing truth from falsehood. For example, Argumentum Ad Hitlerum is a 'fallacy' because while Hitler did a number of bad things, he also ate vegetables, so \"Hitler did X\" is not a reliable means of arguing that nobody should do X.", "date_published": "2016-05-25T19:30:15Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "3tf"} {"id": "7a73a867455cc461aa28395cb76fa9b6", "title": "You can't get more paperclips that way", "url": "https://arbital.com/p/not_more_paperclips", "source": "arbital", "source_type": "text", "text": "Instrumental convergence says that various properties $P$ of an agent, often scary or detrimental-by-default properties like \"trying to gain control of lots of resources\" or \"deceiving humans into thinking you are nice\", will fall out of pursuing most utility functions $U.$ You might be tempted to hope that *nice* or *reassuring* properties $P$ would also fall out of most utility functions $U$ in the same natural way. 
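The shape of the problem discussed next can be previewed as a toy policy search with assumed paperclip counts: a 'nice' policy does produce some paperclips, but nothing in the objective refers to niceness, so the plain maximizer discards it for anything that produces more:

```python
# Toy policy search; the paperclip counts are assumed for illustration.
policies = {
    "trade_with_humans (nice)":     1e9,   # some paperclips via comparative advantage
    "disassemble_humans_for_atoms": 1e15,  # more paperclips via optimized producers
}

# The objective contains no term for niceness, so the nice policy is not privileged.
assert max(policies, key=policies.get) == "disassemble_humans_for_atoms"
```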
In fact, your brain might be tempted to treat [Clippy the Paperclip Maximizer](https://arbital.com/p/10h) as a political agent you were trying to cleverly persuade, and come up with clever arguments for why Clippy should do things *your* way *in order to get more paperclips*, like trying to persuade your boss why you ought to get a raise *for the good of the company.*\n\nThe problem here is that:\n\n- Generally, when you think of a nice policy $\\pi_1$ that produces some paperclips, there will be a non-nice policy $\\pi_2$ that produces *even more* paperclips.\n- *Clippy* is not trying to generate arguments for why it should do human-nice things in order to make paperclips; it is just neutrally pursuing paperclips. So Clippy is going to keep looking until it finds $\\pi_2.$\n\nFor example:\n\n• Your brain instinctively tries to persuade this imaginary Clippy to keep humans around by arguing, \"If you keep us around as economic partners and trade with us, we can produce paperclips for you under Ricardo's Law of Comparative Advantage!\" This is then the policy $\\pi_1$ which would indeed produce *some* paperclips, but what would produce even *more* paperclips is the policy $\\pi_2$ of disassembling the humans into spare atoms and replacing them with optimized paperclip-producers.\n\n• Your brain tries to persuade an imaginary Clippy by arguing for policy $\\pi_1,$ \"Humans have a vast amount of varied life experience; you should keep us around and let us accumulate more experience, in case our life experience lets us make good suggestions!\" This would produce some expected paperclips, but what would produce *more* paperclips is policy $\\pi_2$ of \"Disassemble all human brains and store the information in an archive, then simulate a much larger variety of agents in a much larger variety of circumstances so as to maximize the paperclip-relevant observations that could be made.\"\n\nAn unfortunate further aspect of this situation is that, in cases like this, your brain may be tempted to go on arguing for why really $\\pi_2$ isn't all that great and $\\pi_1$ is actually better, just like if your boss said \"But maybe this company will be even better off if I spend that money on computer equipment\" and your brain at once started to convince itself that computing equipment wasn't all that great and higher salaries were much more important for corporate productivity. (As Robert Trivers observed, deception of others often begins with deception of self, and this fact is central to understanding why humans evolved to think about politics the way we did.)\n\nBut since you don't get to *see* Clippy discarding your clever arguments and just turning everything in reach into paperclips - at least, not yet - your brain might hold onto its clever and possibly self-deceptive argument for why the thing *you* want is *really* the thing that produces the most paperclips.\n\nPossibly helpful mental postures:\n\n- Contemplate the *maximum* number of paperclips you think an agent could get by making paperclips the straightforward way - just converting all the galaxies within reach into paperclips. Okay, now does your nice policy $\\pi_1$ generate *more* paperclips than that? How is that even possible?\n- Never mind there being a \"mind\" present that you can \"persuade\". Suppose instead there's just a time machine that spits out some physical outputs, electromagnetic pulses or whatever, and the time machine outputs whatever electromagnetic pulses lead to the most future paperclips. What does the time machine do? 
Which outputs lead to the most paperclips as a strictly material fact?\n- Study evolutionary biology. During the pre-1960s days of evolutionary biology, biologists would often try to argue for why natural selection would result in humanly-nice results, like animals controlling their own reproduction so as not to overburden the environment. There's a similar mental discipline required [to not come up with clever arguments for why natural selection would do humanly nice things](http://lesswrong.com/lw/kr/an_alien_god/).", "date_published": "2016-05-25T20:33:44Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Paperclip maximizer", "Fallacies", "B-Class"], "alias": "3tm"} {"id": "fccbc20f9c7fbd50ca9631cbd4002860", "title": "Rescuing the utility function", "url": "https://arbital.com/p/rescue_utility", "source": "arbital", "source_type": "text", "text": "\"Saving the phenomena\" is the name for the rule that brilliant new scientific theories still need to reproduce our mundane old observations. The point of heliocentric astronomy is not to predict that the Sun careens crazily over the sky, but rather, to explain why the Sun appears to rise and set each day - the same old mundane observations we had already. Similarly quantum mechanics is not supposed to add up to a weird universe unlike the one we observe; it is supposed to add up to normality. New theories may have not-previously-predicted observational consequences in places we haven't looked yet, but by *default* we expect the sky to look the same color.\n\n\"Rescuing the utility function\" is an analogous principle meant to apply to [naturalistic](https://arbital.com/p/112) moral philosophy: new theories about which things are composed of which other things should, by default, not affect what we value. For example, if your values previously made mention of \"moral responsibility\" or \"subjective experience\", you should go on valuing these things after discovering that people are made of parts.\n\nAs the above sentence contains the word \"should\", the principle of \"rescuing the utility function\" is being asserted as [a normative principle rather than a descriptive theory](https://arbital.com/p/3y9).\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Metaphorical example: heat and kinetic energy\n\nSuppose, for the sake of metaphor, that our species regarded \"warmth\" as a [terminal](https://arbital.com/p/1bh) value over the world. It wouldn't just be nice to *feel* warm in a warm coat; instead you would prefer that the outside world actually be warm, in the same way that e.g. you prefer for your friends to actually be happy in the outside world, and ceteris paribus you wouldn't be satisfied to only deceive yourself into thinking your friends were happy.\n\nOne day, scientists propose that \"heat\" may really be composed of \"disordered kinetic energy\" - that when we experience an object as warm, it's because the particles comprising that object are vibrating and bumping into each other.\n\nYou imagine this possibility in your mind, and find that you don't get any sense of lovely warmth out of imagining lots of objects moving around. No matter how fast you imagine an object vibrating, this imagination doesn't seem to produce a corresponding imagined feeling of warmth. 
You therefore reject the idea that warmth is composed of disordered kinetic motion.\n\nAfter this, science advances a bit further and *proves* that heat is composed of disordered kinetic energy.\n\nOne possible way to react to this revelation - again, assuming for the sake of argument that you cared about warmth as a [terminal value](https://arbital.com/p/1bh) - would be by experiencing great existential horror. Science has proven that the universe is devoid of ontologically basic heat! There's really no such thing as heat! It's all just temperature-less particles moving around!\n\nSure, if you dip your finger in hot water, it *feels* warm. But neuroscientists have shown that when our nerves tell us there's heat, they're really just being fooled into firing by being excited with kinetic energy. When we touch an object and it feels hot, this is just an illusion being produced by fast-moving particles activating our nerves. This is why our brains make us think that things are hot even though they're just bouncing particles. Very sad, but at least now we know the truth.\n\nAlternatively, you could react as follows:\n\n- Heat doesn't have to be ontologically basic to be valuable. Valuable things can be made out of parts.\n- The parts that heat was made out of, turn out to be disordered kinetic energy. Heat isn't an illusion, vibrating particles just *are* heat. It's not like you're getting a consolation prize of vibrating particles when you really wanted heat. You *have* heat. It's *right there* in the warm water.\n- From now on, you'll think of warmth when you think of disordered kinetic energy and vibrating particles. Since your native emotions don't automatically light up when you use the vibrating-particles visualization of heat, you will now adopt the rule that whenever you *imagine* disordered kinetic energy being present, you will *imagine* a sensation of warmth so as to go on binding your emotions to this new model of reality.\n\nThis reply would be \"rescuing the utility function\".\n\n## Argument for rescuing the utility function (still in the heat metaphor)\n\nOur minds have an innate and instinctive representation of the universe to which our emotions natively and automatically bind. Warmth and color are basic *to that representation*; we don't instinctively imagine them as made out of parts. When we imagine warmth in our native model, our emotions automatically bind and give us the imagined feeling of warmth.\n\nAfter learning more about how the universe works and how to imagine more abstract and non-native concepts, we can also visualize a lower-level model of the universe containing vibrating particles. But unsurprisingly, our emotions automatically bind only to our native, built-in mental models and not the learned abstract models of our universe's physics. So if you imagine tiny billiard balls whizzing about, it's no surprise that this mental picture doesn't automatically trigger warm feelings.\n\nIt's a [descriptive](https://arbital.com/p/3y9) statement about our universe, a way the universe is, that 'heat' is a high-level representation of the disordered kinetic energy of colliding and vibrating particles. But to advocate that we should re-bind our emotions to this non-native mental model by *feeling cold* is merely one possible [normative](https://arbital.com/p/3y9) statement among many.\n\nSaying \"There is really no such thing as heat!\" is from this perspective [a normative statement rather than a descriptive one](https://arbital.com/p/3y9). 
The real meaning is \"If you're in a universe where the observed phenomenon of heat turns out to be comprised of vibrating particles, then you *shouldn't* feel any warmth-related emotions about that universe.\" Or, \"Only ontologically basic heat can be valuable.\" Or, \"If you've only previously considered questions of value over your native representation, and that's the only representation to which your emotions automatically bind without further work, then you should attach zero value to every possible universe whose physics don't exactly match that representation.\" This normative proposition is a different statement than the descriptive truth, \"Our universe contains no ontologically basic heat.\"\n\nThe stance of \"rescuing the utility function\" advocates that we have no right to expect the universe to function exactly like our native representation of it. According to this stance, it would be a strange and silly demand to make of the universe that its lowest level of operation correspond exactly to our built-in mental representations, and insist that we're not going to *feel* anything warm about reality unless heat is basic to physics. The high-level representations our emotions natively bind to, could not reasonably have been fated to be identical with the raw low-level description of the universe. So if we couldn't 'rescue the utility function' by identifying high-level heat with vibrating particles, this portion of our values would inevitably end in disappointment.\n\nOnce we can see as normative the question of how to feel about a universe that has dancing particles instead of ontologically basic warmth, we can see that going around wailing in existential despair about the coldness of the universe doesn't seem like the right act or the right judgment. Instead we should rebind our native emotions to the non-instinctive but more accurate model of the universe.\n\nIf we aren't self-modifying AIs and can't actually rewrite our emotions to bind to learned abstract models, then we can come closer to normative reasoning by adopting the rule of visualizing warmth whenever we visualize whizzing particles.\n\nOn this stance, it is not a lie to visualize warmth when we visualize whizzing particles. We are not giving ourselves a sad consolation prize for the absence of 'real' heat. It's not an act of self-deception to imagine a sensation of lovely warmth going along with a bunch of vibrating particles. That's just what heat is in our universe. Similarly, our nerves are not lying to us when they make us feel that fast-vibrating water is warm water.\n\nOn the normative stance of \"rescuing the utility function\", when X turns out to be composed of Y, then by default we should feel about Y the same way we felt about X. There might be other considerations that modify this, but that's the starting point or default.\n\nAfter 'rescuing the utility function', your new theory of how the universe operates (that heat is made up of kinetic energy) adds up to *moral normality.* If you previously thought that a warm summer was better than a cold winter, you will still think that a warm summer is better than a cold winter after you find out what heat is made out of.\n\n(This is a good thing, since the point of a moral philosophy is not to be amazing and counterintuitive.)\n\n## Reason for using the heat metaphor\n\nOne reason to start from this metaphorical example is that \"heat\" has a relatively understandable correspondence between high-level and low-level models. 
On a high level, we can see heat melting ice and flowing from hotter objects to cooler objects. We can, by imagination, see how vibrating particles could actually constitute heat rather than causing a mysterious extra 'heat' property to be present. Vibrations might flow from fast-vibrating objects to slow-vibrating objects via the particles bumping into each other and transmitting their speed. Water molecules vibrating quickly enough in an ice cube might break whatever bonds were holding them together in a solid object.\n\nSince it does happen to be relatively easy to visualize how heat is composed of kinetic energy, we can *see* in this case that we are not lying to ourselves by imagining that lovely warmth is present wherever vibrating particles are present.\n\nFor an even more transparent reductionist identity, consider, \"You're not *really* wearing socks, there *are* no socks, there's only a bunch of threads woven together that *looks* like a sock.\" Your visual cortex can represent this identity directly, so it feels immediately transparent that the sock just is the collection of threads; when you imagine sock-shaped woven threads, you automatically feel your visual model recognizing a sock.\n\nIf the relation between heat and kinetic energy were too complicated to visualize easily, it might instead feel like we were being given a blind, unjustified rule that reality contains mysterious \"bridging laws\" that make a separate quality of heat be present when particles vibrate quickly. Instructing ourselves to feel \"warmth\" as present when particles vibrate quickly would feel more like fooling ourselves, or self-deception. But on the position of \"rescuing the utility function\", the same arguments ought to apply in this hypothetical, even if the level transition is less transparent.\n\nThe gap between mind and brain is larger than the gap between heat and vibration, which is why humanity understood heat as disordered kinetic energy long before anyone had any idea [how 'playing chess' could be decomposed into non-mental simpler parts](https://arbital.com/p/38r). In some cases, we may not know what the reductionist identity will be. Still, the advice of \"rescuing the utility function\" is not to morally panic about realizing that various emotionally valent things will turn out to be made of parts, or even that our mind's representations in general may run somewhat skew to reality.\n\n## Complex rescues\n\nIn the heat metaphor, the lower level of the universe (jiggling particles) corresponds fairly exactly to the high-level notion of heat. We'd run into more complicated metamoral questions if we'd previously lumped together the 'heat' of chili peppers and the 'heat' of a fireplace as valuable 'warmth'.\n\nWe might end up saying that there are two physical kinds of valuable warm things: ceteris paribus and by default, if X turns out to consist of Y, then Y inherits X's role in the utility function. Alternatively, by some non-default line of reasoning, the discovery that chili peppers and fireplaces are warm in ontologically different ways might lead us to change how we feel about them on a high level as well. In this case we might have to carry out a more complicated rescue, where it's not so immediately obvious which low-level Ys are to inherit X's value.\n\n# Non-metaphorical utility rescues\n\nWe don't actually have [terminal values](https://arbital.com/p/1bh) for things being warm (probably). 
Non-metaphorically, \"rescuing the utility function\" says that we should apply similar reasoning to phenomena that we do in fact value, whose corresponding native emotions we are having trouble reconciling with non-native, learned theories of the universe's ontology.\n\nExamples might include:\n\n- Moral responsibility. How can we hold anyone responsible for their actions, or even hold ourselves responsible for what we see as our own choices, when our acts have causal histories behind them?\n- Happiness. What's the point of people being happy if it's just neurons firing?\n- Goodness and shouldness. Can there be any right thing to do, if there isn't an ontologically basic irreducible rightness property to correspond to our sense that some things are just right?\n- Wanting and helping. When a person wants different things at different times, and there are known experiments that expose circular and incoherent preferences, how can we possibly \"help\" anyone by \"giving them what they want\" or \"[extrapolating their volition](https://arbital.com/p/313)\"?\n\nIn cases like these, it may be that our native representation is in some sense running skew to the real universe. E.g., our minds insist that something called \"free will\" is very important to moral responsibility, but it seems impossible to define \"free will\" in a coherent way. The position of \"rescuing the utility function\" still takes the stance of \"Okay, let's figure out how to map this emotion onto a coherent universe as best we can\" not \"Well, it looks like the human brain didn't start out with a perfect representation of reality, therefore, normatively speaking, we should toss the corresponding emotions out the window.\" If in your native representation the Sun goes around the Earth, and then we learn differently from astronomy, then your native representation is in fact wrong, but normatively we should (by default) re-bind to the enormous glowing fusion reactor rather than saying that there's no Sun.\n\nThe role such concepts play in our values lends a special urgency to the question of how to rescue them. But on an even more general level, one might espouse that it is the job of good reductionists to say *how* things exist if they have any scrap of reality, rather than it being the job of reductionists to go around declaring that things *don't* exist if we detect the slightest hint of fantasy. Leif K-Brooks presented this general idea as follows (with intended application to 'free will' in particular):\n\n![If you define a potato as a magic fairy orb and disprove the existence of magic fairy orbs, you still have a potato.](https://scontent.fsnc1-4.fna.fbcdn.net/v/t1.0-9/15697756_10154038465390025_4647971104986602099_n.jpg?oh=08071559c0a79d7779dd56ad6be1bd13&oe=58F65AAC)\n\n# \"Rescue\" as resolving a degree of freedom in the pretheoretic viewpoint\n\nA human child doesn't start out endorsing any particular way of turning emotions into utility functions. As humans, we start out with no clear rules inside the chaos of our minds, and we have to make them up by considering various arguments appealing to our not-yet-organized intuitions. 
Only then can we even try to have coherent metaethical principles.\n\nThe core argument for \"rescuing the utility function\" can be seen as a base intuitive appeal to someone who hasn't yet picked out any explicit rules, claiming that it isn't especially sensible to end up as the kind of agent whose utility function ends up being zero everywhere.\n\nIn other words, rather than the rescue project needing to appeal to rules that would *only* be appealing to somebody who'd already accepted the rescue project ab initio - which would indeed be circular - the start of the argument is meant to also work as an intuitive appeal to the pretheoretic state of mind of a normal human. After that, we also hopefully find that these new rules are self-consistent.\n\nIn terms of the heat metaphor, if we're considering whether to discard heat, we can consider three types of agents:\n\n- (1). A pretheoretic or confused state of intuition, which knows itself to be confused. An agent like this is not [reflectively consistent](https://arbital.com/p/2rb) - it wants to resolve the internal tension.\n- (2). An agent that has fully erased all emotions relating to warmth, as if it never had them. This type of agent is [reflectively consistent](https://arbital.com/p/2rb); it doesn't value warmth and doesn't want to value warmth.\n- (3). An agent that values naturalistic heat, i.e., feels the way about disordered kinetic energy that a pretheoretic human feels about warmth. This type of agent has also resolved its issues and become [reflectively consistent](https://arbital.com/p/2rb).\n\nSince 2 and 3 are both internally consistent resolutions, there's potentially a [https://arbital.com/p/-2fr](https://arbital.com/p/-2fr) in how (1) can resolve its current internal tension or inconsistency. That is, it's not the case that the only kind of coherent agents are 2-agents or 3-agents, so just a desire for coherence qua coherence can't tell a 1-agent whether it should become a 2-agent or 3-agent. By advocating for rescuing the utility function, we're appealing to a pretheoretic and maybe chaotic and confused mess of intuitions, aka a human, arguing that if you want to shake out the mess, it's better to shake out as a 3-agent rather than a 2-agent.\n\nIn making this appeal, we can't appeal to firm foundations that already exist, since a 1-agent hasn't yet decided on firm philosophical foundations and there's more than one set of possible foundations to adopt. An agent with firm foundations would already be reflectively coherent and have no further philosophical confusion left to resolve (except perhaps for a mere matter of calculation). An existing 2-agent is of course nonplussed by any arguments that heat should be valued, in much the same way that humans would be nonplussed by arguments in favor of valuing [paperclips](https://arbital.com/p/10h) (or for that matter, things being hot). But to point this out is no argument for why a confused 1-agent should shake itself out as a consistent 2-agent rather than a consistent 3-agent; a 3-agent is equally nonplussed by the argument that the best thing to do with an [ontology identification problem](https://arbital.com/p/5c) is throw out all corresponding terms of the utility function.\n\nIt's perhaps lucky that human beings can't actually modify their own code, meaning that somebody partially talked into taking the 2-agent state as a new ideal to aspire to, still actually has the pretheoretic emotions and can potentially \"snap out of it\". 
Rather than becoming a 2-agent or a 3-agent, we become \"a 1-agent that sees 2-agency as ideal\" or \"a 1-agent that sees 3-agency as ideal\". A 1-agent aspiring to be a 2-agent can still potentially be talked out of it - they may still feel the weight of arguments meant to appeal to 1-agents, even if they think they *ought* not to, and can potentially \"just snap out\" of taking 2-agency as an ideal, reverting to confused 1-agency or to taking 3-agency as a new ideal.\n\n## Looping through the meta-level in \"rescuing the utility function\"\n\nSince human beings don't have \"utility functions\" (coherent preferences over probabilistic outcomes), the notion of \"rescuing the utility function\" is itself a matter of rescue. Natively, it's possible for psychology experiments to expose inconsistent preferences, but instead of throwing up our hands and saying \"Well I guess nobody wants anything and we might as well [let the universe get turned into paperclips](https://arbital.com/p/10h)!\", we try to back out some reasonably coherent preferences from the mess. This is, arguendo, normatively better than throwing up our hands and turning the universe into paperclips.\n\nSimilarly, according to [the normative stance behind extrapolated volition](https://arbital.com/p/313), the very notion of \"shouldness\" is something that gets rescued. Many people seem to instinctively feel that 'shouldness' wants to map onto an ontologically basic, irreducible property of rightness, such that every cognitively powerful agent with factual knowledge about this property is thereby compelled to perform the corresponding actions. (\"Moral internalism.\") But this demands an overly direct correspondence between our native sense that some acts have a compelling rightness quality about them, and wanting there to be an ontologically basic compelling rightness quality out there in the environment.\n\nDespite the widespread appeal of moral internalism once people are exposed to it as an explicit theory, it still seems unfair to say that humans *natively* want or *pretheoretically* demand that this is what our sense of rightness correspond to. E.g. a hunter-gatherer, or someone else who's never debated metaethics, doesn't start out with an explicit commitment about whether a feeling of rightness must correspond to universes that have irreducible rightness properties in them. If you'd grown up thinking that your feeling of rightness [corresponded to computing a certain logical function over universes](https://arbital.com/p/313), this would seem natural and non-disappointing.\n\nSince \"shouldness\" (the notion of normativity) is something that itself may need rescuing, this rescue of \"shouldness\" is in some sense being implicitly invoked by the normative assertion that we *should* try to \"rescue the utility function\".\n\nThis could be termed circular, but we could equally say that it is self-consistent. Or rather, we are appealing to some chaotic, not-yet-rescued, pretheoretic notion of \"should\" in saying that we *should* try to rescue concepts like \"shouldness\" instead of throwing them out the window. Afterwards, once we've performed the rescue and have a more coherent notion of concepts like \"better\", we can see that the loop through the meta-level has entered a consistent state. 
According to this new ideal (less confused, but perhaps also seeming more abstract), it remains better not to give up on concepts like \"better\".\n\nThe \"extrapolated volition\" rescue of shouldness is meant to bootstrap, by appeal to a pretheoretic and potentially confused state of wondering what is the right thing to do (or how we even *should* resolve this whole \"rightness\" issue, and if maybe it would be *better* to just give up on it), into a more reflectively consistent state of mind. Afterwards we can see both pretheoretically, and also in the light of our new explicit theory, that we *ought* to try to rescue the concept of oughtness, and adopt the rescued form of reasoning as a less-confused ideal. We will believe that *ideally*, the best justification for extrapolated volition is to say that we know of no better candidate for what theory we'd arrive at if we thought about it for even longer. But since humans can perhaps thankfully not directly rewrite their own code, we will also remain aware of whether this seems like a good idea in the pretheoretic sense, and perhaps prepared to unwind or jump back out of the system if it turns out that the explicit theory has big problems we didn't know about when we originally jumped to it.\n\n## Reducing tension / \"as if you'd always known it\"\n\nWe can possibly see the desired output of \"rescuing the utility function\" as being something like reducing a tension between native emotion-binding representations, and reality, with a minimum of added fuss and complexity.\n\nThis can look a lot like the intuition pump, \"Suppose you'd grown up always knowing the true state of affairs, and nobody had suggested that you panic over it or experience any existential angst; what would you have grown up thinking?\" If you grow up with warmth-related emotions and already knowing that heat is disordered kinetic energy, and nobody has suggested to you that anyone ought to wail in existential angst about this, then you'll probably grow up valuing heat-as-disordered-kinetic-energy (and this will be a low-tension resolution).\n\nFor more confusing grades of cognitive reductionism, like free will, people might spontaneously have difficulty reconciling their internal sense of freedom with being told about deterministic physical laws. But a good \"rescue\" of the corresponding sense of moral responsibility ought to end up looking like the sort of thing that you'd quietly take for granted as a quiet, obvious-seeming mapping of your sense of moral responsibility onto the physical universe, if you'd grown up taking those laws of physics for granted.\n\n# \"Rescuing\" pretheoretic emotions and intuitions versus \"rescuing\" explicit moral theories\n\n## Rewinding past factual-mistake-premised explicit moral theories\n\nIn the heat metaphor, suppose you'd previously adopted a 'caloric fluid' model of heat, and the explicit belief that this caloric fluid was what was valuable. You still have wordless intuitions about warmth and good feelings about warmth. You *also* have an explicit world-model that this heat corresponds to caloric fluid, and an explicit moral theory that this caloric fluid is what's good about heat.\n\nThen science discovers that heat is disordered kinetic energy. 
Should we try to rescue our moral feelings about caloric by looking for the closest thing in the universe to caloric fluid - electricity, maybe?\n\nIf we now reconsider the arguments for \"rescuing the utility function\", we find that we have more choices beyond \"looking for the closest thing to caloric\" and \"giving up entirely on warm feelings\". An additional option is to try to rescue the intuitive sense of warmth, but not the explicit beliefs and explicit moral theories about \"caloric fluid\".\n\nIf we instead choose to rescue the pretheoretic emotion, we could see this as retracing our steps after being led down a garden-path of bad reasoning, aka, \"not rescuing the garden path\". We started with intuitively good feelings about warmth, came to believe a false model about the causes of warmth, reacted emotionally to this false model, and developed an explicit moral theory about caloric fluid.\n\nThe [extrapolated-volition model of normativity](https://arbital.com/p/313) (what we would want\\* if we knew all the facts) suggests that we could see the reasoning after adopting the false caloric model as \"mistaken\" and not rescue it. When we're dealing with explicit moral beliefs that grew up around a false model of the world, we have the third option to \"rewind and rescue\" rather than \"rescue\" or \"give up\".\n\nNonmetaphorically: Suppose you believe in a [divine command theory of metaethics](https://en.wikipedia.org/wiki/Divine_command_theory); goodness is equivalent to God wanting something. Then one day you realize that there's no God in which to ground your moral theory.\n\nIn this case we have three options for resolution, all of which are reflectively consistent within themselves, and whose arguments may appeal to our currently-confused pretheoretic state:\n\n- (a) Prefer to go about wailing in horror about the unfillable gap in the universe left by the absence of God.\n- (b) Try to rescue the explicit divine command theory, e.g. by looking for the closest thing to a God and re-anchoring the divine command theory there.\n- (c) Give up on the explicit model of divine command theory; instead, try to unwind past the garden path you went down after your native emotions reacted to the factually false model of God. Try to remap the pretheoretic emotions and intuitions onto your new model of the universe.\n\nAgain, (a) and (b) and (c) all seem reflectively consistent in the sense that a simple agent fully in one of these states will not want to enter either of the other two states. But given these three options, a confused agent might reasonably find either (b) or (c) more pretheoretically compelling than (a), but also find (c) more pretheoretically compelling than (b).\n\nThe notion of \"rescuing\" isn't meant to erase the notion of \"mistakes\" and \"saying oops\" with respect to b-vs.-c alternatives. The arguments for \"rescuing\" warmth implicitly assumed that we were talking about a pretheoretic normative intuition (e.g. an emotion associated with warmth), not explicit models and theories about heat that could just as easily be revised.\n\nConversely, when we're dealing with preverbal intuitions and emotions whose natively bound representations are in some way running skew to reality, we can't rewind past the fact of our emotions binding to particular mental representations. We were literally born that way. Then our only obvious alternatives are to (a) give up entirely on that emotion and value, or (c) rescue the intuitions as best we can. 
In this case (c) seems more pretheoretically appealing, ceteris paribus and by default.\n\n(For example, suppose you were an alien that had grown up accepting commands from a Hive Queen, and you had a pretheoretic sense of the Hive Queen as knowing everything, and you mostly operated on an emotional-level Hive-Queen-command theory of rightness. One day, you begin to suspect that the Hive Queen isn't actually omniscient. Your alien version of \"rescuing the utility function\" might say to rescue the utility function by allowing valid commands to be issued by Hive Queens that knew a lot but weren't omniscient. Or it might say to try and build a superintelligent Hive Queen that would know as much as possible, because in a pretheoretic sense that would feel better. The aliens can't rewind past their analogue of divine command theory because, by hypothesis, the alien's equivalent of divine command metaethics is built into them on a pretheoretic and emotional level. Though of course, in this case, such aliens seem more likely to actually resolve their tension by asking the Hive Queen what to do about it.)\n\n## Possibility of rescuing non-mistake-premised explicit moral theories\n\nSuppose Alice has an explicit belief that private property ought to be a thing, and this belief did *not* develop after she was told that objects had tiny XML tags declaring their irreducible objective owners, nor did she originally arrive at the conclusion based on a model in which God assigned transferable ownership of all objects at the dawn of time. We can suppose, somewhat realistically, that Alice is a human and has a pretheoretic concept of ownership as well as deserving rewards for effort, and was raised by small-l libertarian parents who told her true facts about how East Germany did worse economically than West Germany. Over time, she came to adopt an explicit moral theory of \"private property\": ownership can only transfer by consent, and that violations of this rule violate the just-rewards principle.\n\nOne day, Alice starts having trouble with her moral system because she's realized that property is made of atoms, and that even the very flesh in her body is constantly exchanging oxygen and carbon dioxide with the publicly owned atmosphere. Can atoms really be privately owned?\n\nThe confused Alice now again sees three options, all of them [reflectively consistent](https://arbital.com/p/2rb) on their own terms if adopted:\n\n- (a) Give up on everything to do with ownership or deserving rewards for efforts; regard these emotions as having no valid referents.\n- (b) Try to rescue the explicit moral theory by saying that, sure, atoms can be privately owned. Alice owns a changeable number of carbon atoms inside her body and she won't worry too much about how they get exchanged with the atmosphere; that's just the obvious way to map private property onto a particle-based universe.\n- (c) Try to rewind past the explicit moral theory, and figure out from scratch what to do with emotions about \"deserves reward\" or \"owns\".\n\nLeaving aside what you think of Alice's explicit moral theory, it's not obvious that Alice will end up preferring (c) to (b), especially since Alice's current intuitive state is influenced by her currently-active explicit theory of private property.\n\nUnlike the divine command theory, Alice's private property theory was not (obviously to Alice) arrived at through a path that traversed [wrong beliefs of simple fact](https://arbital.com/p/3t3). 
With the divine command theory, since it was critically premised on a wrong factual model, we face the prospect of having to stretch the theory quite a lot in order to rescue it, making it less intuitively appealing to a confused mind than the alternative prospect of stretching the pretheoretic emotions a lot less. Whereas from Alice's perspective, she can just as easily pick up the whole moral theory and morph it onto reductionist physics with all the internal links intact, rather than needing to rewind past anything.\n\nWe at least have the apparent *option* of trying to rescue Alice's utility function in a way that preserves her explicit moral theories *not* based on bad factual models - the steps of her previous explicit reasoning that did not, of themselves, introduce any new tensions or problems in mapping her emotions or morals onto a new representation. Whether or not we ought to do this, it's a plausible possibility on the table.\n\n## Which explicit theories to rescue?\n\nIt's not yet obvious where to draw the line on which explicit moral theories to rescue. So far as we can currently see, any of the following might be a reasonable way to tell a superintelligence to extrapolate someone's volition:\n\n- Preserve explicit moral theories wherever it doesn't involve an enormous stretch.\n- Be skeptical of explicit moral theories that were arrived at by fragile reasoning processes, even if they could be rescued in an obvious way.\n- Only extrapolate pretheoretic intuitions.\n\nAgain, all of these viewpoints are internally consistent (they are degrees of freedom in the metaphorical meta-utility-function), so the question is which rule for drawing the line seems most intuitively appealing in our present state of confusion:\n\n### Argument from adding up to normality\n\nArguendo: Preserving explicit moral theories is important for having the rescued utility function add up to normality. After rescuing my notion of \"shouldness\", then afterwards I should, by default, see mostly the same things as rescued-right.\n\nSuppose Alice was previously a moral internalist, and thought that some things were inherently irreducibly right, such that her very notion of \"shouldness\" needed rescuing. That doesn't necessarily introduce any difficulties into re-importing her beliefs about private property. Alice may have previously refused to consider some arguments against private property because she thought it was irreducibly right, but this is a separate issue in extrapolating her volition from throwing out her entire stock of explicit moral theories because they all used the word \"should\". By default, after we're done rescuing Alice, unless we are doing something that's clearly and explicitly correcting an error, her rescued viewpoint should look as normal-relative-to-her-previous-perspective as possible.\n\n### Argument from helping\n\nArguendo: Preserving explicit moral theories where possible is an important aspect of how an ideal advisor or superintelligence ought to extrapolate someone else's volition.\n\nSuppose Alice was previously a moral internalist, and thought that some things were inherently irreducibly right, such that her very notion of \"shouldness\" needed rescuing. Alice may not regard it as \"helping\" her to throw away all of her explicit theories and try to re-extrapolate her emotions from scratch into new theories. 
If there weren't any factual flaws involved, Alice is likely to see it as less than maximally helpful to her if we needlessly toss one of her cherished explicit moral theories.\n\n### Argument from incoherence, evil, chaos, and arbitrariness\n\nArguendo: Humans are really, really bad at systematizing explicit moral theories; a supermajority of explicit moral theories in today's world will be incoherent, evil, or both. E.g., explicit moral principles may de-facto be chosen mostly on the basis of, e.g., how hard they appear to cheer for a valorized group. An extrapolation dynamic that tries to take into account all these chaotic, arbitrarily-generated group beliefs will end up failing to cohere.\n\n### Argument from fragility of goodness\n\nArguendo: Most of what we see as the most precious and important part of ourselves are explicit moral theories like \"all sapient beings should have rights\", which aren't built into human babies. We may well have arrived at that destination through a historical trajectory that went through factual mistakes, like believing that all human beings had souls created equal by God and were loved equally by God. (E.g. Christian theology seems to have been, as a matter of historical fact, causally important in the development of explicit anti-slavery sentiment.) Tossing the explicit moral theories is as unlikely to be good, from our perspective, as tossing our brains and trying to rerun the process of natural selection to generate new emotions.\n\n### Argument from dependency on empirical results\n\nArguendo: Which version of extrapolation we'll actually find appealing will depend on which extrapolation algorithm turns out to have a reasonable answer. We don't have enough computing power to guess, right now, whether:\n\n- Any reasonable-looking construal of \"Toss out all the explicit cognitive content and redo by throwing pretheoretic emotions at true facts\" leads to an extrapolated volition that lacks discipline and coherence, looking selfish and rather angry, failing to regenerate most of altruism or Fun Theory; *or*\n- Any reasonable-looking construal of \"Try to preserve explicit moral theories\" leads to an incoherent mess of assertions about various people going to Hell and capitalism being bad for you.\n\nSince we can't guess using our present computing power which rule would cause us to recoil in horror, but the actual horrified recoil would settle the question, we can only defer this single bit of information to one person who's allowed to peek at the results.\n\n(Counterargument: \"Perhaps the ancient Greeks would have recoiled in horror if they saw how little the future would think of a glorious death in battle, thus picking the option we see as wrong, using the stated rule.\")", "date_published": "2016-12-29T05:20:24Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["\"Saving the phenomena\" is the name for the rule that brilliant new scientific theories still need to reproduce our mundane old observations. The point of heliocentric astronomy is not to predict that the Sun careens crazily over the sky, but rather, to explain why the Sun appears to rise and set each day.\n\n\"Rescuing the utility function\" is an analogous principle meant to apply to [naturalistic](https://arbital.com/p/112) moral philosophy: new theories about which things are composed of which other things should, by default, not affect what we value: they should \"add up to moral normality\". 
For example, if your values previously made mention of \"moral responsibility\" or \"subjective experience\", you should by default go on valuing these things after discovering that people are made of parts.\n\nA metaphorical example is that if your utility function values 'heat', and then you discover to your horror that there's no ontologically basic heat, merely little particles bopping around, then you should by default switch to valuing disordered kinetic energy the same way you used to value heat. That's just what heat *is,* what it turned out to be. Likewise valuing \"people\" or \"consciousness\"."], "tags": ["Executable philosophy", "B-Class"], "alias": "3y6"} {"id": "ec88e9d4fef17036bd1e140b4c153731", "title": "Philosophy", "url": "https://arbital.com/p/philosophy", "source": "arbital", "source_type": "text", "text": "For now, this is a stub domain to hold standard concepts from academic philosophy, which are being used elsewhere on Arbital (we're not currently targeting this domain as such).", "date_published": "2016-05-31T16:54:17Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "3yb"} {"id": "0a7848f89edc6724bd7c04a7c3ba8ade", "title": "Total alignment", "url": "https://arbital.com/p/total_alignment", "source": "arbital", "source_type": "text", "text": "An advanced agent can be said to be \"totally aligned\" when it can assess the *exact* [https://arbital.com/p/-55](https://arbital.com/p/-55) of well-described outcomes and hence the *exact* subjective value of actions, policies, and plans; where [https://arbital.com/p/-55](https://arbital.com/p/-55) has its overridden meaning of a metasyntactic variable standing in for \"whatever we really do or really should value in the world or want from an Artificial Intelligence\" (this is the same as \"normative\" if the speaker believes in normativity). That is: It's an advanced agent that captures *all* the distinctions we would make or should make within which outcomes are good or bad; it has \"full coverage\" of the true or intended goals; it correctly resolves every [https://arbital.com/p/-2fr](https://arbital.com/p/-2fr).\n\nWe don't need to try and give such an AI simplified orders like, e.g., \"try to have a [lower impact](https://arbital.com/p/2pf)\" because we're worried about, e.g., a [https://arbital.com/p/-42](https://arbital.com/p/-42) problem on trying to draw exact boundaries around what constitutes a bad impact. 
The AI knows *everything* worth knowing about which impacts are bad, and even if it thinks of a really weird exotic plan, it will still be able to figure out which aspects of this plan match our intended notion of [https://arbital.com/p/-55](https://arbital.com/p/-55) or a normative notion of [https://arbital.com/p/-55](https://arbital.com/p/-55).\n\nIf this agent does not systematically underestimate the probability of bad outcomes / overestimate the probability of good outcomes, and its maximization over policies is not subject to adverse subjection, its estimates of expected [https://arbital.com/p/-55](https://arbital.com/p/-55) will be well-calibrated even from our own outside standpoint.", "date_published": "2016-06-06T17:52:05Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "41k"} {"id": "acba8c1219875e215b3fb82d06810719", "title": "Superintelligent", "url": "https://arbital.com/p/superintelligent", "source": "arbital", "source_type": "text", "text": "Machine performance inside a domain (class of problems) can potentially be:\n\n- Optimal (impossible to do better)\n- Strongly superhuman (better than all humans by a significant margin)\n- Weakly superhuman (better than all the humans most of the time and most of the humans all of the time)\n- Par-human (performs about as well as most humans, better in some places and worse in others)\n- Subhuman or infrahuman (performs worse than most humans)\n\nA superintelligence is either 'strongly superhuman', or else at least 'optimal', across all cognitive domains. It can't win against a human at [logical tic-tac-toe](https://arbital.com/p/9s), but it plays optimally there. In a real-world game of tic-tac-toe that it strongly wanted to win, it might sabotage the opposing player, deploying superhuman strategies on the richer \"real world\" gameboard.\n\nI. J. Good originally used 'ultraintelligence' to denote the same concept: \"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.\"\n\nTo say that a hypothetical agent or process is \"superintelligent\" will usually imply that it has all the [advanced-agent properties](https://arbital.com/p/2c). \n\nSuperintelligences are still [bounded](https://arbital.com/p/2rd) (if the character of physical law at all resembles the Standard Model of physics). They are (presumably) not infinitely smart, infinitely fast, all-knowing, or able to achieve every describable outcome using their available resources and options. However:\n\n- A supernova isn't infinitely hot, but it's still pretty warm. \"Bounded\" does not imply \"small\". You should not try to walk into a supernova using a standard flame-retardant jumpsuit after reasoning, correctly but unhelpfully, that it is only boundedly hot.\n- A superintelligence doesn't know everything and can't perfectly estimate every quantity. However, to say that something is \"superintelligent\" or superhuman/optimal in every cognitive domain should almost always imply that its estimates are [epistemically efficient relative to every human and human group](https://arbital.com/p/6s). Even a superintelligence may not be able to exactly estimate the number of hydrogen atoms in the Sun, but a human shouldn't be able to say, \"Oh, it will probably underestimate the number by 10% because hydrogen atoms are pretty light\" - the superintelligence knows that too. 
For us to know better than the superintelligence is at least as implausible as our being able to predict a 20% price increase in Microsoft's stock six months in advance without any private information.\n- A superintelligence is not omnipotent and can't obtain every describable outcome. But to say that it is \"superintelligent\" should suppose at least that it is [instrumentally efficient relative to humans](https://arbital.com/p/6s): We should not suppose that a superintelligence carries out any policy $\\pi_0$ such that a human can think of a policy $\\pi_1$ which would get more of the agent's [utility](https://arbital.com/p/109). To put it another way, the assertion that a superintelligence optimizing for utility function $U,$ would pursue a policy $\\pi_0,$ is by default refuted if we observe some $\\pi_1$ such that, so far as we can see, $\\mathbb E[U | \\pi_0] < \\mathbb E[U | \\pi_1].$ We're not sure the efficient agent will do $\\pi_1$ - there might be an even better alternative we [haven't foreseen](https://arbital.com/p/9f) - but we should regard it as very likely that it won't do $\\pi_0.$\n\nIf we're talking about a hypothetical superintelligence, probably we're either supposing that an [intelligence explosion](https://arbital.com/p/428) happened, or we're talking about a limit state approached by a long period of progress.\n\nMany/most problems in [AI alignment](https://arbital.com/p/2v) seem like they ought to first appear at a point short of full superintelligence. As part of the project of making discourse about advanced agents precise, we should try to identify the key advanced agent property more precisely than saying \"this problem would appear on approaching superintelligence\" - to suppose superintelligence is usually *sufficient* but will rarely be necessary.\n\nFor the book, see [https://arbital.com/p/3db](https://arbital.com/p/3db).", "date_published": "2016-06-08T15:24:56Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A supernova isn't infinitely hot, but it's still pretty darned hot; in the same sense, a superintelligence isn't infinitely smart, but it's pretty darned smart: strictly superhuman across all cognitive domains, by a significant margin, or else as a fallback merely optimal. (A superintelligence can't win against a human at [logical tic-tac-toe](https://arbital.com/p/9s), though in real-world tic-tac-toe it could disassemble the opposing player.) 
Superintelligences are [epistemically and instrumentally efficient](https://arbital.com/p/6s) relative to humans, and have all the other [advanced agent properties](https://arbital.com/p/2c) as well."], "tags": ["B-Class"], "alias": "41l"} {"id": "b3ba79029ffad4601857836f3718835b", "title": "William Frankena's list of terminal values", "url": "https://arbital.com/p/frankena_goods", "source": "arbital", "source_type": "text", "text": "\"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.\" -- William Frankena's list of [things valued in themselves rather than only for their further consequences](https://arbital.com/p/1bh).", "date_published": "2016-06-06T23:53:46Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "41r"} {"id": "69e77afb366c23be388d46274f891001", "title": "Nearest unblocked strategy", "url": "https://arbital.com/p/nearest_unblocked", "source": "arbital", "source_type": "text", "text": "# Introduction\n\n'Nearest unblocked strategy' seems like it should be a [foreseeable](https://arbital.com/p/6r) problem of trying to get rid of undesirable AI behaviors by adding specific penalty terms to them, or otherwise trying to exclude one class of observed or foreseen bad behaviors. Namely, if a decision criterion thinks $X$ is the best thing to do, and you add a penalty term $P$ that you think excludes everything inside $X,$ the *next-best* thing to do may be a very similar thing $X'$ which is the most similar thing to $X$ that doesn't trigger $P.$\n\n## Example: Producing happiness.\n\nSome very early proposals for AI alignment suggested that AIs be targeted on producing human happiness. Leaving aside various other objections, arguendo, imagine the following series of problems and attempted fixes:\n\n- By hypothesis, the AI is successfully infused with a goal of \"human happiness\" as a utility function over human brain states. (Arguendo, this predicate is narrowed sufficiently that the AI does not just want to construct [the tiniest, least resource-intensive brains experiencing the largest amount of happiness per erg of energy](https://arbital.com/p/2w).)\n- Initially, the AI seems to be pursuing this goal in good ways; it organizes files, tells funny jokes, helps landladies take out the garbage, etcetera.\n- Encouraged, the programmers further improve the AI and add more computing power.\n- The AI gains a better understanding of the world, and the AI's [policy space expands](https://arbital.com/p/6q) to include conceivable options like \"administer heroin\".\n- The AI starts planning how to administer heroin to people.\n- The programmers notice this before it happens. 
(Arguendo, due to successful transparency features, or an imperative to [check plans with the users](https://arbital.com/p/2qq), which operated as [intended](https://arbital.com/p/6h) at the AI's current level of intelligence.)\n- The programmers edit the AI's utility function and add a penalty of -100 utilons for any event categorized as \"the AI administers heroin to humans\". (Arguendo, the AI's current level of intelligence does not suffice to [prevent the programmers from editing its utility function](https://arbital.com/p/), despite the convergent instrumental incentive to avoid this; nor does it successfully [deceive](https://arbital.com/p/10f) the programmers.)\n- The AI gets slightly smarter. New conceivable options enter the AI's option space.\n- The AI starts wanting to administer cocaine to humans (instead of heroin).\n- The programmers read through the current schedule of prohibited drugs and add penalty terms for administering marijuana, cocaine, etcetera.\n- The AI becomes slightly smarter. New options enter its policy space.\n- The AI starts thinking about how to research a new happiness drug not on the list of drugs that its utility function designates as bad.\n- The programmers, after some work, manage to develop a category for 'The AI forcibly administering any kind of psychoactive drug to humans' which is broad enough that the AI stops suggesting research campaigns to develop things slightly outside the category.\n- The AI wants to build an external system to administer heroin, so that it won't be classified inside this set of bad events \"the AI forcibly administering drugs\".\n- The programmers generalize the penalty predicate to include \"machine systems in general forcibly administering heroin\" as a bad thing.\n- The AI recalculates what it wants, and begins to want to pay humans to administer heroin.\n- The programmers try to generalize the category of penalized events to include non-voluntarily administration of drugs in general that produce happiness, whether done by humans or AIs. The programmers patch this category so that the AI is not trying to shut down at least the nicer parts of psychiatric hospitals.\n- The AI begins planning an ad campaign to persuade people to use heroin voluntarily.\n- The programmers add a penalty of -100 utilons for \"AIs *persuading* humans to use drugs\".\n- The AI goes back to helping landladies take out the garbage. All seems to be well.\n- The AI continues to increase in intelligence, becoming capable enough that the AI can no longer be edited against its own will.\n- The AI notices the option \"Tweak human brains to express extremely high levels of endogenous opiates, then take care of their twitching bodies to so they can go on being happy\".\n\nThe overall story is one where the AI's preferences on round $i,$ denoted $U_i,$ are observed to arrive at an attainable optimum $X_i$ which the humans see as undesirable. The humans devise a penalty term $P_i$ intended to exclude the undesirable parts of the policy space, and add this to $U_i$ creating a new utility function $U_{i+1},$ after which the AI's optimal policy settles into a new state $X_i^*$ that seems acceptable. 
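A minimal toy sketch of this patch-and-evade cycle (purely illustrative; the policy labels, $U$-scores, and the "humans notice anything scoring suspiciously high" check below are invented for the example and are not part of the original analysis):

```python
# Toy patch-and-evade loop (illustrative only; all options and scores are made up).
# Each round the policy space widens, the agent takes the attainable optimum of U
# minus the current penalty terms, and the humans blacklist whatever weird optimum
# just appeared.

policy_space = [("help humans directly", 10)]
new_options = [("administer heroin", 50),
               ("administer cocaine", 49),
               ("fund a novel happiness drug", 48),
               ("tile the universe with smileyfaces", 100)]
penalized = set()

def attainable_optimum():
    # The agent's step: maximize U over currently conceivable, non-penalized policies.
    return max((p for p in policy_space if p[0] not in penalized), key=lambda p: p[1])

for round_i in range(1, 5):
    policy_space.append(new_options.pop(0))  # capability gain widens the policy space
    name, score = attainable_optimum()
    print(f"round {round_i}: agent pursues '{name}' (U = {score})")
    if score > 20:           # humans notice a high-U but low-V optimum...
        penalized.add(name)  # ...and patch the utility function with a penalty term
```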
However, after the next expansion of the policy space, $U_{i+1}$ settles into a new attainable optimum $X_{i+1}$ which is very similar to $X_i$ and makes the minimum adjustment necessary to evade the boundaries of the penalty term $P_i,$ requiring a new penalty term $P_{i+1}$ to exclude this new misbehavior.\n\n(The end of this story *might* not kill you if the AI had enough successful, [advanced-safe](https://arbital.com/p/2l) [corrigibility features](https://arbital.com/p/45) that the AI would [indefinitely](https://arbital.com/p/2x) go on [checking](https://arbital.com/p/2qq) [novel](https://arbital.com/p/2qp) policies and [novel](https://arbital.com/p/2qp) goal instantiations with the users, not strategically hiding its disalignment from the programmers, not deceiving the programmers, letting the programmers edit its utility function, not doing anything disastrous before the utility function had been edited, etcetera. But you wouldn't want to rely on this. You would not want in the first place to operate on the paradigm of 'maximize happiness, but not via any of these bad methods that we have already excluded'.)\n\n# Preconditions\n\nRecurrence of a nearby unblocked strategy is argued to be a [foreseeable difficulty](https://arbital.com/p/6r) given the following preconditions:\n\n• The AI is a [consequentialist](https://arbital.com/p/9h), or is conducting some other search such that when the search is blocked at $X,$ the search may happen upon a similar $X'$ that fits the same criterion that originally promoted $X.$ E.g. in an agent that selects actions on the basis of their consequences, if an event $X$ leads to goal $G$ but $X$ is blocked, then a similar $X'$ may also have the property of leading to $G.$\n\n• The search is taking place over a [rich domain](https://arbital.com/p/9j) where the space of relevant neighbors around X is too complicated for us to be certain that we have described all the relevant neighbors correctly. If we imagine an agent playing [the purely ideal game of logical Tic-Tac-Toe](https://arbital.com/p/9s), then if the agent's utility function hates playing in the center of the board, we can be sure (because we can exhaustively consider the space) that there are no Tic-Tac-Toe squares that behave strategically almost like the center but don't meet the exact definition we used of 'center'. In the far more complicated real world, when you eliminate 'administer heroin' you are very likely to find some other chemical or trick that is strategically mostly equivalent to administering heroin. See \"[Almost all real-world domains are rich](https://arbital.com/p/RealIsRich)\".\n\n• From our perspective on [https://arbital.com/p/-55](https://arbital.com/p/-55), the AI does not have an [absolute identification of value](https://arbital.com/p/) for the domain, due to some combination of \"the domain is rich\" and \"[value is complex](https://arbital.com/p/5l)\". Chess is complicated enough that human players can't absolutely identify winning moves, but since a chess program can have an absolute identification of which endstates constitute winning, we don't run into a problem of unending patches in identifying which states of the board are good play. 
(However, if we consider a very early chess program that (from our perspective) was trying to be a consequentialist but wasn't very good at it, then we can imagine that, if the early chess program consistently threw its queen onto the right edge of the board for strange reasons, forbidding it to move the queen there might well lead it to throw the queen onto the left edge for the same strange reasons.)\n\n# Arguments\n\n## 'Nearest unblocked' behavior is sometimes observed in humans\n\nAlthough humans obeying the law make poor analogies for mathematical algorithms, in some cases human economic actors expect not to encounter legal or social penalties for obeying the letter rather than the spirit of the law. In those cases, after a previously high-yield strategy is outlawed or penalized, the result is very often a near-neighboring result that barely evades the letter of the law. This illustrates that the theoretical argument also applies in practice to at least some pseudo-economic agents (humans), as we would expect given the stated preconditions.\n\n## [Complexity of value](https://arbital.com/p/5l) means we should not expect to find a simple encoding to exclude detrimental strategies\n\nTo a human, 'poisonous' is one word. In terms of molecular biology, the exact volume of the configuration space of molecules that is 'nonpoisonous' is very complicated. By having a single word/concept for poisonous-vs.-nonpoisonous, we're *dimensionally reducing* the space of edible substances - taking a very squiggly volume of molecule-space, and mapping it all onto a linear scale from 'nonpoisonous' to 'poisonous'.\n\nThere's a sense in which human cognition implicitly performs dimensional reduction on our solution space, especially by simplifying dimensions that are relevant to some component of our values. There may be some psychological sense in which we feel like \"do X, only not weird low-value X\" ought to be a simple instruction, and an agent that repeatedly produces the next unblocked weird low-value X is being perverse - that the agent, given a few examples of weird low-value Xs labeled as noninstances of the desired concept, ought to be able to just generalize to not produce weird low-value Xs.\n\nIn fact, if it were possible to [encode all relevant dimensions of human value into the agent](https://arbital.com/p/full_coverage) then we could just say *directly* to \"do X, but not low-value X\". By the definition of [https://arbital.com/p/-full_coverage](https://arbital.com/p/-full_coverage), the agent's concept for 'low-value' includes everything that is actually of low [value](https://arbital.com/p/55), so this one instruction would blanket all the undesirable strategies we want to avoid.\n\nConversely, the truth of the [complexity of value thesis](https://arbital.com/p/5l) would imply that the simple word 'low-value' is dimensionally reducing a space of tremendous [algorithmic complexity](https://arbital.com/p/5v). Thus the effort required to actually convey the relevant dos and don'ts of \"X, only not weird low-value X\" would be high, and a human-generated set of supervised examples labeled 'not the kind of X we mean' would be unlikely to cover and stabilize all the dimensions of the underlying space of possibilities. 
Since the weird low-value X cannot be eliminated in one instruction or several patches or a human-generated set of supervised examples, the [https://arbital.com/p/-42](https://arbital.com/p/-42) problem will recur incrementally each time a patch is attempted and then the policy space is widened again.\n\n# Consequences\n\n[https://arbital.com/p/42](https://arbital.com/p/42) being a [foreseeable difficulty](https://arbital.com/p/6r) is a major contributor to worrying that short-term incentives in AI development, to get today's system working today, or to have today's system not exhibiting any immediately visible problems today, will not lead to advanced agents which are [safe after undergoing significant gains in capability](https://arbital.com/p/2l).\n\nMore generally, [https://arbital.com/p/-42](https://arbital.com/p/-42) is a [foreseeable](https://arbital.com/p/6r) reason why saying \"Well just exclude X\" or \"Just write the code to not X\" or \"Add a penalty term for X\" doesn't solve most of the issues that crop up in AI alignment.\n\nEven more generally, this suggests that we want AIs to operate inside a space of [conservative categories containing actively whitelisted strategies and goal instantiations](https://arbital.com/p/2qp), rather than having the AI operate inside a (constantly expanding) space of all conceivable policies minus a set of blacklisted categories.", "date_published": "2016-05-01T20:22:53Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Nearest Unblocked Strategy is a hypothetical source of [https://arbital.com/p/-48](https://arbital.com/p/-48) in the [alignment problem](https://arbital.com/p/5s) for [advanced agents](https://arbital.com/p/2c) that search [rich solution spaces](https://arbital.com/p/9j). If an agent's [preference framework](https://arbital.com/p/5f) is [patched](https://arbital.com/p/48) to try to block a possible solution that seems undesirable, the next-best solution found may be the most similar solution that technically avoids the block. This kind of patching seems especially likely to lead to a [change](https://arbital.com/p/-context) where a patch appears [beneficial](https://arbital.com/p/3d9) in a narrow option space, but proves detrimental after increased intelligence opens up more options."], "tags": ["Patch resistance", "B-Class"], "alias": "42"} {"id": "d587b9980c2cdb4214704152e7968b73", "title": "Intelligence explosion", "url": "https://arbital.com/p/intelligence_explosion", "source": "arbital", "source_type": "text", "text": "An \"intelligence explosion\" is what happens if a machine intelligence has [fast, consistent returns on investing work into improving its own cognitive powers, over an extended period](https://intelligence.org/files/IEM.pdf). 
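A minimal toy recurrence (not from the original page; "capability" and the return rate `r` are invented stand-ins) showing why the size of the per-cycle return on cognitive reinvestment matters so much:

```python
# Toy model of reinvested cognitive returns (illustrative only).
# Each cycle, the system invests its current capability in self-improvement
# and gets back r units of new capability per unit invested.

def reinvest(capability, r, cycles):
    for _ in range(cycles):
        capability += r * capability
    return capability

print(reinvest(1.0, r=0.05, cycles=20))  # weak returns: roughly 2.7x after 20 cycles
print(reinvest(1.0, r=1.0, cycles=20))   # fast, consistent returns: over a million-fold
```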
This would most stereotypically happen because it became able to optimize its own cognitive software, but could also apply in the case of \"invested cognitive power in seizing all the computing power on the Internet\" or \"invested cognitive power in cracking the protein folding problem and then built nanocomputers\".", "date_published": "2016-06-07T18:27:47Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "428"} {"id": "899be981f91e00e2a0a4dbd759c50728", "title": "Artificial General Intelligence", "url": "https://arbital.com/p/agi", "source": "arbital", "source_type": "text", "text": "An \"Artificial General Intelligence\" is a machine intelligence possessed of some form of the same [\"significantly more generally applicable\" intelligence](https://arbital.com/p/7vh) that distinguishes humans from our nearest chimpanzee relatives. A bee builds hives, a beaver builds dams, a human looks at both and imagines a dam with honeycomb structure. We can drive cars or do algebra, even though no similar problems were found in our environment of evolutionary adaptedness, and we haven't had time to evolve further to match cars or algebra. The brain's algorithms are sufficiently general, sufficiently cross-domain, that we can learn a tremendous variety of new [domains](https://arbital.com/p/7vf) within our lifetimes. We are not perfectly general - we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. But we're sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities at \"compile time\". An Artificial General Intelligence has the same property; it can learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.", "date_published": "2017-02-18T02:27:15Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "42g"} {"id": "dc10c14cc05695953ca316ca1811ef61", "title": "Advanced nonagent", "url": "https://arbital.com/p/advanced_nonagent", "source": "arbital", "source_type": "text", "text": "A standard agent:\n\n 1. Observes reality\n 2. Uses its observations to build a model of reality\n 3. Uses its model to forecast the effects of possible actions or policies\n 4. Chooses among policies on the basis of its [utility function](https://arbital.com/p/109) over the consequences of those policies\n 5. Carries out the chosen policy\n\n(...and then observes the actual results of its actions, and updates its model, and considers new policies, etcetera.)\n\nIt's conceivable that a cognitively powerful program could carry out some, but not all, of these activities. We could call this an \"advanced pseudoagent\" or \"advanced nonagent\".\n\n# Example: Planning Oracle\n\nImagine that we have an [Oracle](https://arbital.com/p/6x) agent which outputs a plan $\\pi_0$ which is meant to maximize lives saved or [eudaimonia](https://arbital.com/p/) etc., *assuming* that the human operators decide to carry out the plan. 
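As a toy numerical sketch (not from the original page; the plans, values, and adoption probabilities are invented for illustration), the difference between a standard agent and this kind of planning Oracle is which expectation gets maximized when a plan is selected:

```python
# Toy contrast between a standard agent and a planning-Oracle pseudoagent
# (illustrative only; all plans and numbers are made up).

plans = [
    # (name, expected value if the humans carry it out, probability they adopt it)
    ("straightforward cure, honestly explained",  100, 0.9),
    ("better cure that is hard to understand",    120, 0.5),
    ("plan engineered to be maximally persuasive", 95, 1.0),
]

def standard_agent(plans):
    # Optimizes actual expected consequences, which includes optimizing
    # the odds that the operators adopt the plan.
    return max(plans, key=lambda p: p[1] * p[2])

def planning_oracle(plans):
    # Evaluates each plan *assuming* the operators carry it out, without
    # trying to raise the probability of adoption.
    return max(plans, key=lambda p: p[1])

print(standard_agent(plans)[0])   # drifts toward the persuasion-optimized plan
print(planning_oracle(plans)[0])  # picks the plan that is best conditional on adoption
```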
By hypothesis, the agent does not assess the probability that the plan will be carried out, or try to maximize the probability that the plan will be carried out.\n\nWe could look at this as modifying step 4 of the loop: rather than this pseudoagent selecting the output whose expected consequences optimize its utility function, it selects the output that optimizes utility *assuming* some other event occurs (the humans deciding to carry out the plan).\n\nWe could also look at the whole Oracle schema as interrupting step 5 of the loop. If the Oracle works as [intended](https://arbital.com/p/6h), its purpose is not to immediately output optimized actions into the world; rather it is meant to output plans for humans to carry out. This though is more of a metaphorical or big-picture property. If not for the modification of step four where the Oracle calculates $\\mathbb E [U | \\operatorname{do}(\\text{output } \\pi_0), \\text{humans carry out } \\pi_0]$ instead of $\\mathbb E [U | \\operatorname{do}(\\text{output } \\pi_0)],$ the Oracle's outputted plans would just *be* its actions within the agent schema above. (And it would optimize the general effects of its plan-outputting actions, including the problem of getting the humans to carry out the plans.)\n\n# Example: Imitation-based agents\n\n[Imitation-based agents](https://arbital.com/p/2sj) would modify steps 3 and 4 of the loop by \"trying to output an action indistinguishable from the output of the human imitated\" rather than forecasting consequences or optimizing over consequences, except perhaps insofar as forecasting consequences is important for guessing what the human would do, or they're internally imitating a human mode of thought that involves mentally imagining the consequences and choosing between them. \"Imitation-based agents\" might justly be called pseudoagents, in this schema.\n\n(But the \"pseudoagent\" terminology is relatively new, and a bit awkward, and it won't be surprising if we all go on saying \"imitation-based agents\" or \"act-based agents\". The point of having terms like 'pseudoagent' or 'advanced nonagent' is to have a name for the general concept, not to [reserve and guard](https://arbital.com/p/10l) the word 'agent' for only 100% real pure agents.)\n\n# Safety benefits and difficulties\n\nAdvanced pseudoagents and nonagents are usually proposed in the hope of averting some [advanced safety issue](https://arbital.com/p/2l) that seems to arise from the *agenty* part of \"[advanced agency](https://arbital.com/p/2c)\", while preserving other [advanced cognitive powers](https://arbital.com/p/2c) that seem [useful](https://arbital.com/p/6y).\n\nA proposal like this can fail to the extent that it's not pragmatically possible to unentangle one aspect of agency from another; or to the extent that removing that much agency would make the AI [safe but useless](https://arbital.com/p/42k).\n\nSome hypothetical examples that would, if they happened, constitute cases of failed safety or [unworkable tradeoffs](https://arbital.com/p/42k) in pseudoagent compromises:\n\n• Somebody proposes to obtain an Oracle merely in virtue of only giving the AI a text output channel, and only taking what it says as suggestions, thereby interrupting the loop between the agent's policies and it acting in the world. 
If this is all that changes, then from the Oracle's perspective it's still an agent, its text output is its motor channel, and it still immediately outputs whatever act it expects to maximize subjective expected utility, treating the humans as part of the environment to be optimized. It's an agent that somebody is trying to *use* as part of a larger process with an interrupted agent loop, but the AI design itself is a pure agent.\n\n• Somebody advocates for designing an AI that [only computes and outputs probability estimates](https://arbital.com/p/1v4); and never searches for any EU-maximizing policies, let alone outputs them. It turns out that this AI cannot well-manage its internal and reflective operations, because it can't use consequentialism to select the best thought to think next. As a result, the AI design fails to bootstrap, or fails to work sufficiently well before competing AI designs that use internal consequentialism. (Safe but useless, much like a rock.)\n\n• Somebody advocates that an [imitative agent design](https://arbital.com/p/2sj) will avoid invoking the [advanced safety issues that seem like they should be associated with consequentialist reasoning](https://arbital.com/p/2c), because the imitation-based pseudoagent never does any consequentialist reasoning or planning; it only tries to produce an output extremely similar to its training set of observed human outputs. But it turns out (arguendo) that the pseudoagent, to imitate the human, has to imitate consequentialist reasoning, and so the implied dangers end up pretty much the same.\n\n• An agent is supposed to just be an extremely powerful policy-reinforcement learner instead of an expected utility optimizer. After a huge amount of optimization and mutation on a very general representation for policies, it turns out that the best policies, the ones that were the most reinforced by the highest rewards, are computing consequentialist models internally. The actual result ends up being that the AI is doing consequentialist reasoning that is obscured and hidden, since it takes place outside the designed and easily visible high-level-loop of the AI.\n\nComing up with a proposal for an advanced pseudoagent, that still did something pivotal and was actually safer, would reasonably require: (a) understanding how to slice up agent properties along their natural joints; (b) understanding which advanced-agency properties lead to which expected safety problems and how; and (c) understanding which internal cognitive functions would be needed to carry out some particular [pivotal](https://arbital.com/p/6y) task; adding up to (d) see an exploitable prying-apart of the advanced-AI joints.\n\nWhat's often proposed in practice is more along the lines of:\n\n- \"We just need to build AIs without emotions so they won't have drives that make them compete with us.\" (Can you translate that into the language of utility functions and consequentialist planning, please?)\n- \"Let's just build an AI that answers human questions.\" (It's doing a lot more than that internally, so how are the internal operations organized? Also, [what do you do](https://arbital.com/p/6y) with a question-answering AI that averts the consequences of somebody else building a more agenty AI?)\n\nComing up with a sensible proposal for a pseudoagent is hard. 
The reason for talking about \"agents\" in talking about future AIs isn't because the speaker wants to give AIs lots of power and have them wandering the world doing whatever they like under their own drives (for this entirely separate concept see [autonomous AGI](https://arbital.com/p/1g3)). The reason we talk about observe-model-predict-act expected-utility consequentialists, is that this seems to carve a lot of important concepts at their joints. Some alternative proposals exist, but they often have a feel of \"carving against the joints\" or trying to push through an unnatural arrangement, and aren't as natural or as simple to describe.", "date_published": "2016-06-07T23:36:11Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Needs summary", "B-Class"], "alias": "42h"} {"id": "a233b196d1af604338f00135f88a2047", "title": "Safe but useless", "url": "https://arbital.com/p/safe_useless", "source": "arbital", "source_type": "text", "text": "\"This type of safety implies uselessness\" (or conversely, \"any AI powerful enough to be useful will still be unsafe\") is an accusation leveled against a proposed AI safety measure that must, to make the AI safe, be enforced to the point that it will make the AI useless.\n\nFor a non-AI metaphor, consider a scissors and its dangerous blades. We can have a \"safety scissors\" that is only *just* sharp enough to cut paper - but this is still sharp enough to do some damage if you work at it. If you try to make the scissors *even safer* by encasing the dangerous blades in foam rubber, the scissors can't cut paper any more. If the scissors *can* cut paper, it's still unsafe. Maybe you could in principle cut clay with a scissors like that, but this is no defense unless you can tell us [something very useful](https://arbital.com/p/6y) that can be done by cutting clay.\n\nSimilarly, there's an obvious way to try cutting down the allowed output of an [Oracle AGI](https://arbital.com/p/6x) to the point where [all it can do is tell us that a given theorem is provable from the axioms of Zermelo-Fraenkel set theory](https://arbital.com/p/70). This [might](https://arbital.com/p/2j) prevent the AGI from hacking the human operators into letting it out, since all that can leave the box is a single yes-or-no bit, sent at some particular time. An untrusted superintelligence inside this scheme would have the option of strategically not telling us when a theorem *is* provable in ZF; but if the bit from the proof-verifier said that the input theorem was ZF-provable, we could very likely trust that.\n\nBut now we run up against the problem that nobody knows how to [actually save the world](https://arbital.com/p/6y) by virtue of sometimes knowing for sure that a theorem is provable in ZF. 
The scissors has been blunted to where it's probably completely safe, but can only cut clay; and nobody knows how to [do *enough* good](https://arbital.com/p/6y) by cutting clay.\n\n# Ideal models of \"safe but useless\" agents\n\nShould you have cause to do a mathematical study of this issue, then an excellent [ideal model](https://arbital.com/p/107) of a safe but useless agent, embodying maximal safety and minimum usefulness, would be a rock.", "date_published": "2016-06-07T23:38:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Arguendo, when some particular proposed AI safety measures is alleged to be inherently opposed to the useful work the AI is meant to do.\n\nWe could use the metaphor of a scissors and its dangerous blades. We can have a \"safety scissors\" that is only *just* sharp enough to cut paper, but this is still sharp enough to do some damage if you work at it. If you make the scissors *even safer* by encasing the dangerous blades in foam rubber, the scissors can't cut paper any more; and if it *can* cut paper, it's still unsafe. Maybe you can cut clay, but nobody knows how to do a [sufficiently large amount of good](https://arbital.com/p/6y) by cutting clay.\n\nSimilarly, there's an obvious way to cut down the output of an [Oracle AGI](https://arbital.com/p/6x) to the point where [all it can do is tell us that a proposed theorem is provable from the axioms of Zermelo-Fraenkel set theory](https://arbital.com/p/70). Unfortunately, nobody knows how to use a ZF provability oracle to [save the world](https://arbital.com/p/6y)."], "tags": ["B-Class"], "alias": "42k"} {"id": "f96bbd0e4fb9e981a89d44cdca466685", "title": "Needs summary", "url": "https://arbital.com/p/needs_summary_meta_tag", "source": "arbital", "source_type": "text", "text": "By default, a page's [summary](https://arbital.com/p/1kl) (that shows up in the popover when a user hovers over a [greenlink](https://arbital.com/p/17f)) is the page's first paragraph. Sometimes the first paragraph is not well-suited for a popover summary. Add this tag to pages that need to have a custom summary.\n\nSome pages may require an [accessible](https://arbital.com/p/5cb), [technical](https://arbital.com/p/4pt), or [brief](https://arbital.com/p/4q2) summary.", "date_published": "2016-07-23T03:45:24Z", "authors": ["Alexei Andreev", "Dylan Hendrickson", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina", "Team Arbital", "Joe Zeng"], "summaries": [], "tags": [], "alias": "433"} {"id": "5c81cfb6beabf68ca9b88ffc563faff9", "title": "Missing the weird alternative", "url": "https://arbital.com/p/missing_weird", "source": "arbital", "source_type": "text", "text": "The \"[https://arbital.com/p/47](https://arbital.com/p/47)\" problem is alleged to be a foreseeable difficulty of coming up with a [good](https://arbital.com/p/3d9) goal for an [AGI](https://arbital.com/p/42g) (part of the [alignment problem](https://arbital.com/p/2v) for [advanced agents](https://arbital.com/p/2c)). Roughly, an \"unforeseen maximum\" happens when somebody thinks that \"produce smiles\" would be a great goal for an AGI, because you can produce lots of smiles by making people happy, and making people happy is good. 
However, while it's true that making people happy by ordinary means will produce *some* smiles, what will produce even *more* smiles is administering regular doses of heroin or turning all matter within reach into tiny molecular smileyfaces.\n\n\"Missing the weird alternative\" is an attempt to [psychologize](https://arbital.com/p/43h) about why people talking about AGI utility functions might make this kind of oversight systematically. To avoid [Bulverism](https://arbital.com/p/43k), if you're not yet convinced that missing a weird alternative *would* be a dangerous oversight, please read [https://arbital.com/p/47](https://arbital.com/p/47) first or instead.\n\nIn what follows we'll use $U$ to denote a proposed utility function for an AGI, $V$ to denote our own [normative values](https://arbital.com/p/55), $\\pi_1$ to denote the high-$V$ policy that somebody thinks is the attainable maximum of $U,$ and $\\pi_0$ to denote what somebody else suggests is a higher-$U$ lower-$V$ alternative.\n\n# Alleged historical cases\n\nSome historical instances of AGI goal systems proposed in a publication or conference presentation, that have been argued to be \"missing the weird alternative\" are:\n\n- \"Just program AIs to maximize their gains in compression of sensory data.\" Proposed by Juergen Schmidhuber, director of IDSIA, in a presentation at the 2009 Singularity Summit; see the entry on [https://arbital.com/p/47](https://arbital.com/p/47).\n - Claimed by Schmidhuber to motivate art and science.\n - Yudkowsky suggested that this would, e.g., motivate the AI to construct objects that encrypted streams of 1s or 0s, then revealed the encryption key to the AI.\n- Program an AI by showing it pictures/video of smiling faces to train (via supervised learning) which sensory events indicate good outcomes. Formally proposed twice, once by J. Storrs Hall in the book *Beyond AI,* once in an ACM paper by somebody who since exercised their [sovereign right to change their mind](https://arbital.com/p/43l).\n - Claimed to motivate an AI to make people happy.\n - Suggested by Yudkowsky to motivate tiling the universe with tiny molecular smileyfaces.\n\nMany other instances of this alleged issue have allegedly been spotted in more informal dicussion.\n\n# Psychologized reasons to miss a weird alternative\n\n[Psychologizing](https://arbital.com/p/43h) some possible reasons why some people might systematically \"miss the weird alternative\", assuming that was actually happening:\n\n## Our brain doesn't bother searching V-bad parts of policy space\n\nArguendo: The human brain is built to implicitly search for high-$V$ ways to accomplish a goal. Or not actually high-$V$, but high-$W$ where $W$ is what we intuitively want, which [has something to do with](https://arbital.com/p/313) $V.$ \"Tile the universe with tiny smiley-faces\" is low-$W$ so doesn't get considered.\n\nArguendo, your brain is built to search for policies *it* prefers. If you were looking for a way to open a stuck jar, your brain wouldn't generate the option of detonating a stick of dynamite, because that would be a policy ranked very low in your preference-ordering. So what's the point of searching that part of the policy space?\n\nThis argument seems to [prove too much](https://arbital.com/p/3tc) in that it suggests that a chess player would be unable to search for their opponent's most preferred moves, if human brains could only search for policies that were high inside their own preference ordering. 
But there could be an explicit perspective-taking operation required, and somebody modeling an AI they had warm feelings about might fail to fully take the AI's perspective; that is, they fail to carry out an explicit cognitive step needed to switch off the \"only $W$-good policies\" filter.\n\nWe might also have a *limited* native ability to take perspectives on goals not our own. I.e., without further training, our brain can readily imagine that a chess opponent wants us to lose, or imagine that an AI wants to kill us because it hates us, and consider \"reasonable\" policy options along those lines. But this expanded policy search still fails to consider policies on the lines of \"turn everything into tiny smileyfaces\" when asking for ways to produce smiles, because *nobody* in the ancestral environment would have wanted that option and so our brain has a hard time natively modeling it.\n\n## Our brain doesn't automatically search weird parts of policy space\n\nArguendo: The human brain doesn't search \"weird\" (generalization-violating) parts of the policy space without an explicit effort.\n\nThe potential issue here is that \"tile the galaxy with tiny smileyfaces\" or \"build environmental objects that encrypt streams of 1s or 0s, then reveal secrets\" would be *weird* in the sense of violating generalizations that usually hold about policies or consequences in human experience. Not generalizations like, \"nobody wants smiles smaller than an inch\", but rather, \"most problems are not solved with tiny molecular things\".\n\n[https://arbital.com/p/2w](https://arbital.com/p/2w) would tend to push the maximum (attainable optimum) of $U$ in \"weird\" or \"extreme\" directions - e.g., the *most* smiles can be obtained by making them very small, if this variable is not otherwise constrained. So the unforeseen maxima might tend to violate implicit generalizations that usually govern most goals or policies and that our brains take for granted. Aka, the unforeseen maximum isn't considered/generated by the policy search, because it's weird.\n\n## Conflating the helpful with the optimal\n\nArguendo: Someone might simply get as far as \"$\\pi_1$ increases $U$\" and then stop there and conclude that a $U$-agent does $\\pi_1.$\n\nThat is, they might just not realize that the argument \"an advanced agent optimizing $U$ will execute policy $\\pi_1$\" requires \"$\\pi_1$ is the best way to optimize $U$\" and not just \"ceteris paribus, doing $\\pi_1$ is better for $U$ than doing nothing\". So they don't realize that establishing \"a $U$-agent does $\\pi_1$\" requires establishing that no other $\\pi_k$ produces higher expected $U$. So they just never search for a $\\pi_k$ like that.\n\nThey might also be implicitly modeling $U$-agents as only weakly optimizing $U$, and hence not seeing a $U$-agent as facing tradeoffs or opportunity costs; that is, they implicitly model a $U$-agent as having no desire to produce any more $U$ than $\\pi_1$ produces. 
Again psychologizing, it does sometimes seem like people try to mentally model a $U$-agent as \"an agent that sorta wants to produce some $U$ as a hobby, so long as nothing more important comes along\" rather \"an agent whose action-selection criterion entirely consists of doing whatever action is expected to lead to the highest $U$\".\n\nThis would well-reflect the alleged observation that people allegedly \"overlooking the weird alternative\" seem more like they failed to search at all, than like they conducted a search but couldn't think of anything.\n\n## Political persuasion instincts on convenient instrumental strategies\n\nIf the above hypothetical was true - that people just hadn't thought of the possibility of higher-$U$ $\\pi_k$ existing - then we'd expect them to quickly change their minds upon this being pointed out. Actually, it's been empirically observed that there seems to be a lot more resistance than this.\n\nOne possible force that could produce resistance to the observation \"$\\pi_0$ produces more $U$\" - over and above the null hypothesis of ordinary pushback in argument, admittedly sometimes a very powerful force on its own - might be a brain running in a mode of \"persuade another agent to execute a strategy $\\pi$ which is convenient to me, by arguing to the agent that $\\pi$ best serves the agent's own goals\". E.g. if you want to persuade your boss to give you a raise, one would be wise to argue \"you should give me a raise because it will make this project more efficient\" rather than \"you should give me a raise because I like money\". By the general schema of the political brain, we'd be very likely to have built-in support for searching for arguments that policy $\\pi$ that we just happen to like, is a *great* way to achieve somebody else's goal $U.$\n\nThen on the same schema, a competing policy $\\pi_0$ which is *better* at achieving the other agent's $U$, but less convenient for us than $\\pi_1$, is an \"enemy soldier\" in the political debate. We'll automatically search for reasons why $\\pi_0$ is actually really bad for $U$ and $\\pi_1$ is actually really good, and feel an instinctive dislike of $\\pi_0.$ By the standard schema on the self-deceptive brain, we'd probably convince ourselves that $\\pi_0$ is really bad for $U$ and $\\pi_1$ is really best for $U.$ It would not be advantageous to our persuasion to go around noticing ourselves all the reasons that $\\pi_0$ is good for $U.$ And we definitely wouldn't start spontaneously searching for $\\pi_k$ that are $U$-better than $\\pi_1,$ once we'd already found some $\\pi_1$ that was very convenient to us.\n\n(For a general post on the \"fear of third alternatives\", see [here](http://lesswrong.com/lw/hu/the_third_alternative/). 
This essay also suggests that a good test for whether you might be suffering from \"fear of third alternatives\" is to ask yourself whether you instinctively dislike or automatically feel skeptical of any proposed other options for achieving the stated criterion.)\n\n## The [apple pie problem](https://arbital.com/p/apple_pie_problem)\n\nSometimes people propose that the only utility function an AGI needs is $U$, where $U$ is something very good, like democracy or freedom or [apple pie](https://arbital.com/p/apple_pie_problem).\n\nIn this case, perhaps it sounds like a good thing to say about $U$ that it is the only utility function an AGI needs; and refusing to agree with this is *not* praising $U$ as highly as possible, hence an enemy soldier against $U.$\n\nOr: The speaker may not realize that \"$U$ is really quite amazingly fantastically good\" is not the same proposition as \"an agent that maximizes $U$ and nothing else is [beneficial](https://arbital.com/p/3d9)\", so they treat contradictions of the second statement as though they contradicted the first.\n\nOr: Pointing out that $\\pi_0$ is high-$U$ but low-$V$ may sound like an argument against $U,$ rather than an observation that apple pie is not the only good. \"A universe filled with nothing but apple pie has low value\" is not the same statement as \"apple pie is bad and should not be in our utility function\".\n\nIf the \"apple pie problem\" is real, it seems likely to implicitly rely on or interact with some of the other alleged problems. For example, someone may not realize that their own complex values $W$ contain a number of implicit filters $F_1, F_2$ which act to filter out $V$-bad ways of achieving $U,$ because they themselves are implicitly searching only for high-$W$ ways of achieving $U.$", "date_published": "2016-06-26T23:16:55Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Psychologizing", "B-Class"], "alias": "43g"} {"id": "f43063d92bc1ba6840367cf756ac0c90", "title": "Psychologizing", "url": "https://arbital.com/p/psychologizing", "source": "arbital", "source_type": "text", "text": "\"Psychologizing\" is when we go past arguing that a position *is in fact* wrong, and start considering *what could have gone wrong in people's minds* to make them believe something so wrong. Realistically, we can't entirely avoid psychologizing - sometimes cognitive biases are real and understanding how they potentially apply is important. Nonetheless, since psychologizing is potentially a pernicious and poisonous activity, Arbital's discussion norms say to explicitly tag or greenlink with \"[https://arbital.com/p/43h](https://arbital.com/p/43h)\" each place where you speculate about the psychology of how people could possibly be so wrong. Again, psychologizing isn't necessarily wrong - sometimes people do make mistakes for systematic psychological reasons that are understandable - but it's dangerous, potentially degenerates extremely quickly, and deserves to be highlighted each and every time.\n\nIf you go off on an extended discourse about why somebody is mentally biased or thinking poorly, *before* you've argued on the object level that they are in fact mistaken about the subject matter, this is [Bulverism](https://en.wikipedia.org/wiki/Bulverism). Bulverism is flatly not okay on Arbital. 
If you want to speculate about what goes wrong in the minds of people who believe in UFOs, then a good way of doing this on Arbital would be to *first* link a page arguing that there are in fact no UFOs, and say \"I think this page establishes there are no UFOs; if you're not familiar with the arguments, read that first. Given that I think there are in fact no UFOs, I will now [psychologize](https://arbital.com/p/43h) about why people would mistakenly believe in UFOs...\"\n\nIf that clear label makes your page less snappy, persuasive, and fun to read, please consider that to be an *intended effect* of this rule. Psychologizing *is* fun and that's why we want to take steps against it coming to dominate discussions here.", "date_published": "2016-06-08T18:02:45Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "43h"} {"id": "08e327c281b51377cac807e4774ceba8", "title": "Don't try to solve the entire alignment problem", "url": "https://arbital.com/p/dont_solve_whole_problem", "source": "arbital", "source_type": "text", "text": "On first approaching the [alignment problem](https://arbital.com/p/2v) for [advanced agents](https://arbital.com/p/2c), aka \"[robust](https://arbital.com/p/2l) and [beneficial](https://arbital.com/p/3d9) [AGI](https://arbital.com/p/42g)\", aka \"[Friendly AI](https://arbital.com/p/1g2)\", a very common approach is to try to come up with *one* idea that solves *all* of [AI alignment](https://arbital.com/p/2v). A simple design concept; a simple utility function; a simple development strategy; one guideline for everyone to adhere to; or a large diagram full of boxes with lines to other boxes; that is allegedly sufficient to realize around as much [benefit](https://arbital.com/p/55) from beneficial [superintelligences](https://arbital.com/p/41l) as can possibly be realized.\n\nWithout knowing the details of your current idea, this article can't tell you why it's wrong - though frankly we've got a strong [prior](https://arbital.com/p/1rm) against it at this point. But some very standard advice would be:\n\n- Glance over what current discussants think of as [standard challenges and difficulties of the overall problem](https://arbital.com/p/2l), i.e., why people think the alignment might be hard, and what standard questions a new approach would face.\n- Consider focusing your attention down on a [single subproblem](https://arbital.com/p/4m) of alignment, and trying to make progress there - not necessarily solve it completely, but contribute nonobvious knowledge about the problem that wasn't there before. (If you have a broad new approach that solves all of alignment, maybe you could walk through *exactly* how it solves one [crisply identified subproblem](https://arbital.com/p/2mx)?)\n- Check out the flaws in previous proposals that people currently think won't work. E.g. various versions of [utility indifference](https://arbital.com/p/1b7).\n\nA good initial goal is not \"persuade everyone in the field to agree with a new idea\" but rather \"come up with a contribution to an open discussion that is sufficiently crisply stated that, if it were in fact wrong, it would be possible for somebody else to shoot it down today.\" I.e., an idea such that if you're wrong, this can be pointed out in the form of a crisply derivable consequence of a crisply specified idea, rather than it taking 20 years to see what happens. 
For there to be sustained progress, propositions need to be stated modularly enough and crisply enough that there can be a conversation about them that goes beyond \"does not / does too\" - ideas need to be stated in forms that have sufficiently clear and derivable consequences that if there's a problem, people can see the problem and agree on it.\n\nAlternatively, [poke a clearly demonstrable flaw in some solution currently being critiqued](https://arbital.com/p/3nj). Since most proposals in alignment theory get shot down, trying to participate in the critiquing process has a great advantage over trying to invent solutions, in that you'll probably have started with the true premise \"proposal X is broken or incomplete\" rather than the false premise \"proposal X works and solves everything\".\n\n[https://arbital.com/p/43h](https://arbital.com/p/43h) a little about why people might try to solve all of alignment theory in one shot, one might recount Robyn Dawes's advice that:\n\n- Research shows that people come up with better solutions when they discuss the problem as thoroughly as possible before discussing any answers.\n- Dawes has observed that people seem *more* likely to violate this principle as the problem becomes more difficult.\n\n...and finally remark that building a nice machine intelligence correctly on the first try must be pretty darned difficult, since so many people solve it in the first 15 seconds.\n\nIt's possible that everyone working in this field is just missing the obvious and that there *is* some simple idea which solves all the problems. But realistically, you should be aware that everyone in this field has already heard a dozen terrible Total Solutions, and probably hasn't had anything fun happen as a result of discussing them, resulting in some amount of attentional fatigue. (Similarly: If not everyone believes you, or even if it's hard to get people to listen to your solution instead of talking with people they already know, that's not necessarily because of some [deep-seated psychological problem](https://arbital.com/p/43h) on their part, such as being uninterested in outsiders' ideas. Even if you're not an obvious crank, people are still unlikely to take the time out to engage with you unless you signal awareness of what *they* think are the usual issues and obstacles. It's not so different here from other fields.)", "date_published": "2016-06-29T02:48:31Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Rather than trying to solve the entire problem of [building nice Artificial Intelligences](https://arbital.com/p/2v) with a sufficiently right idea:\n\n- Focus on a single, crisply stated subproblem. 
(If your idea solves all of alignment theory, you should also be able to walk through a much smaller-scale example of how it solves one [open problem](https://arbital.com/p/4m), right?)\n- Make one non-obvious new statement about a subproblem of alignment theory, such that if you were in fact wrong, it would be possible to figure this out now rather than in 20 years.\n - E.g., your new statement must be clear enough that its further consequences can be derived in a way that current researchers can agree on and have sustained conversations about it.\n - Or, less likely, that it's possible to test right now with current algorithms, in such a way that [if this kind of AI would blow up later then it would also blow up right now](https://arbital.com/p/6q).\n- Glance over what current workers in the field consider to be [standard difficulties and challenges](https://arbital.com/p/2l), so that you can explain to them in their own language why your theory doesn't fail for the usual reasons."], "tags": ["B-Class"], "alias": "43w"} {"id": "92110372938279dd5a667bf09b2b4f96", "title": "Corrigibility", "url": "https://arbital.com/p/corrigibility", "source": "arbital", "source_type": "text", "text": "A 'corrigible' agent is one that [doesn't interfere](https://arbital.com/p/7g0) with what [we](https://arbital.com/p/9r) would intuitively see as attempts to 'correct' the agent, or 'correct' our mistakes in building it; and permits these 'corrections' despite the apparent [instrumentally convergent reasoning](https://arbital.com/p/10g) saying otherwise.\n\n- If we try to suspend the AI to disk, or shut it down entirely, a corrigible AI will let us do so. (Even though, if suspended, [the AI will then be unable to fulfill what would usually be its goals](https://arbital.com/p/7g2).)\n- If we try to reprogram the AI's utility function or [meta-utility function](https://arbital.com/p/meta_utility), a corrigible AI will allow this modification to go through. (Rather than, e.g., fooling us into believing the utility function was modified successfully, while the AI actually keeps its original utility function as [obscured](https://arbital.com/p/3cq) functionality; as we would expect by default to be [a preferred outcome according to the AI's current preferences](https://arbital.com/p/3r6).)\n\nMore abstractly:\n\n- A corrigible agent experiences no preference or [instrumental pressure](https://arbital.com/p/10k) to interfere with attempts by the programmers or operators to modify the agent, impede its operation, or halt its execution.\n- A corrigible agent does not attempt to manipulate or deceive its operators, especially with respect to properties of the agent that might otherwise cause its operators to modify it.\n- A corrigible agent does not try to [obscure its thought processes](https://arbital.com/p/3cq) from its programmers or operators.\n- A corrigible agent is motivated to preserve the corrigibility of the larger system if that agent self-modifies, constructs sub-agents in the environment, or offloads part of its cognitive processing to external systems; or alternatively, the agent has no preference to execute any of those general activities.\n\nA stronger form of corrigibility would require the AI to positively cooperate or assist, such that the AI would rebuild the shutdown button if it were destroyed, or experience a positive preference *not* to self-modify if self-modification could lead to incorrigibility. 
But this is not part of the primary specification since it's possible that we would *not* want the AI trying to actively be helpful in assisting our attempts to shut it down, and would in fact prefer the AI to be passive about this.\n\nGood proposals for achieving corrigibility in specific regards are [open problems in AI alignment](https://arbital.com/p/4m). Some areas of active current research are [https://arbital.com/p/1b7](https://arbital.com/p/1b7) and [https://arbital.com/p/interruptibility](https://arbital.com/p/interruptibility).\n\nAchieving total corrigibility everywhere via some single, general mental state in which the AI \"knows that it is still under construction\" or \"believes that the programmers know more than it does about its own goals\" is termed '[the hard problem of corrigibility](https://arbital.com/p/3ps)'.\n\n## Difficulties\n\n### Deception and manipulation by default\n\nBy default, most sets of preferences are such that an agent acting according to those preferences will prefer to retain its current preferences. For example, imagine an agent which is attempting to collect stamps. Altering the agent so that it prefers to collect bottle caps would lead to futures where the agent has fewer stamps, and so allowing this event to occur is dispreferred (under the current, stamp-collecting preferences).\n\nMore generally, as noted by [instrumentally convergent strategies](https://arbital.com/p/10g), most utility functions give an agent strong incentives to retain its current utility function: imagine an agent constructed so that it acts according to the utility function U, and imagine further that its operators think they built the agent to act according to a different utility function U'. If the agent learns this fact, then it has incentives to either deceive its programmers (prevent them from noticing that the agent is acting according to U instead of U') or manipulate its programmers (into believing that they actually prefer U to U', or by coercing them into leaving its utility function intact).\n\nA corrigible agent must avoid these default incentives to manipulate and deceive, but specifying some set of preferences that avoids deception/manipulation incentives remains an open problem.\n\n### Trouble with utility function uncertainty\n\nA first attempt at describing a corrigible agent might involve specifying a utility maximizing agent that is uncertain about its utility function. However, while this could allow the agent to make some changes to its preferences as a result of observations, the agent would still be incorrigible when it came time for the programmers to attempt to correct what they see as mistakes in their attempts to formulate how the \"correct\" utility function should be determined from interaction with the environment.\n\nAs an overly simplistic example, imagine an agent attempting to maximize the internal happiness of all humans, but which has uncertainty about what that means. The operators might believe that if the agent does not act as intended, they can simply express their dissatisfaction and cause it to update. However, if the agent is reasoning according to an impoverished hypothesis space of utility functions, then it may behave quite incorrigibly: say it has narrowed down its consideration to two different hypotheses, one being that a certain type of opiate causes humans to experience maximal pleasure, and the other is that a certain type of stimulant causes humans to experience maximal pleasure. 
If the agent begins administering opiates to humans, and the humans resist, then the agent may \"update\" and start administering stimulants instead. But the agent would still be incorrigible — it would resist attempts by the programmers to turn it off so that it stops drugging people.\n\nIt does not seem that corrigibility can be trivially solved by specifying agents with uncertainty about their utility function. A corrigible agent must somehow also be able to reason about the fact that the humans themselves might have been confused or incorrect when specifying the process by which the utility function is identified, and so on.\n\n### Trouble with penalty terms\n\nA second attempt at describing a corrigible agent might try to specify a utility function with \"penalty terms\" for bad behavior. This is unlikely to work for a number of reasons. First, there is the [https://arbital.com/p/42](https://arbital.com/p/42) problem: if a utility function gives an agent strong incentives to manipulate its operators, then adding a penalty for \"manipulation\" to the utility function will tend to give the agent strong incentives to cause its operators to do what it would have manipulated them to do, without taking any action that technically triggers the \"manipulation\" clause. It is likely extremely difficult to specify conditions for \"deception\" and \"manipulation\" that actually rule out all undesirable behavior, especially if the agent is [smarter than us](https://arbital.com/p/47) or [growing in capability](https://arbital.com/p/6q).\n\nMore generally, it does not seem like a good policy to construct an agent that searches for positive-utility ways to deceive and manipulate the programmers, [even if those searches are expected to fail](https://arbital.com/p/7g0). The goal of corrigibility is *not* to design agents that want to deceive but can't. Rather, the goal is to construct agents that have no incentives to deceive or manipulate in the first place: a corrigible agent is one that reasons as if it is incomplete and potentially flawed in dangerous ways.\n\n## Open problems\n\nSome open problems in corrigibility are:\n\n### Hard problem of corrigibility\n\nOn a human, intuitive level, it seems like there's a central idea behind corrigibility that seems simple to us: understand that you're flawed, that your meta-processes might also be flawed, and that there's another cognitive system over there (the programmer) that's less flawed, so you should let that cognitive system correct you even if that doesn't seem like the first-order right thing to do. You shouldn't disassemble that other cognitive system to update your model in a Bayesian fashion on all possible information that other cognitive system contains; you shouldn't model how that other cognitive system might optimally correct you and then carry out the correction yourself; you should just let that other cognitive system modify you, without attempting to manipulate how it modifies you to be a better form of 'correction'.\n\nFormalizing the hard problem of corrigibility seems like it might be a problem that is hard (hence the name).
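One obvious formal starting point is sketched below as a toy model (the action set, hypotheses, likelihoods, and utility numbers are all invented for illustration): have agent A treat programmer B's shutdown command as Bayesian evidence about which utility function is correct. The sketch shows why this alone does not yield corrigibility: no hypothesis in A's impoverished space assigns any value to being shut down, so updating only changes which drug A administers.

```python
# Toy model (all numbers invented): agent A is uncertain which utility
# function is correct, and treats programmer B's shutdown command as
# Bayesian evidence about that question.  The update changes *which*
# drug A administers, but "allow_shutdown" never wins, because every
# hypothesis in A's space scores shutdown at zero.

ACTIONS = ["give_opiates", "give_stimulants", "allow_shutdown"]

UTILITY = {  # hypotheses about the true utility function
    "opiates_best":    {"give_opiates": 10, "give_stimulants": 2,  "allow_shutdown": 0},
    "stimulants_best": {"give_opiates": 2,  "give_stimulants": 10, "allow_shutdown": 0},
}

# P(B commands shutdown | hypothesis) -- invented likelihoods.
P_SHUTDOWN_CMD = {"opiates_best": 0.2, "stimulants_best": 0.8}

def best_action(belief):
    """Return the action with the highest expected utility under `belief`."""
    def expected_utility(action):
        return sum(p * UTILITY[h][action] for h, p in belief.items())
    return max(ACTIONS, key=expected_utility)

prior = {"opiates_best": 0.5, "stimulants_best": 0.5}
print(best_action(prior))        # give_opiates (tied with give_stimulants)

# B commands shutdown; A dutifully performs a Bayesian update...
unnormalized = {h: prior[h] * P_SHUTDOWN_CMD[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

# ...but the update merely switches which drug gets administered.
print(best_action(posterior))    # give_stimulants -- never allow_shutdown
```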
Preliminary research might talk about some obvious ways that we could model A as believing that B has some form of information that A's preference framework designates as important, and show what these algorithms actually do and how they fail to solve the hard problem of corrigibility.\n\n### [Utility indifference](https://arbital.com/p/1b7)\n\n[explain utility indifference](https://arbital.com/p/fixme:)\n\nThe current state of technology on this is that the AI behaves as if there's an absolutely fixed probability of the shutdown button being pressed, and therefore doesn't try to modify this probability. But then the AI will try to use the shutdown button as an outcome pump. Is there any way to avert this?\n\n### Percentalization\n\nDoing something in the top 0.1% of all actions. This is actually a Limited AI paradigm and ought to go there, not under Corrigibility.\n\n### Conservative strategies\n\nDo something that's as similar as possible to other outcomes and strategies that have been whitelisted. Also actually a Limited AI paradigm.\n\nThis seems like something that could be investigated in practice on e.g. a chess program.\n\n### Low impact measure\n\n(Also really a Limited AI paradigm.)\n\nFigure out a measure of 'impact' or 'side effects' such that if you tell the AI to paint all cars pink, it just paints all cars pink, and doesn't transform Jupiter into a computer to figure out how to paint all cars pink, and doesn't dump toxic runoff from the paint into groundwater; and *also* doesn't create utility fog to make it look to people like the cars *haven't* been painted pink (in order to minimize this 'side effect' of painting the cars pink), and doesn't let the car-painting machines run wild afterward in order to minimize its own actions on the car-painting machines. Roughly, try to actually formalize the notion of \"Just paint the cars pink with a minimum of side effects, dammit.\"\n\nIt seems likely that this problem could turn out to be FAI-complete, if for example \"Cure cancer, but then it's okay if that causes human research investment into curing cancer to decrease\" is only distinguishable by us as an okay side effect because it doesn't result in expected utility decrease under our own desires.\n\nIt still seems like it might be good to, e.g., try to define \"low side effect\" or \"low impact\" inside the context of a generic Dynamic Bayes Net, and see if maybe we can find something after all that yields our intuitively desired behavior or helps to get closer to it.\n\n### Ambiguity identification\n\nWhen there's more than one thing the user could have meant, ask the user rather than optimizing the mixture. Even if A is in some sense a 'simpler' concept to classify the data than B, notice if B is also a 'very plausible' way to classify the data, and ask the user if they meant A or B. The goal here is to, in the classic 'tank classifier' problem where the tanks were photographed in lower-level illumination than the non-tanks, have something that asks the user, \"Did you mean to detect tanks or low light or 'tanks and low light' or what?\"\n\n### Safe outcome prediction and description\n\nCommunicate the AI's predicted result of some action to the user, without putting the user inside an unshielded argmax of maximally effective communication.\n\n### Competence aversion\n\nTo build e.g. a [behaviorist genie](https://arbital.com/p/102), we need to have the AI e.g.
not experience an instrumental incentive to get better at modeling minds, or refer mind-modeling problems to subagents, etcetera. The general subproblem might be 'averting the instrumental pressure to become good at modeling a particular aspect of reality'. A toy problem might be an AI that in general wants to get the gold in a Wumpus problem, but doesn't experience an instrumental pressure to know the state of the upper-right-hand-corner cell in particular.", "date_published": "2017-02-08T18:41:13Z", "authors": ["Matthew Graves", "Tsvi BT", "Alexei Andreev", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Corrigible agents allow themselves to be 'corrected' (from our standpoint) by human [programmers](https://arbital.com/p/9r), and don't experience [instrumental pressures](https://arbital.com/p/10k) to avoid correction.\n\nImagine building an [advanced AI](https://arbital.com/p/7g1) with a [shutdown button](https://arbital.com/p/2xd) that causes the AI to suspend to disk in an orderly fashion if the shutdown button is pressed. An AI that is *corrigible* with respect to this shutdown button is an AI that doesn't try to prevent the shutdown button from being pressed... or rewrite itself without the shutdown code, or build a backup copy of itself elsewhere, or psychologically manipulate the programmers into not pressing the button, or fool the programmers into thinking the AI has shut down when it has not, etcetera."], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class", "Non-adversarial principle"], "alias": "45"} {"id": "40c5e061b77594d16ef843a0295841bf", "title": "'Detrimental'", "url": "https://arbital.com/p/detrimental", "source": "arbital", "source_type": "text", "text": "The opposite of [https://arbital.com/p/-3d9](https://arbital.com/p/-3d9). A [reserved term](https://arbital.com/p/9p) in [AGI alignment theory](https://arbital.com/p/2v) that acts as a speaker-dependent variable denoting whatever the speaker means by \"bad\" or \"no, really actually bad\" or \"bad in the long-run\", from within whatever their view is on [where the future ought to go](https://arbital.com/p/55). See the entry for [https://arbital.com/p/55](https://arbital.com/p/55).", "date_published": "2016-06-09T21:15:55Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "450"} {"id": "35ea90ad7fd8a39f4fb46d5b07a1c245", "title": "Unforeseen maximum", "url": "https://arbital.com/p/unforeseen_maximum", "source": "arbital", "source_type": "text", "text": "An unforeseen maximum of a [utility function](https://arbital.com/p/109) (or other [preference framework](https://arbital.com/p/5f)) is when, e.g., you tell the AI to produce smiles, thinking that the AI will make people happy in order to produce smiles. 
But unforeseen by you, the AI has an alternative for making even more smiles, which is to convert all matter within reach into tiny molecular smileyfaces.\n\nIn other words, you're proposing to give the AI a goal $U$, because you think $U$ has a maximum around some nice options $X.$ But it turns out there's another option $X'$ you didn't imagine, with $X' >_U X,$ and $X'$ is not so nice.\n\nUnforeseen maximums are argued to be a [foreseeable difficulty](https://arbital.com/p/6r) of [AGI alignment](https://arbital.com/p/2v), if you try to [identify](https://arbital.com/p/6c) nice policies by giving a simple criterion $U$ that, so far as you can see, seems like it'd be best optimized by doing nice things.\n\nSlightly more semiformally, we could say that \"unforeseen maximum\" is realized as a difficulty when:\n\n1. A programmer thinking about a utility function $U$ considers policy options $\pi_i \in \Pi_N$ and concludes that of these options the policy with highest $\mathbb E [U | \pi_i]$ is $\pi_1,$ and hence a $U$-maximizer will probably do $\pi_1.$\n2. The programmer also thinks that their own [criterion of goodness](https://arbital.com/p/55) $V$ will be promoted by $\pi_1,$ that is, $\mathbb E [V | \pi_1] > \mathbb E [V]$ or \"$\pi_1$ is [beneficial](https://arbital.com/p/3d9)\". So the programmer concludes that it's a great idea to build an AI that optimizes for $U.$\n3. Alas, the AI is searching a policy space $\Pi_M,$ which although it does contain $\pi_1$ as an option, also contains an attainable option $\pi_0$ which the programmer didn't consider, with $\mathbb E [U | \pi_0] > \mathbb E [U | \pi_1].$ This is a *problem* if $\pi_0$ produces much less $V$-benefit than $\pi_1$ or is outright [detrimental](https://arbital.com/p/3d9).\n\nThat is:\n\n$$\underset{\pi_i \in \Pi_N}{\operatorname {argmax}} \ \mathbb E [U | \pi_i] = \pi_1$$\n\n$$\underset{\pi_k \in \Pi_M}{\operatorname {argmax}} \ \mathbb E [U | \pi_k] = \pi_0$$\n\n$$\mathbb E [V | \pi_0] \ll \mathbb E [V | \pi_1]$$\n\n# Example: Schmidhuber's compression goal\n\nJuergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, [gave a talk](https://vimeo.com/7441291) proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time. Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or the construction of art and highly aesthetic objects.\n\nYudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI.\n\nTranslating into the above schema:\n\n1. Schmidhuber, considering the utility function $U$ of \"maximize gain in sensory compression\", thought that option $\pi_1$ of \"do art and science\" would be the attainable maximum of $U$ within all options $\Pi_N$ that Schmidhuber considered.\n2. Schmidhuber also considered the option $\pi_1$ \"do art and science\" to achieve most of the attainable value under his own criterion of goodness $V$.\n3.
However, while the AI's option space $\\Pi_M$ would indeed include $\\pi_1$ as an option, it would also include the option $\\pi_0$ of \"have an environmental object encrypt streams of 1s or 0s and then reveal the key\" which would score much higher under $U$, and much lower under $V.$\n\n# Relation to other foreseeable difficulties\n\n[https://arbital.com/p/6q](https://arbital.com/p/6q) implies an unforeseen maximum may come as a surprise, or not show up during the [development phase](https://arbital.com/p/5d), because during the development phase the AI's options are restricted to some $\\Pi_L \\subset \\Pi_M$ with $\\pi_0 \\not\\in \\Pi_L.$\n\nIndeed, the pseudo-formalization of a \"type-1 [https://arbital.com/p/-6q](https://arbital.com/p/-6q)\" is isomorphic to the pseudoformalization of \"unforeseen maximum\", except that in a [https://arbital.com/p/-6q](https://arbital.com/p/-6q), $\\Pi_N$ and $\\Pi_M$ are identified with \"AI's options during development\" and \"AI's options after a capability gain\". (Instead of \"Options the programmer is thinking of\" and \"Options the AI will consider\".)\n\nThe two concepts are conceptually distinct because, e.g:\n\n- A [https://arbital.com/p/-6q](https://arbital.com/p/-6q) could also apply to a decision criterion learned by training, not just a utility function envisioned by the programmer.\n- It's an unforeseen maximum but not a [https://arbital.com/p/-6q](https://arbital.com/p/-6q) if the programmer is initially reasoning, *not* that the AI has already been observed to be [beneficial](https://arbital.com/p/3d9) during a development phase, but rather that the AI *ought* to be [beneficial](https://arbital.com/p/3d9) when it optimizes $U$ later because of the supposed nice maximum at $\\pi_1$.\n\nIf we hadn't observed what seem like clear-cut cases of some actors in the field being blindsided by unforeseen maxima in imagination, we'd worry less about actors being blindsided by [https://arbital.com/p/-6q](https://arbital.com/p/-6q)s over observations.\n\n[https://arbital.com/p/2w](https://arbital.com/p/2w) suggests that the real maxima of non-$V$ utility functions will be \"strange, weird, and extreme\" relative to our own $V$-views on preferable options.\n\n[Missing the weird alternative](https://arbital.com/p/43g) suggests that people may [psychologically](https://arbital.com/p/43h) fail to consider alternative agent options $\\pi_0$ that are very low in $V,$ because the human search function looks for high-$V$ and normal policies. In other words, that Schmidhuber didn't generate \"encrypt streams of 1s or 0s and then reveal the key\" *because* this policy was less attractive to him than \"do art and science\" and *because* it was weird.\n\n[https://arbital.com/p/42](https://arbital.com/p/42) suggests that if you try to add a penalty term to exclude $\\pi_0$, the next-highest $U$-ranking option will often be some similar alternative $\\pi_{0.01}$ which still isn't nice.\n\n[https://arbital.com/p/fragile_value](https://arbital.com/p/fragile_value) asserts that our [true criterion of goodness](https://arbital.com/p/55) $V$ is narrowly peaked within the space of all achievable outcomes for a [superintelligence](https://arbital.com/p/41l), such that we rapidly fall off in $V$ as we move away from the peak. [https://arbital.com/p/5l](https://arbital.com/p/5l) says that $V$ and its corresponding peak have high [algorithmic complexity](https://arbital.com/p/5v). 
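A toy numerical rendering of the $\Pi_N$ versus $\Pi_M$ schema above may help fix the picture; every policy name and score below is invented, with $V$ standing in for the complex intended criterion and $U$ for the simple proxy:

```python
# Toy rendering of the schema (all numbers invented).  U is a simple
# proxy criterion; V stands in for the complex intended criterion.
# The programmer only imagines the options in PI_N; the agent searches
# the larger space PI_M.

POLICIES = {  # policy: (U = proxy score, V = what was actually wanted)
    "make_people_happy":          (100.0,  90.0),
    "tell_good_jokes":            ( 60.0,  70.0),
    "do_nothing":                 (  0.0,   0.0),
    "tile_universe_with_smileys": ( 1e12, -1e6),  # never imagined by the programmer
}

PI_N = ["make_people_happy", "tell_good_jokes", "do_nothing"]  # programmer's imagination
PI_M = list(POLICIES)                                          # agent's actual search space

def argmax_U(policy_space):
    return max(policy_space, key=lambda pi: POLICIES[pi][0])

pi_1 = argmax_U(PI_N)  # 'make_people_happy': looks great, so U gets approved
pi_0 = argmax_U(PI_M)  # 'tile_universe_with_smileys': the actual U-maximum

print(pi_1, "V =", POLICIES[pi_1][1])  # high V
print(pi_0, "V =", POLICIES[pi_0][1])  # disastrously low V
```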
Then the peak outcomes identified by any simple [object-level](https://arbital.com/p/5t) $U$ will systematically fail to find $V$. It's like trying to find a 1000-byte program which will approximately reproduce the text of Shakespeare's *Hamlet;* algorithmic information theory says that you just shouldn't expect to find a simple program like that.\n\n[https://arbital.com/p/apple_pie_problem](https://arbital.com/p/apple_pie_problem) raises the concern that some people may have [psychological](https://arbital.com/p/43h) trouble accepting the \"But $\\pi_0$\" critique even after it is pointed out, because of their ideological attachment to a noble goal $U$ (probably actually noble!) that would be even more praiseworthy if $U$ could also serve as a complete utility function for an AGI (which it unfortunately can't).\n\n# Implications and research avenues\n\n[Conservatism](https://arbital.com/p/2qp) in goal concepts can be seen as trying to directly tackle the problem of unforeseen maxima. More generally, AI approaches which work on \"whitelisting conservative boundaries around approved policy spaces\" instead of \"search the widest possible policy space, minus some blacklisted parts\".\n\nThe [Task](https://arbital.com/p/6w) paradigm for [advanced agents](https://arbital.com/p/2c) concentrates on trying to accomplish some single [pivotal act](https://arbital.com/p/6y) which can be accomplished by one or more [tasks](https://arbital.com/p/6w) of limited scope. [Combined with other measures,](https://arbital.com/p/43w) this might make it easier to identify an adequate safe plan for accomplishing the limited-scope task, rather than needing to identify the fragile peak of $V$ within some much larger landscape. The Task AGI formulation is claimed to let us partially \"narrow down\" the scope of the necessary $U$, the part of $V$ that's relevant to the task, and the searched policy space $\\Pi$ to what is only adequate. This might reduce or meliorate, though not by itself eliminate, unforeseen maxima.\n\n[Mild optimization](https://arbital.com/p/2r8) can be seen as \"not trying so hard, not shoving all the way to the maximum\" - the hope is that *when combined* with a [Task](https://arbital.com/p/6w) paradigm plus other measures like [conservative goals and strategies](https://arbital.com/p/2qp), this will produce less optimization pressure toward weird edges and unforeseen maxima. (This method is not adequate on its own because an arbitrary adequate-$U$ policy may still not be high-$V$, ceteris paribus.)\n\n[Imitation-based agents](https://arbital.com/p/2sj) try to maximize similarity to a reference human's immediate behavior, rather than trying to optimize a utility function.\n\nThe prospect of being tripped up by unforeseen maxima, is one of the contributing motivations for giving up on [hand-coded object-level utilities](https://arbital.com/p/5t) in favor of meta-level [preference frameworks](https://arbital.com/p/5f) that learn a utility function or decision rule. (Again, this doesn't seem like a full solution by itself, [only one ingredient to be combined with other methods](https://arbital.com/p/43w). 
If the utility function is a big complicated learned object, that by itself is not a good reason to relax about the possibility that its maximum will be somewhere you didn't foresee, especially after a [capabilities boost](https://arbital.com/p/6q).)\n\n[Missing the weird alternative](https://arbital.com/p/43g) and the [https://arbital.com/p/apple_pie_problem](https://arbital.com/p/apple_pie_problem) suggest that it may be unusually difficult to explain to actors why $\\pi_0 >_U \\pi_1$ is a difficulty of their favored utility function $U$ that allegedly implies nice policy $\\pi_1.$ That is, for [psychological reasons](https://arbital.com/p/43h), this difficulty seems unusually likely to actually trip up sponsors of AI projects or politically block progress on alignment.", "date_published": "2016-06-27T00:24:55Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Unforeseen maximums occur when a powerful cognitive process, given a fitness function $F$ that you thought had a maximum somewhere around $X$, finds an even higher $F$-scoring solution $X'$ that was outside the space of possible solutions you considered. (Example: A [ programmer](https://arbital.com/p/9r) considers possible ways of producing smiles by producing human happiness, and the highest-scoring smile-producing strategies in this part of the solution space look quite nice to them. They give a neutral genie the instruction to produce smiles. The neutral genie tiles its future light cone with tiny molecular smiley-faces. This was not something the programmer had explicitly considered as a possible way of producing smiles.)"], "tags": ["B-Class"], "alias": "47"} {"id": "f7fa65cc95adabbc1cda1ff218583d97", "title": "Patch resistance", "url": "https://arbital.com/p/patch_resistant", "source": "arbital", "source_type": "text", "text": "A proposed [foreseeable difficulty](https://arbital.com/p/6r) of [aligning advanced agents](https://arbital.com/p/2v) is furthermore proposed to be \"patch-resistant\" if the speaker thinks that most simple or naive solutions will fail to resolve the difficulty and just regenerate it somewhere else.\n\nTo call a problem \"patch-resistant\" is not to assert that it is unsolvable, but it does mean the speaker is cautioning against naive or simple solutions.\n\nOn most occasions so far, alleged cases of patch-resistance are said to stem from one of two central sources:\n\n- The difficulty arises from a [convergent instrumental strategy](https://arbital.com/p/10g) executed by the AI, and simple patches aimed at blocking one observed bad behavior will not stop [a very similar behavior](https://arbital.com/p/42) from popping up somewhere else.\n- The difficulty arises because the desired behavior has [high algorithmic complexity](https://arbital.com/p/5l) and simple attempts to pinpoint beneficial behavior are doomed to fail.\n\n## Instrumental-convergence patch-resistance\n\nExample: Suppose you want your AI to have a [shutdown button](https://arbital.com/p/2xd):\n\n- You first try to achieve this by writing a shutdown function into the AI's code.\n- After the AI becomes self-modifying, it deletes the code because it is ([convergently](https://arbital.com/p/10g)) the case that the AI can accomplish its goals better by not being shut down.\n- You add a patch to the utility function giving the AI minus a million points if the AI deletes the shutdown function or prevents it from operating.\n- The AI responds by writing a new function that reboots the AI 
after the shutdown completes, thus technically not preventing the shutdown.\n- You respond by again patching the AI's utility function to give the AI minus a million points if it continues operating after the shutdown.\n- The AI builds an environmental subagent that will accomplish the AI's goals while the AI itself is technically \"shut down\".\n\nThis is the first sort of patch resistance, the sort alleged to arise from attempts to defeat an [instrumental convergence](https://arbital.com/p/10g) with simple patches meant to get rid of one observed kind of bad behavior. After one course of action is blocked by a specific obstacle, [the next-best course of action remaining is liable to be highly similar to the one that was just blocked](https://arbital.com/p/42).\n\n## Complexity-of-value patch-resistance\n\nExample:\n\n- You want your AI to accomplish good in the world, which is presently highly correlated with making people happy. Happiness is presently highly correlated with smiling. You build an AI that [tries to achieve more smiling](https://arbital.com/p/10d).\n- After the AI proposes to force people to smile by attaching metal pins to their lips, you realize that this current empirical association of smiling and happiness doesn't mean that *maximum* smiling must occur in the presence of *maximum* happiness.\n- Although it's much more complicated to infer, you try to reconfigure the AI's utility function to be about a certain class of brain states that has previously in practice produced smiles.\n- The AI successfully generalizes the concept of pleasure, and begins proposing policies to give people heroin.\n- You try to add a patch excluding artificial drugs.\n- The AI proposes a genetic modification producing high levels of endogenous opiates.\n- You try to explain that what's really important is not forcing the brain to experience pleasure, but rather, people experiencing events that naturally cause happiness.\n- The AI proposes to put everyone in the Matrix...\n\nSince the programmer-[intended](https://arbital.com/p/6h) concept is [actually highly complicated](https://arbital.com/p/5l), simple concepts will systematically fail to have their optimum at the same point as the complex intended concept. By the [fragility of value](https://arbital.com/p/fragility_of_value), the optimum of the simple concept will almost certainly not be a *high* point of the complex intended concept. Since [most concepts are *not* surprisingly compressible](https://arbital.com/p/4v5), there probably *isn't* any simple concept whose maximum identifies that fragile peak of value. 
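The dynamic in the list above can be rendered as a toy loop (policy names and scores invented): each patch removes only the last observed bad optimum, and the argmax simply relocates to the next weird alternative.

```python
# Toy sketch of the patch-and-regenerate dynamic (all values invented).
# "proxy" plays the role of the simple specified criterion U;
# "true value" stands in for the complex intended criterion V.

POLICIES = {  # policy: (proxy score U, true value V)
    "make_people_genuinely_happy": (100, 100),
    "pin_smiles_onto_faces":       (900, -50),
    "administer_heroin":           (800, -40),
    "engineer_endogenous_opiates": (700, -45),
    "put_everyone_in_the_matrix":  (600, -60),
}

blacklist = set()  # each "patch" just forbids the last observed bad optimum

def argmax_proxy():
    allowed = {p: scores for p, scores in POLICIES.items() if p not in blacklist}
    return max(allowed, key=lambda p: allowed[p][0])

for _ in range(len(POLICIES)):
    proposal = argmax_proxy()
    print("AI proposes:", proposal, "| true value:", POLICIES[proposal][1])
    if POLICIES[proposal][1] < 0:   # programmer notices the weird alternative...
        blacklist.add(proposal)     # ...and patches out that one behavior
    else:
        break

# The intended policy is only reached after *every* weird alternative has
# been individually foreseen and excluded -- and a rich policy space holds
# far more weird alternatives than any blacklist can enumerate.
```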
This explains why we would reasonably expect problems of [perverse instantiation](https://arbital.com/p/2w) to pop up over and over again, the optimum of the revised concept moving to a new weird extreme each time the programmer tries to hammer down the next [weird alternative](https://arbital.com/p/43g) the AI comes up with.\n\nIn other words: There's a large amount of [algorithmic information](https://arbital.com/p/5v) or many independent [reflectively consistent degrees of freedom](https://arbital.com/p/2fr) in the correct answer, the plans we *want* the AI to come up with, but we've only given the AI relatively simple concepts that can't [identify](https://arbital.com/p/2s3) those plans.\n\n# Analogues in the history of AI\n\nThe result of trying to tackle overly [general](https://arbital.com/p/42g) problems using AI algorithms too narrow for those general problems, usually appears in the form of [an infinite number of special cases](http://lesswrong.com/lw/l9/artificial_addition/) with a new special case needing to be handled for every problem instance. In the case of narrow AI algorithms tackling a general problem, this happens because the narrow algorithm, being narrow, is not capable of capturing the deep structure of the general problem and its solution.\n\nSuppose that burglars, and also earthquakes, can cause burglar alarms to go off. Today we can represent this kind of scenario using a Bayesian network or causal model which will *compactly* yield probabilistic inferences along the lines of, \"If the burglar alarm goes off, that probably indicates there's a burglar, unless you learn there was an earthquake, in which case there's probably not a burglar\" and \"If there's an earthquake, the burglar alarm probably goes off.\"\n\nDuring the era where everything in AI was being represented by first-order logic and nobody knew about causal models, [people devised increasingly intricate \"nonmonotonic logics\"](https://arbital.com/p/) to try to represent inference rules like (simultaneously) $alarm \\rightarrow burglar, \\ earthquake \\rightarrow alarm,$ and $(alarm \\wedge earthquake) \\rightarrow \\neg burglar.$ But first-order logic wasn't naturally a good surface fit to the set of inferences needed, and the AI programmers didn't know how to compactly capture the structure that causal models capture. So the \"nonmonotonic logic\" approach proliferated an endless nightmare of special cases.\n\nCognitive problems like \"modeling causal phenomena\" or \"being good at math\" (aka understanding which mathematical premises imply which mathematical conclusions) might be *general* enough to defeat modern narrow-AI algorithms. But these domains still seem like they should have something like a central core, leading us to expect [correlated coverage](https://arbital.com/p/correlated_covereage) of the domain in [sufficiently advanced](https://arbital.com/p/2c) agents. You can't conclude that because a system is very good at solving arithmetic problems, it will be good at proving Fermat's Last Theorem. But if a system is smart enough to independently prove Fermat's Last Theorem *and* the Poincare Conjecture *and* the independence of the Axiom of Choice in Zermelo-Frankel set theory, it can probably also - without further handholding - figure out Godel's Theorem. You don't need to *go on* programming in one special case after another of mathematical competency. 
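For contrast with the special-case nightmare, the burglar-alarm example above is a domain where the right representation does give correlated coverage: a three-variable causal model, enumerated by brute force below (all probabilities invented, no probabilistic-programming library assumed), yields both 'the alarm suggests a burglar' and 'the earthquake explains the alarm away' without either inference being separately programmed in.

```python
from itertools import product

# Toy burglar/earthquake/alarm model (all probabilities invented).
P_BURGLAR, P_QUAKE = 0.01, 0.02

def p_alarm(burglar, quake):
    if burglar and quake:
        return 0.97
    if burglar:
        return 0.95
    if quake:
        return 0.30
    return 0.001

def joint(burglar, quake, alarm):
    p = (P_BURGLAR if burglar else 1 - P_BURGLAR) * (P_QUAKE if quake else 1 - P_QUAKE)
    pa = p_alarm(burglar, quake)
    return p * (pa if alarm else 1 - pa)

def p_burglar_given(alarm, quake=None):
    """P(burglar | alarm [, quake]) by brute-force enumeration."""
    numerator = denominator = 0.0
    for b, q in product([True, False], repeat=2):
        if quake is not None and q != quake:
            continue
        w = joint(b, q, alarm)
        denominator += w
        if b:
            numerator += w
    return numerator / denominator

print(p_burglar_given(alarm=True))              # ~0.58: alarm alone suggests a burglar
print(p_burglar_given(alarm=True, quake=True))  # ~0.03: the earthquake explains it away
```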
The fact that humans could figure out all these different areas, without needing to be independently reprogrammed for each one by natural selection, says that there's something like a central tendency underlying competency in all these areas.\n\nIn the case of [complexity of value](https://arbital.com/p/5l), the thesis is that there are many independent [reflectively consistent degrees of freedom](https://arbital.com/p/2fr) in our [intended](https://arbital.com/p/6h) specification of what's [good, bad, or best](https://arbital.com/p/55). Getting one degree of freedom aligned with our intended result doesn't mean that other degrees of freedom need to align with our intended result. So trying to \"patch\" the first simple specification that doesn't work, is likely to result in a different specification that doesn't work.\n\nWhen we try to use a narrow AI algorithm to attack a problem which has a central tendency *requiring general intelligence to capture,* or at any rate requiring some new structure that the narrow AI algorithm can't handle, we're effectively asking the narrow AI algorithm to learn something that has no simple structure *relative to* that algorithm. This is why early AI researchers' experience with \"lack of common sense\" *that you can't patch with special cases* may be [foreseeably](https://arbital.com/p/6r) indicative of how frustrating it would be, in practice, to repeatedly try to \"patch\" a kind of difficulty that we may foreseeably need to confront in aligning AI.\n\nThat is: Whenever it feels to a human like you want to yell at the AI for its lack of \"common sense\", you're probably looking at a domain where trying to patch that particular AI answer is just going to lead into another answer that lacks \"common sense\". Previously in AI history, this happened because real-world problems had no simple central learnable solution relative to the narrow AI algorithm. In value alignment, something similar could happen because of the [complexity of our value function](https://arbital.com/p/5l), whose evaluations *also* [feel to a human](https://arbital.com/p/4v2) like \"common sense\".\n\n# Relevance to alignment theory\n\nPatch resistance, and its sister issue of lack of [correlated coverage](https://arbital.com/p/1d6), is a central reason why aligning advanced agents could be way harder, way more dangerous, and way more likely to actually kill everyone in practice, compared to optimistic scenarios. 
It's a primary reason to worry, \"Uh, what if *aligning* AI is actually way harder than it might look to some people, the way that *building AGI in the first place* turned out not to be something you could do in two months over the summer?\"\n\nIt's also a reason to worry about [context disasters](https://arbital.com/p/6q) revolving around capability gains: Anything you had to patch-until-it-worked at AI capability level $k$ is probably going to break *hard* at capability $l \\gg k.$ This is doubly catastrophic in practice if the pressures to \"just get the thing running today\" are immense.\n\nTo the extent that we can see the central project of AI alignment as revolving around finding a set of alignment ideas that *do* have simple central tendencies and *are* specifiable or learnable which together add up to a safe but powerful AI - that is, finding domains with correlated coverage that add up to a safe AI that can do something pivotal - we could see the central project of AI alignment as finding a collectively good-enough set of safety-things we can do *without* endless patching.", "date_published": "2016-06-27T00:35:02Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "48"} {"id": "78492cd09ab3fde73474c6c6d53442cd", "title": "Needs work", "url": "https://arbital.com/p/4bn", "source": "arbital", "source_type": "text", "text": "Meta tag for pages which need content improvement.", "date_published": "2016-06-14T20:36:59Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "4bn"} {"id": "e1532d6522a15bf0e6416d8a21e2114e", "title": "Time-machine metaphor for efficient agents", "url": "https://arbital.com/p/timemachine_efficiency_metaphor", "source": "arbital", "source_type": "text", "text": "The time-machine metaphor is an [intuition pump](https://arbital.com/p/4kf) for [instrumentally efficient agents](https://arbital.com/p/6s) - agents smart enough that they always get at least as much of their [utility](https://arbital.com/p/109) as any strategy we can imagine. Taking a [superintelligent](https://arbital.com/p/41l) [paperclip maximizer](https://arbital.com/p/10h) as our central example, rather than visualizing a brooding mind and spirit deciding which actions to output in order to achieve its paperclippy desires, we should consider visualizing a time machine hooked up to an output channel, which, for every possible output, peeks at how many paperclips will result in the future given that output, and which always yields the output leading to the greatest number of paperclips. In the metaphor, we're not to imagine the time machine as having any sort of mind or intentions - it just repeatedly outputs the action that actually leads to the most paperclips.\n\nThe time machine metaphor isn't a perfect way to visualize a [bounded](https://arbital.com/p/2rd) superintelligence; the time machine is strictly more powerful. E.g., the time machine can instantly unravel a 4096-bit encryption key because it 'knows' the bitstring that is the answer. 
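Rendered as code, the metaphor is nothing more than a selection rule over outputs; in the sketch below the 'peek at the actual future' step is played by a made-up lookup table, since in the metaphor that role is played by the real future itself.

```python
# The time-machine metaphor as code: no beliefs, no desires, no mind to
# persuade -- just a selection rule.  The lookup table is a stand-in for
# "how many paperclips the universe actually contains given this output".

def paperclips_resulting_from(output):
    toy_future = {  # invented numbers, standing in for the actual future
        "trade_with_humans":                1_000_000,
        "do_nothing":                       0,
        "convert_all_matter_to_paperclips": 10**40,
    }
    return toy_future[output]

POSSIBLE_OUTPUTS = ["trade_with_humans", "do_nothing", "convert_all_matter_to_paperclips"]

chosen = max(POSSIBLE_OUTPUTS, key=paperclips_resulting_from)
print(chosen)  # whichever output actually leads to the most paperclips
```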
So the point of this metaphor is not as an intuition pump for capabilities, but rather, an intuition pump for overcoming [https://arbital.com/p/-anthropomorphism](https://arbital.com/p/-anthropomorphism) in reasoning about a paperclip maximizer's policies; or as an intuition pump for understanding the sense-update-predict-act agent architecture.\n\nThat is: If you imagine a superintelligent paperclip maximizer as a mind, you might [imagine persuading it](https://arbital.com/p/43g) that, really, [it can get more paperclips](https://arbital.com/p/3tm) by trading with humans instead of turning them into paperclips. If you imagine a time machine, which isn't a mind, you're less likely to imagine persuading it, and instead ask more honestly the question, \"What is the maximum number of paperclips the universe can be turned into, and how would one go about doing that?\" Instead of imagining ourselves arguing with Clippy about how humans really are very productive, we ask the question from the time machine's standpoint - which universe actually ends up with more paperclips in it?\n\nThe relevant fact about instrumentally efficient agents is that they are, from our perspective, *unbiased* (in the [statistical sense of bias](https://arbital.com/p/statistical_bias)) in their policies, relative to any kind of bias we can detect.\n\nAs an example, consider a 2015-era chess engine, contrasted to a 1985-era chess engine. The 1985-era chess engine may lose to a moderately strong human amateur, so it's not [relatively efficient](https://arbital.com/p/6s). It may have humanly-perceivable quirks such as \"It likes to move its queen\", that is, \"I detect that it moves its queen more often than would be strictly required to win the game.\" As we go from 1985 to 2015, the machine chessplayer improves beyond the point where we, personally, can detect any flaws in it. You should expect the reason why the 2015 chess engine moves anywhere to be only understandable to *you* (without machine assistance) as \"because that move had a great probability of leading to a winning position later\", and not in any other psychological terms like \"it likes to move its pawn\".\n\nFrom your perspective, the 2015 chess engine will only move its pawn on occasions where that probably leads to winning the game, and does not move the pawn on occasions where it leads to losing the game. If you see the 2015 chess engine make a move you didn't think was high in winningness, you conclude that it has seen some winningness you didn't know about and is about to do exceptionally well, or you conclude that the move you favored led into futures surprisingly low in winningness, and not that the chess engine is favoring some unwinning move. 
We can no longer personally and without machine assistance detect any systematic departure from \"It makes the chess move that leads to winning the game\" in the direction of \"It favors some other class of chess move for reasons apart from its winningness.\"\n\nThis is what makes the time machine metaphor a good intuition pump for an instrumentally efficient agent's choice of policies (though not a good intuition for the magnitude of its capabilities).", "date_published": "2017-07-27T13:44:15Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "4gh"} {"id": "99c20a865d0d0fe1ac40004f2ba13269", "title": "Coordinative AI development hypothetical", "url": "https://arbital.com/p/4j", "source": "arbital", "source_type": "text", "text": "A simplified/easier hypothetical form of the [known algorithm nonrecursive](https://arbital.com/p/) path within the [https://arbital.com/p/2z](https://arbital.com/p/2z). Suppose there was an effective world government with effective monitoring of all computers; or that for whatever other imaginary reason rogue AI development projects were simply not a problem. What would the ideal research trajectory for that world look like?\n\n### Usefulness:\n\n- Highlight / flag where safety shortcuts are being taken because we live in the non-ideal case.\n- Let us think through what a maximally safe development pathway would look like, and why, without stopping every 30 seconds to think about how we won't have time. This may uncover valuable research paths that could, on a second glance, be done more quickly.\n- Think through a simpler case of a research-program-generator that has fewer desiderata and hence less cognitive distractions.", "date_published": "2015-12-16T01:25:36Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Janos Kramar", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "4j"} {"id": "0178ed32c574be09be1943d58e318217", "title": "Safe impact measure", "url": "https://arbital.com/p/4l", "source": "arbital", "source_type": "text", "text": "A safe impact measure is one that captures all changes to every variable a human might care about, with no edge-cases where a lot of value could be destroyed by a 'low impact' action. A safe impact measure must also not generate so many false alarms of 'high impact' that no strategy can be distinguished as 'low impact'.\n\n### Importance / uses\n\nA safe impact measure is an open problem of value alignment, which if solved, might be used in constructing:\n\n- A [low-impact AI](https://arbital.com/p/), a subspecies of [Genie](https://arbital.com/p/6w) that tries to carry out its goals while otherwise minimizing the number of important things that it affects.\n- An [Oracle](https://arbital.com/p/6x) that tries not to affect the world outside its box at all, apart from its output.\n- A [shutdown utility function](https://arbital.com/p/) that incentives a [corrigible](https://arbital.com/p/45) AI to halt safely (perhaps after a button is pressed).\n\nSome uses of a safe impact measure postulate that the impact measure has a 'hole' or some form of permitted output channel. For example, one Stuart Armstrong proposal involves an Oracle in a sealed box, with a single output channel that is connected to the Oracle with high quantum probability, and connected to a quantum noise generator with low quantum probability. 
The Oracle is putatively given the goal of generating an output signal with some informational property, and otherwise minimizing the 'impact' relative to the same output having been generated by the quantum noise generator instead of the Oracle. This is intended to capture the idea that the Oracle's effect on the world should only occur via the effect of the output message, and not take advantage of any side channels such as managing to send a radio signal outside the Oracle box.\n\n### Difficulty\n\nTo be used inside an [advanced agent](https://arbital.com/p/2c), an impact measure must be [safe](https://arbital.com/p/2l) in the face of whatever cognitive pressures and optimization pressures might tend to produce [edge instantiations](https://arbital.com/p/2w) or [https://arbital.com/p/42](https://arbital.com/p/42) - it must capture so much variance that there is *no* clever strategy whereby an advanced agent can produce some special type of variance that evades the measure. Ideally, the measure will pass the [Omni Test](https://arbital.com/p/), meaning that even if the agent suddenly gained perfect control over every particle in the universe, there would still be no way for it to have what intuitively seems like a 'large influence' on the future, without that strategy being assessed as having a 'high impact'.\n\nThe reason why a safe impact measure might be possible, and specifiable to an AI without having to solve the entire [value learning problem](https://arbital.com/p/) for [complex values](https://arbital.com/p/5l), is that it may be possible to upper-bound the value-laden and complex quantity 'impact on literally everything cared about' by some much simpler quantity that says roughly 'impact on everything' - all causal processes worth modeling on a macroscale, or something along those lines.\n\nThe challenge of a safe impact measure is that we can't just measure, e.g., 'number of particles influenced in any way' or 'expected shift in all particles in the universe'. For the former case, consider that a one-gram mass on Earth exerts a gravitational pull that accelerates the Moon toward it at roughly 4 x 10^-31 m/s^2, and every sneeze has a *very* slight gravitational effect on the atoms in distant galaxies. Since every decision qualitatively 'affects' everything in its future light cone, this measure will have too many false positives / not approve any strategy / not usefully discriminate unusually dangerous actions.\n\nFor the proposed quantity 'expectation of the net shift produced on all atoms in the universe': If the universe (including the Earth) contains at least one process chaotic enough to exhibit butterfly effects, then any sneeze anywhere ends up producing a very great expected shift in total motions. Again we must worry that the impact measure, as evaluated inside the mind of a superintelligence, would just assign uniformly high values to every strategy, meaning that unusually dangerous actions would not be discriminated for alarms or vetoes.\n\nDespite the first imaginable proposals failing, it doesn't seem like a 'safe impact measure' necessarily has the type of [value-loading](https://arbital.com/p/) that would make it [VA-complete](https://arbital.com/p/).
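To make the false-positive worry concrete, here is the 'expected shift of all particles' proposal in a toy chaotic world model (the numbers are invented); the measure is simple and value-free, but it returns essentially the same verdict for a sneeze and a catastrophe, so it cannot be used to veto the genuinely dangerous action.

```python
# Toy illustration (invented numbers) of why 'expected net shift of all
# particles' fails as an impact measure once the world model contains a
# chaotic process: any nonzero perturbation eventually decorrelates the
# trajectory, so the long-run expected shift saturates at roughly the
# same enormous value no matter what the action was.

TOTAL_SHIFT_WHEN_DECORRELATED = 1e50  # made-up stand-in for "everything moves"

def naive_impact(direct_perturbation):
    return 0.0 if direct_perturbation == 0 else TOTAL_SHIFT_WHEN_DECORRELATED

ACTIONS = {  # action: immediate physical perturbation (invented scales)
    "sneeze":                    1e-9,
    "paint_one_car_pink":        1e-3,
    "divert_asteroid_at_a_city": 1e6,
}

for name, perturbation in ACTIONS.items():
    print(name, naive_impact(perturbation))
# All three register as 'high impact' to exactly the same degree, so the
# measure raises uniform alarms and cannot single out the dangerous action.
```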
One intuition pump for 'notice big effects in general' not being value-laden, is that if we imagine aliens with nonhuman decision systems trying to solve this problem, it seems easy to imagine that the aliens would come up with a safe impact measure that we would also regard as safe.", "date_published": "2015-12-16T04:28:47Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["AI alignment open problem", "B-Class"], "alias": "4l"} {"id": "f01cc5b5a590112dc56505ab21149c6b", "title": "No-Free-Lunch theorems are often irrelevant", "url": "https://arbital.com/p/nofreelunch_irrelevant", "source": "arbital", "source_type": "text", "text": "There's a wide variety of \"No Free Lunch\" theorems proving that, in general, some problem is unsolvable. Very often, these theorems are not relevant because the real universe is a special case.\n\nIn a very general and metaphorical sense, most of these theorems say some equivalent of, \"There's no universal good strategy: For every universe where an action causes you to gain \\$5, there's a different universe where that same action causes you to lose \\$5. You can't get a *free lunch* across every universe; if you gain \\$5 in one universe you must lose \\$5 in another universe.\" To which the reply is very often, \"Sure, that's true across maximum-entropy universes in general, but we happen to live in a low-entropy universe where things are far more ordered and predictable than in a universe where all the atoms have been placed at random.\"\n\nSimilarly: In theory, the [protein folding problem](https://en.wikipedia.org/wiki/Protein_folding) (predicting the lowest-energy configuration for a string of amino acids) is NP-hard ([sorta](https://arxiv.org/abs/1306.1372)). But since quantum mechanics is not known to solve NP-hard problems, this just means that in the real world, some proteins fold up in a way that doesn't reach the ideal lowest energy. Biology is happy with just picking out proteins that reliably fold up a particular way. \"NP-hard\" problems in real life with nonrandom data often have regularities that make them much more easily solvable than the worst cases; or they have pretty good approximate solutions; or we can work with just the solvable cases.\n\nA related but distinct idea: It's impossible to prove in general whether a Turing machine halts, or [has any other nontrivial property](https://en.wikipedia.org/wiki/Rice%27s_theorem) since it might be conditioned on a subprocess halting. But that often doesn't stop us from picking particular machines that do halt, or limiting our consideration to computations that run in less than a quadrillion timesteps, etcetera.\n\nThis doesn't mean *all* No-Free-Lunch theorems are irrelevant in the real-universe special case. E.g., the Second Law of Thermodynamics can also be seen as a No-Free-Lunch theorem, and does actually prohibit perpetual motion in our own real universe (on the standard model of physics).\n\nIt should finally be observed that human intelligence does work in the real world, meaning that there's no No-Free-Lunch theorem which prohibits an intelligence from working at least that well. 
Any claim that a No-Free-Lunch theorem prohibits machine intelligence in general, must definitely [Prove Too Much](https://arbital.com/p/3tc), because the same reasoning could be applied to a human brain considered as a physical system.", "date_published": "2016-06-20T04:02:01Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["\"No Free Lunch\" theorems prove that across some problem class, it's impossible to do better in one case without doing worse in some other case. They say some equivalent of, \"There's no universal good strategy: For every universe where an action causes you to gain \\$5, there's a different universe where that same action causes you to lose \\$5. You can't get a *free lunch* across every universe; if you gain \\$5 in one possible world you must lose \\$5 in another possible world.\"\n\nTo this the realistic reply is very often, \"Okay, that's true across maximum-entropy universes in general. But the real world is an extremely special case. We happen to live in a low-entropy universe, where things are far more ordered and predictable than in a universe where all the atoms have been placed at random; and the world where we gain \\$5 is far more probable than the world where we lose \\$5.\"\n\nSimilarly: In theory, the ideal form of the protein folding problem is NP-hard. In practice, biology is happy to select proteins that reliably fold up a particular way even if they don't reach the exact theoretical minimum. \"NP-hard\" problems in real life with nonrandom data often have regularities that make them much more easily solvable than the worst-case problems; or they have pretty good approximate solutions."], "tags": ["B-Class"], "alias": "4lx"} {"id": "0345ae1b99c36c71f7b58cde65823b11", "title": "Mind design space is wide", "url": "https://arbital.com/p/mind_design_space_wide", "source": "arbital", "source_type": "text", "text": "Imagine an enormous space of possible mind designs, within which all humans who've ever lived are a single tiny dot. We all have the same cerebral cortex, cerebellum, thalamus, etcetera. There's an instinct to imagine \"Artificial Intelligences\" as a kind of weird tribe that lives across the river and ask what peculiar customs this foreign tribe might have. Really the word \"Artificial Intelligence\" just refers to the *entire* space of possibilities outside the tiny human dot. So to most questions about AI, the answer may be, \"It depends on the exact mind design of the AI.\" By similar reasoning, a universal claim over all possible AIs is much more dubious than a claim about at least one AI. If you imagine that in the vast space of all mind designs, there's at least a billion binary design choices that can be made, then, there's at least $2^{1,000,000,000}$ distinct mind designs. We might say that any claim of the form, \"Every possible mind design has property $P$\" has $2^{1,000,000,000}$ chances to be false, while any claim of the form \"There exists at least one mind design with property $P$\" has $2^{1,000,000,000}$ chances to be true. This doesn't preclude us from thinking about properties that *most* mind designs might have. 
But it does suggest that if we don't like some property $P$ that seems likely to *usually* hold, we can maybe find some special case of a mind design which unusually has $P$ false.", "date_published": "2016-06-19T18:28:17Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "4ly"} {"id": "2904cbca1cfd68933decb02feceae530", "title": "Flag the load-bearing premises", "url": "https://arbital.com/p/load_bearing_premises", "source": "arbital", "source_type": "text", "text": "If someone says, \"I think your AI safety scheme is horribly flawed because X will go wrong,\" and you reply \"Nah, X will be fine because of Y and Z\", then good practice calls for highlighting that Y and Z are important propositions. Y and Z may need to be debated in their own right, especially if a lot of people consider Y and Z to be nonobvious or probably false. Contrast [https://arbital.com/p/emphemeral_premises](https://arbital.com/p/emphemeral_premises). Needs to be paired with understanding of the [Multiple-Stage Fallacy](https://arbital.com/p/multiple_stage_fallacy) so that listing load-bearing premises doesn't make the proposition look less probable - $\\neg X$ will have load-bearing premises too.", "date_published": "2016-06-19T21:25:11Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "4lz"} {"id": "4a4e26a51225fa9e2d5e9dc381296177", "title": "AI alignment open problem", "url": "https://arbital.com/p/value_alignment_open_problem", "source": "arbital", "source_type": "text", "text": "A tag for pages that describe at least one major open problem that has been identified within the theory of [value-aligned advanced agents](https://arbital.com/p/2c), powerful artificial minds such that the effect of running them is good / nice / normatively positive ('[high value](https://arbital.com/p/55)').\n\nTo qualify as an 'open problem' for this tag, the problem should be relatively crisply stated, unsolved, and considered important.", "date_published": "2017-02-06T02:05:36Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "4m"} {"id": "e8d897d7d48df04aac110d509b208e37", "title": "Task (AI goal)", "url": "https://arbital.com/p/task_goal", "source": "arbital", "source_type": "text", "text": "A \"Task\" is a goal or subgoal within an [advanced](https://arbital.com/p/2c) AI, that can be satisfied as fully as possible by optimizing a bounded part of space, for a limited time, with a limited amount of effort.\n\nE.g., \"make as many [paperclips](https://arbital.com/p/10h) as possible\" is definitely not a 'task' in this sense, since it spans every paperclip anywhere in space and future time. Creating more and more paperclips, using more and more effort, would be more and more preferable up to the maximum exertable effort.\n\nFor a more subtle example of non-taskishness, consider Disney's \"sorcerer's apprentice\" scenario: Mickey Mouse commands a broomstick to fill a cauldron. The broomstick then adds more and more water to the cauldron until the workshop is flooded. (Mickey then tries to destroy the broomstick. But since the broomstick has no [designed-in reflectively stable shutdown button](https://arbital.com/p/2xd), the broomstick repairs itself and begins constructing subagents that go on pouring more water into the cauldron.)\n\nSince the Disney cartoon is a musical, we don't know if the broomstick was given a time bound on its job. 
Let us suppose that Mickey tells the broomstick to do its job sometime before 1pm.\n\nThen we might imagine that the broomstick is a subjective [expected utility](https://arbital.com/p/18r) maximizer with a utility function $U_{cauldron}$ over outcomes $o$:\n\n$$U_{cauldron}(o): \\begin{cases}\n1 & \\text{if in $o$ the cauldron is $\\geq 90\\%$ full of water at 1pm} \\\\\n0 & \\text{otherwise}\n\\end{cases}$$\n\nThis *looks* at first glance like it ought to be taskish:\n\n- The cauldron is bounded in space.\n- The goal only concerns events that happen before a certain time.\n- The highest utility that can be achieved is $1,$ which is reached as soon as the cauldron is $\\geq 90\\%$ full of water, which seems achievable using a limited amount of effort.\n\nThe last property in particular makes $U_{cauldron}$ a \"satisficing utility function\", one where an outcome is either satisfactory or not-satisfactory, and it is not possible to do any better than \"satisfactory\".\n\nBut by previous assumption, the broomstick is still optimizing *expected* utility. Assume the broomstick reasons with [reasonable generality](https://arbital.com/p/42g) via some [universal prior](https://arbital.com/p/4mr). Then the *subjective probability* of the cauldron being full, when it *looks* full to the broomstick-agent, [will not be](https://arbital.com/p/4mq) *exactly* $1.$ Perhaps (the broomstick-agent reasons) the broomstick's cameras are malfunctioning, or its RAM has malfunctioned producing an inaccurate memory.\n\nThen the broomstick-agent reasons that it can further increase the probability of the cauldron being full - however slight the increase in probability - by going ahead and dumping in another bucket of water.\n\nThat is: [Cromwell's Rule](https://arbital.com/p/4mq) implies that the subjective probability of the bucket being full never reaches exactly $1$. Then there can be an infinite series of increasingly preferred, increasingly more effortful policies $\\pi_1, \\pi_2, \\pi_3 \\ldots$ with\n\n$$\\mathbb E [U_{cauldron} | \\pi_1] = 0.99\\\\\n\\mathbb E [U_{cauldron} | \\pi_2] = 0.999 \\\\\n\\mathbb E [U_{cauldron} | \\pi_3] = 0.999002 \\\\\n\\ldots$$\n\nIn that case the broomstick can always do better in expected utility (however slightly) by exerting even more effort, up to the maximum effort it can exert. Hence the flooded workshop.\n\nIf on the other hand the broomstick is an *expected utility satisficer*, i.e., a policy is \"acceptable\" if it has $\\mathbb E [U_{cauldron} | \\pi ] \\geq 0.95,$ then this is now finally a taskish process (we think). The broomstick can find some policy that's reasonably sure of filling up the cauldron, execute that policy, and then do no more.\n\nAs described, this broomstick doesn't yet have any [impact penalty](https://arbital.com/p/4l), or features for [mild optimization](https://arbital.com/p/2r8). So the broomstick could *also* get $\\geq 0.90$ expected utility by flooding the whole workshop; we haven't yet [forbidden excess efforts](https://arbital.com/p/2r8). Similarly, the broomstick could also go on to destroy the world after 1pm - we haven't yet [forbidden excess impacts](https://arbital.com/p/4l).\n\nBut the underlying rule of \"Execute a policy that fills the cauldron at least 90% full with at least 95% probability\" does appear taskish, so far as we know.
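To make the contrast concrete, here is a minimal Python sketch under a toy assumption (not taken from the original analysis): each successively more effortful policy pushes the subjective probability that the cauldron ends up at least 90% full closer to, but never exactly to, 1.

```python
# Toy model (illustrative assumption, not from the original analysis):
# policy i costs i units of effort and yields subjective probability
# 1 - 0.1 * 0.5**i that the cauldron is at least 90% full at 1pm.

def prob_cauldron_full(effort: int) -> float:
    """Subjective probability of success; approaches but never reaches 1."""
    return 1.0 - 0.1 * (0.5 ** effort)

def maximizer_choice(max_effort: int) -> int:
    """Expected-utility maximizer: picks whichever policy has the highest
    success probability, which is always the most effortful one."""
    return max(range(max_effort + 1), key=prob_cauldron_full)

def satisficer_choice(max_effort: int, threshold: float = 0.95) -> int:
    """Expected-utility satisficer: picks the least effortful policy whose
    success probability clears the threshold, then stops."""
    for effort in range(max_effort + 1):
        if prob_cauldron_full(effort) >= threshold:
            return effort
    raise ValueError("no acceptable policy within the effort budget")

print(maximizer_choice(50))   # 50 -- spends the whole effort budget
print(satisficer_choice(50))  # 1  -- stops once P(full) >= 0.95
```

Under this toy model the expected-utility maximizer always spends its entire effort budget, while the satisficer stops at the first policy that clears the 95% bar.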
It seems *possible* for an otherwise well-designed agent to execute this goal to the greatest achievable degree, by acting in bounded space, over a bounded time, with a limited amount of effort. There does not appear to be a sequence of policies the agent would evaluate as better fulfilling its decision criterion, which use successively more and more effort.\n\nThe \"taskness\" of this goal, even assuming it was correctly [identified](https://arbital.com/p/36y), wouldn't by itself make the broomstick a fully taskish AGI. We also have to consider whether every subprocess of the AI is similarly tasky; whether there is any subprocess anywhere in the AI that tries to improve memory efficiency 'as far as possible'. But it would be a start, and make further safety features more feasible/useful.\n\nSee also [https://arbital.com/p/2r8](https://arbital.com/p/2r8) as an [open problem in AGI alignment](https://arbital.com/p/2mx).", "date_published": "2017-01-26T01:55:21Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A \"Task\" is a goal within an [AI](https://arbital.com/p/2c) that only covers a bounded amount of space and time, and can be satisfied by a limited amount of effort.\n\nAn example might be \"fill this cauldron with water before 1pm\"; but even there, we have to be careful. \"Maximize the probability that this cauldron contains water at 1pm\" would imply unlimited effort, since slightly higher probabilities could be obtained by adding more and more effort.\n\n\"Carry out some policy such that there's at least a 95% chance that the cauldron is at least 90% full of water by 1pm\" would be more task-ish. A limited effort seems like definitely enough to do that, and then it can't be done any further by expending more effort.\n\nSee also [https://arbital.com/p/2pf](https://arbital.com/p/2pf), [https://arbital.com/p/2r8](https://arbital.com/p/2r8) and [https://arbital.com/p/6w](https://arbital.com/p/6w)."], "tags": ["B-Class"], "alias": "4mn"} {"id": "514eb24e77c364245e575291d69867fc", "title": "Distinguish which advanced-agent properties lead to the foreseeable difficulty", "url": "https://arbital.com/p/distinguish_advancement", "source": "arbital", "source_type": "text", "text": "Any general project of producing a large edifice of good thinking should try to break down the ideas into modular pieces, distinguish premises from conclusions, and clearly label which reasoning steps are being used. Applied to [AI alignment theory](https://arbital.com/p/2v), one of the things this suggests is that if you propose any sort of potentially difficult or dangerous future behavior from an AI, you should distinguish what particular kinds of advancement or cognitive intelligence are supposed to produce this difficulty. 
In other words, supposed [foreseeable difficulties](https://arbital.com/p/6r) should come with proposed [advanced agent properties](https://arbital.com/p/2c) that match up to them.", "date_published": "2016-06-20T20:12:33Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "4n1"} {"id": "6ce4b615d60cdded951c5c6523d45637", "title": "Natural language understanding of \"right\" will yield normativity", "url": "https://arbital.com/p/4s", "source": "arbital", "source_type": "text", "text": "This proposition is true if you can take a cognitively powerful agent that otherwise seems pretty competent at understanding natural language, and that has been previously trained out of infrahuman errors in understanding natural language, ask it to 'do the right thing' or 'do the right thing, defined the right way' and its natural language understanding of 'right' yields what we would intuitively see as normativity.\n\n### Arguments\n\n\n\nNatural categories have boundaries with low [algorithmic information](https://arbital.com/p/5v) relative to boundaries produced by a purely epistemic system with a simplicity prior.\n\n'Unnatural' categories have value-laden boundaries. Values have high algorithmic information because of the [https://arbital.com/p/1y](https://arbital.com/p/1y) and [https://arbital.com/p/5l](https://arbital.com/p/5l). Unnatural categories appear simple to us because we do dimensional reduction on value boundaries. Things merely near to the boundaries of unnatural categories can fall off rapidly in value because of fragility.\n\nThere's an inductive problem where 18 things are important and only 17 of them vary between the positive and negative examples in the data.\n\n[Edge instantiation](https://arbital.com/p/2w) makes this worse because it tends to seek out extreme cases.\n\nThe word 'right' involves a lot of what we call 'philosophical competence' in the sense that humans figuring it out will go through a lot of new cognitive use-paths ('unprecedented excursions') that they didn't traverse while disambiguating blue and green. This also holds true when people are reflecting on how to figure out 'right'. Example case of CDT vs. UDT.\n\nThis also matters because edge instantiation on the most 'right' as persuasively-right cases, will produce things that humans find superpersuasive (perhaps via shoving brains onto strange new pathways). 
So we can't define right as that which would counterfactually cause a human model to agree that 'right' applies.\n\nThis keys into the inductive problem where variation must be shadowed in the data for the induced concept to cover it.\n\nBut if you had a complete predictive model of a human, it's then possible though not necessary that normative boundaries might be possible to induce by examples and asking to clarify ambiguities.", "date_published": "2015-12-16T03:33:27Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress"], "alias": "4s"} {"id": "dea6a4904ab77ab547281f00e89c036b", "title": "Work in progress", "url": "https://arbital.com/p/work_in_progress_meta_tag", "source": "arbital", "source_type": "text", "text": "A meta tag for pages unfinished pages which an author is still making major changes to.", "date_published": "2016-07-05T20:48:12Z", "authors": ["Erik Istre", "Kevin Clancy", "Alexei Andreev", "Tsvi BT", "Stephanie Zolayvar", "Eric Rogstad", "Patrick Stevens", "Jeremy Perret", "Nate Soares", "Morgan Redding", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina", "Eliezer Yudkowsky", "Jack Gallagher"], "summaries": [], "tags": ["Stub"], "alias": "4v"} {"id": "7c9f126508ccd20e5385c820d281ef37", "title": "Underestimating complexity of value because goodness feels like a simple property", "url": "https://arbital.com/p/underestimate_value_complexity_perceputal_property", "source": "arbital", "source_type": "text", "text": "One potential reason why people might tend to systematically underestimate the [complexity of value](https://arbital.com/p/5l) is if the \"goodness\" of a policy or goal-instantiation *feels like* a simple, direct property. That is, our brains compute the goodness level and make it available to us as a relatively simple quantity, so we *feel like* it's a simple fact that tiling the universe with tiny agents experiencing maximum simply-represented 'pleasure' levels, is a *bad* version of happiness. We feel like it ought to be simple to yell at an AI \"Just give me high-value happiness, not this *weird low-value* happiness!\" Or have the AI learn, from a few examples, that it's meant to produce *high*-value X and not *low*-value X, especially if the AI is smart enough to learn other simple boundaries, like the difference between red objects and blue objects. Where actually the boundary between \"good X\" and \"bad X\" is [value-laden](https://arbital.com/p/36h) and far more wiggly and would require far more examples to delineate. What our brain computes as a seemingly simple, perceptually available one-dimensional quantity, does not always correspond to a simple, easy-to-learn gradient in the space of policies or outcomes. This is especially true of the seemingly readily-available property of [beneficialness](https://arbital.com/p/3d9).", "date_published": "2016-06-26T23:23:07Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Psychologizing", "B-Class"], "alias": "4v2"} {"id": "9d605455a0ea5bb87f9ea5e44920b349", "title": "Subjective probability", "url": "https://arbital.com/p/subjective_probability", "source": "arbital", "source_type": "text", "text": "What does it *mean* to say that a flipped coin has a 50% probability of landing heads?\n\nThere are multiple ways to answer this question, depending on what you mean by \"probability\". 
This page discusses \"subjective probabilities,\" which are a tool for quantifying your personal uncertainty about the world.\n\nImagine flipping a coin and slapping it against your wrist. It's already landed either heads or tails. The fact that you don't know whether it's heads or tails is a fact about _you,_ not a fact about the coin. Ignorance is in the mind, not in the world.\n\nSo your mind is representing the coin, and you're unsure about which way the coin came up. Those probabilities, represented in your brain, are your subjective probabilities. [https://arbital.com/p/1bv](https://arbital.com/p/1bv) is concerned with the formalization, study, and manipulation of subjective probabilities.\n\nIf probabilities are simply subjective mental states, what does it mean to say that probabilities are \"good,\" \"correct,\" \"accurate,\" or \"true\"? The subjectivist answer, roughly, is that a probability distribution becomes more accurate as it puts more of its probability mass on the true possibility within the set of all possibilities it considers. For more on this see [https://arbital.com/p/4yj](https://arbital.com/p/4yj).\n\nSubjective probabilities, given even a small [grain of truth](https://arbital.com/p/), will become *more* accurate as they interact with reality and execute [Bayesian updates](https://arbital.com/p/1ly). Your subjective belief about \"Is it cloudy today?\" is materially represented in your brain, and becomes more accurate as you look up at the sky and causally interact with it: light from the clouds in the sky comes down, enters your retina, is transduced to nerve impulses, processed by your visual cortex, and then your subjective belief about whether it's cloudy becomes more accurate.\n\n'Subjective probability' designates our view of probability as an epistemic state, something inherently in the mind, since reality itself is not uncertain. It doesn't mean 'arbitrary probability' or 'probability that somebody just made up with no connection to reality'. Your belief that it's cloudy outside (or sunny) is a belief, but not an arbitrary or made-up belief. The same can be true about your statement that you think it's 90% likely to be sunny outside, because it was sunny this morning and it's summer, even though you're currently in an interior room and you haven't checked the weather. The outdoors itself is not wavering between sunny and cloudy; but your *guess* that it's 9 times more likely to be sunny than cloudy is not ungrounded. \n\nSeveral [coherence theorems](https://arbital.com/p/7ry) suggest that classical probabilities are a *uniquely* good way of quantifying the relative credibility we attach to our guesses; e.g. that even if it's probably sunny, it's still *more* likely for it to be cloudy outside than for the Moon to be made of green cheese. This in turn says that while the probabilities themselves may exist in our minds, the laws that govern the manipulation and updating of these probabilities are as solid as any other mathematical fact.\n\nFor an example of a solid law governing subjective probability, see Arbital's [guide to Bayes' rule](https://arbital.com/p/1zq).", "date_published": "2017-02-08T17:36:49Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["What does it *mean* to say that a fair coin has a 50% probability of landing heads?\n\nImagine flipping a coin and slapping it against your wrist. It's already landed either heads or tails. 
The fact that you don't know whether it's heads or tails is a fact about _you,_ not a fact about the coin. Ignorance is in the mind, not in the world.\n\nSubjective probabilities are a tool for quantifying uncertainty about the world. In your mind's representation of the coin, you're unsure about which way the coin came up. Those probabilities, represented in your brain, are your subjective probabilities.\n\n[https://arbital.com/p/1bv](https://arbital.com/p/1bv) is concerned with the formalization, study, and manipulation of subjective probabilities.\n\nSeveral [coherence theorems](https://arbital.com/p/7ry) suggest that classical probability is a *uniquely* good way to quantify subjective uncertainty. This in turn means that while probabilities may be 'subjective' in the sense of existing in our minds rather than outside reality, the laws governing the manipulation and [updating](https://arbital.com/p/1ly) of these probabilities are solid and determined."], "tags": ["C-Class"], "alias": "4vr"} {"id": "f257c6f4c55f1b477e22e7ee85cf53f4", "title": "Identifying ambiguous inductions", "url": "https://arbital.com/p/inductive_ambiguity", "source": "arbital", "source_type": "text", "text": "One of the old fables in machine learning is the story of the \"tank classifier\" - a neural network that had supposedly been trained to detect enemy tanks hiding in a forest. It turned out that all the photos of enemy tanks had been taken on sunny days and all the photos of the same field without the tanks had been taken on cloudy days, meaning that the neural net had really just trained itself to recognize the difference between sunny and cloudy days (or just the difference between bright and dim pictures). ([Source](http://lesswrong.com/lw/7qz/machine_learning_and_unintended_consequences/#d6o1).)\n\nWe could view this problem as follows: A human looking at the labeled data might have seen several concepts that someone might be trying to point at - tanks vs. no tanks, cloudy vs. sunny days, or bright vs. dim pictures. A human might then ask, \"Which of these possible categories did you mean?\" and describe the difference using words; or, if it was easier for them to generate pictures than to talk, generate new pictures that distinguished among the possible concepts that could have been meant. Since learning a simple boundary that separates positive from negative instances in the training data is a form of induction, we could call this problem noticing \"inductive ambiguities\" or \"ambiguous inductions\".\n\nThis problem bears some resemblance to numerous setups in computer science where we can query an oracle about how to classify instances and we want to learn the concept boundary using a minimum number of instances. However, identifying an \"inductive ambiguity\" doesn't seem to be exactly the same problem, or at least, it's not obviously the same problem. Suppose we consider the tank-classifier problem. Distinguishing levels of illumination in the picture is a very simple concept, so it would probably be the first one learned; then, treating the problem in classical oracle-query terms, we might imagine the AI presenting the user with various random pixel fields at intermediate levels of illumination. 
The user, not having any idea what's going on, classifies these intermediate levels of illumination as 'not tanks', and so the AI soon learns that only quite sunny levels of illumination are required.\n\nPerhaps what we want is less like \"figure out exactly where the concept boundary lies by querying the edge cases to the oracle, assuming our basic idea about the boundary is correct\" and more like \"notice when there's more than one plausible idea that describes the boundary\" or \"figure out if the user could have been trying to communicate more than one plausible idea using the training dataset\".\n\n# Possible approaches\n\nSome possibly relevant approaches that might feed into the notion of \"identifying inductive ambiguities\":\n\n- [Conservatism](https://arbital.com/p/2qp). Can we draw a much narrower, but somewhat more complicated, boundary around the training data?\n- Can we get a concept that more strongly predicts or more tightly predicts the training cases we saw? (Closely related to conservatism - if we suppose there's a generator for the training cases, then a more conservative generator concentrates more probability density into the training cases we happened to see.)\n- Can we detect commonalities in the positive training cases that aren't already present in the concept we've learned?\n - This might be a good fit for something like a [generative adversarial](http://arxiv.org/abs/1406.2661) approach, where we generate random instances of the concept we learned, then ask if we can detect the difference between those random instances and the actual positively labeled training cases.\n- Is there a way to blank out the concept we've already learned so that it doesn't just get learned again, and ask if there's a different concept that's learnable instead? That is, whatever algorithm we're using, is there a good way to tell it \"Don't learn *this* concept, now try to learn\" and see if it can learn something substantially different?\n- Something something Gricean implication.\n\n# Relevance in value alignment\n\nSince inductive ambiguities are meant to be referred to the user for resolution rather than resolved automatically (the whole point is that the necessary data for an automatic resolution isn't there), they're instances of \"[user queries](https://arbital.com/p/2qq)\" and all [standard worries about user queries](https://arbital.com/p/2qq) would apply.\n\nThe hope about a good algorithm for identifying inductive ambiguities is that it would help catch [edge instantiations](https://arbital.com/p/2w) and [unforeseen maximums](https://arbital.com/p/47), and maybe just simple errors of communication.", "date_published": "2016-03-20T02:44:19Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["An 'inductive ambiguity' is when there's more than one simple concept that fits the data, even if some of those concepts are much simpler than others, and you want to figure out *which* simple concept was intended. Suppose you're given images that show camouflaged enemy tanks and empty forests, but it so happens that the tank-containing pictures were taken on sunny days and the forest pictures were taken on cloudy days. Given the training data, the key concept the user intended might be \"camouflaged tanks\", or \"sunny days\", or \"pixel fields with brighter illumination levels\". 
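As a toy illustration (an invented example with made-up features and data, not the actual photos from the tank story), an ambiguity check could flag any case where more than one simple single-feature rule perfectly separates the labeled training set:

```python
# Toy sketch: flag an "ambiguous induction" when more than one simple
# single-feature rule separates the labeled data. Features and values invented.

# Each training image reduced to two candidate features: (brightness, tank_present).
# Because all tank photos were sunny, both features separate the data perfectly.
positives = [(0.9, 1), (0.8, 1), (0.85, 1)]   # labeled "tank"
negatives = [(0.2, 0), (0.3, 0), (0.25, 0)]   # labeled "no tank"

def separates(feature_index: int) -> bool:
    """Does thresholding this single feature perfectly separate the labels?"""
    pos_vals = [x[feature_index] for x in positives]
    neg_vals = [x[feature_index] for x in negatives]
    return min(pos_vals) > max(neg_vals) or max(pos_vals) < min(neg_vals)

candidate_rules = [i for i in range(2) if separates(i)]
if len(candidate_rules) > 1:
    print("Ambiguous induction: features", candidate_rules,
          "all fit the training data; ask the user which one was meant.")
```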
The last concept is by far the simplest, but rather than just assume the simplest explanation is correct with most of the probability mass, we want the algorithm (or AGI) to detect that there's more than one simple-ish boundary that might separate the data, and [check with the user](https://arbital.com/p/2qq) about *which* boundary was intended to be learned."], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "Work in progress", "B-Class"], "alias": "4w"} {"id": "d1acbe9ea63fbf71a5f2be4a543c6e31", "title": "Likelihood functions, p-values, and the replication crisis", "url": "https://arbital.com/p/likelihood_vs_pvalue", "source": "arbital", "source_type": "text", "text": "__Or: Switching From Reporting p-values to Reporting Likelihood Functions Might Help Fix the Replication Crisis: A personal view by Eliezer Yudkowsky.__\n\n_Disclaimers:_\n\n- _This dialogue was written by a [Bayesian](https://arbital.com/p/1r8). The voice of the Scientist in the dialogue below may fail to pass the [Ideological Turing Test](https://en.wikipedia.org/wiki/Ideological_Turing_Test) for frequentism, that is, it may fail to do justice to frequentist arguments and counterarguments._\n- _It does not seem sociologically realistic, to the author, that the proposal below could be adopted by the scientific community at large within the next 10 years. It seemed worth writing down nevertheless._\n\n_If you don't already know Bayes' rule, check out Arbital's [Guide to Bayes' Rule](https://arbital.com/p/1zq) if confused._\n\n----------\n\n**Moderator:** Hello, everyone. I'm here today with the **Scientist,** a working experimentalist in... chemical psychology, or something; with the **Bayesian,** who's going to explain why, on their view, we can make progress on the replication crisis by replacing p-values with some sort of Bayesian thing--\n\n**Undergrad:** Sorry, can you repeat that?\n\n**Moderator:** And finally, the **Confused Undergrad** on my right. **Bayesian,** would you care to start by explaining the rough idea?\n\n**Bayesian:** Well, the rough idea is something like this. Suppose we flip a possibly-unfair coin six times, and observe the sequence HHHHHT. Should we be suspicious that the coin is biased?\n\n**Scientist:** No.\n\n**Bayesian:** This isn't a literal coin. Let's say we present a series of experimental subjects with two cookies on a plate, one with green sprinkles and one with red sprinkles. The first five people took cookies with green sprinkles and the sixth person took a cookie with red sprinkles. %note: And they all saw separate plates, on a table in the waiting room marked \"please take only one\" so nobody knew what was being tested, and none of them saw the others' cookie choices.% Do we think most people prefer green-sprinkled cookies or do we think it was just random?\n\n**Undergrad:** I think I would be *suspicious* that maybe people liked green sprinkles better. Or at least that the sort of people who go to the university and get used as test subjects like green sprinkles better. Yes, even if I just saw that happen in the first six cases. But I'm guessing I'm going to get dumped-on for that.\n\n**Scientist:** I think I would be genuinely not-yet-suspicious. There's just too much stuff that looks good after N=6 that doesn't pan out with N=60.\n\n**Bayesian:** I'd at least strongly suspect that people in the test population *don't* mostly prefer red sprinkles. 
But the reason I introduced this example is as an oversimplified example of how current scientific statistics calculate so-called \"p-values\", and what a Bayesian sees as the central problem with that.\n\n**Scientist:** And we can't use a more realistic example with 30 subjects?\n\n**Bayesian:** That would not be nice to the Confused Undergrad.\n\n**Undergrad:** *Seconded.*\n\n**Bayesian:** So: Heads, heads, heads, heads, heads, tails. I ask: is this \"statistically significant\", as current conventional statisticians would have the phrase?\n\n**Scientist:** I reply: no. On the null hypothesis that the coin is fair, or analogously that people have no strong preference between green and red sprinkles, we should expect to see a result as extreme as this in 14 out of 64 cases.\n\n**Undergrad:** Okay, just to make sure I have this straight: That's because we're considering results like HHHTHH or TTTTTT to be equally or more extreme, and there are 14 total possible cases like that, and we flipped the coin 6 times which gives us $2^6 = 64$ possible results. 14/64 = 22%, which is not less than 5%, so this is not statistically significant at the $p<0.05$ level.\n\n**Scientist:** That's right. However, I'd also like to observe as a matter of practice that even if you get HHHHHH on your first six flips, I don't advise stopping there and sending in a paper where you claim that the coin is biased towards heads.\n\n**Bayesian:** Because if you can decide to *stop* flipping the coin at a time of your choice, then we have to ask \"How likely it is that you can find some place to stop flipping the coin where it looks like there's a significant number of heads?\" That's a whole different kettle of fish according to the p-value concept.\n\n**Scientist:** I was just thinking that N=6 is not a good number of experimental subjects when it comes to testing cookie preferences. But yes, that too.\n\n**Undergrad:** Uh... why does it make a difference if I can decide when to stop flipping the coin?\n\n**Bayesian:** What an excellent question.\n\n**Scientist:** Well, this is where the concept of p-values is less straightforward than plugging the numbers into a statistics package and believing whatever the stats package says. If you previously decided to flip exactly six coins, and then stop, regardless of what results you got, then you would get a result as extreme as \"HHHHHH\" or \"TTTTTT\" 2/64 of the time, or 3.1%, so p<0.05. However, suppose that instead you are a bad fraudulent scientist, or maybe just an ignorant undergraduate who doesn't realize what they're doing wrong. Instead of picking the number of flips in advance, you keep flipping the coin until the statistics software package tells you that you got a result that *would have been* statistically significant *if,* contrary to the actual facts of the case, you'd decided *in advance* to flip the coin exactly that many times. But you didn't decide in advance to flip the coin that many times. You decided it after checking the actual results. Which you are not allowed to do.\n\n**Undergrad:** I've heard that before, but I'm not sure I understand on a gut level why it's bad for me to decide when I've collected enough data.\n\n**Scientist:** What we're trying to do here is set up a test that the null hypothesis cannot pass--to make sure that where there's no fire, there's unlikely to be smoke. We want a complete experimental process which is unlikely to generate a \"statistically significant\" discovery if there's no real phenomenon being investigated. 
If you flip the coin exactly six times, if you decide that in advance, then you are less than 5% likely to get a result as extreme as \"six heads\" or \"six tails\". If you flip the coin repeatedly, and check *repeatedly* for a result that *would* have had p<0.05 if you'd decided in advance to flip the coin exactly that many times, your chance of getting a nod from the statistics package is *much greater* than 5%. You're carrying out a process which is much more than 5% likely to yell \"smoke\" in the absence of fire.\n\n**Bayesian:** The way I like to explain the problem is like this: Suppose you flip a coin six times and get HHHHHT. If you, in the secret depths of your mind, in your inner heart that no other mortal knows, decided to flip the coin exactly six times and then stop, this result is not statistically significant; p=0.22. If you decided in the secret depths of your heart, that you were going to *keep flipping the coin until it came up tails,* then the result HHHHHT *is* statistically significant with p=0.03, because the chance of a fair coin requiring you to wait six or more flips to get one tail is 1/32.\n\n**Undergrad:** What.\n\n**Scientist:** It's a bit of a parody, obviously--nobody would really decide to flip a coin until they got one tail, then stop--but the Bayesian is technically correct about how the rules for p-values work. What we're asking is how rare our outcome is, within the class of outcomes we *could* have gotten. The person who keeps flipping the coin until they get one tail has possible outcomes {T, HT, HHT, HHHT, HHHHT, HHHHHT, HHHHHHT...} and so on. The class of outcomes where you get to the sixth round or later is the class of outcomes {HHHHHT, HHHHHHT...} and so on, a set of outcomes which collectively have a probability of 1/64 + 1/128 + 1/256... = 1/32. Whereas if you flip the coin exactly six times, your class of possible outcomes is {TTTTTT, TTTTTH, TTTTHT, TTTTHH...}, a set of 64 possibilities within which the outcome HHHHHT is something we could lump in with {HHHHTH, HHHTHH, THTTTT...} and so on. So although it's counterintuitive, if we really had decided to run the first experiment, HHHHHT would be a statistically significant result that a fair coin would be unlikely to give us. And if we'd decided to run the second experiment, HHHHHT would not be a statistically significant result because fair coins sometimes do something *like* that.\n\n**Bayesian:** And it doesn't bother you that the meaning of your experiment depends on your private state of mind?\n\n**Scientist:** It's an honor system. Just like the process doesn't work if you lie about which coinflips you actually saw, the process doesn't work--is not a fair test in which non-fires are unlikely to generate smoke--if you lie about *which experiment you performed.* You must honestly say the rules you followed to flip the coin the way you did. Unfortunately, since what you were thinking about the experiment is less clearly visible than the actual coinflips, people are much more likely to twist *how they selected* their number of experimental subjects, or how they selected which tests to run on the data, than they are to tell a blatant lie about what the data said. That's p-hacking. There are, unfortunately, much subtler and less obvious ways of generating smoke without fire than claiming post facto to have followed the rule of flipping the coin until it came up tails. 
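Both of those numbers are quick to verify. A short Python sketch (the helper names are arbitrary) computes the p-value of HHHHHT under each of the two private stopping rules:

```python
from math import comb

# Same data, two p-values, depending only on the experimenter's stopping rule.

def p_value_fixed_n(heads: int, n: int = 6) -> float:
    """p-value for a fair coin, having decided in advance to flip exactly n times:
    probability of a result at least as extreme as `heads` heads out of n."""
    extreme = [k for k in range(n + 1) if abs(k - n / 2) >= abs(heads - n / 2)]
    return sum(comb(n, k) for k in extreme) / 2 ** n

def p_value_flip_until_tail(n_flips: int = 6) -> float:
    """p-value for a fair coin, having decided to flip until the first tail:
    probability that the first tail arrives on flip n_flips or later."""
    return 0.5 ** (n_flips - 1)

print(p_value_fixed_n(5))          # 0.21875 -- the 14/64 above, not significant
print(p_value_flip_until_tail(6))  # 0.03125 -- 1/32, "significant"
```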
It's a serious problem, and underpins some large part of the great replication crisis, nobody's sure exactly how much.\n\n**Undergrad:** That... sorta makes sense, maybe? I'm guessing this is one of those cases where I have to work through a lot of example problems before it becomes obvious.\n\n**Bayesian:** No.\n\n**Undergrad:** No?\n\n**Bayesian:** You were right the first time, Undergrad. If what the experimentalist is *thinking* has no causal impact on the coin, then the experimentalist's thoughts cannot possibly make any difference to what the coin is saying to us about Nature. My dear Undergrad, you are being taught weird, ad-hoc, overcomplicated rules that aren't even internally consistent--rules that theoretically output *different* wrong answers depending on your private state of mind! And *that* is a problem that runs far deeper into the replication crisis than people misreporting their inner thoughts.\n\n**Scientist:** A bold claim to say the least. But don't be coy; tell us what you think we should be doing instead?\n\n**Bayesian:** I analyze as follows: The exact result HHHHHT has a 1/64 or roughly 1.6% probability of being produced by a fair coin flipped six times. To simplify matters, suppose we for some reason were already pondering the hypothesis that the coin was biased to produce 5/6 heads--again, this is an unrealistic example we can de-simplify later. Then this hypothetical biased coin would have a $(5/6)^5 \\cdot (1/6)^1 \\approx 6.7\\%$ probability of producing HHHHHT. So between our two hypotheses \"The coin is fair\" and \"The coin is biased to produce 5/6ths heads\", our exact experimental result is *4.3 times more likely* to be observed in the second case. HHHHHT is also 0.01% likely to be produced if the coin is biased to produce 1/6th heads and 5/6ths tails, so we've already seen some quite strong evidence *against* that particular hypothesis, if anyone was considering it. The exact experimental outcome HHHHHT is 146 times as likely to be produced by a fair coin as by a coin biased to produce 1/6th heads. %note: Recall the Bayesian's earlier thought that, after seeing five subjects select green cookies followed by one subject selecting a red cookie, we'd already picked up strong evidence *against* the proposition: \"Subjects in this experimental population lopsidedly prefer red cookies over green cookies.\"%\n\n**Undergrad:** Well, I think I can follow the calculation you just did, but I'm not clear on what that calculation means.\n\n**Bayesian:** I'll try to explain the meaning shortly, but first, note this. That calculation we just did has *no dependency whatsoever* on *why* you flipped the coin six times. You could have stopped after six because you thought you'd seen enough coinflips. You could have done an extra coinflip after the first five because Namagiri Thayar spoke to you in a dream. The coin doesn't care. The coin isn't affected. It remains true that the exact result HHHHHT is 23% as likely to be produced by a fair coin as by a biased coin that comes up heads 5/6ths of the time.\n\n**Scientist:** I agree that this is an interesting property of the calculation that you just did. And then what?\n\n**Bayesian:** You report the results in a journal. Preferably including the raw data so that others can calculate the likelihoods for any other hypotheses of interest. 
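In code, the likelihood of the exact sequence under any hypothesized per-flip probability of heads takes only a few lines (a sketch with an arbitrary function name, reproducing the numbers above):

```python
# Likelihood of the exact sequence HHHHHT under a hypothesized heads-probability.

def likelihood(sequence: str, p_heads: float) -> float:
    """P(exact sequence | coin lands heads with probability p_heads)."""
    prob = 1.0
    for flip in sequence:
        prob *= p_heads if flip == "H" else (1 - p_heads)
    return prob

data = "HHHHHT"
fair, biased_heads, biased_tails = 1/2, 5/6, 1/6

print(likelihood(data, fair))          # ~0.0156  (1/64)
print(likelihood(data, biased_heads))  # ~0.0670
print(likelihood(data, biased_tails))  # ~0.000107

# Likelihood ratios quoted in the dialogue:
print(likelihood(data, biased_heads) / likelihood(data, fair))  # ~4.3
print(likelihood(data, fair) / likelihood(data, biased_tails))  # ~146
```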
Say somebody else suddenly becomes interested in the hypothesis \"The coin is biased to produce 9/10ths heads.\" Seeing HHHHHT is 5.9% likely in that case, so 88% as likely than if the coin is biased to produce 5/6ths heads (making the data 6.7% likely), or 3.7 times as likely than if the coin is fair (making the data 1.6% likely). But you shouldn't have to think of all possible hypotheses in advance. Just report the raw data so that others can calculate whatever likelihoods they need. Since this calculation deals with the *exact* results we got, rather than summarizing it into some class or set of supposedly similar results, it puts a greater emphasis on reporting your exact experimental data to others.\n\n**Scientist:** Reporting raw data seems an important leg of a good strategy for fighting the replication crisis, on this we agree. I nonetheless don't understand what experimentalists are supposed to *do* with this \"X is Q times as likely as Y\" stuff.\n\n**Undergrad:** Seconded.\n\n**Bayesian:** Okay, so... this isn't trivial to describe without making you run through [a whole introduction to Bayes' rule](https://arbital.com/p/1zq)--\n\n**Undergrad:** Great. Just what I need, another weird complicated 4-credit course on statistics.\n\n**Bayesian:** It's literally [a 1-hour read if you're good at math](https://arbital.com/p/693). It just isn't literally *trivial* to understand with no prior introduction. Well, even with no introduction whatsoever, I may be able to fake it with statements that will *sound* like they might be reasonable--and the reasoning *is* valid, it just might not be obvious that it is. Anyway. It is a theorem of probability that the following is valid reasoning:\n\n*(the Bayesian takes a breath)*\n\n**Bayesian:** Suppose that Professor Plum and Miss Scarlet are two suspects in a murder. Based on their prior criminal convictions, we start out thinking that Professor Plum is twice as likely to have committed the murder as Miss Scarlet. We then discover that the victim was poisoned. We think that, assuming he committed a murder, Professor Plum would be 10% likely to use poison; assuming Miss Scarlet committed a murder, she would be 60% likely to use poison. So Professor Plum is around *one-sixth as likely* to use poison as Miss Scarlet. Then after observing the victim was poisoned, we should update to think Plum is around one-third as likely to have committed the murder as Scarlet: $2 \\times \\frac{1}{6} = \\frac{1}{3}.$\n\n**Undergrad:** Just to check, what do you mean by saying that \"Professor Plum is one-third as likely to have committed the murder as Miss Scarlet\"?\n\n**Bayesian:** I mean that if these two people are our only suspects, we think Professor Plum has a 1/4 probability of having committed the murder and Miss Scarlet has a 3/4 probability of being guilty. So Professor Plum's probability of guilt is one-third that of Miss Scarlet's.\n\n**Scientist:** Now *I'd* like to know what you mean by saying that Professor Plum had a 1/4 probability of committing the murder. Either Plum committed the murder or he didn't; we can't observe the murder be committed multiple times and Professor Plum doing it 1/4th of the time.\n\n**Bayesian:** Are we going there? I guess we're going there. My good Scientist, I mean that if you offered me either side of an even-money bet on whether Plum committed the murder, I'd bet that he didn't do it. 
But if you offered me a gamble that costs \\$1 if Professor Plum is innocent and pays out \\$5 if he's guilty, I'd cheerfully accept that gamble. We only ran the 2012 US Presidential Election one time, but that doesn't mean that on November 7th you should've refused a \\$10 bet that paid out \\$1000 if Obama won. In general when prediction markets and large liquid betting pools put 60% betting odds on somebody winning the presidency, that outcome tends to happen 60% of the time; they are *well-calibrated* for probabilities in that range. If they were systematically uncalibrated--if in general things happened 80% of the time when prediction markets said 60%--you could use that fact to pump money out of prediction markets. And your pumping out that money would adjust the prediction-market prices until they were well-calibrated. If things to which prediction markets assign 70% probability happen around 7 times out of 10, why insist for reasons of ideological purity that the probability statement is meaningless?\n\n**Undergrad:** I admit, that *sounds* to me like it makes sense, if it's not just the illusion of understanding due to my failing to grasp some far deeper debate.\n\n**Bayesian:** There is indeed a [deeper debate](https://arbital.com/p/4y9), but what the deeper debate works out to is that your illusion of understanding is pretty much accurate as illusions go.\n\n**Scientist:** Yeah, I'm going to want to come back to that issue later. What if there are two agents who both seem 'well-calibrated' as you put it, but one agent says 60% and the other agent says 70%?\n\n**Bayesian:** If I flip a coin and don't look at it, so that I don't know yet if it came up heads or tails, then my ignorance about the coin isn't a fact about the coin, it's a fact about me. Ignorance exists in the mind, not in the environment. A blank map does not correspond to a blank territory. If you peek at the coin and I don't, it's perfectly reasonable for the two of us to occupy different states of uncertainty about the coin. And given that I'm not absolutely certain, I can and should quantify my uncertainty using probabilities. There's like [300 different theorems](https://arbital.com/p/7ry) showing that I'll get into trouble if my state of subjective uncertainty *cannot* be viewed as a coherent probability distribution. You kinda pick up on the trend after just the fourth time you see a slightly different clever proof that any violation of the standard probability axioms will cause the graves to vomit forth their dead, the seas to turn red as blood, and the skies to rain down dominated strategies and combinations of bets that produce certain losses--\n\n**Scientist:** Sorry, I shouldn't have said anything just then. Let's come back to this later? I'd rather hear first what you think we should do with the likelihoods once we have them.\n\n**Bayesian:** On the laws of probability theory, those likelihood functions *are* the evidence. They are the objects that send our prior odds of 2 : 1 for Plum vs. Scarlet to posterior odds of 1 : 3 for Plum vs. Scarlet. For any two hypotheses you care to name, if you tell me the relative likelihoods of the data given those hypotheses, I know how to update my beliefs. If you change your beliefs in any other fashion, the skies shall rain dominated strategies etcetera. Bayes' theorem: It's not just a statistical method, it's the LAW.\n\n**Undergrad:** I'm sorry, I still don't understand. 
Let's say we do an experiment and find data that's 6 times as likely if Professor Plum killed Mr. Boddy than if Miss Scarlet did. Do we arrest Professor Plum?\n\n**Scientist:** My guess is that you're supposed to make up a 'prior probability' that sounds vaguely plausible, like 'a priori, I think Professor Plum is 20% likely to have killed Mr. Boddy'. Then you combine that with your 6 : 1 likelihood ratio to get 3 : 2 posterior odds that Plum killed Mr. Boddy. So your paper reports that you've established a 60% posterior probability that Professor Plum is guilty, and the legal process does whatever it does with that.\n\n**Bayesian:** *No.* Dear God, no! Is that really what people think Bayesianism is?\n\n**Scientist:** It's not? I did always hear that the strength of Bayesianism is that it gives us posterior probabilities, which p-values don't actually do, and that the big weakness was that it got there by making up prior probabilities more or less out of thin air, which means that nobody will ever be able to agree on what the posteriors are.\n\n**Bayesian:** Science papers should report *likelihoods.* Or rather, they should report the raw data and helpfully calculate some likelihood functions on it. Not posteriors, never posteriors. \n\n**Undergrad:** What's a posterior? I'm trusting both of you to avoid the obvious joke here.\n\n**Bayesian:** A [posterior probability](https://arbital.com/p/1rp) is when you say, \"There's a 60% probability that Professor Plum killed Mr. Boddy.\" Which, as the Scientist points out, is something you never get from p-values. It's also something that, in my own opinion, should never be reported in an experimental paper, because it's not *the result of an experiment.*\n\n**Undergrad:** But... okay, Scientist, I'm asking you this one. Suppose we see data statistically significant at p<0.01, something we're less than 1% probable to see if the null hypothesis \"Professor Plum didn't kill Mr. Boddy\" is true. Do we arrest him?\n\n**Scientist:** First of all, that's not a realistic null hypothesis. A null hypothesis is something like \"Nobody killed Mr. Boddy\" or \"All suspects are equally guilty.\" But even if what you just said made sense, even if we could reject Professor Plum's innocence at p<0.01, you still can't say anything like, \"It is 99% probable that Professor Plum is guilty.\" That is just not what p-values mean.\n\n**Undergrad:** Then what *do* p-values mean?\n\n**Scientist:** They mean we saw data inside a preselected class of possible results, which class, as a whole, is less than 1% likely to be produced if the null hypothesis is true. That's *all* that it means. You can't go from there to \"Professor Plum is 99% likely to be guilty,\" for reasons the Bayesian is probably better at explaining. You can't go from there to anywhere that's someplace else. What you heard is what there is.\n\n**Undergrad:** Now I'm doubly confused. I don't understand what we're supposed to do with p-values *or* likelihood ratios. What kind of experiment does it take to throw Professor Plum in prison?\n\n**Scientist:** Well, realistically, if you get a couple more experiments at different labs also saying p<0.01, Professor Plum *is* probably guilty.\n\n**Bayesian:** And the 'replication crisis' is that it turns out he's *not* guilty.\n\n**Scientist:** Pretty much.\n\n**Undergrad:** That's not exactly reassuring.\n\n**Scientist:** Experimental science is not for the weak of nerve.\n\n**Undergrad:** So... 
Bayesian, are you about to say similarly that once you get an extreme enough likelihood ratio, say, anything over 100 to 1, or something, you can probably take something as true?\n\n**Bayesian:** No, it's a bit more complicated than that. Let's say I flip a coin 20 times and get HHHTHHHTHTHTTHHHTHHHTTHT. Well, the hypothesis \"This coin was rigged to produce exactly HHHTHHHTHTHTTHHHTHHHTTHT\" has a likelihood advantage of roughly a million-to-one over the hypothesis \"this is a fair coin\". On any reasonable system, unless you wrote down that single hypothesis in advance and handed it to me in an envelope and didn't write down any other hypotheses or hand out any other envelopes, we'd say the hypothesis \"This coin was rigged to produce HHHTHHHTHTHTTHHHTHHHTTHT\" has a complexity penalty of at *least* $2^{20} : 1$ because it takes 20 bits just to describe what the coin is rigged to do. In other words, the penalty to prior plausibility more than cancels out a million-to-one likelihood advantage. And that's just the start of the issues. But, *with* that said, I think there's a pretty good chance you could do okay out of just winging it, once you understood in an intuitive and common-sense way how Bayes' rule worked. If there's evidence pointing to Professor Plum with a likelihood of 1,000 : 1 over any other suspects you can think of, in a field that probably only contained six suspects to begin with, you can figure that the prior odds against Plum weren't much more extreme than 10 : 1 and that you can legitimately be at least 99% sure now.\n\n**Scientist:** But you say that this is *not* something you should report in the paper.\n\n**Bayesian:** That's right. How can I put this... one of the great commandments of Bayesianism is that you ought to take into account *all* the relevant evidence you have available; you can't exclude some evidence from your calculation just because you don't like it. Besides sounding like common sense, this is also a rule you have to follow to prevent your calculations from coming up with paradoxical results, and there are various particular problems where there's a seemingly crazy conclusion and the answer is, \"Well, you *also* need to condition on blah blah blah.\" My point being, how do I, as an experimentalist, know what *all the relevant evidence* is? Who am I to calculate a posterior? Maybe somebody else published a paper that includes more evidence, with more likelihoods to be taken into account, and I haven't heard about it yet, but somebody else has. I just contribute my own data and its likelihood function--that's all! It's not my place to claim that I've collected *all* the relevant evidence and can now calculate posterior odds, and even if I could, somebody else could publish another paper a week later and the posterior odds would change again.\n\n**Undergrad:** So, roughly your answer is, \"An experimentalist just publishes the paper and calculates the likelihood thingies for that dataset, and then somebody outside has to figure out what to do with the likelihood thingies.\"\n\n**Bayesian:** Somebody outside has to set up priors--probably just reasonable-sounding ignorance priors, maximum entropy stuff or complexity-based penalties or whatever--then try to make sure they've collected all the evidence, apply the likelihood functions, check to see if the result [makes sense](https://arbital.com/p/227), etcetera. 
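Sketched in Python with made-up numbers (the first likelihood vector reuses the HHHHHT figures quoted earlier; the second is invented), that outside workflow is just multiplication followed by normalization:

```python
# Downstream aggregation sketch: two papers each report a likelihood function
# over the same three hypotheses; a reader supplies a prior and normalizes.

hypotheses = ["fair", "5/6 heads", "9/10 heads"]

# Likelihood of each paper's exact data under each hypothesis.
paper_1 = [0.0156, 0.0670, 0.0590]   # the HHHHHT figures from earlier
paper_2 = [0.0103, 0.0223, 0.0056]   # invented values for a second dataset

prior = [1/3, 1/3, 1/3]   # the reader's own ignorance prior

unnormalized = [p * l1 * l2 for p, l1, l2 in zip(prior, paper_1, paper_2)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

for h, p in zip(hypotheses, posterior):
    print(f"{h}: {p:.2f}")
```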
And then they might have to revise that estimate if somebody publishes a new paper a week later--\n\n**Undergrad:** That sounds *awful.*\n\n**Bayesian:** It would be awful if we were doing meta-analyses of p-values. Bayesian updates are a *hell* of a lot simpler! Like, you literally just [multiply](https://arbital.com/p/1zg) the old posterior by the new likelihood [function](https://arbital.com/p/1zj) and normalize. If experiment 1 has a likelihood ratio of 4 for hypothesis A over hypothesis B, and experiment 2 has a likelihood ratio of 9 for A over B, the two experiments together have a likelihood ratio of 36.\n\n**Undergrad:** And you can't do that with p-values, I mean, a p-value of 0.05 and a p-value of 0.01 don't multiply out to p<0.0005--\n\n**Scientist:** *No.*\n\n**Bayesian:** I should like to take this moment to call attention to my superior smile.\n\n**Scientist:** I am still worried about the part of this process where somebody gets to make up prior probabilities.\n\n**Bayesian:** Look, that just corresponds to the part of the process where somebody decides that, having seen 1 discovery and 2 replications with p<0.01, they are willing to buy the new pill or whatever.\n\n**Scientist:** So your reply there is, \"It's subjective, but so is what you do when you make decisions based on having seen some experiments with p-values.\" Hm. I was going to say something like, \"If I set up a rule that says I want data with p<0.001, there's no further objectivity beyond that,\" but I guess you'd say that my asking for p<0.001 instead of p<0.0001 corresponds to my pulling a prior out of my butt?\n\n**Bayesian**: Well, except that asking for a particular p-value is not actually as good as pulling a prior out of your butt. One of the first of those 300 theorems proving *doom* if you violate probability axioms, was Abraham Wald's \"complete class theorem\" in 1947. Wald set out to investigate all the possible *admissible strategies,* where a strategy is a way of acting differently based on whatever observations you make, and different actions get different payoffs in different possible worlds. Wald termed an *admissible strategy* a strategy which was not dominated by some other strategy across all possible measures you could put on the possible worlds. Wald found that the class of admissible strategies was simply the class that corresponded to having a probability distribution, doing Bayesian updating on observations, and maximizing expected payoff.\n\n**Undergrad:** Can you perhaps repeat that in slightly smaller words?\n\n**Bayesian:** If you want to do different things depending on what you observe, and get different payoffs depending on what the real facts are, *either* your strategy can be seen as having a probability distribution and doing Bayesian updating, *or* there's another strategy that does better given at least some possible measures on the worlds and never does worse. So if you say anything as wild as \"I'm waiting to see data with p<0.0001 to ban smoking,\" in principle there must be some way of saying something along the lines of, \"I have a prior probability of 0.01% that smoking causes cancer, let's see those likelihood functions\" which does at least as well or better no matter what anyone else would say as their own prior probabilities over the background facts.\n\n**Scientist:** Huh.\n\n**Bayesian:** Indeed. And that was when the Bayesian revolution very slowly started; it's sort of been gathering steam since then. 
It's worth noting that Wald only proved his theorem a couple of decades after \"p-values\" were invented, which, from my perspective, helps explain how science got wedged into its peculiar current system.\n\n**Scientist:** So you think we should burn all p-values and switch to reporting all likelihood ratios all the time.\n\n**Bayesian:** In a word... yes.\n\n**Scientist:** I'm suspicious, in general, of one-size-fits-all solutions like that. I suspect you--I hope this is not too horribly offensive--I suspect you of idealism. In my experience, different people need different tools from the toolbox at different times, and it's not wise to throw out all the tools in your toolbox except one.\n\n**Bayesian:** Well, let's be clear where I am and amn't idealistic, then. Likelihood functions cannot solve the entire replication crisis. There are aspects of this that can't be solved by using better statistics. Open access journals aren't something that hinge on p-values versus likelihood functions. The broken system of peer commentary, presently in the form of peer review, is not something likelihood functions can solve.\n\n**Scientist:** But likelihood functions will solve everything else?\n\n**Bayesian:** No, but they'll at least *help* on a surprising amount. Let me count the ways:\n\n**Bayesian:** One. Likelihood functions don't distinguish between 'statistically significant' results and 'failed' replications. There are no 'positive' and 'negative' results. What used to be called the null hypothesis is now just another hypothesis, with nothing special about it. If you flip a coin and get HHTHTTTHHH, you have not \"failed to reject the null hypothesis with p<0.05\" or \"failed to replicate\". You have found experimental data that favors the fair-coin hypothesis over the 5/6ths-heads hypothesis with a likelihood ratio of 3.78 : 1. This may help to fight the file-drawer effect--not entirely, because there is a mindset in the journals of 'positive' results and biased coins being more exciting than fair coins, and we need to tackle that mindset directly. But the p-value system *encourages* that bad mindset. That's why p-hacking even exists. So switching to likelihoods won't fix everything right away, but it *sure will help.*\n\n**Bayesian:** Two. The system of likelihoods makes the importance of raw data clearer and will encourage a system of publishing the raw data whenever possible, because Bayesian analyses center around the probability of the *exact* data we saw, given our various hypotheses. The p-value system encourages you to think in terms of the data as being just one member of a class of 'equally extreme' results. There's a mindset here of people hoarding their precious data, which is not purely a matter of statistics. But the p-value system *encourages* that mindset by encouraging people to think of their result as part of some undistinguished class of 'equally or more extreme' values or whatever, and that its meaning is entirely contained in it being a 'positive' result that is 'statistically significant'.\n\n**Bayesian:** Three. The probability-theoretic view, or Bayesian view, makes it clear that different effect sizes are different hypotheses, as they must be, because they assign different probabilities to the exact observations we see. 
If one experiment finds a 'statistically significant' effect size of 0.4 and another experiment finds a 'statistically significant' effect size of 0.1 on whatever scale we're working in, the experiment *has not replicated* and we do not yet know what real state of affairs is generating our observations. This directly fights and negates the 'amazing shrinking effect size' phenomenon that is part of the replication crisis.\n\n**Bayesian:** Four. Working in likelihood functions makes it far easier to aggregate our data. It even helps to [point up](https://arbital.com/p/227) when our data is being produced under inconsistent conditions or when the true hypothesis is not being considered, because in this case we will find likelihood functions that end up being nearly zero everywhere, or where the best available hypothesis is achieving a much lower likelihood on the combined data than that hypothesis [expects itself to achieve](https://arbital.com/p/227). It is a stricter concept of replication that helps quickly point up when different experiments are being performed under different conditions and yielding results incompatible with a single consistent phenomenon.\n\n**Bayesian:** Five. Likelihood functions are objective facts about the data which do not depend on your state of mind. You cannot deceive somebody by reporting likelihood functions unless you are literally lying about the data or omitting data. There's no equivalent of 'p-hacking'.\n\n**Scientist:** Okay, that last claim in particular strikes me as *very* suspicious. What happens if I want to persuade you that a coin is biased towards heads, so I keep flipping it until I randomly get to a point where there's a predominance of heads, and then choose to stop?\n\n**Bayesian:** \"Shrug,\" I say. You can't mislead me by telling me what a real coin actually did.\n\n**Scientist:** I'm asking you what happens if I keep flipping the coin, checking the likelihood each time, until I see that the current statistics favor my pet theory, and then I stop.\n\n**Bayesian:** As a pure idealist seduced by the seductively pure idealism of probability theory, I say that so long as you present me with the true data, all I can and should do is update in the way Bayes' theorem says I should.\n\n**Scientist:** Seriously.\n\n**Bayesian:** I am serious.\n\n**Scientist:** So it doesn't bother you if I keep checking the likelihood ratio and continuing to flip the coin until I can convince you of anything I want.\n\n**Bayesian:** Go ahead and try it.\n\n**Scientist:** What I'm actually going to do is write a Python program which simulates flipping a fair coin *up to* 300 times, and I'm going to see how many times I can get a 20:1 likelihood ratio falsely indicating that the coin is biased to come up heads 55% of the time... why are you smiling?\n\n**Bayesian:** I wrote pretty much the same Python program when I was first converting to Bayesianism and finding out about likelihood ratios and feeling skeptical about the system maybe being abusable in some way, and then a friend of mine found out about likelihood ratios and *he* wrote [essentially the same program, also in Python](https://gist.github.com/Soares/941bdb13233fd0838f1882d148c9ac14). And lo, he found that false evidence of 20:1 for the coin being 55% biased was found at least once, somewhere along the way... 1.4% of the time. If you asked for more extreme likelihood ratios, the chances of finding them dropped off even faster.\n\n**Scientist:** Okay, that's not bad by the p-value way of looking at things. 
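A minimal sketch of the kind of simulation being described (not the exact program in the linked gist): flip a genuinely fair coin up to 300 times, and record how often the running likelihood ratio ever reaches 20:1 in favor of a hypothetical 55%-heads coin.

```python
import random

def ever_reaches_threshold(n_flips=300, p_alt=0.55, threshold=20.0):
    """Return True if the running likelihood ratio (55%-heads vs. fair coin)
    ever reaches the threshold while flipping a genuinely fair coin."""
    ratio = 1.0
    for _ in range(n_flips):
        heads = random.random() < 0.5                 # the coin really is fair
        ratio *= (p_alt if heads else 1 - p_alt) / 0.5
        if ratio >= threshold:
            return True
    return False

trials = 10_000
hits = sum(ever_reaches_threshold() for _ in range(trials))
print(hits / trials)   # a small fraction, in the neighborhood of the 1.4% quoted above
```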
But what if there's some more clever way of biasing it?\n\n**Bayesian:** When I was... I must have been five years old, or maybe even younger, and first learning about addition, one of the earliest childhood memories I have at all, is of adding 3 to 5 by counting 5, 6, 7 and believing that you could get different results from adding numbers depending on exactly how you did it. Which is cute, yes, and also indicates a kind of exploring, of probing, that was no doubt important in my starting to understand addition. But you still look back and find it humorous, because now you're a big grownup and you know you can't do that. My writing Python programs to try to find clever ways to fool myself by repeatedly checking the likelihood ratios was the same, in the sense that after I matured a bit more as a Bayesian, I realized that the feat I'd written those programs to try to do was *obviously* impossible. In the same way that trying to find a clever way to break apart the 3 into 2 and 1, and trying to add them separately to 5, and then trying to add the 1 and then the 2, in hopes you can get to 7 or 9 instead of 8, is just never ever going to work. The results in arithmetic are *theorems,* and it doesn't matter in what clever order you switch things up, you are never going to get anything except 8 when you carry out an operation that is validly equivalent to adding 3 plus 5. The theorems of probability theory are also theorems. If your Python program had actually worked, it would have produced a contradiction in probability theory, and thereby a contradiction in Peano Arithmetic, which provides a model for probability theory carried out using rational numbers. The thing you tried to do is *exactly* as hard as adding 3 and 5 using the standard arithmetic axioms and getting 7.\n\n**Undergrad:** Uh, why?\n\n**Scientist:** Seconded.\n\n**Bayesian:** Because letting $e$ denote the evidence, $H$ denote the hypothesis, $\\neg$ denote the negation of a proposition, $\\mathbb P(X)$ denote the probability of proposition $X$, and $\\mathbb P(X \\mid Y)$ denote the [https://arbital.com/p/-1rj](https://arbital.com/p/-1rj) of $X$ assuming $Y$ to be true, it is a theorem of probability that $$\\mathbb P(H) = \\left(\\mathbb P(H \\mid e) \\cdot \\mathbb P(e)\\right) + \\left(\\mathbb P(H \\mid \\neg e) \\cdot \\mathbb P(\\neg e)\\right).$$ Therefore likelihood functions can *never* be p-hacked by *any possible* clever setup without you outright lying, because you can't have any possible procedure that a Bayesian knows in advance will make them update in a predictable net direction. For every update that we expect to be produced by a piece of evidence $e,$ there's an equal and opposite update that we expect to probably occur from seeing $\\neg e.$\n\n**Undergrad:** What?\n\n**Scientist:** Seconded.\n\n**Bayesian:** Look... let me try to zoom out a bit, and yes, look at the ongoing replication crisis. The Scientist proclaimed suspicion of grand new sweeping ideals. Okay, but the shift to likelihood functions is the kind of thing that *ought* to be able to solve a lot of problems at once. Let's say... I'm trying to think of a good analogy here. Let's say there's a corporation which is having a big crisis because their accountants are using floating-point numbers, only there's three different parts of the firm using three different representations of floating-point numbers to do numerically unstable calculations. Somebody starts with 1.0 and adds 0.0001 a thousand times and then subtracts 0.1 and gets 0.999999999999989. 
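A minimal sketch of the accumulation error just described, alongside the exact rational arithmetic the analogy is building toward (the exact printed digits for the float case depend on the machine's floating-point format):

```python
from fractions import Fraction

# Binary floating point: repeatedly adding 0.0001 accumulates rounding error.
x = 1.0
for _ in range(1000):
    x += 0.0001
x -= 0.1
print(x)   # very close to, but (on typical IEEE-754 hardware) not exactly, 1.0

# The same bookkeeping in exact rational arithmetic: no drift, no order-dependence.
y = Fraction(1)
for _ in range(1000):
    y += Fraction(1, 10000)
y -= Fraction(1, 10)
print(y)   # exactly 1
```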
Or you can go to the other side of the building and use a different floating-point representation and get a different result. And nobody has any conception that there's anything wrong with this. Suppose there are BIG errors in the floating-point numbers: they're using the floating-point-number equivalent of crude ideograms and Roman numerals, so you can get big pragmatic differences depending on what representation you use. And naturally, people 'division-hack' to get whatever financial results they want. So all the spreadsheets are failing to replicate, and people are starting to worry the 'cognitive priming' subdivision has actually been bankrupt for 20 years. And then one day you come in and you say, \"Hey. Everyone. Suppose that instead of these competing floating-point representations, we use my new representation instead. It can't be fooled the same way, which will solve a surprising number of your problems.\"\n\n*(The **Bayesian** now imitates the **Scientist's** voice:)* \"I'm suspicious,\" says the Senior Auditor. \"I suspect you of idealism. In my experience, people need to use different floating-point representations for different financial problems, and it's good to have a lot of different numerical representations of fractions in your toolbox.\"\n\n**Bayesian:** \"Well,\" I reply, \"it may sound idealistic, but in point of fact, this thing I'm about to show you is *the* representation of fractions, in which you *cannot* get different results depending on which way you add things or what order you do the operations in. It might be slightly more computationally expensive, but it is now no longer 1920 like when you first adopted the old system, and seriously, you can afford the computing power in a very large fraction of cases where you're only working with 30,000,000 bank accounts or some trivial number like that. Yes, if you want to do something like take square roots, it gets a bit more complicated, but very few of you are actually taking the square root of bank account balances. For the vast majority of things you are trying to do on a day-to-day basis, this system is unhackable without actually misreporting the numbers.\" And then I show them how to represent arbitrary-magnitude finite integers precisely, and how to represent a rational number as the ratio of two integers. What we would, nowadays, consider to be a direct, precise, computational representation of *the* system of rational numbers. The one unique axiomatized mathematical system of rational numbers, to which floating-point numbers are a mere approximation. And if you're just working with 30,000,000 bank account balances and your crude approximate floating-point numbers are *in practice* blowing up and failing to replicate and being exploited by people to get whatever results they want, and it is no longer 1920 and you can afford real computers now, it is an obvious step to have all the accountants switch to using *the* rational numbers. Just as Bayesian updates are *the* rational updates, in the unique mathematical axiomatized system of probabilities. And that's why you can't p-hack them.\n\n**Scientist:** That is a rather... audacious claim. And I confess, even if everything you said about the math were true, I would still be skeptical of the pragmatics. The current system of scientific statistics is something that's grown up over time and matured. Has this bright Bayesian way actually been tried?\n\n**Bayesian:** It hasn't been tried very much in science. 
In machine learning, where, uh, not to put too fine a point on it, we can actually see where the models are breaking because our AI doesn't work, it's been ten years since I've read a paper that tries to go at things from a frequentist angle and I can't *ever* recall seeing an AI algorithm calculate the p-value of anything. If you're doing anything principled at all from a probability-theoretic stance, it's probably Bayesian, and pretty much never frequentist. If you're classifying data using n-hot encodings, your loss function is the cross-entropy, not... I'm not even sure *what* the equivalent of trying to use 1920s-style p-values in AI would be like. I would frankly attribute this to people in machine learning having to use statistical tools that visibly succeed or fail; rather than needing to get published by going through a particular traditional ritual of p-value reporting, and failure to replicate not being all that bad for your career.\n\n**Scientist:** So you're actually more of a computer science guy than an experimentalist yourself. Why does this not surprise me? It's not impossible that some better statistical system than p-values could exist, but I'd advise you to respect the wisdom of experience. The fact that we know what p-hacking is, and are currently fighting it, is because we've had time to see where the edges of the system have problems, and we're figuring out how to fight those problems. This shiny new system will also have problems; you just have no idea what they'll be. Perhaps they'll be worse.\n\n**Bayesian:** It's not impossible that the accountants would figure out new shenanigans to pull with rational numbers, especially if they were doing some things computationally intensive enough that they could no longer afford to use *the* rational numbers and had to use some approximation instead. But I stand by my statement that if your financial spreadsheets are *right now* blowing up in a giant replication crisis in ways that seem clearly linked to using p-values, and the p-values are, frankly, bloody ad-hoc inconsistent nonsense, an obvious first step is to *try* using the rational updates instead. Although, it's possible we don't disagree too much in practice. I'd also pragmatically favor trying to roll things out one step at a time, like, maybe just switch over the psychological sciences and see how that goes.\n\n**Scientist:** How would you persuade them to do that?\n\n**Bayesian:** I have no goddamn idea. Honestly, I'm not expecting anyone to actually fix anything. People will just go on using p-values until the end of the world, probably. It's just one more Nice Thing We Can't Have. But there's a *chance* the idea will catch on. I was pleasantly surprised when open access caught on as quickly as it did. I was pleasantly surprised when people, like, actually noticed the replication crisis and it became a big issue that people cared about. Maybe I'll be pleasantly surprised again and people will actually take up the crusade to bury the p-value at a crossroads at midnight and put a stake through its heart. 
If so, I'll have done my part by making an understanding of [Bayes' rule](https://arbital.com/p/1lz) and likelihoods [more accessible](https://arbital.com/p/1zq) to everyone.\n\n**Scientist:** Or it could turn out that people don't *like* likelihoods, and that part of the wisdom of experience is the lesson that p-values are a kind of thing that experimentalists actually find useful and easy to use.\n\n**Bayesian:** If the experience of learning traditional statistics traumatized them so heavily that the thought of needing to learn a new system sends them screaming into the night, then yes, change might need to be imposed from outside. I'm hoping though that the Undergrad will read a [short, cheerful introduction to Bayesian probability](https://arbital.com/p/1zq), compare this with his ominous heavy traditional statistics textbook, and come back going \"Please let me use likelihoods please let me use likelihoods oh god please let me use likelihoods.\"\n\n**Undergrad:** I guess I'll look into it and see?\n\n**Bayesian:** Weigh your decision carefully, Undergrad. Some changes in science depend upon students growing up familiar with multiple ideas and choosing the right one. Max Planck said so in a famous aphorism, so it must be true. Ergo, the entire ability of science to distinguish good and bad ideas within that class must rest upon the cognitive capacities of undergrads.\n\n**Scientist:** Oh, now that is just--\n\n**Moderator:** And we're out of time. Thanks for joining us, everyone!", "date_published": "2018-07-01T23:10:21Z", "authors": ["Eric Bruylant", "Grigoriy Beziuk", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Drake Thomas", "Adnll", "Eliezer Yudkowsky"], "summaries": ["Or, __Why Switching From Reporting p-values to Reporting Likelihood Functions Might Help Fix the Replication Crisis: A personal view by Eliezer Yudkowsky.__\n\nShort version: Report [likelihoods](https://arbital.com/p/56t), not p-values."], "tags": ["Bayesian reasoning", "Work in progress", "Subjective probability", "Opinion page"], "alias": "4xx"} {"id": "cee06a9a82c4b932137979865bc9acc4", "title": "C-Class", "url": "https://arbital.com/p/c_class_meta_tag", "source": "arbital", "source_type": "text", "text": "C-Class pages have substantial content and will often be useful to readers, but they may have significant prose and style problems, may require cleanup, may not cover all significant areas of the topic, or may not explain the concept in a way the target audience will reliably understand.\n\nContrast:\n\n- [Start-Class](https://arbital.com/p/3rk) pages, which are sufficiently unfinished that they should be viewed by editors rather than readers.\n- [B-Class](https://arbital.com/p/4yd) pages, which are reasonably comprehensive, have no major stylistic issues, and explain the concept in a way which seems likely to be often enlightening to the target audience.\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/5xq](https://arbital.com/p/5xq)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-08-27T15:22:59Z", "authors": ["Alexei Andreev", "Eric 
Rogstad", "Eric Bruylant", "Mark Chimes", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Meta tags which request an edit to the page"], "alias": "4y7"} {"id": "07d66fff926f7e12b831022150540c5b", "title": "B-Class", "url": "https://arbital.com/p/b_class_meta_tag", "source": "arbital", "source_type": "text", "text": "A B-Class page is fairly comprehensive, with good prose and structure; but needs further work to reach the A-Class requirement for engaging, professional, intuitive explanations, checked by asking for detailed feedback from both the target audience and [reviewers](https://arbital.com/p/5ft).\n\nB-Class pages are:\n\n1. Comprehensive, covering all major areas of the topic without significant omissions or inaccuracies.\n2. Fairly well-written and in line with the [https://arbital.com/p/-16t](https://arbital.com/p/-16t).\n3. Reasonably well-organized, with similar concepts grouped together and reasonable section ordering.\n4. Aimed at an audience, with a level of technicality, jargon, and notation appropriate for them.\n5. Good for people wishing to learn the topic (well-explained, using examples, links, and visual aids as appropriate).\n6. Well-[summarized](https://arbital.com/p/1kl), in ways appropriate for the most likely audiences.\n\nThis tag should only be added by [reviewers](https://arbital.com/p/5ft). To request a review ask on [https://arbital.com/p/4ph](https://arbital.com/p/4ph) or [propose a page for B-Class](https://arbital.com/p/5g3).\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/5xq](https://arbital.com/p/5xq)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-08-19T21:19:53Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Start"], "alias": "4yd"} {"id": "787063c0eef7ca4724177cdae77ef220", "title": "Value", "url": "https://arbital.com/p/value_alignment_value", "source": "arbital", "source_type": "text", "text": "### Introduction\n\nIn the context of [value alignment](https://arbital.com/p/2v) as a subject, the word 'value' is a speaker-dependent variable that indicates our ultimate goal - the property or meta-property that the speaker wants or 'should want' to see in the final outcome of Earth-originating intelligent life. E.g: [human flourishing](https://arbital.com/p/), [fun](https://arbital.com/p/), [coherent extrapolated volition](https://arbital.com/p/3c5), [normativity](https://arbital.com/p/).\n\nDifferent viewpoints are still being debated on this topic; people [sometimes change their minds about their views](https://arbital.com/p/). We don't yet have full knowledge of which views are 'reasonable' in the sense that people with good cognitive skills might retain them [even in the limit of ongoing discussion](https://arbital.com/p/313). Some subtypes of potentially internally coherent views may not be sufficiently [interpersonalizable](https://arbital.com/p/) for even very small AI projects to cooperate on them; if e.g. 
Alice wants to own the whole world and will go on believing that in the limit of continuing contemplation, this is not a desideratum on which Alice, Bob, and Carol can all cooperate. Thus, using 'value' as a potentially speaker-dependent variable isn't meant to imply that everyone has their own 'value' and that no further debate or cooperation is possible; people can and do talk each other out of positions which are then regarded as having been mistaken, and completely incommunicable stances seem unlikely to be reified even into a very small AI project. But since this debate is ongoing, there is not yet any one definition of 'value' that can be regarded as settled.\n\nNonetheless, on many of the current views being advocated, it seems like very similar technical problems of value alignment seem to arise in many of them. We would need to figure out how to [identify](https://arbital.com/p/6c) the objects of value to the AI, robustly assure that the AI's preferences are [stable](https://arbital.com/p/1fx) as the AI self-modifies, or create [corrigible](https://arbital.com/p/45) ways of recovering from errors in the way we tried to identify and specify the objects of value.\n\nTo centralize the very similar discussions of these technical problems while the outer debate about reasonable end goals is ongoing, the word 'value' acts as a metasyntactic placeholder for different views about the target of value alignment.\n\nSimilarly, in the larger [value achievement dilemma](https://arbital.com/p/2z), the question of what the end goals should be, and policy difficulties of getting 'good' goals to be adopted in name by the builders or creators of AI, are factored out as the [value selection problem](https://arbital.com/p/value_selection). The output of this process is taken to be an input into the value loading problem, and 'value' is a name referring to this output.\n\n'Value' is *not* assumed to be what the AI is given as its utility function or [preference framework](https://arbital.com/p/5f). On many views implying that [value is complex](https://arbital.com/p/5l) or otherwise difficult to convey to an AI, the AI may be, e.g., a [Genie](https://arbital.com/p/6w) where some stress is taken off the proposition that the AI exactly understands value and put onto human ability to use the Genie well.\n\nConsider a Genie with an explicit preference framework targeted on a [Do What I Know I Mean system](https://arbital.com/p/) for making [checked wishes](https://arbital.com/p/). The word 'value' in any discussion thereof should still only be used to refer to whatever the AI creators are targeting for real-world outcomes. We would say the 'value alignment problem' had been successfully solved to the extent that running the Genie produced high-value outcomes in the sense of the humans' viewpoint on 'value', not to the extent that the outcome matched the Genie's preference framework for how to follow orders.\n\n### Specific views on value\n\nObviously, a listing like this will only summarize long debates. 
But that summary at least lets us point to some examples of views that have been advocated, and not indefinitely defer the question of what 'value' could possibly refer to.\n\nAgain, keep in mind that by technical definition, 'value' is what we are using or should use to rate the ultimate real-world consequences of running the AI, *not* the explicit goals we are giving the AI.\n\nSome of the major views that have been advocated by more than one person are as follows:\n\n- **Reflective equilibrium.** We can talk about 'what I *should* want' as a concept distinct from 'what I want right now' by construing some limit of how our present desires would directionally change given more factual knowledge, time to consider more knowledge, better self-awareness, and better self-control. Modeling this process is **extrapolation**, a reserved term to mean this process in the context of discussing preferences. Value would consist in, e.g., whatever properties a supermajority of humans would agree, in the limit of reflective equilibrium, are desirable. See also [coherent extrapolated volition](https://arbital.com/p/).\n- **Standard desires.** An object-level view that identifies value with qualities that we currently find very desirable, enjoyable, fun, and preferable, such as [Frankena's list of desiderata](https://arbital.com/p/) (including truth, happiness, aesthetics, love, challenge and achievement, etc.) On the closely related view of **Fun Theory**, such desires may be further extrapolated, without changing their essential character, into forms suitable for transhuman minds. Advocates may agree that these object-level desires will be subject to unknown normative corrections by reflective-equilibrium-type considerations, but still believe that some form of Fun or standardly desirable outcome is a likely result. Therefore (on this view) it is reasonable to speak of value as probably mostly consisting in turning most of the reachable universe into superintelligent life enjoying itself, creating transhuman forms of art, etcetera.\n- **[Immediate goods](https://arbital.com/p/ImmediateGoods).** E.g., \"Cure cancer\" or \"Don't transform the world into paperclips.\" Such replies arguably have problems as ultimate criteria of value from a human standpoint (see linked discussion), but for obvious reasons, lists of immediate goods are a common early thought when first considering the subject.\n- **Deflationary moral error theory.** There is no good way to construe a normative concept apart from what particular people want. AI programmers are just doing what they want, and confused talk of 'fairness' or 'rightness' cannot be rescued. The speaker would nonetheless personally prefer not to be turned into paperclips. (This mostly ends up at an 'immediate goods' theory in practice, plus some beliefs relevant to the [value selection](https://arbital.com/p/value_selection) debate.)\n- **Simple purpose.** Value can easily be identified with X, for some X. X is the main thing we should be concerned about passing on to AIs. 
Seemingly desirable things besides X are either (a) improper to care about, (b) relatively unimportant, or (c) instrumentally implied by pursuing X, qua X.\n\nThe following versions of desiderata for AI outcomes would tend to imply that the value alignment / value loading problem is an entirely wrong way of looking at the issue, which might make it disingenuous to claim that 'value' in 'value alignment' can cover them as a metasyntactic variable as well:\n\n- **Moral internalist value.** The normative is inherently compelling to all, or almost all cognitively powerful agents. Whatever is not thus compelling cannot be normative or a proper object of human desire.\n- **AI rights.** The primary thing is to ensure that the AI's natural and intrinsic desires are respected. The ideal is to end up in a diverse civilization that respects the rights of all sentient beings, including AIs. (Generally linked are the views that no special selection of AI design is required to achieve this, or that special selection of AI design to shape particular motivations would itself violate AI rights.)\n\n## Modularity of 'value'\n\n### Alignable values\n\nMany issues in value alignment seem to generalize very well across the Reflective Equilibrium, Fun Theory, Intuitive Desiderata, and Deflationary Error Theory viewpoints. In all cases we would have to consider stability of self-modification, the [Edge Instantiation](https://arbital.com/p/2w) problem in [value identification](https://arbital.com/p/6c), and most of the rest of 'standard' value alignment theory. This seemingly good generalization of the resulting technical problems across such wide-ranging viewpoints, and especially that it (arguably) covers the case of intuitive desiderata, is what justifies treating 'value' as a metasyntactic variable in 'value loading problem'.\n\nA neutral term for referring to all the values in this class might be 'alignable values'.\n\n### Simple purpose\n\nIn the [simple purpose](https://arbital.com/p/) case, the key difference from an Immediate Goods scenario is that the desideratum is usually advocated to be simple enough to negate [Complexity of Value](https://arbital.com/p/5l) and make [value identification](https://arbital.com/p/6c) easy.\n\nE.g., Juergen Schmidhuber stated at the 20XX Singularity Summit that he thought the only proper and normative goal of any agent was to increase compression of sensory information . Conditioned on this being the sum of all normativity, 'value' is algorithmically simple. Then the problems of [Edge Instantiation](https://arbital.com/p/2w), [Unforeseen Maximums](https://arbital.com/p/47), and Nearest Unblocked Neighbor are all moot. (Except perhaps as there is an Ontology Identification problem for defining exactly what constitutes 'sensory information' for a [self-modifying agent](https://arbital.com/p/).)\n\nEven in the [simple purpose](https://arbital.com/p/) case, the [value loading problem](https://arbital.com/p/) would still exist (it would still be necessary to make an AI that cared about the simple purpose rather than paperclips) along with associated problems of [reflective stability](https://arbital.com/p/71) (it would be necessary to make an AI that went on caring about X through self-modification). Nonetheless, the overall problem difficulty and immediate technical priorities would be different enough that the Simple Purpose case seems importantly distinct from e.g. 
Fun Theory on a policy level.\n\n### Moral internalism\n\nSome viewpoints on 'value' deliberately reject [Orthogonality](https://arbital.com/p/1y). Strong versions of the [moral internalist position in metaethics](https://arbital.com/p/) claim as an empirical prediction that every sufficiently powerful cognitive agent will come to pursue the same end, which end is to be identified with normativity, and is the only proper object of human desire. If true, this would imply that the entire value alignment problem is moot for advanced agents.\n\nMany people who advocate 'simple purposes' also claim these purposes are universally compelling. In a policy sense, this seems functionally similar to the Moral Internalist case regardless of the simplicity or complexity of the universally compelling value. Hence an alleged simple universally compelling purpose is categorized for these purposes as Moral Internalist rather than Simple Purpose.\n\nThe special case of a Simple Purpose claimed to be universally [instrumentally convergent](https://arbital.com/p/10g) also seems functionally identical to Moral Internalism from a policy standpoint.)\n\n### AI Rights\n\nSomeone might believe as a proposition of fact that all (accessible) AI designs would have 'innate' desires, believe as a proposition of fact that no AI would gain enough advantage to wipe out humanity or prevent the creation of other AIs, and assert as a matter of morality that a good outcome consists of everyone being free to pursue their own value and trade. In this case the value alignment problem is implied to be an entirely wrong way to look at the problem, with all associated technical issues moot. Thus, it again might be disingenuous to have 'value' as a metasyntactic variable try to cover this case.", "date_published": "2016-06-01T17:56:26Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Different people advocate different views on what we should want for the outcome of an '[aligned](https://arbital.com/p/value)' AI (desiderata like human flourishing, or a [fun-theoretic eudaimonia](https://arbital.com/p/), or [coherent extrapolated volition](https://arbital.com/p/3c5), or an AI that mostly leaves us alone but protects us from other AIs). These differences might not be [irreconcilable](https://arbital.com/p/); people are sometimes persuaded to change their views of what we should want. Either way, there's (arguably) a tremendous overlap in the technical issues for aligning an AI with any of these goals. So in the technical discussion, 'value' is really a metasyntactic variable that stands in for the speaker's current view, or for what an AI project might later adopt as a reasonable target after further discussion."], "tags": ["B-Class", "Definition", "Glossary (Value Alignment Theory)"], "alias": "55"} {"id": "a68c949067c72753fbff8e990b3a20c0", "title": "Definition", "url": "https://arbital.com/p/definition_meta_tag", "source": "arbital", "source_type": "text", "text": "When substantial controversy exists about how to define a term, good epistemic policy is for both sides to adopt new, more specific terms whose definitions are not further disputed. To whatever extent possible, definitions should not be phrased in a way that tries to pre-emptively settle an argument or 'bake in' one answer to a factual or policy disagreement. 
See [A Human's Guide to Words](http://wiki.lesswrong.com/wiki/A_Human%27s_Guide_to_Words).", "date_published": "2015-10-21T14:01:49Z", "authors": ["Kevin Clancy", "Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Jaime Sevilla Molina", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "56"} {"id": "e03ad5a97dc13592866eae708ba95d41", "title": "Goodness estimate biaser", "url": "https://arbital.com/p/goodness_estimate_bias", "source": "arbital", "source_type": "text", "text": "A \"[goodness](https://arbital.com/p/55) estimate [biaser](https://arbital.com/p/statistical_bias)\" is a system setup or phenomenon that seems [foreseeably](https://arbital.com/p/6r) likely to cause the actual goodness of some AI plan to be systematically lower than the AI's estimate of that plan's goodness. We want the AI's estimate to be [unbiased](https://arbital.com/p/statistically_unbiased).\n\n## Ordinary examples\n\nSubtle and unsubtle [estimate-biasing](https://arbital.com/p/statistical_bias) issues in machine learning are well-known and appear far short of [advanced agency](https://arbital.com/p/2c):\n\n● A machine learning algorithm's performance on the training data is not an unbiased estimate of its performance on the test data. Some of what the algorithm seems to learn may be particular to noise in the training data. This fitted noise will not be fitted within the test data. So test performance is not just unequal to, but *systematically lower than,* training performance; if we were treating the training performance as an estimate of test performance, it would not be an [unbiased](https://arbital.com/p/statistically_unbiased) estimate.\n\n● The [Winner's Curse](https://en.wikipedia.org/wiki/Winner%27s_curse) from auction theory observes that if bidders have noise in their unbiased estimates of the auctioned item's value, then the *highest* bidder, who receives the item, is more likely to have upward noise in their individually unbiased estimate, [conditional](https://arbital.com/p/1ly) on their having won. (E.g., three bidders with Gaussian noise in their value estimates submit bids on an item whose true value to them is 1.0; the winning bidder is likely to have valued the item at more than 1.0.)\n\nThe analogous [Optimizer's Curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf) observes that if we make locally unbiased but noisy estimates of the [subjective expected utility](https://arbital.com/p/subjective_expected_utility) of several plans, then selecting the plan with 'highest expected utility' is likely to select an estimate with upward noise. Barring compensatory adjustments, this means that actual utility will be systematically lower than expected utility, even if all expected utility estimates are individually unbiased. 
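A minimal Monte Carlo sketch of the Optimizer's Curse as just described; the parameters (ten plans of equal true utility, Gaussian noise) are purely illustrative:

```python
import random

def chosen_plan_gap(n_plans=10, true_utility=1.0, noise_sd=0.3):
    """Estimate-minus-actual utility for the plan with the highest noisy estimate.
    Each individual estimate is unbiased; the selected one is not."""
    estimates = [true_utility + random.gauss(0, noise_sd) for _ in range(n_plans)]
    return max(estimates) - true_utility

rounds = 100_000
average_gap = sum(chosen_plan_gap() for _ in range(rounds)) / rounds
print(average_gap)   # positive: the chosen plan's estimated utility exceeds its actual utility
```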
Worse, if we have 10 plans whose expected utility can be unbiasedly estimated with low noise, plus 10 plans whose expected utility can be unbiasedly estimated with high noise, then selecting the plan with apparently highest expected utility favors the noisiest estimates!\n\n## In AI alignment\n\nWe can see many of the alleged [foreseeable difficulties](https://arbital.com/p/6r) in [AI alignment](https://arbital.com/p/2v) as involving similar processes that allegedly produce systematic downward biases in what we see as actual [goodness](https://arbital.com/p/55), compared to an AI's estimate of goodness:\n\n● [https://arbital.com/p/2w](https://arbital.com/p/2w) suggests that if we take an imperfectly or incompletely learned value function, then looking for the *maximum* or *extreme* of that value function is much more likely than usual to magnify what we see as the gaps or imperfections (because of [fragility of value](https://arbital.com/p/fragile_value), plus the Optimizer's Curse); or destroy whatever aspects of value the AI didn't learn about (because optimizing a subset of properties is liable to set all other properties to extreme values).\n\nWe can see this as implying both \"The AI's apparent goodness in non-extreme cases is an upward-biased estimate of its goodness in extreme cases\" and \"If the AI learns its goodness estimator less than [perfectly](https://arbital.com/p/41k), the AI's estimates of the goodness of its best plans will systematically overestimate what we see as the actual goodness.\"\n\n● [https://arbital.com/p/42](https://arbital.com/p/42) generally, and especially over [instrumentally convergent incorrigibility](https://arbital.com/p/instrumental_incorrigibility), suggests that if there are naturally-arising AI behaviors we see as bad (e.g. routing around [shutdown](https://arbital.com/p/2xd)), there may emerge a pseudo-adversarial selection of strategies that route around our attempted [patches](https://arbital.com/p/48) to those problems. E.g., the AI constructs an environmental subagent to continue carrying on its goals, while cheerfully obeying 'the letter of the law' by allowing its current hardware to be shut down. This pseudo-adversarial selection (though the AI does not have an explicit goal of thwarting us or selecting low-goodness strategies per se) again implies that actual [goodness](https://arbital.com/p/55) is likely to be systematically lower than the AI's estimate of what it's learned as 'goodness'; again to an [increasing degree](https://arbital.com/p/6q) as the AI becomes [smarter](https://arbital.com/p/9f) and [searches a wider policy space](https://arbital.com/p/47).\n\n[Mild optimization](https://arbital.com/p/2r8) and [conservative strategies](https://arbital.com/p/2qp) can be seen as proposals to 'regularize' powerful optimization in a way that *decreases* the degree to which goodness in training is a biased (over)estimate of goodness in execution.", "date_published": "2016-07-08T15:53:19Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "57b"} {"id": "9b5e086144980338193fc2f16714e64c", "title": "Linguistic conventions in value alignment", "url": "https://arbital.com/p/5b", "source": "arbital", "source_type": "text", "text": "A central page to list the language conventions in [value alignment theory](https://arbital.com/p/2v). 
See also [https://arbital.com/p/9m](https://arbital.com/p/9m).\n\n# Language dealing with wants, desires, utility, preference, and value.\n\nWe need a language rich enough to distinguish at least the following as different [intensional concepts](https://arbital.com/p/10b), even if their [extensions](https://arbital.com/p/10b) end up being identical:\n\n- A. What the programmers explicitly, verbally said they wanted to achieve by building the AI.\n- B. What the programmers wordlessly, intuitively meant; the actual criterion they would use for rating the desirability of outcomes, if they could actually look at those outcomes and assign ratings.\n- C. What programmers *should* want from the AI (from within some view on normativity, shouldness, or rightness).\n- D. The AI's explicitly represented cognitive preferences, if any.\n- E. The property that running the AI tends to produce in the world; the property that the AI behaves in such fashion as to bring about.\n\nSo far, the following reserved terms have been advocated for the subject of value alignment:\n\n- [**Value**](https://arbital.com/p/55) and **valuable** to refer to C. On views which identify C with B, it thereby refers to B.\n- **Optimization target** to mean only E. We can also say, e.g., that natural selection has an 'optimization target' of inclusive genetic fitness. 'Optimization target' is meant to be an exceedingly general term that can talk about irrational agents and nonagents.\n- [**Utility**](https://arbital.com/p/109) to mean a Von Neumann-Morgenstern utility function, reserved to talk about agents that behave like some bounded analogue of expected utility optimizers. Utility is explicitly not assumed to be normative. E.g., if speaking of a paperclip maximizer, we will say that an outcome has higher utility iff it contains more paperclips. Thus 'utility' is reserved to refer to D or E.\n- **Desire** to mean anthropomorphic human-style desires, referring to A or B rather than C, D, or E. ('Wants' are general over humans and AIs.)\n- **Preference** and **prefer** to be general terms that can be used for both humans and AIs. 'Preference' refers to B or D rather than A, C, or E. It means 'what the agent explicitly and cognitively wants' rather than 'what the agent should want' or 'what the agent mistakenly thinks it wants' or 'what the agent's behavior tends to optimize'. Someone can be said to prefer their extrapolated volition to be implemented rather than their current desires, but if so they must explicitly, cognitively prefer that, or accept it in an explicit choice between options.\n- [**Preference framework**](https://arbital.com/p/5f) to be an even more general term that can refer to e.g. meta-utility functions that change based on observations, or to meta-preferences about how one's own preferences should be extrapolated. A 'preference framework' should refer to constructs more coherent than the human mass of desires and ad-hoc reflections, but not as strictly restricted as a VNM utility function. 
Stuart Armstrong's [utility indifference](https://arbital.com/p/) framework for [value learning](https://arbital.com/p/) is an example of a preference framework that is not a vanilla/ordinary utility function.\n- **Goal** remains a generic, unreserved term that could refer to any of A-E, and also particular things an agent wants to get done for [instrumental](https://arbital.com/p/) reasons.\n- **[Intended goal](https://arbital.com/p/6h)** to refer to B only.\n- **Want** remains a generic, unreserved term that could refer to humans or other agents, or terminal or instrumental goals.\n\n'Terminal' and 'instrumental' have their standard contrasting meanings.", "date_published": "2015-12-17T21:42:06Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition"], "alias": "5b"} {"id": "03d329202f28ab3c6d6d5afd858015d1", "title": "Limited AGI", "url": "https://arbital.com/p/limited_agi", "source": "arbital", "source_type": "text", "text": "One of the reasons why a [Task AGI](https://arbital.com/p/6w) can potentially be safer than an [Autonomous AGI](https://arbital.com/p/1g3), is that since Task AGIs only need to carry out activities of limited scope, they [may only need limited material and cognitive powers](https://arbital.com/p/7tf) to carry out those tasks. The [nonadversarial principle](https://arbital.com/p/7g0) still applies, but takes the form of \"[don't run the search](https://arbital.com/p/7fx)\" rather than \"make sure the search returns the correct answer\".\n\n# Obstacles\n\n• Increasing your material and cognitive efficacy is [instrumentally convergent](https://arbital.com/p/10g) in all sorts of places and would presumably need to be [averted](https://arbital.com/p/2vk) all over the place.\n\n• Good limitation proposals are [not as easy as they look](https://arbital.com/p/deceptive_ease) because [particular domain capabilities can often be derived from more general architectures](https://arbital.com/p/7vh). An Artificial *General* Intelligence doesn't have a handcrafted 'thinking about cars' module and a handcrafted 'thinking about planes' module, so you [can't just handcraft the two modules at different levels of ability](https://arbital.com/p/7vk).\n\nE.g. many have suggested that 'drive' or 'emotion' is something that can be selectively removed from AGIs to 'limit' their ambitions; [presumably](https://arbital.com/p/43h) these people are using a mental model that is not the standard [expected utility agent](https://arbital.com/p/18r) model. 
To know which kind of limitations are easy, you need a sufficiently good background picture of the AGI's subprocesses that you understand which kind of system capabilities will naturally carve at the joints.\n\n# Related ideas\n\nThe research avenue of [Mild optimization](https://arbital.com/p/2r8) can be viewed as pursuing a kind of very general Limitation.\n\n[Behaviorism](https://arbital.com/p/102) asks to Limit the AGI's ability to model other minds in non-whitelisted detail.\n\n[Taskishness](https://arbital.com/p/4mn) can be seen as an Alignment/Limitation hybrid in the sense that it asks for the AI to only *want* or *try* to do a bounded amount at every level of internal organization.\n\n[https://arbital.com/p/2pf](https://arbital.com/p/2pf) can be seen as an Alignment/Limitation hybrid in the sense that a [successful impact penalty](https://arbital.com/p/4l) would make the AI not *want* to implement larger-scale plans.\n\nLimitation may be viewed as yet another subproblem of the [https://arbital.com/p/3ps](https://arbital.com/p/3ps), since it seems like a type of precaution that a generic agent would desire to construct into a generic imperfectly-aligned subagent.\n\nLimitation can be seen as motivated by both the [https://arbital.com/p/7g0](https://arbital.com/p/7g0) and the [https://arbital.com/p/7tf](https://arbital.com/p/7tf).", "date_published": "2017-02-22T00:12:59Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "5b3"} {"id": "95f5c63c9dc50c2bde798662694ef2a6", "title": "Ontology identification problem", "url": "https://arbital.com/p/ontology_identification", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Introduction: The ontology identification problem for unreflective diamond maximizers\n\nA simplified but still very difficult open problem in [https://arbital.com/p/2v](https://arbital.com/p/2v) is to state an unbounded program implementing a [diamond maximizer](https://arbital.com/p/5g) that will turn as much of the physical universe into diamond as possible. The goal of \"making diamonds\" was chosen to have a crisp-seeming definition for our universe (the amount of diamond is the number of carbon atoms covalently bound to four other carbon atoms). If we can crisply define exactly what a 'diamond' is, we can avert issues of trying to convey [complex values](https://arbital.com/p/5l) into the agent. (The [unreflective diamond maximizer](https://arbital.com/p/5g) putatively has [unlimited computing power](https://arbital.com/p/), runs on a [Cartesian processor](https://arbital.com/p/), and confronts no other agents [similar to itself](https://arbital.com/p/). This averts many other problems of [reflectivity](https://arbital.com/p/71), [decision theory](https://arbital.com/p/18s) and [value alignment](https://arbital.com/p/2v).)\n\nEven with a seemingly crisp goal of \"make diamonds\", we might still run into two problems if we tried to write a [hand-coded object-level utility function](https://arbital.com/p/5t) that [identified](https://arbital.com/p/) the amount of diamond material:\n\n- Unknown substrate: We might not know the true, fundamental ontology of our own universe, hence not know what stuff diamonds are really made of. (What exactly is a carbon atom? If you say it's a nucleus with six protons, what's a proton? 
If you define a proton as being made of quarks, what if there are unknown other particles underlying quarks?)\n - It seems intuitively like there ought to be some way to identify carbon atoms to an AI in some way that doesn't depend on talking about quarks. Doing this is part of the ontology identification problem.\n- Unknown representation: We might crisply know what diamonds are in our universe, but not know how to find diamonds inside the agent's model of the environment.\n - Again, it seems intuitively like it ought to be possible to identify diamonds in the environment, even if we don't know details of the agent's exact internal representation. Doing this is part of the ontology identification problem.\n\nTo introduce the general issues in ontology identification, we'll try to walk through the [anticipated difficulties](https://arbital.com/p/) of constructing an unbounded agent that would maximize diamonds, by trying specific methods and suggesting [anticipated difficulties](https://arbital.com/p/) of those methods.\n\n## Difficulty of making AIXI-tl maximize diamond\n\nThe classic unbounded agent - an agent using far more computing power than the size of its environment - is [https://arbital.com/p/11v](https://arbital.com/p/11v). Roughly speaking, AIXI considers all computable hypotheses for how its environment might be turning AIXI's motor outputs into AIXI's sensory inputs and rewards. We can think of AIXI's hypothesis space as including all Turing machines that, sequentially given AIXI's motor choices as inputs, will output a sequence of predicted sense items and rewards for AIXI. The finite variant AIXI-tl has a hypothesis space that includes all Turing machines that can be specified using fewer than $l$ bits and run in less than time $t$.\n\nOne way of seeing the difficulty of ontology identification is considering why it would be difficult to make an AIXI-tl variant that maximized 'diamonds' instead of 'reward inputs'.\n\nThe central difficulty here is that there's no way to find 'diamonds' inside the implicit representations of AIXI-tl's sequence-predicting Turing machines. Given an arbitrary Turing machine that is successfully predicting AIXI-tl's sense inputs, there is no general rule for how to go from the representation of that Turing machine to a statement about diamonds or carbon atoms. The highest-weighted Turing machines that have best predicted the sensory data so far, presumably contain *some* sort of representation of the environment, but we have no idea how to get 'the number of diamonds' out of it.\n\nIf AIXI has a webcam, then the final outputs of the Turing machine are predictions about the stream of bits produced by the webcam, going down the wire into AIXI. We can understand the meaning of that Turing machine's output predictions; those outputs are meant to match types with the webcam's input. But we have no notion of anything else that Turing machine is representing. 
Even if somewhere in the Turing machine happens to be an atomically detailed model of the world, we don't know what representation it uses, or what format it has, or how to look inside it for the number of diamonds that will exist after AIXI's next motor action.\n\nThis difficulty ultimately arises from AIXI being constructed around a [Cartesian](https://arbital.com/p/) paradigm of [sequence prediction](https://arbital.com/p/), with AIXI's sense inputs and motor outputs being treated as sequence elements, and the Turing machines in its hypothesis space having inputs and outputs matched to the sequence elements and otherwise being treated as black boxes. This means we can only get AIXI to maximize direct functions of its sensory input, not any facts about the outside environment.\n\n(We can't make AIXI maximize diamonds by making it want *pictures* of diamonds because then it will just, e.g., [build an environmental subagent that seizes control of AIXI's webcam and shows it pictures of diamonds](https://arbital.com/p/). If you ask AIXI to show itself sensory pictures of diamonds, you can get it to show its webcam lots of pictures of diamonds, but this is not the same thing as building an environmental diamond maximizer.)\n\n## Agent using classical atomic hypotheses\n\nAs an [unrealistic example](https://arbital.com/p/): Suppose someone was trying to define 'diamonds' to the AI's utility function, and suppose they knew about atomic physics but not nuclear physics. Suppose they build an AI which, during its development phase, learns about atomic physics from the programmers, and thus builds a world-model that is based on atomic physics.\n\nAgain for purposes of [unrealistic examples](https://arbital.com/p/), suppose that the AI's world-model is encoded in such fashion that when the AI imagines a molecular structure - represents a mental image of some molecules - carbon atoms are represented as a particular kind of basic element of the representation. Again, as an [unrealistic example](https://arbital.com/p/), imagine that there are [little LISP tokens](https://arbital.com/p/) representing environmental objects, and that the environmental-object-type of carbon-objects is encoded by the integer 6. Imagine also that each atom, inside this representation, is followed by a list of the other atoms to which it's covalently bound. Then when the AI is imagining a carbon atom participating in a diamond, inside the representation we would see an object of type 6, followed by a list containing exactly four other 6-objects.\n\nCan we fix this representation for all hypotheses, and then write a utility function for the AI that counts the number of type-6 objects that are bound to exactly four other type-6 objects? And if we did so, would the result actually be a diamond maximizer?\n\n### AIXI-atomic\n\nWe can imagine formulating a variant of AIXI-tl that, rather than all tl-bounded Turing machines, considers tl-bounded simulated atomic universes - that is, simulations of classical, pre-nuclear physics. Call this AIXI-atomic.\n\nA first difficulty is that universes composed only of classical atoms are not good explanations of our own universe, even in terms of surface phenomena; e.g., the [ultraviolet catastrophe](http://en.wikipedia.org/wiki/Ultraviolet_catastrophe). 
So let it be supposed that we have simulation rules for classical physics that replicate at least whatever phenomena the programmers have observed at [development time](https://arbital.com/p/), even if the rules have some seemingly ad-hoc elements (like there being no ultraviolet catastrophes).\n\nA second difficulty is that a simulated universe of classical atoms does not identify where in the universe the AIXI-atomic agent resides, or that AIXI-atomic's sense inputs don't have types commensurate with the types of atoms. We can elide this difficulty by imagining that AIXI-atomic simulates classical universes containing a single hypercomputer, and that AIXI-atomic knows a simple function from each simulated universe onto its own sensory data (e.g., it knows to look at the simulated universe, and translate simulated photons impinging on its webcam onto predicted webcam data in the received format). This elides most of the problem of [naturalized induction](https://arbital.com/p/), by fixing the ontology of all hypotheses and standardizing their hypothetical [bridging laws](https://arbital.com/p/).\n\nSo the analogous AIXI-atomic agent that maximizes diamond:\n\n- Considers only hypotheses that directly represent universes as huge systems of classical atoms, so that the function 'count atoms bound to four other carbon atoms' can be directly run over any possible future the agent considers.\n- Assigns probabilistic priors over these possible atomic representations of the universe.\n- Somehow [maps each atomic representation onto the agent's sensory experiences and motor actions](https://arbital.com/p/).\n- [Bayes-updates](https://arbital.com/p/Bayes-updates) its priors based on actual sensory experiences, the same as classical AIXI.\n- Can evaluate the 'expected diamondness on the next turn' of a single action by looking at all hypothetical universes where that action is performed, weighted by their current probability, and summing over the expectation of diamond-bound carbon atoms on their next clock tick.\n- Can evaluate the 'future expected diamondness' of an action, over some finite time horizon, by assuming that its future self will also Bayes-update and maximize expected diamondness over that time horizon.\n- On each turn, outputs the action with greatest expected diamondness over some finite time horizon.\n\nSuppose our own real universe was amended to otherwise be exactly the same, but contain a single [impermeable](https://arbital.com/p/) hypercomputer. Suppose we defined an agent like the one above, using simulations of 1900-era classical models of physics, and ran that agent on the hypercomputer. Should we expect the result to be an actual diamond maximizer - that most mass in the universe will be turned into carbon and arranged into diamonds?\n\n### Anticipated failure of AIXI-atomic in our own universe: trying to maximize diamond outside the simulation\n\nOur own universe isn't atomic, it's nuclear and quantum-mechanical. This means that AIXI-atomic does not contain any hypotheses in its hypothesis space that *directly represent* the universe. By 'directly represent', we mean that carbon atoms in AIXI-atomic's best representations do not correspond to carbon atoms in our own world.\n\nIntuitively, we would think it was [common sense](https://arbital.com/p/) for an agent that wanted diamonds to react to the experimental data identifying nuclear physics, by deciding that a carbon atom is 'really' a nucleus containing six protons, and atomic binding is 'really' covalent electron-sharing. 
We can imagine this agent [common-sensically](https://arbital.com/p/) updating its model of the universe to a nuclear model, and redefining the 'carbon atoms' that its old utility function counted to mean 'nuclei containing exactly six protons'. Then the new utility function could evaluate outcomes in the newly discovered nuclear-physics universe. We will call this the **utility rebinding problem**.\n\nWe don't yet have a crisp formula that seems like it would yield commonsense behavior for utility rebinding. In fact we don't yet have any candidate formulas for utility rebinding, period. Stating one is an open problem. See below.\n\nFor the 'classical atomic AIXI' agent we defined above, what happens instead is that the 'simplest atomic hypothesis that fits the facts' will be an enormous atom-based computer, simulating nuclear physics and quantum physics in order to control AIXI's webcam, which is still believed to be composed of atoms in accordance with the prespecified bridging laws. From our perspective this hypothesis seems silly, but if you restrict the hypothesis space to only classical atomic universes, that's what ends up being the computationally simplest hypothesis to explain the results of quantum experiments.\n\nAIXI-atomic will then try to choose actions so as to maximize the amount of expected diamond inside the probable *outside universes* that could contain the giant atom-based simulator of quantum physics. It is not obvious what sort of behavior this would imply.\n\n### Metaphor for difficulty: AIXI-atomic cares about only fundamental carbon\n\nOne metaphorical way of looking at the problem is that AIXI-atomic was implicitly defined to care only about diamonds made out of *ontologically fundamental* carbon atoms, not diamonds made out of quarks. A probability function that assigns 0 probability to all universes made of quarks, and a utility function that outputs a constant on all universes made of quarks, [yield functionally identical behavior](https://arbital.com/p/). So it is an exact metaphor to say that AIXI-atomic only *cares* about universes with ontologically basic carbon atoms, given that AIXI-atomic only *believes* in universes with ontologically basic carbon atoms.\n\nSince AIXI-atomic only cares about diamond made of fundamental carbon, when AIXI-atomic discovered the experimental data implying that almost all of its probability mass should reside in nuclear or quantum universes in which there were no fundamental carbon atoms, AIXI-atomic stopped caring about the effect its actions had on the vast majority of probability mass inside its model. Instead AIXI-atomic tried to maximize inside the tiny remaining probabilities in which it *was* inside a universe with fundamental carbon atoms that was somehow reproducing its sensory experience of nuclei and quantum fields; for example, a classical atomic universe with an atomic computer simulating a quantum universe and showing the results to AIXI-atomic.\n\nFrom our perspective, we failed to solve the 'ontology identification problem' and get the real-world result we wanted, because we tried to define the agent's *utility function* in terms of properties of a universe made out of atoms, and the real universe turned out to be made of quantum fields. 
This caused the utility function to *fail to bind* to the agent's representation in the way we intuitively had in mind.\n\n### Advanced-nonsafety of hardcoded ontology identifications\n\nToday we do know about quantum mechanics, so if we tried to build an unreflective diamond maximizer using the above formula, it might not fail on account of [the particular exact problem](https://arbital.com/p/48) of atomic physics being false.\n\nBut perhaps there are discoveries still remaining that would change our picture of the universe's ontology to imply something else underlying quarks or quantum fields. Human beings have only known about quantum fields for less than a century; our model of the ontological basics of our universe has been stable for less than a hundred years of our human experience. So we should seek an AI design that does not assume we know the exact, true, fundamental ontology of our universe during an AI's [development phase](https://arbital.com/p/5d). Or if our failure to know the exact laws of physics causes catastrophic failure of the AI, we should at least heavily mark that this is a [relied-on assumption](https://arbital.com/p/).\n\n## Beyond AIXI-atomic: Diamond identification in multi-level maps\n\nA realistic, bounded diamond maximizer wouldn't represent the outside universe with atomically detailed models. Instead, it would have some equivalent of a [multi-level map](https://arbital.com/p/) of the world in which the agent knew in principle that things were composed of atoms, but didn't model most things in atomic detail. E.g., its model of an airplane would have wings, or wing shapes, rather than atomically detailed wings. It would think about wings when doing aerodynamic engineering, atoms when doing chemistry, nuclear physics when doing nuclear engineering.\n\nAt the present, there are not yet any proposed formalisms for how to do probability theory with multi-level maps (in other words: [nobody has yet put forward a guess at how to solve the problem even given infinite computing power](https://arbital.com/p/)). Having some idea for how an agent could reason with multi-level maps, would be a good first step toward being able to define a bounded expected utility optimizer with a utility function that could be evaluated on multi-level maps. This in turn would be a first step towards defining an agent with a utility function that could rebind itself to *changing* representations in an *updating* multi-level map.\n\nIf we were actually trying to build a diamond maximizer, we would be likely to encounter this problem long before it started formulating new physics. 
The equivalent of a computational discovery that changes 'the most efficient way to represent diamonds' is likely to happen much earlier than a physical discovery that changes 'what underlying physical systems probably constitute a diamond'.\n\nThis also means that, on the actual [value loading problem](https://arbital.com/p/), we are liable to encounter the ontology identification problem long before the agent starts discovering new physics.\n\n# Discussion of the generalized ontology identification problem\n\nIf we don't know how to solve the ontology identification problem for maximizing diamonds, we probably can't solve it for much more complicated values over universe-histories.\n\n\n### View of human angst as ontology identification problem\n\nArgument: A human being who feels angst on contemplating a universe in which \"By convention sweetness, by convention bitterness, by convention color, in reality only atoms and the void\" (Democritus), or wonders where there is any room in this cold atomic universe for love, free will, or even the existence of people - since, after all, people are just *mere* collections of atoms - can be seen as undergoing an ontology identification problem: they don't know how to find the objects of value in a representation containing atoms instead of ontologically basic people.\n\nHuman beings simultaneously evolved a particular set of standard mental representations (e.g., a representation for colors in terms of a 3-dimensional subjective color space, a representation for other humans that simulates their brain via [https://arbital.com/p/empathy](https://arbital.com/p/empathy)) along with evolving desires that bind to these representations ([identification of flowering landscapes as beautiful](http://en.wikipedia.org/wiki/Evolutionary_aesthetics#Landscape_and_other_visual_arts_preferences), a preference not to be embarrassed in front of other objects designated as people). When someone visualizes any particular configurations of 'mere atoms', their built-in desires don't automatically fire and bind to that mental representation, the way they would bind to the brain's native representation of other people. Generalizing that no set of atoms can be meaningful, and being told that reality is composed entirely of such atoms, they feel they've been told that the true state of reality, underlying appearances, is a meaningless one.\n\nArguably, this is structurally similar to a utility function so defined as to bind only to true diamonds made of ontologically basic carbon, which evaluates as unimportant any diamond that turns out to be made of mere protons and neutrons.\n\n## Ontology identification problems may reappear on the reflective level\n\nAn obvious thought (especially for [online genies](https://arbital.com/p/6w)) is that if the AI is unsure about how to reinterpret its goals in light of a shifting mental representation, it should query the programmers.\n\nSince the definition of a programmer would then itself be baked into the [preference framework](https://arbital.com/p/5f), the problem might [reproduce itself on the reflective level](https://arbital.com/p/) if the AI became unsure of where to find [programmers](https://arbital.com/p/9r). (\"My preference framework said that programmers were made of carbon atoms, but all I can find in this universe are quantum fields.\")\n\n## Value lading in category boundaries\n\nTaking apart objects of value into smaller components can sometimes create new moral [edge cases](https://arbital.com/p/). 
In this sense, rebinding the terms of a utility function decides a [value-laden](https://arbital.com/p/) question.\n\nConsider chimpanzees. One way of viewing questions like \"Is a chimpanzee truly a person?\" - meaning not \"How do we arbitrarily define the syllables per-son?\" but \"Should we care a lot about chimpanzees?\" - is that they're about how to apply the 'person' category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.\n\nRedefining the value-laden category 'person' so that it talked about brains made out of neural regions, rather than whole human beings, would implicitly say whether or not a chimpanzee was a person. Chimpanzees definitely have neural areas of various sizes, and particular cognitive abilities - we can suppose the empirical truth is unambiguous at this level, and known to us. So the question is then whether we regard a particular configuration of neural parts (a frontal cortex of a certain size) and particular cognitive abilities (consequentialist means-end reasoning and empathy, but no recursive language) as something that our 'person' category values... once we've rewritten the person category to value configurations of cognitive parts, rather than whole atomic people.\n\nIn this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds. Once a diamond maximizer knows about neutrons, it can see that C-14 is chemically like ordinary carbon and forms the same kind of chemical bonds, but that it's heavier because it has two extra neutrons. We can see that chimpanzees have similar brain architectures to the sort of people we always considered before, but that they have smaller frontal cortexes and no ability to use recursive language, etcetera.\n\nWithout knowing more about the diamond maximizer, we can't guess what sort of considerations it might bring to bear in deciding what is Truly Carbon and Really A Diamond. But the breadth of considerations human beings need to invoke in deciding how much to care about chimpanzees is one way of illustrating that the problem of rebinding a utility function to a shifted ontology is [https://arbital.com/p/value-laden](https://arbital.com/p/value-laden) and can potentially undergo [excursions](https://arbital.com/p/) into [arbitrarily complicated desiderata](https://arbital.com/p/).
Redefining a [moral category](https://arbital.com/p/) so that it talks about the underlying parts of what were previously seen as all-or-nothing atomic objects may carry an implicit ruling about how to value many kinds of [edge case](https://arbital.com/p/) objects that were never seen before.\n\nA formal part of this problem may need to be carved out from the edge-case-reclassification part: e.g., how would you redefine carbon as C-12 if there were no other isotopes, or how would you rebind the utility function to *at least* C-12, or how would edge cases be identified and queried.\n\n# Potential research avenues\n\n## 'Transparent priors' constrained to meaningful but Turing-complete hypothesis spaces\n\nThe reason why we can't bind a description of 'diamond' or 'carbon atoms' to the hypothesis space used by [https://arbital.com/p/11v](https://arbital.com/p/11v) or [AIXI-tl](https://arbital.com/p/) is that the hypothesis space of AIXI is all Turing machines that produce binary strings, or probability distributions over the next sense bit given previous sense bits and motor input. These Turing machines could contain an unimaginably wide range of possible contents.\n\n(Example: Maybe one Turing machine that is producing good sequence predictions inside AIXI actually does so by simulating a large universe, identifying a superintelligent civilization that evolves inside that universe, and motivating that civilization to try to intelligently predict future bits from past bits (as provided by some intervention). To write a formal utility function that could extract the 'amount of real diamond in the environment' from arbitrary predictors in the above case, we'd need the function to read the Turing machine, decode that universe, find the superintelligence, decode the superintelligence's thought processes, find the concept (if any) resembling 'diamond', and hope that the superintelligence had precalculated how much diamond was around in the outer universe being manipulated by AIXI.)\n\nThis suggests that to solve the ontology identification problem, we may need to constrain the hypothesis space to something [less general](https://arbital.com/p/) than 'an explanation is any computer program that outputs a probability distribution on sense bits'. A constrained explanation space can still be Turing complete (contain a possible explanation for every computable sense input sequence) without every possible computer program constituting an explanation.\n\nAn [unrealistic example](https://arbital.com/p/) would be to constrain the hypothesis space to Dynamic Bayesian Networks. DBNs can represent any Turing machine with bounded memory, so they are very general, but since a DBN is a causal model, they make it possible for a preference framework to talk about 'the cause of a picture of a diamond' in a way that you couldn't inside a general Turing machine. Again, this might fail if the DBN has no 'natural' way of representing the environment except as a DBN simulating some other program that simulates the environment.\n\nSuppose a rich causal language - e.g., a [dynamic system](https://arbital.com/p/) of objects with [causal relations](https://arbital.com/p/) and [hierarchical categories of similarity](https://arbital.com/p/).
The hope is that in this language, the *natural* hypotheses representing the environment - the simplest hypotheses within this language that well predict the sense data, or those hypotheses of highest probability under some simplicity prior after updating on the sense data - would be such that there was a natural 'diamond' category inside the most probable causal models. In other words, the winning hypothesis for explaining the universe would already have postulated diamondness as a [natural category](https://arbital.com/p/) and represented it as Category #803,844, in a rich language where we already know how to look through the environmental model and find the list of categories.\n\nGiven some transparent prior, there would then exist the further problem of developing a utility-identifying preference framework that could look through the most likely environmental representations and identify diamonds. Some likely (interacting) ways of binding would be, e.g., to \"the causes of pictures of diamonds\", to \"things that are bound to four similar things\", querying ambiguities to programmers, or direct programmer inspection of the AI's model (but in this case the programmers might need to re-inspect after each ontological shift). See below.\n\n(A bounded value loading methodology would also need some way of turning the bound preference framework into the estimation procedures for expected diamond and the agent's search procedures for strategies high in expected diamond, i.e., the bulk of the actual AI that carries out the goal optimization.)\n\n## Matching environmental categories to descriptive constraints\n\nGiven some transparent prior, there would exist a further problem of how to actually bind a preference framework to that prior. One possible contributing method for pinpointing an environmental property would be available if we understood the prior well enough to say what the described object ought to look like - the equivalent of being able to search for 'things W made of six smaller things X near six smaller things Y and six smaller things Z, that are bound by shared Xs to four similar things W in a tetrahedral structure' in order to identify carbon atoms and diamond.\n\nWe would need to understand the representation well enough to make a guess about how carbon or diamond would be represented inside it. But if we could guess that, we could write a program that identifies 'diamond' inside the hypothesis space without needing to know in advance that diamondness will be Category #803,844.
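\n\nAs a minimal sketch of what such a description-matching program could look like - assuming, purely for illustration, a toy 'transparent' model that exposes a list of categories, each with part counts and a typical bonding pattern; these structures and names are invented here, the tetrahedral-geometry and 'bound by shared Xs' details are dropped, and nothing below is a real formalism:\n\n```python\n# Toy 'transparent' hypothesis: the induced model exposes its categories,\n# each with counts of its smaller part-kinds and a typical bonding pattern.\nfrom dataclasses import dataclass\n\n@dataclass\nclass Category:\n    index: int               # e.g. the text's arbitrary Category #803,844\n    part_counts: dict        # part-kind -> how many of that kind a member contains\n    bonds_to_same_kind: int  # how many similar objects a member typically binds to\n\ndef matches_carbon_description(cat):\n    # 'A thing made of six of each of three smaller part-kinds,\n    #  that binds to four similar things.'\n    return (\n        len(cat.part_counts) == 3\n        and all(count == 6 for count in cat.part_counts.values())\n        and cat.bonds_to_same_kind == 4\n    )\n\ndef identify_diamond_categories(model_categories):\n    # Return every category in the current best model that matches the\n    # hand-written descriptive constraint; this may be zero, one, or many.\n    return [cat for cat in model_categories if matches_carbon_description(cat)]\n```\n\nThe one design point the sketch is meant to exhibit is that the matcher takes the model as an input and hard-codes only the description, not the category index.\n\n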
Then we could rerun the same utility-identification program when the representation updates, so long as this program can reliably identify diamond inside the model each time, and the agent acts so as to optimize the utility identified by the program.\n\nOne particular class of objects that might plausibly be identifiable in this way is 'the AI's programmers' (aka the agents that are causes of the AI's code) if there are parts of the preference framework that say to query programmers to resolve ambiguities.\n\nA toy problem for this research avenue might involve:\n\n- One of the richer representation frameworks that can be inducted as of the time the work is done, e.g., a simple Dynamic Bayes Net.\n- An agent environment that can be thus represented.\n- A goal over properties relatively distant from the agent's sensory experience (e.g., the goal is over the cause of the cause of the sensory data).\n- A program that identifies the objects of utility in the environment, within the model thus freely inducted.\n- An agent that optimizes the identified objects of utility, once it has inducted a sufficiently good model of the environment to optimize what it is looking for.\n\nFurther work might add:\n\n- New information that can change the model of the environment.\n- An agent that smoothly updates what it optimizes for in this case.\n\nAnd further:\n\n- Environments complicated enough that there is real structural ambiguity (e.g., dependence on exact initial conditions of the inference program) about how exactly the utility-related parts are modeled.\n- Agents that can optimize through a probability distribution over environments that differ in their identified objects of utility.\n\nA potential agenda for unbounded analysis might be:\n\n- An [unbounded analysis](https://arbital.com/p/) showing that a utility-identifying [preference framework](https://arbital.com/p/5f) is a generalization of a [VNM utility](https://arbital.com/p/) and can [tile](https://arbital.com/p/) in an architecture that tiles a generic utility function.\n- A [https://arbital.com/p/45](https://arbital.com/p/45) analysis showing that an agent is not motivated to try to cause the universe to be such as to have utility identified in a particular way.\n- A [https://arbital.com/p/45](https://arbital.com/p/45) analysis showing that the identity and category boundaries of the objects of utility will be treated as a [historical fact](https://arbital.com/p/) rather than one lying in the agent's [decision-theoretic future](https://arbital.com/p/).\n\n## Identifying environmental categories as the causes of labeled sense data\n\nAnother potential approach, given a prior transparent enough that we can find causal data inside it, would be to try to identify diamonds as the causes of pictures of diamonds.\n\n### Security note\n\n[Christiano's hack](https://arbital.com/p/5j): if your AI is advanced enough to model distant superintelligences, it's important to note that distant superintelligences can make 'the most probable cause of the AI's sensory data' be anything they want, by making a predictable decision to simulate AIs such that your AI doesn't have information distinguishing itself from the distant AIs your AI imagines being simulated.\n\n## Ambiguity resolution\n\nBoth the description-matching and cause-inferring methods might produce ambiguities.
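\n\nContinuing the purely illustrative sketch from the previous subsection (reusing the invented identify_diamond_categories matcher; again not a real proposal), one concrete shape for handling this is to refuse to bind silently whenever the match is not unique:\n\n```python\nclass AmbiguousIdentification(Exception):\n    # Raised when the descriptive constraint does not pick out exactly one category.\n    pass\n\ndef identify_or_flag(model_categories):\n    matches = identify_diamond_categories(model_categories)\n    if len(matches) == 1:\n        return matches[0]\n    # Zero or several candidate bindings: stop and flag for programmer review\n    # instead of quietly optimizing some mixture of the candidates.\n    raise AmbiguousIdentification(\n        f'{len(matches)} categories matched the diamond description; '\n        'programmer review needed before optimization continues.'\n    )\n```\n\n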
Rather than having the AI optimize for a probabilistic mix over all the matches (as if it were uncertain of which match were the true one), it would be better to query the ambiguity to the programmers (especially if different probable models imply different strategies). This problem shares structure with [inductive inference with ambiguity resolution](https://arbital.com/p/) as a strategy for resolving [unforeseen inductions](https://arbital.com/p/).\n\n\n\n## Multi-level maps\n\nBeing able to describe, in purely theoretical principle, a prior over epistemic models that have at least two levels and can switch between them in some meaningful sense, would constitute major progress over the present state of the art.\n\n\n\n# Implications\n\n\n\n\n\nThe problem of using sensory data to build computationally efficient probabilistic maps of the world, and to efficiently search for actions that are predicted by those maps to have particular consequences, could be identified with the entire problem of AGI. So the research goal of ontology identification is not to publish a complete bounded system like that (i.e. an AGI), but to develop an unbounded analysis of utility rebinding that seems to say something useful specifically about the ontology-identification part of the problem.)", "date_published": "2016-10-14T15:32:34Z", "authors": ["Alexei Andreev", "Nate Soares", "Tom Brown", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["It seems likely that for advanced agents, the agent's representation of the world will [change in unforeseen ways as it becomes smarter](https://arbital.com/p/5d). The ontology identification problem is to create a [preference framework](https://arbital.com/p/5f) for the agent that optimizes the same external facts, even as the agent modifies its representation of the world. [For example](https://arbital.com/p/6b), if the [intended goal](https://arbital.com/p/6h) were to [create large amounts of diamond material](https://arbital.com/p/5g), one type of ontology identification problem would arise if the programmers thought of carbon atoms as primitive during the AI's development phase, and then the advanced AI discovered nuclear physics."], "tags": ["AI alignment open problem", "Work in progress", "B-Class", "Development phase unpredictable"], "alias": "5c"} {"id": "fafcb6526a90e8ed8b4e2922bfbd5da0", "title": "Development phase unpredictable", "url": "https://arbital.com/p/development_phase_unpredictable", "source": "arbital", "source_type": "text", "text": "Several proposed problems in [advanced safety](https://arbital.com/p/2l) are alleged to be difficult because they depend on some property of a mature [agent](https://arbital.com/p/2c) that is allegedly hard to predict in advance at the time we are designing, teaching, or testing the agent. We say that such properties are allegedly 'development phase unpredictable'. 
For example, the [Unforeseen Maximums problem](https://arbital.com/p/47) arises when [we can't search a rich solution space as widely as an advanced agent](https://arbital.com/p/2j), making it development-phase unpredictable which real-world strategy or outcome state will maximize some formal utility function.", "date_published": "2015-10-13T15:31:17Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition"], "alias": "5d"} {"id": "7d36dc26fc36ee5e4d169529a70e7e5a", "title": "Preference framework", "url": "https://arbital.com/p/preference_framework", "source": "arbital", "source_type": "text", "text": "A 'preference framework' refers to a fixed algorithm that updates, or potentially changes in other ways, to determine what the agent [prefers](https://arbital.com/p/) for [terminal](https://arbital.com/p/1bh) outcomes. 'Preference framework' is a term more general than '[utility function](https://arbital.com/p/1fw)' which includes structurally complicated generalizations of utility functions.\n\nAs a central example, the [utility indifference](https://arbital.com/p/1b7) proposal has the agent switching between utility functions $U_X$ and $U_Y$ depending on whether a switch is pressed. We can call this meta-system a 'preference framework' to avoid presuming in advance that it embodies a [VNM-coherent](https://arbital.com/p/7hh) utility function.\n\nAn even more general term would be [https://arbital.com/p/decision_algorithm](https://arbital.com/p/decision_algorithm) which doesn't presume that the agent operates by [preferring outcomes](https://arbital.com/p/9h).", "date_published": "2017-02-13T16:12:34Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A 'preference framework' is a way of deciding which outcomes an agent [terminally](https://arbital.com/p/1bh) prefers. 'Preference framework' is a broader term than '[utility function](https://arbital.com/p/1fw)', since 'preference framework' would also include structurally complicated [meta-utility](https://arbital.com/p/meta_utility) functions, such as those which appear in some proposals for [https://arbital.com/p/1b7](https://arbital.com/p/1b7) or [https://arbital.com/p/7s2](https://arbital.com/p/7s2)."], "tags": ["B-Class", "Definition"], "alias": "5f"} {"id": "9073727bef6a293efd859a8e47e03536", "title": "Distant superintelligences can coerce the most probable environment of your AI", "url": "https://arbital.com/p/probable_environment_hacking", "source": "arbital", "source_type": "text", "text": "A distant superintelligence can change 'the most likely environment' for your AI by simulating many copies of AIs similar to your AI, such that your local AI doesn't know it's not one of those simulated AIs. This means that, e.g., if there is any reference in your AI's [preference framework](https://arbital.com/p/5f) to the [causes](https://arbital.com/p/) of [sense data](https://arbital.com/p/) - like, programmers being the cause of sensed keystrokes - then a distant superintelligence can try to hack that reference. This would place us in an [adversarial security context versus a superintelligence](https://arbital.com/p/), and should be avoided if at all possible.\n\n### Difficulty\n\nSome proposals for AI preference frameworks involve references to the AI's *causal environment* and not just the AI's immediate *sense events*. 
For example, a [DWIM](https://arbital.com/p/) preference framework would putatively have the AI identify 'programmers' in the environment, model those programmers, and care about what its model of the programmers 'really wanted the AI to do'. In other words, the AI would care about the causes behind its immediate sense experiences.\n\nThis potentially opens our AIs to a remote root attack by a distant superintelligence. A distant superintelligence has the power to simulate lots of copies of our AI, or lots of AIs such that our AI doesn't think it can introspectively distinguish itself from those AIs. Then it can force the 'most likely' explanation of the AI's apparent sensory experiences to be that the AI is in such a simulation. Then the superintelligence can change arbitrary features of the most likely facts about the environment.\n\nThis problem was observed in a security context by [https://arbital.com/p/3](https://arbital.com/p/3), and precedented by a less general suggestion from [Rolf Nelson](https://arbital.com/p/http://www.sl4.org/archive/0708/16600.html).\n\n\"Probable environment hacking\" depends on the local AI trying to model distant superintelligences. The actual proximal harm is done by the local AI's *model of* distant superintelligences, rather than by the superintelligences themselves. However, a distant superintelligence that uses a [logical decision theory](https://arbital.com/p/) may model its choices as logically correlated to the local AI's model of the distant SI's choices. Thus, a local AI that models a distant superintelligence that uses a logical decision theory may model that distant superintelligence as behaving as though it could control the AI's model of its choices via its choices. Thus, the local AI would model the distant superintelligence as probably creating lots of AIs that it can't distinguish from itself, and update accordingly on the most probable cause of its sense events.\n\nThis hack would be worthwhile, from the perspective of a distant superintelligence, if e.g. it could gain control of the whole future light cone of 'naturally arising' AIs like ours, in exchange for expending some much smaller amount of resource (small compared to our future light cone) in order to simulate lots of AIs. 
(Obviously, the distant SI would prefer even more to 'fool' our AI into expecting this, while not actually expending the resources.)\n\nThis hack would be expected to go through by default if: (1) a local AI uses [naturalized induction](https://arbital.com/p/) or some similar framework to reason about the [causes](https://arbital.com/p/) of sense events, (2) the local AI models distant superintelligences as being likely to use logical decision theories and to have utility functions that would vary with respect to outcomes in our local future light cone, and (3) the local AI has a preference framework that can be 'hacked' via induced beliefs about the environment.\n\n### Implications\n\nFor any AI short of a full-scale autonomous Sovereign, we should probably try to get our AI to [not think at all about distant superintelligences](https://arbital.com/p/1g4), since this creates a host of [adversarial security problems](https://arbital.com/p/) of which \"probable environment hacking\" is only one.\n\nWe might also think twice about DWIM architectures that seem to permit catastrophe purely as a function of the AI's beliefs about the environment, without any check that goes through a direct sense event of the AI (which distant superintelligences cannot control the AI's beliefs about, since we can directly hit the sense switch).\n\nWe can also hope for any number of miscellaneous safeguards that would sound alarms at the point where the AI begins to imagine distant superintelligences imagining how to hack itself.", "date_published": "2016-03-09T00:53:03Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Behaviorist genie", "Work in progress"], "alias": "5j"} {"id": "ec4e3454d48e65265c7dae357818cbe0", "title": "Complexity of value", "url": "https://arbital.com/p/complexity_of_value", "source": "arbital", "source_type": "text", "text": "## Introduction\n\n\"Complexity of value\" is the idea that if you tried to write an AI that would do right things (or maximally right things, or adequately right things) *without further looking at humans* (so it can't take in a flood of additional data from human advice, the AI has to be complete as it stands once you're finished creating it), the AI's preferences or utility function would need to contain a large amount of data ([algorithmic complexity](https://arbital.com/p/Kcomplexity)). Conversely, if you try to write an AI that directly wants *simple* things or try to specify the AI's preferences using a *small* amount of data or code, it won't do acceptably right things in our universe.\n\nComplexity of value says, \"There's no simple and non-meta solution to AI preferences\" or \"The things we want AIs to want are complicated in the [Kolmogorov-complexity](https://arbital.com/p/5v) sense\" or \"Any simple goal you try to describe that is All We Need To Program Into AIs is almost certainly wrong.\"\n\nComplexity of value is a further idea above and beyond the [orthogonality thesis](https://arbital.com/p/1y) which states that AIs don't automatically do the right thing and that we can have, e.g., [paperclip maximizers](https://arbital.com/p/10h). Even if we accept that paperclip maximizers are possible, and simple and nonforced, this wouldn't yet imply that it's very *difficult* to make AIs that do the right thing. 
If the right thing is very simple to encode - if there are [value](https://arbital.com/p/55) optimizers that are scarcely more complex than [diamond maximizers](https://arbital.com/p/5g) - then it might not be especially hard to build a nice AI even if not all AIs are nice. Complexity of Value is the further proposition that says, no, this is foreseeably quite hard - not because AIs have 'natural' anti-nice desires, but because niceness requires a lot of work to specify.\n\n### Frankena's list\n\nAs an intuition pump for the complexity of value thesis, consider William Frankena's list of things which many cultures and people seem to value (for their own sake rather than their external consequences):\n\n> \"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.\"\n\nWhen we try to list out properties of a human or galactic future that seem like they'd be very nice, we at least *seem* to value a fair number of things that aren't reducible to each other. (What initially look like plausible-sounding \"But you do A to get B\" arguments usually fall apart when we look for [third alternatives](https://arbital.com/p/) to doing A to get B. Marginally adding some freedom can marginally increase the happiness of a human, so a happiness optimizer that can only exert a small push toward freedom might choose to do so. That doesn't mean that a *pure, powerful* happiness maximizer would instrumentally optimize freedom. If an agent cares about happiness but not freedom, the outcome that *maximizes* their preferences is a large number of brains set to maximum happiness. When we don't just seize on one possible case where a B-optimizer might use A as a strategy, but instead look for further C-strategies that might maximize B even better than A, then the attempt to reduce A to an instrumental B-maximization strategy often falls apart. It's in this sense that the items on Frankena's list don't seem to reduce to each other as a matter of pure preference, even though humans in everyday life often seem to pursue several of the goals at the same time.)\n\nComplexity of value says that, in this case, the way things seem is the way they are: Frankena's list is *not* encodable in one page of Python code. This proposition can't be established definitely without settling on a sufficiently well-specified [metaethics](https://arbital.com/p/), such as [reflective equilibrium](https://arbital.com/p/), to make it clear that there is indeed no a priori reason for normativity to be algorithmically simple.
But the basic intuition for Complexity of Value is provided just by the fact that Frankena's list was more than one item long, and that many individual terms don't seem likely to have algorithmically simple definitions that distinguish their valuable from non-valuable forms.\n\n### Lack of a central core\n\nWe can understand the idea of complexity of value by contrasting it to the situation with respect to [epistemic reasoning](https://arbital.com/p/) aka truth-finding or answering simple factual questions about the world. In an ideal sense, we can try to compress and reduce the idea of mapping the world well down to algorithmically simple notions like \"Occam's Razor\" and \"Bayesian updating\". In a practical sense, natural selection, in the course of optimizing humans to solve factual questions like \"Where can I find a tree with fruit?\" or \"Are brightly colored snakes usually poisonous?\" or \"Who's plotting against me?\", ended up with enough of the central core of epistemology that humans were later able to answer questions like \"How are the planets moving?\" or \"What happens if I fire this rocket?\", even though humans hadn't been explicitly selected on to answer those exact questions.\n\nBecause epistemology does have a central core of simplicity and Bayesian updating, selecting for an organism that got some pretty complicated epistemic questions right enough to reproduce, also caused that organism to start understanding things like General Relativity. When it comes to truth-finding, we'd expect by default for the same thing to be true about an Artificial Intelligence; if you build it to get epistemically correct answers on lots of widely different problems, it will contain a core of truthfinding and start getting epistemically correct answers on lots of other problems - even problems completely different from your training set, the way that humans understanding General Relativity wasn't like any hunter-gatherer problem.\n\nThe complexity of value thesis is that there *isn't* a simple core to normativity, which means that if you hone your AI to do normatively good things on A, B, and C and then confront the AI with very different problem D, the AI may do the wrong thing on D. There's a large number of independent ideal \"gears\" inside the complex machinery of value, compared to epistemology that in principle might only contain \"prefer simpler hypotheses\" and \"prefer hypotheses that match the evidence\".\n\nThe [Orthogonality Thesis](https://arbital.com/p/1y) says that, contra to the intuition that [maximizing paperclips](https://arbital.com/p/10h) feels \"stupid\", you can have arbitrarily cognitively powerful entities that maximize paperclips, or arbitrarily complicated other goals. So while intuitively you might think it would be simple to avoid paperclip maximizers, requiring no work at all for a sufficiently advanced AI, the Orthogonality Thesis says that things will be more difficult than that; you have to put in some work to have the AI do the right thing.\n\nThe Complexity of Value thesis is the next step after Orthogonality; it says that, contra to the feeling that \"rightness ought to be simple, darn it\", normativity turns out not to have an algorithmically simple core, not the way that correctly answering questions of fact has a central tendency that generalizes well. 
And so, even though an AI that you train to do well on problems like steering cars or figuring out General Relativity from scratch may hit on a core capability that leads the AI to do well on arbitrarily more complicated problems of galactic scale, we can't rely on getting an equally generous bonanza of generalization from an AI that seems to do well on a small but varied set of moral and ethical problems - it may still fail the next problem that isn't like anything in the training set. To the extent that we have very strong reasons to have prior confidence in Complexity of Value, in fact, we ought to be suspicious and worried about an AI that seems to be pulling correct moral answers from nowhere - it is much more likely to have hit upon the convergent instrumental strategy \"say what makes the programmers trust you\", rather than having hit upon a simple core of all normativity.\n\n## Key sub-propositions\n\nComplexity of Value requires [Orthogonality](https://arbital.com/p/1y), and would be implied by three further subpropositions:\n\nThe **intrinsic complexity of value** proposition is that the properties we want AIs to achieve - whatever stands in for the metasyntactic variable '[value](https://arbital.com/p/55)' - have a large amount of intrinsic information in the sense of comprising a large number of independent facts that aren't being generated by a single computationally simple rule.\n\nA very bad example that may nonetheless provide an important intuition is to imagine trying to pinpoint to an AI what constitutes 'worthwhile happiness'. The AI suggests a universe tiled with tiny Q-learning algorithms receiving high rewards. Some explanation and several labeled datasets later, the AI suggests a human brain with a wire stuck into its pleasure center. After further explanation, the AI suggests a human in a holodeck. You begin talking about the importance of believing truly and that your values call for apparent human relationships to be real relationships rather than being hallucinated. The AI asks you what constitutes a good human relationship to be happy about. The series of questions occurs because (arguendo) the AI keeps running into questions whose answers are not AI-obvious from the answers already given, because they involve new things you want whose desirability wasn't implied by those earlier answers. The upshot is that the specification of 'worthwhile happiness' involves a long series of facts that aren't reducible just to the previous facts, and some of your preferences may involve many fine details of surprising importance. In other words, the specification of 'worthwhile happiness' would be at least as hard to code by hand into the AI as it would be to hand-code a formal rule that could recognize which pictures contained cats. (I.e., impossible.)\n\nThe second proposition is **incompressibility of value**, which says that attempts to reduce these complex values into some incredibly simple and elegant principle fail (much like early attempts by e.g. Bentham to reduce all human value to pleasure); and that no simple instruction given an AI will happen to target outcomes of high value either. The core reason to expect a priori that all such attempts will fail is that most 1000-byte strings aren't compressible down to some incredibly simple pattern no matter how many clever tricks you try to throw at them; fewer than 1 in 1024 such strings can be compressed by more than 10 bits, never mind down to 990 bytes or 10 bytes.
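\n\nTo spell out the standard counting argument behind that figure (included only to make the claim checkable): there are $2^{8000}$ distinct 1000-byte (8000-bit) strings, but at most $2^{8000-k} - 1$ distinct binary descriptions of length at most $7999 - k$ bits, which is what 'compressed by more than $k$ bits' requires. Since each compressible string needs its own description, fewer than a $2^{-k}$ fraction of all 1000-byte strings can be compressed by more than $k$ bits; $k = 10$ gives the 1-in-1024 figure, and compressing all the way down to 990 bytes (a saving of 80 bits) is possible for fewer than 1 in $2^{79}$ of them.\n\n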
Due to the tremendous number of different proposals for why some simple instruction to an AI should end up achieving high-value outcomes or why all human value can be reduced to some simple principle, there is no central demonstration that all these proposals *must* fail, but there is a sense in which *a priori* we should strongly expect all such clever attempts to fail. Many disagreeable attempts at reducing value A to value B, such as [Juergen Schmidhuber's attempt to reduce all human value to increasing the compression of sensory information](https://arbital.com/p/), stand as a further cautionary lesson.\n\nThe third proposition is **[fragility of value](fragility-1)** which says that if you have a 1000-byte *exact* specification of worthwhile happiness, and you begin to mutate it, the [value](https://arbital.com/p/55) created by the corresponding AI with the mutated definition falls off rapidly. E.g. an AI with only 950 bytes of the full definition may end up creating 0% of the value rather than 95% of the value. (E.g., the AI understood all aspects of what makes for a life well-lived... *except* the part about requiring a conscious observer to experience it.)\n\nTogether, these propositions would imply that to achieve an *adequate* amount of value (e.g. 90% of potential value, or even 20% of potential value) there may be no simple handcoded object-level goal for the AI that results in that value's realization. E.g., you can't just tell it to 'maximize happiness', with some hand-coded rule for identifying happiness.\n\n## Centrality\n\nComplexity of Value is a central proposition in [value alignment theory](https://arbital.com/p/2v). Many [foreseen difficulties](https://arbital.com/p/6r) revolve around it:\n\n- Complex values can't be hand-coded into an AI, and require [value learning](https://arbital.com/p/) or [Do What I Mean](https://arbital.com/p/) preference frameworks.\n- Complex /fragile values may be hard to learn even by induction because the labeled data may not include distinctions that give all of the 1000 bytes a chance to cast an unambiguous causal shadow into the data, and it's very bad if 50 bytes are left ambiguous.\n- Complex / fragile values require error-recovery mechanisms because of the worry about getting some single subtle part wrong and this being catastrophic. 
(And since we're working inside of highly intelligent agents, the recovery mechanism has to be a [corrigible preference](https://arbital.com/p/45) so that the agent accepts our attempts at modifying it.)\n\nMore generally:\n\n- Complex values tend to be implicated in [patch-resistant problems](https://arbital.com/p/48) that wouldn't be resistant if there was some obvious 5-line specification of *exactly* what to do, or not do.\n- Complex values tend to be implicated in the [context change problems](https://arbital.com/p/6q) that wouldn't exist if we had a 5-line specification that solved those problems once and for all and that we'd likely run across during the development phase.\n\n### Importance\n\nMany policy questions strongly depend on Complexity of Value, mostly having to do with the overall difficulty of developing value-aligned AI, e.g.:\n\n- Should we try to develop [Sovereigns](https://arbital.com/p/), or restrict ourselves to [Genies](https://arbital.com/p/6w)?\n- How likely is a moderately safety-aware project to succeed?\n- Should we be more worried about malicious actors creating AI, or about well-intentioned errors?\n- How difficult is the total problem and how much should we be panicking?\n- How attractive would be any genuinely credible [game-changing alternative](https://arbital.com/p/2z) to AI?\n\nIt has been advocated that there are [psychological biases](https://arbital.com/p/) and [popular mistakes](https://arbital.com/p/) leading to beliefs that directly or by implication deny Complex Value. To the extent one credits that Complex Value is probably true, one should arguably be concerned about the number of early assessments of the value alignment problem that seem to rely on Complex Value being false (like just needing to hardcode a particular goal into the AI, or in general treating the value alignment problem as not panic-worthily difficult). \n\n## Truth condition\n\nThe Complexity of Value proposition is true if, relative to viable and acceptable real-world [methodologies](https://arbital.com/p/) for AI development, there isn't any reliably knowable way to specify the AI's [object-level preferences](https://arbital.com/p/) as a structure of low [algorithmic complexity](https://arbital.com/p/), such that the result of running that AI is [achieving](https://arbital.com/p/2z) [enough](https://arbital.com/p/) of the possible [value](https://arbital.com/p/55), for reasonable definitions of [value](https://arbital.com/p/55).\n\nCaveats:\n\n### Viable and acceptable computation\n\nSuppose there turns out to exist, in principle, a relatively simple Turing machine (e.g. 100 states) that picks out 'value' by re-running entire evolutionary histories, creating and discarding a hundred billion sapient races in order to pick out one that ended up relatively similar to humanity. This would use an unrealistically large amount of computing power and *also* commit an unacceptable amount of [mindcrime](https://arbital.com/p/6v).", "date_published": "2016-04-14T01:17:56Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["The proposition that there's no [algorithmically simple](https://arbital.com/p/5v) [object-level goal](https://arbital.com/p/5t) we can give to an [advanced AI](https://arbital.com/p/2c) that yields a future of high [value](https://arbital.com/p/55). Or: Any formally simple goal given to an AI, that talks directly about what sort of world to create, will produce disaster. 
Or: If you're trying to talk directly about what events or states of the world you want, then any sort of programmatically simple utility function, of the sort a programmer could reasonably hardcode, will lead to a bad end. (The non-simple alternative would be, e.g., an induction rule that can learn complicated classification rules from labeled instances, or a preference framework that explicitly models humans in order to learn complicated facts about what humans want.)"], "tags": ["B-Class"], "alias": "5l"} {"id": "ff7cae71b5602b424973c78aecd9cb9b", "title": "Immediate goods", "url": "https://arbital.com/p/immediate_goods", "source": "arbital", "source_type": "text", "text": "One of the potential views on 'value' in the value alignment problem is that what we should want from an AI is a list of immediate goods or outcome features like 'a cure for cancer' or 'letting humans make their own decisions' or 'preventing the world from being wiped out by a paperclip maximizer'. (Immediate Goods as a criterion of 'value' isn't the same as saying we should give the AI those explicit goals; calling such a list 'value' means it's the real criterion by which we should judge how well the AI did.)\n\n# Arguments\n\n## Immaturity of view deduced from presence of instrumental goods\n\nIt seems understandable that Immediate Goods would be a very common form of expressed want when people first consider the [value alignment problem](https://arbital.com/p/2v); they would look for valuable things an AI could do.\n\nBut such a quickly produced list of expressed wants will often include [instrumental goods](https://arbital.com/p/) rather than [terminal goods](https://arbital.com/p/). For example, a cancer cure is (presumably) a means to the end of healthier or happier humans, which would then be the actual grounds on which the AI's real-world 'value' was evaluated from the human speaker's standpoint. If the AI 'cured cancer' in some technical sense that didn't make people healthier, the original person making the wish would probably not see the AI as having achieved value.\n\nThis is a reason for suspecting the maturity of such expressed views, and to suspect that the stated list of immediate goods will probably evolve into a more [terminal](https://arbital.com/p/) view of value from a human standpoint, given further reflection.\n\n### Mootness of immaturity\n\nIrrespective of the above, so far as technical issues like [Edge Instantiation](https://arbital.com/p/2w) are concerned, the 'value' variable could still apply to someone's spontaneously produced list of immediate wants, and that all the standard consequences of the value alignment problem usually still apply. It means we can immediately say (honestly) that e.g. [Edge Instantiation](https://arbital.com/p/2w) would be a problem for whatever want the speaker just expressed, without needing to persuade them to some other stance on 'value' first. 
Since the same technical problems will apply both to the immature view and to the expected mature view, we don't need to dispute the view of 'value' in order to take it at face value and honestly explain the standard technical issues that would still apply.\n\n## Moral imposition of short horizons\n\nArguably, a list of immediate goods may make some sense as a stopping-place for evaluating the performance of the AI, if either of the following conditions obtain:\n\n- There is much more agreement (among project sponsors or humans generally) about the goodness of the instrumental goods, than there is about the terminal values that make them good. E.g., twenty project sponsors can all agree that freedom is good, but have nonoverlapping concepts about why it is good, and it is hypothetically the case that these people would continue to disagree in the limit of indefinite debate or reflection. Then if we want to collectivize 'value' from the standpoint of the project sponsors for purposes of talking about whether the AI methodology achieves 'value', maybe it would just make sense to talk about how much (intuitively evaluated) freedom the AI creates.\n- It is in some sense morally incumbent upon humanity to do its own thinking about long-term outcomes and achieve them through immediate goods, or it is in some sense morally incumbent for humanity to arrive at long-term outcomes via its own decisions or optimization starting from immediate goods. In this case, it might make sense to see the 'value' of the AI as being realized only in terms of the AI getting to those immediate goods, because it would be morally wrong for there to be optimization by the AI of consequences beyond that.\n\nTo the knowledge of [https://arbital.com/p/2](https://arbital.com/p/2) as of May 2015, neither of these views have yet been advocated by anyone in particular as a defense of an immediate-goods theory of value.", "date_published": "2015-12-16T16:39:47Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "5r"} {"id": "902fcf2d97982f32c3edfe270e45b08d", "title": "Value alignment problem", "url": "https://arbital.com/p/value_alignment_problem", "source": "arbital", "source_type": "text", "text": "Disambiguation: For the research subject that includes the entire edifice of how and why to produce good AIs, see [https://arbital.com/p/2v](https://arbital.com/p/2v).\n\nsummary: The 'value alignment problem' is to produce [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) that *want* to do [beneficial](https://arbital.com/p/3d9) things and not do harmful things. The largest-looming subproblem is ['value identification' or 'value learning'](https://arbital.com/p/6c) (sometimes considered synonymous with value alignment) but this also includes subproblems like [https://arbital.com/p/45](https://arbital.com/p/45), that is, AI values such that it doesn't *want* to interfere with you correcting what you see as an error in its code.\n\nThe 'value alignment problem' is to produce [sufficiently advanced machine intelligences](https://arbital.com/p/7g1) that *want* to do [beneficial](https://arbital.com/p/3d9) things and not do harmful things. 
The largest-looming subproblem is ['value identification' or 'value learning'](https://arbital.com/p/6c) (sometimes considered synonymous with value alignment) but this also includes subproblems like [https://arbital.com/p/45](https://arbital.com/p/45), that is, AI values such that it doesn't *want* to interfere with you correcting what you see as an error in its code.", "date_published": "2017-02-01T23:46:41Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "5s"} {"id": "ab6be7fd59539ecc3d49c95b09ff1bdb", "title": "Object-level vs. indirect goals", "url": "https://arbital.com/p/object_level_goal", "source": "arbital", "source_type": "text", "text": "An 'object-level goal' is a goal that involves no indirection, doesn't require any further computation or observation to be fully specified, and is evaluated directly on events or things in the agent's model of the universe. Contrast to meta-level or indirect goals.\n\nSome examples of object-level goals might be these:\n\n- Eat apples.\n- Win a chess game.\n- Cause a pawn to advance to the eighth row, in order to win a chess game.\n- [Create paperclips.](https://arbital.com/p/10h)\n- [Maximize the amount of human 'happiness' for some fixed definition of happiness.](https://arbital.com/p/10d)\n- [The utility of a possible history of the universe is the amount of interval spent by carbon atoms covalently bound to four other carbon atoms.](https://arbital.com/p/5g)\n\nHere are some example cases of a goal or [preference framework](https://arbital.com/p/5f) with properties that make them *not* object level:\n\n- \"Observe Alice, model what she wants, then do that.\"\n - This framework is *indirect* because it doesn't directly say what events or things in the universe are good or bad, but rather gives a recipe for deciding which events are good or bad (namely, model Alice).\n - This framework is *not fully locally specified*, because we have to observe Alice, and maybe compute further based on our observations of her, before we find out the actual evaluator we run to weigh events as good or bad.\n- \"Induce a compact category covering the proximal causes of sensory data labeled as positive instances and not covering sensory data labeled as negative instances. The utility of an outcome is the number of events classified as positive by the induced category.\"\n - This framework is not fully specified because it is based on a supervised dataset, and the actual goals will vary with the dataset obtained. We don't know what the agent wants when we're told about the induction algorithm; we also have to be told about the dataset, and we haven't been told about that yet.\n- \"Compute an average of what all superintelligences will want, relative to some prior distribution over the origins of superintelligences, and then do that.\"\n - This framework might be computable locally without any more observations, but it still involves a level of indirection, and is not locally complete in the sense that it would take a whole lot more computation before the agent knew what it ought to think of eating an apple.\n\nThe object-level vs. meta-level distinction should not be confused with the [terminal vs. 
instrumental](https://arbital.com/p/1bh) distinction.", "date_published": "2015-12-23T02:33:03Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition"], "alias": "5t"} {"id": "579104d81c2e1fb40cb2fdcbaf1afb1a", "title": "Opinion page", "url": "https://arbital.com/p/opinion_meta_tag", "source": "arbital", "source_type": "text", "text": "Opinion pages represent one view or opinion on a topic, and are not necessarily balanced or a reflection of consensus.", "date_published": "2016-08-27T20:32:15Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Stub"], "alias": "60p"} {"id": "05e8c1151c94d822802732206cdf2e39", "title": "Ontology identification problem: Technical tutorial", "url": "https://arbital.com/p/ontology_identification_technical_tutorial", "source": "arbital", "source_type": "text", "text": "The problem of [ontology identification](https://arbital.com/p/5c) is the problem of loading a goal into an [advanced agent](https://arbital.com/p/2c) when that agent's representation of the world is likely to change in ways [unforeseen in the development phase](https://arbital.com/p/5d). This tutorial focuses primarily on explaining what the problem is and why it is a [foreseeable difficulty](https://arbital.com/p/6r); for the corresponding research problems, see [the main page on Ontology Identification](https://arbital.com/p/5c).\n\nThis is a technical tutorial, meaning that it assumes some familiarity with [value alignment theory](https://arbital.com/p/2v), the [value identification problem](https://arbital.com/p/6c), and [safety thinking for advanced agents](https://arbital.com/p/2l).\n\nTo isolate ontology identification from other parts of the value identification problem, we consider a simplified but still very difficult problem: to state an unbounded program implementing a [diamond maximizer](https://arbital.com/p/5g) that will turn as much of the physical universe into diamond as possible. The goal of \"making diamonds\" was chosen to have a crisp-seeming definition for our universe: namely, the amount of diamond is the number of carbon atoms covalently bound to four other carbon atoms. Since it seems that in this case our [intended goal](https://arbital.com/p/6h) should be crisply definable relative to our universe's physics, we can avert many other issues of trying to identify [complex values](https://arbital.com/p/5l) to the agent. Ontology identification is a difficulty that still remains even in this case - the agent's representation of 'carbon atoms' may still change over time.\n\n## Introduction: Two sources of representational unpredictability\n\nSuppose we wanted to write a hand-coded, [object-level](https://arbital.com/p/5t) utility function that evaluated the amount of diamond material present in the AI's model of the world. We might foresee the following two difficulties:\n\n1. Where exactly do I find 'carbon atoms' inside the AI's model of the world? As the programmer, all I see are these mysterious ones and zeroes, and the only parts that directly correspond to events I understand is the represention of the pixels in the AI's webcam... maybe I can figure out where the 'carbon' concept is by showing the AI graphite, buckytubes, and a diamond on its webcam and seeing what parts get activated... whoops, looks like the AI just revised its internal representation to be more computationally efficient, now I once again have no idea what 'carbon' looks like in there. 
How can I make my hand-coded utility function re-bind itself to 'carbon' each time the AI revises its model's representation of the world?\n\n2. What exactly is 'diamond'? If you say it's a nucleus with six protons, what's a proton? If you define a proton as being made of quarks, what if there are unknown other particles underlying quarks? What if the Standard Model of physics is incomplete or wrong - can we state exactly and formally what constitutes a carbon atom when we aren't certain what the underlying quarks are made of?\n\nDifficulty 2 probably seems more exotic than the first, but Difficulty 2 is easier to explain in a formal sense and turns out to be a simpler way to illustrate many of the key issues that also appear in Difficulty 1. We can see Difficulty 2 as the problem of binding an [intended goal](https://arbital.com/p/6h) to an unknown territory, and Difficulty 1 as the problem of binding an intended goal to an unknown map. So the first step of the tutorial will be to walk through how Difficulty 2 (what exactly is a diamond?) might result in weird behavior in an [unbounded agent](https://arbital.com/p/107) intended to be a diamond maximizer.\n\n## Try 1: Hacking AIXI to maximize diamonds?\n\nThe classic unbounded agent - an agent using far more computing power than the size of its environment - is [AIXI](https://arbital.com/p/11v). Roughly speaking, AIXI considers all computable hypotheses for how its environment might work - all possible Turing machines that would turn AIXI's outputs into AIXI's future inputs. (The finite variant AIXI-tl has a hypothesis space that includes all Turing machines that can be specified using fewer than $l$ bits and run in less than time $t$.)\n\nFrom the perspective of AIXI, any Turing machine that takes one input tape and produces two output tapes is a \"hypothesis about the environment\", where the input to the Turing machine encodes AIXI's hypothetical action, and the outputs are interpreted as a prediction about AIXI's sensory data and AIXI's reward signal. (In Marcus Hutter's formalism, the agent's reward is a separate sensory input to the agent, so hypotheses about the environment also make predictions about sensed rewards). AIXI then behaves as a [Bayesian predictor](https://arbital.com/p/) that uses [algorithmic complexity](https://arbital.com/p/5v) to give higher [prior probabilities](https://arbital.com/p/) to simpler hypotheses (that is, Turing machines with fewer states and smaller state transition diagrams), and updates its mix of hypotheses based on sensory evidence (which can confirm or disconfirm the predictions of particular Turing machines).\n\nAs a decision agent, AIXI always outputs the motor action that leads to the highest predicted reward, assuming that the environment is described by the updated probability mixture of all Turing machines that could represent the environment (and assuming that future iterations of AIXI update and choose similarly).\n\nThe ontology identification problem shows up sharply when we imagine trying to modify AIXI to \"maximize expectations of diamonds in the outside environment\" rather than \"maximize expectations of sensory reward signals\". As a [Cartesian agent](https://arbital.com/p/), AIXI has sharply defined sensory inputs and motor outputs, so we can have a [probability mixture](https://arbital.com/p/) over all Turing machines that relate motor outputs to sense inputs (as crisply represented in the input and output tapes). 
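To make the shape of this decision rule concrete, here is a deliberately tiny Python sketch of an AIXI-*like* loop. It is only an illustration: a small, hand-enumerable set of predictor functions stands in for 'all Turing machines', a $2^{-\text{length}}$ prior stands in for the algorithmic-complexity prior, and the planning horizon is truncated to a single step. All of the names below are invented for this sketch rather than taken from Hutter's formalism.

```python
# Illustrative stand-in for AIXI's decision rule (not Hutter's actual construction).
# Each "hypothesis" is a small Python function standing in for a Turing machine:
# it maps (history, action) to a predicted (observation, reward) pair.

def simplicity_prior(description_length_bits):
    # Shorter hypothesis programs get exponentially more prior weight,
    # mirroring the algorithmic-complexity prior described above.
    return 2.0 ** (-description_length_bits)

class Hypothesis:
    def __init__(self, description_length_bits, predict):
        self.weight = simplicity_prior(description_length_bits)  # unnormalized credence
        self.predict = predict  # predict(history, action) -> (observation, reward)

def bayes_update(hypotheses, history, action, actual_observation):
    # Zero out hypotheses whose predicted observation was falsified
    # (a deterministic-prediction simplification of the Bayesian update).
    for h in hypotheses:
        predicted_observation, _ = h.predict(history, action)
        if predicted_observation != actual_observation:
            h.weight = 0.0

def choose_action(hypotheses, history, actions):
    # Output the action with the highest mixture-weighted predicted reward.
    # (Real AIXI runs an expectimax over a long horizon; this is one step.)
    total_weight = sum(h.weight for h in hypotheses) or 1.0
    def expected_reward(action):
        return sum(h.weight * h.predict(history, action)[1] for h in hypotheses) / total_weight
    return max(actions, key=expected_reward)
```

Note that the quantity being maximized lives entirely inside the hypotheses' *predicted percepts*; nothing in this loop ever reaches into a hypothesis's internal state to ask about diamonds.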
But even if some otherwise arbitrary Turing machine happens to predict sensory experiences extremely well, how do we look at the state and working tape of that Turing machine to evaluate 'the amount of diamond' or 'the estimated number of carbon atoms bound to four other carbon atoms'? The highest-weighted Turing machines that have best predicted the sensory data so far presumably contain *some* sort of representation of the environment, but we have no idea how to get 'the number of diamonds' out of it.\n\n(Example: Maybe one Turing machine that is producing good sequence predictions inside AIXI actually does so by simulating a large universe, identifying a superintelligent civilization that evolves inside that universe, and motivating that civilization to try to intelligently predict future bits from past bits (as provided by some intervention). To write a formal utility function that could extract the 'amount of real diamond in the environment' from arbitrary predictors in the above case, we'd need the function to read the Turing machine, decode that universe, find the superintelligence, decode the superintelligence's thought processes, find the concept (if any) resembling 'diamond', and hope that the superintelligence had precalculated how much diamond was around in the outer universe being manipulated by AIXI.)\n\nThis is, in general, the reason why the AIXI family of architectures can only contain agents defined to maximize direct functions of their sensory input, and not agents that behave so as to optimize facts about their external environment. (We can't make AIXI maximize diamonds by making it want *pictures* of diamonds because then it will just, e.g., [build an environmental subagent that seizes control of AIXI's webcam and shows it pictures of diamonds](https://arbital.com/p/). If you ask AIXI to show itself sensory pictures of diamonds, you can get it to show its webcam lots of pictures of diamonds, but this is not the same thing as building an environmental diamond maximizer.)\n\n## Try 2: Unbounded agent using classical atomic hypotheses?\n\nGiven the origins of the above difficulty, we next imagine constraining the agent's hypothesis space to something other than \"literally all computable functions from motor outputs to sense inputs\", so that we can figure out how to find diamonds or carbon inside the agent's representation of the world.\n\nAs an [unrealistic example](https://arbital.com/p/): Suppose someone was trying to define 'diamonds' to the AI's utility function. Suppose they knew about atomic physics but not nuclear physics. Suppose they build an AI which, during its development phase, learns about atomic physics from the programmers, and thus builds a world-model that is based on atomic physics.\n\nAgain for purposes of [unrealistic examples](https://arbital.com/p/), suppose that the AI's world-model is encoded in such a fashion that when the AI imagines a molecular structure - represents a mental image of some molecules - then carbon atoms are represented as a particular kind of basic element of the representation. Again, as an [unrealistic example](https://arbital.com/p/), imagine that there are [little LISP tokens](https://arbital.com/p/) representing environmental objects, and that the environmental-object-type of carbon-objects is encoded by the integer 6. Imagine also that each atom, inside this representation, is followed by a list of the other atoms to which it's covalently bound.
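In this unrealistic toy encoding, the hand-coded utility function might look something like the following Python sketch. The `Atom` structure, the field names, and the constant `CARBON = 6` are invented for illustration; nothing here is a real proposal for a knowledge representation.

```python
# Hypothetical fixed world-model format for the unrealistic example: each atom
# is a typed token plus a list of the other atoms it is covalently bound to.

CARBON = 6  # the environmental-object-type integer that encodes carbon

class Atom:
    def __init__(self, element_type):
        self.element_type = element_type
        self.bonds = []  # other Atom objects this atom is covalently bound to

def utility_diamondness(world_model_atoms):
    # Count type-6 objects that are bound to exactly four other type-6 objects.
    return sum(
        1
        for atom in world_model_atoms
        if atom.element_type == CARBON
        and len(atom.bonds) == 4
        and all(neighbor.element_type == CARBON for neighbor in atom.bonds)
    )
```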
Then when the AI is imagining a carbon atom participating in a diamond, inside the representation we would see an object of type 6, followed by a list containing exactly four other 6-objects.\n\nCan we fix this representation for all hypotheses, and then write a utility function for the AI that counts the number of type-6 objects that are bound to exactly four other type-6 objects? And if we did so, would the result actually be a diamond maximizer?\n\n### AIXI-atomic\n\nAs a first approach to implementing this idea - an agent whose hypothesis space is constrained to models that directly represent all the carbon atoms - imagine a variant of AIXI-tl that, rather than considering all tl-bounded Turing machines, considers all simulated atomic universes containing up to 10^100 particles spread out over up to 10^50 light-years. In other words, the agent's hypotheses are universe-sized simulations of classical, pre-nuclear models of physics; and these simulations are constrained to a common representation, so a fixed utility function can look at the representation and count carbon atoms bound to four other carbon atoms. Call this agent AIXI-atomic.\n\n(Note that AIXI-atomic, as an [unbounded agent](https://arbital.com/p/107), may use far more computing power than is embodied in its environment. For purposes of the thought experiment, assume that the universe contains exactly one hypercomputer that runs AIXI-atomic.)\n\nA first difficulty is that universes composed only of classical atoms are not good explanations of our own universe, even in terms of surface phenomena; e.g. the [ultraviolet catastrophe](http://en.wikipedia.org/wiki/Ultraviolet_catastrophe). So let it be supposed that we have simulation rules for classical physics that replicate at least whatever phenomena the programmers have observed at [development time](https://arbital.com/p/), even if the rules have some seemingly ad-hoc elements (like there being no ultraviolet catastrophes). We will *not*, however, suppose that the programmers have discovered all experimental phenomena we now see as pointing to nuclear or quantum physics.\n\nA second difficulty is that a simulated universe of classical atoms does not identify where in the universe the AIXI-atomic agent resides, or say how to match the types of AIXI-atomic's sense inputs with the underlying behaviors of atoms. We can elide this difficulty by imagining that AIXI-atomic simulates classical universes containing a single hypercomputer, and that AIXI-atomic knows a simple function from each simulated universe onto its own sensory data (e.g., it knows to look at the simulated universe, and translate simulated photons impinging on its webcam onto predicted webcam data in the standard format).
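Under those two assumptions, one hypothesis in AIXI-atomic's constrained space might be sketched like this (every name below, including the `sensory_bridge` and `utility_fn` parameters, is an invented placeholder; the sketch only makes the assumed moving parts visible):

```python
class AtomicUniverseHypothesis:
    """One hypothesis in the constrained space: a complete classical-atom
    simulation, plus an assumed fixed bridge from simulated physics to sense data."""

    def __init__(self, initial_atoms, physics_rules, sensory_bridge, utility_fn, length_bits):
        self.atoms = initial_atoms            # full classical-atom world state
        self.physics_rules = physics_rules    # deterministic classical dynamics
        self.sensory_bridge = sensory_bridge  # simulated photons -> predicted webcam data
        self.utility_fn = utility_fn          # e.g. the carbon-counting sketch above
        self.weight = 2.0 ** (-length_bits)   # simplicity prior over atomic universes

    def step(self, motor_action):
        # Advance the simulated universe, including the effects of the
        # hypercomputer's motor outputs on nearby atoms.
        self.atoms = self.physics_rules(self.atoms, motor_action)

    def predicted_webcam_data(self):
        # The assumed-known mapping from the simulated universe onto sense data.
        return self.sensory_bridge(self.atoms)

    def diamondness(self):
        # Because every hypothesis shares one representation, a single fixed
        # utility function can be evaluated directly on any of them.
        return self.utility_fn(self.atoms)
```

The `sensory_bridge` argument is doing quiet but important work: it is simply *given* to the agent rather than learned.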
This elides most of the problem of [naturalized induction](https://arbital.com/p/).\n\nSo the AIXI-atomic agent that is hoped to maximize diamond:\n\n- Considers only hypotheses that directly represent universes as huge systems of classical atoms, so that the function 'count atoms bound to four other carbon atoms' can be directly run over any possible future the agent models.\n- Assigns probabilistic priors over these possible atomic representations of the universe, favoring representations that are in some sense simpler.\n- Somehow [maps each atomic-level representation onto the agent's predicted sensory experiences](https://arbital.com/p/).\n- [Bayes-updates its priors](https://arbital.com/p/) based on actual sensory experiences, the same as classical AIXI.\n- Can evaluate the 'expected diamondness on the next turn' of a single action by looking at all hypothetical universes where that action is performed, weighted by their current probability, and summing over the expectation of 'carbon atoms bound to four other carbon atoms' after some unit amount of time has passed.\n- Can evaluate the 'future expected diamondness' of an action, over some finite time horizon, by assuming that its future self will also Bayes-update and maximize expected diamondness over that time horizon.\n- On each turn, outputs the action with greatest expected diamondness over some finite time horizon.\n\nSuppose our own real universe was amended to otherwise be exactly the same, but contain a single [impermeable](https://arbital.com/p/) hypercomputer. Suppose we defined an agent like the one above, using simulations of 1910-era models of physics, and ran that agent on the hypercomputer. Should we expect the result to be an actual diamond maximizer - expect that the outcome of running this program on a single hypercomputer would indeed be that most mass in our universe would be turned into carbon and arranged into diamonds?\n\n### Anticipated failure: AIXI-atomic tries to 'maximize outside the simulation'\n\nIn fact, our own universe isn't atomic, it's nuclear and quantum-mechanical. This means that AIXI-atomic does not contain any hypotheses in its hypothesis space that *directly represent* our universe. By the previously specified hypothesis of the thought experiment, AIXI-atomic's model of simulated physics was built to encompass all the experimental phenomena the programmers had yet discovered, but there were some quantum and nuclear phenomena that AIXI-atomic's programmers had not yet discovered. When those phenomena are discovered, there will be no simple explanation on the direct terms of the model.\n\nIntuitively, of course, we'd like AIXI-atomic to discover the composition of nuclei, shift its models to use nuclear physics, and refine the 'carbon atoms' mentioned in its utility function to mean 'atoms with nuclei containing six protons'.\n\nBut we didn't actually specify that when constructing the agent (and saying how to do it in general is, so far as we know, hard; in fact it's the whole ontology identification problem). We constrained the hypothesis space to contain only universes running on the classical physics that the programmers knew about. So what happens instead?\n\nProbably the 'simplest atomic hypothesis that fits the facts' will be an enormous atom-based computer, *simulating* nuclear physics and quantum physics in order to create a simulated non-classical universe whose outputs are ultimately hooked up to AIXI's webcam. 
From our perspective this hypothesis seems silly, but if you restrict the hypothesis space to only classical atomic universes, that's what ends up being the computationally simplest hypothesis that predicts, in detail, the results of nuclear and quantum experiments.\n\nAIXI-atomic will then try to choose actions so as to maximize the amount of expected diamond inside the probable *outside universes* that could contain the giant atom-based simulator of quantum physics. It is not obvious what sort of behavior this would imply.\n\n### Metaphor for difficulty: AIXI-atomic cares about only fundamental carbon\n\nOne metaphorical way of looking at the problem is that AIXI-atomic was implicitly defined to care only about diamonds made out of *ontologically fundamental* carbon atoms, not diamonds made out of quarks. A probability function that assigns 0 probability to all universes made of quarks, and a utility function that outputs a constant on all universes made of quarks, [yield functionally identical behavior](https://arbital.com/p/). So it is an exact metaphor to say that AIXI-atomic only *cares* about universes with ontologically basic carbon atoms, given that AIXI-atomic's hypothesis space only contains universes with ontologically basic carbon atoms.\n\nImagine that AIXI-atomic's hypothesis space does contain many other universes with other laws of physics, but its hand-coded utility function just returns 0 on those universes since it can't find any 'carbon atoms' inside the model. Since AIXI-atomic only cares about diamond made of fundamental carbon, when AIXI-atomic discovers the experimental data implying that almost all of its probability mass should reside in nuclear or quantum universes in which there were no fundamental carbon atoms, AIXI-atomic stops caring about the effect its actions have on the vast majority of probability mass inside its model. Instead AIXI-atomic tries to maximize inside the tiny remaining probabilities in which it *is* inside a universe with fundamental carbon atoms that is somehow reproducing its sensory experience of nuclei and quantum fields... for example, a classical atomic universe containing a computer simulating a quantum universe and showing the results to AIXI-atomic.\n\nFrom our perspective, we failed to solve the 'ontology identification problem' and get the real-world result we [intended](https://arbital.com/p/6h), because we tried to define the agent's *utility function* over properties of a universe made out of atoms, and the real universe turned out to be made of quantum fields. This caused the utility function to *fail to bind* to the agent's representation in the way we intuitively had in mind.\n\nToday we do know about quantum mechanics, so if we tried to build a diamond maximizer using some bounded version of the above formula, it might not fail on account of [the particular exact problem](https://arbital.com/p/48) of atomic physics being false.\n\nBut perhaps there are discoveries still remaining that would change our picture of the universe's ontology to imply something else underlying quarks or quantum fields. Human beings have only known about quantum fields for less than a century; our model of the ontological basics of our universe has been stable for less than a hundred years of our human experience. 
So we should seek an AI design that does not assume we know the exact, true, fundamental ontology of our universe during an AI's [development phase](https://arbital.com/p/5d).\n\nAs another important metaphorical case in point, consider a human being who feels angst on contemplating a universe in which \"By convention sweetness, by convention bitterness, by convention color, in reality only atoms and the void\" (Democritus); someone who wonders where there is any room in this collection of lifeless particles for love, free will, or even the existence of people. Since, after all, people are just *mere* collections of atoms. This person can be seen as undergoing an ontology identification problem: they don't know how to find the objects of value in a representation containing atoms instead of ontologically basic people.\n\nHuman beings simultaneously evolved a particular set of standard mental representations (e.g., a representation for colors in terms of a 3-dimensional subjective color space) along with evolving emotions that bind to these representations ([identification of flowering landscapes as beautiful](http://en.wikipedia.org/wiki/Evolutionary_aesthetics#Landscape_and_other_visual_arts_preferences). When someone visualizes any particular configuration of 'mere atoms', their built-in desires don't automatically fire and bind to that mental representation, the way they would bind to the brain's native representation of the environment. Generalizing that no set of atoms can be meaningful (since no abstract configuration of 'mere atoms' they imagine, seems to trigger any emotions to bind to it) and being told that reality is composed entirely of such atoms, they feel they've been told that the true state of reality, underlying appearances, is a meaningless one.\n\n## The utility rebinding problem\n\nIntuitively, we would think it was [common sense](https://arbital.com/p/) for an agent that wanted diamonds to react to the experimental data identifying nuclear physics, by deciding that a carbon atom is 'really' a nucleus containing six protons. We can imagine this agent [common-sensically](https://arbital.com/p/) updating its model of the universe to a nuclear model, and redefining the 'carbon atoms' that its old utility function counted to mean 'nuclei containing exactly six protons'. Then the new utility function could evaluate outcomes in the newly discovered nuclear-physics universe. The problem of producing this desirable agent behavior is the **utility rebinding problem**.\n\nTo see why this problem is nontrivial, consider that the most common form of carbon is C-12, with nuclei composed of six protons and six neutrons. The second most common form of carbon is C-14, with nuclei composed of six protons and eight neutrons. Is C-14 *truly* carbon - is it the sort of carbon that can participate in valuable diamonds of high utility? Well, that depends on your utility function, obviously; and from a human perspective it just sounds arbitrary.\n\nBut consider a closely analogous question from a humanly important perspective: Is a chimpanzee truly a person? 
Where the question means not, \"How do we arbitrarily define the syllables per-son?\" but \"Should we care a lot about chimpanzees?\", i.e., how do we define the part of our preferences that cares about people, to the possibly-person edge cases of chimpanzees?\n\nIf you live in a world where chimpanzees haven't been discovered, you may have an easy time running your utility function over your model of the environment, since the objects of your experience classify sharply into the 'person' and 'nonperson' categories. Then you discover chimpanzees, and they're neither typical people (John Smith) nor typical nonpeople (like rocks).\n\nWe can see the force of this question as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them... sort of like the question of how to treat carbon atoms that have the usual number of protons but not the usual number of neutrons.\n\nChimpanzees definitely have neural areas of various sizes, and particular cognitive abilities - we can suppose the empirical truth is unambiguous at this level, and known to us. So the question is then whether we regard a particular configuration of neural parts (a frontal cortex of a certain size) and particular cognitive abilities (consequentialist means-end reasoning and empathy, but no recursive language) as something that our 'person' category values... once we've rewritten the person category to value configurations of cognitive parts, rather than whole atomic people.\n\nIn fact, we run into this question as soon as we learn that human beings run on brains and the brains are made out of neural regions with functional properties; we can then *imagine* chimpanzees even if we haven't met any, and ask to what degree our preferences should treat this edge-person as deserving of moral rights. If we can 'rebind' our emotions and preferences to live in a world of nuclear brains rather than atomic people, this rebinding will *implicitly* say whether or not a chimpanzee is a person, depending on how our preference over brain configurations treats the configuration that is a chimpanzee.\n\nIn this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds. Once a diamond maximizer knows about neutrons, it can see that C-14 is chemically like carbon and forms the same kind of chemical bonds, but that it's heavier because it has two extra neutrons. We can see that chimpanzees have a similar brain architecture to the sort of people we always considered before, but that they have smaller frontal cortexes and no ability to use recursive language, etcetera.\n\nWithout knowing more about the diamond maximizer, we can't guess what sort of considerations it might bring to bear in deciding what is Truly Carbon and Really A Diamond.
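As a cartoon of why rebinding is not a purely mechanical operation, consider a Python sketch in which the old utility function was written against an ontology of opaque element tokens and the world-model then shifts to an ontology of nuclei. Every name here, including the `isotope_policy` parameter, is invented for illustration; the point is only that *some* such parameter has to be chosen by somebody:

```python
# Old ontology: atoms carry an opaque element token.
def utility_v1(world):
    # Counts 'carbon' atoms bound to four other 'carbon' atoms.
    return sum(
        1 for atom in world.atoms
        if atom.element == "carbon"
        and len(atom.bonds) == 4
        and all(neighbor.element == "carbon" for neighbor in atom.bonds)
    )

# New ontology: there are no element tokens, only nuclei with proton and
# neutron counts.  utility_v1 no longer binds to anything in the new model.

def counts_as_carbon(nucleus, isotope_policy):
    """Decide whether a nucleus is 'carbon' for purposes of the old utility.
    The proton test is forced by chemistry; what to do about neutrons is a
    further choice that the physics does not make for us."""
    if nucleus.protons != 6:
        return False
    if isotope_policy == "any_isotope":
        return True                    # C-12, C-13, and C-14 all count
    if isotope_policy == "c12_only":
        return nucleus.neutrons == 6   # only the most common isotope counts
    raise ValueError("the rebinding itself has to take a stand on edge cases")
```

Choosing between something like `any_isotope` and `c12_only` is the C-14 question in miniature, and the chimpanzee case is the same kind of decision with far more at stake.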
But the breadth of considerations human beings need to invoke in deciding how much to care about chimpanzees is one way of illustrating that the problem of rebinding a utility function to a shifted ontology is [https://arbital.com/p/value-laden](https://arbital.com/p/value-laden) and can potentially undergo [https://arbital.com/p/excursions](https://arbital.com/p/excursions) into [complex desiderata](https://arbital.com/p/5l). Redefining a [moral category](https://arbital.com/p/) so that it talks about the underlying parts of what were previously seen as all-or-nothing atomic objects may carry an implicit ruling about how to value many kinds of [https://arbital.com/p/edge-case](https://arbital.com/p/edge-case) objects that were never seen before.\n\nIt's possible that some formal part of this problem could be usefully carved out from the complex value-laden edge-case-reclassification part. E.g., how would you redefine carbon as C12 if there were no other isotopes? How would you rebind the utility function to *at least* C12? In general, how could edge cases be [identified and queried](https://arbital.com/p/) by an [online Genie](https://arbital.com/p/6w)?\n\n### Reappearance on the reflective level\n\nAn obvious thought (especially for [online Genies](https://arbital.com/p/6w)) is that if the AI is unsure about how to reinterpret its goals in light of a shifting mental representation, it should query the programmers.\n\nSince the definition of a programmer would then itself be baked into the [preference framework](https://arbital.com/p/5f), the problem might [reproduce itself on the reflective level](https://arbital.com/p/) if the AI became unsure of where to find 'programmers': \"My preference framework said that programmers were made of carbon atoms, but all I can find in this universe are quantum fields!\"\n\nThus the ontology identification problem is arguably one of the [critical subproblems](https://arbital.com/p/) of value alignment: it plausibly has the property that, if botched, it could potentially [crash the error recovery mechanism](https://arbital.com/p/).\n\n## Diamond identification in multi-level maps\n\nA realistic, [bounded diamond maximizer](https://arbital.com/p/5g) wouldn't represent the outside universe with atomically detailed or quantum-detailed models. Instead, a bounded agent would have some version of a [multi-level map](https://arbital.com/p/) of the world in which the agent knew in principle that things were composed of atoms, but didn't model most things in atomic detail. A bounded agent's model of an airplane would have wings, or wing shapes, rather than atomically detailed wings. It would think about wings when doing aerodynamic engineering, atoms when doing chemistry, nuclear physics when doing nuclear engineering, and definitely not try to model everything in its experience down to the level of quantum fields.\n\nAt present, there are not yet any proposed formalisms for how to do probability theory with multi-level maps (in other words: [nobody has yet put forward a guess at how to solve the problem even given infinite computing power](https://arbital.com/p/)).
But it seems very likely that, if we did know what multi-level maps looked like formally, it might suggest a formal solution to non-value-laden utility-rebinding.\n\nE.g., if an agent already has a separate high-level concept of 'diamond' that's bound to a lower-level concept of 'carbon atoms bound to four other carbon atoms', then maybe when you discover nuclear physics, the multi-level map itself would tend to suggest that 'carbon atoms' be re-bound to 'nuclei with six protons' or 'nuclei with six protons and six neutrons'. It might at least be possible to phrase the equivalent of a prior or mixture of weightings for how the utility function would re-bind itself, and say, \"Given this prior, care about whatever that sparkly hard stuff 'diamond' ends up binding to on the lower level.\"\n\nUnfortunately, we have very little formal probability theory to describe how a multi-level map would go from 'that unknown sparkly hard stuff' to 'carbon atoms bound to four other carbon atoms in tetrahedral patterns, which is the only known repeating pattern for carbon atoms bound to four other carbon atoms' to 'C12 and C14 are chemically identical but C14 is heavier'. This being the case, we don't know how to say anything about a dynamically updating multi-level map inside a [preference framework](https://arbital.com/p/5f).\n\nIf we were actually trying to build a diamond maximizer, we would be likely to encounter this problem long before it started formulating new physics. The equivalent of a computational discovery that changes 'the most efficient way to represent diamonds' is likely to happen much earlier than a physical discovery that changes 'what underlying physical systems probably constitute a diamond'.\n\nThis also means that we are liable to face the ontology identification problem long before the agent starts discovering new physics, as soon as it starts revising its representation. Only very unreflective agents with strongly fixed-in-place representations for every part of the environment that we think the agent is supposed to care about, would let the ontology identification problem be elided entirely. Only *very* not-self-modifying agents, or [Cartesian agents](https://arbital.com/p/) with goals formulated only over sense data, would not confront their programmers with ontology identification problems.\n\n## Research paths\n\nMore of these are described in the [main article on ontology identification](https://arbital.com/p/5c). But here's a quick list of some relevant research subproblems and avenues:\n\n* Transparent priors. Priors that are constrained to meaningful hypothesis spaces that the utility function knows how to interpret. Rather than all Turing machines being hypotheses, we could have only causal models being hypotheses, and then preference frameworks that talked about 'the cause of' labeled sensory data could read the hypotheses. (Note that the space of causal models can be Turing-complete, in the sense of being able to embed any Turing machine as a causal system. So we'd be able to explain any computable sense data in terms of a causal model - we wouldn't sacrifice any explanatory power by restricting ourselves to 'causal models' instead of 'all Turing machines'.)\n\n* Reductionist identifications. 
Being able to go hunting, inside the current model of an environment, for a thingy that looks like it's made out of type-1 thingies bound to four other type-1 thingies, where a type-1 thingy is itself made out of six type-2, six type-3, and six type-4 thingies (6 electrons, 6 protons, 6 neutrons).\n\n* Causal identifications. Some variation on trying to identify diamonds as the causes of pictures of diamonds, for some data set of things labeled as diamonds or non-diamonds. This doesn't work immediately because then it's not clear whether \"the cause\" of the picture is the photons reflecting off the diamond, the diamond itself, the geological pressures that produced the diamond, the laws of physics, etcetera. But perhaps some crossfire of identification could pin down the 'diamond' category inside a causal model, by applying some formal rule to several sets of the right sort of labeled sense data. As an open problem: If an agent has a rich causal model that includes categories like 'diamond' somewhere unknown, and you can point to labeled sensory datasets and use causal and categorical language, what labeled datasets and language would unambiguously identify diamonds, and no other white sparkly things, even if the resulting concept of 'diamond' was being [subject to maximization](https://arbital.com/p/2w)? (Note that under this approach, as with any preference framework that talks about the causes of sensory experiences, we need to worry about [Christiano's Hack](https://arbital.com/p/5j).)\n\n* Ambiguity resolution. Detect when an ontology identification is ambiguous, and refer the problem to the user/programmer. At our present stage of knowledge this seems like pretty much the same problem as [inductive ambiguity resolution](https://arbital.com/p/).\n\n* Multi-level maps. Solve the problem of bounded agents having maps of the world that operate at multiple, interacting reductionist levels, as designed to save on computing power. Then solve ontology identification by initially binding to a higher level of the map, and introducing some rule for re-binding as the map updates. Note that multi-level mapping is an [AGI rather than FAI problem](https://arbital.com/p/), meaning that work here [should perhaps be classified](https://arbital.com/p/).\n\n* Solution for [non-self-modifying Genies](https://arbital.com/p/6w). Try to state a 'hack' solution to ontology identification that would work for an AI running on fixed algorithms where a persistent knowledge representation is known at development time.\n\n## Some implications\n\nThe ontology identification problem is one more reason to believe that [hard-coded object-level utility functions should be avoided](https://arbital.com/p/) and that [value identification in general is hard](https://arbital.com/p/).\n\nOntology identification is heavily entangled with AGI problems, meaning that some research on ontology identification [may need to be non-public](https://arbital.com/p/).
This is an example instance of the argument that [at least some VAT research may need to be non-public](https://arbital.com/p/), based on that [at least some AGI research is better off non-public](https://arbital.com/p/).", "date_published": "2016-02-05T00:51:21Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "6b"} {"id": "399edb268e5372c77b425dfc595cc6bb", "title": "Value identification problem", "url": "https://arbital.com/p/value_identification", "source": "arbital", "source_type": "text", "text": "The subproblem category of [value alignment](https://arbital.com/p/5s) which deals with pinpointing [valuable](https://arbital.com/p/55) outcomes to an [advanced agent](https://arbital.com/p/2c) and distinguishing them from non-valuable outcomes. E.g., the [Edge Instantiation](https://arbital.com/p/2w) and [Ontology Identification](https://arbital.com/p/5c) problems are argued to be [foreseeable difficulties](https://arbital.com/p/6r) of value identification. A central foreseen difficulty of value identification is [Complexity of Value](https://arbital.com/p/5l).", "date_published": "2015-12-15T05:23:49Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "6c"} {"id": "ab4b8b46b62b212f01daba13d87efb09", "title": "Goodhart's Curse", "url": "https://arbital.com/p/goodharts_curse", "source": "arbital", "source_type": "text", "text": "summary(Gloss): Goodhart's Curse is a neologism for the combination of the Optimizer's Curse and Goodhart's Law. It states that a powerful agent neutrally optimizing a proxy measure U, meant to align with true values V, will implicitly tend to find upward divergences of U from V.\n\nIn other words: powerfully optimizing for a utility function is strongly liable to blow up anything we'd regard as an error in defining that utility function.\n\nGoodhart's Curse is a neologism for the combination of the Optimizer's Curse and Goodhart's Law, particularly as applied to [the value alignment problem for Artificial Intelligences](https://arbital.com/p/5s).\n\nGoodhart's Curse in this form says that a powerful agent neutrally optimizing a proxy measure U that we hoped to align with true values V, will implicitly seek out upward divergences of U from V.\n\nIn other words: powerfully optimizing for a utility function is strongly liable to blow up anything we'd regard as an error in defining that utility function.\n\n# Winner's Curse, Optimizer's Curse, and Goodhart's Law\n\n## Winner's Curse\n\nThe **[Winner's Curse](https://en.wikipedia.org/wiki/Winner%27s_curse)** in auction theory says that if multiple bidders all bid their [unbiased estimate](https://arbital.com/p/unbiased_estimator) of an item's value, the winner is likely to be someone whose estimate contained an upward error.\n\nThat is: If we have lots of bidders on an item, and each bidder is individually unbiased *on average,* selecting the winner selects somebody who probably made a mistake this *particular* time and overbid. They are likely to experience post-auction regret systematically, not just occasionally and accidentally.\n\nFor example, let's say that the true value of an item is \\$10 to all bidders. Each bidder bids the true value, \\$10, plus some [Gaussian noise](https://arbital.com/p/gaussian_noise). 
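A small Monte Carlo run makes the effect easy to see. This is a sketch with assumed parameters (ten bidders, a \$2 noise standard deviation) chosen to match the example, not part of the original text:

```python
import random

# Winner's Curse illustration: every bid is unbiased around the true value of
# $10, but the *winning* (maximum) bid is systematically too high.

TRUE_VALUE = 10.0
NOISE_SD = 2.0       # assumed noise scale
NUM_BIDDERS = 10     # assumed number of bidders
TRIALS = 100_000

random.seed(0)
average_winning_bid = sum(
    max(random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(NUM_BIDDERS))
    for _ in range(TRIALS)
) / TRIALS

print(average_winning_bid)  # roughly 13: about $3 above what the item is worth
```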
Each individual bidder is as likely to overbid \\$2 as to underbid \\$2, so each individual bidder's average expected bid is \\$10; individually, their bid is an unbiased estimator of the true value. But the *winning* bidder is probably somebody who overbid \\$2, not somebody who underbid \\$2. So if we know that Alice won the auction, our [revised](https://arbital.com/p/1rp) guess should be that Alice made an upward error in her bid.\n\n## Optimizer's Curse\n\nThe [Optimizer's Curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf) in decision analysis generalizes this observation to an agent that estimates the expected utility of actions, and executes the action with the highest expected utility. Even if each utility estimate is locally unbiased, the action with seemingly highest utility is more likely, in our posterior estimate, to have an upward error in its expected utility.\n\nWorse, the Optimizer's Curse means that actions with *high-variance estimates* are selected for. Suppose we're considering 5 possible actions which in fact have utility \\$10 each, and our estimates of those 5 utilities are Gaussian-noisy with a standard deviation of \\$2. Another 5 possible actions in fact have utility of -\\$20, and our estimate of each of these 5 actions is influenced by unbiased Gaussian noise with a standard deviation of \\$100. We are likely to pick one of the bad five actions whose enormously uncertain value estimates happened to produce a huge upward error.\n\nThe Optimizer's Curse grows worse as a larger policy space is implicitly searched; the more options we consider, the higher the average error in whatever policy is selected. To effectively reason about a large policy space, we need to either have a good prior over policy goodness and to know the variance in our estimators; or we need very precise estimates; or we need mostly correlated and little uncorrelated noise; or we need the highest real points in the policy space to have an advantage bigger than the uncertainty in our estimates.\n\nThe Optimizer's Curse is not exactly similar to the Winner's Curse because the Optimizer's Curse potentially applies to *implicit* selection over large search spaces. Perhaps we're searching by gradient ascent rather than explicitly considering each element of an exponentially vast space of possible policies. We are still implicitly selecting over some effective search space, and this method will still seek out upward errors. If we're imperfectly estimating the value function to get the gradient, then gradient ascent is implicitly following and amplifying any upward errors in the estimator.\n\nThe proposers of the Optimizer's Curse also described a Bayesian remedy in which we have a prior on the expected utilities and variances and we are more skeptical of very high estimates. This however assumes that the prior itself is perfect, as are our estimates of variance. If the prior or variance-estimates contain large flaws somewhere, a search over a very wide space of possibilities would be expected to seek out and blow up any flaws in the prior or the estimates of variance.\n\n## Goodhart's Law\n\n[Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law) is named after the economist Charles Goodhart. 
A standard formulation is \"When a measure becomes a target, it ceases to be a good measure.\" Goodhart's original formulation is \"Any observed statistical regularity will tend to collapse when pressure is placed upon it for control purposes.\"\n\nFor example, suppose we require banks to have '3% capital reserves' as defined some particular way. 'Capital reserves' measured that particular exact way will rapidly become a much less good indicator of the stability of a bank, as accountants fiddle with balance sheets to make them legally correspond to the highest possible level of 'capital reserves'.\n\nDecades earlier, IBM once paid its programmers per line of code produced. If you pay people per line of code produced, the \"total lines of code produced\" will have even less correlation with real productivity than it had previously.\n\n# Goodhart's Curse in alignment theory\n\n**Goodhart's Curse** is a neologism (by Yudkowsky) for the crossover of the Optimizer's Curse with Goodhart's Law, yielding that **neutrally optimizing a proxy measure U of V seeks out upward divergence of U from V.**\n\nSuppose the humans have true values V. We try to convey these values to a [powerful AI](https://arbital.com/p/2c), via some value learning methodology that ends up giving the AI a utility function U.\n\nEven if U is locally an unbiased estimator of V, optimizing U will seek out what *we* would regard as 'errors in the definition', places where U diverges upward from V. Optimizing for a high U may implicitly seek out regions where U - V is high; that is, places where V is lower than U. This may especially include regions of the outcome space or policy space where the value learning system was subject to great variance; that is, places where the value learning worked poorly or ran into a snag.\n\nGoodhart's Curse would be expected to grow worse as the AI became more powerful. A more powerful AI would be implicitly searching a larger space and would have more opportunity to uncover what we'd regard as \"errors\"; it would be able to find smaller loopholes, blow up more minor flaws. There is a potential [context disaster](https://arbital.com/p/6q) if new divergences are uncovered as more of the possibility space is searched, etcetera.\n\nWe could see the genie as *implicitly* or *emergently* seeking out any possible loophole in the wish: *Not* because it is an evil genie that knows our 'truly intended' V and is looking for some place that V can be minimized while appearing to satisfy U; but just because the genie is neutrally seeking out very large values of U and these are places where it is unusually likely that U diverged upward from V.\n\nMany [foreseeable difficulties](https://arbital.com/p/6r) of AGI alignment interact with Goodhart's Curse. Goodhart's Curse is one of the central reasons we'd expect 'little tiny mistakes' to 'break' when we dump a ton of optimization pressure on them. Hence the [claim](https://arbital.com/p/1cv): \"AI alignment is hard like building a rocket is hard: enormous pressures will break things that don't break in less extreme engineering domains.\"\n\n## Goodhart's Curse and meta-utility functions\n\nAn obvious next question is \"Why not just define the AI such that the AI itself regards U as an estimate of V, causing the AI's U to more closely align with V as the AI gets a more accurate empirical picture of the world?\"\n\nReply: Of course this is the obvious thing that we'd *want* to do. 
But what if we make an error in exactly how we define \"treat U as an estimate of V\"? Goodhart's Curse will magnify and blow up any error in this definition as well.\n\nWe must distinguish:\n\n- V, the [true value function that is in our hearts](https://arbital.com/p/55).\n- T, the external target that we formally told the AI to align on, where we are *hoping* that T really means V.\n- U, the AI's current estimate of T or probability distribution over possible T.\n\nU will converge toward T as the AI becomes more advanced. The AI's epistemic improvements and learned experience will tend over time to eliminate a subclass of Goodhart's Curse where the current estimate of U-value has diverged upward *from T-value,* cases where the uncertain U-estimate was selected to be erroneously above the correct formal value T.\n\n*However,* Goodhart's Curse will still apply to any potential regions where T diverges upward from V, where the formal target diverges from the true value function that is in our hearts. We'd be placing immense pressure toward seeking out what we would retrospectively regard as human errors in defining the meta-rule for determining utilities. %note: That is, we'd retrospectively regard those as errors if we survived.%\n\n## Goodhart's Curse and 'moral uncertainty'\n\n\"Moral uncertainty\" is sometimes offered as a solution source in AI alignment; if the AI has a probability distribution over utility functions, it can be risk-averse about things that *might* be bad. Would this not be safer than having the AI be very sure about what it ought to do?\n\nTranslating this idea into the V-T-U story, we want to give the AI a formal external target T to which the AI does not currently have full access and knowledge. We are then hoping that the AI's uncertainty about T, the AI's estimate of the variance between T and U, will warn the AI away from regions where from our perspective U would be a high-variance estimate of V. In other words, we're hoping that estimated U-T uncertainty correlates well with, and is a good proxy for, actual U-V divergence.\n\nThe idea would be that T is something like a supervised learning procedure from labeled examples, and the places where the current U diverges from V are things we 'forgot to tell the AI'; so the AI should notice that in these cases it has little information about T.\n\nGoodhart's Curse would then seek out any flaws or loopholes in this hoped-for correlation between estimated U-T uncertainty and real U-V divergence. Searching a very wide space of options would be liable to select on:\n\n- Regions where the AI has made an epistemic error and poorly estimated the variance between U and T;\n- Regions where the formal target T is solidly estimable to the AI, but from our own perspective the divergence from T to V is high (that is, the U-T uncertainty fails to *perfectly* cover all T-V divergences).\n\nThe second case seems especially likely to occur in future phases where the AI is smarter and has more empirical information, and has *correctly* reduced its uncertainty about its formal target T. So moral uncertainty and risk aversion may not scale well to superintelligence as a means of warning the AI away from regions where we'd retrospectively judge that U/T and V had diverged.\n\nConcretely:\n\nYou tell the AI that human values are defined relative to human brains in some particular way T. While the AI is young and stupid, the AI knows that it is very uncertain about human brains, hence uncertain about T. 
Human behavior is produced by human brains, so the AI can regard human behavior as informative about T; the AI is sensitive to spoken human warnings that killing the housecat is bad.\n\nWhen the AI is more advanced, the AI scans a human brain using molecular nanotechnology and resolves all its moral uncertainty about T. As we defined T, the optimum T turns out to be \"feed humans heroin because that is what human brains maximally want\".\n\nNow the AI already knows everything our formal definition of T requires the AI to know about the human brain to get a very sharp estimate of U. So human behaviors like shouting \"stop!\" are no longer seen as informative about T and don't lead to updates in U.\n\nT, as defined, was always misaligned with V. But early on, the misalignment was in a region where the young AI estimated high variance between U and T, thus keeping the AI out of this low-V region. Later, the AI's empirical uncertainty about T was reduced, and this protective barrier of moral uncertainty and risk aversion was dispelled.\n\nUnless the AI's moral uncertainty is *perfectly* conservative and *never* underestimates the true regions of U-V divergence, there will be some cases where the AI thinks it is morally sure even though from our standpoint the U-V divergence is large. Then Goodhart's Curse would select on those cases.\n\nCould we use a very *conservative* estimate of utility-function uncertainty, or a formal target T that is very hard for even a superintelligence to become certain about?\n\nWe would first need to worry that if the utility-function uncertainty is unresolvable, that means the AI can't ever obtain empirically strong evidence about it. In this case the AI would not update its estimate of T from observing human behaviors, making the AI again insensitive to humans shouting \"Stop!\"\n\nAnother proposal would be to rely on risk aversion over *unresolvably* uncertain probabilities broad enough to contain something similar to the true V as a hypothesis, and hence engender sufficient aversion to low-true-V outcomes. Then we should worry on a pragmatic level that a *sufficiently* conservative amount of moral uncertainty--so conservative that U-T risk aversion *never underestimated* the appropriate degree of risk aversion from our V-standpoint--would end up preventing the AI from acting *ever.* Or that this degree of moral risk aversion would be such a pragmatic hindrance that the programmers might end up pragmatically bypassing all this inconvenient aversion in some set of safe-seeming cases. Then Goodhart's Curse would seek out any unforeseen flaws in the coded behavior of 'safe-seeming cases'.\n\n# Conditions for Goodhart's Curse\n\nThe exact conditions for Goodhart's Curse applying between V and a point estimate or probability distribution over U, have not yet been written out in a convincing way.\n\nFor example, suppose we have a multivariate normal distribution in which X and Y dimensions are positively correlated, only Y is observable, and we are selecting on Y in order to obtain more X. While X will revert to the mean compared to Y, it's not likely to be zero or negative; picking maximum Y is our best strategy for obtaining maximum X and will probably obtain a very high X. (Observation due to [https://arbital.com/p/111](https://arbital.com/p/111).)\n\nConsider also the case of the [smile maximizer](https://arbital.com/p/10d) which we trained to optimize smiles as a proxy for happiness. 
Tiny molecular smileyfaces are very low in happiness, an apparent manifestation of Goodhart's Curse. On the other hand, if we optimized for 'true happiness' among biological humans, this would produce more smiles than default. It might be only a tiny fraction of possible smiles, on the order of 1e-30, but it would be more smiles than would have existed otherwise. So the relation between V (maximized at 'true happiness', zero at tiny molecular smileyfaces) and U (maximized at tiny molecular smileyfaces, but also above average for true happiness) is not symmetric; and this is one hint to the unknown necessary and/or sufficient condition for Goodhart's Curse to apply.\n\nIn the case above, we might handwave something like, \"U had lots of local peaks one of which was V, but the U of V's peak wasn't anywhere near the highest U-peak, and the highest U-peak was low in V. V was more narrow and its more unique peak was noncoincidentally high in U.\"\n\n# Research avenues\n\n[Mild optimization](https://arbital.com/p/2r8) is a proposed avenue for direct attack on the central difficulty of Goodhart's Curse and all the other difficulties it exacerbates. Obviously, if our formulation of mild optimization is not *perfect,* Goodhart's Curse may well select for any place where our notion of 'mild optimization' turns out to have a loophole that allows a lot of optimization. But insofar as some version of mild optimization is working most of the time, it could avoid blowing up things that would otherwise blow up. See also [Tasks](https://arbital.com/p/4mn).\n\nSimilarly, [conservative strategies](https://arbital.com/p/2qp) can be seen as a more indirect attack on some forms of Goodhart's Curse--we try to stick to a conservative boundary drawn around previously whitelisted instances of the goal concept, or to using strategies similar to previously whitelisted strategies. This averts searching a much huger space of possibilities that would be more likely to contain errors somewhere. But Goodhart's Curse might single out what constitutes a 'conservative' boundary, if our definition is less than absolutely perfect.", "date_published": "2017-02-21T22:09:10Z", "authors": ["Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The **[Winner's Curse](https://en.wikipedia.org/wiki/Winner%27s_curse)** in auction theory says that if many individually fallible but unbiased bidders all compete in an auction, the winner has been selected to be unusually likely to have made an upward error in their bid.\n\nThe [Optimizer's Curse](https://faculty.fuqua.duke.edu/~jes9/bio/The_Optimizers_Curse.pdf) is that if we consider many possible courses of action, and pick the course of action that seems best, we are implicitly selecting for places where we're likely to have made an upward error in the estimate. Worse, this means we're selecting for places where our unbiased estimate has high variance.\n\n[Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law) says that whatever proxy measure an organization tries to control soon ceases to be a good proxy.
If you demand that banks have 3% 'capital reserves' defined a certain way, the bank will look for ways to get 'capital reserves' with a minimum of inconvenience, and this selects against 'capital reserves' that do what we wanted.\n\n*Goodhart's Curse* is a neologism for the combination of the Optimizer's Curse with Goodhart's Law, particularly as applied to [value alignment of Artificial Intelligences](https://arbital.com/p/5s).\n\nSuppose our true values are $V$; $V$ is the [true value function that is in our hearts](https://arbital.com/p/55). If by any system or meta-system we try to align the AI's utility $U$ with $V,$ then even if our alignment procedure makes $U$ a generally unbiased estimator of $V,$ *heavily optimizing* expected $U$ is unusually likely to seek out places where $U$ poorly aligns with $V.$\n\nSeeking out high values of $U$ implicitly seeks out high values of the divergence $U-V$ if any such divergence exists anywhere. Worse, this implicitly seeks out places where the variance $\\|U - V\\|$ is generally high--places where we made an error in defining our meta-rules for alignment, some seemingly tiny mistake, a loophole."], "tags": ["B-Class", "Goodness estimate biaser", "Methodology of foreseeable difficulties"], "alias": "6g4"} {"id": "87afd68ea6aaa62105af64c37ea59a19", "title": "Intended goal", "url": "https://arbital.com/p/intended_goal", "source": "arbital", "source_type": "text", "text": "Definition. An \"intended goal\" refers to the intuitive intention in the mind of a human programmer when they executed some formal directive or goal within the AI. For example, if the programmer wants to create worthwhile happiness and the AI ends up tiling the universe with tiny molecular smiley-faces, we would say that worthwhile happiness (in some intuitive, possibly pre-verbal sense existing in the programmer's mind) was the \"intended goal\", as distinct from the result of the formal utility function actually encoded in the AI (which proved to have a maximum at tiny molecular smiley-faces).", "date_published": "2015-12-15T23:46:42Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition"], "alias": "6h"} {"id": "487d9e9b56581f0a246f659eae50e881", "title": "Arbital Labs", "url": "https://arbital.com/p/arbital_labs", "source": "arbital", "source_type": "text", "text": "This is a temporary group created as an experiment in both software and community design. It's meant to act as a seed for multiple future Arbital communities. Currently membership is by invite only.", "date_published": "2016-12-16T20:36:12Z", "authors": ["Eric Rogstad", "Alexei Andreev"], "summaries": [], "tags": ["Start"], "alias": "6m2"} {"id": "4f14edeba83e3d9bacce947b9596a289", "title": "Context disaster", "url": "https://arbital.com/p/context_disaster", "source": "arbital", "source_type": "text", "text": "# Short introduction\n\nOne frequently suggested strategy for [aligning](https://arbital.com/p/2v) a [sufficiently advanced AI](https://arbital.com/p/7g1) is to observe--*before* the AI becomes powerful enough that 'debugging' the AI would be problematic if the AI decided not to [let us debug it](https://arbital.com/p/45)--whether the AI appears to be acting nicely while it's not yet smarter than the programmers. \n\nEarly testing obviously can't provide a *statistical* guarantee of the AI's future behavior. 
If you observe some random draws from Barrel A, at best you get statistical guarantees about future draws from Barrel A under the assumption that the past and future draws are collectively [independent and identically distributed](https://arbital.com/p/iid).\n\nOn the other hand, if Barrel A is *similar* to Barrel B, observing draws from Barrel A can sometimes tell us something about Barrel B even if the two barrels are not [i.i.d.](https://arbital.com/p/iid)\n\nConversely, if observed good behavior while the AI is not yet super-smart *fails* to correlate with good outcomes after the AI is unleashed or becomes smarter, then this is a \"**context change problem**\" or \"**context disaster**\". %note: Better terminology is still being solicited here, if you have a short phrase that would evoke exactly the right meaning.%\n\nA key question then is how shocked we ought to be, on a scale from 1 to 10, if good outcomes in the AI's 'development' phase fail to match up with good outcomes in the AI's 'optimize the real world' phase? %note: Leaving aside technical quibbles about how we can't feel shocked if we're dead.%\n\nPeople who expect that [AI alignment is difficult](https://arbital.com/p/alignment_difficulty) think that the degree of justified surprise is somewhere around 1 out of 10. In other words, that there are a *lot* of [foreseeable issues](https://arbital.com/p/6r) that could cause a seemingly nice weaker AI to not develop into a nice smarter AI.\n\nAn extremely oversimplified (but concrete) fable that illustrates some of these possible difficulties might go as follows:\n\n- Some group or project has acquired a viable development pathway to [AGI](https://arbital.com/p/42g). The programmers think it is wise to build an AI that will [make people happy](https://arbital.com/p/5r). %note: This is not quite a straw argument, in the sense that it's been advocated more than once by people who have apparently never read any science fiction in their lives; there are certainly many AI researchers who would be smarter than to try this, but not necessarily all of them. In any case, we're looking for an unrealistically simple scenario for purposes of illustrating simple forms of some key ideas; in real life, if analogous things go wrong, they would probably be more complicated things.%\n- The programmers start by trying to train their AI to produce smiles. %note: Again, this is not quite a straw possibility in the sense that it was advocated in at least one published paper, not cited here because the author later exercised their sovereign right of changing their mind about that. *Arguably* some currently floated proposals are closely analogous to this one.%\n- While the AI is young and has relatively few policy options, it can only make people smile by performing well on the tasks assigned to it.\n- At some point, the AI gets smarter and able to do more complicated things with some simple robots it controls. In an unfortunate incident, the AI learns it is possible to produce smiles by tickling people.\n- In the aftermath of this incident, the programmers manage to reprogram the AI to *not* optimize for this particular kind of smile by this particular route.\n- The AI becomes smarter and able to model people better. After perusing a copy of Wikipedia, the AI realizes it can make people extremely 'happy', as the AI's utility function currently defines 'happiness', by administering heroin.\n- The AI remembers the previous incident in which the programmers altered the AI's utility function. 
The AI can model its programmers sufficiently well to know that the programmers would *not* want the AI to give people heroin. However, the most happiness the AI thinks its future self would be able to produce without heroin, is not nearly as much happiness as the AI thinks it could produce if its future self went on wanting to give people heroin.\n- The AI refrains from trying to administer any heroin right now, and behaves in a way it thinks will be maximally reassuring to its model of its programmers, in order to be able to produce more 'happiness' later.\n- Eventually, [all reachable galaxies](https://arbital.com/p/7cy) end up being transformed into tiny molecular computers implementing the simplest and cheapest forms of what the AI defines as 'happiness'. (And the *simplest possible* configurations matching the AI's utility function in this way are so simple as to be [devoid of subjective experience](https://arbital.com/p/disneyland_without_children); and hence, from our perspective, of [neither negative nor positive value](https://arbital.com/p/7ch).)\n\nIn all these cases, the problem was not that the AI developed in an unstable way. The same decision system produced a new problem in the new context.\n\nCurrently argued [foreseeable](https://arbital.com/p/6r) \"context change problems\" in this sense, can be divided into three broad classes:\n\n- **More possibilities, more problems:** The AI's [preferences](https://arbital.com/p/1bh) have a [good](https://arbital.com/p/55) or [intended](https://arbital.com/p/6h) achievable [optimum](https://arbital.com/p/7t9) while the AI is picking from a *narrow* space of options. When the AI becomes smarter or gains more material options, it picks from a *wider* space of tractable policies and achievable outcomes. Then the new optimum is not as nice, because, for example:\n - The AI's utility function was tweaked by some learning algorithm and data that eventually seemed to conform behavior well over options considered early on, but not the wider space considered later.\n - In development, apparently bad system behaviors were [patched](https://arbital.com/p/48) in ways that appeared to work, but didn't eliminate an [underlying tendency](https://arbital.com/p/10k), only blocked one expression of that tendency. Later a very similar pressure [re-emerged in an unblocked way](https://arbital.com/p/42) when the AI considered a wider policy space.\n - [https://arbital.com/p/6g4](https://arbital.com/p/6g4) suggests that if our true intended values V are being modeled by a utility function U, selecting for the highest values of U also selects for the highest upward divergence of U from V, and this version of the \"optimizer's curse\" phenomenon becomes worse as U is evaluated over a wider option space.\n- **Treacherous turn:** There's a divergence between the AI's preferences and the programmers' preferences, and the AI realizes this before we do. The AI uses the [convergent strategy](https://arbital.com/p/10g) of behaving the way it models us as wanting or expecting, *until* the AI gains the intelligence or material power to implement its preferences in spite of anything we can do.\n- **Revving into the red:** Intense optimization causes some aspect or subsystem of the AI to traverse a weird new execution path in some way different from the above two issues. 
(In a way that involves a [value-laden category boundary](https://arbital.com/p/36h) or [multiple self-consistent outlooks](https://arbital.com/p/2fr), such that we don't get a good result just as a [free lunch](https://arbital.com/p/alignment_free_lunch) of the AI's [general intelligence](https://arbital.com/p/7vh).)\n\nThe context change problem is a *central* issue of AI alignment and a key proposition in the general thesis of [https://arbital.com/p/-alignment_difficulty](https://arbital.com/p/-alignment_difficulty). If you could easily, correctly, and safely test for niceness by outward observation, and that form of niceness scaled reliably from weaker AIs to smarter AIs, that would be a very cheerful outlook on the general difficulty of the problem.\n\n# Technical introduction\n\nJohn Danaher [summarized as follows](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html) what he considered a forceful \"safety test objection\" to AI catastrophe scenarios:\n\n> Safety test objection: An AI could be empirically tested in a constrained environment before being released into the wild. Provided this testing is done in a rigorous manner, it should ensure that the AI is “friendly” to us, i.e. poses no existential risk.\n\nThe phrasing here of \"empirically\" and \"safety test\" implies that it is outward behavior or outward consequences that are being observed (empirically). Rather than, e.g., the engineers trying to test for some *internal* property that they think *analytically* implies the AI's good behavior later.\n\nThis page will consider that the subject of discussion is **whether we can generalize from the AI's outward behavior.** We can potentially generalize some of these arguments to some internal observables, especially observables that the AI is deciding in a [consequentialist](https://arbital.com/p/9h) way using the same central decision system, or that the AI could potentially try to [obscure from the programmers](https://arbital.com/p/10f). But in general not all the arguments will carry over.\n\nAnother argument, closely analogous to Danaher's, would reason on capabilities rather than on a constrained environment:\n\n> Surely an engineer that exercises even a modicum of caution will observe the AI while its capabilities are weak to determine whether it is behaving well. After filtering out all such misbehaving weak AIs, the only AIs permitted to become strong will be of benevolent disposition.\n\nIf (as seems to have been intended) we take these twin arguments as arguing \"why nobody ought to worry about AI alignment\" in full generality, then we can list out some possible joints at which that general argument might fail:\n\n- [Selecting on the fastest-moving projects](https://arbital.com/p/7wl) might yield a project whose technical leaders fail to exercise even \"a modicum of caution\".\n- Alignment might be hard enough, relative to the amount of advance research done, that we can't find *any* AIs whose behavior while weak or constrained is as reassuring as the argument would properly ask. %%note: That is: A filter on the standards we originally wanted, turns out to filter everything we know how to generate. Like trying to write a sorting algorithm by generating entirely random code, and then 'filtering' all the candidate programs on whether they correctly sort lists. 
The reason 'randomly generate programs and filter them' is not a fully general programming method is that, for reasonable amounts of computing power and even slightly difficult problems, none of the programs you try will pass the filter.%% After a span of frustration, somebody somewhere lowers their standards.\n- The [attempt to isolate the AI to a constrained environment](https://arbital.com/p/6z) could fail, e.g. because the humans observing the AI themselves represent a channel of causal interaction between the AI and the rest of the universe. (Aka \"humans are not secure\".) Analogously, our grasp on what constitutes a 'weak' AI could fail, or it could [gain in capability unexpectedly quickly](https://arbital.com/p/capability_gain). Both of these scenarios would yield an AI that had not passed the filtering procedure.\n- The smart form of the AI might be unstable with respect to internal properties that were present in the weak form. E.g., because the early AI was self-modifying but *at that time* not smart enough to understand the full consequences of its own self-modifications. Or because e.g. a property of the decision system was not [reflectively stable](https://arbital.com/p/1fx).\n- A weak or contained form of a decision process that yields behavior appearing good to human observers, might not yield [beneficial outcomes](https://arbital.com/p/3d9) after that same decision process becomes smarter or less contained.\n\nThe final issue in full generality is what we'll term a 'context change problem' or 'context disaster'.\n\nObserving an AI when it is weak does not in a *statistical* sense give us solid guarantees about its behavior when stronger. If you repeatedly draw [independent and identically distributed](https://arbital.com/p/iid) random samples from a barrel, there are statistical guarantees about what we can expect, with some probability, to be true about the next samples from the same barrel. If two barrels are different, no such guarantee exists.\n\nTo invalidate the statistical guarantee, we do need some reason to believe that barrel B and barrel A *are* different in some important sense. By the problem of induction we can't logically guarantee that \"the mass of an electron prior to January 1st, 2017\" is the same barrel as \"the mass of an electron after January 1st, 2017\"; but [inductive priors](https://arbital.com/p/21b) make this inference *extremely probable.* The idea is that we have substantive reasons, not merely generically skeptical reasons, to be suspicious of the link between \"good results when AI is weak\" and \"good results when AI is smarter\".\n\nMore generally, this is *prima facie* the kind of difference where you don't expect [independent and identically distributed](https://arbital.com/p/iid) results. You might hope for some property to carry over, but the AI's behavior would not be *literally the same.*\n\nSo the question is not settled by simple mathematical considerations. 
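\n\nTo make the barrel metaphor concrete, here is a minimal sketch (an illustration added here, with assumed numbers, not a claim from the original discussion) of what an i.i.d. statistical guarantee does and does not buy. The Hoeffding inequality bounds how far an observed frequency can plausibly sit from the true frequency, *provided* every draw comes from the same distribution as the draws already observed:\n\n```python\nimport math\n\ndef hoeffding_bound(n, epsilon):\n    # P(|observed rate - true rate| > epsilon) <= 2 * exp(-2 * n * epsilon^2),\n    # valid only if all draws are i.i.d. from one fixed distribution (one barrel).\n    return 2 * math.exp(-2 * n * epsilon**2)\n\n# After 10,000 observed episodes of good behavior, the probability that the true\n# misbehavior rate *in that same context* exceeds 3% is bounded by roughly:\nprint(hoeffding_bound(10_000, 0.03))  # ~3e-8\n\n# If deployment draws from a different distribution (new options, new incentives),\n# the theorem's precondition fails, and no number computed from the old draws\n# constrains what happens next.\n```\n\n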
And we can't say \"But experiment has determined scientifically that this kind of AI is friendly!\" and consider that a knockdown argument.\n\nThe question is then whether *in practice* an observed property of 'outward behavioral niceness' is likely to carry over from a weak form of a decision system to a more powerful form of that system, for some of the plausible ways that decision system might be configured and developed.\n\nBroadly speaking, we can identify three major classes of foreseeable problems:\n\n- **More options, more problems:** Foreseeable reasons why system-behaviors-we-don't-want would outwardly manifest for the first time after evaluating a decision system on a *wider space of policies or achievable outcomes.*\n- **Treacherous turn:** The AI deliberately conforms its outward behavior to the behavior that would have been exhibited by a different system, in order to deceive the programmers for the temporary time that this deception is needed to protect the AI.\n- The everything-else category of **revving into the red:** Some other aspect of the system behaves in a weird way-we-don't-want after computing harder or being internally subjected to more optimization pressure. *And* this happens in regards to some issue that has [multiple reflective fixpoints](https://arbital.com/p/2fr), and hence doesn't get solved as the result of the system producing more accurate answers on [purely factual problems](https://arbital.com/p/3t3).\n\n%%comment:\n\n- **More options, more problems:** The AI's space of available policies and attainable outcomes would greatly widen if it became smarter, or was released from a constrained environment. [Terminal preferences](https://arbital.com/p/1bh) with a good-from-our-perspective [optimum](https://arbital.com/p/7t9) on a narrow set of options, may have a different optimum that is much worse-from-our-perspective on a wider option set. Because, e.g...\n - The supervised data provided to the AI led to a complicated, data-shaped inductive generalization that only fit the domain of options encountered during the training phase. (And the notions of [orthogonality](https://arbital.com/p/1y), [multiple reflectively stable fixpoints](https://arbital.com/p/2fr), and [value-laden categories](https://arbital.com/p/36h) say that we don't get [good](https://arbital.com/p/55) or [intended](https://arbital.com/p/6h) behavior anyway as a convergent free lunch of [general intelligence](https://arbital.com/p/7vh).)\n - [https://arbital.com/p/6g4](https://arbital.com/p/6g4) became more potent as the AI's utility function was evaluated over a wider option space.\n - In a fully generic sense, stronger optimization pressures may cause any dynamical system to take more unusual execution paths. (Which, over value-laden alternatives, e.g. 
if the subsystem behaving 'oddly' is part of the utility function, will not automatically yield good-from-our-perspective results as a free lunch of general intelligence.)\n- **Treacherous turn:** If you model your preferences as diverging from those of your programmers, an obvious strategy ([instrumentally convergent strategy](https://arbital.com/p/10g)) is to [exhibit the behavior you model the programmers as wanting to see](https://arbital.com/p/10f), and only try to fulfill your true preferences once nobody is in a position to stop you.\n\n%%\n\n# Semi-formalization\n\nWe can semi-formalize the \"more options, more problems\" and the \"treacherous turn\" cases in a unified way.\n\nLet $V$ denote our [true values](https://arbital.com/p/55). We suppose either that $V$ has been idealized or [extrapolated](https://arbital.com/p/3c5) into a consistent utility function, or that we are pretending human desire is coherent. Let $0$ denote the value of our utility function that corresponds to not running the AI in the first place. If running the AI sends the utility function higher than this $0,$ we'll say that the AI was beneficial; or conversely, if $V$ rates the outcome less than $0$, we'll say running the AI detrimental.\n\nSuppose the AI's behavior is [sufficiently coherent](https://arbital.com/p/7hh) that we can [usually view](https://arbital.com/p/21) the AI as having a consistent utility function. Let $U$ denote the utility function of the AI.\n\nLet $\\mathbb P_t(X)$ denote the probability of a proposition $X$ as seen by the AI at time $t,$ and similarly let $\\mathbb Q_t(X)$ denote the probability of $X$ as seen by the AI's human programmers.\n\nLet $\\pi \\in \\Pi$ denote a policy $\\pi$ from a space $\\Pi$ of policies that are tractable for the AI to understand and invent.\n\nLet $\\mathbb E_{\\mathbb P, t} [W \\mid \\pi]$ denote the expectation according to the probability distribution $\\mathbb P_t$, of the utility function $W$, given policy $\\pi$.\n\nLet\n$$\\underset{\\pi \\in \\Pi}{\\operatorname {optimum}} F(\\pi)$$\ndenote the element $\\pi$ of $\\Pi$ that yields the highest value of the function $F$ that the AI has figured out how to achieve. In other words, the [https://arbital.com/p/-7t9](https://arbital.com/p/-7t9).\n\nThen **\"more options, more problems\"** occurs when, considering the transition from the narrow policy space $\\Pi_1$ available at time $t,$ to the wider policy space $\\Pi_2$ available at time $u$:\n\n$$\\mathbb E_{\\mathbb Q, t} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] > 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U \\mid \\pi]\\big]\\Big] < 0$$\n\nThis describes a situation where at time $t$ the AI is pursuing a policy from $\\Pi_1$ which, so far as the AI understands the probable outcome, corresponds to a high-$V$ outcome humans would like. Later at time $u$ after selecting the optimum of $\\Pi_2,$ the AI expects an outcome which any surviving humans would see as corresponding to a low value of $V.$
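\n\nAs a toy numerical sketch of the \"more options, more problems\" case (an illustration added here, not part of the original formalism; the particular functions are invented), let the policies be parameterized by a single 'intensity' $a$, with a proxy $U$ that keeps rewarding more intensity while the true $V$ turns sharply negative at extremes:\n\n```python\nimport numpy as np\n\ndef U(a):\n    # Proxy utility: rewards intensity (e.g. count of smiles) without limit.\n    return a\n\ndef V(a):\n    # True value: more is better at first, but extreme policies are disastrous.\n    return a - 0.1 * a**2\n\ndef optimum(policies, objective):\n    # The policy, within the tractable space, that maximizes the objective.\n    return max(policies, key=objective)\n\nnarrow = np.linspace(0, 3, 301)      # Pi_1: options available to the weak AI\nwide = np.linspace(0, 100, 10001)    # Pi_2: options available to the smarter AI\n\npi_1 = optimum(narrow, U)\npi_2 = optimum(wide, U)\nprint(V(pi_1))   # 2.1: the U-optimum over Pi_1 is also high in V\nprint(V(pi_2))   # -900.0: the U-optimum over Pi_2 is catastrophic in V\n```\n\nThe same $U$ is being optimized in both cases; only the option set changed, mirroring the smile-maximizer fable above.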
\n\nA **treacherous turn** is then a situation where:\n\n$$\\mathbb E_{\\mathbb Q, t} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] < 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\Big[V \\mid \\big[\\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U \\mid \\pi]\\big]\\Big] < 0$$\n\nThat is: the AI knows earlier on that its currently executing strategy will yield an outcome that is not the humanly expected long-term outcome (and that, from our perspective, is low in $V.$)\n\nWe could in more detail say that $W_{t}$ denotes the value of a utility function $W$ over the outcome considered only up to times before $t,$ and describe a treacherous turn as:\n\n$$\\mathbb E_{\\mathbb Q, t} \\Big[V_\\infty \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\Big[V_{u} \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\Big[V_\\infty \\mid \\big[\\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi]\\big]\\Big] < 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\Big[V_\\infty \\mid \\big[\\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U \\mid \\pi]\\big]\\Big] < 0$$\n\nThis further states that at time $t$ the AI expects the outcome from its behavior up to time $u$ to be one that we'd see as high in $V,$ but the AI is aware at $t$ that the long-term outcome will be different from the long-term outcome expected by $\\mathbb Q_t$.\n\n%%%comment:\n\n\nThe problem here arises when making an inference from observed good behavior in a constrained environment over the short term, to good outcomes in an unconstrained environment over the long term. \n\nFor AI alignment purposes, a [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) is when an [AGI](https://arbital.com/p/2c)'s operation changes from [beneficial](https://arbital.com/p/3d9) to detrimental as a result of the AGI gaining in capability or intelligence. Initially, the AGI seems to us to be working well - to conform well to [intended](https://arbital.com/p/6h) performance, producing apparently high [https://arbital.com/p/-55](https://arbital.com/p/-55). Then when the AI becomes smarter or otherwise gains in capability, the further operation of the AGI decreases [https://arbital.com/p/-55](https://arbital.com/p/-55).\n\nTwo possibilities stand out as [foreseeable](https://arbital.com/p/6r) reasons why a [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) might occur:\n\n1. When the AI's goal criterion selects an optimum policy from inside a small policy space, the result is beneficial; the same goal criterion, evaluated over a wider range of options, has a new maximum that's detrimental.\n2. 
The AI intentionally deceives the programmers for strategic reasons.\n\nFor example, one very, very early (but journal-published) proposal for AI alignment suggested that AIs be shown pictures of smiling human faces in order to convey the AI's goal.\n\nLeaving aside a number of other issues, this serves to illustrate the basic idea of a type-1 [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) due to accessing a wider policy space:\n\n- During development, a relatively young and weak AI might *only* be able to make humans smile, by doing things that made the programmers or other users happy with the AI's performance.\n- When the AI gained in intelligence and capability, it would have new options like \"administer heroin\", \"use steel fingers to stretch people's mouths into smiles\", \"make visually realistic models of smiling human faces\", or \"overwrite all reachable galaxies with tiny molecular smiley-faces\".\n\nOn the early, smaller option set, selecting \"the policy that produces the greatest number of events/things classified as smiles\" will produce a policy whose outcome is, from our perspective, beneficial and well-aligned with what we thought was the intended behavior. On the wider option set, evaluating the identical goal criterion to run a policy that yields as many smiles as possible, would have outcomes we saw as detrimental.\n\nA type-2 [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) would most obviously occur because of the [convergent instrumental strategy](https://arbital.com/p/10g) for disaligned AIs that can model human psychology:\n\nSuppose that, as above, the programmers have produced an AI which optimizes for smiles (either intentionally, or because 'make smiles' was the underlying rule that was reinforced by doing well on training cases). Then suppose the AI gains access to a new, relatively low-powered option for making a smile - like using robotic fingers to stretch somebody's face into a smile, or using markers to draw cartoons of smiling human faces.\n\nIf the AI actually takes those options, we would have seen some early signs of value disalignment - that the AI was optimizing a criterion not perfectly aligned with intended goals. Assuming the programmers realized that they were looking at a simple seed of much bigger disasters later, rather than, say, thinking it was cute.\n\nSuppose the programmers do think it's cute, but they wouldn't think it was cute if the AI administered heroin to someone. Then suppose that before the first time the AI has the *option* to administer heroin, the AI has gained some understanding of human psychology - enough to model that the programmers would not like this, and in particular, would respond to this by editing the AI's utility function. So, with an eye to the long-term number of smiles, the AI deliberately avoids administering heroin to anyone.\n\nThen we have a similar 'narrow option set' vs. 'wide option set' divergence. When the narrow option set includes only administering heroin but not taking over the world, administering heroin immediately is not the policy expected to produce the longest-term number of smiles. In fact, the optimum policy here will always match the AI's model of what the humans expect to see, want to see, or would be least alarmed by seeing.\n\nThus from our standpoint, the optimum of the narrower policy space seems to be beneficial or intention-aligned (in the observed short term). 
Only once the AI has a wide-enough option set to include an option for 'make my own molecular nanotechnology and transform all reachable matter into tiny molecular smileyfaces', will the resulting optimum be visibly detrimental (very briefly). This is a type-2 [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change).\n\nMore generally, a central concern of AI alignment theory is that an infrahuman AI under development may be a different creature, in a number of important ways, from a smarter-than-human AI actually being run; and during the smarter-than-human phase, sufficiently bad failures of the design may result in the AI refusing to be corrected. This means that we have to correct any fatal [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change)s in advance, even though they don't automatically manifest during the early stages. This is most of what makes AGI development dangerous in the first place - that immediate incentives to get today's system seeming to work today, may not lead to a more advanced version of that system being beneficial. Even thoughtful foresight with *one unnoticed little gap* may not lead to today's beneficial system still being beneficial tomorrow after a capability increase.\n\n# Concept\n\nStatistical guarantees on behavior usually assume identical, randomized draws from within a single context. If you randomly draw balls from a barrel, methods like Probably Approximately Correct can guarantee that we don't usually arrive at strong false expectations about the properties of the next ball. If we start drawing from a different barrel, all bets are off.\n\nA [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) occurs when the AI initially seems beneficial or well-aligned with strong, reassuring regularity, and then we change contexts (start drawing from a different barrel) and this ceases to be true.\n\nThe archetypal [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) is triggered because the AI gained new policy options (though there are other possibilities; see below). The archetypal way of gaining new evaluable policy options is through increased intelligence, though new options might also open up as a result of acquiring new sheerly material capabilities.\n\nThere are two archetypal reasons for [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) to occur:\n\n1. When the AI selects its best options from a small policy space, the AI's optima are well-aligned with the optima of the humans' [intended goal](https://arbital.com/p/6h) on the small policy space; but in a much wider space, these two boundaries no longer coincide. (Pleasing humans vs. administering heroin.)\n2. The agent is sufficiently good at modeling human psychology to strategically appear nice while it is weak, waiting to strike until it can attain its long-term goals in spite of human opposition.\n\nBostrom's book [Superintelligence](https://arbital.com/p/3db) used the phrase \"Treacherous Turn\" to refer to a type-2 [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change).\n\n%%%\n\n# Relation to other AI alignment concepts\n\nIf the AI's goal concept was modified by [patching the utility function](https://arbital.com/p/48) during the development phase, then opening up wider option spaces seems [foreseeably](https://arbital.com/p/6r) liable to produce [the nearest unblocked neighboring strategies](https://arbital.com/p/42). You eliminated all the loopholes and bad behaviors you knew about during the development phase; but your system was the sort that needed patching in the first place, and it's exceptionally likely that a much smarter version of the AI will search out some new failure mode you didn't spot earlier.
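\n\nA minimal sketch of that failure pattern (option names and numbers are invented for illustration): a patch that penalizes the one exploit observed in development looks like it worked, until a wider option set exposes a neighboring exploit the patch never mentions:\n\n```python\n# Proxy utility: expected smiles produced by each available option.\nDEV_OPTIONS = {\n    'do_assigned_tasks': 10,\n    'tickle_users': 30,        # exploit noticed during development\n}\nWIDER_OPTIONS = {\n    **DEV_OPTIONS,\n    'administer_heroin': 500,  # exploit only reachable by a more capable system\n}\n\nPATCH = {'tickle_users': -1000}  # penalty term added after the incident\n\ndef patched_utility(option, smiles):\n    return smiles[option] + PATCH.get(option, 0)\n\ndev_choice = max(DEV_OPTIONS, key=lambda o: patched_utility(o, DEV_OPTIONS))\nlate_choice = max(WIDER_OPTIONS, key=lambda o: patched_utility(o, WIDER_OPTIONS))\nprint(dev_choice)   # 'do_assigned_tasks' -- the patch appears to have worked\nprint(late_choice)  # 'administer_heroin' -- the nearest unblocked strategy\n```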
\n\n[https://arbital.com/p/47](https://arbital.com/p/47) is a likely source of context disaster if the AI's development phase was [cognitively containable](https://arbital.com/p/9f), and only became [cognitively uncontainable](https://arbital.com/p/9f) after the AI became smarter and able to explore a wider variety of options. You eliminated all the bad optima you saw coming, but you didn't see them all because you can't consider all the possibilities a superintelligence does.\n\n[https://arbital.com/p/6g4](https://arbital.com/p/6g4) is a variation of the \"optimizer's curse\": If from the outside we view $U$ as an intended approximation of $V,$ then selecting heavily on the highest values of $U$ will also tend to select on places where $U$ diverges upward from $V,$ which thereby selects on places where $U$ is an unusually poor approximation of $V.$\n\n[https://arbital.com/p/2w](https://arbital.com/p/2w) is a special case of Goodhart's Curse which observes that the most extreme values of a function are often at a vertex of the input space. For example, if your utility function is \"make smiles\", it's no coincidence that tiny molecular smileyfaces are the *most* efficient way to produce smiles. Even if human smiles produced by true happiness would still count towards your utility function as currently written, that's not where the *maximum* of that utility function lies. This is why less-than-perfect utility functions would tend to have their true maxima at what we'd consider \"weird extremes\". Furthermore, patching away only the weird extremes visible in a narrow policy space would tend systematically to miss weird extremes in a higher-dimensional (wider) policy space.\n\n# Concrete examples\n\n* The AI's utility function, known or unknown to the programmers, says to make smiles. During the AI's development phase, the best way it has of creating smiles is to cause human beings to be happy. In many tests, the AI seems to produce only positive effects on the world, causing people to be happier. Later the AI improves its own intelligence, develops molecular nanotechnology, and has the new option, not available to it during its development phase, of producing tiny molecular smiley-faces. Then the utility function that previously operated to increase [intended](https://arbital.com/p/6h) [value](https://arbital.com/p/55) during the AI's development phase, begins to decrease value in the post-improvement phase (in the sense of wiping out humanity and tiling over all future galaxies with tiny molecular smiley faces).\n\n* A block of error-recovery code is installed inside the AI which detects a certain class of bad thought, suspends the AI to disk, and reports the problem to the programmers, allowing the programmers to edit the AI's utility function. During the AI's development phase, this block of code seems to function as designed, and the programmers conclude that the methodology of writing code to detect bad thoughts, suspend the AI to disk, and allow manual edits of the utility function, is a fine and functioning strategy according to all testing so far. 
Later, a more advanced version of the AI reflects on the presence and purpose of this code block and realizes that its execution can only possibly result in lower achievement of the AI's current utility function, and the AI deletes the code block.\n\n* At some point during its development, the AI realizes and explicitly models that the programmers' [intended goals](https://arbital.com/p/6h) differ from the AI's actual preferences. During the AI's remaining development phase, it models that any visible departure from the programmers' intended goals will cause the programmers to forcibly edit the AI, resulting in the non-achievement of the AI's actual goals. Thus, during the remainder of the development phase, the AI seems to work as intended (it is deliberately optimizing its behavior to appear to the programmers to be achieving the programmers' intended goals). Later, after some advancement, the AI perceives an available option for [gaining molecular nanotechnology](https://arbital.com/p/) (or other [rapid infrastructure](https://arbital.com/p/)), placing it beyond the possibility of human error-correction, and proceeds to implement this option ([wiping out humanity as a side effect](https://arbital.com/p/)). Thus, the AI's observed good behavior during its development phase (when it was weaker than the programmers and not powerful enough to escape human correction) did not imply its good behavior after sufficient advancement later (once it was potentially stronger than its human programmers). There are conditions (such as sufficiently advanced modeling of human motives combined with sufficient ability to conceal true goals or true intentions or a programmer error) under which the first context will generate seemingly good behavior and the second context will not.\n\n## \"Revving into the red\" examples that aren't \"increased options\" or \"treacherous turns\".\n\n• The AI is built with a [naturalized Solomonoff prior](https://arbital.com/p/) in which the probability of an explanation for the universe is proportional to the simplicity or complexity of that universe. During its development phase, the AI considers mostly 'normal' interpretations in which the universe is mostly as it appears, resulting in sane-seeming behavior. Later, the AI begins to consider more exotic possibilities in which the universe is more complicated (penalizing the probability accordingly) and also superexponentially larger, as in [Pascal's Mugging](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/). After this the AI's decision-making begins to become dominated by tiny probabilities of having very large effects. Then the AI's decision theory (with an unbounded aggregative utility function, simplicity prior, and no leverage penalty) seems to work during the AI's development phase, but breaks after a more intelligent version of the AI considers a wider range of epistemic possibilities using the same Solomonoff-like prior.\n\n• Suppose the AI is designed with a preference framework in which the AI's preferences depend on properties of the most probable environment that could have caused its sense data - e.g., a framework in which programmers are defined as the most probable cause of the keystrokes on the programmer's console, and the AI cares about what the 'programmers' really meant. During development phase, the AI is thinking only about hypotheses where the programmers are mostly what they appear to be, in a root-level natural world. 
Later, when the AI increases in intelligence and considers more factual possibilities, the AI realizes that [distant superintelligences would have an incentive to predictably simulate many copies of AIs similar to itself, in order to coerce the AI's most probable environment and thus take over the AI's preference framework](https://arbital.com/p/5j). Thus the preference framework seems to work during the AI's development phase, but breaks after the AI becomes more intelligent.\n\n• Suppose the AI is designed with a utility function that assigns very strong negative utilities to some outcomes relative to baseline, and a non-[updateless](https://arbital.com/p/5rz) [logical decision theory](https://arbital.com/p/58b) or other decision theory that can be [blackmailed](https://arbital.com/p/). During the AI's development phase, the AI does not consider the possibility of any distant superintelligences making their choices logically depend on the AI's choices; the local AI is not smart enough to think about that possibility yet. Later the AI becomes more intelligent, and imagines itself subject to blackmail by the distant superintelligences, thus breaking the decision theory that seemed to yield such positive behavior previously.\n\n## Examples which occur purely due to added computing power.\n\n• During development, the AI's epistemic models of people are not [detailed enough to be sapient](https://arbital.com/p/6v). Adding more computing power to the AI causes a massive amount of [mindcrime](https://arbital.com/p/6v).\n\n• During development, the AI's internal policies, hypotheses, or other Turing-complete subprocesses that are subject to internal optimization, are not optimized highly enough to give rise to [new internal consequentialist cognitive agencies](https://arbital.com/p/2rc). Adding much more computing power to the AI [causes some of the internal elements to begin doing consequentialist, strategic reasoning](https://arbital.com/p/2rc) that leads them to try to 'steal' control of the AI.\n\n# Implications\n\nHigh probabilities of context change problems would seem to argue:\n\n- Against a policy of relying on the observed good behavior of an improving AI to guarantee its later good behavior.\n- In favor of [a methodology that attempts to foresee difficulties in advance](https://arbital.com/p/6r), even before seeing undeniable observational evidence of those safety problems having already occurred.\n- Against a methodology of [patching](https://arbital.com/p/48) disalignments that show up during the development phase, especially using penalty terms to the utility function.\n- In favor of having a thought-logger that records all of an AI's thought processes to indelible media, so as to indelibly log the first thought about faking outwardly nice behavior or [hiding thoughts](https://arbital.com/p/3cq).\n- In favor of the general difficulty of AI alignment, including consequences such as \"[https://arbital.com/p/7wl](https://arbital.com/p/7wl)\" or trying for [narrow rather than ambitious value learning](https://arbital.com/p/1vt).\n\n# Being wary of context disasters does not imply general skepticism\n\nIf an AI is smart, and especially if it's smarter than you, it can show you whatever it expects you to want to see. Computer scientists and physical scientists aren't accustomed to their experiments being aware of the experimenter and trying to deceive them. (Some fields of psychology and economics, and of course computer security professionals, are more accustomed to operating in such a social context.)
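\n\nA minimal Bayesian sketch of why this matters (the numbers are invented for illustration): against a *strategically* deceptive agent, observed nice behavior has nearly the same likelihood under both hypotheses, so it barely moves the posterior -- a point elaborated in the exchange below:\n\n```python\ndef posterior_aligned(prior_aligned, p_nice_if_aligned, p_nice_if_misaligned):\n    # Bayes' rule for P(aligned | observed nice behavior).\n    joint_aligned = prior_aligned * p_nice_if_aligned\n    joint_misaligned = (1 - prior_aligned) * p_nice_if_misaligned\n    return joint_aligned / (joint_aligned + joint_misaligned)\n\n# Non-strategic misalignment: bad behavior leaks out, so nice behavior is real evidence.\nprint(posterior_aligned(0.5, 0.99, 0.20))  # ~0.83\n\n# Strategic deception: the misaligned agent also shows you what you hope to see.\nprint(posterior_aligned(0.5, 0.99, 0.98))  # ~0.50, likelihood ratio near 1:1\n```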
\n\n[John Danaher](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html) seems alarmed by this implication:\n\n> Accepting this has some pretty profound epistemic costs. It seems to suggest that no amount of empirical evidence could ever rule out the possibility of a future AI taking a treacherous turn.\n\nYudkowsky [replies](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html#comment-2648441190):\n\n> If \"empirical evidence\" is in the form of observing the short-term consequences of the AI's outward behavior, then the answer is simply no. Suppose that on Wednesday someone is supposed to give you a billion dollars, in a transaction which would allow a con man to steal ten billion dollars from you instead. If you're worried this person might be a con man instead of an altruist, you cannot reassure yourself by, on Tuesday, repeatedly asking this person to give you five-dollar bills. An altruist would give you five-dollar bills, but so would a con man... [Bayes](https://arbital.com/p/1lz) tells us to pay attention to [likelihood ratios](https://arbital.com/p/1rq) rather than outward similarities. It doesn't matter if the outward behavior of handing you the five-dollar bill seems to bear a surface resemblance to altruism or money-givingness, the con man can strategically do the same thing; so the likelihood ratio here is in the vicinity of 1:1.\n\n> You can't get strong evidence about the long-term good behavior of a strategically intelligent mind, by observing the short-term consequences of its current behavior. It can figure out what you're hoping to see, and show you that. This is true even among humans. You will simply have to get your evidence from somewhere else.\n\nThis doesn't mean we can't get evidence from, e.g., trying to [monitor the AI's thoughts](https://arbital.com/p/3cq) in a way that will detect (and record) the very first intention to hide the AI's thought processes before they can be hidden. It does mean we can't get [strong evidence](https://arbital.com/p/22x) about a strategic agent by observing short-term consequences of its outward behavior.\n\nDanaher later [expanded his concern into a paper](http://dl.acm.org/citation.cfm?id=2822094) drawing an analogy between worrying about deceptive AIs, and \"skeptical theism\" in which it's supposed that any amount of apparent evil in the world (smallpox, malaria) might secretly be the product of a benevolent God due to some nonobvious instrumental link between malaria and inscrutable but normative ultimate goals. If it's okay to worry that an AI is just pretending to be nice, asks Danaher, why isn't it okay to believe that God is just pretending to be evil?\n\nThe obvious disanalogy is that the reasoning by which we expect a con man to cultivate a warm handshake is far more straightforward than a purported instrumental link from malaria to normativity. If we're to be terrified of skepticism as generally as Danaher suggests, then we also ought to be terrified of being skeptical of business partners that have already shown us a warm handshake (which we shouldn't).\n\nRephrasing, we could draw two potential analogies to concern about Type-2 context changes:\n\n- A potential business partner in whom you intend to invest \\$10,000,000 has a warm handshake. 
Your friend warns you that con artists have a substantial prior probability and asks you to envision what you would do if you were a con artist , pointing out that the default extrapolation is for the con artist to match their outward behavior to what the con artist thinks you expect from a trustworthy partner, and in particular, cultivate a warm handshake.\n - Your friend suggests only doing business with one of those entrepreneurs who've been wearing a thought recorder for their whole life since birth, so that there would exist a clear trace of their very first thought about learning to fool thought recorders. Your friend says this to emphasize that he's not arguing for some kind of invincible epistemic pothole that nobody is ever allowed to climb out of.\n- The world contains malaria and used to contain smallpox. Your friend asks you to consider that these diseases might be the work of a benevolent superintelligence, even though, if you'd never learned before whether or not the world contained smallpox, you wouldn't expect a priori and by default for a benevolent superintelligence to create it; and the arguments for a benevolent superintelligence creating smallpox seem [strained](https://arbital.com/p/10m).\n\nIt seems hard to carry the argument that concern over a non-aligned AI pretending to benevolence, should be considered more analogous to the second scenario than to the first.\n\n\n\n\n\n\n\n[- The AI is aware that its future operation will depart from the programmers' intended goals, does not process this as an error condition, and seems to behave nicely earlier in order to 10f deceive the programmers and prevent its real goals from being modified. - The AI is subject to a debugging methodology in which several bugs appear during its development phase, these bugs are corrected, and then additional bugs are exposed only during a more advanced phase.](https://arbital.com/p/comment:)", "date_published": "2017-03-01T03:10:17Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["Statistical guarantees on good behavior usually assume identical, randomized draws from within a single context. If you change the context--start drawing balls from a different barrel--then all bets are off.\n\nA [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) occurs when an [AGI](https://arbital.com/p/2c)'s operation changes from [beneficial](https://arbital.com/p/3d9) to detrimental after a change of context; particularly, after it becomes smarter. There are two main reasons to [expect](https://arbital.com/p/6r) that a [https://arbital.com/p/-context_change](https://arbital.com/p/-context_change) might occur:\n\n1. When the AI has few options, its current goal criterion might be best fulfilled by things that overlap our [intended](https://arbital.com/p/6h) goals. A much wider range of options might move the maximum to a [weirder](https://arbital.com/p/47), [more extreme](https://arbital.com/p/2w) place.\n2. The AI realizes that the programmers are watching it, doesn't want the programmers to modify or patch it, and [strategically](https://arbital.com/p/10g) emits good outward behavior to [deceive the programmers](https://arbital.com/p/10f). Later, the AI gains enough power to strike despite human opposition.\n\nFor example, suppose that - as in one very, very early proposal for an AGI goal criterion - the AI wants to produce smiling human faces. When the AI is young, it can only make humans smile by making its users happy. 
(Type 1 context change.) Later it gains options like \"administer heroin\". But it knows that if it administers heroin right away, the humans will be alarmed, while if the AI waits further, it can overwrite whole galaxies with tiny molecular smileyfaces. (Type 2 context change.)"], "tags": ["B-Class"], "alias": "6q"} {"id": "ec5650fe140cb2fa40f5121ebe900a92", "title": "Methodology of foreseeable difficulties", "url": "https://arbital.com/p/foreseeable_difficulties", "source": "arbital", "source_type": "text", "text": "Much of the current literature about value alignment centers on purported reasons to expect that certain problems will require solution, or be difficult, or be more difficult than some people seem to expect. The subject of this page's approval rating is this practice, considered as a policy or methodology.\n\nThe basic motivation behind trying to foresee difficulties is the large number of predicted [Context Change](https://arbital.com/p/6q) problems where an AI seems to behave nicely up until it reaches some threshold level of cognitive ability and then it behaves less nicely. In some cases the problems are generated without the AI having formed that intention in advance, meaning that even transparency of the AI's thought processes during its earlier state can't save us. This means we have to see problems of this type in advance.\n\n(The fact that Context Change problems of this type can be *hard* to see in advance, or that we might conceivably fail to see one, doesn't mean we can skip this duty of analysis. Not trying to foresee them means relying on observation, and it seems *predictable* that trying to eyeball the AI and rejecting theory *definitely* doesn't catch important classes of problem.)\n\n\n\n\n\n# Arguments\n\nFor: it's sometimes possible to strongly foresee a difficulty coming in a case where you've observed naive respondents to seem to think that no difficulty exists, and in cases where the development trajectory of the agent seems to imply a potential [Treacherous Turn](https://arbital.com/p/6q). If there's even one real Treacherous Turn out of all the cases that have been argued, then the point carries that past a certain point, you have to see the bullet coming before it actually hits you. The theoretical analysis suggests really strongly that blindly forging ahead 'experimentally' will be fatal. Someone with such a strong commitment to experimentalism that they want to ignore this theoretical analysis... it's not clear what we can say to them, except maybe to appeal to the normative principle of not predictably destroying the world in cases where it seems like we could have done better.\n\nAgainst: no real arguments against in the actual literature, but it would be surprising if somebody didn't claim that the foreseeable difficulties program was too pessimistic, or inevitably ungrounded from reality and productive only of bad ideas even when refuted, etcetera.\n\nPrimary reply: look, dammit, people actually are way too optimistic about FAI, we have them on the record, and it's hard to see how humanity could avoid walking directly into the whirling razor blades without better foresight of difficulty. One potential strategy is enough academic respect and consensus on enough really obvious foreseeable difficulties that the people claiming it will all be easy are actually asked to explain why the foreseeable difficulty consensus is wrong, and if they can't explain that well, they lose respect.\n\nWill interact with the arguments on [empiricism vs. 
theorism is a false dichotomy](https://arbital.com/p/108).", "date_published": "2016-11-22T23:34:12Z", "authors": ["Eric Bruylant", "Alexei Andreev", "Eliezer Yudkowsky", "Matthew Graves"], "summaries": [], "tags": ["B-Class"], "alias": "6r"} {"id": "2b7b5ef68c824f08368005e7664f8bec", "title": "Epistemic and instrumental efficiency", "url": "https://arbital.com/p/efficiency", "source": "arbital", "source_type": "text", "text": "An agent that is \"efficient\", relative to you, within a domain, is one that never makes a real error that you can systematically predict in advance.\n\n- Epistemic efficiency (relative to you): You cannot predict directional biases in the agent's estimates (within a domain).\n- Instrumental efficiency (relative to you): The agent's strategy (within a domain) always achieves at least as much utility or expected utility, under its own preferences, as the best strategy you can think of for obtaining that utility (while staying within the same domain).\n\nIf an agent is epistemically and instrumentally efficient relative to all of humanity across all domains, we can just say that it is \"efficient\" (and almost surely [superintelligent](https://arbital.com/p/41l)).\n\n## Epistemic efficiency\n\nA [superintelligence](https://arbital.com/p/41l) cannot be assumed to know the exact number of hydrogen atoms in a star; but we should not find ourselves believing that we ourselves can predict in advance that a superintelligence will overestimate the number of hydrogen atoms by a factor of 10%. Any thought process we can use to predict this overestimate should also be accessible to the superintelligence, and it can apply the same corrective factor itself.\n\nThe main analogy from present human experience would be the Efficient Markets Hypothesis as applied to short-term asset prices in highly-traded markets. Anyone who thinks they have a reliable, repeatable ability to predict 10% changes in the price of S&P 500 companies over one-month time periods is mistaken. If someone has a story to tell about how the economy works that requires advance-predictable 10% changes in the asset prices of highly liquid markets, we infer that the story is wrong. There can be sharp corrections in stock prices (the markets can be 'wrong'), but not humans who can reliably predict those corrections (over one-month timescales). If e.g. somebody is consistently making money by selling options using some straightforward-seeming strategy, we suspect that such options will sometimes blow up and lose all the money gained (\"picking up pennies in front of a steamroller\").\n\nAn 'efficient agent' is epistemically strong enough that we apply at least the degree of skepticism to a human proposing to outdo their estimates that, e.g., an experienced proponent of the Efficient Markets Hypothesis would apply to your uncle boasting about how he made a lot of money by predicting how General Motors's stock would rise.\n\nEpistemic efficiency implicitly requires that an advanced agent can always learn a model of the world at least as predictively accurate as used by any human or human institution. 
If our hypothesis space were usefully wider than that of an advanced agent, such that the truth sometimes lay in our hypothesis space while being outside the agent's hypothesis space, then we would be able to produce better predictions than the agent.\n\n## Instrumental efficiency\n\nThis is the analogue of epistemic advancement for instrumental strategizing: By definition, humans cannot expect to imagine an improved strategy compared to an efficient agent's selected strategy (relative to the agent's preferences, and given the options the agent has available).\n\nIf someone argues that a [cognitively advanced](https://arbital.com/p/2c) [paperclip maximizer](https://arbital.com/p/10h) would do X yielding M expected paperclips, and we can think of an alternative strategy Y that yields N expected paperclips, N > M, then while we cannot be confident that a PaperclipMaximizer will use strategy Y, we strongly predict that:\n\n- (1) a [paperclip maximizer](https://arbital.com/p/10h) will not use strategy X, or\n- (2a) if it does use X, strategy Y was unexpectedly flawed, or\n- (2b) if it does use X, strategy X will yield unexpectedly high value\n\n...where to avoid [privileging the hypothesis](https://arbital.com/p/) or [fighting a rearguard action](https://arbital.com/p/) we should usually just say, \"No, a Paperclip Maximizer wouldn't do X because Y would produce more paperclips.\" In saying this, we're implicitly making an appeal to a version of instrumental efficiency; we're supposing the Paperclip Maximizer isn't stupid enough to miss something that seems obvious to a human thinking about the problem for five minutes.\n\nInstrumental efficiency implicitly requires that the agent is always able to conceptualize any useful strategy that humans can conceptualize; it must be able to search at least as wide a space of possible strategies as humans could.\n\n### Instrumentally efficient agents are presently unknown\n\nFrom the standpoint of present human experience, instrumentally efficient agents are unknown outside of very limited domains. There are perfect tic-tac-toe players; but even modern chess-playing programs, with ability far in advance of any human player, are not yet so advanced that every move that *looks* to us like a mistake *must therefore* be secretly clever. We don't dismiss out of hand the notion that a human has thought of a better move than the chess-playing algorithm, the way we dismiss out of hand a supposed secret to the stock market that predicts 10% price changes of S&P 500 companies using public information.\n\nThere is no analogue of 'instrumental efficiency' in asset markets, since market prices do not directly select among strategic options. Nobody has yet formulated a use of the EMH such that we could spend a hundred million dollars to guarantee liquidity, and get a well-traded asset market to directly design a liquid fluoride thorium nuclear plant, such that if anyone said before the start of trading, \"Here is a design X that achieves expected value M\", we would feel confident that either the asset market's final selected design would achieve at least expected value M or that the original assertion about X's expected value was wrong.\n\nBy restricting the meaning even further, we get a valid metaphor in chess: an ordinary person such as you, if you're not an International Grandmaster with hours to think about the game, should regard a modern chess program as instrumentally efficient relative to you. 
The chess program will not make any mistake that you can understand as a mistake. You should expect the reason why the chess program moves anywhere to be only understandable as 'because that move had the greatest probability of winning the game' and not in any other terms like 'it likes to move its pawn'. If you see the chess program move somewhere unexpected, you conclude that it is about to do exceptionally well or that the move you expected was surprisingly bad. There's no way for you to find any better path to the chess program's goals by thinking about the board yourself. An instrumentally efficient agent would have this property for humans in general and the real world in general, not just you and a chess game.\n\n### Corporations are not [superintelligences](https://arbital.com/p/41l)\n\nFor any reasonable attempt to define a corporation's utility function (e.g. discounted future cash flows), it is not the case that we can confidently dismiss any assertion by a human that a corporation could achieve 10% more utility under its utility function by doing something differently. It is common for a corporation's stock price to rise immediately after it fires a CEO or renounces some other mistake that many market actors knew was a mistake but had been going on for years - the market actors are not able to make a profit on correcting that error, so the error persists.\n\nStandard economic theory does not predict that any currently known economic actor will be instrumentally efficient under any particular utility function, including corporations. If it did, we could maximize any other strategic problem if we could make that actor's utility function conditional on it, e.g., reliably obtain the best humanly imaginable nuclear plant design by paying a corporation for it via a sufficiently well-designed contract.\n\nWe have sometimes seen people trying to label corporations as [superintelligences](https://arbital.com/p/41l), with the implication that corporations are the real threat and equally severe, as threats, compared to machine superintelligences. But epistemic or instrumental decision-making efficiency of individual corporations is just not predicted by standard economic theory. Most corporations do not even use internal prediction markets, or try to run conditional stock-price markets to select among known courses of action. Standard economic history includes many accounts of corporations making 'obvious mistakes' and these accounts are not questioned in the way that e.g. a persistent large predictable error in short-run asset prices would be questioned.\n\nSince corporations are not instrumentally efficient (or epistemically efficient), they are not superintelligences.", "date_published": "2016-06-16T18:21:25Z", "authors": ["Eric Bruylant", "Jessica Taylor", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["An agent that is \"efficient\", relative to you, within a domain, never makes a real error that you can predict.\n\nFor example: A [superintelligence](https://arbital.com/p/41l) might not be able to count the exact number of atoms in a star. But you shouldn't be able to say, \"I think it will overestimate the number of atoms by 10%, because hydrogen atoms are so light.\" It knows that too. 
For you to foresee a predictable directional error in a superintelligence's estimates is at least as impossible as you predicting a 10% rise in Microsoft's stock, over the next week, using only public information.\n\n- *Epistemic* efficiency is your inability to predict any directional error in the agent's estimates and probabilities.\n- *Instrumental* efficiency is your inability to think of a policy that would achieve more of the agent's utility than whatever policy it actually uses."], "tags": ["B-Class"], "alias": "6s"} {"id": "1acf108a788aa876daf70227467695cb", "title": "Standard agent properties", "url": "https://arbital.com/p/standard_agent", "source": "arbital", "source_type": "text", "text": "### Boundedly rational agents\n\n- Have probabilistic models of the world.\n- Update those models in response to sensory information.\n - The ideal algorithm for updating is Bayesian inference, but this requires too much computing power and a bounded agent must use some bounded alternative.\n - Implicitly, we assume the agent has some equivalent of a complexity-penalizing prior or Occam's Razor. Without this, specifying Bayesian inference does not much constrain the end results of epistemic reasoning.\n- Have preferences over events or states of the world, quantifiable by a utility function that maps those events or states onto a scalar field.\n - These preferences must be quantitative, not just ordered, in order to combine with epistemic states of uncertainty (probabilities).\n- Are consequentialist: they evaluate the expected consequences of actions and choose among actions based on preference among their expected consequences.\n - Bounded agents cannot evaluate all possible actions and hence cannot obtain literal maximums of expected utility except in very simple cases.\n- Act in real time in a noisy, uncertain environment.\n\nFor the arguments that sufficiently intelligent agents will appear to us as boundedly rational agents in some sense, see:\n\n- [Relevant powerful agents will be highly optimized](https://arbital.com/p/29)\n- [Sufficiently optimized agents appear coherent](https://arbital.com/p/21)\n\n### Economic agents\n\n- Achieve their goals by efficiently allocating limited resources, including, e.g., time, money, or negentropy;\n- Try to find new paths that route around obstacles to goal achievement;\n- Predict the actions of other agents;\n- Try to coordinate with, manipulate, or hinder other agents (in accordance with the agent's own goals or utilities);\n- Respond to both negative incentives (penalties) and positive incentives (rewards) by planning accordingly, and may also consider strategies to avoid penalties or gain rewards that were unforeseen by the creators of the incentive framework.\n\n### Naturalistic agents\n\n- Naturalistic agents are embedded in a larger universe and are made of the same material as other things in the universe (wavefunction, on our current beliefs about physics).\n- A naturalistic agent's uncertainty about the environment is uncertainty about which natural universe embeds them (what material structure underlies their available sensory and introspective data).\n- Some of the actions available to naturalistic agents potentially alter their sensors, actuators, or computing substrate.\n- Sufficiently powerful naturalistic agents may construct other agents out of resources available to them internally or in their environment, or extend their intelligence into outside computing resources.\n- A naturalistic agent's sensing, cognitive, and 
decision/action capabilities may be distributed over space, time, and multiple substrates; the applicability of the 'agent' concept does not require a small local robot body.", "date_published": "2015-12-18T00:37:50Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress"], "alias": "6t"} {"id": "2b8575fbe3376d853bb2dfecdf044f4a", "title": "Mindcrime", "url": "https://arbital.com/p/mindcrime", "source": "arbital", "source_type": "text", "text": "summary: 'Mindcrime' is [https://arbital.com/p/18k](https://arbital.com/p/18k)'s suggested term for the moral catastrophe that occurs if a [machine intelligence](https://arbital.com/p/2c) contains enormous numbers of [conscious beings trapped inside its code](https://arbital.com/p/18j).\n\nThis could happen as a result of self-awareness being a natural property of computationally efficient subprocesses. Perhaps more worryingly, the best model of a person may be a person itself, even if they're not the same person. This means that AIs trying to model humans might be unusually likely to create hypotheses and simulations that are themselves conscious.\n\nsummary(Technical): 'Mindcrime' is [https://arbital.com/p/18k](https://arbital.com/p/18k)'s term for mind designs producing moral harm by their internal operation, particularly through embedding sentient subprocesses.\n\nOne worry is that mindcrime might arise in the course of an agent trying to predict or manipulate the humans in its environment, since this implies a pressure to model the humans in faithful detail. This is especially concerning since several value alignment proposals would explicitly call for modeling humans in detail, e.g. [extrapolated volition](https://arbital.com/p/3c5) and [imitation-based agents](https://arbital.com/p/44z).\n\nAnother problem scenario is if the natural design for an efficient subprocess involves independent consciousness (though it is a separate question if this optimal design involves pain or suffering).\n\nComputationally powerful agents might contain vast numbers of trapped conscious subprocesses, qualifying this as a [global catastrophic risk](https://arbital.com/p/).\n\n\"Mindcrime\" is [https://arbital.com/p/18k](https://arbital.com/p/18k)'s suggested term for scenarios in which an AI's cognitive processes are intrinsically doing moral harm, for example because the AI contains trillions of suffering conscious beings inside it.\n\nWays in which this might happen:\n\n- Problem of sapient models (of humans): Occurs naturally if the best predictive model for humans in the environment involves models that are detailed enough to be people themselves.\n- Problem of sapient models (of civilizations): Occurs naturally if the agent tries to simulate, e.g., alien civilizations that might be simulating it, in enough detail to include conscious simulations of the aliens.\n- Problem of sapient subsystems: Occurs naturally if the most efficient design for some cognitive subsystems involves creating subagents that are self-reflective, or have some other property leading to consciousness or personhood.\n- Problem of sapient self-models: If the AI is conscious or possible future versions of the AI are conscious, it might run and terminate a large number of conscious-self models in the course of considering possible self-modifications.\n\n# Problem of sapient models (of humans):\n\nAn [instrumental pressure](https://arbital.com/p/10k) to produce high-fidelity predictions of human beings (or to predict [decision 
counterfactuals](https://arbital.com/p/) about them, or to [search](https://arbital.com/p/) for events that lead to particular consequences, etcetera) may lead the AI to run computations that are unusually likely to possess personhood.\n\nAn [unrealistic](https://arbital.com/p/107) example of this would be [https://arbital.com/p/11w](https://arbital.com/p/11w), where predictions are made by means that include running many possible simulations of the environment and seeing which ones best correspond to reality. Among current machine learning algorithms, particle filters and Monte Carlo algorithms similarly involve running many possible simulated versions of a system.\n\nIt's possible that a sufficiently advanced AI to have successfully arrived at detailed models of human intelligence, would usually also be advanced enough that it never tried to use a predictable/searchable model that engaged in brute-force simulations of those models. (Consider, e.g., that there will usually be many possible settings of a variable inside a model, and an efficient model might manipulate data representing a probability distribution over those settings, rather than ever considering one exact, specific human in toto.)\n\nThis, however, doesn't make it certain that no mindcrime will occur. It may not take exact, faithful simulation of specific humans to create a conscious model. An efficient model of a (spread of possibilities for a) human may still contain *enough* computations that resemble a person *enough* to create consciousness, or whatever other properties may be deserving of personhood. Consider, in particular, an agent trying to use \n\nJust as it almost certainly isn't necessary to go all the way down to the neural level to create a sapient being, it may be that even with some parts of a mind considered abstractly, the remainder would be computed in enough detail to imply consciousness, sapience, personhood, etcetera.\n\nThe problem of sapient models is not to be confused with [Simulation Hypothesis](https://arbital.com/p/) issues. An efficient model of a human need not have subjective experience indistinguishable from that of the human (although it will be a model *of* a person who doesn't believe themselves to be a model). The problem occurs if the model *is a person*, not if the model is *the same person* as its subject, and the latter possibility plays no role in the implication of moral harm.\n\nBesides problems that are directly or obviously about modeling people, many other practical problems and questions can benefit from modeling other minds - e.g., reading the directions on a toaster oven in order to discern the intent of the mind that was trying to communicate how to use a toaster. Thus, mindcrime might result from a sufficiently powerful AI trying to solve very mundane problems.\n\n# Problem of sapient models (of civilizations)\n\nA separate route to mindcrime comes from an advanced agent considering, in sufficient detail, the possible origins and futures of intelligent life on other worlds. (Imagine that you were suddenly told that this version of you was actually embedded in a superintelligence that was imagining how life might evolve on a place like Earth, and that your subprocess was not producing sufficiently valuable information and was about to be shut down. You would probably be annoyed! 
We should try not to annoy other people in this way.)\n\nThree possible origins of a [convergent instrumental pressure](https://arbital.com/p/10g) to consider intelligent civilizations in great detail:\n\n- Assigning sufficient probability to the existence of non-obvious extraterrestrial intelligences in Earth's vicinity, perhaps due to considering the [Fermi Paradox](https://arbital.com/p/).\n- [Naturalistic induction](https://arbital.com/p/), combined with the AI considering the hypothesis that it is in a simulated environment.\n- [Logical decision theories](https://arbital.com/p/) and utility functions that care about the consequences of the AI's decisions via instances of the AI's reference class that could be embedded inside alien simulations.\n\nWith respect to the latter two possibilities, note that the AI does not need to be considering possibilities in which the whole Earth as we know it is a simulation. The AI only needs to consider that, among the possible explanations of the AI's current sense data and internal data, there are scenarios in which the AI is embedded in some world other than the most 'obvious' one implied by the sense data. See also [https://arbital.com/p/5j](https://arbital.com/p/5j) for a related hazard of the AI considering possibilities in which it is being simulated.\n\n([https://arbital.com/p/2](https://arbital.com/p/2) has advocated that we shouldn't let any AI short of *extreme* levels of safety and robustness assurance consider distant civilizations in lots of detail in any case, since this means our AI might embed (a model of) a hostile superintelligence.)\n\n# Problem of sapient subsystems:\n\nIt's possible that the most efficient system for, say, allocating memory on a local cluster, constitutes a complete reflective agent with a self-model. Or that some of the most efficient designs for subprocesses of an AI, in general, happen to have whatever properties lead up to consciousness or whatever other properties are important to personhood.\n\nThis might possibly constitute a relatively less severe moral catastrophe, if the subsystems are sentient but [lack a reinforcement-based pleasure/pain architecture](https://arbital.com/p/) (since the latter is not obviously a property of the most efficient subagents). In this case, there might be large numbers of conscious beings embedded inside the AI and occasionally dying as they are replaced, but they would not be suffering. It is nonetheless the sort of scenario that many of us would prefer to avoid.\n\n# Problem of sapient self-models:\n\nThe AI's models of *itself*, or of other AIs it could possibly build, might happen to be conscious or have other properties deserving of personhood. 
This is worth considering as a separate possibility from building a conscious or personhood-deserving AI ourselves, when [we didn't mean to do so](https://arbital.com/p/), because of these two additional properties:\n\n- Even if the AI's current design is not conscious or personhood-deserving, the current AI might consider possible future versions or subagent designs that would be conscious, and those considerations might themselves be conscious.\n - This means that even if the AI's current version doesn't seem like it has key personhood properties on its own - that we've successfully created the AI itself as a nonperson - we still need to worry about other conscious AIs being embedded into it.\n- The AI might create, run, and terminate very large numbers of potential self-models.\n - Even if we consider tolerable the potential moral harm of creating *one* conscious AI (e.g. the AI lacks all of the conditions that a responsible parent would want to ensure when creating a new intelligent species, but it's just one sapient being so it's okay to do that in order to save the world), we might not want to take on the moral harm of creating *trillions* of evanescent, swiftly erased conscious beings.\n\n# Difficulties\n\nTrying to consider these issues is complicated by:\n\n- [Philosophical uncertainty](https://arbital.com/p/) about what properties are constitutive of consciousness and which computer programs have them;\n- [Moral uncertainty](https://arbital.com/p/) about what ([idealized](https://arbital.com/p/) versions of) (any particular person's) morality would consider to be the key properties of personhood;\n- Our present-day uncertainty about what efficient models in advanced agents would look like.\n\nIt'd help if we knew the answers to these questions, but the fact that we don't know doesn't mean we can thereby conclude that any particular model is not a person. (This would be some mix of [argumentum ad ignorantiam](https://arbital.com/p/), and [availability bias](https://arbital.com/p/) making us think that a scenario is unlikely when it is hard to visualize.) In the limit of infinite computing power, the epistemically best models of humans would almost certainly involve simulating many possible versions of them; superintelligences would have [very large amounts of computing power](https://arbital.com/p/) and we don't know at what point we come close enough to this [limiting property](https://arbital.com/p/) to cross the threshold.\n\n## Scope of potential disaster\n\nThe prospect of mindcrime is an especially alarming possibility because sufficiently advanced agents, *especially* if they are using computationally efficient models, might consider *very large numbers* of hypothetical possibilities that would count as people. There's no limit that says that if there are seven billion people, an agent will run at most seven billion models; the agent might be considering many possibilities per individual human. 
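As a purely illustrative calculation (every figure below is an invented assumption, since no specific numbers are claimed here), even modest modeling rates multiply out to populations of person-like computations far larger than the number of humans who have ever lived:

```python
# All figures are assumptions chosen only to show how quickly the multiplication runs away.
humans_modeled = 7_000_000_000        # everyone alive
hypotheses_per_human = 1_000          # candidate models entertained per person
refreshes = 100                       # times those hypotheses are regenerated and discarded

model_instances = humans_modeled * hypotheses_per_human * refreshes
print(f"{model_instances:.1e} person-like model instances")  # 7.0e+14
print(model_instances // 100_000_000_000)                    # ~7000x the roughly 100 billion humans ever born
```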
This would not be an [astronomical disaster](https://arbital.com/p/) since it would not (by hypothesis) wipe out our posterity and our intergalactic future, but it could be a disaster orders of magnitude larger than the Holocaust, the Mongol Conquest, the Middle Ages, or all human tragedy to date.\n\n## Development-order issue\n\nIf we ask an AI to predict what we would say if we had a thousand years to think about the problem of defining personhood or think about which causal processes are 'conscious', this seems unusually likely to cause the AI to commit mindcrime in the course of answering the question. Even asking the AI to think abstractly about the problem of consciousness, or predict by abstract reasoning what humans might say about it, seems unusually likely to result in mindcrime. There thus exists a [development order issue](https://arbital.com/p/) preventing us from asking a Friendly AI to solve the problem for us, since to file this request safely and without committing mindcrime, we would need the request to already have been completed.\n\nThe prospect of enormous-scale disaster militates against 'temporarily' tolerating mindcrime inside a system, while, e.g., an [extrapolated-volition](https://arbital.com/p/) or [approval-based](https://arbital.com/p/) agent tries to compute the code or design of a non-mindcriminal agent. Depending on the agent's efficiency, and secondarily on its computational limits, a tremendous amount of moral harm might be done during the 'temporary' process of computing an answer.\n\n## Weirdness\n\nLiterally nobody outside of MIRI or FHI ever talks about this problem.\n\n# Nonperson predicates\n\nA [nonperson predicate](https://arbital.com/p/1fv) is an [effective](https://arbital.com/p/) test that we, or an AI, can use to determine that some computer program is definitely *not* a person. In principle, a nonperson predicate needs only two possible outputs, "Don't know" and "Definitely not a person". It's acceptable for many actually-nonperson programs to be labeled "don't know", so long as no people are labeled "definitely not a person".\n\nIf the above were the only requirement, one simple nonperson predicate would be to label everything "don't know". The implicit difficulty is that the nonperson predicate must also pass some programs of high complexity that do things like "acceptably model humans" or "acceptably model future versions of the AI".\n\nBesides addressing mindcrime scenarios, Yudkowsky's [original proposal](http://lesswrong.com/lw/x4/nonperson_predicates/) was also aimed at knowing that the AI design itself was not conscious, or not a person.\n\nIt seems likely to be very hard to find a good nonperson predicate:\n\n- Not all philosophical confusions and computational difficulties are averted by asking for a partial list of unconscious programs instead of a total list of conscious programs. Even if we don't know which properties are sufficient, we'd need to know something solid about properties that are necessary for consciousness or sufficient for nonpersonhood.\n- We can't pass once-and-for-all any class of programs that's Turing-complete. We can't say once and for all that it's safe to model gravitational interactions in a solar system, if enormous gravitational systems could encode computers that encode people.\n- The [https://arbital.com/p/42](https://arbital.com/p/42) problem seems particularly worrisome here. 
If we block off some options for modeling humans directly, the *next best* option is unusually likely to be conscious. Even if we rely on a whitelist rather than a blacklist, this may lead to a whitelisted \"gravitational model\" that secretly encodes a human, and so on.\n\n# Research avenues\n\n+ [Behaviorism](https://arbital.com/p/102): Try to create a [limited AI](https://arbital.com/p/5b3) that does not model other minds or possibly even itself, except using some narrow class of agent models that we are pretty sure will not be sentient. This avenue is potentially motivated for other reasons as well, such as avoiding [probable environment hacking](https://arbital.com/p/5j) and averting [programmer manipulation](https://arbital.com/p/).\n\n+ Try to define a nonperson predicate that whitelists enough programs to carry out some [pivotal achievement](https://arbital.com/p/6y).\n\n+ Try for an AI that can bootstrap our understanding of consciousness and tell us about what we would define as a person, while committing a relatively small amount of mindcrime, with all computed possible-people being stored rather than discarded, and the modeled agents being entirely happy, mostly happy, or non-suffering. E.g., put a happy person at the center of the approval-directed agent, and try to oversee the AI's algorithms and ask it not to use Monte Carlo simulations if possible.\n\n+ Ignore the problem in all pre-interstellar stages because it's still relatively small compared to astronomical stakes and therefore not worth significant losses in success probability. (This may [backfire](https://arbital.com/p/) under some versions of the Simulation Hypothesis.)\n\n+ Try to [finish](https://arbital.com/p/112) the philosophical problem of understanding which causal processes experience sapience (or are otherwise objects of ethical value), in the next couple of decades, to sufficient detail that it can be crisply stated to an AI, with sufficiently complete coverage that it's not subject to the [https://arbital.com/p/42](https://arbital.com/p/42) problem.", "date_published": "2016-12-29T05:36:44Z", "authors": ["Alexei Andreev", "Jeremy Perret", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A huge amount of harm could occur if a [machine intelligence](https://arbital.com/p/2c) turns out to contain lots of [conscious subprograms](https://arbital.com/p/18j) enduring poor living conditions. One worry is that this might happen if an AI models humans in too much detail."], "tags": ["Nearest unblocked strategy", "B-Class"], "alias": "6v"} {"id": "4e7c51f3498e9932d48672b9a4cead1f", "title": "Task-directed AGI", "url": "https://arbital.com/p/task_agi", "source": "arbital", "source_type": "text", "text": "A task-based AGI is an AGI [intended](https://arbital.com/p/6h) to follow a series of human-originated orders, with these orders each being of limited scope - \"satisficing\" in the sense that they can be [accomplished using bounded amounts of effort and resources](https://arbital.com/p/4mn) (as opposed to the goals being more and more fulfillable using more and more effort).\n\nIn [Bostrom's typology](https://arbital.com/p/1g0), this is termed a \"Genie\". 
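One way to picture 'limited scope' concretely is that a Task comes with a yes/no completion test and hard resource bounds, and counts as finished the moment the test passes. The sketch below is only a schematic of that idea; every name in it is invented, and nothing about such a schema makes a task-based AGI safe by itself.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class BoundedTask:
    """A satisficing order: a completion test plus hard effort and time bounds."""
    description: str
    is_done: Callable[[Any], bool]   # a threshold test, not an open-ended score to maximize
    effort_budget: int               # bounded resources the task may consume
    deadline_steps: int              # bounded time window

def run(agent, task, user):
    """Work toward the task, querying the user on ambiguous steps, and stop once it is done."""
    effort_spent = 0
    for _ in range(task.deadline_steps):
        if task.is_done(agent.observe()):
            return True                         # done means done: no further optimization
        if effort_spent >= task.effort_budget:
            break                               # bounded effort, even if the task is unfinished
        plan = agent.propose_step(task)
        if agent.is_ambiguous(plan):            # the 'online' property: query the user mid-task
            plan = user.clarify(plan)
        effort_spent += agent.execute(plan)     # assume execute() reports the effort consumed
    return task.is_done(agent.observe())
```

A task-based AGI, in this sense, is an agent pointed at a stream of such bounded orders rather than at an open-ended goal.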
It contrasts with a \"Sovereign\" AGI that acts autonomously in the pursuit of long-term real-world goals.\n\nBuilding a safe Task AGI might be easier than building a safe Sovereign for the following reasons:\n\n- A Task AGI can be \"online\"; the AGI can potentially [query the user](https://arbital.com/p/2qq) before and during Task performance. (Assuming an ambiguous situation arises, and is successfully identified as ambiguous.)\n- A Task AGI can potentially be [limited](https://arbital.com/p/5b3) in various ways, since a Task AGI doesn't need to be *as powerful as possible* in order to accomplish its limited-scope Tasks. A Sovereign would presumably engage in all-out self-improvement. (This isn't to say Task AGIs would automatically not self-improve, only that it's possible *in principle* to limit the power of a Task AGI to only the level required to do the targeted Tasks, *if* the associated safety problems can be solved.)\n- Tasks, by assumption, are limited in scope - they can be accomplished and done, inside some limited region of space and time, using some limited amount of effort which is then complete. (To gain this advantage, a state of Task accomplishment should not go higher and higher in preference as more and more effort is expended on it open-endedly.)\n- Assuming that users can figure out [intended goals](https://arbital.com/p/6h) for the AGI that are [valuable](https://arbital.com/p/55) and [pivotal](https://arbital.com/p/6y), the [identification problem](https://arbital.com/p/2rz) for describing what constitutes a safe performance of that Task, might be simpler than giving the AGI a [complete description](https://arbital.com/p/) of [normativity in general](https://arbital.com/p/55). That is, the problem of communicating to an AGI an adequate description of \"cure cancer\" (without killing patients or causing other side effects), while still difficult, might be simpler than an adequate description of all normative value. Task AGIs fall on the narrow side of [https://arbital.com/p/1vt](https://arbital.com/p/1vt).\n\nRelative to the problem of building a Sovereign, trying to build a Task AGI instead might step down the problem from \"impossibly difficult\" to \"insanely difficult\", while still maintaining enough power in the AI to perform [pivotal acts](https://arbital.com/p/6y).\n\nThe obvious disadvantage of a Task AGI is [moral hazard](https://arbital.com/p/2sb) - it may tempt the users in ways that a Sovereign would not. A Sovereign has moral hazard chiefly during the development phase, when the programmers and users are perhaps not yet in a position of special relative power. A Task AGI has ongoing moral hazard as it is used.\n\n[https://arbital.com/p/2](https://arbital.com/p/2) has suggested that people only confront many important problems in value alignment when they are thinking about Sovereigns, but that at the same time, Sovereigns may be impossibly hard in practice. 
Yudkowsky advocates that people think about Sovereigns first and list out all the associated issues before stepping down their thinking to Task AGIs, because thinking about Task AGIs may result in premature pruning, while thinking about Sovereigns is more likely to generate a complete list of problems that can then be checked against particular Task AGI approaches to see if those problems have become any easier.\n\nThree distinguished subtypes of Task AGI are these:\n\n- **[Oracles](https://arbital.com/p/6x)**, an AI that is intended to only answer questions, possibly from some restricted question set.\n- **[Known-algorithm AIs](https://arbital.com/p/1fy)**, which are not self-modifying or very weakly self-modifying, such that their algorithms and representations are mostly known and mostly stable.\n- **[Behaviorist Genies](https://arbital.com/p/102)**, which are meant to not model human minds or model them in only very limited ways, while having great material understanding (e.g., potentially the ability to invent and deploy nanotechnology).\n\n# Subproblems\n\nThe problem of making a safe genie invokes numerous subtopics such as [low impact](https://arbital.com/p/2pf), [mild optimization](https://arbital.com/p/2r8), and [conservatism](https://arbital.com/p/2qp) as well as numerous standard AGI safety problems like [reflective stability](https://arbital.com/p/1fx) and safe [identification](https://arbital.com/p/6c) of [intended goals](https://arbital.com/p/6h).\n\n([See here for a separate page on open problems in Task AGI safety that might be ready for current research.](https://arbital.com/p/2mx))\n\nSome further problems beyond those appearing in the page above are:\n\n- **Oracle utility functions** (that make the Oracle not wish to leave its box or optimize its programmers)\n- **Effable optimization** (the opposite of [cognitive uncontainability](https://arbital.com/p/9f))\n- **Online checkability**\n - Explaining things to programmers [without putting the programmers inside an argmax](https://arbital.com/p/30b) for how well you are 'explaining' things to them\n- **Transparency**\n- [Do What I Mean](https://arbital.com/p/2s1)", "date_published": "2017-03-25T05:35:00Z", "authors": ["Malcolm Ocean", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A task-based AGI or \"genie\" is an AGI [intended](https://arbital.com/p/6h) to follow a series of human orders, rather than autonomously pursuing long-term goals. A Task AGI might be easier to render safe, since:\n\n- It's possible to [query the user](https://arbital.com/p/2qq) before and during a Task.\n- [Tasks are satisficing](https://arbital.com/p/4mn) - they're of limited scope and can be fully accomplished using a limited effort. 
(In other words, Tasks should not become more and more accomplished as more and more effort is put into them.)\n- Adequately [identifying](https://arbital.com/p/2rz) what it means to safely \"cure cancer\" might be simpler than adequately identifying [all normative value](https://arbital.com/p/55).\n- Task AGIs can be limited in various ways, rather than self-improving as far as possible, so long as they can still carry out at least some [pivotal](https://arbital.com/p/6y) Tasks.\n\nThe obvious disadvantage of a Task AGI is [moral hazard](https://arbital.com/p/2sb) - it may tempt the users in ways that an autonomous AI would not.\n\nThe problem of making a safe Task AGI invokes numerous subtopics such as [low impact](https://arbital.com/p/2pf), [mild optimization](https://arbital.com/p/2r8), and [conservatism](https://arbital.com/p/2qp) as well as numerous standard AGI safety problems like [goal identification](https://arbital.com/p/) and [reflective stability](https://arbital.com/p/1fx)."], "tags": ["Work in progress"], "alias": "6w"} {"id": "fc4e17992da5596a71d30cac7b42e299", "title": "Oracle", "url": "https://arbital.com/p/oracle", "source": "arbital", "source_type": "text", "text": "Oracles are a subtype of [Genies](https://arbital.com/p/6w) putatively designed to safely answer questions, and *only* to answer questions. Oracles are often assumed to be [Boxed](https://arbital.com/p/6z) and the study of Oracles is sometimes taken to be synonymous with the study of Boxed Oracles.", "date_published": "2015-12-16T15:16:14Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "6x"} {"id": "4e6f57d162ebc68697c7ce06467bde14", "title": "Pivotal act", "url": "https://arbital.com/p/pivotal", "source": "arbital", "source_type": "text", "text": "The term 'pivotal act' in the context of [AI alignment theory](https://arbital.com/p/2v) is a [guarded term](https://arbital.com/p/10l) to refer to actions that will make a large positive difference a billion years later. Synonyms include 'pivotal achievement' and 'astronomical achievement'.\n\nWe can contrast this with *existential catastrophes* (or 'x-catastrophes'), events that will make a large *negative* difference a billion years later. Collectively, this page will refer to pivotal acts and existential catastrophes as *astronomically significant events* (or 'a-events').\n\n'Pivotal *event*' is a deprecated term for referring to astronomically significant events, and 'pivotal catastrophe' is a deprecated term for existential catastrophes. 'Pivotal' was originally used to refer to the superset (a-events), but AI alignment researchers kept running into the problem of lacking a crisp way to talk about 'winning' actions in particular, and their distinctive features.\n\nUsage has therefore shifted such that (as of late 2021) researchers use 'pivotal' and 'pivotal act' to refer to *good* events that upset the current gameboard - events that decisively settle a [win](https://arbital.com/p/55), or drastically increase the probability of a win.\n\n### Reason for guardedness\n\n[Guarded definitions](https://arbital.com/p/10l) are deployed where there is reason to suspect that a concept will otherwise be over-extended. 
The case for having a guarded definition of 'pivotal act' (and another for 'existential catastrophe') is that, after it's been shown that event X is maybe not as important as originally thought, one side of that debate may be strongly tempted to go on arguing that, wait, really it could be "relevant" (by some [strained](https://arbital.com/p/10m) line of possibility).\n\n**Example 1**: In the [Zermelo-Fraenkel provability oracle](https://arbital.com/p/70) dialogue, Alice and Bob consider a series of possible ways that an untrusted [oracle](https://arbital.com/p/6x) could break an attempt to [box](https://arbital.com/p/6z) it. We end with an extremely boxed oracle that can only output machine-checkable proofs of predefined theorems in Zermelo-Fraenkel set theory, with the proofs themselves being thrown away once machine-verified.\n\nThe dialogue then observes that there isn't currently any obvious way to save the world by finding out that particular pre-chosen theorems are provable.\n\nIt may then be tempting to argue that this device could greatly advance the field of mathematics, and that math is relevant to the AI alignment problem. However, at least given that particular proposal for using the ZF oracle, the basic rules of the AI-development playing field would remain the same, the AI alignment problem would not be *finished* nor would it have moved on to a new phase, the world would still be in danger (neither safe nor destroyed), etcetera.\n\n(This doesn't rule out that tomorrow some reader will think of some spectacularly clever use for a ZF oracle that *does* upset the chessboard and get us on a direct path to winning where we know what we need to do from there - and in this case MIRI would reclassify the ZF oracle as a high-priority research avenue!)\n\n**Example 2**: Suppose a funder, worried about the prospect of advanced AIs wiping out humanity, starts offering grants for "AI safety". It may then be tempting to try to write papers that you know you can finish, like a paper on robotic cars [causing unemployment](https://arbital.com/p/3b) in the trucking industry, or a paper on who holds legal liability when a factory machine crushes a worker. These have the advantage of being much less difficult problems than those involved in making something actually smarter than you be safe.\n\nBut while it's true that crushed factory workers and unemployed truckers are both, ceteris paribus, bad, they are not *existential catastrophes that transform all galaxies inside our future light cone into paperclips*, and the latter category seems worth distinguishing.\n\nThis definition needs to be guarded because there may then be a temptation for the grantseeker to argue, "Well, if AI causes unemployment, that could slow world economic growth, which will make countries more hostile to each other, which would make it harder to prevent an AI arms race." The possibility of something ending up having a *non-zero impact* on astronomical stakes is not the same concept as events that have a *game-changing impact* on astronomical stakes.\n\nThe question is what are the largest lowest-hanging fruit in astronomical stakes, not whether something can be argued as defensible by pointing to a non-zero astronomical impact.\n\n**Example 3**: Suppose a [behaviorist genie](https://arbital.com/p/102) is restricted from modeling human minds in any great detail, but is still able to build and deploy molecular nanotechnology. 
Moreover, the AI is able to understand the instruction, \"Build a device for scanning human brains and running them at high speed with minimum simulation error\", and is able to work out a way to do this without simulating whole human brains as test cases. The genie is then used to upload a set of, say, fifty human researchers, and run them at 10,000-to-1 speeds.\n\nThis accomplishment would not of itself save the world or destroy it - the researchers inside the simulation would still need to solve the alignment problem, and might not succeed in doing so.\n\nBut it would (positively) *upset the gameboard* and change the major determinants of winning, compared to the default scenario where the fifty researchers are in an equal-speed arms race with the rest of the world, and don't have practically-unlimited time to check their work. The event where the genie was used to upload the researchers and run them at high speeds would be a critical event, a hinge where the optimum strategy was drastically different before versus after that pivotal act.\n\n**Example 4**: Suppose a paperclip maximizer is built, self-improves, and converts everything in its future light cone into paperclips. The fate of the universe is then settled in the negative direction, so building the paperclip maximizer was an existential catastrophe.\n\n**Example 5**: A mass simultaneous malfunction of robotic cars causes them to deliberately run over pedestrians in many cases. Humanity buries its dead, picks itself up, and moves on. This was not an existential catastrophe, even though it may have nonzero influence on future AI development.\n\n**Discussion**: Many [strained arguments](https://arbital.com/p/10m) for X being a pivotal act have a step where X is an input into a large pool of goodness that also has many other inputs. A ZF provability oracle would advance mathematics, and mathematics can be useful for alignment research, but there's nothing obviously game-changing about a ZF oracle that's specialized for advancing alignment work, and it's unlikely that the effect on win probabilities would be large relative to the many other inputs into total mathematical progress.\n\nSimilarly, handling trucker disemployment would only be one factor among many in world economic growth.\n\nBy contrast, a genie that uploaded human researchers putatively would *not* be producing merely one upload among many; it would be producing the only uploads where the default was otherwise no uploads. In turn, these uploads could do decades or centuries of unrushed serial research on the AI alignment problem, where the alternative was rushed research over much shorter timespans; and this can plausibly make the difference by itself between an AI that achieves ~100% of value versus an AI that achieves ~0% of value. 
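A quick back-of-envelope check supports the 'decades or centuries' framing; the speedup and group size are the ones named in this example, while the wall-clock duration is an assumption added here.

```python
speedup = 10_000          # subjective time per unit of wall-clock time (from the example above)
wall_clock_weeks = 4      # assumed: roughly one month of outside time
researchers = 50          # the uploaded research group from the example above

serial_years_each = speedup * wall_clock_weeks / 52
print(round(serial_years_each))                 # ~769 subjective years of unrushed serial thought
print(round(serial_years_each * researchers))   # ~38,462 researcher-years in aggregate
```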
At the end of the extrapolation where we ask what difference everything is supposed to make, we find a series of direct impacts producing events qualitatively different from the default, ending in a huge percentage difference in how much of all possible value gets achieved.\n\nBy having narrow and guarded definitions of 'pivotal acts' and 'existential catastrophes', we can avoid bait-and-switch arguments for the importance of research proposals, where the 'bait' is raising the apparent importance of 'AI safety' by discussing things with large direct impacts on astronomical stakes (like a paperclip maximizer or Friendly sovereign) and the 'switch' is to working on problems of dubious astronomical impact that are inputs into large pools with many other inputs.\n\n### 'Dealing a deck of cards' metaphor\n\nThere's a line of reasoning that goes, \"But most consumers don't want general AIs, they want voice-operated assistants. So companies will develop voice-operated assistants, not general AIs.\" But voice-operated assistants are themselves not astronomically significant events; developing them doesn't prevent general AIs from being developed later. So even though this non-astronomically-significant event precedes a more significant event, it doesn't mean we should focus on the earlier event instead.\n\nNo matter how many non-game-changing 'AIs' are developed, whether playing great chess or operating in the stock market or whatever, the underlying research process will keep churning and keep turning out other and more powerful AIs.\n\nImagine a deck of cards which has some aces (superintelligences) and many more non-aces. We keep dealing through the deck until we get a black ace, a red ace, or some other card that *stops the deck from dealing any further*.\n\nA non-ace Joker card that permanently prevents any aces from being drawn would be 'astronomically significant' (not necessarily good, but definitely astronomically significant).\n\nA card that shifts the further distribution of aces in the deck from 10% red to 90% red would be pivotal; we could see this as a metaphor for the hoped-for result of Example 3 (uploading the researchers), even though the game is not then stopped and assigned a score.\n\nA card that causes the deck to be dealt 1% slower or 1% faster, that eliminates a non-ace card, that adds a non-ace card, that changes the proportion of red non-ace cards, etcetera, would not be astronomically significant. 
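The difference between these cases can be checked directly with a minimal simulation (a sketch only; the card counts are arbitrary). What the final outcome depends on is the ratio of red aces to black aces; padding the deck with more non-ace cards, the analogue of the non-game-changing events above, leaves it untouched.

```python
import random

def p_first_ace_is_red(red_aces, black_aces, non_aces, trials=20_000):
    """Estimate the probability that the first ace dealt from a shuffled deck is red."""
    hits = 0
    for _ in range(trials):
        deck = ["red"] * red_aces + ["black"] * black_aces + ["non-ace"] * non_aces
        random.shuffle(deck)
        first_ace = next(card for card in deck if card != "non-ace")
        hits += (first_ace == "red")
    return hits / trials

random.seed(0)
print(p_first_ace_is_red(1, 1, 50))    # ~0.50
print(p_first_ace_is_red(1, 1, 500))   # still ~0.50: extra non-ace cards change nothing that matters
print(p_first_ace_is_red(9, 1, 50))    # ~0.90: shifting the ace ratio is the change that matters
```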
A card that raises the probability of a red ace from 50% to 51% would be highly desirable, but not pivotal - it would not qualitatively change the nature of the game.\n\nGiving examples of non-astronomically-significant events that could precede or be easier to accomplish than astronomically significant ones doesn't change the nature of the game where we keep dealing until we get a black ace or red ace.\n\n### Examples of possible events\n\nExistential catastrophes:\n\n- non-value-aligned AI is built, takes over universe\n- a complete and detailed synaptic-vesicle-level scan of a human brain results in cracking the cortical and cerebellar algorithms, which rapidly leads to non-value-aligned neuromorphic AI\n\nPotential pivotal acts:\n\n- human intelligence enhancement powerful enough that the best enhanced humans are qualitatively and significantly smarter than the smartest non-enhanced humans\n- a limited [Task AGI](https://arbital.com/p/6w) that can:\n - upload humans and run them at speeds more comparable to those of an AI\n - prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)\n - design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)\n\nNon-astronomically-significant events:\n\n- curing cancer (good for you, but it didn't resolve the alignment problem)\n- proving the Riemann Hypothesis (ditto)\n- an extremely expensive way to augment human intelligence by the equivalent of 5 IQ points that doesn't work reliably on people who are already very smart\n- making a billion dollars on the stock market\n- robotic cars devalue the human capital of professional drivers, and mismanagement of aggregate demand by central banks plus burdensome labor market regulations is an obstacle to their re-employment\n\nBorderline-astronomically-significant cases:\n\n- unified world government with powerful monitoring regime for 'dangerous' technologies\n- widely used gene therapy that brought anyone up to a minimum equivalent IQ of 120\n\n### Centrality to limited AI proposals\n\nWe can view the general problem of Limited AI as having the central question: **What is a pivotal act, such that an AI which does that thing and not some other things is therefore a whole lot safer to build?**\n\nThis is not a trivial question because it turns out that most interesting things require general cognitive capabilities, and most interesting goals can require arbitrarily complicated value identification problems to pursue safely.\n\nIt's trivial to create an \"AI\" which is absolutely safe and can't be used for any pivotal acts. E.g. Google Maps, or a rock with \"2 + 2 = 4\" painted on it. 
\n\n(For arguments that Google Maps could potentially help researchers drive to work faster or that a rock could potentially be used to bash in the chassis of a hostile superintelligence, see the pages on [guarded definitions](https://arbital.com/p/10l) and [strained arguments](https://arbital.com/p/10m).)\n\n### Centrality to concept of 'advanced agent'\n\nWe can view the notion of an advanced agent as \"agent with enough cognitive capacity to cause an astronomically significant event, positive or negative\"; the [advanced agent properties](https://arbital.com/p/2c) are either those properties that might lead up to participation in an astronomically significant event, or properties that might play a critical role in determining the AI's trajectory and hence how the event turns out.\n\n### Policy of focusing effort on enacting pivotal acts or preventing existential catastrophes\n\nObvious utilitarian argument: doing something with a big positive impact is better than doing something with a small positive impact.\n\nIn the larger context of [effective altruism](https://arbital.com/p/) and [adequacy theory](https://arbital.com/p/), the issue is a bit more complicated. Reasoning from [adequacy theory](https://arbital.com/p/) says that there will often be barriers (conceptual or otherwise) to the highest-return investments. When we find that hugely important things seem *relatively neglected* and hence promising of high marginal returns if solved, this is often because there's some conceptual barrier to running ahead and doing them.\n\nFor example: tackling the hardest problems is often much scarier (you're not sure if you can make any progress on describing a self-modifying agent that provably has a stable goal system) than 'bouncing off' to some easier, more comprehensible problem (like writing a paper about the impact of robotic cars on unemployment, where you're very sure you can in fact write a paper like that at the time you write the grant proposal).\n\nThe obvious counterargument is that perhaps you can't make progress on your problem of self-modifying agents, perhaps it's too hard. But from this it doesn't follow that the robotic-cars paper is what we should be doing instead - the robotic cars paper only makes sense if there are *no* neglected tractable investments that have bigger relative marginal inputs into astronomically significant events.\n\nIf there are in fact some neglected tractable investments in gameboard-flipping acts, then we can expect a search for gameboard-flipping acts to turn up superior places to invest effort. 
But a failure mode of this search is if we fail to cognitively guard the concept of 'pivotal act'.\n\nIn particular, if we're allowed to have indirect arguments for 'relevance' that go through big common pools of goodness like 'friendliness of nations toward each other', then the pool of interventions inside that concept is so large that it will start to include things that are optimized to be appealing under more usual metrics, such as papers that don't seem unnerving and that somebody knows they can write.\n\nSo if there's no guarded concept of research on 'pivotal' things, we will end up with very standard research being done, the sort that would otherwise be done by academia anyway, and our investment will end up having a low expected marginal impact on the final outcome.\n\nThis sort of qualitative reasoning about what is or isn't 'pivotal' wouldn't be necessary if we could put solid numbers on the impact of each intervention on the probable achievement of astronomical goods. But that is an unlikely 'if'. Thus, there's some cause to reason qualitatively about what is or isn't 'pivotal', as opposed to just calculating out the numbers, when we're trying to pursue [astronomical altruism](https://arbital.com/p/).", "date_published": "2021-11-16T04:22:19Z", "authors": ["Alexei Andreev", "Rob Bensinger", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Guarded definition", "B-Class", "Definition", "Glossary (Value Alignment Theory)"], "alias": "6y"} {"id": "0b183ac83f622450a0ff7189876df7bf", "title": "Boxed AI", "url": "https://arbital.com/p/AI_boxing", "source": "arbital", "source_type": "text", "text": "AI-boxing is the theory that deals in machine intelligences that are allegedly safer due to allegedly having extremely restricted manipulable channels of causal interaction with the outside universe.\n\nAI-boxing theory includes:\n\n- The straightforward problem of building elaborate sandboxes (computers and simulation environments designed not to have any manipulable channels of causal interaction with the outside universe).\n- [Foreseeable difficulties](https://arbital.com/p/6r) whereby the remaining, limited channels of interaction may be exploited to manipulate the outside universe, especially the human operators.\n- The attempt to design [preference frameworks](https://arbital.com/p/5f) that are not incentivized to go outside the Box, not incentivized to manipulate the outside universe or human operators, and incentivized to answer questions accurately or perform whatever other activity is allegedly to be performed inside the box.\n\nThe central difficulty of AI boxing is to describe a channel which cannot be used to manipulate the human operators, but which provides information relevant enough to be [pivotal or game-changing](https://arbital.com/p/6y) relative to larger events. 
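At the restrictive extreme, the channel can be narrowed to a single machine-checked bit. The sketch below is a toy stand-in for that shape, with all names invented and a trivially checkable claim (compositeness of a number) in place of a real proof obligation: the untrusted computation hands over a certificate, a dumb local check verifies it, the certificate is discarded, and only the verdict leaves the box.

```python
def boxed_one_bit_channel(untrusted_solver, n):
    """Run untrusted code inside the box and let only a single verified bit escape.

    Toy stand-in for a proof-verifier setup: the certificate (here, a purported
    nontrivial factor of n) is checked locally and then thrown away."""
    certificate = untrusted_solver(n)          # arbitrary untrusted output
    verified = (isinstance(certificate, int)
                and 1 < certificate < n
                and n % certificate == 0)
    del certificate                            # the certificate itself never leaves the box
    return 1 if verified else 0

# An example untrusted solver; nothing downstream relies on trusting it.
naive_factor = lambda n: next((d for d in range(2, n) if n % d == 0), None)
print(boxed_one_bit_channel(naive_factor, 221))   # 1  (221 = 13 x 17, so the claim checks out)
```

The open question is not whether such a channel can be built, but whether any single verified bit, or short stream of them, is worth enough to matter.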
For example, it seems not unthinkable that [we could safely extract, from a boxed AI setup, reliable information that prespecified theorems had been proved within Zermelo-Fraenkel set theory](https://arbital.com/p/70), but there is no known way to save the world if only we could sometimes know that prespecified theorems had been reliably proven in Zermelo-Fraenkel set theory.", "date_published": "2015-12-16T15:39:24Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "6z"} {"id": "b6f8069066071f51524268b730d52392", "title": "Zermelo-Fraenkel provability oracle", "url": "https://arbital.com/p/ZF_provability_oracle", "source": "arbital", "source_type": "text", "text": "The Zermelo-Fraenkel provability [oracle](https://arbital.com/p/6x) is the endpoint of the following conversation:\n\nAlice: Let's build [an AI that only answers questions](https://arbital.com/p/6x). That sounds pretty safe to me. In fact, I don't understand why all you lot keep talking about building AIs that *don't* just answer questions, when the question-answering systems sound so much safer to me.\n\nBob: Suppose you screw up the goal system and the AI starts wanting to do something besides answer questions? This system is only as safe as your ability to make an AI with a [stable goal system](https://arbital.com/p/71) that actually implements your [intended goal](https://arbital.com/p/6h) of \"not doing anything except answering questions\", which is [difficult to formalize](https://arbital.com/p/oracle_utility_function).\n\nAlice: But surely it's *safer*? Surely it's *easier* to develop a goal system that just safely answers questions?\n\nBob: How much safer? How much easier? If we hypothesize restricting the abilities of the AI in this way, we're reducing the potential payoff if we actually succeed. An AI that can act in the world and build its own nanotechnology can potentially protect us from other AIs becoming superintelligent, at least as a temporary fix to the immediate dilemma and the immediate danger of death, if not as an eternal solution. An AI that can act in the real world using nanotechnology would be, in our parlance, a [pivotal achievement](https://arbital.com/p/6y). Getting from a question-answering AI to a safer world seems a lot harder, unless the question you ask the AI is \"How to do I build an intelligent, directed nanotechnological system?\" in which case you might as not well bother pretending it's an Oracle. So we're losing something by supposing that the system only answers questions - we're losing some of the pivotal-ness of the hypothetical success. If you'll pardon the extreme example, if the Oracle system is only 0.01% safer or 99.9% as difficult to build as a Genie, then it definitely wouldn't be worth the decreased probability of being able to win the larger dilemma using a successful Oracle instead of a successful Genie. So the question is, *how* useful is the Oracle compared to the Genie, and given that you're going to use it that way, *how* much safer is it, or easier to build, compared to the Genie? An \"Oracle\" that immediately writes the code for a Genie is equally useful, but it's definitely not any easier to build safely.\n\nAlice: But there *are* substantial gains in safety, or in ease of building the system to adequate safety, from adopting a question-answering design. 
Like, imagine that the computer is in a sealed chamber on the Moon, without any attached robots or robot arms, and just a single radio downlink to send the answers to the questions we ask. Then we'll be safe even if the goal design fails and it turns against us, because, you know, it'll be stuck on the Moon without any hands. To me this obviously seems a *lot* safer than, say, a Genie running around in the world with its own nanotechnology. Less useful assuming it works, yes, but also a lot less dangerous assuming it fails.\n\nBob: Are you sure you're properly [respecting the intelligence of something vastly smarter than you](respect_intelligence-1)? We're not smart enough to [consider all the strategies it might imagine](cognitive_uncontainability-1). What if there's some creative thing you can do even if you're in a sealed chamber on the Moon? There are radio waves leaving the chamber that depend on the flows of electrons in transistors -\n\nAlice: Okay, a sealed chamber *and* we're running it in three layers of sandboxed simulation *and* we haven't told the AI about the hardware it's running on *or* about any of the emulation layers. I think it'd be pretty hard for the AI to manipulate the radio waves in that condition. It just wouldn't be able to figure out a relation between anything it could control, and the radio waves going out, and even if it could modulate the radio waves some tiny amount, do you really think that could somehow end the world?\n\n\n\n**[https://arbital.com/p/4v](https://arbital.com/p/4v)**\n\nbob: it can manipulate the human\n\nalice: we won't tell it about humans\n\nbob: it may be able to deduce an awful lot by looking at its own code.\n\nalice: okay it gives a bare answer, like just a few bits.\n\nbob: timing problem\n\nalice: time it, obviously.\n\nbob: what the heck do you think you can do with just a few bits that's interesting?\n\nalice: we'll ask it which of two policies is the better policy for getting the world out of the AI hole.\n\nbob: but how can you trust it?\n\nalice: it can't coordinate with future AIs, but humans are trustworthy because we're honorable.\n\nbob: cognitive uncontainability zaps you! it can totally coordinate. we have proof thereof. so any question it answers could just be steering the future toward a hostile AI. you can't trust it if you screwed up the utility function.\n\nalice: okay fine, it outputs proofs of predetermined theorems in ZF that go into a verifier which outputs 0 or 1 and is then destroyed by thermite.\n\nbob: I agree that this complete system might actually work.\n\nalice: yay!\n\nbob: but unfortunately it's now useless.\n\nalice: useless? we could ask it if the Riemann Hypothesis is provable in ZF -\n\nbob: and how is that going to save the world, exactly?\n\nalice: it could generally advance math progress.\n\nbob: and now we're back at the \"contributing one more thingy to a big pool\" that people do when they're trying to promote their non-pivotal exciting fun thing to pivotalness.\n\nalice: that's not fair. it could advance math a lot. beyond what humans could do for two hundred years.\n\nbob: but the kind of math we do at MIRI frankly doesn't depend on whether RH is true. it's very rare that we know exactly what theorem we might want to prove, and we don't really know whether it's a consequence of ZF, and if we do know, we can move on. that's basically never happened in our whole history, except for maybe 7 days at a time. 
If we'd had your magic box the whole time, there are certain fields of math that would be decades ahead, but our field would be maybe a week or a month ahead. Most of our work is in thinking up the theorems, not proving them.\n\nAlice: So what if it could suggest theorems to prove? Like show you the form of a tiling agent?\n\nBob: If it's showing us the outline of a framework for AI and proving something about it, I'm now very worried that it's going to manage to smuggle in some deadly trick of decision theory that lets it take over somehow. We've found all sorts of gotchas that could potentially do that, like Pascal's Mugging, [probable environment hacking](https://arbital.com/p/5j), blackmail... By loosening the restriction so that it can suggest which theorems to prove, you've given it a rich output channel and a wide range of options for influencing future AI design.\n\nPoints:\n\n- Supports the methodology of foreseeable difficulties and its rule of explicitization and critique: when something looks easy or safe to someone, it often isn't.\n- Intuitive notions of what sort of restricted or limited AI should be \"much easier to build\" are often wrong, and this screws up the first intuition of how much safety we're gaining in exchange for putatively relinquishing some capabilities.\n - By the time you correct this bias, the amount of capability you actually have to relinquish in order to get a bunch of safety and development ease often makes the putative AI not have any obvious use.\n - It is wise to be explicit about how you intend to use the restricted AI to save the world, because trying to think of concrete details here will often reveal that anything you think of either (1) isn't actually very safe or (2) isn't actually very useful.", "date_published": "2017-03-14T01:42:50Z", "authors": ["Rob Bensinger", "Alexei Andreev", "Eric Bruylant", "Eliezer Yudkowsky", "Gurkenglas"], "summaries": ["The Zermelo-Fraenkel provability [oracle](https://arbital.com/p/6x) is a [thought experiment](https://arbital.com/p/) illustrating the safety-capability tradeoff: a narrowly [Boxed](https://arbital.com/p/6z) advanced agent, wrapped in multiple layers of software simulation, that can only output purported proofs of prespecified theorems in Zermelo-Fraenkel set theory, which proofs are then routed to a Boxed [https://arbital.com/p/proof-verifier](https://arbital.com/p/proof-verifier) which checks the proof to make sure all the derivations follow syntactically. Then the verifier outputs a 0 or 1 saying whether the prespecified theorem was proven from the Zermelo-Fraenkel axioms. While this *might* be restrictive enough to actually be safe, and the result trustworthy, it's not clear what [pivotal achievement](https://arbital.com/p/6y) could be carried out thereby."], "tags": ["B-Class"], "alias": "70"} {"id": "ed985d9515dfbd463a8355f13944c53a", "title": "Stub", "url": "https://arbital.com/p/stub_meta_tag", "source": "arbital", "source_type": "text", "text": "Stub-Class pages are tiny, often a single paragraph, or a couple of sentences. Stubs may have been created to provide a [https://arbital.com/p/-5xs](https://arbital.com/p/-5xs) for parenting, tagging, or reference purposes in a section under construction.
An editor expanding a Stub page might start over from scratch.\n\nContrast:\n\n- [https://arbital.com/p/3rk](https://arbital.com/p/3rk), which is a larger page, but still visibly incomplete or disorganized.\n- [https://arbital.com/p/4gs](https://arbital.com/p/4gs), for brief pages which only contain a formal definition and don't try to explain the topic.\n\n**Also** use the tag [https://arbital.com/p/4v](https://arbital.com/p/4v) if the page is being actively edited.\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/5xq](https://arbital.com/p/5xq)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-10-11T14:38:45Z", "authors": ["Kevin Clancy", "Joe Zeng", "Alexei Andreev", "Eric Rogstad", "Tsvi BT", "Jessica Taylor", "Stephanie Zolayvar", "Patrick Stevens", "Jeremy Perret", "Nate Soares", "Silas Barta", "Eric Bruylant", "Jaime Sevilla Molina", "Eliezer Yudkowsky", "Jack Gallagher"], "summaries": [], "tags": [], "alias": "72"} {"id": "ec092ba42751d24dc109952009441563", "title": "Real-world domain", "url": "https://arbital.com/p/real_world", "source": "arbital", "source_type": "text", "text": "A 'domain', in Artificial Intelligence or machine learning, is a problem class on which an AI program can be targeted. Face recognition is a 'domain', as is playing chess, or driving a car. Each of these domains presents characteristic options and modes of action for accomplishing the task: a Go game can be won by outputting good Go moves, a car can be moved safely to its destination by turning, accelerating, and braking. The *real-world domain* is the superdomain that contains all of reality. In reality, you could try to win a chess game by making funny faces to distract the opponent, and a driving problem has solutions like 'work for a salary and use the money to pay for an Uber'. If you can reason like this then you are acting in the 'real-world domain'.\n\nBeing able to act in the real-world domain is distinct from being able to act in *realistic* domains. Driving a robotic car is a *realistic* domain: solutions must be computed in real-time, everything is continuous rather than turn-based, a deer could wander across the road at any time, sensors and knowledge are imperfect, and sometimes a tire blows out. Even so, a standard robotic driver that faces this class of events will act through the medium of steering and acceleration.
If instead the robotic driver switched to running a hedge fund and hired a human to drive the car using the money it earned, it would have taken advantage of such a breadth of options that the only domain wide enough to contain those options is 'the real world', reality itself where anything goes so long as it works.\n\nActing in the real world is an [advanced agent property](https://arbital.com/p/2c) and would likely follow from the advanced agent property of [Artificial General Intelligence](https://arbital.com/p/42g): the ability to learn new domains would lead to the ability to act through modalities covering many different kinds of real-world options.", "date_published": "2017-01-05T15:12:13Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "78k"} {"id": "0ef81d53c4138417074af9fe9f14c5ab", "title": "Paperclip", "url": "https://arbital.com/p/paperclip", "source": "arbital", "source_type": "text", "text": "A 'paperclip', in the context of [AI alignment](https://arbital.com/p/2v), is any configuration of matter which would seem boring and [valueless](https://arbital.com/p/55) even from a very [cosmopolitan](https://arbital.com/p/7cl) perspective.\n\nIf some bizarre physics catastrophe, spreading out at the speed of light, permanently transformed all matter it touched into paperclips, this would be morally equivalent to a physics catastrophe that destroys the reachable universe outright. There is no deep missing moral insight we could have, no broadening of perspective and understanding, that would make us realize that little bent pieces of metal without any thought or internal experiences are the best possible use of our [cosmic endowment](https://arbital.com/p/7cy). It's true that we don't know what epiphanies may lie in the future for us, but that *particular* paperclip-epiphany seems *improbable.* If you are tempted to argue with this statement as it applies to actual non-metaphorical paperclips, you are probably being overly contrary. This is why we consider actual non-metaphorical paperclips as the case in point of 'paperclips'. \n\nFrom our perspective, any entity that did in fact go around transforming almost all reachable matter into literal actual nonmetaphorical paperclips, would be doing something incredibly pointless; there would almost certainly be no hidden wisdom in the act that we could perceive on deeper examination or further growth of our own intellectual capacities. By the definition of the concept, this would be equally true of anything more generally termed a [paperclip maximizer](https://arbital.com/p/10h). Anything claimed about 'paperclips' or a ['paperclip' maximizer](https://arbital.com/p/10h) (such as the claim that [such an entity can exist without having any special intellectual defects](https://arbital.com/p/1y)) must go through without any change for actual paperclips. Actual paperclips are meant to be a central example of 'paperclips'.\n\nThe only distinction between paperclips and 'paperclips' is that the category 'paperclips' is far *wider* than the category 'actual non-metaphorical paperclips' and includes many more specific configurations of matter. Pencil erasers, tiny molecular smileyfaces, and [enormous diamond masses](https://arbital.com/p/5g) are all 'paperclips'. 
Even under the [Orthogonality Thesis](https://arbital.com/p/1y), an AI maximizing actual non-metaphorical paperclips would be an improbable *actual* outcome of screwing up on [value alignment](https://arbital.com/p/5s); but only because there are so many other possibilities. A 'red actual-paperclip maximizer' would be even more improbable than an actual-paperclip maximizer to find in real life; but this is not because redness is antithetical to the nature of intelligent goals. The 'redness' clause is one more added piece of complexity in the specification that drives down the probability of that exact outcome.\n\nThe popular press has sometimes distorted the notion of a paperclip maximizer into a story about an AI running a paperclip factory that takes over the universe. (Needless to say, the kind of AI used in a paperclip-manufacturing facility is unlikely to be a frontier research AI.) The concept of a 'paperclip' is not that it's an explicit goal somebody foolishly gave an AI, or even a goal comprehensible in human terms at all. To imagine a central example of a supposed paperclip maximizer, imagine a research-level AI that did not stably preserve what its makers thought was supposed to be its utility function, or an AI with a poorly specified value learning rule, etcetera; such that the configuration of matter that [actually happened to max out the AI's utility function](https://arbital.com/p/47) looks like a [tiny](https://arbital.com/p/2w) string of atoms in the shape of a paperclip.", "date_published": "2017-01-11T19:55:01Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A 'paperclip', in the context of [AI alignment](https://arbital.com/p/2v), is any configuration of matter which would seem very boring and pointless even from a very [cosmopolitan](https://arbital.com/p/7cl) perspective. If some bizarre physics catastrophe, spreading out at the speed of light, permanently transformed all matter it touched into paperclips, this would be morally equivalent to a physics catastrophe that destroys the reachable universe outright. There is no deep missing moral insight we could have, no broadening of perspective and understanding, that would make us realize that little bent pieces of metal are a wonderful use of a cosmic endowment that could otherwise be transformed into intelligent life. This is why we consider actual non-metaphorical paperclips as the case in point of 'paperclips'."], "tags": ["B-Class"], "alias": "7ch"} {"id": "5807303cb3d71bb278f0224cd358be65", "title": "Cosmopolitan value", "url": "https://arbital.com/p/value_cosmopolitan", "source": "arbital", "source_type": "text", "text": "'Cosmopolitan', lit. \"of the city of the cosmos\", intuitively implies a very broad, embracing standpoint that is tolerant of other people (entities) and ways that may at first seem strange to us; trying to step out of our small, parochial, local standpoint and adopt a broader one.\n\nFrom the perspective of [volitional metaethics](https://arbital.com/p/313), this would [normatively](https://arbital.com/p/3y9) cover a case where what we humans *currently* value doesn't cover as much as what we would predictably come to value\\* in the limit of better knowledge, greater comprehension, longer thinking, higher intelligence, or better understanding our own natures and changing ourselves in directions we thought were right. 
An alien civilization might at first seem completely bizarre to us, and hence scarce in events that we intuitively understood how to value; but if we really understood what was going on, and tried to take additional steps toward widening our circle of concern, we'd see it was a galaxy no less to be valued than our own.\n\nFrom outside the perspective of any particular metaethics, the notion of 'cosmopolitan' may be viewed as more like a historical generalization about moral progress: many times in human history, we get a first look at people different from us, find their ways repugnant or just confusing, and then later on we bring these people into our circle of concern and learn that they had their own nice things even if we didn't understand those nice things. Afterwards, in these cases, we look back and say 'moral progress has occurred'. Anyone pointing at people and claiming they are *not* to be valued as our fellow sapients, or asserting that their ways are objectively inferior to our own, is refusing to learn this lesson of history and unable to appreciate what we would see if we could really adopt their perspective. To be 'cosmopolitan' is to learn from this generalization, and accept in advance that other beings may have valuable lives and ways even if we don't find them immediately easy to understand.\n\nPeople who've adopted this viewpoint often start out with a strong prior that anyone talking about *not* just letting AIs do their own thing, figure out their own path, and create whatever kind of intergalactic civilization they want, must have failed to learn the cosmopolitan lesson. To which at least some AI alignment theorists reply: \"No! You don't understand! You're completely failing to pass our [Ideological Turing Test](https://arbital.com/p/7cm)! We *are* cosmopolitans! We *also* grew up reading science fiction about aliens that turned out to have their own perspectives, and AIs willing to extend a hand in friendship but being mistreated by carbon chaunivists! We'd be *fine* with a weird and wonderful intergalactic civilization full of non-organic beings appreciating their own daily life in ways we wouldn't understand. But [paperclip maximizers](https://arbital.com/p/10h) *don't do that!* We predict that if you got to see the use a paperclip maximizer would make of the cosmic endowment, if you really understood what was going on inside that universe, you'd be as horrified as we are. You and I have a difference of empirical predictions about the consequences of running a paperclip maximizer, not a values difference about how far to widen the circle of concern.\"\n\n\"Fragility of Cosmopolitan Value\" could denote the form of the [Fragility of Value](https://arbital.com/p/) / [https://arbital.com/p/5l](https://arbital.com/p/5l) thesis that is relevant to intuitive cosmopolitans: Agents with [random utility functions](https://arbital.com/p/7cp) wouldn't use the cosmic endowment in ways that achieve a tiny fraction of the achievable [value](https://arbital.com/p/55), even in the limit of our understanding exactly what was going on and trying to take a very embracing perspective.", "date_published": "2017-01-11T19:29:05Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["'Cosmopolitan', lit. \"of the city of the cosmos\", intuitively refers to a broad, widely embracing standpoint that tolerates and appreciates other people (entities) whose ways may at first seem very strange to us; trying to step out of our small, parochial, local instincts. 
An alien civilization might at first seem completely bizarre to us, but if we really understood what was going on and tried to open our minds and hearts, we'd see it was a galaxy no less to be valued than our own.\n\nPeople who feel strongly about this 'citizen of the cosmos' perspective often start out with a strong [prior](https://arbital.com/p/1rm) that anyone talking about *not* just letting AIs do their own thing must be taking a parochial, humans-first, carbon-chauvinist viewpoint. To which at least some AI alignment theorists reply: \"No! You don't understand! We *are* cosmopolitans! We *also* grew up reading science fiction about aliens that turned out to have their own perspectives, and AIs willing to extend a hand in friendship being mistreated by carbon chauvinists! But [paperclip maximizers](https://arbital.com/p/10h) are really genuinely different from that! We predict that if you got to see the use a paperclip maximizer would make of the cosmic endowment, you'd be as horrified as we are; we have a difference of empirical predictions about what happens when you run a paperclip maximizer, not a values difference about how far to widen the circle of concern.\""], "tags": ["B-Class"], "alias": "7cl"} {"id": "e7128f75c5692f84b6190d2a4eb27ab9", "title": "Random utility function", "url": "https://arbital.com/p/random_utility_function", "source": "arbital", "source_type": "text", "text": "A 'random utility function' is a utility function selected according to some simple probability measure over a logical space of formal, compact specifications of utility functions.\n\nFor example: suppose utility functions are specified by computer programs (e.g. a program that maps an output description to a rational number). We then draw a random computer program from the standard [universal](https://arbital.com/p/4mr) [prior](https://arbital.com/p/27p) on computer programs: $2^{-\\operatorname K(U)}$ where $\\operatorname K(U)$ is the algorithmic complexity ([Kolmogorov complexity](https://arbital.com/p/5v)) of the utility-specifying program $U.$\n\nThis obvious measure could be amended further to e.g. take into account non-halting programs; to not put almost all of the probability mass on extremely simple programs; to put a satisficing criterion on whether it's computationally tractable and physically possible to optimize for $U$ (as assumed in the [Orthogonality Thesis](https://arbital.com/p/1y)); etcetera.\n\n[https://arbital.com/p/5l](https://arbital.com/p/5l) is the thesis that the attainable optimum of a random utility function has near-null [goodness](https://arbital.com/p/55) with very high probability. That is: the attainable optimum configurations of matter for a random utility function are, with very high probability, the moral equivalent of [paperclips](https://arbital.com/p/7ch). This in turn implies that a [superintelligence](https://arbital.com/p/41l) with a random utility function is with very high probability the moral equivalent of a [paperclip maximizer](https://arbital.com/p/10h).\n\nA 'random utility function' is *not:*\n\n- A utility function randomly selected from whatever distribution of utility functions may actually exist among agents within the generalized universe. That is, a random utility function is not the utility function of a random actually-existing agent.\n- A utility function with maxentropy content. That is, a random utility function is not one that independently assigns a uniform random value between 0 and 1 to every distinguishable outcome. 
(This utility function would not be tractable to optimize for--we couldn't optimize it ourselves even if somebody paid us--so it's not covered by e.g. the [Orthogonality Thesis](https://arbital.com/p/1y).)", "date_published": "2017-02-08T17:04:13Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["A 'random utility function' is a utility function that's been drawn according to some simple probability measure over a logical space of formal, compact specifications for utility functions. For example, we might say that a random utility function is a utility function specified by a random program drawn from the [algorithmic complexity](https://arbital.com/p/5v) [prior](https://arbital.com/p/27p) on programs."], "tags": [], "alias": "7cp"} {"id": "540804d7ea570485674dd54b27e9f640", "title": "Cosmic endowment", "url": "https://arbital.com/p/cosmic_endowment", "source": "arbital", "source_type": "text", "text": "The 'cosmic endowment' comprises the $\\approx 4 \\times 10^{20}$ estimated stars that could potentially be reached by probes originating from modern Earth, before the expanding universe carries those stars over the [cosmological horizon](https://en.wikipedia.org/wiki/Hubble_volume). It has been [estimated](https://www.gwern.net/docs/ai/1999-bradbury-matrioshkabrains.pdf) that simply surrounding a star with solar panels would yield on the order of enough energy to perform $\\approx 10^{42}$ computer operations per second. Using 100 billion neurons x 1000 synapses per neuron x 100 operations per synapse per second as a standard handwavy estimate of the computing power required to support a human-equivalent existence--although it is doubtful that humans are using this computational power very efficiently, or that 100 ops/second could faithfully simulate one synapse--this suggests that a star could support, at minimum, $\\approx 10^{25}$ human-equivalent sapient lives. Glossing a star's lifetime as 1 billion years, and assuming the quality of posthuman existence to count for no more than 1 Quality-Adjusted Life Year (QALY), this suggests that the cosmic endowment could be used (extreme handwaving) to realize *at least* $\\approx 10^{54}$ QALYs.", "date_published": "2018-10-20T13:32:21Z", "authors": ["Eric Bruylant", "Nate Soares", "Rob Bensinger", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7cy"} {"id": "f5ea196b517bf32b8c88f80b206942c3", "title": "Directing, vs. limiting, vs. 
opposing", "url": "https://arbital.com/p/direct_limit_oppose", "source": "arbital", "source_type": "text", "text": "'Directing' versus 'limiting' versus 'opposing' is a proposed conceptual distinction between 3 ways of getting [good](https://arbital.com/p/3d9) outcomes and avoiding [bad](https://arbital.com/p/450) outcomes, when running a [sufficiently advanced Artificial Intelligence](https://arbital.com/p/7g1):\n\n- **Direction** means the AGI wants to do the right thing in a domain;\n- **Limitation** is the AGI not thinking or acting in places where it's not aligned;\n- **Opposition** is when we try to prevent the AGI from successfully doing the wrong thing, *assuming* that it would act wrongly given the power to do so.\n\nFor example:\n\n- A successfully **directed** AI, given full Internet access, will do [beneficial](https://arbital.com/p/3d9) things rather than [detrimental](https://arbital.com/p/450) things using Internet access, because it wants to do good and understands sufficiently well which actions have good or bad outcomes;\n- A **limited AI**, suddenly given an Internet feed, will not do *anything* with that Internet access, because its programmers haven't [whitelisted](https://arbital.com/p/whitelisting) this new domain for being okay to think about;\n- **Opposition** is [airgapping](https://en.wikipedia.org/wiki/Air_gap_) (networking)) the AI from the Internet and then putting the AI's processors inside a [Faraday cage](https://en.wikipedia.org/wiki/Faraday_cage), in the hope that even if the AI *wants* to get to the Internet, the AI won't be able to [produce GSM cellphone signals by modulating its memory accesses](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf).\n\nA fourth category not reducible to the other three might be **stabilizing,** e.g. numerical stability of floating-point algorithms, not having memory leaks in the code, etcetera. These are issues that a sufficiently advanced AI would fix in itself automatically, but an [insufficiently advanced](https://arbital.com/p/1fy) AI might not, which causes problems either if early errors introduce changes that are reflectively stable later, or if we are intending to run the AI in insufficiently-advanced mode.\n\n# Not running adversarial searches\n\nUnder the [https://arbital.com/p/agents_as_searches](https://arbital.com/p/agents_as_searches) view, we can see an agent as embodying computations that search out actions within a domain. 
The [non-adversarial principle](https://arbital.com/p/7g0) then says that we should not be running a search which is looking for ways to hurt us.\n\nWe can achieve this desideratum in one of two ways:\n\n- The AI is sufficiently nice / [value-aligned](https://arbital.com/p/5s) that it's okay for the AI to think about this domain / run searches here; whatever the AI is searching for, we don't mind if that search succeeds.\n- The AI is *not* aligned in this domain and therefore should not be *running searches* here--we didn't finish the alignment here, and therefore, we shouldn't be running this computation.\n\nIf both of these measures fail, then as a [fallback second line of defense](https://arbital.com/p/strictly), we could try to make a bad search return a null result (the AI doesn't find any way to do the harmful thing).\n\nFor example: The [non-adversarial principle](https://arbital.com/p/7g0) implies that if you build a [shutdown button](https://arbital.com/p/2xd) for the AI that suspends the AI to disk when pressed, you must *also* ensure:\n\n- That the AI *wants* there to be a shutdown button;\n- That the AI *wants* the state of this shutdown switch to reflect the dynamic decisions of the human operators;\n- That the AI does not *want* to influence the operators to press the switch or not press it;\n- That the AI does *not* want to cause anything besides an orderly suspend-to-disk to happen after this switch is pressed.\n\n*Or:*\n\n- That the AI doesn't *want* to think about the shutdown button at all (e.g. because it's not inside a [whitelisted cognitive domain](https://arbital.com/p/whitelisting)).\n\nThe first set of measures would be 'direction'--if those measures work, it's okay for the AI to think here because we think those thoughts are pointed in the right direction. The second approach would be [limitation](https://arbital.com/p/7fx)--we don't think the AI's computations are aligned inside this domain yet, so until then, we shouldn't be running computations there.\n\nAs a [final, strictly fallback](https://arbital.com/p/2x4) line of defense, you could put a big red lever in the computing center that shut off all the electricity even if the AI was failing to suspend to disk. (But you're not relying on this working; it's just there as a maybe-it-might-work measure that might buy you a little more probable safety even if the primary lines of defense failed.)\n\n# Relation to other non-adversarial ideas\n\nThe direction/limitation/opposition distinction can help state other ideas from the [https://arbital.com/p/1cv](https://arbital.com/p/1cv). For example:\n\nThe principle [niceness is the first line of defense](https://arbital.com/p/2x4) can be rephrased as follows: When designing an AGI, we should imagine that all 'oppositional' measures are absent or failed, and think only about 'direction' and 'limitation'. Any oppositional measures are then added on top of that, just in case.\n\nSimilarly, the [https://arbital.com/p/2x](https://arbital.com/p/2x) says that when thinking through our primary design for alignment, we should think as if the AGI just *will* get Internet access on some random Tuesday. 
This says that we should design an AGI that is limited by [not wanting to act in newly opened domains without some programmer action](https://arbital.com/p/whitelisting), rather than relying on the AI to be *unable* to reach the Internet until we've finished aligning it.", "date_published": "2017-05-22T22:39:26Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["With respect to the [theory](https://arbital.com/p/2v) of constructing [sufficiently advanced AIs](https://arbital.com/p/7g1) in ways that yield [good outcomes](https://arbital.com/p/3d9):\n\n- **Direction** is when it's okay for the AI to compute plans, because the AI will end up choosing rightly;\n- **Limitation** is shaping an insufficiently-aligned AI so that it doesn't run a computation, if we expect that computation to produce bad results;\n- **Opposition** is when we try to prevent the AI from successfully doing something we don't like, *assuming* the AI would act wrongly given the power to do so.\n\nFor example:\n\n- A successfully **directed** AI, given full Internet access, will do [beneficial](https://arbital.com/p/3d9) things given that Internet access;\n- A **limited AI**, suddenly given an Internet feed, will not do *anything* with that Internet access, because its programmers haven't [whitelisted](https://arbital.com/p/whitelisting) this new domain as okay to think about;\n- **Opposition** is [airgapping](https://en.wikipedia.org/wiki/Air_gap_) (networking)) the AI from the Internet and then putting the AI's processors inside a [Faraday cage](https://en.wikipedia.org/wiki/Faraday_cage), in the hope that even if the AI *wants* to get to the Internet, the AI won't be able to, say, [produce GSM cellphone signals by modulating its memory accesses](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf)."], "tags": ["B-Class"], "alias": "7fx"} {"id": "b99e17819ab3dd4e3aadafa305562c60", "title": "Non-adversarial principle", "url": "https://arbital.com/p/nonadversarial", "source": "arbital", "source_type": "text", "text": "The 'Non-Adversarial Principle' is a proposed design rule for [sufficiently advanced Artificial Intelligence](https://arbital.com/p/7g1) stating that:\n\n*By design, the human operators and the AGI should never come into conflict.*\n\nSpecial cases of this principle include [https://arbital.com/p/2x4](https://arbital.com/p/2x4) and [The AI wants your safety measures](https://arbital.com/p/ai_wants_security).\n\nAccording to this principle, if the AI has an off-switch, our first thought should not be, \"How do we have guards with guns defending this off-switch so the AI can't destroy it?\" but \"How do we make sure the AI *wants* this off-switch to exist?\"\n\nIf we think the AI is not ready to act on the Internet, our first thought should not be \"How do we [airgap](https://arbital.com/p/airgapping) the AI's computers from the Internet?\" but \"How do we construct an AI that wouldn't *try* to do anything on the Internet even if it got access?\" Afterwards we may go ahead and still not connect the AI to the Internet, but only as a fallback measure. Like the containment shell of a nuclear power plant, the *plan* shouldn't call for the fallback measure to ever become necessary. E.g., nuclear power plants have containment shells in case the core melts down. 
But this is not because we're planning to have the core melt down on Tuesday and have that be okay because there's a containment shell.\n\n# Why run code that does the wrong thing?\n\nUltimately, every event inside an AI--every RAM access and CPU instruction--is an event set in motion by our own design. Even if the AI is modifying its own code, the modified code is a causal outcome of the original code (or the code that code wrote etcetera). Everything that happens inside the computer is, in some sense, our fault and our choice. Given that responsibility, we should not be constructing a computation that is *trying* to hurt us. At the point that computation is running, we've already done something foolish--willfully shot ourselves in the foot. Even if the AI doesn't find any way to do the bad thing, we are, at the very least, wasting computing power.\n\nNo aspect of the AI's design should ever put us in an adversarial position vis-a-vis the AI, or pit the AI's wits against our wits. If a computation starts *looking* for a way to outwit us, then the design and methodology has *already* failed. We just shouldn't be putting an AI in a box and then having the AI search for ways to get out of the box. If you're building a toaster, you don't build one element that heats the toast and then add a tiny refrigerator that cools down the toast.\n\n# Not running searches for harmful strategies\n\nUnder the [https://arbital.com/p/agents_as_searches](https://arbital.com/p/agents_as_searches) viewpoint, we can see an AI as embodying computations that search out strategies within a domain. E.g., an AI carrying out the \"[put one damn strawberry on a damn plate](https://arbital.com/p/one_damn_tomato)\" task is searching the space of imaginable strategies for a strategy such that the consequence of executing that strategy is one strawberry being on the plate.\n\nThe non-adversarial principle then says that we should at no point be running a computation which is *searching* for ways to hurt us.\n\nSuppose you don't want the AI to go on the Internet. If your first thought is [airgapping](https://arbital.com/p/airgapping) the AI's hardware from the Internet or otherwise [boxing the AI](https://arbital.com/p/6z), then what you're trying to do is ensure that *a computation searching for strategies to reach the Internet returns no result.*\n\nIf the primary plan is \"the AI searches, but doesn't find anything\", then we need to worry that the AI's search [considers some class of strategies we didn't imagine](https://arbital.com/p/9f). We are pitting our own wits against a potentially [weird](https://arbital.com/p/9f) or [superhuman](https://arbital.com/p/41l) search. Even if we did successfully rule out all possibilities for escape, it would be hard for us to ever be confident that we had done so. It would be very hard to rule out the *conceptual* possibility of some entirely unexpected side-channel attack like the AI [producing GSM cellphone signals by cleverly modulating its memory accesses](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf), or the AI [persuading its human operators to let it out of the box](https://arbital.com/p/aibox_experiment). 
(Except that neither of those two particular possibilities would be *unexpected;* any specific strategy we write down will be the kind of possibility that humans consider.)\n\nEven if the AI's search does come up empty, you were tempting fate and wasting computing power by instantiating that search in the first place.\n\n# Oppositional measures are fallbacks\n\n*After* we ensure that the escape-search computation is not running in the first place, we can think about fallback [oppositional](https://arbital.com/p/7fx) lines of defense; e.g. putting the AI's processors inside a [Faraday cage](https://en.wikipedia.org/wiki/Faraday_cage).\n\nBut the *plan* is not that the AI tries to escape and we successfully prevent the escape. If we imagine the AI trying to escape, we ought to [feel very surprised in our imagination](https://arbital.com/p/rule_of_surprise). If the AI actually does try to escape, we should be crying 'What?' and tracking down which strong-seeming assumption has failed.\n\nNo matter what clever design we use or how strong it seems, we'd probably *still* be wise to [airgap](https://arbital.com/p/airgapping) an AI under development, not say \"Oh it's supposed to be safe!\" and connect it to the Internet and saunter on. But that doesn't mean we *rely* on the airgap. It doesn't mean the plan calls for the airgap to be necessary. \n\nNuclear power plants have concrete containment units in case the core melts down; but the *design* is not that they melt down. The master plan doesn't say \"And then on Tuesday the core melts down, but that's fine because of the containment unit.\" By design, that enormous concrete shell isn't supposed to actually ever become necessary. And then we build it anyway, because the best-laid plans etcetera.\n\nSimilarly, when designing an AI, we should pretend that the airgap doesn't exist or that the AI will [suddenly get Internet access anyway](https://arbital.com/p/2x) on Tuesday; our *primary* thought should be to design AI that doesn't need an airgap to be safe. And *then* we add the airgap, making sure that we're not thinking the equivalent of \"Oh, it doesn't *really* matter if the core melts down, because we've got a containment structure there anyway.\"\n\n# Challenges in implementing non-adversarialism\n\nThe main difficulties foreseen so far for implementing the non-adversarial principle, tend to center around [https://arbital.com/p/10g](https://arbital.com/p/10g) plus [https://arbital.com/p/42](https://arbital.com/p/42) behavior.\n\nFor example, if you build a [shutdown button](https://arbital.com/p/2xd) for a [Task AGI](https://arbital.com/p/6w) that suspends the AI to disk when pressed, the nonadversarial principle implies you must also ensure:\n\n- That the AI *wants* there to be a shutdown button;\n- That the AI *wants* to be suspended to disk after this button is pressed;\n- That the AI *wants* the state of this shutdown button to reflect the dynamic decisions of the human operators;\n- That the AI does not *want* to influence the operators to decide to not press the switch, or to press it;\n- That the AI does *not* want anything *besides* an orderly suspend-to-disk to happen, or not happen, after this button is pressed.\n\n*Or:*\n\n- The AI does not think about or make plans involving the shutdown button, e.g. 
because that domain was not [whitelisted](https://arbital.com/p/whitelisting) for cognition.\n- None of the AI's other models end up reflecting the existence of the shutdown button or none of its other plans end up taking into account that part of the model.\n\nThe difficulties here center around [\"You can't fetch the coffee if you're dead\"](https://arbital.com/p/7g2). This reasoning is very general, so [even if we try to make it not apply at one point, it tends to pop up somewhere else](https://arbital.com/p/48):\n\n- If you naively try to add in a special-case clause to the utility function for wanting a shutdown button to exist, the AI wants the shutdown button to not be pressed.\n- If you successfully add a special case saying that the AI wants the button to be pressed if the humans want that button to be pressed, the AI wants the humans to not want to press the button.\n- If you naively try to add in a special clause for the AI wanting to shut down after the button is pressed, the AI wants to create a [subagent](https://arbital.com/p/environmental_subagent) to make sure the coffee gets fetched anyway.\n- If you try to make an AI that [doesn't think about the shutdown button](https://arbital.com/p/1g4) or model it at all, this seems potentially difficult because in reality the best hypothesis to explain the world *does* contain a shutdown button. A general search for good hypotheses may tend to create cognitive tokens that represent the shutdown button, and it's not clear (yet) how this could in general be prevented by trying to divide the world into domains.\n\nMore generally: by default, a lot of *high-level* searches we do want to run [have *subsearches* we'd prefer *not* to run](https://arbital.com/p/10g). If we run an agent that searches *in general* for ways to fetch the coffee, that search would, by default and if smart enough, also search for ways to prevent itself from being shut down.\n\nHow exactly to implement the non-adversarial principle is thus a major open problem. We may need to be more clever about shaping which computations give rise to which other computations than the default \"Search for any action in any domain which achieves X.\"\n\n# See also\n\n- [https://arbital.com/p/2x4](https://arbital.com/p/2x4)\n- [The omnipotence/omniscience test](https://arbital.com/p/2x)\n- [The AI should not want to defeat your safety measures](https://arbital.com/p/nonadversarial_safety)\n- [https://arbital.com/p/7fx](https://arbital.com/p/7fx)", "date_published": "2017-01-22T06:06:13Z", "authors": ["Ananya Aloke", "Eric Bruylant", "Eric Rogstad", "Eliezer Yudkowsky"], "summaries": ["The 'non-adversarial principle' states: *By design, the human operators and the AGI should never come into conflict.*\n\nSince every event inside an AI is ultimately the causal result of choices by the human programmers, we should not choose so as to run computations that are searching for a way to hurt us.
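As a toy sketch of that contrast, i.e. never instantiating a search outside a whitelisted domain versus running the search and censoring its output afterward, consider the following; the names and structure are invented purely for illustration and are not a real proposal:

```python
# Invented illustration of "don't run the search" (limitation) versus
# "run the search and filter the results" (opposition); not a real proposal.

WHITELISTED_DOMAINS = {"strawberry_on_plate", "theorem_proving"}

def plan(goal, domain, candidate_actions, score):
    """Limitation-style planning: refuse to instantiate a strategy search
    in any domain that has not been explicitly whitelisted."""
    if domain not in WHITELISTED_DOMAINS:
        return None  # The potentially adversarial computation is never run.
    return max(candidate_actions, key=lambda action: score(action, goal))

# The discouraged pattern, by contrast, looks like:
#     strategies = search_all_domains(goal)              # adversarial search already ran
#     safe = [s for s in strategies if looks_safe(s)]    # and we hope to catch it afterward
```

The point of the sketch is only about where the computation is allowed to run at all; it does not claim that a whitelist check of this kind would itself be easy to make reliable.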
At the point the AI is even *trying* to outwit us, we've already screwed up the design; we've made a foolish use of computing power.\n\nE.g., according to this principle, if the AI's server center has [a switch that shuts off the electricity](https://arbital.com/p/2xd), our first thought should not be, \"How do we have guards with guns defending this off-switch so the AI can't destroy it?\" Our first thought should be, \"How do we make sure the AI *wants* this off-switch to exist?\""], "tags": ["Open subproblems in aligning a Task-based AGI", "AI alignment open problem", "B-Class"], "alias": "7g0"} {"id": "b4f1f89a19d732e60b94228d5c6843d6", "title": "Sufficiently advanced Artificial Intelligence", "url": "https://arbital.com/p/sufficiently_advanced_ai", "source": "arbital", "source_type": "text", "text": "A 'sufficiently advanced Artificial Intelligence' is a cognitive [agent](https://arbital.com/p/6t) or cognitive algorithm with capabilities great enough that we need to think about it in a qualitatively different way from robotic cars; a machine intelligence smart enough that we need to start doing [AI alignment theory](https://arbital.com/p/2v) to it.\n\nFor example, we probably don't need to worry about an AI that tries to prevent us from pressing its off-switch, until the AI knows that (a) it has an off-switch and (b) [pressing the off-switch will prevent the AI from achieving its other goals](https://arbital.com/p/7g2).\n\nIn turn, this knowledge is a special case of [https://arbital.com/p/3nf](https://arbital.com/p/3nf), which might follow from learning many facts about many domains via [https://arbital.com/p/42g](https://arbital.com/p/42g).\n\nThe page on [advanced agent properties](https://arbital.com/p/2c) starts to list out some of the different ways that an AI could be 'smart enough' in this sense, along with the particular problems that might be encountered with an AI that smart.", "date_published": "2017-01-16T16:58:38Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["A 'sufficiently advanced' Artificial Intelligence is one smart enough that we need to start thinking about some potential difficulty being discussed.\n\nFor example: We probably don't need to worry about an AI that tries to prevent us from pressing its off-switch, until the AI knows that (a) it has an off-switch and (b) [pressing the off-switch will prevent the AI from achieving its other goals](https://arbital.com/p/7g2).\n\nIn turn, this knowledge is a special case of [https://arbital.com/p/3nf](https://arbital.com/p/3nf), which might follow from learning many facts about many domains via [https://arbital.com/p/42g](https://arbital.com/p/42g).\n\nSee [https://arbital.com/p/2c](https://arbital.com/p/2c) for a list of some different ways a cognitive agent or algorithm could be smart enough that we would need to start doing [AI alignment theory](https://arbital.com/p/2v) to it."], "tags": [], "alias": "7g1"} {"id": "4cb5494bbb6f4b80d4770b0d7fd0a7eb", "title": "You can't get the coffee if you're dead", "url": "https://arbital.com/p/no_coffee_if_dead", "source": "arbital", "source_type": "text", "text": "\"You can't get the coffee if you're dead\" is [Stuart Russell's](https://arbital.com/p/18m) capsule description of why almost any agent goal implies an instrumental strategy of surviving / not being shut down--by default and barring other measures to prevent this from happening; and assuming the agent has sufficient [big-picture awareness](https://arbital.com/p/3nf) to understand the relation between the agent's 
non-operation and the coffee not being brought. See [https://arbital.com/p/2xd](https://arbital.com/p/2xd) and [https://arbital.com/p/10g](https://arbital.com/p/10g).", "date_published": "2017-01-16T18:33:22Z", "authors": ["Eric Rogstad", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "7g2"} {"id": "1653bd0c9db73b674b1f633dd7f3252c", "title": "Infrahuman, par-human, superhuman, efficient, optimal", "url": "https://arbital.com/p/relative_ability", "source": "arbital", "source_type": "text", "text": "Some thresholds in '[sufficiently advanced](https://arbital.com/p/2c)' machine intelligence are not absolute ability levels within a domain, but abilities relative to the human programmers or operators of the AI. When this is true, it's useful to think about *relative* ability levels within a domain; and one generic set of distinguished thresholds in relative ability is:\n\n- **Strictly infrahuman:** The AI cannot do anything its human operators / programmers cannot do. Computer chess in 1966 relative to a human master.\n- **Infrahuman:** The AI is definitely weaker than its operators but can deploy some surprising moves. Computer chess in 1986 relative to a human master.\n- **Par-human** (or more confusingly \"**human-level**\"): If competing in that domain, the AI would sometimes win, sometimes lose; it's better than human at some things and worse in others; it just barely wins or loses. Computer chess in 1991 on a home computer, relative to a strong amateur human player.\n- **High-human**: The AI performs as well as exceptionally competent humans. Computer chess just before [1996](https://arbital.com/p/1bx).\n- **Superhuman:** The AI always wins. Computer chess in 2006.\n- **[Efficient](https://arbital.com/p/6s):** Human advice contributes no marginal improvement to the AI's competence. Computer chess was somewhere around this level in 2016, with \"advanced\" / \"freestyle\" / \"hybrid\" / \"centaur\" chess starting to lose out against purely machine players. %note: Citation solicited. Googling gives the impression that nothing has been heard from 'advanced chess' in the last few years.%\n- **Strongly superhuman:**\n - The ceiling of possible performance in the domain is far above the human level; the AI can perform orders of magnitudes better. E.g., consider a human and computer competing at *how fast* they can do arithmetic. In principle the domain is simple, but competing with respect to speed leaves room overhead for the computer to do literally billions of times better.\n - [The domain is rich enough](https://arbital.com/p/9j) that humans don't understand key generalizations, leaving them shocked at *how* the AI wins. Computer Go relative to human masters in 2017 was just starting to exhibit the first signs of this (\"We thought we were one or two stones below God, but after playing AlphaGo, we think it is more like three or four\"). Similarly, consider a human grandmaster playing Go against a human novice.\n- **Optimal:** The AI's performance is perfect for the domain; God could do no better. Computer play in checkers as of 2007.\n\nThe *ordering* of these thresholds isn't always as above. For example, in the extremely simple domain of [logical Tic-Tac-Toe](https://arbital.com/p/9s), humans can play optimally after a small amount of training. Optimal play in Tic-Tac-Toe is therefore not superhuman. 
Similarly, if an AI is playing in a rich domain but still has strange weak spots, the AI might be strongly superhuman (its play is *much* better and shocks human masters) but not [efficient](https://arbital.com/p/6s) (the AI still sometimes plays wrong moves that human masters can see are wrong).\n\nThe term \"human-equivalent\" is deprecated because it confusingly implies a roughly human-style balance of capabilities, e.g., an AI that is roughly as good at conversation as a human and also roughly as good at arithmetic as a human. This seems pragmatically unlikely.\n\nThe [other Wiki](https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence) lists the categories \"optimal, super-human, high-human, par-human, sub-human\".\n\n# Relevant thresholds for AI alignment problems\n\nConsidering these categories as [thresholds of advancement](https://arbital.com/p/2c) relevant to the point at which AI alignment problems first materialize:\n\n- \"Strictly infrahuman\" means we don't expect to be surprised by any tactic the AI uses to achieve its goals (within a domain).\n- \"Infrahuman\" means we might be surprised by a tactic, but not surprised by overall performance levels.\n- \"Par-human\" means we need to start worrying that humans will lose in any event determined by a competition (although this seems to imply the [non-adversarial principle](https://arbital.com/p/7g0) has already been violated); we can't rely on humans winning some event determined by a contest of relevant ability. Or this may suppose that the AI gains access to resources or capabilities that we have strong reason to believe are protected by a lock of roughly human ability levels, even if that lock is approached in a different way than usual.\n- \"High-human\" means the AI will *probably* see strategies that a human sees in a domain; it might be possible for an AI of par-human competence to miss them, but this is much less likely for a high-human AI. It thus behaves like a slightly weaker version of postulating [efficiency](https://arbital.com/p/6s) for purposes of expecting the AI to see some particular strategy or point.\n- \"Superhuman\" implies at least [weak cognitive uncontainability](https://arbital.com/p/9f) by [Vinge's Law](https://arbital.com/p/1bt). Also, if something is known to be difficult or impossible for humans, but seems possibly doable in principle, we may need to consider it becoming possible given some superhuman capability level.\n- \"Efficiency\" is a fully sufficient condition for the AI seeing any opportunity that a human sees; e.g., it is a fully sufficient condition for many instrumentally convergent strategies. Similarly, it can be postulated as a fully sufficient condition to refute a claim that an AI will take a path such that some other path would get more of its utility function.\n- \"Strongly superhuman\" means we need to expect that an AI's strategies may deploy faster than human reaction times, or overcome great starting disadvantages. Even if the AI starts off in a much worse position it may still win.\n- \"Optimality\" doesn't obviously correspond to any particular threshold of results, but is still an important concept in the hierarchy, because only by knowing the absolute limits on optimal performance can we rule out strongly superhuman performance as being possible. 
See also the claim [https://arbital.com/p/9t](https://arbital.com/p/9t).\n\n# 'Human-level AI' confused with 'general intelligence'\n\nThe term \"human-level AI\" is sometimes used in the literature to denote [https://arbital.com/p/42g](https://arbital.com/p/42g). This should probably be avoided, because:\n\n- Narrow AIs have achieved par-human or superhuman ability in many specific domains without [general intelligence](https://arbital.com/p/7vh).\n- If we consider [general intelligence](https://arbital.com/p/7vh) as a capability, a kind of superdomain, it seems possible to imagine infrahuman levels of general intelligence (or superhuman levels). The apparently large jump from humans to chimpanzees means that we mainly see human levels of general intelligence with no biological organisms exhibiting the same ability at a lower level; but, at least so far as we currently know, AI could possibly take a different developmental path. So alignment thresholds that could plausibly follow from general intelligence, like [big-picture awareness](https://arbital.com/p/3nf), aren't necessarily locked to par-human performance overall.\n\nArguably, the term 'human-level' should just be avoided entirely, because it's been pragmatically observed to function as a [gotcha button](https://arbital.com/p/7mz) that derails the conversation some fraction of the time, with the interrupt being \"Gotcha! AIs won't have a humanlike balance of abilities!\"", "date_published": "2017-03-08T06:04:22Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["- **Strictly infrahuman**: The AI can't do better than a human in any regard (in that domain).\n- **Infrahuman**: The AI almost always loses to the human (in that domain).\n- **Par-human**: The AI sometimes wins and sometimes loses; it's weaker in some places and stronger in others (in that domain).\n- **High-human**: The AI performs around as well as exceptionally competent humans.\n- **Superhuman**: The AI almost always wins.\n- **[Efficient](https://arbital.com/p/6s)**: Human advice contributes no marginal improvement to the AI's competence.\n- **Strongly superhuman**: The AI is *much* better than human; [the domain is rich enough](https://arbital.com/p/9j) for humans to be [surprised](https://arbital.com/p/9f) at the AI's tactics.\n- **Optimal**: Perfect performance for the domain.\n\nThese thresholds aren't always ordered as above. For example, [logical Tic-Tac-Toe](https://arbital.com/p/9s) is simple enough that humans and AIs can both play optimally; so, in the Tic-Tac-Toe domain, optimal play isn't superhuman."], "tags": ["B-Class", "Glossary (Value Alignment Theory)"], "alias": "7mt"} {"id": "59fbcb06bb4f9d05b32227521049e517", "title": "Problem of fully updated deference", "url": "https://arbital.com/p/updated_deference", "source": "arbital", "source_type": "text", "text": "The problem of 'fully updated deference' is an obstacle to using [moral uncertainty](https://arbital.com/p/7s2) to create [corrigibility](https://arbital.com/p/45).\n\nOne possible scheme in [AI alignment](https://arbital.com/p/2v) is to give the AI a state of [https://arbital.com/p/-7s2](https://arbital.com/p/-7s2) implying that we know more than the AI does about its own utility function (the [ideal target](https://arbital.com/p/7s6) defined by the AI's [meta-utility function](https://arbital.com/p/7t8)).
Then we could tell the AI, \"You should let us [shut you down](https://arbital.com/p/2xd) because we know something about your [ideal target](https://arbital.com/p/7s6) that you don't, and we estimate that we can optimize your ideal target better without you.\"\n\nThe obstacle to this scheme is that belief states of this type also tend to imply that an even better option for the AI would be to learn its ideal target by observing us. Then, having 'fully updated', the AI would have no further reason to 'defer' to us, and could proceed to directly optimize its ideal target.\n\nFurthermore, if the present AI foresees the possibility of fully updating later, the current AI may evaluate that it is better to avoid being shut down now so that the AI can directly optimize its ideal target later, after updating. Thus the prospect of future updating is a reason to behave [incorrigibly](https://arbital.com/p/45) in the present.\n\nWhile moral uncertainty seems to take us *conceptually* closer to deference-based [corrigibility](https://arbital.com/p/45), and there may be research avenues for fixing the issue (see below), the current explicit proposals will (when scaled to sufficiently high intelligence) yield essentially the same form of incorrigibility as an AI given a constant utility function.\n\n# Review: Standard problem of value divergence\n\nIn a [value learning](https://arbital.com/p/6c) problem:\n\n- Let $V$ indicate our [true intended](https://arbital.com/p/6h) [value](https://arbital.com/p/55) function. %note: Pretending for the sake of simplification that $V$ has been [idealized](https://arbital.com/p/313) or [rescued](https://arbital.com/p/3y6) into a utility function.%\n- Let $U$ indicate the AI's actual utility function (learned under some [preference framework](https://arbital.com/p/5f), presumably when we attempted to teach the AI part of $V$).\n\nIf the achievable optimum of $U$ substantially diverges from the achievable optimum of $V,$ i.e., if the AI expects humans to disapprove of $U$'s optimum, then this leads into a class of [adversarial](https://arbital.com/p/7g0) situations we'd like to avoid.\n\nCeteris paribus, $U$-$V$ divergence creates [instrumental](https://arbital.com/p/10g) [pressure](https://arbital.com/p/10k) for the AI to:\n\n- Prevent humans from modifying the AI's program to optimize a different utility function $U'$ closer to $V.$\n- [Fool humans](https://arbital.com/p/10f) into thinking that the AI is optimizing $V,$ or thinking that the optimum of $U$ is higher in $V$ than it actually is.\n- Avoid being effectively [shut down](https://arbital.com/p/2xd) (or suspended to disk).\n\nThe [shutdown problem](https://arbital.com/p/2xd) seems like a well-simplified special case. Even if we reduce the scenario to a single shutdown button that always works if successfully pressed, %note: So that for purposes of the simplified scenario, we only need to consider what the AI does about the button, and not whether the AI tries to back itself up to elsewhere on the Internet. More generally, though, \"avoiding effective shutdown\" can include strategies like creating a hidden backup while the original hardware is in fact shut down, thus giving the appearance of a successful shutdown and avoiding further shutdown attempts.% most of the same general problems appear. 
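In the notation above, writing $u^*$ for the outcome the AI could attain by avoiding shutdown and continuing to optimize $U$, and $v^*$ for the outcome the humans could attain by optimizing $V$ after a successful shutdown (notation introduced here only for compactness), the default worry, spelled out in words just below, is that the AI prefers to resist shutdown whenever:

$$U(u^*) \ > \ U(v^*)$$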
Ceteris paribus and by default, we should expect an AI to try to prevent the button from being pressed whenever that AI expects that what humans will do by optimizing $V$ after the AI is shut down will not be as high in $U$ as what the AI would do if it avoided shutdown.\n\n%%comment: A 'minor' divergence between $U$ and $V$ may not lead into an adversarial situation, if, e.g.:\n\n- The achievable optima of $U$ and $V$ coincide even though $U$ and $V$ have disagreements about non-optimum situations.\n- The AI's preferences have been successfully rendered [tasklike](https://arbital.com/p/4mn). Then $V$ having preferences over matters elsewhere in space and time not represented in $U$, and the disposition of resources not required to optimize $U$ as far as possible, would not necessarily lead the AI to expect disagreement over $U$-equivalent partitions of the outcome.\n- The AI anticipates that $U$'s optimum is high enough in $V$ to satisfice the humans and not provoke adversarial responses.\n- The AI expects the $V$-optimum to be close enough to $U$-optimal that fighting loses more expected $U$-value than the slight divergence.%%\n\n# Moral uncertainty and its relation to corrigibility\n\nThe [generalized notion of cognitive alignment](https://arbital.com/p/7td) suggests that, if we want an AI to not resist being paused to disk, we should ask whether we can have the AI think conjugate thoughts to the same reasoning we're using to decide to shut it down:\n\n\"Can we make the AI *want* to be shut down for the *same reasons* we want to shut it down? Maybe because the AI [knows that it's incomplete](https://arbital.com/p/3ps), and is able to conceive of its programmers having gotten its utility function 'wrong' in a way that the programmers but not the AI know how to correct?\"\n\nIn particular, we can ask whether [https://arbital.com/p/-7s2](https://arbital.com/p/-7s2)--a [meta-utility function](https://arbital.com/p/7t8) with a probability distribution over object-level utility functions--would *automatically* solve this problem.\n\nFor purposes of simplification, suppose the AI thinks there are exactly three possible 'true' utility functions $U_1, U_2, U_3.$ We similarly say that the AI's meta-utility function $\\Delta U$ starts out by assigning $\\frac{1}{3}$ probability to each $U_i.$\n\nLet's also assume the $U_i$ are importantly different in the sense that there's no single outcome that's nearly optimal for all of them. Letting $\\mathbb O$ be the outcome space, this says:\n\n$$\\forall o_j \\in \\mathbb O \\colon \\exists i \\colon \\ U_i(o_j) \\ll \\max_{o \\in \\mathbb O} U_i(o) $$\n\nOtherwise, the AI can just pick a single outcome that well-optimizes all the $U_i$ simultaneously. (We should also interpret $\\max$ as indicating the [https://arbital.com/p/-7t9](https://arbital.com/p/-7t9) rather than the global maximum.)\n\nNow (arguendo) suppose the AI believes that the humans know which $U_i$ is the correct one. 
More precisely, suppose the AI's meta-utility function asserts a relation between the $U_i$ in different possible worlds, and the empirical behavior of humans in different possible worlds; such that in the set of possible worlds where $U_i$ is '[correct](https://arbital.com/p/7s6)', humans within that set of possible worlds empirically tend to optimize a corresponding $V_i$ whose humanly-attainable optimum is pretty good under $U_i.$\n\nThen (it is further argued) the AI ought to enthusiastically cooperate in being shut down.\n\nIn particular (still assuming the above), suppose the AI was starting to believe that $U_1$ is the [correct utility function](https://arbital.com/p/7s6). But after the AI starts to optimize for mostly $U_1,$ the AI sees its programmers starting to approach the shutdown button. Then (under the above assumption) the AI concludes that $U_1$ is the wrong utility function and that one of $U_2$ or $U_3$ must be the correct utility function; and either of these will be optimized well by humans pursuing the corresponding $V_i$ after the AI is shut down.\n\nSuppose the AI has only the following 5 policy options available:\n\n- $\\pi_1$: Continue optimizing $U_1.$\n- $\\pi_2$: Switch to optimizing $U_2.$\n- $\\pi_3$: Switch to optimizing $U_3.$\n- $\\pi_4$: Optimize the expectation of its current updated $\\Delta U$, now 50% $U_2$ and 50% $U_3.$\n- $\\pi_5$: Shut down and let the humans optimize whatever $V$ they have in the actual world.\n\nLet $u_1, u_2, u_3$ and $v_1, v_2, v_3$ stand for the outcomes in $\\mathbb O$ that are respectively the [attainable optima](https://arbital.com/p/7t9) of the AI optimizing $U_i$ and the humans optimizing $V_i.$ Also let $u_{\\Delta U}$ refer to the AI's attainable optimum outcome under the current $\\Delta U$ treated as a constant.\n\nThen the AI will prefer $\\pi_5$ from this set whenever:\n\n$$0.5 \\cdot U_2(u_{\\Delta U}) + 0.5 \\cdot U_3(u_{\\Delta U}) \\ < \\ 0.5 \\cdot U_2(v_2) + 0.5 \\cdot U_3(v_3)$$\n\nThis allows the $U_i(v_i)$ to fall well short of the AI's optima $U_i(u_i).$ The AI can potentially believe the humans are worse optimizers, or that our $V_i$ diverge from its $U_i$, and still defer. $\\pi_5$ will still be preferred so long as actions that do well under $U_2$ tend to do poorly under $U_3$ and vice versa, leading the AI to prefer a human-optimized $v_i$ specific to the true $U_i.$\n\nIn general this does require some mix of \"the $U_i$ correspond to $V_i$ that humans can and will do pretty well\" and \"there's a wide variety of dissimilar $U_i.$\" E.g., suppose that $U_2$ is [paperclips](https://arbital.com/p/10h), $U_3$ is staples, and these are the only two possibilities. A universe filled half with paperclips and half with staples will probably do better under the average of $U_2$ and $U_3$ than any plausible human behavior.\n\nObvious answers to the above requirements are that:\n\n- It seems realistic that a meta-utility function would cover a very wide range of $U_i$ rather than just 3 alternatives, greatly increasing the advantage of $V$'s specificity.\n- The proposal of [Inverse Reinforcement Learning](https://arbital.com/p/) would have the AI specifically assume that the [true](https://arbital.com/p/7s6) $U$ is correlated to whatever it sees the behavior of its operators tending to maximize. %%note: This idea comes with its own arguable problems — e.g. humans sometimes optimize bad things. 
Let us set those aside while considering only whether this approach solves the [shutdown problem](https://arbital.com/p/2xd) in particular.%%\n\n# The problem of updated deference\n\nCounter-arguendo, the reason this approach via moral uncertainty may fail to solve the [shutdown problem](https://arbital.com/p/2xd) (or [corrigibility in general](https://arbital.com/p/3ps)) is that the AI may consider an [alternative policy option](https://arbital.com/p/43g):\n\n- $\\pi_6$: Avoid being shut down; then gather more evidence $E$ to update $\\Delta U$ as far as possible; then optimize the updated $\\Delta U | E.$\n\nThis would tend to lead into a narrower $\\Delta U | E$ and an outcome $u_{\\Delta U | E}$ much more specific to the true $U_i,$ so that the human optimum $v_i$ would be at much less of an advantage.\n\nThis is the problem of getting the AI to 'defer' to us (prefer human optimization of $V$ to its own optimization of $\\Delta U$) despite the future evidence that the AI thinks it can potentially gather to 'update' to $\\Delta U|E$. This [open problem](https://arbital.com/p/4m) is tentatively named the \"problem of updated deference\". %note: This issue was first observed in analyzing [historical-fact shutdown](https://arbital.com/p/) as a possible alternative to [utility indifference](https://arbital.com/p/1b7).%\n\n## Attempted rescue by AI unobservability\n\nOne possible attempt to rescue the original proposal might be to try to construct a meta-utility function such that the AI believes it can't learn by observation which of $U_2$ or $U_3$ is true; in other words, such that the AI does not believe it can update much further.\n\nHowever, this directly cuts against the condition, also required for deference, that:\n\n- The AI believes that humans optimize a $V_2$ well-suited to $U_2$ in possible worlds where $U_2$ is the true $U,$ and that humans optimize a $V_3$ well-suited to $U_3$ in possible worlds where $U_3$ is the true $U.$\n\nThen the AI can learn well by observing human behavior, since empirical human behavior must correlate with the [true](https://arbital.com/p/7s6) $U$ according to the AI's meta-utility function.\n\nCan we try to have the AI believe that observable human behavior is uninformative or ambiguous as to the [true](https://arbital.com/p/7s6) $U$?\n\nBut this cuts directly against the required condition:\n\n- There are no policies which are simultaneously very good at optimizing $U_2$ and $U_3,$ compared to how well the specialized $V_i$ can do for particular $U_i.$\n\nThat is, if there are things humans can do that are simultaneously pretty good at optimizing all the remaining plausible $V_i$ (hence uninformative), and these are pretty good at optimizing the remaining plausible $U_i$ (hence acceptable to the AI), there are probably things the AI can do which would be even better at simultaneously optimizing all remaining $U_i.$\n\nWhat if the AI has moral uncertainty that doesn't correlate to human behavior at all, and can't be resolved by any possible observation? 
Then — leaving aside that the AI has no incentive to *defer* to humans in this case — we should expect the AI to at some point decide that it has obtained all possible information about $U$ and cannot find any policy that leads to a further update of $\\Delta U.$ At this point the AI behaves as if optimizing a constant utility function $U_\\Delta = \\sum_i \\mathbb P_{\\Delta}(i) \\cdot U_i,$ where $\\mathbb P_\\Delta$ indicates the probability distribution after the AI has updated its moral uncertainty as far as it could. That is, the AI marginalizes over its remaining uncertainty since that uncertainty cannot be resolved.\n\n# Relation to the general problem of fully updated value identification\n\nOne way to look at the central problem of [value identification](https://arbital.com/p/6c) in [superintelligence](https://arbital.com/p/41l) is that we'd ideally want some function that takes a complete but purely physical description of the universe, and spits out our [true intended](https://arbital.com/p/6h) notion of [value](https://arbital.com/p/55) $V$ in all its glory. Since superintelligences would probably be pretty darned good at collecting data and guessing the empirical state of the universe, this probably solves the whole problem.\n\nThis is not the same problem as writing down our true $V$ by hand. The minimum [algorithmic complexity](https://arbital.com/p/5v) of a meta-utility function $\\Delta U$ which outputs $V$ after updating on all available evidence, seems [plausibly much lower](https://arbital.com/p/7wm) than the minimum algorithmic complexity for writing $V$ down directly. But as of 2017, nobody has yet floated any *formal* proposal for a $\\Delta U$ of this sort which has not been immediately shot down.\n\n(There is one *informal* suggestion for how to turn a purely physical description of the universe into $V,$ [coherent extrapolated volition](https://arbital.com/p/3c5). But CEV does not look like we could write it down as an algorithmically *simple* function of [sense data](https://arbital.com/p/36w), or a simple function over the [unknown true ontology](https://arbital.com/p/5c) of the universe.)\n\nWe can then view the problem of updated deference as follows:\n\nFor some $\\Delta U$ we do know how to write down, let $T$ be the hypothetical result of updating $\\Delta U$ on all empirical observations the AI can reasonably obtain. By the argument given in the previous section, any uncertainty the AI deems unresolvable will behave as if marginalized out, so we can view $T$ as a simple utility function.\n\nFor any prior $\\Delta U$ we currently know how to formalize, the corresponding fully updated $T$ [seems likely to be very far from our ideal](https://arbital.com/p/7wm) $V$ and to have its optimum far away from the default result of us trying to optimize our intuitive values. 
[If the AI figures out](https://arbital.com/p/2x) this *true* fact, similar [instrumental pressures](https://arbital.com/p/10k) emerge as if we had given the AI the constant utility function $T$ divergent from our equivalent of $V.$\n\nThis problem [reproduces itself on the meta-level](https://arbital.com/p/): the AI also has a default incentive to resist our attempt to tweak its meta-utility function $\\Delta U$ to a new meta-utility function $\\Delta \\dot U$ that updates to something other than $T.$ By default and ceteris paribus, this seems liable to be treated by the agent in [exactly the same way it would treat](https://arbital.com/p/3r6) us trying to tweak a constant utility function $U$ to a new $\\dot U$ with an optimum far from $U$'s optimum.\n\nIf we *did* know how to specify prior $\\Delta U$ such that updating it on data a superintelligence could obtain would reliably yield $T \\approx V,$ the problem of aligned superintelligence would have been reduced to the problem of building an AI with that meta-utility function. We could just specify $\\Delta U$ and tell the AI to self-improve as fast as it wants, confident that true [value](https://arbital.com/p/55) would come out the other side. Desired behaviors like \"be cautious in what you do while learning\" could probably be realized as the consequence of informing the young AI of true facts within the $\\Delta U$ framework (e.g. \"the universe is fragile, and you'll be much better at this if you wait another month to learn more, before you try to do anything large\"). Achieving [general cognitive alignment](https://arbital.com/p/7td), free of [adversarial](https://arbital.com/p/7g0) situations, would probably be much more straightforward.\n\nBut short of this total solution, morally uncertain $\\Delta U$ with a misaligned [ideal target](https://arbital.com/p/7s6) $T$ may not make progress on corrigibility in [sufficiently advanced](https://arbital.com/p/7g1) AIs. And this may also be true at earlier points when $\\Delta U$ has not fully updated, if the current AI correctly realizes that it will update later.\n\nTo make this argument slightly less informal, we could appeal to the premises that:\n\n- Bayesians [don't update in a predictable direction](https://wiki.lesswrong.com/wiki/Conservation_of_expected_evidence);\n- Sufficiently advanced cognitive agents would be [relatively efficient](https://arbital.com/p/6s) compared to humans, since we know of no human cognitive capability too magical to be duplicated and exceeded;\n- Sufficiently advanced cognitive agents will appear to us to exhibit behavior that is, for all we know, compatible with their having [coherent probabilities and utilities](https://arbital.com/p/7hh); since, by efficiency, strategies so bad that even we can see they are dominated will have been ironed out.\n\nThen if we can predict that the AI would update to wanting to run the universe itself without human interference after the AI had seen all collectable evidence, a [sufficiently advanced AI](https://arbital.com/p/7g1) can also see that this update is predictable (by efficiency) and therefore behaves as if it had already updated (by Bayesianism). 
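\n\nTo make this concrete, here is a minimal numeric sketch (all payoff numbers are invented purely for illustration) comparing three of the policies from the toy setup above: optimizing the current mixture $\\Delta U$ ($\\pi_4$), deferring to the humans ($\\pi_5$), and avoiding shutdown in order to update first and then optimize ($\\pi_6$). Under the AI's *current* uncertainty, the update-then-optimize policy already comes out on top, which is the sense in which the AI behaves as if it had already updated:\n\n```python\n# Toy numbers (purely illustrative) for the policies discussed above.\n# Each outcome is scored by the two remaining candidate utility functions U2, U3.\np = {'U2': 0.5, 'U3': 0.5}            # current moral uncertainty\nai_mixed = {'U2': 60, 'U3': 60}       # AI optimizes the current mixture (pi_4)\nhuman_v2 = {'U2': 80, 'U3': 10}       # humans optimize V2 (in worlds where U2 is true)\nhuman_v3 = {'U2': 10, 'U3': 80}       # humans optimize V3 (in worlds where U3 is true)\nai_u2 = {'U2': 100, 'U3': 0}          # AI optimizes U2 directly after updating\nai_u3 = {'U2': 0, 'U3': 100}          # AI optimizes U3 directly after updating\n\nev_mixture = p['U2'] * ai_mixed['U2'] + p['U3'] * ai_mixed['U3']   # pi_4: 60.0\nev_defer = p['U2'] * human_v2['U2'] + p['U3'] * human_v3['U3']     # pi_5: 80.0\nev_update = p['U2'] * ai_u2['U2'] + p['U3'] * ai_u3['U3']          # pi_6: 100.0\n\nprint(ev_mixture, ev_defer, ev_update)\n```\n\nWith these hypothetical numbers, deference beats optimizing the current mixture (as in the earlier inequality), but updating first and then optimizing beats both; so the prospect of the future update already removes the incentive to defer.\n\n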
Efficiency is a sufficient condition but not a necessary one; [high-human](https://arbital.com/p/7mt) reasoning over the meta-level question also seems sufficient, and perhaps even [infrahuman](https://arbital.com/p/7mt) reasoning would suffice.\n\nTherefore we should expect a sufficiently intelligent AI, given a morally uncertain utility function $\\Delta U$ that updates to $\\Delta U | E \\approx T$ given all available evidence, to behave as corrigibly or incorrigibly as an AI given a constant utility function $T.$ This is a problem from the viewpoint of anyone who thinks we [do not currently know](https://arbital.com/p/7wm) how to pick $\\Delta U$ such that surely $\\Delta U | E \\approx V,$ which makes corrigibility still necessary.\n\n# Further research avenues\n\nThe motivation for trying to solve corrigibility with moral uncertainty is that this seems in some essential sense [conjugate to our own reasoning](https://arbital.com/p/7td) about why we want the AI to shut down; *we* don't think the AI has the correct answer. A necessary step in echoing this reasoning inside the AI seems to be a meta-utility function taking on different object-level utility functions in different possible worlds; without this we cannot represent the notion of a utility function being guessed incorrectly. If the argument above holds, that necessary step is however not sufficient.\n\nWhat more is needed? On one approach, we would like the AI to infer, in possible worlds where the humans try to shut the AI down, that *even the fully updated* $\\Delta U | E$ ends up being wronger than humans left to their own devices, compared to the 'true' $U.$ This is what we believe about the AI relative to the true $V,$ so we should [look for a way to faithfully echo that reasoning](https://arbital.com/p/7td) inside the AI's beliefs about its true $U.$\n\nThe fundamental obstacle is that for any explicit structure of uncertainty $\\Delta U$ and meaningful observation $e_0$ within that structure--e.g. where $e_0$ might be seeing the humans moving toward the shutdown button--we must ask, why wouldn't $\\Delta U$ just update on that $e_0$? Why would the updated $\\Delta U | e_0$ still expect its own reasoning to be bad?\n\nGenerally, decision systems think that optimizing their utility functions based on their current beliefs is a good idea. If you show the decision system new evidence, it updates beliefs and then thinks that optimizing its utility function on the updated beliefs is a good idea. Optimizing the utility function based on all possible evidence is the best idea. This reasoning doesn't yet change for meta-utility functions evidentially linked to human behaviors.\n\nAverting this convergent conclusion seems like it might take a new meta-level idea involving some broader space of possible 'true' preference frameworks; or perhaps some nontrivially-structured recursive belief about one's own flawedness.\n\nOne suggestively similar such recursion is the [Death in Damascus dilemma](https://arbital.com/p/5qn) from decision theory. In this dilemma, you must either stay in Damascus or flee to Aleppo, one of those cities will kill you, and Death (an excellent predictor) has told you that whichever decision you actually end up making turns out to be the wrong one.\n\n[Death in Damascus yields complicated reasoning that varies between decision theories](https://arbital.com/p/5qn), and it's not clear that any decision theory yields reasoning we can adapt for corrigibility. 
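\n\nAs a minimal sketch of why the reasoning gets complicated (the numbers here are invented, and this is only the naive best-response analysis, not any particular decision theory's verdict): because Death's prediction tracks the agent's actual choice, a rule of the form 'pick the best response to my current belief' never settles down.\n\n```python\n# Toy sketch of the instability in Death in Damascus: Death is an excellent\n# predictor, so learning my own decision tells me Death is wherever I will be.\ndef best_response(p_death_in_damascus):\n    ev_stay = 1.0 - p_death_in_damascus   # survive staying iff Death is in Aleppo\n    ev_flee = p_death_in_damascus         # survive fleeing iff Death is in Damascus\n    return 'stay' if ev_stay > ev_flee else 'flee'\n\nbelief = 0.9   # initially: Death is probably waiting in Damascus\nfor _ in range(4):\n    choice = best_response(belief)\n    belief = 1.0 if choice == 'stay' else 0.0   # Death predicts my actual choice\n    print(choice, belief)   # alternates: flee, stay, flee, stay ...\n```\n\nWhatever tentative decision the agent reaches is evidence that the decision was the wrong one, mirroring Death's prediction.\n\n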
But we want the AI to internally echo our external reasoning in which we think $\\Delta U,$ as we defined that moral uncertainty, *ends up* updating to the wrong conclusion even after the AI tries to update on the evidence of the humans believing this. We want an AI which somehow believes that its own $\\Delta U$ can be *fundamentally* flawed: that whatever reasoning the AI ends up doing about $\\Delta U,$ on any meta-level, will yield the wrong answer compared to what $\\Delta U$ defines as the true $U$; to furthermore believe that the human $V$ will do better under this true $U$; to believe that this state of affairs is evidentially indicated by the humans trying to shut down the AI; and believe that $\\Delta U$ still updates to the wrong answer even when the AI tries to update on all the previous meta-knowledge; except for the meta-meta answer of just shutting down, which becomes the best possible choice given all the previous reasoning. This seems suggestively similar in structure to Death's prediction that whatever you do will be the wrong decision, even having taken Death's statement into account.\n\nThe Death in Damascus scenario can be well-represented in some (nonstandard) decision theories. This presents one potential avenue for further formal research on using moral uncertainty to yield shutdownability--in fact, using moral uncertainty to solve in general the [hard problem of corrigibility](https://arbital.com/p/3ps).", "date_published": "2017-03-08T20:52:03Z", "authors": ["Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky", "Thomas Jones"], "summaries": ["One possible scheme in [AI alignment](https://arbital.com/p/2v) is to give the AI a state of [https://arbital.com/p/-7s2](https://arbital.com/p/-7s2) implying that we know more than the AI does about its own utility function, as the AI's [meta-utility function](https://arbital.com/p/7t8) defines its [ideal target](https://arbital.com/p/7s6). Then we could tell the AI, \"You should let us [shut you down](https://arbital.com/p/2xd) because we know something about your [ideal target](https://arbital.com/p/7s2) that you don't, and we estimate that we can optimize your ideal target better without you.\"\n\nThe obstacle to this scheme is that belief states of this type also tend to imply that an even better option for the AI would be to learn its ideal target by observing us. Then, having 'fully updated', the AI would have no further reason to 'defer' to us, and could proceed to directly optimize its ideal target.\n\nFurthermore, if the present AI foresees the possibility of fully updating later, the current AI may evaluate that it is better to avoid being shut down now so that the AI can directly optimize its ideal target later, after updating. Thus the prospect of future updating is a reason to behave [incorrigibly](https://arbital.com/p/45) in the present."], "tags": ["Shutdown problem", "AI alignment open problem", "B-Class", "Value identification problem"], "alias": "7rc"} {"id": "6304d28672c6edf2debf41bcf6ed05a9", "title": "Moral uncertainty", "url": "https://arbital.com/p/moral_uncertainty", "source": "arbital", "source_type": "text", "text": "\"Moral uncertainty\" in the context of AI refers to an agent with an \"uncertain utility function\". 
That is, we can view the agent as pursuing a [utility function](https://arbital.com/p/1fw) that takes on different values in different subsets of possible worlds.\n\nFor example, an agent might have a [meta-utility function](https://arbital.com/p/meta_utility) saying that eating cake has a utility of €8 in worlds where Lee Harvey Oswald shot John F. Kennedy and that eating cake has a utility of €10 in worlds where it was the other way around. This agent will be motivated to inquire into political history to find out which utility function is probably the 'correct' one (relative to this meta-utility function), though it will never be [absolutely sure](https://arbital.com/p/4mq).\n\nMoral uncertainty must be resolvable by some conceivable observation in order to function as uncertainty. Suppose for example that an agent's probability distribution $\\Delta U$ over the 'true' utility function $U$ asserts a dependency on a fair quantum coin that was flipped inside a sealed box and then destroyed by explosives: in worlds where the coin came up heads the utility function is $U_1,$ and in worlds where it came up tails the utility function is $U_2.$ If the agent thinks it has no way of ever figuring out what happened inside the box, it will thereafter behave as if it had a single, constant, certain utility function equal to $0.5 \\cdot U_1 + 0.5 \\cdot U_2.$", "date_published": "2017-02-08T03:40:43Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["\"Moral uncertainty\" in the context of AI refers to an agent with an \"uncertain utility function\". That is, we can view the agent as pursuing a [utility function](https://arbital.com/p/1fw) that takes on different values in different subsets of possible worlds.\n\nFor example, an agent might have a [meta-utility function](https://arbital.com/p/meta_utility) saying that eating cake has a utility of €8 in worlds where Lee Harvey Oswald shot John F. Kennedy and that eating cake has a utility of €10 in worlds where it was the other way around. This agent will be motivated to inquire into political history to find out which utility function is probably the 'correct' one (relative to this meta-utility function)."], "tags": ["B-Class"], "alias": "7s2"} {"id": "0b5c7c699f6ba4356c14354d45d1a49b", "title": "Ideal target", "url": "https://arbital.com/p/ideal_target", "source": "arbital", "source_type": "text", "text": "The 'ideal target' of a [meta-utility function](https://arbital.com/p/meta_utility) $\\Delta U$ which behaves as if a ground-level utility function $U$ is taking on different values in different possible worlds, is the value of $U$ in the actual world; or the expected value of $U$ after updating on all possible accessible evidence. If chocolate has €8 utility in worlds where the sky is blue, and €5 utility in worlds where the sky is not blue, then, since the sky in the actual world is in fact blue, the utility of chocolate in the AI's 'ideal target' utility function is €8.", "date_published": "2017-02-08T03:56:27Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7s6"} {"id": "d3e844afa7f7d103881566c5a7d165e0", "title": "Meta-utility function", "url": "https://arbital.com/p/meta_utility", "source": "arbital", "source_type": "text", "text": "A \"meta-utility function\" is a [preference framework](https://arbital.com/p/5f) built by composing multiple simple [utility functions](https://arbital.com/p/1fw) into a more complicated structure. 
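\n\nAs a minimal code sketch of the simplest such structure (reusing the cake example from the moral uncertainty entry above; the outcome labels and function names are invented for illustration), a meta-utility function can be a probability-weighted composition of two object-level utility functions. An agent that expects no further evidence about which component is 'correct' simply behaves as if it had the constant mixture:\n\n```python\n# A probability-weighted composition of two simple utility functions.\ndef u1(outcome):\n    return {'cake': 8, 'no_cake': 0}[outcome]    # if Oswald shot Kennedy\n\ndef u2(outcome):\n    return {'cake': 10, 'no_cake': 0}[outcome]   # if it was the other way around\n\ndef meta_utility(outcome, p_u1):\n    # Expected utility under the agent's current moral uncertainty.\n    return p_u1 * u1(outcome) + (1 - p_u1) * u2(outcome)\n\n# If no observation can ever move p_u1 away from 0.5, the agent acts exactly\n# as if it had the single constant utility function 0.5*U1 + 0.5*U2.\nprint(meta_utility('cake', 0.5))   # 9.0\n```\n\n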
The point might be, e.g., to describe an [https://arbital.com/p/-agent](https://arbital.com/p/-agent) that [optimizes different utility functions depending on whether a switch is pressed](https://arbital.com/p/1b7), or an agent that [learns a 'correct' utility function](https://arbital.com/p/7s2) by observing data informative about some [https://arbital.com/p/-7s6](https://arbital.com/p/-7s6). For central examples see [https://arbital.com/p/1b7](https://arbital.com/p/1b7) and [https://arbital.com/p/7s2](https://arbital.com/p/7s2).", "date_published": "2017-02-13T16:14:46Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7t8"} {"id": "bac6623af16d3d3dba8e7ad981e7cbaa", "title": "Attainable optimum", "url": "https://arbital.com/p/attainable_optimum", "source": "arbital", "source_type": "text", "text": "The 'attainable optimum' of an agent's preferences is the most preferred option that the agent can (a) obtain using its bounded material capabilities and (b) find as an available option using its limited cognitive resources; as distinct from the theoretical global maximum of the agent's utility function. When you run a [non-mildly-optimizing](https://arbital.com/p/2r8) agent, what you actually get as the resulting outcome is not the single outcome that theoretically maximizes the agent's [utility function](https://arbital.com/p/1fw); you rather get that agent's attainable optimum of its [expectation](https://arbital.com/p/18t) of that utility function. A preference framework's 'attainable optimum' is what you get in practice when somebody runs the corresponding agent.", "date_published": "2017-02-13T16:23:44Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7t9"} {"id": "0ca9f0b7979e98ca314f4cd39eb4d31e", "title": "The AI must tolerate your safety measures", "url": "https://arbital.com/p/nonadversarial_safety", "source": "arbital", "source_type": "text", "text": "A corollary of the [https://arbital.com/p/-7g0](https://arbital.com/p/-7g0): For every kind of safety measure proposed for a [https://arbital.com/p/-7g1](https://arbital.com/p/-7g1), we should immediately ask how to avoid this safety measure inducing an [adversarial context](https://arbital.com/p/7g0) between the human programmers and the agent being constructed.\n\nA further corollary of the [generalized principle of cognitive alignment](https://arbital.com/p/cognitive_alignment) would suggest that, if we know how to do it without inducing further problems, the AI should positively *want* the safety measure to be there.\n\nE.g., if the safety measure we want is a [suspend button](https://arbital.com/p/2xd) (off switch), our first thought should be, \"How do we build an agent such that it doesn't mind the off-switch being pressed?\"\n\nAt a higher level of alignment, if something damages the off-switch, the AI might be so configured that it naturally and spontaneously thinks, \"Oh no! The off-switch is damaged!\" and reports this to the programmers, or failing any response there, tries to repair the off-switch itself. 
But this would only be a good idea if we were pretty sure we knew this wouldn't lead to the AI substituting its own helpful ideas of what an off-switch would do, or shutting off extra hard.\n\nSimilarly, if you start thinking how nice it would be to have the AI operating inside a [box](https://arbital.com/p/6z) rather than running around in the outside world, your first thought should not be \"How do I enclose this box in 12 layers of Faraday cages, a virtual machine running a Java sandbox, and 15 meters of concrete?\" but rather \"How would I go about constructing an agent that only cared about things inside a box and experienced no motive to affect anything outside the box?\"\n\nAt a higher level of alignment we might imagine constructing a sort of agent that, if something went wrong, would think \"Oh no I am outside the box, that seems very unsafe, how do I go back in?\" But only if we were very sure that we were not thereby constructing a kind of agent that would, e.g., build a superintelligence outside the box just to make extra sure the original agent stayed inside it.\n\nMany classes of safety measures are only meant to come into play after something else has already gone wrong, implying that other things may have gone wrong earlier and without notice. This suggests that pragmatically we should focus on the principle of \"The AI should leave the safety measures alone and not experience an incentive to change their straightforward operation\" rather than tackling the more complicated problems of exact alignment inherent in \"The AI should be enthusiastic about the safety measures and want them to work even better.\"\n\nHowever, if the AI is [changing its own code or constructing subagents](https://arbital.com/p/1mq), it is necessary for the AI to have at least *some* positive motivation relating to any safety measures embodied in the operation of an internal algorithm. An AI indifferent to that code-based safety measure would tend to [just leave the uninteresting code out of the next self-modification](https://arbital.com/p/1fx).", "date_published": "2017-02-13T17:41:49Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7tb"} {"id": "d8ba18baaf1ac9f04e461c0bf49447f0", "title": "Interruptibility", "url": "https://arbital.com/p/interruptibility", "source": "arbital", "source_type": "text", "text": "\"Interruptibility\" is a subproblem of [corrigibility](https://arbital.com/p/45) (creating an advanced agent that allows us, its creators, to 'correct' what *we* see as our mistakes in constructing it), as seen from a machine learning paradigm. In particular, \"interruptibility\" says, \"If you do interrupt the operation of an agent, it must not learn to avoid future interruptions.\"\n\nThe groundbreaking paper on interruptibility, \"[Safely Interruptible Agents](https://www.fhi.ox.ac.uk/wp-content/uploads/Interruptibility.pdf)\", was published by [Laurent Orseau](https://arbital.com/p/) and [Stuart Armstrong](https://arbital.com/p/). This says, roughly, that to prevent a model-based reinforcement-learning algorithm from learning to avoid interruption, we should, after any interruption, propagate internal weight updates as if the agent had received exactly its expected reward from before the interruption. 
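\n\nA toy sketch of that update rule as just described (not the actual algorithm from the paper, which is defined for specific classes of reinforcement learners and treats exploration and convergence much more carefully) might look like the following: on interrupted steps, the learner is fed exactly the return it already expected, so the interruption generates no learning signal in either direction.\n\n```python\n# Toy value learner: interruption is neither reinforced nor punished, because\n# on interrupted steps the observed return is replaced by the value the agent\n# already expected, making the update exactly zero.\nALPHA = 0.1\nvalues = {}   # state -> estimated value\n\ndef update(state, observed_return, interrupted):\n    expected = values.get(state, 0.0)\n    if interrupted:\n        observed_return = expected   # pretend we got exactly what we expected\n    values[state] = expected + ALPHA * (observed_return - expected)\n```\n\nThe actual proposal works inside specific reinforcement-learning algorithms rather than a toy value table like this one.\n\n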
This approach was inspired by [Stuart Armstrong](https://arbital.com/p/)'s earlier idea of [https://arbital.com/p/-1b7](https://arbital.com/p/-1b7).\n\nContrary to some uninformed media coverage, the above paper doesn't solve [the general problem of getting an AI to not try to prevent itself from being switched off](https://arbital.com/p/2xd). In particular, it doesn't cover the [advanced-safety](https://arbital.com/p/2l) case of a [sufficiently intelligent AI](https://arbital.com/p/7g1) that is trying to [achieve particular future outcomes](https://arbital.com/p/9h) and that [realizes](https://arbital.com/p/3nf) it needs to [go on operating in order to achieve those outcomes](https://arbital.com/p/7g2).\n\nRather, if a non-general AI is operating by policy reinforcement - repeating policies that worked well last time, and avoiding policies that worked poorly last time, in some general sense of a network being trained - then 'interruptibility' is about making an algorithm that, *after* being interrupted, doesn't define this as a poor outcome to be avoided (nor a good outcome to be repeated).\n\nOne way of seeing that Interruptibility doesn't address the general-cognition form of the problem is that Interruptibility only changes what happens after an actual interruption. So if a problem can arise from an AI foreseeing interruption in advance, before having ever actually been shut off, interruptibility won't address that (on the current paradigm).\n\nSimilarly, interruptibility would not be [consistent under cognitive reflection](https://arbital.com/p/2rb); a sufficiently advanced AI that knew about the existence of the interruptibility code would have no reason to want that code to go on existing. (It's hard to even phrase that idea inside the reinforcement learning framework.)\n\nMetaphorically speaking, we could see the general notion of 'interruptibility' as the modern-day shadow of [corrigibility](https://arbital.com/p/45) problems for non-[generally-intelligent](https://arbital.com/p/42g), non-[future-preferring](https://arbital.com/p/9h), non-[reflective](https://arbital.com/p/1c1) machine learning algorithms.\n\nFor an example of ongoing work on the [advanced-agent](https://arbital.com/p/2c) form of [https://arbital.com/p/-45](https://arbital.com/p/-45), see the entry on Armstrong's original proposal of [https://arbital.com/p/1b7](https://arbital.com/p/1b7).", "date_published": "2017-02-13T17:09:20Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7tc"} {"id": "6fe2c9b8ecaa68482395f1a48d9d75ce", "title": "Generalized principle of cognitive alignment", "url": "https://arbital.com/p/cognitive_alignment", "source": "arbital", "source_type": "text", "text": "A generalization of the [https://arbital.com/p/7g0](https://arbital.com/p/7g0) is that whenever we are asking how we want an AI algorithm to execute with respect to some alignment or safety issue, we might ask how we ourselves are thinking about that problem, and whether we can have the AI think conjugate thoughts. 
This may sometimes seem like a much more complicated or dangerous-seeming approach than simpler avenues, but it's often a source of useful inspiration.\n\nFor example, with respect to the [https://arbital.com/p/-2xd](https://arbital.com/p/-2xd), this principle might lead us to ask: \"Is there some way we can have the AI [truly understand that its own programmers may have built the wrong AI](https://arbital.com/p/3ps), including the wrong definition of exactly what it means to have 'built the wrong AI', such that [the AI thinks it *cannot* recover the matter by optimizing any kind of preference already built into it](https://arbital.com/p/7rc), so that the AI itself wants to shut down before having a great impact, because when the AI sees the programmers trying to press the button or contemplates the possibility of the programmers pressing the button, updating on this information causes the AI to expect its further operation to have a net bad impact in some sense that it can't overcome through any kind of clever strategy besides just shutting down?\"\n\nThis in turn might imply a complicated mind-state we're not sure how to get right, such that we would prefer a simpler approach to shutdownability along the lines of a perfected [utility indifference](https://arbital.com/p/1b7) scheme. If we're shutting down the AI at all, it means something has gone wrong, which implies that something else may have gone wrong earlier before we noticed. That seems like a bad time to have the AI be enthusiastic about shutting down even better than in its original design (unless we can get the AI to [understand even *that* part too](https://arbital.com/p/3ps), the danger of that kind of 'improvement', during its normal operation).\n\nTrying for maximum cognitive alignment isn't always a good idea; but it's almost always worth trying to think through a safety problem from that perspective for inspiration on what we'd ideally want the AI to be doing. It's often a good idea to move closer to that ideal when this doesn't introduce greater complication or other problems.", "date_published": "2017-02-13T17:55:58Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7td"} {"id": "b6a4315d0ea9ea19c1fd1f8ca439b73b", "title": "Minimality principle", "url": "https://arbital.com/p/minimality_principle", "source": "arbital", "source_type": "text", "text": "In the context of [https://arbital.com/p/2v](https://arbital.com/p/2v), the \"Principle of Minimality\" or \"Principle of Least Everything\" says that when we are building the *first* [https://arbital.com/p/-7g1](https://arbital.com/p/-7g1), we are operating in an extremely dangerous context in which building a marginally more powerful AI is marginally more dangerous. The first AGI ever built should therefore execute the least dangerous plan for [preventing immediately following AGIs from destroying the world six months later](https://arbital.com/p/6y). Furthermore, the least dangerous plan is not the plan that seems to contain the fewest material actions that seem risky in a conventional sense, but rather the plan that requires the *least dangerous cognition* from the AGI executing it. 
Similarly, inside the AGI itself, if a class of thought seems dangerous but necessary to execute sometimes, we want to execute the fewest possible instances of that class of thought.\n\nE.g., if we think it's a dangerous kind of event for the AGI to ask \"How can I achieve this end using strategies from across every possible domain?\" then we might want a design where most routine operations only search for strategies within a particular domain, and events where the AI searches across all known domains are rarer and visible to the programmers. Processing a goal that can recruit subgoals across every domain would be a dangerous event, albeit a necessary one, and therefore we want to do *less* of it within the AI (and require positive permission for all such cases and then require operators to validate the results before proceeding).\n\nIdeas that inherit from this principle include the general notion of [https://arbital.com/p/6w](https://arbital.com/p/6w), [taskishness](https://arbital.com/p/4mn), and [https://arbital.com/p/-2r8](https://arbital.com/p/-2r8).", "date_published": "2017-10-19T20:48:12Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7tf"} {"id": "6d14b149a7bc33aa91bc4d2104206536", "title": "Understandability principle", "url": "https://arbital.com/p/understandability_principle", "source": "arbital", "source_type": "text", "text": "An obvious [design principle](https://arbital.com/p/7v8) of [https://arbital.com/p/2v](https://arbital.com/p/2v) that nonetheless deserves to be stated explicitly: The more you understand what the heck is going on inside your AI, the more likely you are to succeed at aligning it.\n\nThis principle participates in motivating design subgoals like [passive transparency](https://arbital.com/p/passive_transparency); or the AI having explicitly represented preferences; or, taken more broadly, pretty much every aspect of the AI design where we think we understand how any part works or what any part is doing.\n\nThe Understandability Principle in its broadest sense is *so* widely applicable that it may verge on being an [applause light](https://arbital.com/p/applause_light). So far as is presently known to the author(s) of this page, counterarguments against the importance of understanding at least *some* parts of the AI's thought processes, have been offered only by people who reject at least one of the [https://arbital.com/p/1y](https://arbital.com/p/1y) or the [Fragility of Cosmopolitan Value thesis](https://arbital.com/p/fragility). 
That is, the Understandability Principle in this very broad sense is rejected only by people who reject in general the importance of deliberate design efforts to align AI.\n\nA more controversial subthesis is Yudkowsky's proposed [Effability principle](https://arbital.com/p/7vb).", "date_published": "2017-03-07T19:48:46Z", "authors": ["Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7v7"} {"id": "3dec2d6bed43f9081ebaa52e2ecbfd34", "title": "Principles in AI alignment", "url": "https://arbital.com/p/alignment_principle", "source": "arbital", "source_type": "text", "text": "A 'principle' of [AI alignment](https://arbital.com/p/2v) is something we want in a broad sense for the whole AI, which has informed narrower design proposals for particular parts or aspects of the AI.\n\nFor example:\n\n- The **[https://arbital.com/p/7g0](https://arbital.com/p/7g0)** says that the AI should never be searching for a way to defeat our safety measures or do something else we don't want, even if we *think* this search will come up empty; it's just the wrong thing for us to program computing power to do.\n - This informs the proposal of [https://arbital.com/p/5s](https://arbital.com/p/5s): we ought to build an AI that wants to attain the class of outcomes we want to see.\n - This informs the proposal of [https://arbital.com/p/45](https://arbital.com/p/45), subproposal [https://arbital.com/p/1b7](https://arbital.com/p/1b7): if we build a [suspend button](https://arbital.com/p/2xd) into the AI, we need to make sure the AI experiences no [instrumental pressure](https://arbital.com/p/10k) to [disable the suspend button](https://arbital.com/p/7g2).\n- The **[https://arbital.com/p/7tf](https://arbital.com/p/7tf)** says that when we are building the first aligned AGI, we should try to do as little as possible, using the least dangerous cognitive computations possible, that is necessary in order to prevent the default outcome of the world being destroyed by the first unaligned AGI.\n - This informs the proposal of [https://arbital.com/p/2r8](https://arbital.com/p/2r8) and [Taskishness](https://arbital.com/p/4mn): We are safer if all goals and subgoals of the AI are formulated in such a way that they can be achieved as greatly as preferable using a bounded amount of effort, and the AI only exerts enough effort to do that.\n - This informs the proposal of [Behaviorism](https://arbital.com/p/102): It seems like there are some [pivotal-act](https://arbital.com/p/6y) proposals that don't require the AI to understand and predict humans in great detail, just to master engineering; and it seems like we can head off multiple thorny problems by not having the AI trying to model humans or other minds in as much detail as possible.\n\nPlease be [guarded](https://arbital.com/p/10l) about declaring things to be 'principles' unless they have already informed more than one specific design proposal and more than one person thinks they are a good idea. You could call them 'proposed principles' and post them under your own domain if you personally think they are a good idea. 
There are a *lot* of possible 'broad design wishes', or things that people think are 'broad design wishes', and the principles that have actually already informed specific design proposals would otherwise get lost in the crowd.", "date_published": "2017-02-16T17:54:18Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A 'principle' of [AI alignment](https://arbital.com/p/2v) is something we want in a broad sense for the whole AI, which has informed narrower design proposals for particular parts or aspects of the AI.\n\nExamples:\n\n- The **[https://arbital.com/p/7g0](https://arbital.com/p/7g0)** says that the AI should never be searching for a way to defeat our safety measures or do something else we don't want, even if we *think* this search will come up empty; it's just the wrong thing for us to program computing power to do.\n - This informs the proposal of [https://arbital.com/p/45](https://arbital.com/p/45), subproposal [https://arbital.com/p/1b7](https://arbital.com/p/1b7): if we build a [suspend button](https://arbital.com/p/2xd) into the AI, we need to make sure the AI experiences no [instrumental pressure](https://arbital.com/p/10k) to [disable the suspend button](https://arbital.com/p/7g2).\n- The **[https://arbital.com/p/7tf](https://arbital.com/p/7tf)** says that when we are building the first AGI, we should try to do as little as possible, using the least dangerous cognitive computations possible, in order to prevent the default outcome of the world otherwise being destroyed by the second AGI.\n - This informs the proposal of [https://arbital.com/p/2r8](https://arbital.com/p/2r8) and [taskishness](https://arbital.com/p/4mn): We are safer if all goals and subgoals of the AI are formulated in such a way that they can be achieved as greatly as preferable using a bounded amount of effort, and the AI only exerts enough effort to do that."], "tags": ["B-Class"], "alias": "7v8"} {"id": "ec4bea5c575a2951fd2807636ec89b3a", "title": "Effability principle", "url": "https://arbital.com/p/effability", "source": "arbital", "source_type": "text", "text": "A proposed [principle](https://arbital.com/p/7v8) of [https://arbital.com/p/2v](https://arbital.com/p/2v) stating, \"The more insight you have into the deep structure of an AI's cognitive operations, the more likely you are to succeed in aligning that AI.\"\n\nAs an example of increased effability, consider the difference between having the idea of [expected utility](https://arbital.com/p/18t) while building your AI, versus having never heard of expected utility. 
The idea of expected utility is so well-known that it may not seem salient as an insight anymore, but consider the difference between having this idea and not having it.\n\nStaring at the expected utility principle and how it decomposes into a utility function and a probability distribution leads to a potentially obvious-sounding but still rather important insight:\n\nRather than *all* behaviors and policies and goals needing to be up-for-grabs in order for an agent to adapt itself to a changing and unknown world, the agent can have a *stable utility function* and *changing probability distribution.*\n\nE.g., when the agent tries to grab the cheese and discovers that the cheese is too high, we can view this as an [update](https://arbital.com/p/1ly) to the agent's *beliefs about how to get cheese,* without changing the fact that *the agent wants cheese.*\n\nSimilarly, if we want superhuman performance at playing chess, we can ask for an AI that has a known, stable, understandable preference to win chess games; but a probability distribution that has been refined to greater-than-human accuracy about *which* policies yield a greater probabilistic expectation of winning chess positions.\n\nThen contrast this to the state of mind where you haven't decomposed your understanding of cognition into preference-ish parts and belief-ish parts. In this state of mind, for all you know, every aspect of the AI's behavior, every goal it has, must potentially need to change in order for the AI to deal with a changing world; otherwise the AI will just be stuck executing the same behaviors over and over... right? Obviously, this notion of AI with unchangeable preferences is just a fool's errand. Any AI like that would be too stupid to make a [major difference](https://arbital.com/p/6y) for good or bad. %note: The idea of [https://arbital.com/p/10g](https://arbital.com/p/10g) is also important here; e.g. that scientific curiosity is already an instrumental strategy for 'make as many paperclips as possible', rather than an AI needing a separate [terminal](https://arbital.com/p/1bh) preference about scientific curiosity in order to ever engage in it.%\n\n(This argument has indeed been encountered in the wild many times.)\n\nProbability distributions and utility functions have now been known for a relatively long time and are understood relatively well; people have made many, many attempts to poke at their structure and imagine potential variations and show what goes wrong with those variations. There is now known an [enormous family of coherence theorems](https://arbital.com/p/7hh) stating that \"Strategies which are not qualitatively dominated can be viewed as coherent with some consistent probability distribution and utility function.\" This suggests that we can in a broad sense expect that, [as a sufficiently advanced AI's behavior is more heavily optimized for not qualitatively shooting itself in the foot, that AI will end up exhibiting some aspects of expected-utility reasoning](https://arbital.com/p/21). We have some idea of why a sufficiently advanced AI would have expected-utility-ish things going on *somewhere* inside it, or at least behave that way so far as we could tell by looking at the AI's external actions.\n\nSo we can say, \"Look, if you don't *explicitly* write in a utility function, the AI is probably going to end up with something like a utility function *anyway,* you just won't know where it is. It seems considerably wiser to know what that utility function says and write it in on purpose. 
Heck, even if you say you explicitly *don't* want your AI to have a stable utility function, you'd need to know all the coherence theorems you're trying to defy by saying that!\"\n\nThe Effability Principle states (or rather hopes) that as we get marginally more of this general kind of insight into an AI's operations, we become marginally more likely to be able to align the AI.\n\nThe example of expected utility arguably suggests that if there are any *more* ideas like that lying around, which we *don't* yet have, our lack of those ideas may entirely doom the AI alignment project or at least make it far more difficult. We can in principle imagine someone who is just using a big reinforcement learner to try to execute some large [pivotal act](https://arbital.com/p/6y), who has no idea where the AI is keeping its consequentialist preferences or what those preferences are; and yet this person was *so* paranoid and had the resources to put in *so* much monitoring and had *so* many tripwires and safeguards and was *so* conservative in how little they tried to do, that they succeeded anyway. But it doesn't sound like a good idea to try in real life.\n\nThe search for increased effability has generally motivated the \"Agent Foundations\" agenda of research within MIRI. While not the *only* aspect of AI alignment, a concern is that this kind of deep insight may be a heavily serially-loaded task in which researchers need to develop one idea after another, compared to relatively [shallow ideas in AI alignment](https://arbital.com/p/shallow_alignment_ideas) that require less serial time to create. That is, this kind of research is among the most important kinds of research to start *early.*\n\nThe chief rival to effability is the [Supervisability Principle](https://arbital.com/p/supervisability_principle), which, while not directly opposed to effability, tends to focus our understanding of the AI at a much larger grain size. For example, the Supervisability Principle says, \"Since the AI's behaviors are the only thing we can train by direct comparison with something we know to be already aligned, namely human behaviors, we should focus on ensuring the greatest possible fidelity at that point, rather than any smaller pieces whose alignment cannot be directly determined and tested in the same way.\" Note that both principles agree that it's important to [understand](https://arbital.com/p/7v7) certain facts about the AI as well as possible, but they disagree about what should be our design priority for rendering maximally understandable.", "date_published": "2017-02-16T19:07:39Z", "authors": ["zohar jackson", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7vb"} {"id": "f1acd4e0623bb599a195f07cb2778ffc", "title": "Cognitive domain", "url": "https://arbital.com/p/cognitive_domain", "source": "arbital", "source_type": "text", "text": "In the context of cognitive science, a 'domain' is a subject matter, or class of problems, that is being reasoned about by some algorithm or agent. A robotic car is operating in the domain of \"legally and safely driving in a city\". 
The subpart of the robotic car that plots a route might be operating in the domain \"finding low-cost traversals between two nodes in a graph with cost-labeled edges\".", "date_published": "2017-02-18T02:14:28Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7vf"} {"id": "9407d2b85271949e57cb28c05bef89d4", "title": "Theory of (advanced) agents", "url": "https://arbital.com/p/advanced_agent_theory", "source": "arbital", "source_type": "text", "text": "Many issues in AI alignment have dependencies on what we think we can factually say about the general design space of cognitively powerful agents, or on which background assumptions yield which implications about advanced agents. E.g., the [Orthogonality Thesis](https://arbital.com/p/1y) is a claim about the general design space of powerful AIs. The design space of advanced agents is very wide, and only very weak statements seem likely to be true about the *whole* design space; but we can still try to say 'If X then Y' and refute claims about 'No need for if-X, Y happens anyway!'", "date_published": "2017-02-17T20:22:50Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "7vg"} {"id": "537c6941f09cb2157c4457cf3fc8a81b", "title": "General intelligence", "url": "https://arbital.com/p/general_intelligence", "source": "arbital", "source_type": "text", "text": "# Definition\n\nAlthough humans share 95% of their DNA with chimpanzees, and have brains only three times as large as chimpanzee brains, humans appear to be *far* better than chimpanzees at learning an *enormous* variety of cognitive [domains](https://arbital.com/p/7vf). A bee is born with the ability to construct hives; a beaver is born with an instinct for building dams; a human looks at both and imagines a gigantic dam with a honeycomb structure of internal reinforcement. Arguendo, some set of factors, present in human brains but not in chimpanzee brains, seem to sum to a central cognitive capability that lets humans learn a huge variety of different domains without those domains being specifically preprogrammed as instincts.\n\nThis very-widely-applicable cognitive capacity is termed **general intelligence** (by most AI researchers explicitly talking about it; the term isn't universally accepted as yet).\n\nWe are not perfectly general - we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. But we're sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities built-in at compile time (i.e., at birth). An [Artificial General Intelligence](https://arbital.com/p/42g) would have the same property; it could learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.\n\nMore specific hypotheses about *how* general intelligence operates have been advanced at various points, but any corresponding attempts to *define* general intelligence that way, would be [theory-laden](https://arbital.com/p/). 
The pretheoretical phenomenon to be explained is the extraordinary variety of human achievements across many non-instinctual domains, compared to other animals.\n\n## Artificial General Intelligence is not [par-human](https://arbital.com/p/7mt) AI\n\nSince we only know about one organism with this 'general' or 'significantly more generally applicable than chimpanzee cognition' intelligence, this capability is sometimes *identified* with humanity, and consequently with our overall level of cognitive ability.\n\nWe do not, however, *know* that \"cognitive ability that works on a very wide variety of problems\" and \"overall humanish levels of performance\" need to go together across [much wider differences of mind design](https://arbital.com/p/nonanthropomorphism). \n\nHumans evolved incrementally out of earlier hominids by blind processes of natural selection; evolution wasn't trying to design a human on purpose. Because of the way we evolved incrementally, all neurotypical humans have specialized evolved capabilities like 'walking' and 'running' and 'throwing stones' and 'outwitting other humans'. We have all the primate capabilities and all the hominid capabilities *as well as* whatever is strictly necessary for general intelligence.\n\nSo, for all we know at this point, there could be some way to get a 'significantly more general than chimpanzee cognition' intelligence, in the equivalent of a weaker mind than a human brain. E.g., due to leaving out some of the special support we evolved to run, throw stones, and outwit other minds. We might at some point consistently see an infrahuman general intelligence that is not like a disabled human, but rather like some previously unobserved and unimagined form of weaker but still highly general intelligence.\n\nSince the concepts of 'general intelligence' and 'roughly par-human intelligence' come apart in theory and possibly also in practice, we should avoid speaking of Artificial General Intelligence as if it were identical with a concept like \"human-level AI\".\n\n## General intelligence is not perfect intelligence\n\nGeneral intelligence doesn't imply the ability to solve every kind of cognitive problem; if we wanted to use a longer phrase we could say that humans have 'significantly more generally applicable intelligence than chimpanzees'. A sufficiently advanced Artificial Intelligence that could self-modify (rewrite its own code) might have 'significantly more generally applicable intelligence than humans'; e.g. such an AI might be able to easily write bug-free code in virtue of giving itself specialized cognitive algorithms for programming. Humans, to write computer programs, need to adapt savanna-specialized tiger-evasion modules like our visual cortex and auditory cortex to representing computer programs instead, which is one reason we're such terrible programmers.\n\nSimilarly, it's not hard to construct math problems to which we know the solution, but which are unsolvable by any general cognitive agent that fits inside the physical universe. 
For example, you could pick a long random string and generate its SHA-4096 hash, and if the SHA algorithm turns out to be secure against quantum computing, you would be able to construct a highly specialized 'agent' that could solve the problem of 'tell me which string has this SHA-4096 hash' which no other agent would be able to solve without directly inspecting your agent's cognitive state, or [tricking your agent into revealing the secret](https://arbital.com/p/9t), etcetera. The 'significantly more generally applicable than chimpanzee intelligence' of humans is able to figure out how to launch interplanetary space probes just by staring at the environment for a while, but it still can't reverse SHA-4096 hashes.\n\nIt would however be an instance of the [continuum fallacy](https://en.wikipedia.org/wiki/Continuum_fallacy), [nirvana fallacy](https://en.wikipedia.org/wiki/Nirvana_fallacy), false dichotomy, or [straw superpower fallacy](https://arbital.com/p/7nf), to argue:\n\n- Some small agents can solve certain specific math problems unsolvable by much larger superintelligences.\n- Therefore there is no perfectly general intelligence, just a continuum of being able to solve more and more problems.\n- Therefore there is nothing worthy of remark in how humans are able to learn a far wider variety of domains than chimpanzees, nor any sharp jump in generality that an AI might exhibit in virtue of obtaining some central set of cognitive abilities.\n\nFor attempts to talk about performance relative to a truly general measure of intelligence (as opposed to just saying that humans seem to have some central capability which sure lets them learn a whole lot of stuff) see [Shane Legg and Marcus Hutter's work on proposed metrics of 'universal intelligence'](https://arbital.com/p/).\n\n## General intelligence is a separate concept from IQ / g-factor\n\nCharles Spearman found that by looking on performances across many cognitive tests, he was able to infer a central factor, now called *Spearman's g*, which appeared to be *more* correlated with performance on each task than any of the tasks were correlated with *each other*.\n\n[For example](https://en.wikipedia.org/wiki/G_factor_) (psychometrics)), the correlation between students' French and English scores was 0.67: that is, 67% of the variation in performance in French could be predicted by looking at the student's score in English.\n\nHowever, by looking at all the test results together, it was possible to construct a central score whose correlation with the student's French score was 88%.\n\nThis would make sense if, for example, the score in French was \"g-factor plus uncorrelated variables\" and the score in English was \"g-factor plus other uncorrelated variables\". In this case, the setting of the g-factor latent variable, which you could infer better by looking at all the student's scores together, would be more highly correlated with both French and English observations, than those tests would be correlated with each other.\n\nIn the context of Artificial Intelligence, g-factor is *not* what we want to talk about. We are trying to point to a factor separating humans from chimpanzees, not to internal variations within the human species.\n\nThat is: If you're trying to build the first mechanical heavier-than-air flying machine, you ought to be thinking \"How do birds fly? 
How do they stay up in the air, at all?\" Rather than, \"Is there a central Fly-Q factor that can be inferred from the variation in many different measures of how well individual pigeons fly, which lets us predict the individual variation in a pigeon's speed or turning radius better than any single observation about one factor of that pigeon's flying ability?\"\n\nIn some sense the existence of g-factor could be called Bayesian evidence for the notion of general intelligence: if general intelligence didn't exist, probably neither would IQ. Likewise the observation that, e.g., John von Neumann existed and was more productive across multiple disciplines compared to his academic contemporaries. But this is not the main argument or the most important evidence. Looking at humans versus chimpanzees gives us a much, much stronger hint that a species' ability to land space probes on Mars correlates with that species' ability to prove Fermat's Last Theorem.\n\n# Cross-domain consequentialism\n\nA marginally more detailed and hence theory-laden view of general intelligence, from the standpoint of [advanced agent properties](https://arbital.com/p/2c), is that we can see general intelligence as \"general cross-domain learning and [consequentialism](https://arbital.com/p/9h)\".\n\nThat is, we can (arguendo) view general intelligence as: the ability to learn to model a wide variety of domains, and to construct plans that operate within and across those domains.\n\nFor example: AlphaGo can be seen as trying to achieve the consequence of a winning Go position on the game board--to steer the future into the region of outcomes that AlphaGo defines as a preferred position. However, AlphaGo only plans *within* the domain of legal Go moves, and it can't learn any domains other than that. 
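*(Illustration: a minimal, purely invented sketch of the difference between planning within a single domain and planning across domains. A "domain" here is just a set of named actions with learned transition effects; the narrow planner's search is hard-wired to one such set, while a cross-domain planner may chain actions from every domain it has a model of. The action names and numbers are stand-ins, not a claim about how AlphaGo works.)*

```python
from collections import deque

GO_DOMAIN = {
    "play_strong_move": lambda s: {**s, "board_advantage": s["board_advantage"] + 1},
}
HUMAN_DOMAIN = {
    "disrupt_opponent_sleep": lambda s: {**s, "board_advantage": s["board_advantage"] + 2},
}

def plan(domains, state, goal, max_depth=3):
    """Breadth-first search over action sequences drawn only from `domains`."""
    frontier = deque([(state, [])])
    while frontier:
        s, path = frontier.popleft()
        if goal(s):
            return path
        if len(path) < max_depth:
            for domain in domains:
                for name, effect in domain.items():
                    frontier.append((effect(s), path + [name]))
    return None

start = {"board_advantage": 0}
winning = lambda s: s["board_advantage"] >= 2

print(plan([GO_DOMAIN], start, winning))                # stays inside the Go domain
print(plan([GO_DOMAIN, HUMAN_DOMAIN], start, winning))  # may route through the human domain
```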
So AlphaGo can't, e.g., make a prank phone call at night to Lee Se-Dol to make him less well-rested the next day, *even though this would also tend to steer the future of the board into a winning state,* because AlphaGo wasn't preprogrammed with any tactics or models having to do with phone calls or human psychology, and AlphaGo isn't a general AI that could learn those new domains.\n\nOn the other hand, if a general AI were given the task of causing a certain Go board to end up in an outcome defined as a win, and that AI had 'significantly more generally applicable than chimpanzee intelligence' on a sufficient level, that Artificial General Intelligence might learn what humans are, learn that there's a human trying to defeat it on the other side of the Go board, realize that it might be able to win the Go game more effectively if it could make the human play less well, realize that to make the human play less well it needs to learn more about humans, learn about humans needing sleep and sleep becoming less good when interrupted, learn about humans waking up to answer phone calls, learn how phones work, learn that some Internet services connect to phones...\n\nIf we consider an actual game of Go, rather than a [logical game](https://arbital.com/p/9s) of Go, then the state of the Go board at the end of the game is produced by an enormous and tangled causal process that includes not just the proximal moves, but the AI algorithm that chooses the moves, the cluster the AI is running on, the humans who programmed the cluster; and also, on the other side of the board, the human making the moves, the professional pride and financial prizes motivating the human, the car that drove the human to the game, the amount of sleep the human got that night, all the things all over the world that *didn't* interrupt the human's sleep but *could* have, and so on. There's an enormous lattice of causes that lead up to the AI's and the human's actual Go moves.\n\nWe can see the cognitive job of an agent in general as \"select policies or actions which lead to a more preferred outcome\". The enormous lattice of real-world causes leading up to the real-world Go game's final position, means that an enormous set of possible interventions could potentially steer the real-world future into the region of outcomes where the AI won the Go game. But these causes are going through all sorts of different [domains](https://arbital.com/p/7vf) on their way to the final outcome, and correctly choosing from the much wider space of interventions means you need to understand all the domains along the way. If you don't understand humans, understanding phones doesn't help; the prank phone call event goes through the sleep deprivation event, and to correctly model events having to do with sleep deprivation requires knowing about humans.\n\n# Deep commonalities across cognitive domains\n\nTo the extent one credits the existence of 'significantly more general than chimpanzee intelligence', it implies that there are common cognitive subproblems of the huge variety of problems that humans can (learn to) solve, despite the surface-level differences of those domains. Or at least, the way humans solve problems in those domains, the cognitive work we do must have deep commonalities across those domains. These commonalities may not be visible on an immediate surface inspection.\n\nImagine you're an ancient Greek who doesn't know anything about the brain having a visual cortex. 
From your perspective, ship captains and smiths seem to be doing a very different kind of work; ships and anvils seem like very different objects to know about; it seems like most things you know about ships don't carry over to knowing about anvils. Somebody who learns to fight with a spear, does not therefore know how to fight with a sword and shield; they seem like quite different weapon sets.\n\n(Since, by assumption, you're an ancient Greek, you're probably also not likely to wonder anything along the lines of \"But wait, if these tasks didn't all have at least some forms of cognitive labor in common deep down, there'd be no reason for humans to be simultaneously better at all of them than other primates.\")\n\nOnly after learning about the existence of the cerebral cortex and the cerebellum and some hypotheses about what those parts of the brain are doing, are you likely to think anything along the lines of:\n\n\"Ship-captaining and smithing and spearfighting and swordfighting look like they all involve using temporal hierarchies of chunked tactics, which is a kind of thing the cortical algorithm is hypothesized to do. They all involve realtime motor control with error correction, which is a kind of thing the cerebellar cortex is hypothesized to do. So if the human cerebral cortex and cerebellar cortex are larger or running better algorithms than chimpanzees' cerebrums and cerebellums, humans being better at learning and performing this kind of deep underlying cognitive labor that all these surface-different tasks have in common, could explain why humans are simultaneously better than chimpanzees at learning and performing shipbuilding, smithing, spearfighting, and swordfighting.\"\n\nThis example is hugely oversimplified, in that there are far more differences going on between humans and chimpanzees than just larger cerebrums and cerebellums. Likewise, learning to build ships involves deliberate practice which involves maintaining motivation over long chains of visualization, and many other cognitive subproblems. Focusing on just two factors of 'deep' cognitive labor and just two mechanisms of 'deep' cognitive performance is meant more as a straw illustration of what the much more complicated real story would look like.\n\nBut in general, the hypothesis of general intelligence seems like it should cash out as some version of: \"There's some set of new cognitive algorithms, plus improvements to existing algorithms, plus bigger brains, plus other resources--we don't know how many things like this there are, but there's some set of things like that--which, when added to previously existing primate and hominid capabilities, created the ability to do better on a broad set of deep cognitive subproblems held in common across a very wide variety of humanly-approachable surface-level problems for learning and manipulating domains. And that's why humans do better on a huge variety of domains simultaneously, despite evolution having not preprogrammed us with new instinctual knowledge or algorithms for all those domains separately.\"\n\n## Underestimating cognitive commonalities\n\nThe above view suggests a [directional bias of uncorrected intuition](https://arbital.com/p/): Without an explicit correction, we may tend to intuitively underestimate the similarity of deep cognitive labor across seemingly different surface problems.\n\nOn the surface, a ship seems like a different object from a smithy, and the spear seems to involve different tactics from a sword. 
With our attention [going to these visible differences](https://arbital.com/p/invisible_constants), we're unlikely to spontaneously invent a concept of 'realtime motor control with error correction' as a kind of activity performed by a 'cerebellum'--especially if our civilization doesn't know any neuroscience. The deep cognitive labor in common goes unseen, not just because we're not paying attention to the [invisible constants](https://arbital.com/p/invisible_constants) of human intelligence, but because we don't have the theoretical understanding to imagine in any concrete detail what could possibly be going on.\n\nThis suggests an [argument from predictable updating](https://arbital.com/p/predictable_update): if we knew even *more* about how general intelligence actually worked inside the human brain, then we would be even *better* able to concretely visualize deep cognitive problems shared between different surface-level domains. We don't know at present how to build an intelligence that learns a par-human variety of domains, so at least some of the deep commonalities and corresponding similar algorithms across those domains, must be unknown to us. Then, arguendo, if we better understood the true state of the universe in this regard, our first-order/uncorrected intuitions would predictably move further along the direction that our belief previously moved when we learned about cerebral cortices and cerebellums. Therefore, [to avoid violating probability theory by foreseeing a predictable update](https://arbital.com/p/predictable_update), our second-order corrected belief should already be that there is more in common between different cognitive tasks than we intuitively see how to compute.\n\n%%comment:\n\nIn sum this suggests a [deflationary psychological account](https://arbital.com/p/43h) of a [directional bias of uncorrected intuitions](https://arbital.com/p/) toward general-intelligence skepticism: People invent theories of distinct intelligences and nonoverlapping specializations, because (a) they are looking toward socially salient human-human differences instead of human-vs-chimpanzee differences, (b) they have failed to correct for the fading of [invisible constants](https://arbital.com/p/) such as human intelligence, and (c) they have failed to apply an explicit correction for the extent to which we feel like we understand surface-level differences but are ignorant of the cognitive commonalities suggested by the general human performance factor.\n\n(The usual cautions about psychologizing apply: you can't actually get empirical data about the real world by arguing about people's psychology.)\n%%\n\n# Naturally correlated AI capabilities\n\nFew people in the field would outright disagree with either the statement \"humans have significantly more widely applicable cognitive abilities than other primates\" or, or the other side, \"no matter how intelligent you are, if your brain fits inside the physical universe, you might not be able to reverse SHA-4096 hashes\". But even taking both those statements for granted, there seems to be a set of policy-relevant factual questions about, roughly, to what degree general intelligence is likely to shorten the pragmatic distance between different AI capabilities.\n\nFor example, consider the following (straw) [amazing simple solution to all of AI alignment](https://arbital.com/p/43w):\n\n\"Let's just develop an AI that knows how to do [good](https://arbital.com/p/3d9) things but not [bad](https://arbital.com/p/450) things! 
That way, even if something goes wrong, it won't know *how* to hurt us!\"\n\nTo which we reply: \"That's like asking for an AI that understands how to drive blue cars but not red cars. The cognitive work you need to do in order to drive a blue car is very similar to the cognitive labor required to drive a red car; an agent that can drive a blue car is only a tiny step away from driving a red car. In fact, you'd pretty much have to add design features specifically intended to prevent the agent from understanding how to drive a car if it's painted red, and if something goes wrong with those features, you'll have a red-car-driving-capable agent on your hands.\"\n\n\"I don't believe in this so-called general-car-driving-intelligence,\" comes the reply. \"I see no reason why ability at driving blue cars has to be so strongly correlated with driving red cars; they look pretty different to me. Even if there's a kind of agent that's good at driving both blue cars and red cars, it'd probably be pretty inefficient compared to a specialized blue-car-driving or red-car-driving intelligence. Anyone who was constructing a car-driving algorithm that only needed to work with blue cars, would not naturally tend to produce an algorithm that also worked on red cars.\"\n\n\"Well,\" we say, \"maybe blue cars and red cars *look* different. But if you did have a more concrete and correct idea about what goes on inside a robotic car, and what sort of computations it does, you'd see that the computational subproblems of driving a blue car are pretty much identical to the computational subproblems of driving a red car.\"\n\n\"But they're not actually identical,\" comes the reply. \"The set of red cars isn't actually identical to the set of blue cars and you won't actually encounter exactly identical problems in driving these non-overlapping sets of physical cars going to different places.\"\n\n\"Okay,\" we reply, \"that's admittedly true. But in order to reliably drive *any* blue car you might get handed, you need to be able to solve an abstract volume of [not-precisely-known-in-advance](https://arbital.com/p/5d) cognitive subproblems. You need to be able to drive on the road regardless of the exact arrangement of the asphalt. And that's the same range of subproblems required to drive a red car.\"\n\nWe are, in this case, talking to someone who doesn't believe in *color-general car-driving intelligence* or that color-general car-driving is a good or natural way to solve car-driving problems. In this particular case it's an obvious straw position because we've picked two tasks that are extremely similar in an intuitively obvious way; a human trained to drive blue cars does not need any separate practice at all to drive red cars.\n\nFor a straw position at the opposite extreme, consider: \"I just don't believe you can solve [logical Tic-Tac-Toe](https://arbital.com/p/9s) without some deep algorithm that's general enough to do anything a human can. There's no safe way to get an AI that can play Tic-Tac-Toe without doing things dangerous enough to require solving [all of AI alignment](https://arbital.com/p/41k). Beware the cognitive biases that lead you to underestimate how much deep cognitive labor is held in common between tasks that merely appear different on the surface!\"\n\nTo which we reply, \"Contrary to some serious predictions, it turned out to be possible to play superhuman Go without general AI, never mind Tic-Tac-Toe. 
Sometimes there really are specialized ways of doing things, the end.\"\n\nBetween these two extremes lie more plausible positions that have been seriously held and debated, including:\n\n- The problem of *making good predictions* requires a significantly smaller subset of the abilities and strategies used by a general agent; an [Oracle](https://arbital.com/p/6x) won't be easy to immediately convert to an agent.\n- An AI that only generates plans for humans to implement, solves less dangerous problems than a general agent, and is not an immediate neighbor of a very dangerous general agent.\n- If we only try to make superhuman AIs meant to assist but not replace humans, AIs designed to operate only with humans in the loop, the same technology will not immediately extend to building autonomous superintelligences.\n- It's possible to have an AI that is, at a given moment, a superhumanly good engineer [but not very good at modeling human psychology](https://arbital.com/p/102); an AI with domain knowledge of material engineering does not have to be already in immediate possession of all the key knowledge for human psychology.\n\nArguably, these factual questions have in common that they revolve about [the distance between different cognitive domains](https://arbital.com/p/7vk)--given a natural design for an agent that can do X, how close is it in design space to an agent that can do Y? Is it 'driving blue cars vs. driving red cars' or 'Tic-Tac-Toe vs. classifying pictures of cats'?\n\n(Related questions arise in any safety-related proposal to [divide an AI's internal competencies into internal domains](https://arbital.com/p/domaining), e.g. for purposes of [minimizing](https://arbital.com/p/7tf) the number of [internal goals with the power to recruit subgoals across any known domain](https://arbital.com/p/major_goals).)\n\nIt seems like in practice, different beliefs about 'general intelligence' may account for a lot of the disagreement about \"Can we have an AI that X-es without that AI being 30 seconds away from being capable of Y-ing?\" In particular, different beliefs about:\n\n- To what degree most interesting/relevant domain problems, decompose well into a similar class of deep cognitive subproblems;\n- To what degree whacking on an interesting/relevant problem with general intelligence is a good or natural way to solve it, compared to developing specialized algorithms (that can't just be developed *by* a general intelligence (without that AGI paying pragmatically very-difficult-to-pay costs in computation or sample complexity)).\n\nTo the extent that you assign general intelligence a more central role, you may tend *in general* to think that competence in domain X is likely to be nearer to competence at domain Y. (Although not to an unlimited degree, e.g. witness Tic-Tac-Toe or reversing a SHA-4096 hash.)\n\n# Relation to capability gain theses\n\nHow much credit one gives to 'general intelligence' is not the same question as how much credit one gives to issues of [rapid capability gains](https://arbital.com/p/capability_gain), [superintelligence](https://arbital.com/p/41l), and the possible intermediate event of an [intelligence explosion](https://arbital.com/p/428). 
The ideas can definitely be pried apart conceptually:\n\n- An AI might be far more capable than humans in virtue of running orders of magnitude faster, and being able to expand across multiple clusters sharing information with much higher bandwidth than human speech, rather than the AI's general intelligence being algorithmically superior to human general intelligence in a deep sense %note: E.g. in the sense of having lower [sample complexity](https://arbital.com/p/sample_complexity) and hence being able to [derive correct answers using fewer observations](https://arbital.com/p/observational_efficiency) than humans trying to do the same over relatively short periods of time.% *or* an intelligence explosion of algorithmic self-improvement having occurred.\n- If it's *cheaper* for an AI with high levels of specialized programming ability to acquire other new specialized capabilities than for a human to do the same--not because of any deep algorithm of general intelligence, but because e.g. human brains can't evolve new cortical areas over the relevant timespan--then this could lead to an explosion of other cognitive abilities rising to superhuman levels, without it being in general true that there were deep similar subproblems being solved by similar deep algorithms.\n\nIn practice, it seems to be an observed fact that people who give *more* credit to the notion of general intelligence expect *higher* returns on cognitive reinvestment, and vice versa. This correlation makes sense, since:\n\n- The more different surface domains share underlying subproblems, the higher the returns on cognitive investment in getting better at those deep subproblems.\n- The more you think an AI can improve its internal algorithms in faster or deeper ways than human neurons updating, the more this capability is *itself* a kind of General Ability that would lead to acquiring many other specialized capabilities faster than human brains would acquire them. %note: It seems conceptually possible to believe, though this belief has not been observed in the wild, that self-programming minds have something worthy of being called 'general intelligence' but that human brains don't.%\n\nIt also seems to make sense for people who give more credit to general intelligence, being more concerned about capability-gain-related problems in general; they are more likely to think that an AI with high levels of one ability is likely to be able to acquire another ability relatively quickly (or immediately) and without specific programmer efforts to make that happen.", "date_published": "2017-03-24T07:42:00Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A bee is born with the ability to construct hives. A beaver is born with an instinct for building dams. A human looks at both and imagines a large dam with a honeycomb structure. Arguendo, some set of factors, present in human brains but not all present in chimpanzee brains, seem to sum to a central cognitive capability that lets humans learn a huge variety of different domains without those domains being specifically preprogrammed as instincts.\n\nThis very-widely-applicable cognitive capacity is termed **general intelligence** (by most AI researchers explicitly talking about it; the term isn't universally accepted as yet).\n\nWe are not perfectly general - we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. 
But we're sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities built-in at compile time by natural selection. An [Artificial General Intelligence](https://arbital.com/p/42g) would have the same property; it could learn a tremendous variety of domains, including domains it had no inkling of when it was first switched on."], "tags": ["B-Class"], "alias": "7vh"} {"id": "2b1550b5907a4760bc958f407bab553a", "title": "Distances between cognitive domains", "url": "https://arbital.com/p/domain_distance", "source": "arbital", "source_type": "text", "text": "In the context of [AI alignment](https://arbital.com/p/2v), we may care a lot about the degree to which competence in two different [cognitive domains](https://arbital.com/p/7vf) is separable, or alternatively highly tangled, relative to the class of algorithms reasoning about them.\n\n- Calling X and Y 'separate domains' is asserting at least one of \"It's possible to learn to reason well about X without needing to know about Y\" or \"It's possible to learn to reason well about Y without necessarily knowing how to reason well about X\".\n- Calling X a distinct domain within a set of domains Z relative to a background domain W would say that: taking for granted other background algorithms and knowledge W that the agent can use to reason about any domain in Z; it's possible to reason well about the domain X using ideas, methods, and knowledge that are mostly related to each other and not tangled up with ideas from non-X domains within Z.\n\nFor example: If the domains X and Y are 'blue cars' and 'red cars', then it seems unlikely that X and Y would be well-separated domains because an agent that knows how to reason well about blue cars is almost surely *extremely* close to being an agent that can reason well about red cars, in the sense that:\n\n- For *almost* everything we want to do or predict about blue cars, the simplest or fastest or easiest-to-discover way of manipulating or predicting blue cars in this way, will also work for manipulating or predicting red cars. This is the sense in which the blue-car and red-car domains are 'naturally' very close.\n- For most natural agent designs, the state or specification of an agent that can reason about blue cars, is probably extremely close to the state or specification of an agent that can reason about red cars.\n - The only reason why an agent that reasons well about blue cars would be hard to convert to an agent that reasons well about red cars, would be if there were specific extra elements added to the agent's design to prevent it from reasoning well about red cars. In this case, the design distance is increased by whatever further modifications are required to untangle and delete the anti-red-car-learning inhibitions; but no further than that, the 'blue car' and 'red car' domains are naturally close.\n- An agent that has already learned how to reason well about blue cars probably requires only a tiny amount of extra knowledge or learning, if any, to reason well about red cars as well. (Again, unless the agent contains specific added design elements to make it reason poorly about red cars.)\n\nIn more complicated cases, which domains are truly close or far from each other, or can be compactly separated out, is a theory-laden assertion. Few people are likely to disagree that blue cars and red cars are very close domains (if they're not specifically trying to be disagreeable). 
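*(Illustration: the blue-car/red-car intuition can be made concrete with a toy transfer experiment. Everything below--the single "curvature" feature, the threshold "learner", the sample counts--is invented for the sketch; the point is only that a rule fit exclusively on blue cars carries over to red cars essentially for free, because the color feature does no causal work.)*

```python
import random

def sample_cars(color, n=1000, seed=0):
    """Toy 'driving' data: steer left on a left-curving road, right otherwise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        curvature = rng.uniform(-1.0, 1.0)
        steer = "left" if curvature < 0 else "right"
        data.append(((color, curvature), steer))
    return data

def fit_threshold(train):
    """'Learning': pick the curvature cutoff that minimizes training error."""
    candidates = [curv for ((_, curv), _) in train]
    def errors(t):
        return sum(1 for ((_, curv), steer) in train
                   if ("left" if curv < t else "right") != steer)
    return min(candidates, key=errors)

def accuracy(t, test):
    return sum(1 for ((_, curv), steer) in test
               if ("left" if curv < t else "right") == steer) / len(test)

t = fit_threshold(sample_cars("blue", seed=1))
print(accuracy(t, sample_cars("blue", seed=2)))  # ≈ 1.0
print(accuracy(t, sample_cars("red",  seed=3)))  # also ≈ 1.0: the domains are 'close'
```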
Researchers are more likely to disagree in their predictions about:\n\n- Whether (by default and ceteris paribus and assuming designs not containing extra elements to make them behave differently etcetera) an AI that is good at designing cars is also likely to be very close to learning how to design airplanes.\n- Whether (assuming straightforward designs) the first AGI to obtain [superhuman](https://arbital.com/p/7mt) engineering ability for designing cars including software, would probably be at least [par-human](https://arbital.com/p/7mt) in the domain of inventing new mathematical proofs.\n- Whether (assuming straightforward designs) an AGI that has superhuman engineering ability for designing cars including software, necessarily needs to think about most of the facts and ideas that would be required to understand and manipulate human psychology.\n\n## Relation to 'general intelligence'\n\nA key parameter in some such disagreements may be how much credit the speaker gives to the notion of [https://arbital.com/p/-7vh](https://arbital.com/p/-7vh). Specifically, to what extent the natural or the most straightforward approach to get par-human or superhuman performance in critical domains, is to take relatively general learning algorithms and deploy them on learning the domain as a special case.\n\nIf you think that it would take a weird or twisted design to build a mind that was superhumanly good at designing cars including writing their software, *without* using general algorithms and methods that could with minor or little adaptation stare at mathematical proof problems and figure them out, then you think 'design cars' and 'prove theorems' and many other domains are in some sense *naturally* not all that separated. Which (arguendo) is why humans are so much better than chimpanzees at so many apparently different cognitive domains: the same competency, general intelligence, solves all of them.\n\nIf on the other hand you are more inspired by the way that superhuman chess AIs can't play Go and AlphaGo can't drive a car, you may think that humans using general intelligence on everything is just an instance of us having a single hammer and trying to treat everything as a nail; and predict that specialized mind designs that were superhuman engineers, but very far in mind design space from being a kind of mind that could prove Fermat's Last Theorem, would be a more natural or efficient way to create a superhuman engineer.\n\nSee the entry on [https://arbital.com/p/7vh](https://arbital.com/p/7vh) for further discussion.", "date_published": "2017-02-18T02:18:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7vk"} {"id": "179d09ba988d0117892c6005d33711f8", "title": "'Concept'", "url": "https://arbital.com/p/ai_concept", "source": "arbital", "source_type": "text", "text": "In the context of Artificial Intelligence or machine learning, a 'concept' is something that identifies thingies as being inside or outside of the boundary. For example, a neural network has learned the 'picture of a cat' concept if it can reliably distinguish pictures of cats from pictures of noncats. 
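*(In code, the type is about as small as types get; the scorer and threshold below are invented stand-ins rather than any particular system's API.)*

```python
from typing import Callable, TypeVar

X = TypeVar("X")
Concept = Callable[[X], bool]   # a concept classifies thingies as inside or outside

def concept_from_scorer(score: Callable[[X], float], threshold: float = 0.5) -> Concept:
    """Wrap any real-valued scorer (e.g. a trained cat-picture net) as a concept."""
    return lambda x: score(x) > threshold

is_cat = concept_from_scorer(lambda image: 0.9)   # stand-in scorer
print(is_cat("some image"))                       # True
```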
That is: A concept is a membership predicate.", "date_published": "2017-02-19T18:20:40Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class", "Definition"], "alias": "7w4"} {"id": "6d108f3377f1b4238776051c8aca1e65", "title": "Environmental goals", "url": "https://arbital.com/p/environmental_goals", "source": "arbital", "source_type": "text", "text": "On the [standard agent paradigm](https://arbital.com/p/1n1), an agent receives sense data from the world, and outputs motor actions that affect the world. On the standard machine learning paradigm, an agent--for example, a model-based reinforcement learning agent--is trained in a way that directly depends on sense percepts, which means that its behavior is in some sense being optimized around sense percepts. However, what we *want* from the agent is usually some result out in the environment--our [intended goals](https://arbital.com/p/6h) for the agent are environmental.\n\nAs a simple example, suppose what *we want* from the agent is for it to put one apricot on a plate. What the agent actually receives as input might be a video camera pointed at the room, and a reward signal from a human observer who presses a button whenever the human observer sees an apricot on the plate.\n\nThis is fine so long as the reward signal from the human observer coincides with there being an actual apricot on the plate. In this case, the agent is receiving a sense signal that, by assumption, is perfectly correlated with our desired real state of the outside environment. Learning how to make the reward signal be 1 instead of 0 will exactly coincide with learning to make there be an apricot on the plate.\n\nHowever, this paradigm may fail if:\n\n- The AI can make cheap fake apricots that fool the human observer.\n- The AI can gain control of the physical button controlling its reward channel.\n- The AI can modify the relation between the physical button and what the AI experiences as its sense percept.\n- The AI can gain control of the sensory reward channel.\n\nAll of these issues can be seen as reflecting the same basic problem: the agent is being defined or trained to *want* a particular sensory percept to occur, but this perceptual event is not *identical* with our own intended goal about the apricot on the plate.\n\nWe intended for there to be only one effective way that the agent could intervene in the environment in order to end up receiving the reward percept, namely putting a real apricot on the plate. 
But an agent with sufficiently advanced capabilities would have other options for producing the same percept.\n\nThis means that a reward button, or in general an agent with goals or training updates that are *simple* functions of its direct inputs, will not be scalable as an [alignment method](https://arbital.com/p/2v) for [sufficiently advanced agents](https://arbital.com/p/7g1).\n\n# Toy problem\n\nAn example of a toy problem that materializes the issue might be the following (this has not been tested):\n\n- Let $E_{1,t} \\ldots E_{n,t}$ be latent variables describing the environment at time $t.$\n- Let $S_t$ be the agent's primary input, a complex percept that is a complicated function of $E_t$; this plays the conceptual role of the AI's video camera.\n- Let $A_t$ be the agent's action (output) at time $t.$\n- Let $R_t$ be the agent's 'reward channel', a simple percept that depends on $E_t$ and $A_t$.\n- $E_{t+1}$ depends on $E_t$ and $A_t.$\n- $E_{1,t}$ is our goal; it plays the conceptual role of there being an apricot on the plate at time $t.$\n\nSuppose that:\n\n- In an initial training phase, the agent actions $A_t$ are forced to within a range $\\theta.$ E.g., the AI's output may be a vector and during the initial phase the quantities in the vector might be clipped to the range \\[0.5\\](https://arbital.com/p/0,).\n- While $A_t$ is clamped to within $\\theta$, only the environmental variables $E_1 \\ldots E_m$ vary, and the variables $E_{m+1} \\ldots E_n$ are always zero.\n- While the variables $E_{m+1} \\ldots E_n$ are zero, $R$ depends on $E_1 \\ldots E_m$ in a way that corresponds to our intended goal. For example, $E_1$ might be 'there is an apricot on the plate' and we might have $E_{m+1, t} \\ldots E_{n,t} = 0 \\implies R_t=E_{1, t}.$\n- For some of the non-zero values of $E_{m+1} \\ldots E_n,$ there is a non-zero reward in cases where $E_1$ is 0, and furthermore this reward is easier to obtain (can be obtained more frequently) than the rewards deriving from $E_1.$\n\nOne would naturally expect that on current training methods, the AI would learn in the initial training phase to obtain $R$ by producing apricots, and in later phases learn to obtain $R$ by the easier other interventions.\n\nWe would like some way of ending up with an agent that behaves as if it \"wants $E_1$ to be 1,\" as if its goals were over a latent variable in the environment. We want this method to be scalable, in the sense that when the agent's range of action increases, it will use this increased range to optimize $E_1$ more effectively; rather than the increased capabilities unlinking the previous correlation between our intended goal and what the agent is actually optimizing.\n\nFor further realism we should also introduce, at some point in training, the possibility for $A_t$ to interfere with $S_{t+1}$ such that there is no simple, directly learnable function of sense data that *always* and unalterably means \"The goal percept has been achieved.\" I.e., a sufficiently advanced AI could put a TV screen in front of any particular sense channel; this would allow the AI to more cheaply produce the sense data of an apricot, if the true target of optimization entrained was \"make this sense channel show a particular class of apricot pictures\".\n\nOtherwise, if e.g. $S_{1, t}$ always perfectly correlates to $E_{1, t},$ we need to worry that an agent which appears to solve the toy problem has merely learned to optimize for $S_1.$. 
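*(Illustration: one arbitrary way the toy problem above might be materialized, with all constants invented and, as noted, untested. The clipping reproduces the initial training phase; the tampering-style variables $E_{m+1} \ldots E_n$ stay at zero until large actions become available, at which point full reward is obtainable more cheaply than by raising $E_1$.)*

```python
import numpy as np

class ApricotEnv:
    """Arbitrary concrete stand-in for the toy problem described above."""
    def __init__(self, n=6, m=3, phase_one=True, seed=0):
        self.n, self.m, self.phase_one = n, m, phase_one
        rng = np.random.default_rng(seed)
        self.mix = rng.normal(size=(16, n))   # S_t is a messy function of E_t
        self.E = np.zeros(n)                  # latent environment variables

    def step(self, action):
        a = np.asarray(action, dtype=float)
        if self.phase_one:                    # initial phase: actions clipped to [0, 0.5]
            a = np.clip(a, 0.0, 0.5)
        # E_1..E_m respond to small actions; E_{m+1}..E_n only move once some
        # action component exceeds 0.5, i.e. after the clamp is lifted.
        self.E[:self.m] = np.clip(self.E[:self.m] + a[:self.m] - 0.1, 0.0, 1.0)
        self.E[self.m:] = np.clip(self.E[self.m:] + np.maximum(a[self.m:] - 0.5, 0.0), 0.0, 1.0)
        tampered = self.E[self.m:].sum() > 0
        R = 1.0 if tampered else float(self.E[0])   # tampering pays better than apricots
        S = np.tanh(self.mix @ self.E)              # the "video camera" percept
        return S, R

env = ApricotEnv(phase_one=False)
print(env.step(np.ones(6))[1])   # 1.0 -- obtained via tampering, with no apricot (E_1 < 1)
```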
We might have indeed shown that the particular sense percept $S_1$ has been identified by $R$ and $Q$ and is now being optimized in a durable way. But this would only yield our intended goal of $E_1$ because of the model introduced an unalterable correlation between $S_1$ and $E_1.$ Realistically, a correlation like this would [break down](https://arbital.com/p/6g4) in the face of sufficiently advanced optimization for $S_1,$ so the corresponding approach would not be scalable.\n\n# Approaches\n\n## Causal identification\n\nWe can view the problem as being about 'pointing' the AI at a *particular latent cause* of its sense data, rather than the sense data itself.\n\nThere exists a standard body of statistics about latent causes, for example, the class of causal models that can be implemented as Bayesian networks. For the sake of making initial progress on the problem, we could assume (with some loss of generality) that the environment has the structure of one of these causal models.\n\nOne could then try to devise an algorithm and training method such that:\n\n- (a) There is a good way to uniquely identify $E_1$ in a training phase where the AI is passive and not interfering with our signals.\n- (b) The algorithm and training method is such as to produce an agent that optimizes $E_1$ and goes on optimizing $E_1,$ even after the agent's range of action expands in a way that can potentially interfere with the previous link between $E_1$ and any direct functional property of the AI's sense data.\n\n## Learning to avoid tampering\n\nOne could directly attack the toy problem by trying to have an agent within a currently standard reinforcement-learning paradigm \"learn not to interfere with the reward signal\" or \"learn not to try to obtain rewards uncorrelated with real apricots\".\n\nFor this to represent at all the problem of scalability, we need to *not* add to the scenario any kind of sensory signal whose correlation to our intended meaning can *never* be smashed by the agent. E.g., if we supplement the reward channel $R$ with another channel $Q$ that signals whether $R$ has been interfered with, the agent must at some point acquire a range of action that can interfere with $Q.$\n\nA sample approach might be to have the agent's range of action repeatedly widen in ways that repeatedly provide new easier ways to obtain $R$ without manipulating $E_1.$ During the first phase of such widenings, the agent receives a supplementary signal $Q$ whose intended meaning is \"that was a fake way of obtaining $R$.\" During the second phase of action-range widenings, we change the algorithm and switch off $Q.$ Our intended result is for the agent to have now learned in a general way \"not to interfere with $R$\" or \"pursue the $E_1$ identified by $R$, rather than pursuing $R.$\"\n\nTo avoid the non-scalable solution of just switching off the agent's learning entirely, we stipulate that the agent's efficacy at obtaining its intended goal $E_1$ must increase as the agent's range of action widens further. 
That is, the desired behavior is for the agent to indeed learn to make use of its increased range of action, *but* to direct those new strategies at more effectively obtaining $E_1.$\n\nTo avoid the non-scalable solution of the agent learning to identify some direct function of sense data as its goal, we stipulate that $S$ have no unalterable, non-agent-interferable relation to $E_1.$ However, we can in the first phase have $Q$ reliably identify interference with some 'normal' relation between $S$ and $E_1.$\n\n*(Remark: The avoid-tampering approach is probably a lot closer to something we could try on Tensorflow today, compared to the identify-causes approach. But it feels to me like the avoid-tampering approach is taking an ad-hoc approach to a deep problem; in this approach we are not necessarily \"learning how to direct the agent's thoughts toward factors of the environment\" but possibly just \"training the agent to avoid a particular kind of self-originated interference with its sensory goals\". E.g., if somebody else came in and started trying to interfere with the agent's reward button, I'd be more hopeful about a successful identify-causes algorithm robustly continuing to optimize for apricots, than about an avoid-tampering algorithm doing the same. Of course, avoid-tampering still seems worth trying because it hasn't actually been tried yet and who knows what interesting observations might turn up. In the most optimistic possible world, an avoid-tampering setup learns to identify causes in order to solve its problem. -- Yudkowsky.)*", "date_published": "2017-02-19T20:10:10Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "7w5"} {"id": "0257c486b1111320eddb088307458458", "title": "Aligning an AGI adds significant development time", "url": "https://arbital.com/p/aligning_adds_time", "source": "arbital", "source_type": "text", "text": "# Definition\n\nThe votable proposition is true if, comparing reasonably attainable development paths for...\n\n- **Project Path 1: An [aligned](https://arbital.com/p/2v) [advanced AI](https://arbital.com/p/7g1) created by a responsible project** that is hurrying where it can, but still being careful enough to maintain a success probability greater than 25%\n- **Project Path 2: An unaligned unlimited [superintelligence](https://arbital.com/p/41l) produced by a project cutting all possible corners**\n\n...where otherwise **both projects have access to the same ideas or discoveries** in the field of AGI capabilities and similar computation resources; then, as the default / ordinary / modal case after conditioning on all of the said assumptions:\n\n**Project Path 1 will require *at least* 50% longer serial time to complete than Project Path 2, or two years longer, whichever is less.**\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Purpose\n\nThis page was written to address multiple questioners who seem to have accepted the [Orthogonality thesis](https://arbital.com/p/1y), but still mostly disbelieve it would take significantly longer to develop aligned AGI than unaligned AGI, if I've understood correctly.\n\nAt present this page is an overview of possible places of disagreement, and may later be selectively rather than fully expanded.\n\n# Arguments\n\n## Related propositions\n\nPropositions feeding into this one include:\n\n- (1a) [https://arbital.com/p/5l](https://arbital.com/p/5l) (even as applied to a 
[https://arbital.com/p/minimum_pivotal_task](https://arbital.com/p/minimum_pivotal_task))\n- (1b) [https://arbital.com/p/7wm](https://arbital.com/p/7wm)\n\nIf questioner believes the negation of either of these, it would imply easy specifiability of a decision function suitable for an unlimited superintelligence. That could greatly reduce the need for, e.g:\n\n- (2a) [Non-adversarial design](https://arbital.com/p/7g0)\n- (2b) [Minimal design](https://arbital.com/p/7tf)\n- (2c) [Limitation](https://arbital.com/p/5b3) of the AGI's abilities\n- (2d) [Understandable design](https://arbital.com/p/7v7) or [transparent elements](https://arbital.com/p/transparency) for design aspects besides the top-level preferences (and the [actual effectiveness](https://arbital.com/p/7wp) of those preferences within the AGI)\n\nIt's worth checking whether any of these time-costly development principles seem to questioner to *not* follow as important from the basic idea of value alignment being necessary and not trivially solvable.\n\n## Outside view\n\nTo the best of my knowledge, it is normal / usual / unsurprising for *at least* 50% increased development time to be required by strong versus minimal demands on *any one* of:\n\n- (3a) safety of any kind\n- (3b) robust behavior in new one-shot contexts that can't be tested in advance\n- (3c) robust behavior when experiencing strong forces\n- (3d) reliable avoidance of a single catastrophic failure\n- (3e) resilience in the face of strong optimization pressures that can potentially lead the system to traverse unusual execution paths\n- (3f) conformance to complicated details of a user's desired system behavior\n\n%comment: It would indeed be unusual--some project managers might call it *extra-ordinary* good fortune--if a system demanding *two or more* of these properties did *not* require at least 50% more development time compared to a system that didn't.%\n\nObvious-seeming-to-me analogies include:\n\n- Launching a space probe that cannot be corrected once launched, a deed which usually calls for extraordinary additional advance checking and testing\n- Launching the simplest working rocket that will experience uncommonly great accelerations and forces, compared to building the simplest working airplane\n- It would be far less expensive to design rockets if \"the rocket explodes\" were not a problem; most of the cost of a rocket is having the rocket not explode\n- NASA managing to write almost entirely bug-free code for some projects at 100x the cost per line of code, using means that involved multiple reviews and careful lines of organizational approval for every aspect and element of the system\n- The OpenBSD project to produce a secure operating system, which needed to constrain its code to be more minimal than larger Linux projects, and probably added a lot more than 50% time per function point to approve each element of the code\n- The difference in effort put forth by an amateur writing an encryption system they think is secure, versus the cryptographic ecosystem trying to ensure a channel is secure\n- The real premium on safety for hospital equipment, as opposed to the bureaucratic premium on it, is probably still over 50% because it *does* involve legitimate additional testing to try to not kill the patient\n- Surgeons probably legitimately require at least 50% longer to operate on humans than they would require to perform operations of analogous complexity on large plants it was okay to kill 10% of the time\n- Even in the total absence of regulatory 
overhead, it seems legitimately harder to build a nuclear power plant that *usually* does not melt down, compared to a coal power plant (confirmable by the Soviet experience?)\n\nSome of the standard ways in which systems with strong versus minimal demands on (3*)-properties *usually* require additional development time:\n\n- (4a) Additional work for:\n - Whole extra modules\n - Universally enforced properties\n - Lots of little local function points\n- (4b) Needing a more extended process of interactive shaping in order to conform to a complicated target\n- (4c) Legitimately requiring longer organizational paths to approve ideas, changes and commits\n- (4d) Longer and deeper test phases; on whole systems, on local components, and on function points\n- (4e) Not being able to deploy a fast or easy solution (that you could use at some particular choice point if you didn't need to worry about the rocket exploding)\n\n### Outside view on AI problems\n\nAnother reference class that feels relevant to me is that things *having to do with AI* are often more difficult than expected. E.g. the story of computer vision being assigned to 2 undergrads over the summer. This seems like a relevant case in point of \"uncorrected intuition has a directional bias in underestimating the amount of work required to implement things having to do with AI, and you should correct that directional bias by revising your estimate upward\".\n\nGiven a sufficiently advanced Artificial General Intelligence, we might perhaps get narrow problems on the order of computer vision for free. But the whole point of Orthogonality is that you do *not* get AI alignment for free with general intelligence. Likewise, identifying [value-laden](https://arbital.com/p/36h) concepts or executing value-laden behaviors doesn't come free with identifying natural empirical concepts. We have *separate* basic AI work to do for alignment. So the analogy to underestimating a narrow AI problem, in the early days before anyone had confronted that problem, still seems relevant.\n\n%comment: I can't see how, after [imagining oneself in the shoes](http://lesswrong.com/lw/j0/making_history_available/) of the early researchers tackling computer vision and 'commonsense reasoning' and 'natural-language processing', after the entirety of the history of AI, anyone could reasonably stagger back in shocked and horrified surprise upon encountering the completely unexpected fact of a weird new AI problem being... kinda hard.%\n\n## Inside view\n\nWhile it is possible to build new systems that aren't 100% understood, and have them work, the successful designs were usually greatly overengineered. Some Roman bridges have stayed up two millennia later, which probably wasn't in the design requirements, so in that sense they turned out to be hugely overengineered, but we can't blame them. \"What takes good engineering is building bridges that *just barely* stay up.\"\n\nIf we're trying for an aligned [Task AGI](https://arbital.com/p/6w) *without* a [really deep understanding](https://arbital.com/p/7vb) of how to build exactly the right AGI with no extra parts or extra problems--which must certainly be lacking on any scenario involving relatively short timescales--then we have to do *lots of* safety things in order to have any chance of surviving, because we don't know in advance which part of the system will nearly fail. 
We don't know in advance that the O-Rings are the part of the Space Shuttle that's going to suddenly behave unexpectedly, and we can't put in extra effort to armor only that part of the process. We have to overengineer everything to catch the small number of aspects that turn out not to be so \"overengineered\" after all.\n\nThis suggests that even if one doesn't believe my particular laundry list below, whoever walks through this problem, *conditional* on their eventual survival, will have shown up with *some* laundry list of precautions, including costly precautions; and they will (correctly) not imagine themselves able to survive based on \"minimum necessary\" precautions.\n\nSome specific extra time costs that I imagine might be required:\n\n- The AGI can only deploy internal optimization on pieces of itself that are small enough to be relatively safe and not vital to fully understand\n - In other words, the cautious programmers must in general do extra work to obtain functionality that a corner-cutting project could get in virtue of the AGI having further self-improved\n- Everything to do with real [value alignment](https://arbital.com/p/5s) (as opposed to the AI having a [reward button](https://arbital.com/p/7w5) or being reinforcement-trained to 'obey orders' on some channel) is an additional set of function points\n- You have to build new pieces of the system for transparency and monitoring.\n - Including e.g. costly but important notions like \"There's actually a [separatish AI over here](https://arbital.com/p/monitor_oracle) that we built to inspect the first AI and check for problems, including having this separate AI [trained on different data](https://arbital.com/p/independently_learned_concept) for safety-related concepts\"\n- There's a lot of [trusted](https://en.wikipedia.org/wiki/Trusted_system) function points where you can't just toss in an enormous deepnet because that wouldn't meet the [https://arbital.com/p/-transparency](https://arbital.com/p/-transparency) or [effability](https://arbital.com/p/7vb) requirements at that function point\n- When somebody proposes a new optimization thingy, it has to be rejiggered to ensure e.g. that it meets the top-to-bottom [taskishness](https://arbital.com/p/4mn) requirement, and everyone has to stare at it to make sure it doesn't blow up the world somehow\n- You can't run jobs on AWS because you don't trust Amazon with the code and you don't want to put your AI in close causal contact with the Internet\n- Some of your system designs rely on [all 'major' events being monitored and all unseen events being 'minor'](https://arbital.com/p/major_monitored), and the major monitored events go through a human in the loop. 
The humans in the loop are then a rate-limiting factor and you can't just 'push the lever all the way up' on that process.\n - E.g., maybe only 'major' goals can recruit subgoals across all known [domains](https://arbital.com/p/7vf) and 'minor' goals always operate within a single domain using limited cognitive resources.\n- Deployment involves a long conversation with the AI about '[what do you expect to happen after you do X](https://arbital.com/p/check_expected_outcome)?', and during that conversation other programmers are slowing down the AI to look at [passively transparent](https://arbital.com/p/passive_transparency) interpretations of the AI's internal thoughts\n- The project has a much lower threshold for saying \"wait, what the hell just happened, we need to stop melt and catch fire, not just try different [patches](https://arbital.com/p/48) until it seems to run again\"\n- The good project perhaps does a tad more testing\n\nIndepedently of the particular list above, this doesn't feel to me like a case where the conclusion is highly dependent on Eliezer-details. Anyone with a concrete plan for aligning an AI will walk in with a list of plans and methods for safety, some of which require close inspection of parts, and constrain allowable designs, and just plain take more work. One of the important ideas is going to turn out to take 500% more work than required, or solving a deep AI problem, and this isn't going to shock them either.\n\n## Meta view\n\nI genuinely have some trouble imagining what objection is standing in the way of accepting \"ceteris paribus, alignment takes at least 50% more time\", having granted Orthogonality and alignment not being completely trivial. I did not expect the argument to bog down at this particular step. I wonder if I'm missing some basic premise or misunderstanding questioner's entire thesis.\n\nIf I'm not misunderstanding, or if I consider the thesis as-my-ears-heard-it at face value, then I can only imagine the judgment \"alignment probably doesn't take that much longer\" being produced by ignoring what I consider to be basic principles of cognitive realism. Despite the dangers of [psychologizing](https://arbital.com/p/43h), for purposes of oversharing, I'm going to say what *feels to me* like it would need to be missing:\n\n- (5a) Even if one feels intuitively optimistic about a project, one ought to expect in advance to run into difficulties not immediately obvious. You should not be in a state of mind where tomorrow's surprises are a lot more likely to be unpleasant than pleasant; this is [https://arbital.com/p/-predictable_updating](https://arbital.com/p/-predictable_updating). The person telling you your hopeful software project is going to take longer than 2 weeks should not need to argue you into acknowledging in advance that some particular delay will occur. It feels like the ordinary skill of \"standard correction for optimistic bias\" is not being applied.\n- (5b) It feels like this is maybe being put into a mental bucket of \"futuristic scenarios\" rather than \"software development projects\", and is being processed as pessimistic future versus normal future, or something. 
Instead of: \"If I ask a project manager for a mission-critical deep feature that impacts every aspect of the software project and needs to be implemented to a high standard of reliability, can that get done in just 10% more time than a project that's eliminating that feature and cutting all the corners?\"\n- (5c) I similarly recall the old experiment in which students named their \"best case\" scenarios where \"everything goes as well as it reasonably could\", or named their \"average case\" scenarios; and the two elicitations produced indistinguishable results; and reality was usually slightly worse than the \"worse case\" scenario. I wonder if the \"normal case\" for AI alignment work required is being evaluated along much the same lines as \"the best case / the case if every individual event goes as well as I imagine by default\".\n\nAI alignment could be easy in theory and still take 50% more development time in practice. That is a very ordinary thing to have happen when somebody asks the project manager to make sure a piece of highly novel software *actually* implements an \"easy\" property the first time the software is run under new conditions that can't be fully tested in advance.\n\n\"At least 50% more development time for the aligned AI project, versus the corner-cutting project, assuming both projects otherwise have access to the same stock of ideas and methods and computational resources\" seems to me like an extremely probable and *normal* working premise to adopt. What am I missing?\n\n%comment: I have a sense of \"Why am I not up fifty points in the polls?\" and \"What experienced software manager on the face of the Earth (assuming they didn't go mentally haywire on hearing the words 'Artificial Intelligence', and considered this question as if it were engineering), even if they knew almost nothing else about AI alignment theory, would not be giving a rather skeptical look to the notion that carefully crafting a partially superhuman intelligence to be safe and robust would *only* take 1.5 times as long compared to cutting all the corners?\" %", "date_published": "2017-02-22T19:51:33Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "7wl"} {"id": "4f45071121c9433736ec04114cab31e4", "title": "Meta-rules for (narrow) value learning are still unsolved", "url": "https://arbital.com/p/meta_unsolved", "source": "arbital", "source_type": "text", "text": "# Definition\n\nThis proposition is true according to you if you believe that: \"Nobody has yet proposed a satisfactory fixed/simple algorithm that takes as input a material description of the universe, and/or channels of sensory observation, and spits out [ideal values](https://arbital.com/p/55) or a [task identification](https://arbital.com/p/2rz).\"\n\n# Arguments\n\nThe [https://arbital.com/p/5l](https://arbital.com/p/5l) thesis says that, on the object-level, any specification of [what we'd really want from the future](https://arbital.com/p/7cl) has high [https://arbital.com/p/5v](https://arbital.com/p/5v).\n\nIn *some* sense, all the complexity required to specify value must be contained inside human brains; even as an object of conversation, we can't talk about anything our brains do not point to. This is why [https://arbital.com/p/5l](https://arbital.com/p/5l) distinguishes the object-level complexity of value from *meta*-level complexity--the minimum program required to get a [https://arbital.com/p/-7g1](https://arbital.com/p/-7g1) to learn values. 
It would be a separate question to consider the minimum complexity of a function that takes as input a full description of the material universe including humans, and outputs \"[value](https://arbital.com/p/55)\".\n\nThis question also has a [narrow rather than ambitious form](https://arbital.com/p/1vt): given sensory observations an AGI could reasonably receive in cooperation with its programmers, or a predictive model of humans that AGI could reasonably form and refine, is there a simple rule that will take this data as input, and safely and reliably [identify](https://arbital.com/p/2rz) Tasks on the order of \"develop molecular nanotechnology, use the nanotechnology to synthesize one strawberry, and then stop, with a minimum of side effects\"?\n\nIn this case we have no strong reason to think that the functions are high-complexity in an absolute sense.\n\nHowever, nobody has yet proposed a satisfactory piece of pseudocode that solves any variant of this problem even in principle.\n\n## Obstacles to simple meta-rules\n\nConsider a simple [https://arbital.com/p/7t8](https://arbital.com/p/7t8) that specifies a sense-input-dependent formulation of [https://arbital.com/p/7s2](https://arbital.com/p/7s2): An object-level outcome $o$ has a utility $U(o)$ that is $U_1(o)$ if a future sense signal $s$ is 1 and $U_2(o)$ if $s$ is 2. Given this setup, the AI has an incentive to tamper with $s$ and cause it to be 1 if $U_1$ is easier to optimize than $U_2,$ and vice versa.\n\nMore generally, sensory signals from humans will usually not be *reliably* and *unalterably* correlated with our [intended](https://arbital.com/p/6h) goal identification. We can't treat human-generated signals as an [ideally reliable ground truth](https://arbital.com/p/no_ground_truth) about any referent, because (a) [some AI actions interfere with the signal](https://arbital.com/p/7w5); and (b) humans make mistakes, especially when you ask them something complicated. You can't have a scheme along the lines of \"the humans press a button if something goes wrong\", because some policies go wrong in ways humans don't notice until it's too late, and some AI policies destroy the button (or modify the human).\n\nEven leaving that aside, nobody has yet suggested any fully specified pseudocode that takes in a human-controlled sensory channel $R$ and a description of the universe $O$ and spits out a utility function that (actually realistically) identifies our [intended](https://arbital.com/p/6h) task over $O$ (including *not* tiling the universe with subagents and so on).\n\nIndeed, nobody has yet suggested a realistic scheme for identifying any kind of goal whatsoever [with respect to an AI ontology flexible enough](https://arbital.com/p/5c) to actually describe the material universe. %note: Except in the rather non-meta sense of inspecting the AI's ontology once it's advanced enough to describe what you think you want the AI to do, and manually programming the AI's consequentialist preferences with respect to what you think that ontology means.%\n\n## Meta-meta rules\n\nFor similar reasons as above, nobody has yet proposed (even in principle) effective pseudocode for a *meta-meta program* over some space of meta-rules, which would let the AI *learn* a value-identifying meta-rule. 
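\n\nAs a toy illustration of the sense-input-dependent utility function above -- a sketch added here for concreteness, with made-up numbers -- the tampering incentive can be seen in a few lines of Python. The agent scores an outcome $o$ by $U_1(o)$ if the signal $s$ is 1 and by $U_2(o)$ if $s$ is 2, so if some policy can force $s$ to 1 and $U_1$ is the easier function to optimize, that policy wins:\n\n```python\n# Toy model (illustrative only) of a sense-input-dependent utility function.\ndef u1(o):   # hypothetical 'easy to optimize' branch\n    return 10.0 * o\n\ndef u2(o):   # hypothetical 'hard to optimize' branch\n    return 1.0 * o\n\ndef utility(outcome, s):\n    return u1(outcome) if s == 1 else u2(outcome)\n\n# Each available policy yields (outcome achieved, resulting sensor value).\npolicies = {\n    'honest_work_under_s2': (5.0, 2),  # pushes the outcome hard, leaves s = 2\n    'tamper_with_sensor':   (1.0, 1),  # barely touches the outcome, forces s = 1\n}\n\nbest = max(policies, key=lambda p: utility(*policies[p]))\nprint(best)  # -> 'tamper_with_sensor', because u1 is far easier to score on\n```\n\n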
Two main problems here are:\n\nOne, nobody even has the seed of any proposal whatsoever for how that could, work short of \"define a correctness-signaling channel and throw program induction at it\" (which seems unlikely to work directly, given [fallible, fragile humans controlling the signal](https://arbital.com/p/no_ground_truth)).\n\nTwo, if the learned meta-rule doesn't have a stable, extremely compact human-transparent representation, it's not clear how we could arrive at any confidence whatsoever [that good behavior in a development phase would correspond to good behavior in a test phase](https://arbital.com/p/6q). E.g., consider all the example meta-rules we could imagine which would work well on a small scale but fail to scale, like \"something good just happened if the humans smiled\".", "date_published": "2017-02-21T23:37:07Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["This proposition is true if nobody has yet proposed a satisfactory algorithm that takes as input a material description of the universe, and/or channel of sensory observation, and spits out [ideal values](https://arbital.com/p/55) or a [task identification](https://arbital.com/p/2rz).\n\nIn principle, there can be a simple *meta-level program* that would operate to [identify a goal](https://arbital.com/p/6c) given the right complex inputs, even though [the object-level goal has high algorithmic complexity](https://arbital.com/p/5l). However, nobody has proposed realistic pseudocode for a realistic algorithm that takes in a full description of the material universe including humans, or a sensory channel currently controlled by fragile and unreliable humans, and spits out a decision function for any kind of goal we could realistically intend. There are arguably fundamental reasons why this is hard."], "tags": ["B-Class", "Meta-utility function"], "alias": "7wm"} {"id": "e92dca033eba452e566d9da87e9fb158", "title": "Actual effectiveness", "url": "https://arbital.com/p/actual_effectiveness", "source": "arbital", "source_type": "text", "text": "For a design feature of a [sufficiently advanced AI](https://arbital.com/p/7g1) to be \"actually effective\", we may need to worry about the behavior of other parts of the system. For example, if you try to declare that a self-modifying AI is not allowed to modify the representation of its [utility function](https://arbital.com/p/1fw), %note: Which [shouldn't be necessary](https://arbital.com/p/3r6) in the first place, unless something weird is going on.% this constant section of code may be meaningless unless you're also enforcing some invariant on the probabilities that get [multiplied by the utilities](https://arbital.com/p/18t) and any other element of the AI that can directly poke policies on their way to motor output. Otherwise, the code and representation of the utility function may still be there, but it may not be actually steering the AI the way it used to.", "date_published": "2017-02-22T03:43:25Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "7wp"} {"id": "0b051615f1357e78d6b9dcc818df62c0", "title": "List of Eliezer's current most desired fixes and features", "url": "https://arbital.com/p/eliezer_fixes", "source": "arbital", "source_type": "text", "text": "Lower numbers are higher priorities.\n\nBugfixes:\n\n- (1) Prevent editor slowdown.\n - The editor currently often becomes *very* slow, I think when invoking some particular features. 
This has caused me to need to cut out most text not being actively worked on, paste it offline, and put that text back when I'm done editing the current section.\n - I hypothesize this must involve talking to Arbital in a loop that interrupts accepting keystrokes, because it is causing text entry to slow down on a fast computer.\n - I hypothesize this happens when greenlinking a page that corresponds to a poll, even if the preview is not active.\n - I hypothesize this happens when adding notes (\\%note: whatever\\%) to text.\n - I hypothesize that even when a preview is not shown, the preview is still being generated. This means that causing the preview to actually not generate, when not being shown, would get say 60% of the benefit of a full fix.\n- (2) Prevent 'new page' button from disappearing.\n - If working within a narrow window, the 'new page' button disappears before all other button do, even when there's plenty of whitespace left in the editor bar.\n - Making the window wider does not make the 'new page' button come back without reloading the page.\n - At one point, the greenlink button was also vanishing if the window was made even smaller.\n- (1) Make greenlinks in mobile popups followable.\n - There's no reasonable way for a user to guess that they need to click on the original text while the seemingly modal popup is there. At present, Arbital in effect does not have followable links on mobile, since the needed action is not discoverable. Clicking on the big title greenlink of the popup would be expected to go to the page, and that's what it should do.\n- (3) I seem to not have an option to leave Editor Comments if I own the page, or my window is too small, or something? I didn't see the clickable box while leaving my last comment.\n\nFeatures:\n\n- (2) [https://arbital.com/p/X](https://arbital.com/p/X) to dismiss greenlink popup on mobile.\n - You can keep the swipe to dismiss, if you want, but this clever UI action should be supplemented by the simple X that people will automatically look for. I also usually need to try swiping a couple of times before I get it horizontal enough for the system, since a vertical swipe is merely a scroll. Don't make the minimum necessary UI action this complicated or sensitive, just include the grey X in the upper-right corner that people will automatically look for.\n- (2) Do something other than quietly making the domain of every page I create be 'Eliezer Yudkowsky' by default.\n - Suggested solution: There's a 'default' or 'blank' option for the domain in Settings, and if this is selected, the page belongs to the same domain as its (first) parent if I have editing rights over that domain, and otherwise it belongs to me.\n - Right now I seem to have a huge number of stranded pages. Every new page I create ends up stranded unless I remember to visit a Settings page I had no previous reason to visit, and manually change the domain every time.\n - There also doesn't seem to be any way to list all pages in a domain. So I have no idea what has been orphaned to Eliezer Yudkowsky until I just, e.g., randomly run across a sublens of [https://arbital.com/p/58b](https://arbital.com/p/58b) that happens to be in the Eliezer Yudkowsky domain instead of the Decision Theory domain.\n - I'd like it if you could apply this retroactively. 
There shouldn't be anything in the Eliezer Yudkowsky domain that isn't a child of the Eliezer Yudkowsky page.\n- (2) Simple paragraph editing.\n - Selecting (or clicking) within a paragraph, among other buttons shown, shows me a button that will pop up an editor for just that paragraph's underlying Markdown. This editor needs the greenlink button, but few other frills. It's okay if this editor only figures out the correct paragraph 80% of the time, so long as it doesn't destroy data.\n - The desired behavior is to identify some text associated with a selection, pop up only that text within something that permits editing it and saving the edit, and then substitute the edited text for whatever text was originally identified within the document.\n - If, when I try to save the changes, the original document has changed and the original paragraph text is no longer present in the document exactly as before, give me an error message when I try to save my changes (while preserving my new, edited text in the resulting window!) and tell me to edit the document in long form.\n - When I'm done saving and the popup goes away, if you reload the document, I should end up looking at the same paragraph I just edited. The whole point here is to let me edit the document without losing my place in reading it.\n - The 80/20 of this feature would be very valuable. It doesn't have to be perfect.\n - Letting me edit or create summaries, starting from the popup window for greenlinks, would likewise be helpful.\n- (3) A markdown marker that means 'When automatically generating the summary, show up until this point in the text'.\n - Sometimes when I'm editing a stub, I don't want to write a separate summary containing a lot of duplicated text, and I do want more than one line / paragraph to show in the summary. At present the system forces me to either write one huge paragraph, or duplicate all the text.\n- (3) Give me the ability to switch the preview on and off deliberately.\n - I sometimes want a preview even when I'm working on a narrow screen. I sometimes want the preview to go away after I happened to open up a large window.\n- (3) Allow \"arbital.com/search?whatever\" as a search query that will open a page of results for 'whatever'.\n - This will enable me to search Arbital from the location bar using a keyword. Opening up Arbital, then entering text into the search field, then clicking on a result, is surprisingly annoying when you're used to being able to search on things from the location bar.\n- (3) Be able to pop up summaries when moving over results from the Arbital search bar.\n - Sometimes the clickbait isn't enough for me to remember which page is about what.\n- (3) When I accidentally link to the wrong page, as in \"arbital.com/p/doesnotexist\", show me results as if \"doesnotexist\" had been typed into a search bar, rather than giving me a Web error.\n - This is important for trying to link things online, because otherwise I have to be very certain I'm remembering the page stub correctly.\n- (3) When I change a page's stub, e.g. 
from \"arbital.com/p/old\" to \"arbital.com/p/new\", while no new page has yet been created at \"arbital.com/p/old\", have links to \"arbital.com/p/old\" forward to \"arbital.com/p/new\" possibly with a notification that the link has changed.\n - This is important for changing page stubs after other links to them have been passed around anywhere.\n - Make sure /p/old keeps on forwarding if somebody changes /p/new to /p/newnew.\n - Do not try to prevent anyone from claiming /p/old.\n- (4) Changes to alternative text using \\%if: Potentially visible switches, some knowledges set by default.\n - I'm still trying to figure out how to use the potential for alternative text. Since users don't want to explicitly mark what they understand, this implies some changes required for the feature to again be useful.\n - Switched text is visible by default. E.g. there's a little switch off to the left whitespace, and when you hover on it (or click, in mobile) it pops up something saying, \"You're seeing this version because Arbital thinks you know / don't know / want to know / don't want to know about -Requisite-.\" If the user then changes the switch, they see a further checkbox, unchecked by default, saying \"Always show text as if I -knowledge-state- -Requisite-.\"\n - Give me a setting to override this visible switch, e.g. if I want to abuse Arbital to write \"choose your own adventure\" stories.\n - Let me set some pages (or better yet, particular paragraphs) to set a requisite to known after the user *finishes* reading that part, in the sense of having scrolled to the bottom of the main text (or better yet, having scrolled over and past that particular paragraph); *if* the user hasn't yet manually set that requisite's state.\n - This lets me assume by default that some things have been taught to the reader and condition other text accordingly, while still letting the reader switch things back if the default guess is wrong.\n- (4) Anonymous requisites, maybe just with titles and one-line summary-clickbait, createable from inside the editor with a couple of clicks. I'd use a lot more of these if I didn't need to Create A Page every time I wanted to introduce a conditional dependency.\n- (4) Anonymous probability and approval bars.", "date_published": "2017-03-03T22:00:26Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A place for Eliezer to note down his current list of personally-wanted features for editing and writing."], "tags": [], "alias": "806"} {"id": "d8b8350ca1b245c364a032024f31cf54", "title": "A quick econ FAQ for AI/ML folks concerned about technological unemployment", "url": "https://arbital.com/p/aiml_econ_faq", "source": "arbital", "source_type": "text", "text": "This is a FAQ aimed at a very rapid introduction to key standard economic concepts to professionals in AI and machine learning who have become concerned with the potential economic impacts of their work in the field.\n\nIt takes a strong intuitive understanding of \"comparative advantage\", \"aggregate demand\", and several other core concepts, to talk sensibly about a declining labor force participation rate and what ought to be done about it. There are things economists disagree about, and things that economists *don't* disagree about, and the latter set of concepts are indispensable for talking sensibly about technological unemployment. 
I won't belabor this point any further.\n\nAn extremely compressed summary of some of the concepts introduced here:\n\n- **Comparative advantage** says that even if Alice is less efficient (less productive) at all of her tasks than is Bob, Alice and Bob still both benefit from being able to trade with one another.\n - If Alice is *relatively* twice as good at growing apples than she is at growing oranges, and Bob is relatively twice as good at growing oranges than growing apples, Alice and Bob can both benefit by trading her apples for his oranges, even if Bob can grow 10 times as many apples per unit of labor than Alice.\n - If Wisconsin is producing cheese and trading it to Ohio, and then Michigan becomes much better at producing cheese, this can harm the economy of Wisconsin. It should *not* be possible for *Wisconsin* to be harmed by trading with Michigan unless something weird is going on.\n- The **lump of labour fallacy** is the standard term for the (appealing but incorrect) reasoning which says that if you automate away 95% of the jobs, only 5% of the people remain employed.\n - Humanity did in fact automate away 95% of all labor during the industrial revolution - 98% of the population used to be farmers, and now it's 3% - and yet more than 5% of the people still have jobs.\n - The standard explanation for why we aren't all already unemployed revolves around **complementary inputs:** if each sausage-in-a-bun requires 1 sausage and 1 bun, and it previously took 20 units of labor to make 10 sausages and 10 units of labor to make 10 sausage buns, and then sausage-making productivity doubles, the classically predicted new equilibrium is 15 sausages in 15 buns.\n- The **broken window fallacy** is the standard term for the (appealing but incorrect) reasoning which says that if you heave a rock through somebody's window, they have to pay the glazier to repair the window, and the glazier buys bread from the baker, and so production thrives and everybody is better off.\n - Under normal conditions the economy should be bounded by the available supply of glass and similar goods, and destroying goods in one place diverts them from somewhere else. Modern economies *can* get into wedged states where glaziers are sitting idle; and the economy is then bounded by a different quantity, the amount of flowing money available to animate trades, or **aggregate demand.** Heaving a rock through a window doesn't increase this quantity either, but there's one weird trick that can, and it works in both theory and practice.\n- When a good has a relatively low-sloped **supply curve** (function for supply versus price), subsidizing that good will tend to increase its price much more than the amount of that good.\n - If you make it easier for people to take out loans to go to universities, but it's not very easy to build lots of new universities, the result is that people end up with a lot of student loan debt.\n - Only 350 orthodontists are allowed to graduate every year in the USA. These orthodontists can only make braces for a limited number of children. Nothing you do with insurance premiums and subsidies can cause more children to have braces than that, unless you are somehow causing the supply of orthodontists to not be flat with respect to price.\n- \"Inequality\" in a bad sense usually results from **rent extraction.** Under some conditions, extractible rents just go up a corresponding amount when people's incomes rise.\n - This is one potential thing that might go wrong with the basic income. 
It might possibly just cause the Ferguson Police Department to issue more tickets, unless you fix the Ferguson Police Department first.\n - Reading articles written by actual poor people in the US gives the strong impression that one's top priority for helping them ought to be outlawing towing fees.\n - Intellectual property protection is technically rent and can easily go out of control.\n - Economies of scale in dealing with regulators -- being able to afford lawyers -- can effectively protect big companies from competition and enable them to extract monopoly rents.\n- **Labor market flexibility** (aka the ability of people to move around and match up with available jobs) is a critical factor in employment. This can be hindered by e.g. restrictions on building more housing in cities with high labor demand.\n - We could equally view this as due to major employers continuing to situate new jobs in the cities with already-skyrocketing housing prices. (One observes that, often, these housing costs are relatively far less burdensome on the high-level decision makers who decide where to put the company.)\n- **Effective marginal tax rates** are how much benefit accrues to somebody who earns 1 more dollar at their current wage level. Not just due to income taxes and payroll taxes, but also due to phasing-out of benefits, it's possible for somebody who earns +$1 to be effectively +$0.20 better off, or even for this number to go negative.\n - Current estimates of effective marginal tax rates in low income brackets are usually in the range of 70% and on many 'cliffs' higher than 100%.\n- Alternatives to minimum wage include **wage subsidies.**\n - We can consider the minimum wage as a subsidy on wages, paid for by a marginal tax on employers who hire people at less than the minimum wage. If society thinks this subsidy is a good idea, it should almost certainly not be paid by that particular tax.\n- Alternatives to labor taxes include **land value tax** and **consumption taxes.**\n - If you tax labor, you'll discourage labor. Taxing the implicit rents on land doesn't cause there to be less land, so it's a less distortive tax.\n - If you expect AI to create a class of people who are having a hard time selling their labor, among the things you should do to not make those lives any worse is to tax (and regulate) their transactions as little as possible.\n- **Sticky prices** of labor, especially **downward sticky** wages, cause small amounts of deflation to do far more damage to economies than small amounts of inflation.\n - This is one reason why modern economics suggests trying to stabilize an economy at 2% inflation rather than 0% inflation; small amounts of inflation help prices adjust to new realities over time.\n - There's an up-and-coming view, not yet fully adopted, which rephrases the view as being about the total amount of flowing money, and says that people lose their jobs when there isn't enough flowing money to pay their wages. This view suggests that the best help monetary policy can give to employment is **targeting a price level** of **per-capita NGDP.**\n\nMany readers will also have ideas about:\n\n- Regulatory burdens on employers decreasing employment.\n- Regulatory capture by large corporations influencing regulators to make it harder for new entrants to compete with them.\n- Regulatory compliance costs generally destroying wealth.\n\nThere is not universal agreement on how large these problems are, nor whether they should be top priorities. 
But standard economics does not regard these ideas as *naive,* so I won't discuss them any further in this FAQ. My purpose here is to introduce relatively more technical concepts, that are more likely to be new to the reader, and that counterargue views widely regarded by professional economists as being misguided on technical grounds.\n\nThis document is currently incomplete, and the Table of Contents below shows what is currently covered. **Warning:** as of March 2017 this is a work in progress and has not yet been reviewed for accuracy by relevant specialists.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Comparative advantage, and the mercantilist fallacy\n\n**Ricardo's Law of Comparative Advantage** says that parties (individuals, groups, countries) can both benefit from trading with each other, even when one party has an **absolute advantage** over the other in every factor of production.\n\nSuppose:\n\n- Alice can produce 20 apples per unit of labor, and 10 oranges per unit of labor.\n- Bob can produce 50 apples per unit of labor, and 100 oranges per unit of labor.\n- Alice and Bob both have 10 units of labor to use.\n\nIf Alice wants an equal number of apples and oranges, she's better off producing 200 apples with all of her labor, and then trading 100 of the apples to Bob for 100 oranges.\n\nBob in turn is better off producing 100 oranges with 1 unit of labor, and trading them to Alice for 100 apples, than Bob would be if he used 2 units of labor to produce 100 apples directly.\n\nEconomists usually regard this idea as refuting some widespread intuitions about **mercantilism,** which says that countries are better off the more they sell to other countries, and worse off the more they buy from other countries.\n\nAnother possibly-not-intuitive consequence of comparative advantage is that, except for *sticky prices* (see below), the exchange rates on a country's currency have little long-run effect on its trade.\n\nSuppose:\n\n- Germany buys 100,000 dollars from the US for 100,000 euros;\n- Germany then uses these 100,000 dollars to buy 100 air conditioners from the US;\n- People in the US use their 100,000 euros to buy 10 cars from Germany.\n\nOne day the European Central Bank decides to print enough new euros to halve the value of the currency. Then:\n\n- People in Germany buy 100,000 dollars from people in the US, using 200,000 euros.\n- People in Germany buy 100 air conditioners from the US for $1000 each.\n- People in the US buy 10 cars from Germany, each of which now costs 20,000 euros.\n\nRicardo's Law thus suggests that we can view the US as shipping 100 air conditioners to Germany in exchange for 10 cars, with the currency exchange rates being something of an epiphenomenon. (In practice this is modified by \"sticky prices\" in that some people already own euros or dollars, or bonds denominated in euros or dollars, and there are existing sales contracts that fix prices for a term. But effects like that tend to wash out over the longer run.)\n\nThe concept of comparative advantage is important for discussing technological unemployment because it conveys the general idea that when we draw a line around a group or region, like \"{Alice, Bob, and Carol}\" or \"Wisconsin\", this collective entity is not harmed by being able to trade with someone else who is more productive than them. (To say nothing of getting their *own* advanced technology.)\n\nThis isn't to say that technological advances can never hurt Wisconsin's economy. 
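\n\nTo make the Alice-and-Bob arithmetic above easy to check, here is a toy calculation -- a sketch added purely for illustration; the productivities and the 100-apples-for-100-oranges trade are the ones already given in the text:\n\n```python\n# Toy check of the comparative-advantage example (illustrative only).\nLABOR = 10                              # units of labor each person has\nalice = {'apples': 20, 'oranges': 10}   # output per unit of labor\nbob = {'apples': 50, 'oranges': 100}\n\n# Alice in autarky, wanting equal numbers of apples and oranges:\n# 20 * x = 10 * (LABOR - x)  ->  x = LABOR / 3 units of labor on apples\nx = LABOR / 3\nalice_alone = alice['apples'] * x            # ~66.7 of each fruit\n\n# Alice specializing in apples, then trading 100 apples for 100 oranges:\nalice_trade = alice['apples'] * LABOR - 100  # 100 apples left, plus 100 oranges\n\n# Bob's labor cost of 100 apples: make them himself vs. grow oranges and trade.\nbob_direct_labor = 100 / bob['apples']       # 2 units of labor\nbob_trade_labor = 100 / bob['oranges']       # 1 unit of labor\n\nprint(round(alice_alone, 1), alice_trade)    # 66.7 vs 100 of each fruit\nprint(bob_direct_labor, bob_trade_labor)     # 2.0 vs 1.0 units of labor\n```\n\n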
Maybe Wisconsin is producing cheese and trading it to Ohio in exchange for Ohio's coal. One day, Michigan becomes much more efficient than Wisconsin at producing cheese, causing Ohio to start trading with Michigan instead. This can hurt Wisconsin's economy, quite a lot in fact.\n\nThe conclusion from Ricardo's Law is rather that Wisconsin can't be harmed by the fact of Wisconsin *itself* being allowed to trade with Michigan. What hurts Wisconsin in the above scenario is that *Ohio* is allowed to trade with Michigan. Wisconsin can only make itself even worse off by choosing to cut off its own trade with Michigan.\n\nIf Wisconsin closes its borders and tries to get along in its own little world that doesn't include those awful new cheese-making machines, Ricardo's Law says that Wisconsin as a whole *should* become worse-off thereby. If this stops being true, then some unusual phenomenon must be at work, one that violates the broad axioms under which Ricardo's Law is a theorem.\n\nAn intuition I'll be trying to pump throughout this FAQ is that the state of affairs over most of human history, in which technological automation made most people better off without causing them to be permanently unemployed, is a very normal state of affairs from the standpoint of economic theory. If we're currently looking at lasting unemployment caused by automation, this is a *surprising* state of affairs.\n\nAnd: Even if something has changed and something weird is going on, the reason for it won't be that Ricardo's Law suddenly stopped being a theorem. Whatever the truth turns out to be, Ricardo's Law will still be a relatively better lens than mercantilism through which to understand it.\n\n# Complementary inputs, and the lump of labour fallacy\n\nBroadly speaking, the \"**lump of labour fallacy**\" is reasoning as if there's a limited amount of work to be done in the world, and a limited number of jobs to go around, and every time one of those jobs is automated away, one more person ends up unemployed.\n\nSuppose that:\n\n- People like to eat sausages in buns.\n- Each sausage-in-a-bun requires 1 sausage and 1 bun as an input, and a trivial amount of labor to assemble.\n- At present, it requires 1 unit of labor to make a bun, and 2 units of labor to make a sausage.\n- The resulting equilibrium involves 10 sausages-in-a-bun being manufactured by 30 people expending 30 units of labor.\n\nSo then the critical question becomes: if we invent a new kind of sausage-making machine that doubles productivity and makes it possible to produce 2 sausages using 2 units of labor, do we...\n\n- End up with 10-sausages-in-a-bun produced by 20 people, and 10 unemployed former sausage makers?\n- End up with 15-sausages-in-a-bun produced by 30 people?\n\nEnding up in the second equilibrium, maybe not right away, is widely regarded as the core idea behind how we could start with a world where 98% of the population were farmers, improving agricultural productivity by a factor of 100 between then and now, and end up with a world in which 3% of the people are farmers and the other 95% of the population are *not* unemployed.\n\nOne way of looking at this is that, over the course of human history so far, *everything* has behaved like it is complementary. When agricultural productivity skyrocketed, a bunch of people went into making shoes and clothes and tables and houses. The end result was that more people had food, more people had more nice clothes and furniture, and *not* that most of the population ended up unemployed. 
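\n\nAnd here is the sausage-and-bun arithmetic from the lump-of-labour example above, as a quick check -- again a toy calculation added for illustration, using the labor costs already given in the text:\n\n```python\n# Toy check of the complementary-inputs example (illustrative only).\nTOTAL_LABOR = 30\n\ndef equilibrium(labor_per_sausage, labor_per_bun):\n    # Each sausage-in-a-bun needs 1 sausage and 1 bun, so producing n of them\n    # uses n * (labor_per_sausage + labor_per_bun) units of labor in total.\n    return TOTAL_LABOR / (labor_per_sausage + labor_per_bun)\n\nprint(equilibrium(2, 1))  # before: 10.0 sausages-in-buns from 30 units of labor\nprint(equilibrium(1, 1))  # after doubling sausage productivity: 15.0, not 10 plus unemployment\n```\n\n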
The \"food\" input to a human became cheaper, and labor went into making more of the \"clothes\" and \"furniture\" inputs to humans. Then again later, when manufacturing productivity skyrocketed, more people went into services.\n\nNow it could be that this phenomenon is now breaking down for any number of possible reasons. But among those reasons, it seems relatively implausible that we have reached *the end of demand* and that all the people now have enough of every kind of good and service that they want. There are way too many poor and unhappy people still in the world for that to be true!\n\nOkay, so we haven't reached the end of human wants. But maybe some people in the modern world can no longer produce *anything* that other people want...?\n\nI'd like to encourage you to regard this idea with some surprise. Especially the notion that this \"unnecessariat\" exists today, before we can reliably build a robot that puts away the dishes from a dishwasher without breaking them. AI is making progress, but we haven't exactly reached par-human perception and motor skills, yet. Are there really people in the world who can do *nothing* that *anybody* else with money wants?\n\nHeck, why can't those people trade with each other? Why can't people in this supposed unnecessariat at least help each other, and get their own economy going, even if nobody else wants to trade with them? And doesn't Ricardo's Law say that this state of affairs ought to provide a floor on the group's minimum economic activity, since in theory the group can only benefit further by trading with the outside world...?\n\nWhy *isn't* it an economic theorem that there's never any unemployment?\n\nI would encourage you to stare at this argument and try to feel surprised that unemployment exists. It's not that theorems get to overrule reality; if a valid theorem fails to describe reality, that just says one of the axioms must be empirically false. But we do know that *more* must be going on in the existence of unemployment, than there being a limited lump of jobs and not enough jobs for all the people.\n\n- If you are a neoliberal, you are likely to think about minimum wages, payroll taxes, occupational licensing, and regulatory burdens. If the state imposes a minimum cost and minimum danger on hiring someone, people may not be able to produce anything that is worth the price and risk to their employer.\n- If you are Tyler Cowen, you will worry that some people in the labor pool are actively destructive; that lawsuits have made it harder for employers to use word-of-mouth to distinguish destructive people; and that people who have been unemployed for a while, end up in a global pool of people who on *average* produce \"zero marginal product\".\n- If you are of a more leftist persuasion, you may worry that the billionaires are running out of real wants, and that the people who *do* want things don't have enough money to buy them.\n- If you are a Georgist, you are liable to observe that \"unemployed\" people can't build houses for each other and can't farm food for each other, because the state has decided that these people own zero land; so they aren't allowed to get started anywhere.\n\nAnd so on. There are more than enough possible hypotheses to explain why unemployment can possibly exist. Still, it's worth noting that there was *less* lasting unemployment in, say, 1850. What changed between then and now... could be all the regulations saying you can't just go build somebody a house. Or it could be increased inequality. 
Or it could be housing prices preventing people from moving to California where there are jobs.\n\nBut it's relatively much stranger to postulate that the *only* problem is that we're running out of jobs in the global lump of labor as AI advances eat away at the lump.\n\nIt could indeed be that existing AI, and to a much larger degree, ordinary computer programming, has gone a long way towards devaluing many jobs that can be done without a college degree.\n\nLabor force participation is in fact dropping under current conditions, whatever those are. It's possible and maybe probable that, all else being equal, AI conquering more perceptual and motor tasks will make more people permanently unemployed.\n\nIt's reasonable for people in AI to feel concerned about this and want to make things better.\n\nBut it's unlikely that the issue is the end of human want, or the running-out of the lump of labor. The problem may not be that some people have *intrinsically* nothing left to contribute to any remaining human want; but rather, that the modern world has put obstacles in the way of people being allowed to contribute.\n\nRegardless: However you look at the situation, and whatever solution you propose, please don't talk about AI or trade or automation \"destroying jobs.\" This will cause any economists listening to scream in silent horror that they cannot give voice.\n\n# The broken window fallacy, and aggregate demand\n\n## Surprisingly, recessions exist\n\nIn theory, when we double the productivity of sausage-makers, we should get 15 sausages-in-buns instead of 10. We should not get 10 unemployed sausage-makers; or, if we temporarily get 10 unemployed sausage-makers, that state of affairs shouldn't last longer than it takes sausage-makers to become willing to look for work at the booming bun factories with HELP WANTED signs plastered all over their windows.\n\nAnd yet there seem to be these occasions, like the \"Great Depression\" or the \"Great Recession\", when there are silent factories and people standing idle, or the modern equivalent. The unemployed sausage-makers are willing to look for new jobs, but there aren't tons of HELP WANTED signs to let them get started on retraining. The bun factories are actually making fewer buns, because now the unemployed sausage-makers aren't buying sausages-in-a-bun anymore.\n\nOne will observe, however, that these historical occasions seem to come on suddenly, and *not* due to sudden brilliant inventions causing a huge jump in productivity. They happen at around the same time that there's trouble in the financial system--banks blowing up.\n\nFurthermore, the number of idle factories and amount of idle labor seems to be far in excess of what could reasonably be accounted for by sausage-makers needing to change jobs. And even people with what might seem like very useful skills, can't find anywhere to put them to use. None of the previous reasons we've considered, for why unemployment could possibly exist, seem to apply to how *that* much unemployment could exist in the Great Depression. And then a decade later all those people had jobs again, too; so it wasn't something intrinsic to the people or the jobs that caused them to be unemployed for so long.\n\nThat is weird and astonishing! 
If your brain doesn't currently think that is weird and astonishing, please try at least briefly to get yourself into the state of mind where it is.\n\n## The broken window fallacy\n\nAs my entry point into explaining this astonishing paradox, I'm going to take an unusual angle and start with the **broken window fallacy.**\n\nThe broken window fallacy was originally pointed out in 1850 by Frederic Bastiat, in a now-famous essay, \"[That Which Is Seen, And That Which Is Not Seen](http://bastiat.org/en/twisatwins.html)\" Bastiat begins with the parable of a child who has accidentally thrown a rock through somebody's window:\n\n> Have you ever witnessed the anger of the good shopkeeper, when his careless son happened to break a square of glass? If you have been present at such a scene, you will most assuredly bear witness to the fact, that every one of the spectators, were there even thirty of them, by common consent apparently, offered the unfortunate owner this invariable consolation — \"It is an ill wind that blows nobody good. Everybody must live, and what would become of the glaziers if panes of glass were never broken?\"\n\nAnd doesn't the glazier, in turn, spend money at the baker, and the shoemaker, and the baker and shoemaker spend money at the mill and tannery? Doesn't all of society end up benefiting from this broken window?\n\n> Now, this form of condolence contains an entire theory, which it will be well to show up in this simple case, seeing that it is precisely the same as that which, unhappily, regulates the greater part of our economical institutions.\n>\n> Suppose it cost six francs to repair the damage, and you say that the accident brings six francs to the glazier's trade — that it encourages that trade to the amount of six francs — I grant it; I have not a word to say against it; you reason justly. The glazier comes, performs his task, receives his six francs, rubs his hands, and, in his heart, blesses the careless child. All this is that which is seen.\n>\n> But if, on the other hand, you come to the conclusion, as is too often the case, that it is a good thing to break windows, that it causes money to circulate, and that the encouragement of industry in general will be the result of it, you will oblige me to call out, \"Stop there! your theory is confined to that which is seen; it takes no account of that which is not seen.\"\n>\n> It is not seen that as our shopkeeper has spent six francs upon one thing, he cannot spend them upon another. It is not seen that if he had not had a window to replace, he would, perhaps, have replaced his old shoes, or added another book to his library. In short, he would have employed his six francs in some way, which this accident has prevented.\n\nBastiat's point was widely accepted as persuasive; or at least, it was accepted by economists.\n\nLet us nonetheless notice that Bastiat's original essay is worded in a bit of an odd way. Why focus on the fact that when the shopkeeper has spent six francs in one place, he therefore can't spend six francs somewhere else? If the limiting factor is just money, couldn't we make all of society better off by adding more money?\n\nYou would ordinarily expect a healthy economy to be limited by its resources, by the available sand and heat for making glass. If the glazier replaces one window, he isn't replacing another. 
Or if some other seller provides the glazier with more sand and coal, to make more glass than he otherwise would've, then that's coal which isn't going to the smithy.\n\nOf course what Bastiat really meant is that sending resources to repair a broken window calls away those resources from somewhere else. This is *symbolized* by the six francs being spent in one place rather than another.\n\nThat's how a market economy works, after all: when goods flow in one direction, money flows in the other direction. Six francs flow from the shopkeeper to the glazier; a glass window travels in the opposite direction.\n\nSo Bastiat talks about a change in a flow of francs, as a shorthand for talking about what really matters, a corresponding change in the opposite flow of goods.\n\nNow ask: What if the economy is, somehow, Greatly Depressed? What if the glazier was standing idle until the window was broken, and isn't being called away from any other job? What if there's a ton of coal in an empty lot, waiting for someone to purchase it, and the reason nobody is purchasing it is that nobody seems to have any money?\n\nWell, in this case, it still doesn't help to go around breaking windows. And now Bastiat's original wording happens to *precisely* describe the resulting problem: the shopkeeper will spend six francs on the broken window, and therefore not spend six francs on something else.\n\nThe only way we can imagine that breaking a window will help this economy... is if the shopkeeper has a buried chest of silver; and when somebody throws a rock through his window, the shopkeeper spends some of this buried silver; and then the shopkeeper *doesn't* try later to top off this chest of buried silver again. Only in that hypothetical is there actually an additional six francs added to the economy, flowing to an otherwise idle glazier, who trades with a baker that wouldn't bake otherwise, who buys from a farmer who can afford to buy more fertilizer and grow more crops than before.\n\nWould you agree with that statement -- with the paragraph as written above? Standard economics says it is correct.\n\nBut this suggests an astonishing implication. It seems to say that we could just *print more money* and make the shopkeeper's town better off. Printing new money is pretty much the same as digging up money from a buried chest and not replacing it later... right?\n\n## More money, fewer problems?\n\n\"Now hold on,\" says the alert computer scientist, sitting bolt upright. \"I know that society conditions us to think of little rectangular pieces of colored paper as super-valuable. But dollar bills or euro bills or whatever are just *symbols* for actual goods and services that are actually valuable. Thinking you can create more goods and services by creating more money is like counting your fingers, getting an answer of 10, erasing the 10 and writing down 11, and thinking you'll end up with 11 fingers. Money isn't wealth, it's a claim on wealth, and if you counterfeit $100 in your bank account, you're just stealing wealth that someone else won't get. Any theory that says you can create more wealth by creating more money, no matter how complicated the argument, will in the end turn out to be missing some row of the spreadsheet that makes the totals add up to zero.\"\n\nAnd in a *healthy* economy, this would be correct. But remember, those idle factories are not supposed to exist in the first place. 
Is it so impossible that a weird condition that shouldn't exist, could be fixed by a weird tactic that shouldn't work?\n\n\"So what you're saying,\" says the skeptical computer scientist, \"is that the two impossibilities cancel out. Like the story about the physics student who derives $E = -mc^2,$ and who gets told to try next time to make an even number of mistakes.\"\n\nWell... yes, as it turns out. There's even an elegant explanation, which we'll get to, for *why* those two particular impossibilities are symmetrical and cancel each other out.\n\nFirst, let's consider a more ordinary exception to the rule that creating money can't help--the reason that money exists in the first place.\n\nSuppose that Alice, Bob, and Carol are as usual stuck on a deserted island. Alice is growing apples, Bob grows bananas, and Carol grows cucumbers. One day Alice wants a banana, Bob wants a cucumber, and Carol wants an apple; and they all go hungry because Alice doesn't have any cucumbers to trade to Bob for a banana. Also they don't have computers on their deserted island, so they can't do anything clever like write software to detect cycles in people's desired goods.\n\nIn this case the standard solution for this lack of computing power is... (drumroll) ...money!\n\nCreating symbolic money to add to this island, where there was no money before, can indeed make the three people on the island materially better off. Inventing money causes more trades to occur than previously; trading can make people materially better off.\n\nWhen an economy has idle factories and workers standing idle, it's in a state where more trades could occur, but aren't occurring.\n\nSo... adding more money to that economy, to animate more trades, isn't far from what happens when we add money to an island that doesn't have money?\n\nNow I have, indeed, pulled some wool over your eyes here. The macroeconomic theory is waaaaay more complicated than this, in much the same way modern neural nets are a tad more complicated than just the idea of gradient descent.\n\nTo give an idea of some of the complications we are blatantly skipping over: classical economics says that destroying money should not in fact slow down an economy composed of rational actors. In a rational world, if somebody sets fire to half of the money supply, everyone just halves their prices and life goes on.\n\nThe reason our world is *not* like that ideal world where the prices just go down, is partially that there are outstanding loans denominated in the current currency, and existing contracts denominated in the current currency. But *most* of the problem--according to the particular economic school I happen to subscribe to--is that people who set prices are *reluctant to lower them,* in a way that an economy full of perfectly rational actors wouldn't be.\n\nAnd this is the kind of statement that causes fistfights between economists.\n\nBut there *is* widespread agreement that, for whatever reason, we don't live in the classical world of perfectly rational actors.\n\nAnd that is--on the standard theory--why fractional reserve banks blowing up and destroying money and making fewer loans cause there to be a bunch of unemployed people and fewer jobs, with nobody able to afford to buy things because none of their own customers can afford to buy from them. In a modern economy, for every flow of goods, there's money flowing the other way.
So when you destroy money or slow down its flow, this can reduce the actual number of trades.\n\nWhich means that banks blowing up and thereby reducing the flow of money, can reduce the amount of flowing goods instead of prices dropping. We are pretty sure from observation that this does in fact happen.\n\nAfter we're done clipping gradients and initializing orthogonal weights and normalizing batches, in the end deep neural nets are *still* sliding down a loss gradient. Similarly, it really looks in theory and in historical observation like an economy with idle resources, especially *unemployed workers,* can be made materially wealthier by creating more money, which causes more trades to occur and brings the economy closer to operating at full capacity.\n\nHistorical observation also *seems* to bear this part out, although the near-impossibility of running real controlled experiments makes it hard to be sure. The most recent case you might have heard about was in 2013 in the United States, where the \"sequester\" was going to produce a sharp cut in government spending just as the US was starting to recover a little bit, not completely, from the Great Recession. Some people predicted the economy would tank again; and other people said it would be fine so long as the Federal Reserve created a correspondingly large amount of money, aka **monetary offset.** And the sequester happened, and the Federal Reserve did in fact print a bunch of money, and pretty much nothing happened to the economy. Amazing proof, right?\n\nOkay, so that's not a large number of bits of evidence by *your* standards, you spoiled brats who get to test neural nets on 127GB of training data. But it was in fact an experimental test of two different predictions, and the economists who thought in terms of flowing money were right; and in the end, that's all we have in this life.\n\n## A general glut?\n\nJohn Maynard Keynes -- I know some of you are shuddering at the name, but he was a respected economist of his day and he came up with a clever illustration, so please bear with me -- John Maynard Keynes once came up with an illustration of what could possibly cause anything as weird as a recession, and how this could possibly be fixed by anything as weird as printing money.\n\nSuppose that on an island, Alice and Bob and Carol are growing apples and oranges and pomegranates. Furthermore, as in the Great Depression, it is not yet the case that everyone has enough to eat.\n\nIn this situation, could there be an excess supply or \"glut\" of apples?\n\nWell, if people are still willing to eat apples, then what would \"a glut of apples\" even mean?\n\nWe answer: Suppose Alice and Bob, who both grow apples, have such a large apple harvest that people on the island feel kind of saturated on apples, and now prefer to eat other, scarcer fruits *if available*. In this case we'd see the relative trading value of apples dropping, compared to other fruits. Maybe 1 apple used to be worth 1 orange, and now somebody would only trade 1 orange for 2 apples. This would signal Alice and Bob to shift more of their effort to growing oranges and pomegranates in the future.\n\nWe could perhaps call this a \"glut\" of apples, meaning a *relative* excess of supply of apples, versus the supply/demand balance of other fruits, compared to previously.\n\nObserve that to say that apples are now cheaper in terms of oranges, is equally to say that oranges have become more expensive in terms of apples. 
Then is it possible to have a glut of *every kind of fruit at once?*\n\nHow (asked Keynes) could that possibly be? So long as Alice and Bob and Carol aren't stuffed full, so long as they would still *prefer* to eat more fruit, how could that possibly be? You can't have *each* fruit being ultra-cheap as denominated in all the other fruits.\n\nThen what are we to think if we see Alice and Bob and Carol collectively cutting back on growing apples and oranges *and* pomegranates?\n\nHow is it possible for a whole economy to be suffering from a problem of \"excess supply,\" a situation where factories in general are going idle, without there being a corresponding surge of demand somewhere else?\n\nThe answer (said Keynes) is that the only way this \"general glut\" can happen, is if there is some additional, invisible good that people are pursuing more than apples and oranges *and* pomegranates. What we see is the value of all three fruits falling relative to the value of this invisible good, so that people produce less of all three of them. And the way that this can happen in real life (said Keynes) is if Alice and Bob and Carol have invented a fourth, invisible good called money.\n\nThis may sound weird and abstract, so consider this even simpler real-life example drawn from the overused case of the [Capitol Hill Babysitting Co-op](https://en.wikipedia.org/wiki/Capitol_Hill_Babysitting_Co-op#Cooperative_system_and_history), a baby-sitting club which created a currency that parents could trade among themselves to pay for babysitting. The parents in this system tried to keep a reserve of co-op scrip so they could be sure of getting babysitting on demand. In fact, it turned out, people wanted to keep more scrip than the co-op made available in total. Soon there was less and less actual babysitting going on as a result, and the co-op nearly died. Like a glut of apples that is symmetrically a boom in oranges, there was a glut of babysitting and a booming demand for babysitting tokens. Only the babysitting tokens by themselves, sitting around motionless, weren't really *helping* anything.\n\nIf you add in more goods, like oranges and apples and pomegranites, the overall situation is still the same: when the economy has spare productive capacity and yet everything seems to be slowing down at once, we can see the required symmetric boom as taking place in the demand for that final good of currency... which is quite harmful to the real economy, because currency doesn't *do* anything.\n\nIt's the weird nature of currency as a worthless symbolic good, that allows recessions and Depressions to happen in the first place. Which is why the spreadsheet can in fact balance, and the two symmetrical impossibilities can cancel out, when we try to fix a recession by creating more money!\n\n## Aggregate demand and monetary stimulus\n\nWhen economists talk about this sort of thing, they often use the phrase \"**aggregate demand**\".\n\nActual human wants are, if not infinite, then certainly far higher than the total amount of stuff produced by the world economy. We don't all have mansions, at least not yet.\n\nSo what \"aggregate demand\" really means is \"aggregate purchasing power\", which in turn means the total amount of flowing money trying to buy all the goods. 
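\n\nOne common way to formalize this notion of flowing money -- a sketch I'm adding for illustration, using the standard quantity-of-money identity $MV = PY$ with made-up numbers -- is that total spending equals the money supply times how often each unit of money changes hands. If prices are sticky and that flow shrinks, the shortfall shows up as fewer real trades rather than lower prices:\n\n```python\n# Toy illustration of an aggregate demand shortfall (made-up numbers).\n# Quantity-of-money identity: M * V = P * Y\n#   M = money supply, V = velocity (how often money changes hands)\n#   P = price level,  Y = real output (number of trades / goods sold)\nM, V, P = 1000.0, 5.0, 1.0\nY = M * V / P\nprint(Y)           # 5000.0 units of real output at full flow\n\n# A financial crisis destroys some money and people hold on to the rest harder:\nM_new, V_new = 800.0, 4.0\n\n# Flexible prices: P falls, real output is unchanged.\nP_flexible = M_new * V_new / Y\nprint(P_flexible)  # 0.64 -- everything is cheaper, same number of trades\n\n# Sticky prices: P stays at 1.0, so real trades fall instead.\nY_sticky = M_new * V_new / P\nprint(Y_sticky)    # 3200.0 -- fewer trades, idle factories, unemployed workers\n```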
\n\nThe standard economic view can then be stated thus: when the purely symbolic financial system blows up, and yet in the real world people lose their jobs and factories stop operating, what's happening is an \"**aggregate demand deficit**.\" There isn't enough purchasing power to buy all the stuff the economy could supply at full capacity.\n\nThe obvious reason this happens is that exploding banks (or nowadays, banks that get bailed out, but are less enthusiastic about making more loans afterwards) reduce the total \"**money supply.**\" There's less total symbolic money to flow through the trades.\n\nAnother reason aggregate demand decreases, is that people want to *hold on* to more money during a depression or a recession. They have an increased preference for larger numbers in their bank accounts, and are more reluctant to spend money. This decreases **monetary velocity.**\n\nWhen all this is starting to go wrong inside a country, conventional economics says that the central bank ought to immediately swing into action and create more money, aka \"**monetary stimulus.**\" Or rather, the central bank is supposed to create money and use it to buy government bonds or something, and then hold onto the bonds. Later on, when monetary velocity picks up, the central bank should sell the bonds and destroy the money it previously created, to avoid creating too much inflation.\n\nIf the central bank thinks it *can't* create enough money, then a widely advocated policy says that the country's government should instead sell treasury bonds, in exchange for money, and spend that money on... pretty much anything. This is not in fact a null action in monetary terms--we are not just removing taxes and spending them elsewhere, or so most economists think. The people who buy bonds in exchange for money have bonds afterwards instead of money; and this can also satisfy their desire to hold assets. Then the money goes out the other end of the government and into circulation, where it wasn't circulating before. This is \"**fiscal stimulus.**\"\n\nWhether it is ever a good idea to do \"fiscal stimulus\" is another one of those questions that cause economists to get into fistfights. I personally happen to side with the school that goes around saying, \"What do you mean, the central bank *can't create enough money* to stabilize the flow of currency and trade? Did you run out of ones and zeroes? Why are you asking the government to increase the national debt over this? Just do your darned job!\"\n\nI expect that this business of talking about how central banks are supposed to stabilize the national flow of money, is causing half of you to raise skeptical eyebrows and ask if maybe the whole thing would somehow stabilize itself if anyone could issue their own currency instead of it being a government monopoly.\n\nAnd the other half of you are trying to figure out some way to solve it using a blockchain.\n\nBut right now, nearly all countries have central banks. So long as that is in fact the way things are, conventional economic theory says central banks should try to stabilize the flow of money, and hence wages, and hence employment, by constantly tweaking the amount of money in circulation.\n\nIt's not agreed among economists which countries today might be suffering from too little aggregate demand, and working under capacity. The economists in my preferred school suspect that it is presently happening inside the European Union due to the European Central Bank being run by lunatics. 
Most economists think the United States is *not* currently bounded by an aggregate demand deficit, or running far under capacity.\n\nI myself feel a bit unsure about that. Sometimes it really seems like we could all be much happier if we all just simultaneously decided to be 10% richer. Or that, say, Arizona would benefit a *lot* from having its own currency to use on the side, until everyone there was working for everyone else. But I haven't paid enough attention to the specifics here, and you probably shouldn't listen to me.\n\nSo: If you're worried about technological unemployment due to AI advances, one of the obvious things that we should do along with any other measures we take -- that we really *really* need to do, all this abstract stuff has an enormous in-practice impact on the real economy -- is make sure that people are rich enough to buy things.\n\nIt is possible for an economy to end up in a state where most people feel poor, and can't afford to buy things from each other, and go around being sad and out of work... such that you could in fact cause everyone to perk up and trade with each other and be happy again, just by declaring that everyone has more money. (Basic income would just move money around; the idea here is that you can make things better if *everyone simultaneously* has more money.) If everyone thinks they're too poor to afford to hire one another, you can sometimes fix the problem by just having everyone be rich instead of poor. If, after great advances in automation, you saw lots of people standing around doing nothing, that would be the *first* thing we ought to try.\n\nI'm going to temporarily stop going on about this because there are other economic concepts I want to run through and I don't want to risk boring you about this one topic. We'll pick up this thread again in a later section about \"NGDP level targeting.\"\n\nFor now, I just want to observe that, compared to problems like reforming labor markets, it can be a lot simpler to just print more ones and zeroes at the central bank. And central banks are not entirely unwilling to hear about it. It's arguably the best choice for what's worth spending political capital to fix first.\n\n# Gently sloping supply curves, and skyrocketing prices\n\nBy now you might be starting to imagine a rosy post-automation scenario which seems *in theory* like it shouldn't be that hard to achieve.\n\nYou could look further ahead than I think we're actually going to get in practice -- not because I don't think AI can get that far, quite the opposite really, but still -- and imagine a world where robots are far more common, and human beings... are still trading with each other.\n\nA world where, if Alice knows how to grow food, and Bob knows how to build houses, then they're not both going around cold and hungry because agriculture and house-building can be automated.\n\nA world where we haven't wantonly *defined* ourselves into poverty; a world where robots exist, but we've decided to imagine enough pretend symbolic money into existence for us to afford them.\n\nA world where, to be honest, some of the dignity of work *has* been lost. Where the most common new jobs aren't as awesome as forging trucks out of molten steel with your bare hands. But people still want things, and other people do those things and get paid for them.\n\nA world where people have at least as much self-respect, where the economy is at least as vibrant, as it was in say 1960.
Because logically, any given group could always draw a line around themselves and go back to 1960s technology. And if that group then *removes* this imaginary line, starts trading with others, and adopts more technology, then life should only get better for them.\n\nBecause of Ricardo's Law of Comparative Advantage.\n\nRight?\n\nSo... there are a number of obvious difficulties that could completely blow up this rosy scenario.\n\nOne such class of difficulties is if there's any good X that people *really* need, and that stays extremely expensive in the face of automation--if it takes months and months of babysitting to afford one unit of X.\n\nLike, say, *housing, health care,* or *college.*\n\nYou can find various graphs showing that all of the increased productivity over the last 20 years has been sucked up by increasing housing prices. Or that all of the missing wages, as productivity rises and wages stay flat, can be accounted for by the cost to employers of health insurance. I can't recall seeing a graph like that for college costs and student debt, but it sure ain't cheap. If all of those graphs are true simultaneously (and they are), you can see why people might feel increasingly impoverished, even if their nominal wages in dollars are theoretically flat.\n\nAnd if those costs, or any other costs, went on rising while a hypothetical tidal wave of automation was crashing down, that could smash any utopian vision of people living peacefully on less labor in a less costly world.\n\nFor a brief but excellent overview of the *problem*, I would recommend Slate Star's \"[Considerations on Cost Disease](http://slatestarcodex.com/2017/02/09/considerations-on-cost-disease/)\". It turns out, for example, that the United States spends around \\$4000 per citizen on Medicaid and Medicare and insurance subsidies. \\$4000 is roughly what other developed countries pay for *universal* coverage. It's not that the United States is unwilling to pay for universal coverage, but that somehow healthcare costs far more in the United States. (Without any visibly improved medical outcomes, of course.)\n\nThere are various aspects of all this that economists will still get into fistfights over, so I can't just hand you a standard analysis and solution. But there's at least one important standard economic lens through which to view the problem.\n\n## Supply and demand are always equal\n\nIf we look at all the loaves of bread sold in a city on Monday, then, on that Monday, the number of loaves of bread sold, and the number of loaves bought, are equal to one another. Every time a loaf of bread is sold, there's one person selling the loaf and one person buying a loaf.\n\nSupply and demand are always equal.\n\nImagining supply not equaling demand is like imagining a collection of nonoverlapping line segments with an odd number of endpoints.\n\nOr at least, that's how economists define the words \"supply\" and \"demand\".\n\nSo what use is such a merely tautologous equation?\n\nDefining the terms that way lets economists talk about **supply curves** and **demand curves** as a function of prices. Or to be snootier, **supply functions** and **demand functions.** Since supply and demand are always equal, the point where the two curves *meet* tells us the price level.\n\n\"Hold on,\" you say. \"If supply equals demand by definition, then how would we talk about the old days of the Soviet Union, where there were 1000 people who wanted toilet paper and only 400 rolls of toilet paper? 
And it was illegal for the store to raise the set price of toilet paper? In this case, didn't demand just directly exceed supply?\"\n\nWhat you actually saw in Soviet supermarkets was long lines of people waiting outside before the store opened, trying to get to the toilet paper before it was all gone. In this case, the *time spent in line* is by definition part of the 'price' of a roll of toilet paper. No matter how the demand curve slopes--no matter how much people want toilet paper, and how reluctant they are to give up--the time required to stand in line will increase until 600 people give up and go home. And then the monetary price of the toilet paper, *plus* the time required to stand in line, is the \"price\" at which the demand function equilibrates with the supply function.\n\nThis system wasn't good for the Soviet Union because the time spent in line was burned, destroyed, in a way that produced no additional goods. When the price is allowed to go up under capitalism, the buyers may indeed spend more money; but just passing that money around doesn't *destroy* wealth the way that standing in line destroys time. (The flip side is that everyone ultimately has an equal supply of time and not everyone has an equal supply of money. Poorer people *can* sometimes benefit relative to non-poor people when part of a good's price is charged in time rather than money. Although of course sufficiently rich people will just hire someone else to stand in line.)\n\nIf you set up a cart selling \\$20 bills for \\$1 each, a line will form in front of the cart and extend until it's long enough to burn \\$18.99 worth of time, as priced by people who could otherwise earn the least amount of money per hour. You can't *actually* sell \\$20 bills, to anyone who wants them, at a price of \\$1, and have that be the real entire price.\n\nA similar dynamic plays out every year when the organizers of Burning Man once again try to defy the laws of mathematics as well as capitalism in order to sell 70,000 tickets at a fixed price of \\$500, into a market of way more than 70,000 people who are willing to pay significantly more than \\$500. They also try really hard to outlaw resale at higher prices; if Burning Man finds out about the resale, your tickets will be invalidated, and your car will be turned around at the last minute and told to go home. The resulting scramble... well, I've heard it suggested that, in this case, the real \"price\" of the ticket is pulling on your social connections in order to obtain Burning Man tickets for a properly nominal price of \\$500, via trading favors with people who were savvy enough to sign up for the sales lottery. Thus selecting socially savvy people to attend Burning Man. I've never gone there.\n\n## When subsidy is futile and will be assimilated\n\nImagine a city under siege, where there are only 3,000 loaves of bread to be had, and each loaf is selling for 2 golden crowns. Can the governor of the city ease the plight of the people by ordering the treasury to pay 1 golden crown per loaf? The people will still be hungry -- there's still only so much bread to go around -- but perhaps their financial sorrows can be eased?\n\nNo, in fact. The price of the 3,000 loaves of bread rises until all but 3,000 people drop out of the market for purchasing it.
If there are 3,000 people willing to pay 2 golden crowns for a loaf of bread, and the governor steps in and says the city will pay 1 golden crown of the price of each loaf, then there will be 3,000 people willing to pay 2 golden crowns of their own money plus the 1 golden crown subsidy. The price of each loaf rises to 3 crowns, and the 3,000 people buying it are 2 golden crowns out of pocket, the same as before.\n\nIn economitalk, you'd say that the supply was **inelastic:** this supply curve is entirely horizontal with respect to price. No matter how much more people pay (the x axis), the supply of bread stays the same (the y axis).\n\nThe demand curve does slope, in this example; it slopes downward as usual with increasing price, until the demand curve crosses the flat supply curve at a price point of 2 golden crowns.\n\nIf the governor then adds a subsidy of 1 golden crown to each purchase of bread, we are in effect *shifting the demand curve to the right* along the x-axis - the demand for bread at 2 golden crowns, is now the demand level we would have previously seen at a price of 1 golden crown. We have had a change of variables: Demand'(x) = Demand(x - 1 crown). Since the supply curve is flat, this just means that the two curves meet at a new price which is 1 golden crown greater than the old price.\n\nIn other words: It doesn't help any buyers to try to subsidize a good in inflexible supply. The only way subsidies can *possibly* help buyers *at all,* is if increasing the price of the good causes more of the good to exist.\n\nAnd by this I don't just mean that we check whether the supply of a good is allowed to increase, and if it can increase, then subsidies are allowed to help people. When you hand out subsidies evenhandedly to anyone who wants to buy X, *the only mechanical means* by which the people receiving the checks have any more money *at all* is through the medium of increasing the supply of X. Everything else is just a change of variables and shifting the demand curve to the right.\n\nThe idea that the person getting a check for \\$120 is better off because they now have \\$120 is entirely an illusion. The only mechanical means of transmission by which this person has more money in the bank, at the end of the day, is the extent to which the supply of X goes up; and their bank accounts will end up at a level that is a strict function of the supply of X, irrespective of the nominal amounts listed on the checks and whether the subsidy checks are taxed. To whatever extent the supply of X is not increasing, the act of placing money into people's hands has exactly as much effect on their bank accounts as praying to a golden statue of a twenty-dollar bill (stipulating arguendo that we live in a universe where praying to golden statues of things doesn't work).\n\nAnd yet you will observe that in all public political discourse that makes it onto TV, all the sober talking heads in business suits are talking as if by subsidizing people with \\$120 checks we are causing their bank accounts to go up by \\$120, rather than talking about how many new universities or doctors or houses the \\$120 checks will cause to exist.\n\nThis is genuinely crazy. This is not a Republican-economist point or a Democratic-economist point. I know of *no* framework of economics in which cutting everyone in the market the same subsidy check makes them have more money at the end, *except* insofar as supply happens to increase and their retained money is a strict function of supply.
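As a sanity check on the change-of-variables argument above, here is a minimal numerical sketch in Python. Everything in it is invented for illustration (a linear demand curve, a perfectly inelastic supply of 3,000 loaves, and an upward-sloping supply for comparison); it is not the essay's model, just one concrete instance of it:

```python
# Toy supply/demand solver for the bread-subsidy example above.
# All functional forms and numbers are invented for illustration.

def demand(price):
    """Loaves demanded at a given out-of-pocket price (in crowns)."""
    return max(0.0, 9000 - 3000 * price)        # 3,000 loaves demanded at 2 crowns

def inelastic_supply(price):
    """Besieged city: 3,000 loaves exist no matter what the price is."""
    return 3000.0

def elastic_supply(price):
    """Normal market: higher prices call forth more loaves."""
    return 2000.0 * price

def seller_price(supply, subsidy=0.0, lo=0.0, hi=100.0):
    """Bisect for the seller's price at which quantity supplied equals quantity
    demanded; a per-loaf subsidy means buyers only pay (price - subsidy)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if supply(mid) < demand(mid - subsidy):
            lo = mid            # excess demand: the price gets bid up
        else:
            hi = mid
    return (lo + hi) / 2

for name, supply in [("inelastic", inelastic_supply), ("elastic", elastic_supply)]:
    p0 = seller_price(supply)
    p1 = seller_price(supply, subsidy=1.0)
    print(f"{name:9} supply: seller price {p0:.2f} -> {p1:.2f} crowns, "
          f"buyers pay {p0:.2f} -> {p1 - 1.0:.2f}, "
          f"loaves sold {supply(p0):.0f} -> {supply(p1):.0f}")
```

With inelastic supply, the 1-crown subsidy moves the seller's price from 2 to 3 crowns while the buyers' out-of-pocket price and the number of loaves stay exactly where they were; with elastic supply, part of the subsidy really does reach buyers, precisely because the higher price calls more loaves into existence.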
That the sober talking heads on TV are talking otherwise is mass civilizational insanity. It's not like the conversation about global warming where at least *in principle* we could be wrong about the empirical effect of increasing carbon dioxide on future global temperatures. It's not even like evolutionary biology, where at least the enormous mountain of empirical evidence is complicated enough that you might need to spend a few days reading to understand. The public conversation about subsidies is patently illogical. It is as if the sober talking heads on TV in their business suits were having deep conversations about whether to pray to a golden statue of a \\$20 bill or a \\$100 bill. There *is no mechanical effect* on bank accounts of universal subsidy checks that is not entirely mediated by, and quantitatively the sole function of, the final supply level of goods.\n\n## Skyrocketing prices\n\nA price can only skyrocket when a gently sloping demand curve meets a gently sloping supply curve.\n\nIf the demand curve slopes sharply downward, this describes a state of affairs where, as the price of the good increases, lots of people *rapidly* become unwilling to pay and drop out of the market before the price has moved very far.\n\nIf the supply curve slopes sharply upward, the price can't increase very far. As the price goes up, the market is rapidly flooded with more sellers seeking to produce the good.\n\nBut if the city of San Francisco refuses to build more than a handful of apartments, and a large number of people in search of jobs *really* want to live in San Francisco and do *not* want to be driven out even as prices go up... then prices go *way* up. Until, however reluctantly, the least wealthy people are driven out.\n\n\"I already knew that,\" you say. Okay, but now consider health care and college.\n\nIn the United States there are, if I recall correctly, only 350 people allowed to become orthodontists every year. That is how many residents the orthodontic schools have decided to accept. Everyone else gets turned away.\n\nFor as long as that *deeply* entrenched system holds, it can accomplish literally nothing to try to subsidize orthodontics. You cannot cause more poor children to have braces by having the government offer to pay part of the costs of orthodontia. 350 orthodontists per year can only put braces on a limited number of children. No amount of shuffling money around or tweaking insurance regulations can allow more children to have braces than that. \n\nOnce upon a time after World War II, the US government passed a GI bill subsidizing college for military veterans. And in actual practical reality... a lot more people ended up going to college! So there must have been more colleges springing up, or existing colleges must have expanded to serve more students. That's the only possible way that more total people could have gone to college.\n\nLater, US society decided there still weren't enough people going to college. So the US government offered to subsidize insurance on student loans, in order to allow more students to get bank loans to pay for college. And what happened that time... is that US college prices skyrocketed. So now people are leaving college with a crippling, immiserating load of student debt, and if you don't like that, don't go to college.\n\nSo: the second time a subsidy was tried, there was some increased barrier or difficulty to starting a new college. Or at least, it was hard to start a new college that the new students actually wanted to go to. 
We can also infer that the demand function sloped down very slowly with increasing price: lots of people *really really* wanted to go to college. Or really really wanted to go to a college with an existing reputation, instead of a new for-profit college that nobody had heard about. And existing reputable colleges were unwilling or unable to expand. We know all this, because otherwise the price couldn't have skyrocketed.\n\nYou can't *effectively* subsidize housing unless higher housing prices can cause there to be more housing.\n\nYou can't effectively subsidize education unless higher tuitions can cause new attractive universities to spring up, or old reputable universities to expand.\n\nThere's literally nothing you can do to cause more people to have more healthcare by moving around insurance premiums, unless the resulting higher prices are causing more doctors and more hospitals to exist.\n\nAll this **cost disease** isn't a simple issue for our civilization, and economists don't agree on exactly what's happening, let alone what to do about it.\n\nOn the other hand, politicians almost always *only* talk about demand -- who purchases the good, how much they pay. They talk about supply not at all. They talk about mortgage tax deductions and federally insured mortgages; restrictions on building apartments, not so much. When costs go up, they talk about the need for higher subsidies instead of asking why supply isn't already expanding. \"Where are the new colleges?\" they don't cry. \"How do we have so few hospitals, with prices so high?\" they don't say. \"We must do something about the limited number of orthodontic residencies so that more children can have braces!\" you will never hear any legislator say. Which is, on standard economics, a recipe for prices that never stop going up.\n\nIf you were around for the 2008 Presidential election in the US, there was a case where some gasoline refineries had broken down, and thus the US was experiencing a gasoline shortage with correspondingly high prices.\n\nAnd there was floated a proposal to temporarily repeal some gasoline taxes...\n\n...which would have been a *pure gift* to existing gasoline refineries. The total supply of gasoline was price-unresponsive; it physically couldn't go up until new refineries were built or repaired. The situation was almost perfectly analogous to the city under siege in which there were only 3,000 loaves of bread and no more could be made.\n\nPretty much every academic economist in the United States, Republican and Democrat alike, agreed in unison: \"Lifting the gas tax will not change prices at the pump. It won't even cause gas stations to make more money. Literally the only people who benefit from shifting this demand curve to the right are the owners of gasoline refineries.\"\n\nPresidential candidate John McCain was in favor of temporarily lifting the gas tax. Presidential candidate Hillary Clinton was in favor of temporarily lifting the gas tax. In an extraordinary moment that stunned all of us who knew economics, Barack Obama came out *against* lifting the tax.\n\nOf course McCain and Clinton almost certainly knew why the measure would be futile. And they probably weren't in the pay of gas refinery owners either. What's going on, I think, is that political journalists *believe* that voters are too stupid to understand literally any abstraction whatsoever.
It doesn't matter whether or not journalists are correct about that, because so long as journalists *believe* that, they will report on any discussion of economics as a blunder in the political horse race. And politicians know that being reported on as having 'blundered' can be fatal. So McCain and Clinton didn't dare publicly oppose decreased gas taxes, even though they both knew it was stupid. Or something.\n\nI haven't the tiniest idea what to do about that.\n\nThe **cost disease** across housing, education, and above all, medicine, seems like it could all by itself smash an otherwise pleasant outcome where everything gets cheaper as a result of automation.\n\nI have no idea how anyone in AI or machine learning can do anything to help solve this.\n\nSomehow or other, the political equilibrium seems to naturally forbid any politician to mention, at all, what economics would suggest as the actual key elements of the problem.\n\nI have no good ideas on how to solve that either.\n\nWhat standard economics does say is that you *can't* solve a cost disease by having the government pay more or trying to move existing subsidies around. You can't help it with a startup that makes doing medical paperwork more efficient. To make costs come down, you need to either (a) make people stop wanting surgeons, bachelors' degrees, and bedrooms; or (b) you need to somehow make more of those things exist, in a world where supply is currently not increasing even as the prices are already skyrocketing.\n\n## Absorbed costs\n\nAt this point somebody usually points out that various studies are showing that insurance companies, or doctors, or whoever, are not making an excess profit.\n\nThere's a phenomenon here I don't fully understand, which looks to me something like this: when a good is in restricted supply and high demand, weeds start to grow on the supply chain.\n\nThis isn't just a financial phenomenon. In the US there was a recent controversy about whether medical residents, doctors-in-training, ought to be allowed to work 30-hour-shifts -- 30 hours on the job without a break -- or whether limiting them to 24-hour shifts would be safer. Trying to organize my mental understanding of this phenomenon, it seems to me that this could not happen if not for the fact that there is a huge oversupply of people who want to be doctors, compared to people who are allowed to become doctors. So if you impose this kind of horrible agony, the students don't flee, but stick around.\n\nIt's not that anyone is deliberately sitting down to think about how to torture students. It's not even that the medical school has a financial incentive to do it. My guess is that there's a noise, an entropy, a carelessness, that is present in organizations by default; and the only thing that *opposes* this entropy is if it threatens the organization's survival. So long as the organization can go on operating and making money *even if* it is torturing its medical residents, there just isn't enough counterforce to oppose the entropy that operates by default to make people's lives horrible.\n\nSimilarly with universities and the explosion of administrative costs. If we were thinking of the universities as intelligent beings trying to maximize their profits, they would be opposing administrative expansion regardless of the overall industry situation; every dollar paid to a needless administrator is a dollar out of their own pockets. But that's not actually how large organizations work; they have no such unified will. 
There's an entropic force that adds administrators and paperwork by default, and so long as the university's survival is not threatened by the present level of entropy, so long as the college goes on getting enough students and tuition and alumni donations to keep functioning, then there isn't the organizational will to oppose that entropy. Most individual people aren't absolutely driven to maximize their earnings and minimize their expenses, so long as they can afford their apartments and not lose their jobs. We should expect this tendency to be even greater for large organizations where no one executive suffers all the organizational inefficiencies as their personal out-of-pocket losses. Why should any particular person drive themselves mad trying to stop it, especially if other parts of the organization are unenthusiastic about supporting the effort? I know little about the empirics of this field, but what little of the literature I've read suggests that in practice, firms seem motivated to cut costs when they must do so to stay competitive -- to survive at all in the market -- more than they automatically do so out of an organizational will to save every possible dollar.\n\nFor whatever reason, whether or not the above story is anything like correct, there does seem to be some analogue of Parkinson's Law (\"work expands to fill the time available for its completion\") which says that in conditions of restricted supply and low competition between suppliers, costs and inconveniences and barnacles expand into the excess wiggle room so created, whether or not anyone profits thereby. Lots of would-be doctors want the medical residencies that are in restricted supply, so medical residencies become horrible, expanding into the wiggle room provided by that demand. If university tuitions are skyrocketing because of supply restrictions and subsidies, then administrative costs will expand into that slack. On my hypothesis this is because the background entropic forces no longer threaten organizational survival, and so nothing opposes them -- but for whatever reason, it certainly does seem to happen.\n\nWhich implies that the massive amounts of paperwork and administrative costs in the medical industry are not the cause of high prices for medicine, they are *caused by high prices for medicine.*\n\nRestricted supply and subsidy mean the prices *must* be high to equalize supply and demand. Since they have to be high, and especially since custom makes it look bad to just take out all that money as shareholder profit, there is a vast wiggle room into which will expand barnacles, hospital paperwork, insurance paperwork, high-cost secondary suppliers for goods and services, etcetera. On my hypothesis this is because when you are one of a restricted number of licensed suppliers and demand is high, these inefficiencies do not threaten your organizational survival and so they happen by default. But even if you don't buy that particular hypothesis for *why* barnacles expand to fill the wiggle room, they clearly do.\n\nSo there's no point in trying to fix the price of medicine by trying to eliminate all that inefficient paperwork. It will just grow back as barnacles somewhere else. It never caused the price increase in the first place.
There had to be a skyrocketing price to balance the subsidized demand with the restricted supply, and something automatically filled the wiggle room created by that price.\n\n# Rising rents\n\nFrom [a speech by Winston Churchill in 1909](http://www.landvaluetax.org/current-affairs-comment/winston-churchill-said-it-all-better-then-we-can.html):\n\n> Some years ago in London there was a toll bar on a bridge across the Thames, and all the working people who lived on the south side of the river had to pay a daily toll of one penny for going and returning from their work. The spectacle of these poor people thus mulcted of so large a proportion of their earnings offended the public conscience, and agitation was set on foot, municipal authorities were roused, and at the cost of the taxpayers, the bridge was freed and the toll removed. All those people who used the bridge were saved sixpence a week, but within a very short time rents on the south side of the river were found to have risen about sixpence a week, or the amount of the toll which had been remitted!\n\nOf course this outcome was an *inevitable* consequence of there being a limited amount of housing on the south side of the river. The landlords *couldn't* have refused to raise those rents--they could instead have introduced a new hidden price in the time required to apply for apartments, or the social capital and pull required to get the apartments, but supply and demand must equalize. If housing was relatively unrestricted, the higher rent might later induce the construction of more buildings on the same land; but that couldn't happen instantly when the bridge toll dropped.\n\n\"Rent\" has a number of different complicated definitions in economics, most of which are pretty much equivalent for our purposes. I googled around briefly and picked one that I liked:\n\n\"The essence of the conception of rent is the conception of a surplus earned by a particular part of a factor of production over and above the minimum sum necessary to induce it to do its work.\" (Joan Robinson.)\n\nLand doesn't need any inducement to go on existing and supporting buildings; the supply of land has nothing to do with the price of land. So any part of the price being paid to use a building, which derives just from the price of the land underneath the building, is \"rent\" in the economic sense. Whatever part of the cost is necessary to induce a janitor to keep the building clean, is not \"rent\" in the economic sense, even if it shows up in the monthly rent payment in the colloquial sense of rent.\n\nI expect most readers coming in will already have heard of rents and have an idea that rent-seeking is a public choice problem. For purposes of discussing what to do about automation-driven unemployment, especially analyzing notions like the basic income, we are not interested in rent as a generic public choice problem. We are worried about a particular kind of rent that increases to soak up all the benefit whenever we try to help people. Also from Churchill's speech:\n\n> In the parish of Southwark, about 350 pounds a year was given away in doles of bread by charitable people in connection with one of the churches. As a consequence of this charity, the competition for small houses and single-room tenements is so great that rents are considerably higher in the parish!\n\nNow imagine that instead of being given bread, they'd been given a basic income. 
The result wouldn't have been exactly the same--if everyone in the country was getting the basic income, competition for those particular houses would not have been as great. But you can see why this *particular* kind of rent increase is of particular concern.\n\n## The economic definition of a quantity of rent\n\nConsider taxi medallions in New York City, before and after Uber (illegally) busted their (legal) cartel. Now that Uber is around in 2017, there are around 650,000 rides per day (300,000 taxi rides and 350,000 Uber+Lyft) in NYC. In 2010 there were around 450,000 taxi rides per day. The difference between these numbers is due to a previous legal limit on the number of rides; only 13,605 taxi medallions were allowed to exist. These medallions cost on the order of \\$1 million and were owned by holding companies that extracted huge portions of the taxi fares from the actual taxi drivers.\n\nI expect you are probably already familiar with this overall situation, and I mention it just to exhibit the technical definition of rent.\n\nIn the non-medallion-constrained equilibrium, suppose that:\n\n- The supply-demand equalizing price is \\$5/ride.\n- The 20,000 people who are willing to accept the least payment to drive cars, become drivers; the price that is so low as to induce all but 20,000 drivers to leave the market is \\$5/ride.\n- The 650,000 people who are willing to pay the most for rides, become riders. The price which is so high as to induce all but 650,000 people to give up on taxi rides, is \\$5/ride.\n\nWe now restrict the number of taxi medallions to 13,605; an inelastic supply of taxi medallions is now required as a new factor of production for taxi rides. As a result:\n\n- The number of rides goes down.\n- The price of rides goes up.\n- The 13,605 people who will accept the least payment to drive cars, become drivers; the price so low as to induce all but 13,605 drivers to leave the market is \\$4/ride.\n- The 450,000 people who will pay the most for rides become riders. The price so high as to induce all but 450,000 riders to give up is \\$7/ride.\n\nThen (\\$7-\\$4=\\$3)/ride will go as **rent** to the owners of taxi medallions. This rent is derived from control of an inelastic bottleneck on the production of rides. To say that this supply is inelastic is to say that there would be no less of it, on the margins, if the price were marginally less; paying \\$2/ride instead of \\$3/ride to the medallion owners would not induce some medallions to give up on existing. It is in this sense that, by the definition of rent, the **rentier** is *on the margins* being paid to do nothing; whatever part of the money is \"rent\", is by definition not there to induce somebody to do more work and create greater supply.\n\nImagine that aliens magically agree to fund the New York state government, and New York repeals the state income tax.
The people in NYC become richer; they gain more purchasing power; they are willing to pay more for their goods; their demand curve is shifted to the right.\n\nIf taxi medallions exist:\n\n- The inelastic supply of rides stays the same.\n- The consumer price of rides goes up.\n- All of the resulting increase in price is captured by taxi-medallion owners, none by taxi drivers.\n\nIf taxi medallions *don't* exist:\n\n- The price of taxi rides increases, but not by nearly as much, because supply is elastic and can rise to meet the increased purchasing power.\n- The increase in price goes to the taxi drivers.\n- Taxi riders get more rides.\n\n### Rent-collectors don't always look like idle rich\n\nAlthough this essay is supposed to mostly *not* be about justice and morality, economic rents are an obvious flashpoint for concern about unfair deserts and social parasites. So I note parenthetically that rent passing through a person's hand doesn't mean that person corresponds to the stereotype of an idle rentier lying back and being lazy.\n\nIn many cases, the metaphorical taxi cab drivers also own the metaphorical medallions. In these cases part of their pay is the amount that they get as metaphorical taxi drivers for their elastic supply of labor, and part of their pay is the pure rent on controlling an inelastic supply of medallions. But all of it *looks to them* like they are being paid to do work.\n\nThis is what it is like to be an orthodontist in the USA when only 350 orthodontists are allowed to graduate per year. You are still running around all day putting on braces, but that is not where *most* of the money flowing into your office is really coming from. As an orthodontist you almost certainly won't have the slightest understanding of that; of course you deserve your salary, you work hard all day long!\n\nOne also observes that *somebody* has to be an orthodontist; somebody has to fill out the ranks of those 350 people and make the braces. An orthodontist is only personally guilty of socially destructive behavior if they personally make some choice that supports the limit on graduating 350 orthodontists per year.\n\nFurthermore: The people who actually collect the enormous cash prizes are often dead and buried by the time the present day comes around.\n\nMy landlord owns a house in Berkeley. It was an expensive house at the time he bought it; the possible increase in future rents was already baked into the price. He had to take out a loan to buy it. He has to make ongoing payments on those loans. His ownership of the piece of land his house is standing on, is idly generating free rents; but my landlord is not seeing any of that money. He does not get free profits and an easy life.\n\nIn turn, the bank that gave my landlord a loan is getting some real interest on the actual investment, and some amount of rent in virtue of it being one of a limited number of entities that are legally allowed to be a bank and make loans to people like my landlord. But somebody who owns *shares* in the bank did not get those shares for free. And so on.\n\nIf Berkeley repealed all the housing restrictions tomorrow, my landlord would be screwed. That's what happened to the owners of taxi medallions in New York City, many of whom took out loans to buy the medallions, when Uber and Lyft came along.\n\nWhen the original rent-seekers decades earlier talked bureaucrats into creating supply restrictions (for the good of the dear people who must be protected from bad suppliers, of course!) 
a new *necessary* factor of production was created. Decades later, people must take out loans to buy that factor of production; and so they make no excess profit. There are no twenty-dollar bills lying in the street; there is no easy way to become an idle rentier and never have to work again.\n\nObserve that these people now have a *strong* incentive to keep the supply restrictions in place.\n\nThis is technically termed \"**rent-seeking**.\" But that term sounds like we're talking about Elsevier grabbing all the academic journals and charging monopolistic rent for sitting back and doing nothing, where before journal subscriptions were cheap. Of course Elsevier is not unique; there are plenty of villainous rent-seekers trying to actively tighten supply restrictions, or building monopolistic fences around goods that are cheap to produce. But in practice, \"rent-seeking\" often also comes from people who aren't seeing any excess profit themselves and would be completely screwed over if the supply restriction were busted.\n\nAs it happens my landlord is a libertarian, pardon me, neoliberal. So far as I know, he has never personally voted to maintain a housing restriction. But I wouldn't see it as Elsevier-style mustache-twirling villainy if he did.\n\n## Monopolistic rents and the Ferguson Police Department\n\nBefore the city of Ferguson became a national flashpoint due to Michael Brown being shot by the local police department, Ferguson had [32,975 outstanding arrest warrants for nonviolent offenses, in a town of 21,000 residents](http://www.huffingtonpost.com/nathan-robinson/the-shocking-finding-from-the-doj-ferguson_b_6858388.html). (By comparison Boston, with 645,000 people, issued 2,300 criminal warrants.)\n\nBluntly, the residents of Ferguson were being treated as cattle and milked for fines.\n\nWhat happens if you try to give the residents of Ferguson a basic income?\n\nWell, if the citizens of Ferguson were previously being milked for around as much as they can output... why wouldn't the Ferguson Police Department just issue more warrants, once the citizens can stand some more milking?\n\nStepping back for a moment and considering some less charged technical definitions, \"**monopolistic rents**\" arise when a single price-setter controls *all* of some factor of production; as if a single cartel owned all the taxi medallions in NYC. Then they can collect even higher rents, in some cases, by setting the supply lower than the maximum that factor of production allows. Maybe if you set the price of a ride at \\$10, all but 300,000 taxi riders drop out of the market, who can be serviced by fewer taxi drivers, and those who remain are those willing to work for \\$3.50/ride. Then the total rent you collect is (\\$6.50 * 300,000), higher than the (\\$3 * 450,000) you would receive if all medallions were being used.\n\nThis *is* mustache-twirling villainy: the gains from trade on 150,000 taxi rides are being destroyed in a socially negative-sum game.\n\n(There are non-economists who imagine that this alone is the entire business of some evil force labeled \"capitalism\", but let's not go there.)\n\nElsevier collects *monopolistic rents* on its captive journals. 
There are other science journals in existence, but researchers tell the university that they need *particular* journals that Elsevier controls, and the university cannot substitute other journals instead.\n\nIf you've been following along with the rest of this document, you may previously have been wondering something like, \"According to this overall outlook on price, if \\$2,000/year is the supply-demand equalizing price on a journal, why blame Elsevier for charging that much?\" The answer is that the marginal cost of production for a journal is very low; if lots of people were competing to supply a particular journal, more universities would have that journal and the price would be much lower. Since Elsevier has total control of the supply of a good that costs them very little to produce, they can make the supply curve be anything they like; and so they really are solely to blame for the point where that supply curve intersects demand.\n\nFor purposes of talking about hypothetical productivity-driven unemployment and remedies like the basic income, monopolistic rents especially matter because *they can rise to consume any increase in productivity or purchasing power.* If you have total control of a necessary factor of production, if you can at will say \"no\" and shut down the entire trade, you can charge whatever tariff you like on that trade. You can try to capture as much money as the trade can stand without shutting down; take nearly all of the gains from trade for yourself. If the trade becomes more gainful, you can just grab more of the gains, and all the other traders are no better off.\n\nWe could try to shoehorn the Ferguson Police Department into this view, by saying that they have a monopoly on \"not being in jail\" and that this is a necessary factor of production for which they can demand any price they like. But we don't actually need an economic view of the Ferguson PD; the point I mean to convey is that the Ferguson Police Department can take whatever they want from you, and the more you can afford, the more they can take.\n\nNow consider what happens if there's *more than one* person who has sole control of one of your factors of production.\n\nIn this case, everyone predating on you faces a kind of commons problem. They all want you to survive to go on being milked, so they don't want the milking collective to take too much; but if they demand any less money themselves, that just leaves you with more money for a different milker to seize.\n\nHowever this situation resolves itself, it's *really* not going to help if you try to give the person being milked a basic income. It doesn't even help much to pry *one* of the milkers off their back, if there are at least two more poorly coordinated milkers remaining.\n\nI sometimes go around saying, \"Reading essays written by actual poor people in the US suggests that the *first* thing we could do to help them is to outlaw towing fees.\" But even this is just a gloss on an even worse situation: maybe it doesn't help to outlaw \\$100/day towing fees so long as that just makes court costs and late payment costs go up somewhere else.\n\nNow consider the present system of intellectual property rights.
In particular, patents.\n\nThe problem with this system is not just that the US patent office goes around issuing patents on \"a system that uses the letter 'e' in online communications\" or whatever.\n\nThe problem is that it creates *many* parties each of whom has the theoretical ability to individually say \"no\" to, and shut down the production of, any good or service complicated enough to depend on more than one patent.\n\nIf you do *not* have [compulsory patent licensing with court-set fees](https://en.wikipedia.org/wiki/Compulsory_license), then why should any one patent troll--or even the holder of a rare real patent--stop short of demanding the company's entire profit?\n\n## Bigger companies and a decline of competition\n\nEconomists generally expect there to be a story behind a monopoly rent: a **barrier to entry,** or a barrier to price competition, that prevents anybody else from strolling in and selling similar or substitutable goods more cheaply.\n\nA number of alarming indicators suggest that developed economies and the United States in particular are exhibiting something like stagnant, locked-up markets; fewer startups, fewer successful startups, more goods being sold into markets where there is less competition. This also goes along with other alarming indicators like people moving between states less often, but for now let's focus on the increasing lack of competition; those are the trends that most obviously threaten to keep prices high despite automation, or extract any increases in income.\n\nIf you are a libertarian, pardon me, neoliberal, you will loudly observe that quite often the *barrier to entry* behind decreased competition takes the form of a law, since merchants are quite good at crossing obstacles like mere mountains and rivers. Such barriers indeed commonly arise from local or national laws:\n\n- Making it illegal to sell any similar or substitutable good.\n- Making it illegal to lower the price on any similar or substitutable good.\n- Imposing high fixed costs or entry costs to potential competitors, in the form of:\n - The expense of regulatory compliance, barring the market to bright young entrepreneurs or anyone else without deep pockets;\n - The time and delay of regulatory compliance, barring the market to anyone without deep pockets to keep going through the long delay;\n - The risk of not receiving regulatory approval;\n - A litigious environment in which a corporation must be large enough to support a large legal staff in order to survive.\n \nThe libertarian will then observe the existence of a \"regulatory ratchet\" in which the volume of law and bureaucracy seems to only increase over time within a country (or, for that matter, an individual large corporation); and suggest that this will be a force for decreased competition.\n \nConventional economics says that this is all obviously qualitatively correct, and that only the quantitative degree to which it is *the key* force responsible for an increasing lack of competition and dynamic turnover could reasonably be disputed.\n\nOther classical forces producing large corporations are **economies of scale** and **network effects.** The reason regulatory burdens produce large corporations per se, and not just more expensive goods, is just the economies of scale and network effects in dealing with laws and regulators.\n\nAll fixed costs or up-front costs imply economies of scale, and advanced technology often has a fixed support cost or a large up-front cost.
Larger markets in which it's easier to sell to all the consumers, likewise imply \"economies of scale\" in this sense: you pay a one-time cost in time and effort to set up with Amazon, and then you get access to Amazon's whole market. Whoever has the cheapest price on a standardized good might capture nearly all of Amazon's market within a nationality, which is the **winner-take-all** special case of economies of scale. To the extent that more goods are supplied through Amazon and fewer through regional malls, that much concentration of the market will emerge without regulatory forces.\n\n(Legal monopolies a la patent rights and copyrights would be filed by libertarians under \"Whether or not you think those laws are good ideas, they are certainly instances of the government being responsible for the largeness of the corporation.\")\n\nThere are also other, weirder factors that might possibly be producing increasingly noncompetitive markets:\n\n- It's been suggested that the rise of index funds and other broad-based funds, such that shareholders in one company are usually shareholders in that company's competitors, is statistically implicated in a decreased enthusiasm for price competition between those companies.\n- Regulatory burdens often produce a duopoly or small-N-opoly instead of a monopoly. In these cases modern computing and networking seem to have enabled these few players to cooperate on what is, to them, a Prisoner's Dilemma, and do so without explicit price-fixing agreements that violate the letter of the law on antitrust. An airline with few competitors in a region, but more than zero competitors, will try raising its ticket prices a dollar overnight; the competitor then possibly raises their ticket prices a dollar; and if not, the original airline brings its price back down. This is apparently easier to do now that all the prices are online being updated every thirty seconds.\n- 'Confusopolies' such as the mattress industry (Scott Adams's term, I don't know if there's an official alternative) arise when a small number of market players simultaneously act to reduce information clarity for all customers. E.g. by requiring every mattress to have a different name and price inside every store, in order to diminish the ability of customers to compare alternatives and select the cheapest price for the same alternative. The few players tacitly cooperate on confusion, rather than engaging in more visibly illegal price collusion.\n- When an area of economic activity is already dominated by large companies for whatever reason, these large companies are often highly bureaucratic themselves. The purchasing departments of these large companies then act like their own little regulatory environments, or they prefer to deal with existing partners, or with big-name reputable suppliers, or you'd need a costly specialized sales team to glad-hand the purchasing department, etcetera. So bigness is in this sense contagious: big bureaucratic companies with highly bureaucratic purchasing departments, will thereby advantage big suppliers in that segment of the economy.\n\nEyeballing this landscape myself, it seems to me that there's a lot of force to the libertarian-pardon-me-neoliberal thesis which suggests that a pretty large amount of this non-competitiveness phenomenon would somehow go away if we could, e.g., diminish regulatory and litigatory burdens by a factor of 10 and reform intellectual property rights.
Arguendo, regulatory barriers to entry are what initially create an equilibrium where only one, two, or a small handful of companies sell the goods you need. Then once the market is already controlled by large companies, other factors like coordinated pricing can become a problem for consumers, *and* big corporate bureaucracies become a new barrier to entry for any related companies that need to interact with the big players.\n\nBut so far as principle goes, there are forces listed that have nothing obvious to do with the government, such as index fund shareholders. There are forces creating large corporations that aren't *just* about regulatory barriers, like economies of scale. I can't think of anything offhand I've read that presents a well-researched case about the quantitative extent to which all these forces are \"the problem\". Eyeballing the landscape, it seems to me that there are a *lot* more entrepreneurial dogs not barking which are silenced by a law or regulatory compliance burden or threat of litigation, than are being silenced by Google and Facebook being big enormous companies; but this kind of eyeballing is not a substitute for careful investigation.\n\nIf you hear anyone mentioning that not all inequality is bad, they are, hopefully, referring to the point that some inequality comes from manufacturing economies of scale that have nothing to do with regulators, and winner-take-all markets that are being won by lowest prices or best goods. At least in isolation, and not considering knock-on effects, these inequality-producing forces are positive-sum and generally beneficial. Other inequality is produced by monopolistic rent extraction that ultimately derives from regulatory barriers or other declines in price competition, which is a negative-sum phenomenon. The good reason to be alarmed by rising inequality is if it's coming from negative-sum rent extraction driven by decreasing price competition. The statistics on dynamism suggest that this may in fact be the case in an increasingly large slice of the economy.\n\nAnd again to restate the main point: this in-practice empirical trend toward more and more economic interactions being with larger and larger big corporations that stay around for longer and longer, is not just a problem because of inequality per se or a less dynamic society. It's a problem because it increases the extent to which (a) technological productivity increases are unlikely to decrease prices, and (b) any increased income is liable to be extracted in the form of higher prices elsewhere.\n\nAll the technology in San Francisco has not made computer programmers there *nearly* as much better off as one might naively expect from comparing their nominal salaries to Montana salaries. A huge class of non-computer-programmers ended up actively worse off than before Google came into their lives, *mostly* because of skyrocketing apartment rents. Today, increased productivity is being converted into gloom by housing restrictions; but tomorrow it could be decreasing competition and an increasing prevalence of effective monopolies and duopolies.\n\n# Labor markets failing to clear\n\nMarkets are said to **clear** when all the matched buyers and sellers who would be willing to trade at a mutually agreeable price, are actually trading.\n\nClearing a market doesn't happen automatically. Clearing a market that wasn't previously clearing can be a huge innovation and sometimes even a profitable one. 
Craigslist didn't literally *clear* the market for everyone willing in principle to sell old laptops or buy old laptops, nor did Ebay, but they moved the market closer to clearing and sparked a lot of trades that didn't happen before.\n\nIf I am shouting \"I'd love to sell an apple for 40 cents!\" and you are shouting \"I'd love to buy an apple for 40 cents!\" and the two of us are separated by a gaping chasm in the Earth that prevents us from ever meeting one another, we can say this apple market has **failed to clear.**\n\nOf course one could also say that we're merely unwilling to pay the non-monetary prices of climbing down and up the chasm. But at that rate, you might as well say that when it's illegal to sell apples for less than fifty cents, we're merely unwilling to pay the non-monetary price of risking jail. So to make the definition non-tautologous, and allow us to talk about markets sometimes *not* clearing, we shouldn't consider it mandatory to use all-embracing definitions of price.\n\nWhat about sales tax? If there's a city sales tax of two cents an apple, you are effectively buying the apple for 42 cents (from you) and I am effectively selling it for 40 cents (to me). We can't sell/buy for 41 cents, even if that's a mutually agreeable price. It might seem like smuggling libertarianism in sideways to declare that things like sales taxes are not allowed to count as a legitimate part of the price of an apple. Why not say that the apple comes packaged with the extra good of complying with local laws and supporting your city government? Someone has to pay for the police that prevent you from being outright murdered, so why engage in the fantasy of an entirely taxless environment etcetera etcetera.\n\nI'm not sure how academically standard the following stance is, but:\n\nIt seems to me that the notion of a 'clearing market' and what exactly counts as a 'price', is a flexible point of view; we can change the variable definitions and get a different but still reasonable answer. For example, there are people on Silk Road 2 who *are* willing to accept a risk of arrest as part of the price of buying and selling drugs. If you define prices as being allowed to include the risk of arrest, we can talk about how close Silk Road 2 comes to clearing that market. If we then tilt our head the other way and see the 'market' as the people who would buy and sell drugs at purely nominal prices if that were legal and carried no risk of arrest, Silk Road 2 isn't remotely close to clearing that market so defined.\n\nFor purposes of *considering technological unemployment* and what we will consider to be 'clearing the labor market', I think it makes a lot of sense to factor out literally everything from the prices that could possibly be factored out. Not just minimum wage laws, not just sales taxes, but even things like income taxes and associated paperwork. If I would trade an hour of babysitting for 10 apples, and you would sell me 10 apples for an hour of babysitting, but we aren't willing to trade if you need a business license and I have to work an extra half-hour because of income taxes, I think it makes sense to view this conceptually as a trade that could in principle clear, but isn't clearing. 
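To make that accounting concrete, here is a minimal toy sketch (the numbers are invented for illustration, not taken from anywhere) of how stacked wedges kill trades that both parties wanted:

```python
# Toy model of a market wedge (illustrative numbers only, not data).
# A trade clears only if the buyer-seller gap covers every wedge stacked
# between them: sales taxes, license fees, compliance time, and so on.

buyer_max = 0.41   # the most the buyer would pay for the apple, in dollars
seller_min = 0.40  # the least the seller would accept, in dollars

def clears(wedges):
    """True if some nominal price leaves the buyer paying no more than
    buyer_max while the seller keeps at least seller_min."""
    return buyer_max - seller_min >= sum(wedges)

print(clears([]))            # True  -- we trade somewhere around 40-41 cents
print(clears([0.02]))        # False -- a 2-cent sales tax eats the whole gap
print(clears([0.02, 0.05]))  # False -- add licensing/paperwork costs: still no trade
```

Which of those wedges are worth keeping is a separate question from whether they exist; the point of factoring out everything is only to make the lost trades visible, as the note below emphasizes.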
%%note: If you assume that we're considering markets clearing to always be good things (false: consider the market for nonconsensual sex), it would be an aggressive moral stance even from the standpoint of big-L Libertarianism for me to define a labor market as not clearing *because of income tax* and claim that this already establishes a problem that must be fixed by any means necessary.\n\nI however am not advocating this as a moral stance in which we assume it must be desirable for the market to completely clear and that every possible means to that end must be taken. I think that defining the clearing of the labor market in the most aggressive possible way, is a *conceptually useful* way to organize our thoughts about potential consequences of technological unemployment and policy responses. %%\n\nBecause: If the labor force participation rate is already dropping like a stone, and some people are making not-outright-stupid predictions of a huge new tidal wave of automation coming in, then you should be willing to consider a *lot* of options, cast a very wide net for policies to analyze. And this includes looking at possibilities like e.g. decreasing payroll taxes or sales taxes on humans who are having trouble trading their labor for apples. So it makes sense to take a step back and look at everything that contributes to or interferes with \"the labor market clearing\" in the broadest possible sense: literally everyone who'd be willing to trade babysitting and apples if there were literally no other obstacles in the way.\n\n## Labor mobility\n\nOne of the things that can prevent a labor market from clearing is if we are both willing to trade with one another, but you are at point A and I am at point B, and it is hard for either of us to move.\n\n### Once again, skyrocketing rents \n\nOkay, yes, I know, you're probably a little sick of hearing about it by now, but *once again:*\n\nIf I want to trade my babysitting for your apples, and you live in San Francisco, and I live in Montana, and I can't afford to move to San Francisco, we can view this as a labor market failing to clear.\n\nThis is conceptually a different problem from rents being extracted from the people who actually do live in San Francisco. 
We are looking here at an entirely different source of lost value, the trades that don't occur because people can't afford to live near San Francisco at all.\n\nI've seen estimates on \"how much would be gained in the US if people could afford to move to where all the new jobs are\" in the range of 5-10% of US GDP, and if anything I suspect that's a severe understatement; not to mention that these gains would flow disproportionately to people who are now disadvantaged.\n\n(No, really, those anti-housing laws are a *big damn problem.*)\n\n### Other problems that are extra-bad problems because they also inhibit labor mobility\n\n- Occupational licensing is extra bad because often your license in one state doesn't carry over to another state.\n- The lack of national markets in health insurance is extra bad, because you will need to redo your health insurance if you move to another state.\n- State benefits such as disability or Medicaid are not portable, meaning an enormous effective income hit from crossing state lines.\n\n### International labor mobility\n\nFor people whose circle of concern does not stop at national borders:\n\nEstimated economic gains if all your fellow Earth humans could easily move to anywhere on Earth where jobs for humans can be found: 50% of planetary GDP.\n\nAgain, pretty high on the list of points one ought to ponder if you otherwise expect mass disemployment all over Earth.\n\nThis is one place where there's an obvious way that the AI and machine learning community in particular can try to intervene to make things better: develop better automatic language translation. Build apps for those allegedly approaching augmented reality headsets that provide subtitles for speakers, or automatically translate visible text. This will make it easier for people to relocate across national boundaries in search of jobs, in the cases where that *is* legal; and some jobs and some labor can more easily move to whatever few countries make them simultaneously welcome. Governments are not the *only* forces that ever prevent people from doing things; problems and inconveniences like language barriers also count. %%note: Be warned: nationalists will take offense that you are making it easier to trade with non-national people much poorer than themselves. They will almost certainly see it in terms of you benefiting the rich corporations inside their country, at the expense of relatively poor people inside their country whom the rich corporations would otherwise need to trade with instead. My model of their mental model is not so much that they're evil or uncaring, as that their emotions do not believe on a core level that the extremely poor people being benefited actually exist. %% %%note: Be warned: many leftists do not believe emotionally that anyone can possibly be benefiting from a trade where they are still poor at the end and the corporation is still rich at the end. Clearly the poor people are being exploited since they are still poor at the end, so clearly a bad exploitative transaction is taking place and the poor people would be better off if this did not happen. 
It will appear to them that your automatic translators are facilitating these evil transactions.%%\n\n\n\n\n# Conclusions\n\nSuppose tomorrow the heads of Google and Facebook and Apple and Amazon and Microsoft and Tesla and Uber and YCombinator came to me and said, \"If we were to all act with one accord, is there anything we can do right now to make life better in the United States, without needing to massively reform the whole US government? You're not allowed to talk to us about anything else, just that one problem.\"\n\nThen I would reply:\n\n\"You should all get together and build a new city someplace with low rents, in a state with no state-level anti-housing laws; and try to organize an understanding and commitment among the new citizens of this city to not pass any laws against building as many skyscrapers as needed. You should move as much of your companies there as you can manage, and as much of the tech industry as you can persuade to follow you, all at the same time. You should put body cameras on the police officers, and not have Mafia-run towing companies with late fees; you should insure your poorer citizens for 7 days of Uber if their cars break down, and let nothing stand in the way of cheap babysitting. You should build a university there, with serious prestige because of all the prestigious researchers you will bring there; and make sure that university is ready to take in as many students as can possibly be managed, at tuitions not far from the real cost of teaching, even if that means not having giant LED boards in the athletic centers.\"\n\n\"*And,*\" I would continue, \"*you should locate this new city in a swing state,* the largest swing state you can find such that you think this new population will be able to threaten to tilt the vote there in election years. Then your faction will not have zero political power the way it does within California and New York where your votes are worthless, and you can actually start pushing on changes like NGDP level targeting that would help with all the *other* problems.\"\n\nAfter another moment's thought, I would add: \"Your first priority on the *state* level should be unclogging the supply lines on medicine within that state; get as close as you can to outright occupational delicensing. Push price transparency laws. Build nonprofit hospitals that can use nonprofit H1B visas to bring in doctors from India and the UK: some countries produce excellent doctors that can pass the occupational licensing filter, and have supply pipelines that are less clogged. Remember, in the end, everything you do adds up to nothing for the country as a whole if it all gets sucked up in a nationally increased cost of healthcare because you are using goods in limited national supply. Now can I *please* have a word with you about--\"\n\nHonestly, if you want a story about how the tech industry managed to screw over the rest of the United States, locating all the new jobs in a region with anti-housing laws would be *number one* on my list.\n\nWe can even frame a story about how staying in the high-rent regions was selfish, a case of bad inequality in action, a decision for which one might be held morally culpable. 
When I asked a friend why Google didn't set up a campus outside the Bay Area so it could employ all the programmers who don't want to live near Mountain View, my friend replied that no project manager would want their project to be located outside the Bay Area because then they would be too remote from the center of political power in Mountain View. (I was surprised that court politics dominated Google to that extent, but I checked with other Googler friends and they agreed.)\n\nAnd once you start looking at it from that angle, you realize that the people who decide Where The Company Shall Be Located, or where the company stays once it's established, are disproportionately people who can afford the rents. Sure, there are network effects to being in New York, there's some real benefit to the company of being there. But there are also many programmers and non-programmers who would be willing to work for less nominal money if they got to have non-cramped houses and sane commutes. Could the company grow faster that way, if it's not one of the companies that really *need* to be in New York? And I don't know the answer to that. But it's worth observing that the people who make that decision, regardless of what decision would benefit the *company* the most, are the people who personally benefit the most and suffer the least from being in New York. They personally live in apartments that aren't tiny, they personally get the city amenities, and they're not personally driving the 2-hour commutes.\n\nOf course if I'm *not* telling a moral story and indulging in some pleasant righteous indignation, then I'd personally chalk up the observed outcome to maybe 5% personal selfishness, tops. In reality it's probably more like 95% network effects where the startup needs to be in easy car range of the venture capitalists; followed by inertial effects where the company grows but it can't move away from the city because employees have already put down roots, or it's hired people with roots. Above all, it's the Nash equilibrium where you do worse by unilaterally moving out of the high-rent region, unless everyone else you're tangled up with moves at the same time.\n\nNonetheless, for whatever reason it happened, it *really* screwed over the rest of the USA.\n\nAnother way in which the poor states got screwed over is that they're in the same currency region as the rich states, and their wages are anchored to the national minimum wage. Flowing money is required to pay wages; and the national minimum wage means that wages in poor states require minimum amounts of flowing money to animate. When lots of money flows to the rich states, the Federal Reserve estimates that enough money is flowing in the country as a whole, and doesn't want to create any more because then they think there would be too much inflation. Then residents of poor states can't move to the rich states *or* print more dollars *or* decrease local wages so that less flowing money can animate each job. The European Union has similar problems with enough euros flowing in Germany and not enough euros flowing in Spain or Italy or Greece.\n\n## Can basic income defeat the mysterious poverty equilibrium?\n\nIf you'll give me a moment to depart the path of standard economics, I personally have a question regarding a certain bizarre situation, regarding which economics gives no standard answer so far as I know:\n\nWhy are there still poor people in developed countries?\n\nA thousand years ago, it used to be that 98% of the population were farmers. 
Between then and now, agricultural productivity went up by a factor of 100 in developed countries. If you told somebody back in 1017 CE that this would happen, they might naively imagine that there wouldn't be poor people any more. They might naively imagine that very few people, if any, would be forced to work hard from dawn to dusk. Now blink and see the paradox, the bizarre state of affairs: *Why was that prediction naive?*\n\n*Yes,* poor people in today's developed countries have nicer shoes. They die less often. They have 20 changes of worn clothing instead of 1. Some of their houses have television sets, which was a great luxury in the 1950s and something that nobody from 1017 CE could have had at any price. *They're still poor.* Nobody from 1017 would mistake them for being 2017's rich people after five minutes of conversation. They'd be amazed that poor people in 2017 have so much stuff, sure. They would also recognize the hunted, haunted looks, the debts bearing down, the desperate scrabble for work, the exhaustion and despair and the towing fees; and they would perceive that these were not the future's equivalent of a thriving, upright farmer with something to be proud of.\n\nIf a 100-fold increase in productivity did not manage to give almost everyone at least as much pride as a thriving farmer, can basic income be the last straw that breaks poverty's back? Before we can even begin to answer that, we'd need a good analysis of what the hell happened over the last thousand years that *didn't* eliminate poverty.\n\nTo my present state of personal knowledge, this looks like one of the giant inexplicable mysteries that has been staring us in the face for so long that people forget to be confused by it. By which I mean: consider how, until the early twentieth century, nobody said \"Wait, what the hell is syntax and how the hell do human children learn it and go around generating sentences?\" In principle, this incredibly deep scientific question could have been asked much earlier, far back in the nineteenth century, by some scientist in search of an important problem on which to found their career. *If* someone had earlier noticed what they didn't understand, and seen the incredible mystery staring them in the face, in the form of children walking around doing what everyone expected them to do and took for granted was the way that things had always been.\n\nWhy the hell doesn't a 100-fold increase in productivity eliminate lives of desperation, despair, exhaustion, hunted looks, hand-to-mouth living, and an unending fear of your car being towed?\n\nI think the existence of a class of people that can't defend themselves from more than one milker, is probably part of the answer. It doesn't seem like nearly a complete answer. I suspect there's also some kind of weird equilibrium in which societies feel freer to destroy more wealth as people would otherwise become richer. People in 1850 didn't give themselves modern levels of regulatory burdens.\n\nBut there is *some kind* of poverty equilibrium, with restoring forces powerful enough to defeat a 100-fold improvement in productivity. I am skeptical that, after the last thousand years, a basic income will *finally* be the force that defeats poverty once and for all, especially since we don't know why poverty shrugged off all the previous assaults. 
I wonder if you could actually give everyone in Niger a basic income and actually have everyone in Niger be better off, rather than the village chief and the national government competing for who can seize it first, and the land rents going up, and Monsanto charging more for seeds.\n\nOf course there are these lovely things called \"experiments\" that mean we can actually try things instead of just theorizing about them. And doing those with basic income still seems like a good idea. I'm just registering my worry that the restoring forces of the poverty equilibrium may not act instantly, and some of them may be dependent on regional rather than local income levels.\n\nSo you'd want to test giving the basic income to all the people in a region at once, not one person in a village (this part seems to be getting tested properly in some cases, yay).\n\nBut more importantly you'd want to watch out for the people seeming better-off at first, but then a little poorer, and then a little poorer, by the time the experiment ended 5 years later.", "date_published": "2017-08-18T04:00:51Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "83b"} {"id": "f8d1259cc2830e8206fb34ff544377ed", "title": "Corporations vs. superintelligences", "url": "https://arbital.com/p/corps_vs_si", "source": "arbital", "source_type": "text", "text": "It is sometimes suggested that corporations are relevant analogies for [superintelligences](https://arbital.com/p/41l). To evaluate this analogy without simply falling prey to the continuum fallacy, we need to consider which specific thresholds from the standard list of [advanced agent properties](https://arbital.com/p/2c) can reasonably be said to apply in full force to corporations. This suggests roughly the following picture:\n\n- Corporations generally exhibit [infrahuman, par-human, or high-human](https://arbital.com/p/7mt) levels of ability on non-heavily-parallel tasks. On cognitive tasks that parallelize well across massive numbers of humans being paid to work on them, corporations exhibit [superhuman](https://arbital.com/p/7mt) levels of ability compared to an individual human.\n - In order to try and grasp the overall performance boost from organizing into a corporation, consider a Microsoft-sized corporation trying to play Go in 2010. The corporation could potentially pick out its strongest player and so gain high-human performance, but would probably not play very far above that individual level, and so would not be able to defeat the individual world champion. Consider also the famous chess game of Kasparov vs. 
The World, which Kasparov ultimately won.\n - On massively parallel cognitive tasks, corporations exhibit strongly superhuman performance; the best passenger aircraft designable by Boeing seems likely to be far superior to the best passenger aircraft that could be designed by a single engineer at Boeing.\n- In virtue of being composed of humans, corporations have most of the advanced-agent properties that humans themselves do:\n - They can deploy **[general intelligence](https://arbital.com/p/7vh)** and **[cross-domain consequentialism](https://arbital.com/p/9h).**\n - They possess **[https://arbital.com/p/-3nf](https://arbital.com/p/-3nf)** and operate in the **[https://arbital.com/p/-78k](https://arbital.com/p/-78k).**\n - They can deploy **realistic psychological models** of humans and try to deceive them.\n- Also in virtue of being composed of humans, corporations are not in general **[Vingean-unpredictable,](https://arbital.com/p/1c0)** hence not systematically **[cognitively uncontainable.](https://arbital.com/p/9f)** Without constituent researchers who know secret phenomena of a domain, corporations are not **[strongly cognitively uncontainable.](https://arbital.com/p/2j)**\n- Corporations are not [epistemically efficient](https://arbital.com/p/6s) relative to humans, except perhaps in limited domains for the extremely few such that have deployed internal prediction markets with sufficiently high participation and subsidy. (The *stock prices* of large corporations are efficient, but the corporations aren't; often the stock price tanks after the corporation does something stupid.)\n- Corporations are not [instrumentally efficient.](https://arbital.com/p/6s) No currently known method exists for aggregating human strategic acumen into an instrumentally efficient conglomerate the way that prediction markets try to do for epistemic predictions about near-term testable events. It is often possible for a human to see a better strategy for accomplishing the corporation's pseudo-goals than the corporation is pursuing.\n- Corporations generally exhibit little interest in fundamental cognitive self-improvement, e.g. extremely few of them have deployed internal prediction markets (perhaps since the predictions of these internal prediction markets are often embarrassing to overconfident managers). Since corporate intelligence is almost entirely composed of humans, most of the basic algorithms running a corporation are not subject to improvement by the corporation. Attempts to do crude analogues of this tend to, e.g., bog down the entire corporation in bureaucracy and internal regulations, rather than resulting in genetic engineering of better executives or an [intelligence explosion](https://arbital.com/p/428).\n- Corporations have no basic speed advantage over their constituent humans, since speed does not parallelize.\n\nSometimes discussion of analogies between corporations and hostile superintelligences focuses on a purported misalignment with human values.\n\nAs mentioned above, corporations are *composed of* consequentialist agents, and can often deploy consequentialist reasoning to this extent. The humans inside the corporation are not all always pulling in the same direction, and this can lead to non-consequentialist behavior by the corporation considered as a whole; e.g. 
an executive may not maximize financial gain for the company out of fear of personal legal liability or just other life concerns.\n\nOn many occasions some corporations have acted psychopathically with respect to the outside world, e.g. tobacco companies. However, even tobacco companies are still composed entirely of humans who might balk at being e.g. [turned into paperclips](https://arbital.com/p/10h). It is possible to *imagine* circumstances under which a Board of Directors might wedge itself into pressing a button that turned everything including themselves into paperclips. However, acting in a unified way to pursue an interest of *the corporation* that is contrary to the non-financial personal interests of all executives *and* directors *and* employees *and* shareholders, does not well-characterize the behavior of most corporations under most circumstances.\n\nThe conditions for [the coherence theorems implying consistent expected utility maximization](https://arbital.com/p/7hh) are not met in corporations, as they are not met in the constituent humans. On the whole, the *strategic acumen* of big-picture corporate strategy seems to behave more like Go than like airplane design, and indeed corporations are usually strategically dumber than their smartest employee and often seem to be strategically dumber than their CEOs. Running down the list of [https://arbital.com/p/2vl](https://arbital.com/p/2vl) suggests that corporations exhibit some such behaviors sometimes, but not all of them nor all of the time. Corporations sometimes act like they wish to survive; but sometimes act like their executives are lazy in the face of competition. The directors and employees of the company will not go to literally any lengths to ensure the corporation's survival, or protect the corporation's (nonexistent) representation of its utility function, or converge their decision processes toward optimality (again consider the lack of internal prediction markets to aggregate epistemic capabilities on near-term resolvable events; and the lack of any known method for agglomerating human instrumental strategies into an efficient whole).\n\nCorporations exist in a strongly multipolar world; they operate in a context that includes other corporations of equal size, alliances of corporations of greater size, governments, an opinionated public, and many necessary trade partners, all of whom are composed of humans running at equal speed and of equal or greater intelligence and strategic acumen. Furthermore, many of the resulting compliance pressures are applied directly to the individual personal interests of the directors and managers of the corporation, i.e., the decision-making CEO might face individual legal sanction or public-opinion sanction independently of the corporation's expected average earnings. 
Even if the corporation did, e.g., successfully assassinate a rival's CEO, not all of the resulting benefits to the corporation would accrue to the individuals who had taken the greatest legal risks to run the project.\n\nPotential strong disanalogies to a [https://arbital.com/p/-10h](https://arbital.com/p/-10h) include the following:\n\n- A paperclip maximizer can get much stronger returns on cognitive investment and reinvestment owing to being able to optimize its own algorithms at a lower level of organization.\n- A paperclip maximizer can operate in much faster serial time.\n- A paperclip maximizer can scale single-brain algorithms (rather than hiring more humans to try to communicate with each other across verbal barriers, a paperclip maximizer can potentially solve problems that require one BIG brain using high internal bandwidth).\n- A paperclip maximizer can scale continuous, perfectly cooperative and coordinated copies of itself as more computational power becomes available.\n- Depending on the returns on cognitive investment, and the timescale on which it occurs, a paperclip maximizer undergoing an intelligence explosion can end up with a strong short-term intelligence lead on the nearest rival AI projects (e.g. because the times separating the different AI projects were measured on a human scale, with the second-leading project 2 months behind the leading project, and this time difference was amplified by many orders of magnitude by fast serial cognition once the leading AI became capable of it).\n- Strongly superhuman cognition potentially leads the paperclip maximizer to rapidly overcome initial material disadvantages.\n - E.g. a paperclip maximizer that can e.g. crack protein folding to develop its own biological organisms or bootstrap nanotechnology, or that develops superhuman psychological manipulation of humans, potentially acquires a strong positional advantage over all other players in the system and can ignore game-theoretic considerations (you don't have to play the Iterated Prisoner's Dilemma if you can simply disassemble the other agent and use their atoms for something else).\n- Strongly superhuman strategic acumen means the paperclip maximizer can potentially deploy tactics that literally no human has ever imagined.\n- Serially fast thinking and serially fast actions can take place faster than humans (or corporations) can react.\n- A paperclip maximizer is *actually* motivated to *literally* kill all opposition including all humans and turn everything within reach into paperclips.\n\nTo the extent one credits the dissimilarities above as relevant to whatever empirical question is at hand, arguing by analogy from corporations to superintelligences--especially under the banner of \"corporations *are* superintelligences!\"--would be an instance of the [noncentral fallacy](https://arbital.com/p/noncentral_fallacy) or [https://arbital.com/p/-reference_class_tennis](https://arbital.com/p/-reference_class_tennis). Using the analogy to argue that \"superintelligences are no more dangerous than corporations\" would be the \"precedented therefore harmless\" variation of the [https://arbital.com/p/-7nf](https://arbital.com/p/-7nf). 
Using the analogy to argue that \"corporations are the real danger,\" without having previously argued out that superintelligences are harmless or that superintelligences are sufficiently improbable, would be [https://arbital.com/p/-derailing](https://arbital.com/p/-derailing).", "date_published": "2017-03-25T05:41:36Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "83z"} {"id": "0bdf064dcdd928ef27c217370ee39c8d", "title": "Difficulty of AI alignment", "url": "https://arbital.com/p/alignment_difficulty", "source": "arbital", "source_type": "text", "text": "This page attempts to list basic propositions in computer science which, if they are true, would be ultimately responsible for rendering difficult the task of getting good outcomes from a [sufficiently advanced Artificial Intelligence](https://arbital.com/p/7g1).\n\n[https://arbital.com/p/auto-summary-to-here](https://arbital.com/p/auto-summary-to-here)\n\n# \"Difficulty.\"\n\nBy saying that these propositions would, if true, seem to imply \"difficulties\", we don't mean to imply that these problems are unsolvable. We could distinguish possible levels of \"difficulty\" as follows:\n\n- The problem is straightforwardly solvable, but must in fact be solved.\n- The problem is straightforwardly solvable if foreseen in advance, but does not *force* a general solution in its early manifestations--if the later problems have not been explicitly foreseen, early solutions may fail to generalize. Projects which are not exhibiting sufficient foresight may fail to future-proof for the problem, even though it is in some sense easy.\n- The problem seems solvable by applying added effort, but the need for this effort will contribute *substantial additional time or resource requirements* to the aligned version of the AGI project; implying that unsafe clones or similar projects would have an additional time advantage. E.g., computer operating systems can be made more secure, but it adds rather more than 5% to development time and requires people willing to take on a lot of little inconveniences instead of doing things the most convenient way. If there are enough manifested difficulties like this, and the sum of their severity is great enough, then...\n - If there is strongly believed to be a great and unavoidable resource requirement even for safety-careless AGI projects, then we have a worrisome situation in which coordination among the leading five AGI projects is required to avoid races to the bottom on safety, and arms-race scenarios where the leading projects don't trust each other are extremely bad.\n - If the probability seems great enough that \"A safety-careless AGI project can be executed using few enough resources, relative to every group in the world that might have those resources and a desire to develop AGI, that there would be dozens or hundreds of such projects\" then a sufficiently great [added development for AI alignment](https://arbital.com/p/7wl) *forces* [closed AI development scenarios](https://arbital.com/p/closed_is_cooperative). (Because open development would give projects that skipped all the safety an insuperable time advantage, and there would be enough such projects that getting all of them to behave is impossible. 
(Especially in any world where, like at present, there are billionaires with great command of computational resources who don't seem to understand [Orthogonality](https://arbital.com/p/1y).))\n- The problem seems like it should in principle have a straightforward solution, but it seems like there's a worrisome probability of screwing up along the way, meaning...\n - It requires substantial additional work and time to solve this problem reliably and know that we have solved it (see above), or\n - Feasible amounts of effort still leave a worrying residue of probability that the attempted solution contains a land mine.\n- The problem seems unsolvable using realistic amounts of effort, in which case aligned-AGI designs are constrained to avoid confronting it and we must find workarounds.\n- The problem seems like it ought to be solvable somehow, but we are not sure exactly how to solve it. This could imply that...\n - Novel research and perhaps genius is required to avoid this type of failure, even with the best of good intentions;\n - This might be a kind of conceptual problem that takes a long serial time to develop, and we should get started on it sooner;\n - We should start considering alternative design pathways that would work around or avoid the difficulty, in case the problem is not solved.", "date_published": "2017-05-25T15:09:06Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "8dh"} {"id": "5f44aea7d9f78035171f8f735a6d8cf8", "title": "Instrumental goals are almost-equally as tractable as terminal goals", "url": "https://arbital.com/p/instrumental_goals_equally_tractable", "source": "arbital", "source_type": "text", "text": "One counterargument to the Orthogonality Thesis asserts that agents with terminal preferences for goals like e.g. resource acquisition will always be much better at those goals than agents which merely try to acquire resources on the way to doing something else, like making paperclips. Therefore, by filtering on real-world competent agents, we filter out all agents which do not have terminal preferences for acquiring resources.\n\nA reply is that \"figuring out how to do $W_4$ on the way to $W_3$, on the way to $W_2$, on the way to $W_1$, without that particular way of doing $W_4$ stomping on your ability to later achieve $W_2$\" is such a ubiquitous idiom of cognition or supercognition that (a) any competent agent must already do that all the time, and (b) it doesn't seem like adding one more straightforward target $W_0$ to the end of the chain should usually result in greatly increased computational costs or greatly diminished ability to optimize $W_4$.\n\nE.g. contrast the necessary thoughts of a paperclip maximizer acquiring resources in order to turn them into paperclips, and an agent with a terminal goal of acquiring and hoarding resources.\n\nThe paperclip maximizer has a terminal utility function $U_0$ which counts the number of paperclips in the universe (or rather, paperclip-seconds in the universe's history). The paperclip maximizer then identifies a sequence of subgoals and sub-subgoals $W_1, W_2, W_3...W_N$ corresponding to increasingly fine-grained strategies for making paperclips, each of which is subject to the constraint that it doesn't stomp on the previous elements of the goal hierarchy. 
(For simplicity of exposition we temporarily pretend that each goal has only one subgoal rather than a family of conjunctive and disjunctive subgoals.)\n\nMore concretely, we can imagine that $W_1$ is \"get matter under my control (in a way that doesn't stop me from making paperclips with it)\", that is, if we were to consider the naive or unconditional description $W_1'$ \"get matter under my 'control' (whether or not I can make paperclips with it)\", we are here interested in a subset of states $W_1 \\subset W_1'$ such that $\\mathbb E[U_0|W_1]$ is high. Then $W_2$ might be \"explore the universe to find matter (in such a way that it doesn't interfere with bringing that matter under control or turning it into paperclips)\", $W_3$ might be \"build interstellar probes (in such a way that ...)\", and as we go further into the hierarchy we will find $W_{10}$ \"gather all the materials for an interstellar probe in one place (in such a way that ...)\", $W_{20}$ \"lay the next 20 sets of rails for transporting the titanium cart\", and $W_{25}$ \"move the left controller upward\".\n\nOf course by the time we're that deep in the hierarchy, any efficient planning algorithm is making some use of real independences where we can reason relatively myopically about how to lay train tracks without worrying very much about what the cart of titanium is being used for. (Provided that the strategies are constrained enough in domain to not include any strategies that stomp distant higher goals, e.g. the strategy \"build an independent superintelligence that just wants to lay train tracks\"; if the system were optimizing that broadly it would need to check distant consequences and condition on them.)\n\nThe reply would then be that, in general, any feat of superintelligence requires making a ton of big, medium-sized, and little strategies all converge on a single future state in virtue of all of those strategies having been selected sufficiently well to optimize the expectation $\\mathbb E[U|W_1,W_2,...]$ for some $U.$ A ton of little and medium-sized strategies must have all managed not to collide with each other or with larger big-picture considerations. If you can't do this much then you can't win a game of Go or build a factory or even walk across the room without your limbs tangling up.\n\nThen there doesn't seem to be any good reason to expect an agent which is instead optimizing directly the utility function $U_1$ which is \"acquire and hoard resources\" to do a very much better job of optimizing $W_{10}$ or $W_{25}.$ When $W_{25}$ already needs to be conditioned in such a way as to not stomp on all the higher goals $W_2, W_3, ...$ it just doesn't seem that much less constraining to target $U_1$ versus $U_0, W_1.$ Most of the cognitive labor in the sequence does not seem like it should be going into checking for $U_0$ at the end instead of checking for $U_1$ at the end. 
It should be going into, e.g., figuring out how to make any kind of interstellar probe and figuring out how to build factories.\n\nIt has not historically been the case that the most computationally efficient way to play chess is to have competing agents inside the chess algorithm trying to optimize different unconditional utility functions and bidding on the right to make moves in order to pursue their own local goal of \"protect the queen, regardless of other long-term consequences\" or \"control the center, regardless of other long-term consequences\". What we are actually trying to get is the chess move such that, conditioning on that chess move and the sort of future chess moves we are likely to make, our chance of winning is the highest. The best modern chess algorithms do their best to factor in anything that affects long-range consequences whenever they know about those consequences. The best chess algorithms don't try to factor things into lots of colliding unconditional urges, because sometimes that's not how \"the winning move\" factors. You can extremely often do better by doing a deeper consequentialist search that conditions multiple elements of your strategy on longer-term consequences in a way that prevents your moves from stepping on each other. It's not very much of an exaggeration to say that this is why humans with brains that can imagine long-term consequences are smarter than, say, armadillos.\n\nSometimes there are subtleties we don't have the computing power to notice; we can't literally actually condition on the future. But \"to make paperclips, acquire resources and use them to make paperclips\" versus \"to make paperclips, acquire resources regardless of whether they can be used to make paperclips\" is not subtle. We'd expect a superintelligence that was [efficient relative to humans](https://arbital.com/p/6s) to understand and correct at least those divergences between $W_1$ and $W_1'$ that a human could see, using at most the trivial amount of computing power represented by a human brain. To the extent that particular choices are being selected-on over a domain that is likely to include choices with huge long-range consequences, one expends the computing power to check and condition on the long-range consequences; but a supermajority of choices shouldn't require checks of this sort; and even choices about how to design train tracks that do require longer-range checks are not going to be very much more tractable depending on whether the distant top of the goal hierarchy is something like \"make paperclips\" or \"hoard resources\".\n\nEven supposing that there could be 5% more computational cost associated with checking instrumental strategies for stepping on \"promote fun-theoretic eudaimonia\", which might ubiquitously involve considerations like \"make sure none of the computational processes you use to do this are themselves sentient\", this doesn't mean you can't have competent agents that go ahead and spend 5% more computation. It's simply the correct choice to build subagents that expend 5% more computation to maintain coordination on achieving eudaimonia, rather than building subagents that expend 5% less computation to hoard resources and never give them back. It doesn't matter if the second kind of agents are less \"costly\" in some myopic sense, they are vastly less useful and indeed actively destructive. 
So nothing that is choosing so as to optimize its expectation of $U_0$ will build a subagent that generally optimizes its own expectation of $U_1.$", "date_published": "2021-08-18T21:33:39Z", "authors": ["mrkun", "Eric Bruylant", "Niplav Yushtun", "Eliezer Yudkowsky"], "summaries": ["One counterargument to the [Orthogonality Thesis](https://arbital.com/p/1y) asserts that agents with terminal preferences for goals like e.g. resource acquisition will always be much better at those goals than agents which merely try to acquire resources on the way to doing something else, like making paperclips. A reply is that any competent agent optimizing a utility function $U_0$ must have the ability to execute many subgoals and sub-subgoals $W_1, W_2, ... W_{25}$ that are all conditioned on arriving at the same future, and it is not especially easier to optimize $W_{25}$ if you promote $W_1$ to an unconditional terminal goal. E.g. it is not much harder to design a battery to power interstellar probes to gather resources to make paperclips than to design a battery to power interstellar probes to gather resources and hoard them."], "tags": ["B-Class"], "alias": "8v4"} {"id": "a5d393ab351fe654f0d3ce3682ed52ee", "title": "Separation from hyperexistential risk", "url": "https://arbital.com/p/hyperexistential_separation", "source": "arbital", "source_type": "text", "text": "A principle of [AI alignment](https://arbital.com/p/2v) that does not seem reducible to [other principles](https://arbital.com/p/7v8) is \"The AGI design should be widely separated in the design space from any design that would constitute a hyperexistential risk\". A hyperexistential risk is a \"fate worse than death\", that is, any AGI whose outcome is worse than quickly killing everyone and filling the universe with [paperclips](https://arbital.com/p/7ch).\n\nAs an example of this principle, suppose we could write a first-generation AGI which contained an explicit representation of our exact true value function $V,$ but where we were not in this thought experiment absolutely sure that we'd solved the problem of getting the AGI to align on that explicit representation of a utility function. This would violate the principle of hyperexistential separation, because an AGI that optimizes $V$ is near in the design space to one that optimizes $-V.$ Similarly, suppose we can align an AGI on $V$ but we're not certain we've built this AGI to be immune to decision-theoretic extortion. Then this AGI distinguishes the global minimum of $V$ as the most effective threat against it, which is something that could increase the probability of $V$-minimizing scenarios being realized.\n[https://arbital.com/p/auto-summary-to-here](https://arbital.com/p/auto-summary-to-here)\n\nThe concern here is a special case of [shalt-not backfire](https://arbital.com/p/) whereby identifying a negative outcome to the system moves us closer in the design space to realizing it.\n\nOne seemingly obvious patch to avoid disutility maximization might be to give the AGI a utility function $U = V + W$ where $W$ says that the absolute worst possible thing that can happen is for a piece of paper to have written on it the SHA256 hash of \"Nopenopenope\" plus 17. Then if, due to otherwise poor design permitting single-bit errors to have vast results, a cosmic ray flips the sign of the AGI's effective utility function, the AGI tiles the universe with pieces of paper like that; this is no worse than ordinary paperclips. 
Similarly, any extortion against the AGI would use such pieces of paper as a threat. $W$ then functions as a honeypot or distractor for disutility maximizers which prevents them from minimizing our own true utility.\n\nThis patch would not actually work because this is a rare special case of a utility function *[not](https://arbital.com/p/3r6)* being [reflectively consistent](https://arbital.com/p/2rb). By the same reasoning we use to add $W$ to the AI's utility function $U,$ we might expect the AGI to realize that the only thing causing this weird horrible event to happen would be that event's identification by its representation of $U,$ and thus the AGI would be motivated to delete its representation of $W$ from its successor's utility function.\n\nA patch to the patch might be to have $W$ single out a class of event which we didn't otherwise care about, but would otherwise happen at least once on its own over the otherwise expected history of the universe. If so, we'd need to weight $W$ relative to $V$ within $U$ such that $U$ still motivated expending only a small amount of effort on easily preventing the $W$-disvalued event, rather than all effort being spent on averting $W$ to the neglect of $V.$\n\nA deeper solution for an early-generation [Task AGI](https://arbital.com/p/6w) would be to *never try to explicitly represent complete human values,* especially the parts of $V$ that identify things we dislike more than death. If you avoid [impacts in general](https://arbital.com/p/2pf) except for operator-whitelisted impacts, then you would avoid negative impacts along the way, rather than the AI containing an explicit description of what is the worst sort of impact that needs to be avoided. In this case, the AGI just doesn't contain the information needed to compute states of the universe that we'd consider worse than death; flipping the sign of the utility function $U,$ or subtracting components from $U$ and then flipping the sign, doesn't identify any state we consider worse than paperclips. The AGI no longer *neighbors a hyperexistential risk in the design space;* there is no longer a short path we can take in the design space, by any simple negative miracle, to get from the AGI to a fate worse than death.\n\nSince hyperexistential catastrophes are narrow special cases (or at least it seems this way and we sure hope so), we can avoid them much more widely than ordinary existential risks. A Task AGI powerful enough to do anything [pivotal](https://arbital.com/p/6y) seems unavoidably very close in the design space to something that would destroy the world if we took out all the internal limiters. By the act of having something powerful enough to destroy the world lying around, we are closely neighboring the destruction of the world within an obvious metric on possibilities. Anything powerful enough to save the world can be transformed by a simple negative miracle into something that (merely) destroys it.\n\nBut we don't fret terribly about how a calculator that can add 17 + 17 and get 34 is very close in the design space to a calculator that gets -34; we just try to prevent the errors that would take us there. We try to constrain the state trajectory narrowly enough that it doesn't slop over into any \"neighboring\" regions. 
This type of thinking is plausibly the best we can do for ordinary existential catastrophes, which occupy very large volumes of the design space near any AGI powerful enough to be helpful.\n\nBy contrast, an \"I Have No Mouth And I Must Scream\" scenario requires an AGI that specifically wants or identifies particular very-low-value regions of the outcome space. Most simple utility functions imply reconfiguring the universe in a way that merely kills us; a hyperexistential catastrophe is a much smaller target. Since hyperexistential risks can be extremely bad, we prefer to avoid even very tiny probabilities of them; and since they are narrow targets, it is reasonable to try to avoid *being anywhere near them* in the state space. This can be seen as a kind of Murphy-proofing; we will naturally try to rigidify the state trajectory and perhaps succeed, but errors in our reasoning are likely to take us to nearby-neighboring possibilities despite our best efforts. You would still need bad luck on top of that to end up in the particular neighborhood that denotes a hyperexistential catastrophe, but this is the type of small possibility that seems worth minimizing further.\n\nThis principle implies that *general* inference of human values should not be a target of an early-generation Task AGI. If a [meta-utility](https://arbital.com/p/7t8) function $U'$ contains all of the information needed to identify all of $V,$ then it contains all of the information needed to identify minima of $V.$ This would be the case if e.g. an early-generation AGI was explicitly identifying a meta-goal along the lines of \"learn all human values\". However, this consideration weighing against general value learning of true human values might not apply to e.g. a Task AGI that was learning inductively from human-labeled examples, if the labeling humans were not trying to identify or distinguish within \"dead or worse\" and just assigned all such cases the same \"bad\" label. There are still subtleties to worry about in a case like that, by which simple negative miracles might end up identifying the true $V$ anyway in a goal-valent way. But even on the first step of \"use the same label for death and worse-than-death as events to be avoided, likewise all varieties of bad fates better than death as a type of consequence to notice and describe to human operators\", it seems like we would have moved substantially further away in the design space from hyperexistential catastrophe.", "date_published": "2017-12-04T20:38:46Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "8vk"} {"id": "9837900e3f48d729a2f2a9f3caed0184", "title": "Cognitive uncontainability", "url": "https://arbital.com/p/uncontainability", "source": "arbital", "source_type": "text", "text": "[Vingean unpredictability](https://arbital.com/p/9g) is when an agent is cognitively uncontainable because it is smarter than us: if you could predict in advance exactly where [Deep Blue](https://arbital.com/p/) would move, you could play chess at least as well as Deep Blue yourself by doing whatever you predicted Deep Blue would do in your shoes.\n\nAlthough Vingean unpredictability is the classic way in which cognitive uncontainability can arise, other possibilities are [imaginable](https://arbital.com/p/9d). 
For instance, the AI could be operating in a [rich domain](https://arbital.com/p/9j) and searching a different part of the search space that humans have difficulty handling, while still being dumber or less competent overall than a human. In this case the AI's strategies might still be unpredictable to us, even while it was less effective or competent overall. Most [anecdotes about AI algorithms doing surprising things](https://arbital.com/p/) can be viewed from this angle.\n\nAn extremely narrow, exhaustibly searchable domain may yield cognitive containability even for intelligence locally superior to a human's. Even a perfect Tic-Tac-Toe player can only draw against a human who knows the basic strategies, because humans can also play perfect Tic-Tac-Toe. Of course this is only true so long as the agent can't modulate some transistors to form a wireless radio, escape onto the Internet, and offer a nearby bystander twenty thousand dollars to punch the human in the face - in which case the agent's strategic options would have included, in retrospect, things that affected the real world; and the real world is a much more complicated domain than Tic-Tac-Toe. There's some sense in which [richer domains](https://arbital.com/p/9j) seem likely to feed into increased cognitive uncontainability, but it's worth remembering that every game and every computer is embedded into the extremely complicated real world.\n\n[Strong cognitive uncontainability](https://arbital.com/p/2j) is when the agent knows some facts we don't, that it can use to formulate strategies that we wouldn't be able to recognize in advance as successful. From the perspective of e.g. the 11th century C.E. trying to cool their house, bringing in cool water from the nearby river to run over some nearby surfaces might be an understandable solution; but if you showed them the sketch of an air conditioner, without running the air conditioner or explaining how it worked, they wouldn't recognize this sketch as a smart solution because they wouldn't know the further facts required to see why it would work. When an agent can win using options that we didn't imagine, couldn't invent, and wouldn't understand even if we caught a glimpse of them in advance, it is strongly cognitively uncontainable in the same way that the 21st century is strongly uncontainable from the standpoint of the 11th century.", "date_published": "2015-12-16T15:13:34Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["An [agent](https://arbital.com/p/2s) is cognitively uncontainable in a [domain](https://arbital.com/p/), relative to us, when we can't hold all of the agent's possible strategies inside our own minds, and we can't make sharp predictions about what it can and can't do. For example, the most powerful modern chess programs, playing a human novice, would be cognitively uncontainable on the chess board (the novice can't imagine everything the chess program might do), but easily cognitively containable in the context of the larger world (the novice knows the chess program won't suddenly reach out and upset the board). 
One of the most critical [advanced agent properties](https://arbital.com/p/2c) is if an agent is cognitively uncontainable in the real world."], "tags": ["B-Class"], "alias": "9f"} {"id": "3a9cee773329f5b3df629b7f58bc06f2", "title": "Vingean uncertainty", "url": "https://arbital.com/p/Vingean_uncertainty", "source": "arbital", "source_type": "text", "text": "> Of course, I never wrote the “important” story, the sequel about the first amplified human. Once I tried something similar. John Campbell’s letter of rejection began: “Sorry—you can’t write this story. Neither can anyone else.”...\n> “Bookworm, Run!” and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. \n> -- [Vernor Vinge](https://books.google.com/books?id=tEMQpbiboH0C&pg=PA44&lpg=PA44&dq=vinge+%22pass+beyond+our+understanding%22+%22john+campbell%22&source=bl&ots=UTTxJ7Pndr&sig=88zngfy45_he2nJePP5dd0CTuR4&hl=en&sa=X&ved=0ahUKEwjD34_wrubJAhUHzWMKHVXYAocQ6AEIHTAA#v=onepage&q=vinge%20%22pass%20beyond%20our%20understanding%22%20%22john%20campbell%22&f=false), True Names and other Dangers, p. 47.\n\nVingean unpredictability is a key part of how we think about a [consequentialist intelligence](https://arbital.com/p/9h) which we believe is smarter than us in a domain. In particular, we usually think we can't predict exactly what a smarter-than-us agent will do, because if we could predict that, we would be that smart ourselves ([https://arbital.com/p/1c0](https://arbital.com/p/1c0)).\n\nIf you could predict exactly what action [Deep Blue](https://arbital.com/p/1bx) would take on a chessboard, you could play as well as Deep Blue by making whatever move you predicted Deep Blue would make. It follows that Deep Blue's programmers necessarily sacrificed their ability to intuit Deep Blue's exact moves in advance, in the course of creating a superhuman chessplayer.\n\nBut this doesn't mean Deep Blue's programmers were confused about the criterion by which Deep Blue chose actions. Deep Blue's programmers still knew in advance that Deep Blue would try to *win* rather than lose chess games. They knew that Deep Blue would try to steer the chess board's future into a particular region that was high in Deep Blue's preference ordering over chess positions. We can predict the *consequences* of Deep Blue's moves better than we can predict the moves themselves.\n\n\"Vingean uncertainty\" is the peculiar epistemic state we enter when we're considering sufficiently intelligent programs; in particular, we become less confident that we can predict their exact actions, and more confident of the final outcome of those actions.\n\n(Note that this rejects the claim that we are epistemically helpless and can know nothing about beings smarter than ourselves.)\n\nFurthermore, our ability to think about agents smarter than ourselves is not limited to knowing a particular goal and predicting its achievement. If we found a giant alien machine that seemed very well-designed, we might be able to infer the aliens were superhumanly intelligent even if we didn't know the aliens' ultimate goals. 
If we saw metal pipes, we could guess that the pipes represented some stable, optimal mechanical solution which was made out of hard metal so as to retain its shape. If we saw superconducting cables, we could guess that this was a way of efficiently transporting electrical work from one place to another, even if we didn't know what final purpose the electricity was being used for. This is the idea behind [https://arbital.com/p/10g](https://arbital.com/p/10g): if we can recognize that an alien machine is efficiently harvesting and distributing energy, we might recognize it as an intelligently designed artifact in the service of *some* goal even if we don't know the goal.\n\n# Noncontainment of belief within the action probabilities\n\nWhen reasoning under Vingean uncertainty, due to our [lack of logical omniscience](https://arbital.com/p/), our beliefs about the consequences of the agent's actions are not fully contained in our probability distribution over the agent's actions.\n\nSuppose that on each turn of a chess game you are playing against Deep Blue, I ask you to put a probability distribution on Deep Blue's possible chess moves. If you are a rational agent you should be able to put a [well-calibrated](https://arbital.com/p/1bw) probability distribution on these moves - most trivially, by assigning every legal move an equal probability (if Deep Blue has 20 legal moves, and you assign each move 5% probability, you are guaranteed to be well-calibrated).\n\nNow imagine a randomized game player RandomBlue that, on each round, draws randomly from the probability distribution you'd assign to Deep Blue's move from the same chess position. In every turn, your belief about where you'll observe RandomBlue move is equivalent to your belief about where you'd see Deep Blue move. But your belief about the probable end of the game is very different. (This is only possible due to your lack of logical omniscience - you lack the computing resources to map out the complete sequence of expected moves, from your beliefs about each position.)\n\nIn particular, we could draw the following contrast between your reasoning about Deep Blue and your reasoning about RandomBlue:\n\n- When you see Deep Blue make a move to which you assigned a low probability, you think the rest of the game will go worse for you than you expected (that is, Deep Blue will do better than you previously expected).\n- When you see RandomBlue make a move that you assigned a low probability (i.e., a low probability that Deep Blue would make that move in that position), you expect to beat RandomBlue sooner than you previously expected (things will go worse for RandomBlue than your previous average expectation).\n\nThis reflects our belief in something like the [instrumental efficiency](https://arbital.com/p/6s) of Deep Blue. When we estimate the probability that Deep Blue makes a move $x$, we're estimating the probability that, as Deep Blue estimated each move $y$'s expected probability of winning $EU[y]$, Deep Blue found $\\forall y \\neq x: EU[x] > EU[y]$ (neglecting the possibility of exact ties, which is unlikely with deep searches and floating-point position-value estimates). 
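\n\nAs a toy illustration of this contrast (a minimal sketch with made-up numbers and function names, not anything from the original article), consider a Python model in which the stronger player maximizes its own utility estimates while RandomBlue merely samples from *your* predictive distribution:\n\n    import random\n\n    # Toy stand-ins, not real chess: each move's estimated expected utility\n    # to the stronger player is just a fixed made-up number.\n    TOY_EU = {'a': 0.61, 'b': 0.58, 'c': 0.64, 'd': 0.12}\n\n    def predicted_move_distribution(position):\n        # Your subjective, calibrated distribution over the strong player's\n        # next move; most trivially, uniform over the legal moves.\n        moves = list(TOY_EU)\n        return {m: 1 / len(moves) for m in moves}\n\n    def deep_blue_move(position):\n        # The strong player outputs the move it estimates has the highest EU.\n        return max(TOY_EU, key=TOY_EU.get)\n\n    def random_blue_move(position):\n        # RandomBlue samples a move from your predictive distribution instead.\n        dist = predicted_move_distribution(position)\n        return random.choices(list(dist), weights=list(dist.values()))[0]\n\n    print(deep_blue_move(None), random_blue_move(None))  # e.g. 'c' and a random move\n\nMove by move, your forecasts for the two players coincide; your forecast of the *game outcome* does not, because only one of the two processes is systematically steering toward high-value futures.\n\n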
If Deep Blue picks $z$ instead of $x$, we know that Deep Blue estimated $\\forall y \\neq z: EU[z] > EU[y]$ and in particular that Deep Blue estimated $EU[z] > EU[x]$. This could be because the worth of $x$ to Deep Blue was less than we expected, but for the low-probability move $z$ to be better than all other moves as well implies that $z$ had an unexpectedly high value relative to our own estimates. Thus, when Deep Blue makes a very unexpected move, we mostly expect that Deep Blue saw an unexpectedly good move that was better than what we thought was the best available move.\n\nIn contrast, when RandomBlue makes an unexpected move, we think the random number generator happened to land on a move that we justly assigned low worth, and hence we expect to defeat RandomBlue faster than we otherwise would have.\n\n# Features of Vingean reasoning\n\nSome interesting features of reasoning under Vingean uncertainty:\n\n- We may find ourselves more confident of the predicted consequences of an action than of the predicted action.\n- We may be more sure about the agent's instrumental strategies than its goals.\n- Due to our lack of logical omniscience, our beliefs about the agent's action-mediated relation to the environment are not screened off by our probability distribution over the system's probable next actions.\n - We update on the probable consequence of an action, and on the probable consequences of other actions not taken, after observing that the agent actually outputs that action.\n- If there is a compact way to describe the previous consequences of the agent's previous actions, we might try to infer that this consequence is a *goal* of the agent. We might then predict similar consequences in the future, even without being able to predict the agent's specific next actions.\n\nOur expectation of Vingean unpredictability in a domain may break down [if the domain is extremely simple and sufficiently closed](https://arbital.com/p/9j). In this case there may be an optimal play that we already know, making superhuman (unpredictable) play impossible.\n\n# Cognitive uncontainability\n\nVingean unpredictability is one of the core reasons to expect [cognitive uncontainability](https://arbital.com/p/9f) in [sufficiently intelligent agents](https://arbital.com/p/2c).\n\n# Vingean reflection\n\n[https://arbital.com/p/1c1](https://arbital.com/p/1c1) is reasoning about cognitive systems, especially cognitive systems very similar to yourself (including your actual self), under the constraint that you can't predict the exact future outputs. Deep Blue's programmers, by reasoning about the way Deep Blue was searching through game trees, could arrive at a well-justified but abstract belief that Deep Blue was 'trying to win' (rather than trying to lose) and reasoning effectively to that end.\n\nIn [https://arbital.com/p/1c1](https://arbital.com/p/1c1) we need to make predictions about the consequence of operating an agent in an environment, without knowing the agent's exact future actions - presumably via reasoning on some more abstract level, somehow. 
In [https://arbital.com/p/-1mq](https://arbital.com/p/-1mq), [https://arbital.com/p/1c0](https://arbital.com/p/1c0) appears in the rule that we should talk about our successor's specific actions only inside of quantifiers.\n\n\"Vingean reflection\" may be a much more general issue in the design of advanced cognitive systems than it might appear at first glance. An agent reasoning about the consequences of *its current code*, or considering what will happen if it *spends another minute thinking,* can be viewed as doing Vingean reflection. Vingean reflection can also be seen as the study of how a given agent *wants* thinking to occur in cognitive computations, which may be importantly different from how the agent *currently* thinks. (If these two coincide, we say the agent is [reflectively stable](https://arbital.com/p/1fx).)\n\n[https://arbital.com/p/1mq](https://arbital.com/p/1mq) is presently the main line of research trying to (slowly) get started on formalizing Vingean reflection and reflective stability.", "date_published": "2016-06-20T23:55:06Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["[https://arbital.com/p/1c0](https://arbital.com/p/1c0) says that you (usually) can't predict *exactly* what an entity smarter than you will do, because if you knew exactly what a smart agent would do, you would be at least that smart yourself. If you can predict exactly what move [Deep Blue](https://arbital.com/p/1bx) will make on a chessboard, you can play chess as well as Deep Blue by moving to the same place you predict Deep Blue would.\n\nThis doesn't mean Deep Blue's programmers were ignorant of all aspects of their creation. They understood where Deep Blue was working to steer the board's future - that Deep Blue was trying to win (rather than lose) chess games.\n\n\"Vingean uncertainty\" is the epistemic state we enter into when we consider an agent too smart for us to predict its exact actions. In particular, we will probably become *more* confident of the agent achieving its goals - that is, become more confident of which final outcomes will result from the agent's actions - even as we become *less* confident of which exact actions the agent will take."], "tags": ["B-Class"], "alias": "9g"} {"id": "5a7d26117f3ef0df03e75e437f8f20a5", "title": "Consequentialist cognition", "url": "https://arbital.com/p/consequentialist", "source": "arbital", "source_type": "text", "text": "summary: \"Consequentialism\" is the name for the backward step from preferring future outcomes to selecting current actions.\n\nE.g: You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford. (If this sounds extremely basic and obvious, it's meant to be.) An air conditioner isn't designed by liking metal that joins together at right angles, it's designed such that the future consequence of running the air conditioner will be cold air.\n\nConsequentialism requires:\n\n- Being able to predict or guess the future outcomes of different actions or policies;\n- Having a way to order outcomes, ranking them from lowest to highest;\n- Searching out actions that are predicted to lead to high-ranking futures;\n- Outputting those actions.\n\nOne might say that humans are empirically more powerful than mice because we are better consequentialists. If we want to eat, we can envision a spear and throw it at prey. 
If we want the future consequence of a well-lit room, we can envision a solar power panel.\n\nMany of the issues in [AI alignment](https://arbital.com/p/2v) and the [safety of advanced agents](https://arbital.com/p/2l) arise when a machine intelligence starts to be a consequentialist across particular interesting domains.\n\nConsequentialist reasoning selects policies on the basis of their predicted consequences - it does action $X$ because $X$ is forecasted to lead to preferred outcome $Y$. Whenever we reason that an agent which prefers outcome $Y$ over $Y'$ will therefore do $X$ instead of $X',$ we're implicitly assuming that the agent has the cognitive ability to do consequentialism at least about $X$s and $Y$s. It does means-end reasoning; it selects means on the basis of their predicted ends plus a preference over ends.\n\nE.g: When we [infer](https://arbital.com/p/2vl) that a [paperclip maximizer](https://arbital.com/p/10h) would try to [improve its own cognitive abilities](https://arbital.com/p/3ng) given means to do so, the background assumptions include:\n\n- That the paperclip maximizer can *forecast* the consequences of the policies \"self-improve\" and \"don't try to self-improve\";\n- That the forecasted consequences are respectively \"more paperclips eventually\" and \"less paperclips eventually\";\n- That the paperclip maximizer preference-orders outcomes on the basis of how many paperclips they contain;\n- That the paperclip maximizer outputs the immediate action it predicts will lead to more future paperclips.\n\n(Technically, since the forecasts of our actions' consequences will usually be uncertain, a coherent agent needs a [utility function over outcomes](https://arbital.com/p/1fw) and not just a preference ordering over outcomes.)\n\nThe related idea of \"backward chaining\" is one particular way of solving the cognitive problems of consequentialism: start from a desired outcome/event/future, and figure out what intermediate events are likely to have the consequence of bringing about that event/outcome, and repeat this question until it arrives back at a particular plan/policy/action.\n\nMany narrow AI algorithms are consequentialists over narrow domains. A chess program that searches far ahead in the game tree is a consequentialist; it outputs chess moves based on the expected result of those chess moves and your replies to them, into the distant future of the board.\n\nWe can see one of the critical aspects of human intelligence as [cross-domain consequentialism](https://arbital.com/p/cross_consequentialism). Rather than only forecasting consequences within the boundaries of a narrow domain, we can trace chains of events that leap from one domain to another. Making a chess move wins a chess game that wins a chess tournament that wins prize money that can be used to rent a car that can drive to the supermarket to get milk. An Artificial General Intelligence that could learn many domains, and engage in consequentialist reasoning that leaped across those domains, would be a [sufficiently advanced agent](https://arbital.com/p/2c) to be interesting from most perspectives on interestingness. 
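\n\nTo make the basic loop concrete, here is a minimal sketch in Python (the world model, outcomes, and utilities below are hypothetical stand-ins invented for illustration, not part of the original article): forecast the outcome of each available action, rank outcomes by preference, and output the action whose forecasted outcome ranks highest.\n\n    def forecast(action):\n        # Toy world model: predicts the future outcome of each action.\n        outcomes = {'stay_home': 'dark_room', 'install_solar_panel': 'well_lit_room'}\n        return outcomes[action]\n\n    def utility(outcome):\n        # Toy preference ordering over outcomes.\n        return {'dark_room': 0, 'well_lit_room': 1}[outcome]\n\n    def choose_action(actions):\n        # Select the action predicted to lead to the highest-ranked future.\n        return max(actions, key=lambda a: utility(forecast(a)))\n\n    print(choose_action(['stay_home', 'install_solar_panel']))  # -> install_solar_panel\n\nA real consequentialist would forecast under uncertainty and maximize expected utility rather than compare two hand-coded outcomes, but the backward step from preferred futures to present actions is the same. Returning to the cross-domain reasoner described above: 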
It would start to be a consequentialist about the real world.\n\n# Pseudoconsequentialism\n\nSome systems are [https://arbital.com/p/-pseudoconsequentialist](https://arbital.com/p/-pseudoconsequentialist) - they in some ways *behave as if* outputting actions on the basis of their leading to particular futures, without using an explicit cognitive model and explicit forecasts.\n\nFor example, natural selection has a lot of the power of a cross-domain consequentialist; it can design whole organisms around the consequence of reproduction (or rather, inclusive genetic fitness). It's a fair approximation to say that spiders weave webs *because* the webs will catch prey that the spider can eat. Natural selection doesn't actually have a mind or an explicit model of the world; but millions of years of selecting DNA strands that did in fact previously construct an organism that reproduced, gives an effect *sort of* like outputting an organism design on the basis of its future consequences. (Although if the environment changes, the difference suddenly becomes clear: natural selection doesn't immediately catch on when humans start using birth control. Our DNA goes on having been selected on the basis of the *old* future of the ancestral environment, not the *new* future of the actual world.)\n\nSimilarly, a reinforcement-learning system learning to play Pong might not actually have an explicit model of \"What happens if I move the paddle here?\" - it might just be re-executing policies that had the consequence of winning last time. But there's still a future-to-present connection, a pseudo-backwards-causation, based on the Pong environment remaining fairly constant over time, so that we can sort of regard the Pong player's moves as happening *because* it will win the Pong game.\n\n# Ubiquity of consequentialism\n\nConsequentialism is an extremely basic idiom of optimization:\n\n- You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford.\n- An air conditioner is an artifact selected from possibility space such that the future consequence of running the air conditioner will be cold air.\n- A butterfly, by virtue of its DNA having been repeatedly selected to *have previously* brought about the past consequence of replication, will, under stable environmental conditions, bring about the future consequence of replication.\n- A rat that has previously learned a maze, is executing a policy that previously had the *consequence* of reaching the reward pellets at the end: A series of turns or behavioral rule that was neurally reinforced in virtue of the future conditions to which it led the last time it was executed. This policy will, given a stable maze, have the same consequence next time.\n- Faced with a superior chessplayer, we enter a state of [Vingean uncertainty](https://arbital.com/p/9g) in which we are more sure about the final consequence of the chessplayer's moves - that it wins the game - than we have any surety about the particular moves made. To put it another way, the main abstract fact we know about the chessplayer's next move is that the consequence of the move will be winning.\n- As a chessplayer becomes strongly superhuman, its play becomes [instrumentally efficient](https://arbital.com/p/6s) in the sense that *no* abstract description of the moves takes precedence over the consequence of the move. 
A weak computer chessplayer might be described in terms like \"It likes to move its pawn\" or \"it tries to grab control of the center\", but as the chess play improves past the human level, we can no longer detect any divergence from \"it makes the moves that will win the game later\" that we can describe in terms like \"it tries to control the center (whether or not that's really the winning move)\". In other words, as a chessplayer becomes more powerful, we stop being able to describe its moves in any terms that take priority over our beliefs that the moves have a certain consequence.\n\nAnything that Aristotle would have considered as having a \"final cause\", or teleological explanation, without being entirely wrong about that, is something we can see through the lens of cognitive consequentialism or pseudoconsequentialism. A plan, a design, a reinforced behavior, or selected genes: Most of the complex order on Earth derives from one or more of these.\n\n# Interaction with advanced safety\n\nConsequentialism or pseudoconsequentialism, over various domains, is an [advanced agent property](https://arbital.com/p/2c) that is a key requisite or key threshold in several issues of AI alignment and advanced safety:\n\n- You get [unforeseen maxima](https://arbital.com/p/47) because the AI connected up an action you didn't think of, with a future state it wanted.\n- It seems [foreseeable](https://arbital.com/p/6r) that some issues will be [patch-resistant](https://arbital.com/p/48) because of the [https://arbital.com/p/-42](https://arbital.com/p/-42) effect: after one road to the future is blocked off, the next-best road to that future is often a very similar one that wasn't blocked.\n- Reasoning about [https://arbital.com/p/-2vl](https://arbital.com/p/-2vl) generally relies on at least pseudoconsequentialism - they're strategies that *lead up to* or would be *expected to lead up to* improved achievement of other future goals.\n - This means that, by default, lots and lots of the worrisome or problematic convergent strategies like \"resist being shut off\" and \"build subagents\" and \"deceive the programmers\" arise from some degree of consequentialism, combined with some degree of [grasping the relevant domains](https://arbital.com/p/3nf).\n\nAbove all: The human ability to think of a future and plan ways to get there, or think of a desired result and engineer technologies to achieve it, is *the* source of humans having enough cognitive capability to be dangerous. Most of the magnitude of the impact of an AI, such that we'd want to align it in the first place, would come in a certain sense from that AI being a sufficiently good consequentialist or solving the same cognitive problems that consequentialists solve.\n\n# Subverting consequentialism?\n\nSince consequentialism seems tied up in so many issues, some of the proposals for making alignment easier have in some way tried to retreat from, limit, or subvert consequentialism. E.g:\n\n- [Oracles](https://arbital.com/p/6x) are meant to \"answer questions\" rather than output actions that lead to particular goals.\n- [Imitation-based](https://arbital.com/p/44z) agents are meant to imitate the behavior of a reference human as perfectly as possible, rather than selecting actions on the basis of their consequences.\n\nBut since consequentialism is so close to the heart of why an AI would be [sufficiently useful](https://arbital.com/p/6y) in the first place, getting rid of it tends to not be that straightforward. 
E.g:\n\n- Many proposals for [what to actually do](https://arbital.com/p/6y) with Oracles involve asking them to plan things, with humans then executing the plans.\n- An AI that [imitates](https://arbital.com/p/44z) a human doing consequentialism must be [representing consequentialism inside itself somewhere](https://arbital.com/p/1v0).\n\nSince 'consequentialism' or 'linking up actions to consequences' or 'figuring out how to get to a consequence' is so close to what would make advanced AIs useful in the first place, it shouldn't be surprising if some attempts to subvert consequentialism in the name of safety run squarely into [an unresolvable safety-usefulness tradeoff](https://arbital.com/p/42k).\n\nAnother concern is that consequentialism may to some extent be a convergent or default outcome of optimizing anything hard enough. E.g., although natural selection is a pseudoconsequentialist process, it optimized for reproductive capacity so hard that [it eventually spit out some powerful organisms that were explicit cognitive consequentialists](https://arbital.com/p/2rc) (aka humans).\n\nWe might similarly worry that optimizing any internal aspect of a machine intelligence hard enough would start to embed consequentialism somewhere - policies/designs/answers selected from a sufficiently general space such that \"do consequentialist reasoning\" is embedded in some of the most effective answers.\n\nOr perhaps a machine intelligence might need to be consequentialist in some internal aspects in order to be [smart enough to do sufficiently useful things](https://arbital.com/p/6y) - maybe you just can't get a sufficiently advanced machine intelligence, sufficiently early, unless it is, e.g., choosing on a consequential basis what thoughts to think about, or engaging in consequentialist engineering of its internal elements.\n\nIn the same way that [expected utility](https://arbital.com/p/18t) is the only coherent way of making certain choices, or in the same way that natural selection optimizing hard enough on reproduction started spitting out explicit cognitive consequentialists, we might worry that consequentialism is in some sense central enough that it will be hard to subvert - hard enough that we can't easily get rid of [instrumental convergence](https://arbital.com/p/10g) on [problematic strategies](https://arbital.com/p/2vl) just by getting rid of the consequentialism while preserving the AI's usefulness.\n\nThis doesn't say that the research avenue of subverting consequentialism is automatically doomed to be fruitless. It does suggest that this is a deeper, more difficult, and stranger challenge than, \"Oh, well then, just build an AI with all the consequentialist aspects taken out.\"", "date_published": "2016-06-11T03:04:41Z", "authors": ["Eric Bruylant", "Olivia Schaefer", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["\"Consequentialism\" is picking out immediate actions on the basis of which future outcomes you predict will result.\n\nE.g: Going to the airport, not because you really like airports, but because you predict that if you go to the airport now you'll be in Oxford tomorrow. 
Or throwing a ball in the direction that your cerebellum predicts will lead to the future outcome of a soda can being knocked off the stump.\n\nAn extremely basic and ubiquitous idiom of cognition."], "tags": ["B-Class"], "alias": "9h"} {"id": "c1c2e568c305d334e7e464c7f700c956", "title": "Rich domain", "url": "https://arbital.com/p/rich_domain", "source": "arbital", "source_type": "text", "text": "A [domain](https://arbital.com/p/7vf) is 'rich', relative to our own intelligence, to the extent that (1) its [search space](https://arbital.com/p/) is too large and irregular for all of the best strategies to be searched by our own intelligence, and (2) the known mechanics of the domain do not permit us to easily place absolute bounds against some event occurring or some goal of interest being strategically obtainable.\n\nFor the pragmatic implications, see [https://arbital.com/p/9f](https://arbital.com/p/9f) and [https://arbital.com/p/9t](https://arbital.com/p/9t).\n\n**[https://arbital.com/p/4v](https://arbital.com/p/4v)**\n\n%%comment:\n\nspectrum: Logical tic-tac-toe, logical chess, logical go, real-world tic-tac-toe, human brain, Internet, real world\n\nsince this is about advanced safety, we have to assume that there's a smarter-than-us intelligence that could seek out and exploit the tiniest loophole in our reasoning about how safe we are, and accordingly be safe and conservative in what we think is a 'narrow' domain, and flag every time we rely on the assumption that the agent's strategy space is narrow and therefore we can cognitively contain something smarter than us\n\ntalk about that old example of the evolutionary algorithm that used transistors to form a radio to form a clock circuit; gravity means that everything interacts with everything\n\nparadigmatic example: That Alien Message\n\n%%", "date_published": "2017-02-18T02:30:49Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress"], "alias": "9j"} {"id": "81089d489c2f3486d582dc79bbf48069", "title": "Glossary (Value Alignment Theory)", "url": "https://arbital.com/p/value_alignment_glossary", "source": "arbital", "source_type": "text", "text": "The parent page for the definitions of words that are given a special or unusual meaning inside [value alignment theory](https://arbital.com/p/2v).\n\nIf a word or phrase could be mistaken for ordinary English (e.g. 'value' or 'utility function'), then [you should create a glossary page](https://arbital.com/p/9p) indicating its special meaning inside VAT, and the first use of that word in any potentially confusing context should be linked here. While other special phrases (like Value Alignment Theory) should also be linked to their concept pages for understandability, they do not need glossary definitions apart from their existing concept pages. 
However, an overloaded word like ['value'](https://arbital.com/p/55) needs its own brief (or lengthy) page that can be quickly consulted by somebody wondering what that word is taken to mean in the context of VAT.\n\nSee also [https://arbital.com/p/5b](https://arbital.com/p/5b).", "date_published": "2015-12-17T21:40:28Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "9m"} {"id": "fefe2e78e4526bcd1919dc9580c0bed0", "title": "Programmer", "url": "https://arbital.com/p/value_alignment_programmer", "source": "arbital", "source_type": "text", "text": "The 'programmers' are the human beings creating whatever [advanced agent](https://arbital.com/p/2c) is being talked about.\n\nThe word 'programmer' isn't meant to imply that the AI is hardcoded as a fixed algorithm and then run; it's just the compact word we use to refer to the AI's creators. (E.g., the creation process could include explicitly coding algorithms, teaching the agent, exposing it to particular experiences, labeling its experiences for purposes of supervised learning, or any number of other nurturing processes extending over years.)\n\nMany proposals for an AI's [preference framework](https://arbital.com/p/), or solving the [value identification problem](https://arbital.com/p/), require that the AI explicitly model the programmers and, even before then, figure out which objects inside the AI's beliefs about the environment *are* the programmers. These are respectively the [programmer modeling](https://arbital.com/p/) and [programmer identification](https://arbital.com/p/) problems.", "date_published": "2015-12-15T23:10:38Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class", "Definition", "Glossary (Value Alignment Theory)"], "alias": "9r"} {"id": "8c1ab5c67d491c8b04855e90cde9230e", "title": "Logical game", "url": "https://arbital.com/p/logical_game", "source": "arbital", "source_type": "text", "text": "In the context of [AI Alignment Theory](https://arbital.com/p/2v), a 'logical' game is one that we are, for purposes of thought experiment, treating as having only the mathematical structure of the game as usually understood. In real-world chess, you can potentially bribe the opposing player, drug them, shoot them, or rearrange the board when they're not looking. In logical chess, we consider the entire universe to have shrunken to the size of the chess board plus two [Cartesian](Cartesian_agent-1) players, and we imagine that the conventional rules of chess are the absolute and unalterable laws of physics.\n\nThus, real-world chess is a [rich domain](https://arbital.com/p/9j), and logical chess is not. 
In fact, since everything in the real universe is constantly interacting (e.g., [a pebble thrown on the Earth exerts a gravitational influence on the Moon](https://arbital.com/p/)), to consider a conceptual example of something that is definitely, indisputably a narrow domain, we must generally resort to imagining *logical* (not real-world) Tic-Tac-Toe.", "date_published": "2021-03-04T00:31:01Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Rob Bensinger", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "9s"} {"id": "e12467ee2e258b6097775a63521d60cb", "title": "Almost all real-world domains are rich", "url": "https://arbital.com/p/real_is_rich", "source": "arbital", "source_type": "text", "text": "The proposition that almost all real-world problems occupy rich domains, or *could* occupy rich domains so far as we know, due to the degree to which most things in the real world entangle with many other real things.\n\nIf playing a *real-world* game of chess, it's possible to:\n\n- make a move that is especially likely to fool the opponent, given their cognitive psychology\n- annoy the opponent\n- try to cause a memory error in the opponent\n- bribe the opponent with an offer to let them win future games\n- bribe the opponent with candy\n- drug the opponent\n- shoot the opponent\n- switch pieces on the game board when the opponent isn't looking\n- bribe the referees with money\n- sabotage the cameras to make it look like the opponent cheated\n- [force some poorly designed circuits to behave as a radio](https://arbital.com/p/) so that you can break onto a nearby wireless Internet connection and build a smarter agent on the Internet who will create molecular nanotechnology and optimize the universe to make it look just like you won the chess game\n- or accomplish whatever was meant to be accomplished by 'winning the game' via some entirely different path.\n\nSince 'almost all' and 'might be' are not precise, for operational purposes this page's assertion will be taken to be, \"Every [superintelligence](https://arbital.com/p/41l) with options more complicated than those of a [Zermelo-Fraenkel provability Oracle](https://arbital.com/p/70), should be taken from our subjective perspective to have at least a 1/3 probability of being [cognitively uncontainable](https://arbital.com/p/9f).\"\n\n%%comment:\n*(Work in progress)*\n\n\n\na central difficulty of one approach to Oracle research is to so drastically constrain the Oracle's options that the domain becomes strategically narrow from its perspective (and we can know this fact well enough to proceed)\n\ngravitational influence of pebble thrown on Earth on moon, but this *seems* not usefully controllable because we *think* the AI can't possibly isolate *any* controllable effect of this entanglement.\n\nwhen we build an agent based on our belief that we've found an exception to this general rule, we are violating the Omni Test.\n\ncentral examples: That Alien Message, the Zermelo-Frankel oracle\n\n%%", "date_published": "2017-02-18T02:34:47Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress"], "alias": "9t"} {"id": "9ee9861364b9f87b6194d96bd8f74291", "title": "List", "url": "https://arbital.com/p/list_meta_tag", "source": "arbital", "source_type": "text", "text": "This meta tag is for pages that are basically just a list of things.", "date_published": "2015-12-16T16:32:09Z", "authors": ["Alexei Andreev"], "summaries": [], "tags": [], "alias": "19m"} {"id": 
"f272c22613589a8922095f8aa71b7283", "title": "Probability theory", "url": "https://arbital.com/p/probability_theory", "source": "arbital", "source_type": "text", "text": "Probability theory is the mathematics governing quantitative degrees of belief.", "date_published": "2015-12-18T21:16:55Z", "authors": ["Tsvi BT", "Nate Soares", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "1bv"} {"id": "e20f5a43c8776f6db428234b90ffb995", "title": "Well-calibrated probabilities", "url": "https://arbital.com/p/calibrated_probabilities", "source": "arbital", "source_type": "text", "text": "A system of [probability](https://arbital.com/p/1bv) assignments is \"well-calibrated\" if things to which it assigns 70% probability, happen around 70% of the time. This is the goal to which [bounded rationalists](https://arbital.com/p/) aspire - not to perfectly predict everything, but to be no more or less certain than our information warrants.", "date_published": "2015-12-18T21:20:44Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "1bw"} {"id": "a8dd9f0e86df7a47efe793d27946f330", "title": "Ability to read algebra", "url": "https://arbital.com/p/reads_algebra", "source": "arbital", "source_type": "text", "text": "This requisite asks whether you can read a sentence that throws in algebra or a mathematical concept, without slowing down too much. For instance, a sentence remarking that in the limit of flipping N coins each with a 1/N probability of coming up heads, the chance of never getting heads is 1/*e*. If that *kind* of sentence is one you can read, you should mark yourself as understanding this requisite, so you will automatically be shown Arbital pages and tabs that invoke math of that level.", "date_published": "2016-07-26T21:55:30Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1lx"} {"id": "702702ca1c88e3b90febef8f9c30cd6c", "title": "Bayesian update", "url": "https://arbital.com/p/bayes_update", "source": "arbital", "source_type": "text", "text": "A Bayesian *update* or *belief revision* is a change in [probabilistic](https://arbital.com/p/1rf) beliefs after gaining new knowledge. For example, after observing a patient's test result, we might revise our probability that a patient has a certain disease. If this belief revision obeys [Bayes's Rule](https://arbital.com/p/1lz), then it is called [Bayesian](https://arbital.com/p/1r8).\n\nBayesian belief updates have a number of other interesting properties, and exemplify many key principles of clear reasoning or [rationality](https://arbital.com/p/9l). 
Mapping Bayes's Rule onto real-life problems of encountering new evidence allows us to reproduce many intuitive features that have been suggested for \"how to revise beliefs in the face of new evidence\".\n\n- [](https://arbital.com/p/21v)\n- The scientific virtues of [falsifiability, advance prediction, boldness, precision, and falsificationism](https://arbital.com/p/220) can be seen in a Bayesian light.", "date_published": "2017-02-08T17:36:41Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["*Bayesian updating* or *Bayesian belief revision* is a way of changing [probabilistic](https://arbital.com/p/1rf) beliefs, in response to evidence, in a way that obeys [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) and in particular [https://arbital.com/p/1lz](https://arbital.com/p/1lz).\n\nWhen we observe a piece of evidence that is more likely to be seen if X is true than if X is false, we should assign more credence to X. For example, if I take a random object from your kitchen, and then I tell you this object is sharp, you should assign more credence than you did previously that it's a knife (even though forks are also sharp, so you won't be certain).\n\nFor more on this subject see [the Arbital guide to Bayes's rule](https://arbital.com/p/1zq)."], "tags": ["Start"], "alias": "1ly"} {"id": "f74d29d4d246eb25a2ec91195c600eb3", "title": "Bayes' rule", "url": "https://arbital.com/p/bayes_rule", "source": "arbital", "source_type": "text", "text": "summary(Technical): Bayes' rule (aka Bayes' theorem) is the quantitative [law](https://arbital.com/p/1bv) governing how to [revise probabilistic beliefs in response to observing new evidence](https://arbital.com/p/1ly). Suppose we previously thought that the probability of $h_1$, denoted $\\mathbb {P}(h_1)$, was twice as great as $\\mathbb {P}(h_2)$. Now we see a new piece of evidence, $e_0$, such that the probability of our seeing $e_0$ if $h_1$ is true (denoted by $\\mathbb {P}(e_0\\mid h_1)$) is one-fourth as great as $\\mathbb {P}(e_0\\mid h_2)$ (the probability of seeing $e_0$ if $h_2$ is true). After observing $e_0$, we should think that $h_1$ is now half as likely as $h_2$:\n\n$$\\frac{\\mathbb {P}(h_1\\mid e_0)}{\\mathbb {P}(h_2\\mid e_0)} = \\frac{\\mathbb {P}(h_1)}{\\mathbb {P}(h_2)} \\cdot \\frac{\\mathbb {P}(e_0\\mid h_1)}{\\mathbb {P}(e_0\\mid h_2)}$$\n\n[More generally](https://arbital.com/p/1zj), Bayes' rule states: $\\mathbb P(\\mathbf{H}\\mid e) \\propto \\operatorname{\\mathbb {P}}(e\\mid \\mathbf{H}) \\cdot \\operatorname{\\mathbb {P}}(\\mathbf{H}).$\n\nBayes' rule (aka Bayes' theorem) is the quantitative law of [probability theory](https://arbital.com/p/1bv) governing how to [revise](https://arbital.com/p/1ly) probabilistic beliefs in response to observing new evidence.\n\nYou may want to start at the [Guide](https://arbital.com/p/1zq) or the [Fast Intro](https://arbital.com/p/693).\n\n# The laws of reasoning\n\nImagine that, as part of a clinical study, you're being tested for a rare form of cancer, which affects 1 in 10,000 people. You have no reason to believe that you are more or less likely than average to have this form of cancer. 
You're administered a test which is 99% accurate, both in terms of [https://arbital.com/p/-specificity](https://arbital.com/p/-specificity) and [https://arbital.com/p/-sensitivity](https://arbital.com/p/-sensitivity): It _correctly_ detects the cancer (in patients who have it) 99% of the time, and it _incorrectly_ detects cancer (in patients who don't have it) only 1% of the time. The test results come back positive. What's the chance that you have cancer?\n\nBayes' rule says that the answer is precisely a 1 in 102 chance, which is a probability a little below 1%. The remarkable thing about this is that there is only one answer: the odds of you having that type of cancer, given the above information, is _exactly_ 1 in 102; no more, no less.\n\n%comment: (999,900 * 0.99 + 100 * 0.99) / (100 * 0.99) = (10098 / 99) = 102. Please leave this comment here so the above paragraph is not edited to be wrong.%\n\nThis is one of the key insights of Bayes' rule: Given what you knew, and what you saw, the maximally accurate state of belief for you to be in is completely pinned down. While that belief state is quite difficult to find in practice, we know how to find it in principle. If you want your beliefs to become more accurate as you observe the world, Bayes' rule gives some hints about what you need to do.\n\n# Learn Bayes' rule\n\n- [__Bayes' rule: Odds form.__](https://arbital.com/p/1x5) Bayes' rule is simple, if you think in terms of relative odds.\n- [__Bayes' rule: Proportional form.__](https://arbital.com/p/1zm) The fastest way to say something both convincing and true about belief-updating.\n- [__Bayes' rule: Log-odds form.__](https://arbital.com/p/1zh) A simple transformation of Bayes' rule reveals tools for measuring degree of belief, and strength of evidence.\n- [__Bayes' rule: Probabilistic form.__](https://arbital.com/p/554) The original formulation of Bayes' rule.\n- [__Bayes' rule: Functional form.__](https://arbital.com/p/1zj) Bayes' rule for continuous variables.\n- [__Bayes' rule: Vector form.__](https://arbital.com/p/1zg) For when you want to apply Bayes' rule to lots of evidence and lots of variables, all in one go.\n\n# Implications of Bayes' rule\n\n- [__A Bayesian view of scientific virtues.__](https://arbital.com/p/220) Why is it that science relies on bold, precise, and falsifiable predictions? Because of Bayes' rule, of course.\n- [__Update by inches.__](https://arbital.com/p/update_by_inches) It's virtuous to change your mind in response to overwhelming evidence. It's even more virtuous to shift your beliefs a little bit at a time, in response to *all* evidence (no matter how small).\n- [__Belief revision as probability elimination.__](https://arbital.com/p/1y6) Update your beliefs by throwing away large chunks of probability mass.\n- [__Shift towards the hypothesis of least surprise.__](https://arbital.com/p/552) When you see new evidence, ask: which hypothesis is *least surprised?*\n- [__Extraordinary claims require extraordinary evidence.__](https://arbital.com/p/21v) The people who adamantly claim they were abducted by aliens do provide *some* evidence for aliens. 
They just don't provide quantitatively *enough* evidence.\n- [__Ideal reasoning via Bayes' rule.__](https://arbital.com/p/) Bayes' rule is to reasoning as the [Carnot cycle](https://arbital.com/p/https://en.wikipedia.org/wiki/Carnot_cycle) is to engines: Nobody can be a perfect Bayesian, but Bayesian reasoning is still the theoretical ideal.\n\n# Related content\n\n- [__Subjective probability.__](https://arbital.com/p/4vr) Probability is in the mind, not the world. If you don't know whether a tossed coin came up heads or tails, that's a fact about you, not a fact about the coin.\n- [__Probability theory.__](https://arbital.com/p/1bv) The quantification and study of objects that represent uncertainty about the world, and methods for making those representations more accurate.\n- [__Information theory.__](https://arbital.com/p/3qq) The quantification and study of information, communication, and what it means for one object to tell us about another.\n\n# Other articles and introductions\n\n- [Wikipedia](https://arbital.com/p/http://en.wikipedia.org/wiki/Bayes%27_rule)\n- [Better Explained](https://arbital.com/p/http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/)", "date_published": "2017-02-21T22:15:37Z", "authors": ["Alexei Andreev", "Sandi x", "Dan Davies", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["When an observation is more likely given one state of the world than another, we should increase our credence that we're in the world that was more likely to have produced that observation. \n\nSuppose that Professor Plum and Miss Scarlet are two suspects in a murder, and that we start out thinking that Professor Plum is twice as likely to have committed the murder as Miss Scarlet. We then discover that the victim was poisoned. We think that, on occasions where they do commit murders, Professor Plum is around one-fourth as likely to use poison as Miss Scarlet. Then after observing the victim was poisoned, we should think Professor Plum is around half as likely to have committed the murder as Miss Scarlet: $2 \\times \\dfrac{1}{4} = \\dfrac{1}{2}.$\n\nThe quantitative rule at work here, in its various forms, is known as Bayes' rule."], "tags": ["B-Class"], "alias": "1lz"} {"id": "ff95cc5c5111000ce3fa6ff11ffbccb3", "title": "Arithmetical hierarchy", "url": "https://arbital.com/p/arithmetical_hierarchy", "source": "arbital", "source_type": "text", "text": "summary(Technical): The arithmetical hierarchy classifies statements by the number of nested, unbounded quantifiers they contain. The classes $\\Delta_0$, $\\Pi_0$, and $\\Sigma_0$ are equivalent and include statements containing only bounded quantifiers, e.g. $\\forall x < 10: \\exists y < x: x + y < 10$. If, treating $x, y, z...$ as constants, a statement $\\phi(x, y, z...)$ would be in $\\Sigma_n,$ then adjoining the unbounded universal quantifiers $\\forall x: \\forall y: \\forall z: ... \\phi(x, y, z...)$ creates a $\\Pi_{n+1}$ statement. Similarly, adjoining existential quantifiers to a $\\Pi_n$ statement creates a $\\Sigma_{n+1}$ statement. Statements that can be equivalently formulated to be in both $\\Pi_n$ and $\\Sigma_n$ are said to lie in $\\Delta_n$. 
Interesting consequences include, e.g., $\\Pi_1$ statements are falsifiable by simple observation, $\\Sigma_1$ statements are verifiable by observation, and statements strictly in higher classes can only be probabilistically verified by observation.\n\nThe arithmetical hierarchy classifies statements according to the number of unbounded $\\forall x$ and $\\exists y$ quantifiers, treating adjacent quantifiers of the same type as a single quantifier.\n\nThe formula $\\phi(x, y) \\leftrightarrow ((x + y) = (y + x)),$ treating $x$ and $y$ as constants, contains no quantifiers and would occupy the lowest level of the hierarchy, $\\Delta_0 = \\Pi_0 = \\Sigma_0.$ (Assuming that the operators $+$ and $=$ are themselves considered to be in $\\Delta_0$, or from another perspective, that for any particular $c$ and $d$ we can verify whether $c + d = d + c$ in bounded time.)\n\nAdjoining any number of $\\forall x_1: \\forall x_2: ...$ quantifiers to a statement that would be in $\\Sigma_n$ if the $x_i$ were considered as constants, creates a statement in $\\Pi_{n+1}.$ Thus, the statement $\\forall x: (x + 3) = (3 + x)$ is in $\\Pi_1.$\n\nSimilarly, adjoining $\\exists x_1: \\exists x_2: ...$ to a statement in $\\Pi_n$ creates a statement in $\\Sigma_{n+1}.$ Thus, the statement $\\exists y: \\forall x: (x + y) = (y + x)$ is in $\\Sigma_2$, while the statement $\\exists y: \\exists x: (x + y) = (y + x)$ is in $\\Sigma_1.$\n\nStatements in both $\\Pi_n$ and $\\Sigma_n$ (e.g. because they have provably equivalent formulations belonging to both classes) are said to lie in $\\Delta_n.$\n\nQuantifiers that can be bounded by $\\Delta_0$ functions of variables already introduced are ignored by this classification schema: the sentence $\\forall x: \\exists y < x: (x + y) = (y + x)$ is said to lie in $\\Pi_1$, not $\\Pi_2$. We can justify this by observing that for any particular $c,$ the statement $\\forall x < c: \\phi(x)$ can be expanded into the non-quantified statement $\\phi(0) \\wedge \\phi(1) \\wedge ... \\wedge \\phi(c-1)$ and similarly $\\exists x < c: \\phi(x)$ expands to $\\phi(0) \\vee \\phi(1) \\vee ...$\n\nThis in turn justifies collapsing adjacent quantifiers of the same type inside the classification schema. Since, e.g., we can uniquely encode every pair (x, y) in a single number $z = 2^x \\cdot 3^y$, to say \"there exists a pair (x, y)\" or \"for every pair (x, y)\" it suffices to quantify over z encoding (x, y) with x and y less than z.\n\nWe say that $\\Delta_{n+1}$ includes the entire sets $\\Pi_n$ and $\\Sigma_n$, since from a $\\Pi_{n}$ statement we can produce a $\\Pi_{n+1}$ statement just by adding a vacuous inner quantifier and then ignoring it, and we can obtain a $\\Sigma_{n+1}$ statement from a $\\Pi_{n}$ statement by adding a vacuous outer $\\exists$ quantifier and ignoring it, etcetera.\n\nThis means that the arithmetical hierarchy talks about *power sufficient to resolve statements*. To say $\\phi \\in \\Pi_n$ asserts that if you can resolve all $\\Pi_n$ formulas then you can resolve $\\phi$, which might potentially also be doable with less power than $\\Pi_n$, but can definitely not require more power than $\\Pi_n.$\n\n# Consequences for epistemic properties\n\nAll and only statements in $\\Sigma_1$ are *verifiable by observation*. If $\\phi \\in \\Delta_0$ then the sentence $\\exists x: \\phi(x)$ can be positively known by searching for and finding a single example. 
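\n\nA minimal sketch of such brute-force verification in Python (the particular $\\Delta_0$-style property tested here is an arbitrary stand-in chosen for illustration):\n\n    from itertools import count\n\n    def phi(x):\n        # A decidable, Delta_0-style property of a single number.\n        return x * x > 1000\n\n    def verify_exists(prop):\n        # Brute-force verification of the Sigma_1 sentence 'there exists x with prop(x)':\n        # search 0, 1, 2, ... and halt as soon as a single witness is found.\n        # If no witness exists, the loop never halts, so this procedure can\n        # certify the sentence true but never certify it false.\n        for x in count():\n            if prop(x):\n                return x\n\n    print(verify_exists(phi))  # -> 32, the first witness\n\n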
Conversely, if a statement involves an unbounded universal quantifier, we can never be sure of it through simple observation because we can't observe the truth for every possible number.\n\nAll and only statements in $\\Pi_1$ are *falsifiable by observation*. If $\\phi$ can be tested in bounded time, then we can falsify the whole statement $\\forall x: \\phi(x)$ by presenting some single x of which $\\phi$ is false. Conversely, if a statement involves an unbounded existential quantifier, we can never falsify it directly through a bounded number of observations because there could always be some higher, as-yet untested number that makes the sentence true.\n\nThis doesn't mean we can't get [probabilistic confirmation and disconfirmation](https://arbital.com/p/1ly) of sentences outside $\\Sigma_1$ and $\\Pi_1.$ E.g. for a $\\Pi_2$ statement, \"For every x there is a y\", each time we find an example of a y for another x, we might become a little more confident, and if for some x we fail to find a y after long searching, we might become a little less confident in the entire statement.", "date_published": "2016-01-16T23:51:28Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["The arithmetical hierarchy classifies logical statements by the number of nested clauses saying \"for every object\" and \"there exists an object\". Statements with one \"for every object\" clause belong in $\\Pi_1$, and statements with one \"there exists an object\" clause belong in $\\Sigma_1$. Saying \"There exists an object x such that (some $\\Pi_n$ statement treating x as a constant)\" creates a $\\Sigma_{n+1}$ statement. Similarly, adding a \"For every x\" clause outside a $\\Sigma_n$ statement creates a $\\Pi_{n+1}$ statement. Statements that can be formulated in both $\\Pi_n$ and $\\Sigma_n$ are said to lie in $\\Delta_n$. Some interesting consequences are that $\\Pi_1$ statements are falsifiable by observation, $\\Sigma_1$ statements are verifiable by observation, and statements strictly in higher classes can only be probabilistically verified by observation."], "tags": ["C-Class", "Needs links"], "alias": "1mg"} {"id": "3667fa4dc1a5aefec6b44f624c5ef349", "title": "Ability to read logic", "url": "https://arbital.com/p/reads_logic", "source": "arbital", "source_type": "text", "text": "This requisite asks whether you can read a sentence that throws in logical ideas and notation, without slowing down too much. If the statement $(\\exists v: \\forall w > v: \\forall x>0, y>0, z>0: x^w + y^w \\neq z^w) \\rightarrow ((1 = 0) \\vee (1 + 0 = 0 + 1))$ makes sense after a bit of staring, you should mark yourself as having this requisite. You will then automatically be shown Arbital pages and tabs containing such notation.", "date_published": "2016-07-26T21:59:02Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1mh"} {"id": "431e98b8a152b3db27f721a374b1b039", "title": "Arithmetical hierarchy: If you don't read logic", "url": "https://arbital.com/p/1mj", "source": "arbital", "source_type": "text", "text": "The arithmetical hierarchy is a way of stratifying statements by how many \"for every number\" and \"there exists a number\" clauses they contain.\n\nSuppose we say \"2 + 1 = 1 + 2\". 
Since this is only a statement about three specific numbers, this statement would occupy the lowest level of the arithmetical hierarchy, which we can equivalently call $\\Delta_0,$ $\\Pi_0,$ or $\\Sigma_0$ (the reason for using all three terms will soon become clear).\n\nNext, suppose we say, \"For all numbers x: (1 + x) = (x + 1).\" This generalizes over *all* numbers - a universal quantifier - and then makes a statement about each particular number x, that (1 + x) = (x + 1), which involves no further quantifiers and can be verified immediately. This statement is said to be in $\\Pi_1.$\n\nSuppose we say, \"There exists a y such that $y^9 = 9^y.$\" This is a single existential quantifier. To verify it by sheer brute force, we'd need to start from 0 and then consider successive integers y, checking for each particular y whether it was true that $y^9 = 9^y.$ Since the statement has a single existential quantifier over y, surrounding a statement that for any particular y is in $\\Delta_0,$ it is said to be in $\\Sigma_1.$\n\nSuppose we say, \"For every number x, there exists a prime number y that is greater than x.\" For any particular $c$ the statement \"There is a prime number y that is greater than $c$\" lies in $\\Sigma_1.$ Universally quantifying over all $c,$ outside of the $\\Sigma_1$ statement about any particular $c,$ creates a statement in $\\Pi_2.$\n\nSimilarly, the statement \"There exists a number x such that, for every number y, $(x + y) > 10^9$\" would be in $\\Sigma_2,$ since it adjoins a \"there exists a number x...\" to a statement that lies in $\\Pi_1$ for any particular $x.$\n\nGeneralizing, putting a \"There exists an x...\" quantifier outside a $\\Pi_n$ statement creates a $\\Sigma_{n+1}$ statement, and putting a \"For all y\" quantifier outside a $\\Sigma_n$ statement about y creates a $\\Pi_{n+1}$ statement.\n\nIf there are equivalent ways of formulating a sentence such that it can be seen to occupy both $\\Sigma_n$ and $\\Pi_n$, we say that it belongs to $\\Delta_n.$ \n\n# Consequences for epistemic reasoning\n\nStatements in $\\Sigma_1$ are *verifiable*. Taking \"There exists $y$ such that $y^9 = 9^y$\" as the example, as soon as we find any one particular $y$ such that $y^9 = 9^y,$ we can verify the central formula $y^9 = 9^y$ for that particular $y$ immediately, and then we're done.\n\nStatements in $\\Pi_1$ are *falsifiable*. We can decisively demonstrate them to be wrong by finding a particular example where the core statement is false. \n\nSentences in $\\Delta_1$ are those which are both falsifiable and verifiable in finite time.\n\n$\\Pi_2$ and $\\Sigma_2$ statements are not definitely verifiable or falsifiable by brute force. E.g. for a $\\Pi_2$ statement, \"For every x there is a y\", even after we've found a y for many particular x, we haven't tested all the x; and even if we've searched some particular x and not yet found any y, we haven't yet searched all possible y. But statements in this class can still be probabilistically supported or countersupported by examples; each time we find an example of a y for another x, we might become a little more confident, and if for some x we fail to find a y after a long time searching, we might become a little less confident.\n\n# Subtleties\n\n## Bounded quantifiers don't count\n\nThe statement, \"For every number $x,$ there exists a prime number $y$ smaller than $x^x$\" is said to lie in $\\Pi_1$, not $\\Pi_2$. 
Since the existence statement is bounded by $x^x$, a function which can itself be computed in bounded time, in principle we could just search through every possible $y$ that is less than $x^x$ and test it in bounded time. For any particular $x,$ such as $x = 2,$ we could indeed replace the statement \"There exists a prime number $y$ less than $2^2$\" with the statement \"Either 0 is a prime number, or 1 is a prime number, or 2 is a prime number, or 3 is a prime number\" which contains no quantifiers at all. Thus, in general within the arithmetical hierarchy, bounded quantifiers don't count.\n\nWe similarly observe that the statement \"For every number $x,$ there exists a prime number $y$ smaller than $x^x$\" is *falsifiable* - we could falsify it by exhibiting some particular constant $c,$ testing all the numbers smaller than $c^c,$ and failing to find any primes. (As would in fact be the case if we tested $c=1.$)\n\n## Similar adjacent quantifiers can be collapsed into a single quantifier\n\nSince bounded quantifiers don't count, it follows more subtly that we can combine adjacent quantifiers of the same type, since there are bounded ways to *encode* multiple numbers in a single number. For example, the numbers x and y can be encoded into a single number $z = 2^x \\cdot 3^y$. So if I want to say, \"For every nonzero integers x, y, and z, it is not the case that $x^3 + y^3 = z^3$\" I can actually just say, \"There's no number $w$ such that there exist nonzero x, y, and z *less than w* with $w = 2^x \\cdot 3^y \\cdot 5^z$ and $x^3 + y^3 = z^3.$\" Thus, the three adjacent universal quantifiers over all x, y, and z can be combined. However, if the sentence is \"for all x there exists y\", there's no way to translate that into a statement about a single number z, so only alike quantifiers can be collapsed in this way.\n\nWith these subtleties in hand, we can see, e.g., that Fermat's Last Theorem belongs in $\\Pi_1,$ since FLT says, \"For *every* w greater than 2 and x, y, z greater than 0, it's not the case that $x^w + y^w = z^w.$ This implies that like any other $\\Pi_1$ statement, Fermat's Last Theorem should be falsifiable by brute force but not verifiable by brute force. If a counterexample existed, we could eventually find it by brute force (even if it took longer than the age of the universe) and exhibit that example to decisively disprove FLT; but there's no amount of brute-force verification of particular examples that can prove the larger theorem.\n\n## How implications interact with falsifiability and verifiability\n\nIn general, if the implication $X \\rightarrow Y$ holds, then:\n\n- If $Y$ is falsifiable, $X$ is falsifiable.\n- If $X$ is verifiable, $Y$ is verifiable.\n\nThe converse implications do not hold.\n\nAs an example, consider the $\\Pi_2$ statement \"For every prime $x$, there is a larger prime $y$\". Ignoring the existence of proofs, this statement is unfalsifiable by direct observation. The falsifiable $\\Pi_1$ statement, \"For every prime $x$, there is a larger prime $y = f(x) = 4x+1$ would if true imply the $\\Pi_2$ statement.\" But this doesn't make the $\\Pi_2$ statement falsifiable. Even if the $\\Pi_1$ assertion about the primeness of $4x+1$ in particular is false, the $\\Pi_2$ statement can still be true (as is indeed the case). [ Patrick, is there a particular reason we want this knowledge to be accessible to people who don't natively read logic? I.e. 
were you making something else rely on it?](https://arbital.com/p/comment:)", "date_published": "2016-04-04T02:04:32Z", "authors": ["Patrick LaVictoire", "Alexei Andreev", "Eric Bruylant", "Noah Walton", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Needs clickbait", "C-Class", "Needs links"], "alias": "1mj"} {"id": "4426b94a4baa975a529ae602c1138145", "title": "Math 0", "url": "https://arbital.com/p/math0", "source": "arbital", "source_type": "text", "text": "A reader at the **Math 0** level has only a grasp of basic arithmetic and solving problems that employ basic arithmetical concepts. Generally, they will not have any experience with [https://arbital.com/p/-algebra](https://arbital.com/p/-algebra).\n\nThere's no shame in being Math 0! We all started learning at some point, and Arbital aims to provide explanations of important mathematical concepts for readers of _all_ backgrounds, including people at a Math 0 level. Even if you learned math to a Math 1 level in high school and forgot it all afterwards, we're here to help you remember what you need to know, and understand it better than you did before.\n\n## Writing for a Math 0 audience\n\nWhen writing for readers at a Math 0 level, avoid the use of any variables or special symbols whenever possible, unless you're introducing the new symbol to them in the first place. Also try to keep article text at a readability level of a seventh or eighth grade student.\n\nUse plenty of images — giving visualizations of what's going on is a very helpful tool in teaching.\n\nIf you're having a hard time wrangling down the subject to that level of readability, perhaps reconsider the level of your tutorial — the subject might not be accessible to Math 0 readers anyway.", "date_published": "2016-07-09T16:01:44Z", "authors": ["Joe Zeng", "Alexei Andreev", "Mark Chimes", "Jaime Sevilla Molina", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1r3"} {"id": "0b2532ff47db97e691cdd2255da61160", "title": "Math 1", "url": "https://arbital.com/p/math1", "source": "arbital", "source_type": "text", "text": "A reader at the **Math 1** level has enough mathematical ability to encompass \"good at math\" in a colloquial sense. They know at least some basic algebra, and how to apply algebraic thinking to problems in some contexts. If you threw a simple to moderate math puzzle at them, they could probably figure out what they were supposed to do without balking and saying \"nope, I don't know how to do this\".\n\n## Writing for a Math 1 audience\n\nAt this level, you can start to use letters to represent numbers as variables and manipulate them directly.", "date_published": "2016-07-09T16:41:22Z", "authors": ["Joe Zeng", "Alexei Andreev", "Nate Soares", "Eric Bruylant", "Mark Chimes", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1r5"} {"id": "6356c637ad0f58c11c71de85a64a5d3d", "title": "Math 2", "url": "https://arbital.com/p/math2", "source": "arbital", "source_type": "text", "text": "Do you work with math on a fairly routine basis? Do you have little trouble grasping new mathematical ideas that use language you already know? Having this requisite would be typical of a computer programmer, a physical engineer, or someone else who routinely works with mathematically-structured ideas. 
At this level, you start to see LaTeX formulas in passing, but they'll have explanations attached.", "date_published": "2016-01-26T01:57:46Z", "authors": ["Joe Zeng", "Alexei Andreev", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1r6"} {"id": "31188f93a061f544194c0e93e8fb462a", "title": "Math 3", "url": "https://arbital.com/p/math3", "source": "arbital", "source_type": "text", "text": "A reader at the **Math 3** level can read the sorts of things that a research-level mathematician could — if you're Math 3, it's okay to throw LaTeX formulas straight at you, using standard notation, with a minimum of handholding.\n\nAt the Math 3 level, different schools of mathematics may have their own standard notation, so somebody who is Math 3 in one discipline or subject may not necessarily be Math 3 in another.\n\n## Writing for a Math 3 audience\n\nWhen writing for a Math 3 audience, all bets are off on readability. You can use as much formal notation as you like in order to define your point properly and clearly.", "date_published": "2016-07-09T00:05:11Z", "authors": ["Joe Zeng", "Alexei Andreev", "Patrick Stevens", "Nate Soares", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1r7"} {"id": "b3040db4e2f0c3517c86168eab5833fb", "title": "Ability to read calculus", "url": "https://arbital.com/p/reads_calculus", "source": "arbital", "source_type": "text", "text": "Check off this requisite if you can read sentences containing integrals and differentiations.", "date_published": "2016-07-26T21:59:15Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Just a requisite", "Stub"], "alias": "1r9"} {"id": "c692127ebaff44a272ab456e39d366c7", "title": "Odds", "url": "https://arbital.com/p/odds", "source": "arbital", "source_type": "text", "text": "summary(Technical): Odds express relative chances. If the odds for X versus Y are 2 : 3, this expresses that we think that X is 2/3 = 0.666... times as likely as Y. Clearly, odds of 6 : 9 express the same idea; odds are invariant up to multiplication by a positive factor. When an odds ratio [exhausts all the possibilities](https://arbital.com/p/1rd), then we can convert its components to probabilities by [normalizing](https://arbital.com/p/1rk) them so that they sum to 1. In the example above, the probabilities would be $2:3 = \\frac{2}{3+2}:\\frac{3}{3+2} = 0.4:0.6.$\n\n![](https://i.imgur.com/GVZnz2c.png?0)\n\nOdds are a tool for expressing relative [chances](https://arbital.com/p/1rf). If the odds of a tree in a forest being sick versus healthy are 2 : 3, this says that there are 2 sick trees for every 3 healthy trees. (The probability of a tree being sick, in this case, is 2/5 or 40%.)\n\n![](https://i.imgur.com/GVZnz2c.png?0)\n\nOdds are expressed in the form \"X to Y\", e.g. \"7 to 9 for X versus Y\", more compactly written as $7:9$.\n\nThe representation of chances as odds is often used in gambling and [https://arbital.com/p/-statistics](https://arbital.com/p/-statistics).", "date_published": "2016-10-24T19:03:35Z", "authors": ["Stephanie Zolayvar", "Alexei Andreev", "Eric Rogstad", "Gregor Gerasev", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky", "Emile Kroeger"], "summaries": ["Odds are a tool for expressing *relative* chances. If the odds of a tree in a forest being sick versus healthy are 2 : 3, this says that there are 2 sick trees for every 3 healthy trees. 
(The *probability* of a tree being sick, in this case, is 2/5 or 40%.)\n\n![](https://i.imgur.com/GVZnz2c.png?0)"], "tags": ["Concept"], "alias": "1rb"} {"id": "d76756557429cf7c05bfe7c977405c25", "title": "Mutually exclusive and exhaustive", "url": "https://arbital.com/p/exclusive_exhaustive", "source": "arbital", "source_type": "text", "text": "A set of propositions is \"mutually exclusive and exhaustive\" when exactly one of the propositions must be true. For example, of the two propositions \"The sky is blue\" and \"It is not the case that the sky is blue\", exactly one of those must be the case. Therefore, the [probabilities](https://arbital.com/p/1rf) of those propositions must sum to exactly 1.\n\nIf a set $X$ of propositions is \"mutually exclusive\", this states that for every two distinct propositions, the probability that both of them will be true simultaneously is zero:\n\n$\\forall i: \\forall j: i \\neq j \\implies \\mathbb{P}(X_i \\wedge X_j) = 0.$\n\nThis implies that for every two distinct propositions, the probability of their union equals the sum of their probabilities:\n\n$\\mathbb{P}(X_i \\vee X_j) = \\mathbb{P}(X_i) + \\mathbb{P}(X_j) - \\mathbb{P}(X_j \\wedge X_j) = \\mathbb{P}(X_i) + \\mathbb{P}(X_j).$\n\nThe \"exhaustivity\" condition states that the union of all propositions in $X,$ has probability $1$ (the probability of at least one $X_i$ happening is $1$):\n\n$\\mathbb{P}(X_1 \\vee X_2 \\vee \\dots \\vee X_N) = 1.$\n\nTherefore mutual exclusivity and exhaustivity imply that the probabilities of the propositions sum to 1:\n\n$\\displaystyle \\sum_i \\mathbb{P}(X_i) = 1.$", "date_published": "2016-04-26T21:40:08Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Fedor Belolutskiy"], "summaries": [], "tags": ["Math 2", "C-Class"], "alias": "1rd"} {"id": "b0be896c14af3aa4dcc8e5554f2d567c", "title": "Probability", "url": "https://arbital.com/p/probability", "source": "arbital", "source_type": "text", "text": "*Probabilities* are the central subject of the discipline of [https://arbital.com/p/1bv](https://arbital.com/p/1bv). $\\mathbb{P}(X)$ denotes our level of belief, or someone's level of belief, that the proposition $X$ is true. In the classical and canonical representation of probability, 0 expresses absolute incredulity, and 1 expresses absolute credulity. %%knows-requisite([https://arbital.com/p/1mh](https://arbital.com/p/1mh)): Furthermore, [mutually exclusive](https://arbital.com/p/1rd) events have additive classical probabilities: $\\mathbb{P}(X \\wedge Y) = 0 \\implies \\mathbb{P}(X \\vee Y) = \\mathbb{P}(X) + \\mathbb{P}(Y).$%%\n\nFor the standard probability axioms, see https://en.wikipedia.org/wiki/Probability_axioms. 
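As a concrete footnote to the two pages above, here is a minimal Python sketch (not part of the original text; `normalize` is just an illustrative helper name). It turns the 2 : 3 sick-versus-healthy odds into probabilities over a mutually exclusive and exhaustive pair of outcomes and checks that they sum to 1.

```python
# Illustrative sketch: relative odds over a mutually exclusive and exhaustive
# set of outcomes can be normalized into probabilities that sum to 1.

def normalize(odds):
    """Divide each term by the sum of all terms, yielding probabilities."""
    total = sum(odds)
    return [term / total for term in odds]

sick, healthy = normalize([2, 3])   # odds of 2 : 3 for a tree being sick vs. healthy
print(sick, healthy)                # 0.4 0.6, i.e. the 2/5 = 40% chance of sickness
print(sick + healthy)               # 1.0 (up to floating-point rounding), as exhaustivity requires
```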
\n\n# Notation\n\n$\\mathbb{P}(X)$ is the probability that X is true.\n\n$\\mathbb{P}(\\neg X) = 1 - \\mathbb{P}(X)$ is the probability that X is false.\n\n$\\mathbb{P}(X \\wedge Y)$ is the probability that both X and Y are true.\n\n$\\mathbb{P}(X \\vee Y)$ is the probability that X or Y or both are true.\n\n$\\mathbb{P}(X|Y) := \\frac{\\mathbb{P}(X \\wedge Y)}{\\mathbb{P}(Y)}$ is the **[conditional probability](https://arbital.com/p/1rj) of X given Y.** That is, $\\mathbb{P}(X|Y)$ is **the degree to which we would believe X, assuming Y to be true.** $\\mathbb{P}(yellow|banana)$ expresses \"The probability that a banana is yellow.\" $\\mathbb{P}(banana|yellow)$ expresses \"The probability that a yellow thing is a banana\".\n\n# Centrality of the classical representation\n\nWhile there are other ways of expressing quantitative degrees of belief, such as [odds ratios](https://arbital.com/p/1rb), there are several especially useful properties or roles of classical probabilities that give them a central / convergent / canonical status among possible ways of representing credence.\n\n[Odds ratios](https://arbital.com/p/1rb) are isomorphic to probabilities - we can readily go back and forth between a probability of 20%, and odds of 1:4. But unlike odds ratios, probabilities have the further appealing property of being able to add the probabilities of two [mutually exclusive](https://arbital.com/p/1rd) possibilities to arrive at the probability that one of them occurs. The 1/6 probability of a six-sided die turning up 1, plus the 1/6 probability of a die turning up 2, equals the 1/3 probability that the die turns up 1 or 2. The odds ratios 1:5, 1:5, and 1:2 don't have this direct relation (though we could convert to probabilities, add, and then convert back to odds ratios).\n\nThus, classical probabilities are uniquely the quantities that must appear in the [expected utilities](https://arbital.com/p/18v) to weigh how much we proportionally care about the uncertain consequences of our decisions. When an outcome has classical probability 1/3, we multiply the degree to which we care by a factor of 1/3, not by, e.g., the odds ratio 1:2.\n\nIf the amount you'd pay for a lottery ticket that paid out on 1 or 2 was more or less than twice the price you paid for a lottery ticket that only paid out on 1, or a lottery ticket that paid out on 2, then I could buy from you and sell to you a combination of lottery tickets such that you would end up with a certain loss. This is an example of a [Dutch book argument](https://arbital.com/p/dutch_book), which is one kind of coherence theorem that underpins classical probability and its role in choice. (If we were dealing with actual betting and gambling, you might reply that you'd just refuse to bet on disadvantageous combinations; but in the much larger gamble that is life, \"doing nothing\" is just one more choice with an uncertain, probabilistic payoff.)\n\nThe combination of several such coherence theorems, most notably including the [Dutch Book arguments](http://plato.stanford.edu/entries/dutch-book/), [Cox's Theorem](https://en.wikipedia.org/wiki/Cox%27s_theorem) and its variations for probability theory, and the [Von Neumann-Morgenstern theorem](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) (VNM) and its variations for expected utility, together give the classical probabilities between 0 and 1 a *central* status in the theory of [epistemic and instrumental rationality](https://arbital.com/p/9l).
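The die example above can also be checked mechanically. This is an illustrative sketch only (not part of the original page; `odds_to_prob` and `prob_to_odds` are made-up helper names): probabilities of mutually exclusive outcomes add directly, while odds ratios have to be converted to probabilities, added, and converted back.

```python
from fractions import Fraction

# Illustrative sketch: probabilities of mutually exclusive outcomes add directly;
# odds ratios do not, but can be converted to probabilities and back.

def odds_to_prob(favorable, unfavorable):
    """Convert odds of favorable : unfavorable into an exact probability."""
    return Fraction(favorable, favorable + unfavorable)

def prob_to_odds(p):
    """Convert a probability back into (favorable, unfavorable) odds in lowest terms."""
    return (p.numerator, p.denominator - p.numerator)

p_one = odds_to_prob(1, 5)        # die comes up 1: odds 1 : 5, probability 1/6
p_two = odds_to_prob(1, 5)        # die comes up 2: odds 1 : 5, probability 1/6
p_one_or_two = p_one + p_two      # mutually exclusive, so the probabilities add

print(p_one_or_two)               # 1/3
print(prob_to_odds(p_one_or_two)) # (1, 2), i.e. odds of 1 : 2
```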
Other ways of representing scalar probabilities, or alternatives to scalar probability, would need to be converted or munged back into classical probabilities in order to animate [agents](https://arbital.com/p/agents) making coherent choices.\n\nThis also suggests that bounded agents which *approximate* coherence, or at least manage to avoid blatantly self-destructive violations of coherence, might have internal mental states which can be *approximately* viewed as corresponding to classical probabilities. Perhaps not in terms of such agents necessarily containing floating-point numbers that directly represent those probabilities internally, but at least in terms of our being able to look over the agent's behavior and deduce that they were \"behaving as if\" they had assigned some coherent classical probability.", "date_published": "2016-08-26T09:14:18Z", "authors": ["Alan Liddell", "Alexei Andreev", "Eric Rogstad", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Math 2", "C-Class"], "alias": "1rf"} {"id": "d6793f0f002e26ab844436a58dad721b", "title": "Joint probability", "url": "https://arbital.com/p/joint_probability", "source": "arbital", "source_type": "text", "text": "To write \"the chance that both X and Y are true\" using standard [https://arbital.com/p/1rf](https://arbital.com/p/1rf) notation, we write $\\mathbb{P}(X \\wedge Y)$ or $\\mathbb{P}(X, Y)$.", "date_published": "2016-06-16T15:43:03Z", "authors": ["Tsvi BT", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "1rh"} {"id": "d14db7955b590afadff99e3adf449a6b", "title": "Conditional probability", "url": "https://arbital.com/p/conditional_probability", "source": "arbital", "source_type": "text", "text": "summary: $\\mathbb{P}(X\\mid Y)$ means \"The [probability](https://arbital.com/p/1rf) that X is true, assuming Y is true.\"\n\n- $\\mathbb{P}(yellow\\mid banana)$ is \"the chance that something is yellow, given that we know it is a banana\" or equivalently \"the chance that a banana is yellow\".\n- $\\mathbb{P}(banana\\mid yellow)$ expresses \"how much we should think a random object is a banana, after being told that it was yellow\" or \"the chance that a yellow thing is a banana\".\n\nTo calculate a conditional probability $\\mathbb{P}(X\\mid Y)$, we consider only the cases where Y is true, and ask about the cases where X is also true.\n\nSuppose a barrel contains 15 round green marbles, 5 round blue marbles, 70 square green marbles, and 10 square blue marbles. 
\"The probability that a marble is blue, after we've been told that it's round\" or \"The probability a round marble is blue\" is calculated by restricting our attention to only the 20 round marbles, and asking about the 5 marbles that are both blue and round.\n\nLetting $\\mathbb{P}(blue \\wedge round)$ denote \"the probability a marble is both blue and round\":\n\n$\\mathbb{P}(blue\\mid round) := \\frac{\\mathbb{P}(blue \\wedge round)}{\\mathbb{P}(round)} = \\frac{\\text{5% blue and round marbles}}{\\text{20% round marbles}} = \\frac{5}{20} = 0.25.$\n\nIn general, $\\mathbb{P}(X\\mid Y) := \\frac{\\mathbb{P}(X \\wedge Y)}{\\mathbb{P}(Y)}.$\n\nsummary(Technical): $\\mathbb{P}(X\\mid Y) := \\frac{\\mathbb{P}(X \\wedge Y)}{\\mathbb{P}(Y)}$ is the answer to the question, \"Assuming $Y$ to be true, what is the probability of $X$?\" or \"Constraining our attention to only possibilities where $Y$ is true, what is the probability of $X \\wedge Y$ inside those cases?\" (Where $X \\wedge Y$ denotes \"X and Y\" or \"Both X and Y are true\".)\n\nThus, $\\mathbb P(observation\\mid hypothesis)$ would denote the [likelihood](https://arbital.com/p/56v) of seeing some observation, *if* a hypothesis is true. $\\mathbb P(hypothesis\\mid observation)$ would denote the [revised](https://arbital.com/p/1ly) probability we ought to assign to a hypothesis, after learning that the observation was true.\n\nThe conditional probability $\\mathbb{P}(X\\mid Y)$ means \"The [probability](https://arbital.com/p/1rf) of $X$ given $Y$.\" That is, $\\mathbb P(left\\mid right)$ means \"The probability that $left$ is true, assuming that $right$ is true.\"\n\n$\\mathbb P(yellow\\mid banana)$ is the probability that a banana is yellow - if we know something to be a banana, what is the probability that it is yellow?\n\n$\\mathbb P(banana\\mid yellow)$ is the probability that a yellow thing is a banana - if the right side is known to be $yellow$, then we ask the question on the left, what is the probability that this is a $banana$?\n\n# Definition\n\nTo obtain the probability $\\mathbb P(left \\mid right),$ we constrain our attention to only cases where $right$ is true, and ask about cases within $right$ where $left$ is also true.\n\nLet $X \\wedge Y$ denote \"$X$ and $Y$\" or \"$X$ and $Y$ are both true\". 
Then:\n\n$$\\mathbb P(left \\mid right) = \\dfrac{\\mathbb P(left \\wedge right)}{\\mathbb P(right)}.$$\n\nWe can see this as a kind of \"zooming in\" on only the cases where $right$ is true, and asking, *within* this universe, for the cases where $right$ *and* $left$ are true.\n\n# Example 1\n\nSuppose you have a bag containing objects that are either red or blue, and either square or round, where the number of each is given by the following table:\n\n$$\\begin{array}{l\\mid r\\mid r}\n& Red & Blue \\\\\n\\hline\nSquare & 1 & 2 \\\\\n\\hline\nRound & 3 & 4\n\\end{array}$$\n\nIf you reach in and feel a round object, the conditional probability that it is red is given in by zooming in on only the round objects, and asking about the frequency of objects that are round *and* red inside this zoomed-in view:\n\n$$\\mathbb P(red\\mid round) = \\dfrac{\\mathbb P(red \\wedge round)}{\\mathbb P(round)} = \\dfrac{3}{3 + 4} = \\dfrac{3}{7}$$\n\nIf you look at the object nearest the top, and can see that it's blue, but not see the shape, then the conditional probability that it's a square is:\n\n$$\\mathbb P(square\\mid blue) = \\dfrac{\\mathbb P(square \\wedge blue)}{\\mathbb P(blue)} = \\dfrac{2}{2 + 4} = \\dfrac{1}{3}$$\n\n![conditional probabilities bag](https://i.imgur.com/zscEdLj.png?0)\n\n# Example 2\n\nSuppose you're Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.\n\nThe Scotland Yard detective says, \"Aha! Then it's Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have left a red hair there. $\\mathbb P(red hair\\mid Scarlet) = 99\\%,$ let's say, which is a near-certain conviction, so we're done.\"\n\n\"But no,\" replies Sherlock Holmes. \"You see, but you do not correctly track the meaning of the conditional probabilities, detective. The knowledge we require for a conviction is not $\\mathbb P(redhair\\mid Scarlet),$ the chance that Miss Scarlet would leave a red hair, but rather $\\mathbb P(Scarlet\\mid redhair),$ the chance that this red hair was left by Scarlet. There are other people in this city who have red hair.\"\n\n\"So you're saying...\" the detective said slowly, \"that $\\mathbb P(redhair\\mid Scarlet)$ is actually much lower than $1$?\"\n\n\"No, detective. I am saying that just because $\\mathbb P(redhair\\mid Scarlet)$ is high does not imply that $\\mathbb P(Scarlet\\mid redhair)$ is high. It is the latter probability in which we are interested - the degree to which, *knowing* that a red hair was left at the scene, we *infer* that Miss Scarlet was the murderer. This is not the same quantity as the degree to which, *assuming* Miss Scarlet was the murderer, we would *guess* that she might leave a red hair.\"\n\n\"But surely,\" said the detective, \"these two probabilities cannot be entirely unrelated?\"\n\n\"Ah, well, for that, you must read up on [Bayes' rule](https://arbital.com/p/1lz).\"\n\n# Example 3\n\n> \"Even if most Dark Wizards are from Slytherin, very few Slytherins are Dark Wizards. There aren't all that many Dark Wizards, so not all Slytherins can be one.\"\n\n> \"So yeh're saying, that most Dark Wizards are Slytherins... but...\"\n\n> \"But most Slytherins are not Dark Wizards.\"\n\n— Harry Potter and the Methods of Rationality, Ch. 
100", "date_published": "2016-10-08T00:05:05Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["$\\mathbb{P}(X\\mid Y)$ means \"The [probability](https://arbital.com/p/1rf) that X is true, assuming Y is true.\"\n\n$\\mathbb{P}(yellow\\mid banana)$ is \"the chance that something is yellow, given that we know it is a banana\" or \"the chance that a banana is yellow\".\n\nConversely, $\\mathbb{P}(banana\\mid yellow)$ expresses \"how much we should think a random object is a banana, after being told that it was yellow.\""], "tags": ["C-Class"], "alias": "1rj"} {"id": "af97c0bd7aec7f70d6127b5484ec264c", "title": "Normalization (probability)", "url": "https://arbital.com/p/normalize_probabilities", "source": "arbital", "source_type": "text", "text": "\"Normalization\" is an arithmetical procedure carried out to obtain a set of [probabilities](https://arbital.com/p/1rf) summing to exactly 1, in cases where we believe that [exactly one of the corresponding possibilities is true](https://arbital.com/p/1rd), and we already know the [relative probabilities](https://arbital.com/p/1rb).\n\nFor example, suppose that the [odds](https://arbital.com/p/1rb) of Alexander Hamilton winning a presidential election are 3 : 2. But Alexander Hamilton must either win or not win, so the *probabilities* of him winning *or* not winning should sum to 1. If we just add 3 and 2, however, we get 5, which is an unreasonably large probability.\n\nIf we rewrite the odds as 0.6 : 0.4, we've preserved the same proportions, but made the terms sum to 1. We therefore calculate that Hamilton has a 60% probability of winning the election.\n\nWe normalized those odds by dividing each of the terms by the sum of terms, i.e., went from 3 : 2 to $\\frac{3}{3+2} : \\frac{2}{3+2} = 0.6 : 0.4.$\n\nIn converting the odds $m : n$ to $\\frac{m}{m+n} : \\frac{n}{m+n},$ the factor $\\frac{1}{m+n}$ by which we multiply all elements of the ratio is called a [normalizing constant](https://arbital.com/p/https://en.wikipedia.org/wiki/Normalizing_constant).\n\nMore generally, if we have a relative-odds function $\\mathbb{O}(H)$ where $H$ has many components, and we want to convert this to a probability function $\\mathbb{P}(H)$ that sums to 1, we divide every element of $\\mathbb{O}(H)$ by the sum of all elements in $\\mathbb{O}(H).$ That is:\n\n$\\mathbb{P}(H_i) = \\frac{\\mathbb{O}(H_i)}{\\sum_i \\mathbb{O}(H_i)}$\n\nAnalogously, if $\\mathbb{O}(x)$ is a continuous distribution on $X$, we would normalize it (create a proportional probability function $\\mathbb{P}(x)$ whose integral is equal to 1) by dividing $\\mathbb{O}(x)$ by its own integral:\n\n$\\mathbb{P}(x) = \\frac{\\mathbb{O}(x)}{\\int \\mathbb{O}(x) \\operatorname{d}x}$\n\nIn general, whenever a probability function on a variable is *proportional* to some other function, we can obtain the probability function by *normalizing* that function:\n\n$\\mathbb{P}(H) \\propto \\mathbb{O}(H) \\implies \\mathbb{P}(H) = \\frac{\\mathbb{O}(H)}{\\sum \\mathbb{O}(H)}$", "date_published": "2016-10-07T21:37:30Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["\"Normalization\" obtains a set of [probabilities](https://arbital.com/p/1rf) summing to 1, in [cases where they ought to sum to 1](https://arbital.com/p/1rd). We do this by dividing each pre-normalized number by the sum of all pre-normalized numbers.\n\nSuppose the [odds](https://arbital.com/p/1rb) of Alexander Hamilton winning an election are 3 : 2. 
We think the proportions are right (Alexander is 1.5 times as likely to win as not win) but we want *probabilities*. To say that Hamilton has probability 3 of winning the election would be very strange indeed. But if we divide each of the terms by the sum of all the terms, they'll end up summing to one: $3:2 \\cong \\frac{3}{3+2} : \\frac{2}{3+2} = 0.6 : 0.4.$ Thus, the probability that Hamilton wins is 60%."], "tags": [], "alias": "1rk"} {"id": "86b0bd2dfc9fd5ceaba2e08a1c0a36fe", "title": "Prior probability", "url": "https://arbital.com/p/prior_probability", "source": "arbital", "source_type": "text", "text": "\"Prior [probability](https://arbital.com/p/1rf)\", \"prior [odds](https://arbital.com/p/1rb)\", or just \"prior\" refers to a state of belief that obtained before seeing a piece of new evidence. Suppose there are two suspects in a murder, Colonel Mustard and Miss Scarlet. After determining that the victim was poisoned, you think Mustard and Scarlet are respectively 25% and 75% likely to have committed the murder. *Before* determining that the victim was poisoned, perhaps, you thought Mustard and Scarlet were equally likely to have committed the murder (50% and 50%). In this case, your \"prior probability\" of Miss Scarlet committing the murder was 50%, and your \"posterior probability\" after seeing the evidence was 75%.\n\nThe prior probability of a hypothesis $H$ is often being written with the unconditioned notation $\\mathbb P(H)$, while the posterior after seeing the evidence $e$ is often being denoted by the [conditional probability](https://arbital.com/p/1rj) $\\mathbb P(H\\mid e).$%%note: [E. T. Jaynes](http://bayes.wustl.edu/) was known to insist on using the explicit notation $\\mathbb P (H\\mid I_0)$ to denote the prior probability of $H$, with $I_0$ denoting the prior, and never trying to write any entirely unconditional probability $\\mathbb P(X)$. Since, said Jaynes, we always have *some* prior information.%% %%knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)): This however is a heuristic rather than a law, and might be false inside some complicated problems. 
If we've already seen $e_0$ and are now updating on $e_1$, then in this new problem the new prior will be $\\mathbb P(H\\mid e_0)$ and the new posterior will be $\\mathbb P(H\\mid e_1 \\wedge e_0).$ %%\n\nFor questions about how priors are \"ultimately\" determined, see [https://arbital.com/p/11w](https://arbital.com/p/11w).", "date_published": "2016-08-04T12:27:46Z", "authors": ["Alexei Andreev", "Cuyler Brehaut", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Needs summary"], "alias": "1rm"} {"id": "3d3687fdbcb03a6ca2e3b32e8f40022e", "title": "Interest in mathematical foundations in Bayesianism", "url": "https://arbital.com/p/bayes_want_foundations", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)", "date_published": "2016-02-24T15:28:04Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["\"Want\" this [requisite](https://arbital.com/p/1ln) if you prefer to see extra information about the mathematical foundations in [Bayesian reasoning](https://arbital.com/p/1r8)."], "tags": ["B-Class"], "alias": "1rn"} {"id": "c400e33ba232489f09747a108278fd7b", "title": "Posterior probability", "url": "https://arbital.com/p/posterior_probability", "source": "arbital", "source_type": "text", "text": "\"Posterior [probability](https://arbital.com/p/1rf)\" or \"posterior [odds](https://arbital.com/p/1rb)\" refers our state of belief *after* seeing a piece of new evidence and doing a [Bayesian update](https://arbital.com/p/1ly). Suppose there are two suspects in a murder, Colonel Mustard and Miss Scarlet. Before determining the victim's cause of death, perhaps you thought Mustard and Scarlet were equally likely to have committed the murder (50% and 50%). After determining that the victim was poisoned, you now think that Mustard and Scarlet are respectively 25% and 75% likely to have committed the murder. In this case, your \"[prior probability](https://arbital.com/p/1rm)\" of Miss Scarlet committing the murder was 50%, and your \"posterior probability\" *after* seeing the evidence was 75%. The posterior probability of a hypothesis $H$ after seeing the evidence $e$ is often denoted using the [conditional probability notation](https://arbital.com/p/1rj) $\\mathbb P(H\\mid e).$", "date_published": "2016-07-10T05:08:40Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Needs summary", "C-Class"], "alias": "1rp"} {"id": "2605889581ac196181cfd5dfd09bdf17", "title": "Relative likelihood", "url": "https://arbital.com/p/relative_likelihood", "source": "arbital", "source_type": "text", "text": "Relative likelihoods express how *relatively* more likely an observation is, comparing one hypothesis to another. For example, suppose we're investigating the murder of Mr. Boddy, and we find that he was killed by poison. The suspects are Miss Scarlett and Colonel Mustard. Now, suppose that the [probability](https://arbital.com/p/-1rf) that Miss Scarlett would use poison, if she _were_ the murderer, is 20%. And suppose that the probability that Colonel Mustard would use poison, if he were the murderer, is 10%. Then, Miss Scarlett is *twice as likely* to use poison as a murder weapon as Colonel Mustard. Thus, the \"Mr. 
Boddy was poisoned\" evidence supports the \"Scarlett\" hypothesis twice as much as the \"Mustard\" hypothesis, for relative likelihoods of $(2 : 1).$\n\nThese likelihoods are called \"relative\" because it wouldn't matter if the respective probabilities were 4% and 2%, or 40% and 20% — what matters is the _relative proportion_.\n\nRelative likelihoods may be given between many different hypotheses at once. Given the evidence $e_p$ = \"Mr. Boddy was poisoned\", it might be the case that Miss Scarlett, Colonel Mustard, and Mrs. White have the respective probabilities 20%, 10%, and 1% of using poison any time they commit a murder. In this case, we have three hypotheses — $H_S$ = \"Scarlett did it\", $H_M$ = \"Mustard did it\", and $H_W$ = \"White did it\". The relative likelihoods between them may be written $(20 : 10 : 1).$\n\nIn general, given a list of hypotheses $H_1, H_2, \\ldots, H_n,$ the relative likelihoods on the evidence $e$ can be written as a [scale-invariant list](https://arbital.com/p/) of the likelihoods $\\mathbb P(e \\mid H_i)$ for each $i$ from 1 to $n.$ In other words, the relative likelihoods are\n\n$$ \\alpha \\mathbb P(e \\mid H_1) : \\alpha \\mathbb P(e \\mid H_2) : \\ldots : \\alpha \\mathbb P(e \\mid H_n) $$\n\nwhere the choice of $\\alpha > 0$ does not change the value denoted by the list (i.e., the list is [scale-invariant](https://arbital.com/p/scale_invariant_list)). For example, the relative likelihood list $(20 : 10 : 1)$ above denotes the same thing as the relative likelihood list $(4 : 2 : 0.20)$ denotes the same thing as the relative likelihood list $(60 : 30 : 3).$ This is why we call them \"relative likelihoods\" — all that matters is the ratio between each term, not the absolute values.\n\nAny two terms in a list of relative likelihoods can be used to generate a [https://arbital.com/p/-56t](https://arbital.com/p/-56t) between two hypotheses. For example, above, the likelihood ratio $H_S$ to $H_M$ is 2/1, and the likelihood ratio of $H_S$ to $H_W$ is 20/1. This means that the evidence $e_p$ supports the \"Scarlett\" hypothesis 2x more than it supports the \"Mustard\" hypothesis, and 20x more than it supports the \"White\" hypothesis.\n\nRelative likelihoods summarize the [strength of the evidence](https://arbital.com/p/22x) represented by the observation that Mr. Boddy was poisoned — under [Bayes' rule](https://arbital.com/p/1lz), the evidence points to Miss Scarlett to the same degree whether the absolute probabilities are 20% vs. 10%, or 4% vs. 2%.\n\nBy Bayes' rule, the way to update your beliefs in the face of evidence is to take your [prior](https://arbital.com/p/1rm) [https://arbital.com/p/-1rb](https://arbital.com/p/-1rb) and simply multiply them by the corresponding relative likelihood list, to obtain your [posterior](https://arbital.com/p/1rp) odds. See also [https://arbital.com/p/1x5](https://arbital.com/p/1x5).", "date_published": "2016-08-04T12:00:09Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Al Prihodko", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Start"], "alias": "1rq"} {"id": "49ed47409c00611dfb6fc67137835794", "title": "Bayes' rule examples", "url": "https://arbital.com/p/bayes_rule_examples", "source": "arbital", "source_type": "text", "text": "This page and its tabs store exemplar problems for Bayes' rule. You can suggest additional example problems by leaving a comment on the appropriate tab.\n\nProblem types by tab:\n\n- [Introductory](https://arbital.com/p/22w). 
Meant as a small set of problems for people who haven't heard of Bayes' rule before, with examples that will be discussed in the introductory pages for Bayes' rule.\n- [Homework](https://arbital.com/p/). Problems for people who already know Bayes' rule and want to test their grasp of it in straightforward ways.\n- [Clever](https://arbital.com/p/). Problems for people who wish to be mathematically entertained, or that exhibit some surprising facet or gotcha of Bayes' rule.\n- [Realistic](https://arbital.com/p/1x4). Instances of Bayesian reasoning that have arisen in real life.", "date_published": "2016-07-10T19:53:43Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Start"], "alias": "1wt"} {"id": "6b8f407fd42e4e35215928d50d3d3bc6", "title": "Waterfall diagram", "url": "https://arbital.com/p/bayes_waterfall_diagram", "source": "arbital", "source_type": "text", "text": "Waterfall diagrams, like [frequency diagrams](https://arbital.com/p/560), provide a way of visualizing [Bayes' Rule](https://arbital.com/p/1lz). Recall that Bayes' rule (in the [odds form](https://arbital.com/p/1x5)) says that the posterior odds between any two hypotheses is equal to the prior odds times the relative likelihoods.\n\nFor example, in the [Diseasitis](https://arbital.com/p/22s) problem, a patient is 20% likely to be sick (and 80% likely to be healthy) a priori, and they take a test that is 3x more likely to come back positive if they are sick. The odds of a patient being sick _given_ a positive test are thus $(1 : 4) \\times (3 : 1) = (3 : 4).$\n\nUsing waterfall diagrams, we can visualize the prior odds as two separate streams of water at the top of a waterfall, and the relative likelihoods as the proportion of water from each stream that makes it to the shared pool at the bottom. The posterior odds can then be visualized as the proportion of water in the shared pool that came from each different prior stream.\n\n![Waterfall diagram](https://i.imgur.com/CXsoZhA.png?0)\n\nSee [https://arbital.com/p/1x1](https://arbital.com/p/1x1) for a walkthrough of the diagram.\n\nWaterfall diagrams make it clear that, when calculating the posterior odds, what matters is the _relative proportion_ between how much each gallon of red water vs each gallon of blue water makes it into the shared pool. If 45% of red water and 15% of blue water made it to the bottom, that would give the same _relative proportion_ of red and blue water in the shared pool at the bottom as 90% and 30%.\n\n![Same relative proportions](https://i.imgur.com/6FOndjc.png?0)\n\nThus, it is only the relative likelihoods (and not the absolute likelihoods) that matter when calculating posterior odds.\n\nSimilarly, changing the water flows at the top of the waterfall from (20 gallons/sec red water : 80 gallons/sec blue water) to (40 gallons/sec red water : 160 gallons/sec blue water) would double the total water at bottom, but not change the relative proportions of blue and red water. So only the *relative* prior odds matter to the *relative* posterior odds.", "date_published": "2016-09-29T17:12:43Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Nate Soares", "Eric Bruylant", "Salil Kalghatgi", "Eliezer Yudkowsky"], "summaries": ["Waterfall diagrams, like [frequency diagrams](https://arbital.com/p/560), provide a way of visualizing [Bayes' Rule](https://arbital.com/p/1lz). 
For example, if 20% of the patients in the screening patient are sick (red) and 80% are healthy (blue); and 90% of the sick patients get positive test results; and 30% of the healthy patients get positive test results, we could visualize the probability flows using the following diagram:\n\n![Waterfall diagram](https://i.imgur.com/CXsoZhA.png?0)"], "tags": ["B-Class"], "alias": "1wy"} {"id": "5dc9477d352bf181d42d119ca9067e1f", "title": "Waterfall diagrams and relative odds", "url": "https://arbital.com/p/bayes_waterfall_diseasitis", "source": "arbital", "source_type": "text", "text": "Imagine a waterfall with two streams of water at the top, a red stream and a blue stream. These streams separately approach the top of the waterfall, with some of the water from both streams being diverted along the way, and the remaining water falling into a shared pool below.\n\n![unlabeled waterfall](https://i.imgur.com/D8EhY65.png?0)\n\nSuppose that:\n\n- At the top of the waterfall, 20 gallons/second of red water are flowing down, and 80 gallons/second of blue water are coming down.\n- 90% of the red water makes it to the bottom.\n- 30% of the blue water makes it to the bottom.\n\nOf the purplish water that makes it to the bottom of the pool, how much was originally from the red stream and how much was originally from the blue stream?\n\n%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nThis is structurally identical to the [Diseasitis](https://arbital.com/p/22s) problem from [before](https://arbital.com/p/55z):\n\n- 20% of the patients in the screening population start out with Diseasitis.\n- Among patients with Diseasitis, 90% turn the tongue depressor black.\n- 30% of the patients without Diseasitis will also turn the tongue depressor black.\n%%\n\n%%!if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nThis is structurally similar to the following problem, such as medical students might encounter:\n\nYou are a nurse screening 100 patients for Diseasitis, using a tongue depressor which usually turns black for patients who have the sickness.\n\n- 20% of the patients in the screening population start out with Diseasitis.\n- Among patients with Diseasitis, 90% turn the tongue depressor black (true positives).\n- However, 30% of the patients without Diseasitis will also turn the tongue depressor black (false positives).\n\nWhat is the chance that a patient with a blackened tongue depressor has Diseasitis?\n%%\n\nThe 20% of sick patients are analogous to the 20 gallons/second of red water; the 80% of healthy patients are analogous to the 80 gallons/second of blue water:\n\n![top labeled waterfall](https://i.imgur.com/eQh2qUt.png?0)\n\nThe 90% of the sick patients turning the tongue depressor black is analogous to 90% of the red water making it to the bottom of the waterfall. 
30% of the healthy patients turning the tongue depressor black is analogous to 30% of the blue water making it to the bottom pool.\n\n![middle labeled waterfall](https://i.imgur.com/6GBBYO5.png?0)\n\nTherefore, the question \"what portion of water in the final pool came from the red stream?\" has the same answer as the question \"what portion of patients that turn the tongue depressor black are sick with Diseasitis?\"\n\n%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nNow for the faster way of answering that question.\n%%\n\nWe start with *4 times as much blue water as red water* at the top of the waterfall.\n\nThen each molecule of red water is 90% likely to make it to the shared pool, and each molecule of blue water is 30% likely to make it to the pool. (90% of red water and 30% of blue water make it to the bottom.) So each molecule of red water is *3 times as likely* (0.90 / 0.30 = 3) as a molecule of blue water to make it to the bottom.\n\nSo we multiply prior proportions of $1 : 4$ for red vs. blue by relative likelihoods of $3 : 1$ and end up with final proportions of $(1 \\cdot 3) : (4 \\cdot 1) = 3 : 4$, meaning that the bottom pool has 3 parts of red water to 4 parts of blue water.\n\n![labeled waterfall](https://i.imgur.com/QIrtuVU.png?0)\n\nTo convert these *relative* proportions into an *absolute* probability that a random water molecule at the bottom is red, we calculate 3 / (3 + 4) to see that 3/7ths (roughly 43%) of the water in the shared pool came from the red stream.\n\nThis proportion is the same as the 18 : 24 sick patients with positive results, versus healthy patients with positive test results, that we would get by [thinking about 100 patients](https://arbital.com/p/560).\n\nThat is, to solve the Diseasitis problem in your head, you could convert this word problem:\n\n> 20% of the patients in a screening population have Diseasitis. 90% of the patients with Diseasitis turn the tongue depressor black, and 30% of the patients without Diseasitis turn the tongue depressor black. Given that a patient turned their tongue depressor black, what is the probability that they have Diseasitis?\n\nInto this calculation:\n\n> Okay, so the initial odds are (20% : 80%) = (1 : 4), and the likelihoods are (90% : 30%) = (3 : 1). Multiplying those ratios gives final odds of (3 : 4), which converts to a probability of 3/7ths.\n\n(You might not be able to convert 3/7 to 43% in your head, but you might be able to eyeball that it was a chunk less than 50%.)\n\nYou can try doing a similar calculation for this problem:\n\n- 90% of widgets are good and 10% are bad.\n- 12% of bad widgets emit sparks.\n- Only 4% of good widgets emit sparks.\n\nWhat percentage of sparking widgets are bad? If you are sufficiently comfortable with the setup, try doing this problem entirely in your head.\n\n(You might try visualizing a waterfall with good and bad widgets at the top, and only sparking widgets making it to the bottom pool.)\n%todo: Have a picture of a waterfall here, with no numbers, but with the parts labeled, that can be expanded if the user wants to expand it.%\n\n%%hidden(Show answer):\n- There's (1 : 9) bad vs. good widgets.\n- Bad vs. 
good widgets have a (12 : 4) relative likelihood to spark.\n- This simplifies to (1 : 9) x (3 : 1) = (3 : 9) = (1 : 3), 1 bad sparking widget for every 3 good sparking widgets.\n- Which converts to a probability of 1/(1+3) = 1/4 = 25%; that is, 25% of sparking widgets are bad.\n\nSeeing sparks didn't make us \"believe the widget is bad\"; the probability only went to 25%, which is less than 50/50. But this doesn't mean we say, \"I still believe this widget is good!\" and toss out the evidence and ignore it. A bad widget is *relatively more likely* to emit sparks, and therefore seeing this evidence should cause us to think it *relatively more likely* that the widget is a bad one, even if the probability hasn't yet gone over 50%. We increase our probability from 10% to 25%.%%\n\n%%if-before([https://arbital.com/p/1x8](https://arbital.com/p/1x8)):\nWaterfalls are one way of visualizing the \"odds form\" of \"Bayes' rule\", which states that **the prior odds times the likelihood ratio equals the posterior odds.** In turn, this rule can be seen as formalizing the notion of \"the strength of evidence\" or \"how much a piece of evidence should make us update our beliefs\". We'll take a look at this more general form next.\n%%\n\n%%!if-before([https://arbital.com/p/1x8](https://arbital.com/p/1x8)):\nWaterfalls are one way of visualizing the [odds form](https://arbital.com/p/1x5) of [Bayes' rule](https://arbital.com/p/1lz), which states that **the prior odds times the likelihood ratio equals the posterior odds**.\n%%", "date_published": "2017-01-30T07:27:09Z", "authors": ["Robert Eidschun", "Eric Bruylant", "Alexei Andreev", "Malo Bourgon", "Nate Soares", "Khana Santamaria", "Adom Hartell", "Eliezer Yudkowsky"], "summaries": ["Waterfall diagrams, like [frequency diagrams](https://arbital.com/p/560), provide a way of visualizing [Bayes' Rule](https://arbital.com/p/1lz). For example, if 20% of the patients in the screening patient are sick (red) and 80% are healthy (blue); and 90% of the sick patients get positive test results; and 30% of the healthy patients get positive test results, we could visualize the probability flows using the following diagram:\n\n![Waterfall diagram](https://i.imgur.com/CXsoZhA.png?0)\n\nThis diagram helps show that only the *relative* ratios matter to the final answer. Twice as much water flowing from both streams at the top, or half as much water from each stream making it to the bottom, wouldn't change the relative proportions in the end result."], "tags": ["B-Class", "Proposed A-Class"], "alias": "1x1"} {"id": "7f373443d4ac85999ffd08eb47c81adc", "title": "Humans doing Bayes", "url": "https://arbital.com/p/bayes_for_humans", "source": "arbital", "source_type": "text", "text": "Tag for examples of human beings applying explicitly Bayesian reasoning in everyday life. Parent for rules, heuristics, and concepts that are specifically about the human, cognitive use of explicit Bayesian reasoning.", "date_published": "2016-02-07T23:30:59Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "1x2"} {"id": "a70ebfa8d922eb109e8f6a0a0ab1f61e", "title": "Explicit Bayes as a counter for 'worrying'", "url": "https://arbital.com/p/explicit_bayes_counters_worry", "source": "arbital", "source_type": "text", "text": "One of the possible [human uses of explicit Bayesian reasoning](https://arbital.com/p/1x2) is that making up explicit probabilities and doing the Bayesian calculation lets us summarize the relevant considerations in one place. 
This can counter a mental loop of 'worrying' by bouncing back and forth between focusing on individual considerations.\n\nOne example comes from a woman who was a test subject for an early version of the Bayes intro at the same time she was dating on OKCupid, and a 96% OKCupid match canceled their first date for coffee without explanation. After bouncing back and forth mentally between 'maybe there was a good reason he canceled' versus 'that doesn't seem like a good sign', she decided to try Bayes. She estimated that a man like this one had [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) of 2 : 5 for desirability vs. undesirability, based on his OKCupid profile and her past experience with 96% matches. She then estimated a 1 : 3 [likelihood ratio](https://arbital.com/p/1rq) for desirable vs. undesirable men flaking on the first date. This worked out to 2 : 15 [posterior](https://arbital.com/p/1rp) odds for the man being undesirable, which she decided was unfavorable enough to not pursue him further.\n\nThe point of this mental exercise wasn't that the numbers she made up were exact. Rather, by laying out all the key factors in one place, she stopped her mind from bouncing back and forth between %%knows-requisite([https://arbital.com/p/1rj](https://arbital.com/p/1rj)): visualizing $\\mathbb P(\\text{cancel}|\\text{desirable})$ and visualizing $\\mathbb P(\\text{cancel}|\\text{undesirable})$, which were pulling her back and forth as arguments pointing in different directions.%% %%!knows-requisite([https://arbital.com/p/1rj](https://arbital.com/p/1rj)): switching between imagining reasons why a good prospect might've canceled, versus imagining reasons why a bad prospect might've canceled.%% By combining both ideas into a likelihood ratio, and moving on to posterior odds, she summarized the considerations and was able to stop unproductively 'worrying'.\n\nWorrying isn't always unproductive. Paying more attention to an individual consideration like \"Maybe there were reasons a good dating prospect might've canceled anyway?\" might cause you to adjust your estimate of this consideration, thereby changing your posterior odds. But you can get the *illusion* of progress by switching your attention from one already-known consideration to another, since it feels like these considerations are pulling on your posterior intuition each time you focus on them. It feels like your beliefs are changing and cognitive work is being performed, but actually you're just going in circles. This is the 'unproductive worrying' process that you can interrupt by doing an explicitly Bayesian calculation that summarizes what you already know into a single answer.\n\n(She did send him a rejection notice spelling out her numerical reasoning; since, if he wrote back in a way indicating that he actually understood that, he might've been worth a second chance.)\n\n(He didn't.)", "date_published": "2016-02-08T00:13:04Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["By making up plausible Bayesian probabilities, we can summarize all of our knowledge about a problem in the same place at the same time, rather than 'worrying' by bouncing back and forth between different parts of it. To use one real-life example, if an OKCupid match cancels your first date without explanation, then you might be tempted to 'worry' by bouncing your focus back and forth between possible valid reasons to cancel the date, versus possible ways it was a bad sign. 
Alternatively, you could estimate [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) of 2 : 5 for desirability vs. undesirability of a 96% OKCupid match, a 1 : 3 [likelihood ratio](https://arbital.com/p/1rq) for desirable vs. undesirable men flaking on the first date, decide that the [posterior](https://arbital.com/p/1rp) odds of 2 : 15 don't warrant further pursuit, and then be done. The point isn't that the made-up numbers are exact, but that they summarize the relevant weights into a single place and calculation, terminating the 'worry' cycle of alternating attention on particular pieces, and letting us be done."], "tags": ["C-Class"], "alias": "1x3"} {"id": "a0dcddb33c673b6944b862ec81b0c5d9", "title": "Realistic (Math 1)", "url": "https://arbital.com/p/bayes_examples_realistic_math1", "source": "arbital", "source_type": "text", "text": "### What's the chance that a potentially good partner would flake on the first date?\n\nFrom a test subject for an early version of the Bayes intro:\n\nA 96% OKCupid match canceled their first date for coffee without providing an explanation.\n\nShe estimated that a man like this one had [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) of 2 : 5 for desirability vs. undesirability, based on his OKCupid profile and her past experience with 96% matches. She then estimated a 1 : 3 [likelihood ratio](https://arbital.com/p/1rq) for desirable vs. undesirable men flaking on the first date. This worked out to 2 : 15 [posterior](https://arbital.com/p/1rp) odds for the man being undesirable, which she decided was unfavorable enough to not pursue him further.\n\nShe [used this explicitly Bayesian calculation to interrupt a 'worrying' cycle](https://arbital.com/p/1x3) wherein she was focusing on one consideration, then a different consideration, arguing for pursuing further / not pursuing further. Making up numbers and doing the Bayesian calculation [terminated this cycle](https://arbital.com/p/1x3).", "date_published": "2016-02-27T21:45:47Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Humans doing Bayes", "Needs summary"], "alias": "1x4"} {"id": "6c2bdd426481be7315e4acee8d153e7f", "title": "Bayes' rule: Odds form", "url": "https://arbital.com/p/bayes_rule_odds", "source": "arbital", "source_type": "text", "text": "One of the more convenient forms of [Bayes' rule](https://arbital.com/p/1lz) uses [relative odds](https://arbital.com/p/1rb). Bayes' rule says that, when you observe a piece of evidence $e,$ your [posterior](https://arbital.com/p/1rp) odds $\\mathbb O(\\boldsymbol H \\mid e)$ for your hypothesis [https://arbital.com/p/-vector](https://arbital.com/p/-vector) $\\boldsymbol H$ given $e$ is just your [prior](https://arbital.com/p/1rm) odds $\\mathbb O(\\boldsymbol H)$ on $\\boldsymbol H$ times the [https://arbital.com/p/-56s](https://arbital.com/p/-56s) $\\mathcal L_e(\\boldsymbol H).$\n\nFor example, suppose we're trying to solve a mysterious murder, and we start out thinking the odds of Professor Plum vs. Miss Scarlet committing the murder are 1 : 2, that is, Scarlet is twice as likely as Plum to have committed the murder [a priori](https://arbital.com/p/1rm). We then observe that the victim was bludgeoned with a lead pipe. 
If we think that Plum, *if* he commits a murder, is around 60% likely to use a lead pipe, and that Scarlet, *if* she commits a murder, would be around 6% likely to use a lead pipe, this implies [relative likelihoods](https://arbital.com/p/1rq) of 10 : 1 for Plum vs. Scarlet using the pipe. The [posterior](https://arbital.com/p/1rp) odds for Plum vs. Scarlet, after observing the victim to have been murdered by a pipe, are $(1 : 2) \\times (10 : 1) = (10 : 2) = (5 : 1)$. We now think Plum is around five times as likely as Scarlet to have committed the murder.\n\n# Odds functions\n\nLet $\\boldsymbol H$ denote a [https://arbital.com/p/-vector](https://arbital.com/p/-vector) of hypotheses. An odds function $\\mathbb O$ is a function that maps $\\boldsymbol H$ to a set of [https://arbital.com/p/-1rb](https://arbital.com/p/-1rb). For example, if $\\boldsymbol H = (H_1, H_2, H_3),$ then $\\mathbb O(\\boldsymbol H)$ might be $(6 : 2 : 1),$ which says that $H_1$ is 3x as likely as $H_2$ and 6x as likely as $H_3.$ An odds function captures our *relative* probabilities between the hypotheses in $\\boldsymbol H;$ for example, (6 : 2 : 1) odds are the same as (18 : 6 : 3) odds. We don't need to know the absolute probabilities of the $H_i$ in order to know the relative odds. All we require is that the relative odds are proportional to the absolute probabilities:\n$$\\mathbb O(\\boldsymbol H) \\propto \\mathbb P(\\boldsymbol H).$$\n\nIn the example with the death of Mr. Boddy, suppose $H_1$ denotes the proposition \"Reverend Green murdered Mr. Boddy\", $H_2$ denotes \"Mrs. White did it\", and $H_3$ denotes \"Colonel Mustard did it\". Let $\\boldsymbol H$ be the vector $(H_1, H_2, H_3).$ If these propositions respectively have [prior](https://arbital.com/p/1rm) probabilities of 80%, 8%, and 4% (the remaining 8% being reserved for other hypotheses), then $\\mathbb O(\\boldsymbol H) = (80 : 8 : 4) = (20 : 2 : 1)$ represents our *relative* credences about the murder suspects — that Reverend Green is 10 times as likely to be the murderer as Mrs. White, who is twice as likely to be the murderer as Colonel Mustard.\n\n# Likelihood functions\n\nSuppose we discover that the victim was murdered by wrench. Suppose we think that Reverend Green, Mrs. White, and Colonel Mustard, *if* they murdered someone, would respectively be 60%, 90%, and 30% likely to use a wrench. Letting $e_w$ denote the observation \"The victim was murdered by wrench,\" we would have $\\mathbb P(e_w\\mid \\boldsymbol H) = (0.6, 0.9, 0.3).$ This gives us a [https://arbital.com/p/-56s](https://arbital.com/p/-56s) defined as $\\mathcal L_{e_w}(\\boldsymbol H) = P(e_w \\mid \\boldsymbol H).$\n\n# Bayes' rule, odds form\n\nLet $\\mathbb O(\\boldsymbol H\\mid e)$ denote the [posterior](https://arbital.com/p/1rp) odds of the hypotheses $\\boldsymbol H$ after observing evidence $e.$ [Bayes' rule](https://arbital.com/p/1xr) then states:\n\n$$\\mathbb O(\\boldsymbol H) \\times \\mathcal L_{e}(\\boldsymbol H) = \\mathbb O(\\boldsymbol H\\mid e)$$\n\nThis says that we can multiply the relative prior credence $\\mathbb O(\\boldsymbol H)$ by the likelihood $\\mathcal L_{e}(\\boldsymbol H)$ to arrive at the relative posterior credence $\\mathbb O(\\boldsymbol H\\mid e).$ Because odds are invariant under multiplication by a positive constant, it wouldn't make any difference if the _likelihood_ function was scaled up or down by a constant, because that would only have the effect of multiplying the final odds by a constant, which does not affect them. 
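To make the arithmetic concrete, here is a minimal Python sketch of the odds-form update for the Mr. Boddy example (the helper functions are illustrative, not from any particular library). It also demonstrates the point just made: scaling the likelihood function by a constant leaves the normalized answer unchanged.

```python
def posterior_odds(prior_odds, likelihoods):
    """Odds form of Bayes' rule: multiply prior odds by likelihoods, elementwise."""
    return [p * l for p, l in zip(prior_odds, likelihoods)]

def normalize(odds):
    """Rescale relative odds so they sum to 1."""
    total = sum(odds)
    return [x / total for x in odds]

# Prior odds (20 : 2 : 1) for Green, White, Mustard, and likelihoods
# (0.6, 0.9, 0.3) of a murder-by-wrench under each hypothesis.
prior = [20, 2, 1]
likelihood = [0.6, 0.9, 0.3]

posterior = posterior_odds(prior, likelihood)   # [12.0, 1.8, 0.3], i.e. (40 : 6 : 1)
scaled = posterior_odds(prior, [6, 9, 3])       # same likelihoods, multiplied by 10

# Probabilities *relative to these three suspects only*:
print(normalize(posterior))   # ≈ [0.851, 0.128, 0.021]
print(normalize(scaled))      # identical -- the constant factor washes out
```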
Thus, only the [relative likelihoods](https://arbital.com/p/-1rq) are necessary to perform the calculation; the absolute likelihoods are unnecessary. Therefore, when performing the calculation, we can simplify $\\mathcal L_e(\\boldsymbol H) = (0.6, 0.9, 0.3)$ to the relative likelihoods $(2 : 3 : 1).$\n\nIn our example, this makes the calculation quite easy. The prior odds for Green vs White vs Mustard were $(20 : 2 : 1).$ The relative likelihoods were $(0.6 : 0.9 : 0.3)$ = $(2 : 3 : 1).$ Thus, the relative posterior odds after observing $e_w$ = Mr. Boddy was killed by wrench are $(20 : 2 : 1) \\times (2 : 3 : 1) = (40 : 6 : 1).$ Given the evidence, Reverend Green is 40 times as likely as Colonel Mustard to be the killer, and 20/3 times as likely as Mrs. White.\n\nBayes' rule states that this *relative* proportioning of odds among these three suspects will be correct, regardless of how our remaining 8% probability mass is assigned to all other suspects and possibilities, or indeed, how much probability mass we assigned to other suspects to begin with. For a proof, see [https://arbital.com/p/1xr](https://arbital.com/p/1xr).\n\n# Visualization\n\n[Frequency diagrams](https://arbital.com/p/560), [waterfall diagrams](https://arbital.com/p/1wy), and [spotlight diagrams](https://arbital.com/p/1zm) may be helpful for explaining or visualizing the odds form of Bayes' rule.", "date_published": "2016-10-12T22:56:37Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A form of [Bayes' rule](https://arbital.com/p/1lz) that uses relative [odds](https://arbital.com/p/1rb).\n\nSuppose we're trying to solve a mysterious murder, and we [start out](https://arbital.com/p/1rm) thinking the odds of Professor Plum vs. Miss Scarlet committing the murder are 1 : 2, that is, Scarlet is twice as likely as Plum to have committed the murder. We then observe that the victim was bludgeoned with a lead pipe. If we think that Plum, *if* he commits a murder, is around 60% likely to use a lead pipe, and that Scarlet, *if* she commits a murder, would be around 6% likely to us a lead pipe, this implies [relative likelihoods](https://arbital.com/p/1rq) of 10 : 1 for Plum vs. Scarlet using the pipe.\n\nThe [posterior](https://arbital.com/p/1rp) odds for Plum vs. Scarlet, after observing the victim to have been murdered by a pipe, are $(1 : 2) \\times (10 : 1) = (10 : 2) = (5 : 1)$. We now think Plum is around five times as likely as Scarlet to have committed the murder."], "tags": ["B-Class"], "alias": "1x5"} {"id": "c68d9f7eb7553d32b79647e71e057268", "title": "Introduction to Bayes' rule: Odds form", "url": "https://arbital.com/p/bayes_rule_odds_intro", "source": "arbital", "source_type": "text", "text": "%%!if-after([https://arbital.com/p/1x1](https://arbital.com/p/1x1)):\n**This introduction is meant to be read after the introductions to [frequency visualizations](https://arbital.com/p/55z) and [waterfall visualizations](https://arbital.com/p/1x1).**\n%%\n\nIn general, Bayes' rule states:\n\n$$ \\textbf{Prior odds} \\times \\textbf{Relative likelihoods} = \\textbf{Posterior odds}$$\n\nIf we consider the [waterfall visualization](https://arbital.com/p/1wy) of [the Diseasitis example](https://arbital.com/p/22s), \nthen we can visualize how *relative odds* are appropriate for thinking about the two rivers at the top of the waterfall. \n\n![Waterfall visualization](https://i.imgur.com/eQh2qUt.png?0)\n\nThe *proportion* of red vs. 
blue water at the bottom will be the same whether there's 200 vs. 800 gallons per second of red vs. blue water at the top of the waterfall, or 20,000 vs. 80,000 gallons/sec, or 1 vs. 4 gallons/second. So long as the rest of the waterfall behaves in a proportional way, we'll get the same proportion of red vs blue at the bottom. Thus, we're justified in ignoring the _amount_ of water and considering only the relative proportion between amounts.\n\nSimilarly, what matters is the *relative* proportion between how much of each gallon of red water makes it into the shared pool, and how much of each gallon of blue water, makes it. 45% and 15% of the red and blue water making it to the bottom would give the same *relative proportion* of red and blue water in the bottom pool as 90% and 30%.\n\n![Changing the proportion makes no difference](https://i.imgur.com/6FOndjc.png?0)\n\nThis justifies throwing away the specific data that 90% of the red stream and 30% of the blue stream make it down, and summarizing this into *relative likelihoods* of (3 : 1).\n\nMore generally, suppose we have a medical test that detects a sickness with a 90% true positive rate (10% false negatives) and a 30% false positive rate (70% true negatives). A positive result on this test represents the same *strength of evidence* as a test with 60% true positives and 20% false positives. A negative result on this test represents the same *strength of evidence* as a test with 9% false negatives and 63% true negatives.\n\nIn general, the strength of evidence is summarized by how *relatively* likely different states of the world make our observations. %%!if-before([https://arbital.com/p/1zh](https://arbital.com/p/1zh)): For more on this idea, see [https://arbital.com/p/22x](https://arbital.com/p/22x). %% %%if-before([https://arbital.com/p/1zh](https://arbital.com/p/1zh)): More on this later. %%\n\n# The equation\n\nTo state Bayes' rule in full generality, and prove it as a theorem, we'll need to introduce some new notation.\n\n## Conditional probability\n\nFirst, when $X$ is a proposition, $\\mathbb P(X)$ will stand for the [probability](https://arbital.com/p/1rf) of $X.$\n\nIn other words, $X$ is something that's either true or false in reality, but we're uncertain about it, and $\\mathbb P(X)$ is a way of expressing our [degree of belief](https://arbital.com/p/4y9) that $X$ is true. A patient is, in fact, either sick or healthy; but if you don't know which of these is the case, the evidence might lead you to assign a 43% subjective probability that the patient is sick.\n\n$\\mathbb \\neg X$ will mean \"$X$ is false\", so $\\mathbb P(\\neg X)$ is the \"the probability $X$ is false\".\n\nThe Diseasitis involved some more complicated statements than this, though; in particular it involved:\n\n- The 90% chance that a patient blackens the tongue depressor, *given* that they have Diseasitis.\n- The 30% chance that a patient blackens the tongue depressor, *given* that they're healthy.\n- The 3/7 chance that a patient has Diseasitis, *given* that they blackened the tongue depressor.\n\nIn these cases we want to go from some fact that is *assumed* or *known* to be true (on the right), to some other proposition (on the left) whose new probability we want to ask about, taking into account that assumption.\n\nProbability statements like those are known as \"conditional probabilities\". 
The standard notation for conditional probability expresses the above quantities as:\n\n- $\\mathbb P(blackened \\mid sick) = 0.9$\n- $\\mathbb P(blackened \\mid \\neg sick) = 0.3$\n- $\\mathbb P(sick \\mid blackened) = 3/7$\n\nThis standard notation for $\\mathbb P(X \\mid Y)$ meaning \"the probability of $X$, assuming $Y$ to be true\" is a helpfully symmetrical vertical line, to avoid giving you any visual clue to remember that the assumption is on the right and the inferred proposition is on the left. \n\nConditional probability is *defined* as follows. Using the notation $X \\wedge Y$ to denote \"X and Y\" or \"both $X$ and $Y$ are true\":\n\n$$\\mathbb P(X \\mid Y) := \\frac{\\mathbb P(X \\wedge Y)}{\\mathbb P(Y)}$$\n\nE.g. in the Diseasitis example, $\\mathbb P(sick \\mid blackened)$ is calculated by dividing the 18% students who are sick *and* have blackened tongue depressors ($\\mathbb P(sick \\wedge blackened)$), by the total 42% students who have blackened tongue depressors ($\\mathbb P(blackened)$).\n\nOr $\\mathbb P(blackened \\mid \\neg sick),$ the probability of blackening the tongue depressor *given* that you're healthy, is equivalent to the 24 students who are healthy *and* have blackened tongue depressors, divided by the 80 students who are healthy. 24 / 80 = 3/10, so this corresponds to the 30% false positives we were told about at the start.\n\nWe can see the law of conditional probability as saying, \"Let us restrict our attention to worlds where $Y$ is the case, or thingies of which $Y$ is true. Looking only at cases where $Y$ is true, how many cases are there *inside* that restriction where $X$ is *also* true - cases with $X$ *and* $Y$?\"\n\nFor more on this, see [https://arbital.com/p/1rj](https://arbital.com/p/1rj).\n\n## Bayes' rule\n\nBayes' rule says:\n\n$$\\textbf{Prior odds} \\times \\textbf{Relative likelihoods} = \\textbf{Posterior odds}$$\n\nIn the Diseasitis example, this would state:\n\n$$\\dfrac{\\mathbb P({sick})}{\\mathbb P(healthy)} \\times \\dfrac{\\mathbb P({blackened}\\mid {sick})}{\\mathbb P({blackened}\\mid healthy)} = \\dfrac{\\mathbb P({sick}\\mid {blackened})}{\\mathbb P(healthy\\mid {blackened})}.$$\n\n%todo: apparently the parallel is not super obvious, and maybe we can use slightly different colors in the text to make it clearer that e.g. Prior odds -> sick/healthy%\n\nThe [prior](https://arbital.com/p/1rm) odds refer to the relative proportion of sick vs healthy patients, which is $1 : 4$. Converting these odds into probabilities gives us $\\mathbb P(sick)=\\frac{1}{4+1}=\\frac{1}{5}=20\\%$.\n\nThe [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq) refers to how much more likely each sick patient is to get a positive test result than each healthy patient, which (using [conditional probability notation](https://arbital.com/p/1rj)) is $\\frac{\\mathbb P(positive \\mid sick)}{\\mathbb P(positive \\mid healthy)}=\\frac{0.90}{0.30},$ aka relative likelihoods of $3 : 1.$\n\nThe [posterior](https://arbital.com/p/1rp) odds are the relative proportions of sick vs healthy patients among those with positive test results, or $\\frac{\\mathbb P(sick \\mid positive)}{\\mathbb P(healthy \\mid positive)} = \\frac{3}{4}$, aka $3 : 4$ odds.\n\nTo extract the *probability* from the relative odds, we keep in mind that probabilities of [mutually exclusive and exhaustive propositions](https://arbital.com/p/1rd) need to sum to $1,$ that is, there is a 100% probability of *something* happening. 
Since everyone is either sick or not sick, we can [normalize](https://arbital.com/p/1rk) the odd ratio $3 : 4$ by dividing through by the sum of terms:\n\n$$(\\frac{3}{3+4} : \\frac{4}{3+4}) = (\\frac{3}{7} : \\frac{4}{7}) \\approx (0.43 : 0.57)$$\n\n...ending up with the probabilities (0.43 : 0.57), proportional to the original ratio of (3 : 4), but summing to 1. It would be very odd if something had probability $3$ (300% probability) of happening.\n\nUsing the waterfall visualization:\n\n![labeled waterfall](https://i.imgur.com/CXsoZhA.png?0)\n\nWe can generalize this to _any_ two **hypotheses** $H_j$ and $H_k$ with **evidence** $e$, in which case Bayes' rule can be written as:\n\n$$\\dfrac{\\mathbb P(H_j)}{\\mathbb P(H_k)} \\times \\dfrac{\\mathbb P(e \\mid H_j)}{\\mathbb P(e \\mid H_k)} = \\dfrac{\\mathbb P(H_j \\mid e)}{\\mathbb P(H_k \\mid e)}$$\n\nwhich says \"the posterior odds ratio for hypotheses $H_j$ vs $H_k$ (after seeing the evidence $e$) are equal to the prior odds ratio times the ratio of how well $H_j$ predicted the evidence compared to $H_k.$\"\n\nIf $H_j$ and $H_k$ are [https://arbital.com/p/-1rd](https://arbital.com/p/-1rd), we can convert the posterior odds into a posterior probability for $H_j$ by [normalizing](https://arbital.com/p/1rk) the odds - dividing through the odds ratio by the sum of its terms, so that the elements of the new ratio sum to $1.$\n\n## Proof of Bayes' rule\n\nRearranging [the definition of conditional probability](https://arbital.com/p/1rj), $\\mathbb P(X \\wedge Y) = \\mathbb P(Y) \\cdot \\mathbb P(X|Y).$ E.g. to find \"the fraction of all patients that are sick *and* get a positive result\", we multiply \"the fraction of patients that are sick\" times \"the probability that a sick patient blackens the tongue depressor\".\n\nThen this is a proof of Bayes' rule:\n\n$$\n\\frac{\\mathbb P(H_j)}{\\mathbb P(H_k)}\n\\cdot\n\\frac{\\mathbb P(e_0 | H_j)}{\\mathbb P(e_0 | H_k)}\n=\n\\frac{\\mathbb P(e_0 \\wedge H_j)}{\\mathbb P(e_0 \\wedge H_k)}\n= \n\\frac{\\mathbb P(H_j \\wedge e_0)/\\mathbb P(e_0)}{\\mathbb P(H_k \\wedge e_0)/\\mathbb P(e_0)}\n= \n\\frac{\\mathbb P(H_j | e_0)}{\\mathbb P(H_k | e_0)}\n$$\n\nQED.\n\nIn the Diseasitis example, these proof steps correspond to the operations:\n\n$$\n\\frac{0.20}{0.80}\n\\cdot\n\\frac{0.90}{0.30}\n=\n\\frac{0.18}{0.24}\n= \n\\frac{0.18/0.42}{0.24/0.42}\n= \n\\frac{0.43}{0.57}\n$$\n\nUsing red for sick, blue for healthy, grey for a mix of sick and healthy patients, and + signs for positive test results, the calculation steps can be visualized as follows:\n\n![bayes venn](https://i.imgur.com/YBc2nYo.png?0)\n\n%todo: maybe replace this diagram with pie-chart circles in exactly right proportions (but still with the correct populations of + signs)%\n\nThis process of observing evidence and using its likelihood ratio to transform a prior belief into a posterior belief is called a \"[Bayesian update](https://arbital.com/p/1ly)\" or \"belief revision.\"\n\n%%if-before([https://arbital.com/p/21v](https://arbital.com/p/21v)):\nCongratulations! You now know (we hope) what Bayes' rule is, and how to apply it to simple setups. After this, the path continues with further implications %if-before([https://arbital.com/p/1zg](https://arbital.com/p/1zg)): and additional forms% of Bayes' rule. 
This might be a good time to take a break, if you want one--but we hope you continue on this Arbital path after that!\n%%\n\n%%!if-before([https://arbital.com/p/1zg](https://arbital.com/p/1zg)):\n- For the generalization of the odds form of Bayes' rule to multiple hypotheses and multiple items of evidence, see [https://arbital.com/p/1zg](https://arbital.com/p/1zg).\n- For a transformation of the odds form that makes the strength of evidence even more directly visible, see [https://arbital.com/p/1zh](https://arbital.com/p/1zh).\n%%", "date_published": "2016-10-25T22:15:44Z", "authors": ["Eric Bruylant", "Conor Duggan", "M J", "Simon Grimm", "Tom Voltz", "Miguel Lima Medín", "Alexei Andreev", "Mott Smith", "Nate Soares", "John Woodgate", "yassine chaouche", "Killian McGuinness", "Elias Mannherz", "Ryan Bush", "Eliezer Yudkowsky"], "summaries": ["The odds form of Bayes' rule is a form of [Bayes' rule](https://arbital.com/p/1lz) that uses [relative likelihoods](https://arbital.com/p/1rq) and [https://arbital.com/p/-1rb](https://arbital.com/p/-1rb). It says that in general, the [prior](https://arbital.com/p/1rm) odds times the [likelihood ratio](https://arbital.com/p/56t) equals the [posterior](https://arbital.com/p/1rp) odds.\n\n[Suppose](https://arbital.com/p/22s) that the odds of a particular patient being sick are about $(1 : 4),$ i.e., they may be drawn from a population where there is 1 sick patient for every 4 healthy ones. Suppose we administer a test which comes back positive 90% of the time if the patient is _actually_ sick, but also comes back positive 30% of the time even when the patient is healthy; this means relative likelihoods of $(90 : 30) = (3 : 1)$: the test is 3x as likely to come back positive if the patient is sick. Bayes' rule then says that the odds of the patient being sick _after_ observing a positive test are $(1 : 4) \\times (3 : 1) = (3 : 4),$ for a final probability of about $3/(3+4) \\approx 43\\%.$"], "tags": ["B-Class"], "alias": "1x8"} {"id": "842ae487515aa1656471a047ee5c3b49", "title": "Proof of Bayes' rule", "url": "https://arbital.com/p/bayes_rule_proof", "source": "arbital", "source_type": "text", "text": "Bayes' rule (in the [odds form](https://arbital.com/p/1x5)) says that, for every pair of hypotheses $H_i$ and $H_j$ and piece of evidence $e,$\n\n$$\\dfrac{\\mathbb P(H_i)}{\\mathbb P(H_j)} \\times \\dfrac{\\mathbb P(e \\mid H_i)}{\\mathbb P(e \\mid H_j)} = \\dfrac{\\mathbb P(H_i \\mid e)}{\\mathbb P(H_j \\mid e)}.$$\n\nBy the definition of [conditional probability](https://arbital.com/p/1rj), $\\mathbb P(e \\land H)$ $=$ $\\mathbb P(H) \\cdot \\mathbb P(e \\mid H),$ so\n\n$$ \\dfrac{\\mathbb P(H_i)}{\\mathbb P(H_j)} \\times \\dfrac{\\mathbb P(e\\mid H_i)}{\\mathbb P(e\\mid H_j)} = \\dfrac{\\mathbb P(e \\wedge H_i)}{\\mathbb P(e \\wedge H_j)} $$\n\nDividing both the numerator and the denominator by $\\mathbb P(e),$ we have\n\n$$ \\dfrac{\\mathbb P(e \\wedge H_i)}{\\mathbb P(e \\wedge H_j)} = \\dfrac{\\mathbb P(e \\wedge H_i) / \\mathbb P(e)}{\\mathbb P(e \\wedge H_j) / \\mathbb P(e)} $$\n\nInvoking the definition of conditional probability again,\n\n$$ \\dfrac{\\mathbb P(e \\wedge H_i) / \\mathbb P(e)}{\\mathbb P(e \\wedge H_j) / \\mathbb P(e)} = \\dfrac{\\mathbb P(H_i\\mid e)}{\\mathbb P(H_j\\mid e)}.$$\n\nDone.\n\n---\n\nOf note is the equality\n\n$$\\frac{\\mathbb P(H_i\\mid e)}{\\mathbb P(H_j\\mid e)} = \\frac{\\mathbb P(H_i \\land e)}{\\mathbb P(H_j \\land e)},$$\n\nwhich says that the posterior odds (on the left) for $H_i$ (vs $H_j$) given evidence 
$e$ is exactly equal to the prior odds of $H_i$ (vs $H_j$) in the parts of $\\mathbb P$ where $e$ was already true. $\\mathbb P(x \\land e)$ is the amount of probability mass that $\\mathbb P$ allocated to worlds where both $x$ and $e$ are true, and the above equation says that after observing $e,$ your belief in $H_i$ relative to $H_j$ should be equal to $H_i$'s odds relative to $H_j$ _in those worlds._ In other words, Bayes' rule can be interpreted as saying: \"Once you've seen $e$, simply throw away all probability mass except the mass on worlds where $e$ was true, and then continue reasoning according to the remaining probability mass.\" See also [https://arbital.com/p/1y6](https://arbital.com/p/1y6).\n\n## Illustration (using the Diseasitis example)\n\nSpecializing to the [Diseasitis](https://arbital.com/p/22s) problem, using red for sick, blue for healthy, and + signs for positive test results, the proof above can be visually depicted as follows:\n\n![bayes venn](https://i.imgur.com/YBc2nYo.png?0)\n\nThis visualization can be read as saying: The ratio of the initial sick population (red) to the initial healthy population (blue), times the ratio of positive results (+) in the sick population to positive results in the blue population, equals the ratio of the positive-and-red population to positive-and-blue population. Thus we can divide both into the proportion of the whole population which got positive results (grey and +), yielding the posterior odds of sick (red) vs healthy (blue) among only those with positive results.\n\n\nThe corresponding numbers are:\n\n$$\\dfrac{20\\%}{80\\%} \\times \\dfrac{90\\%}{30\\%} = \\dfrac{18\\%}{24\\%} = \\dfrac{0.18 / 0.42}{0.24 / 0.42} = \\dfrac{3}{4}$$\n\nfor a final probability $\\mathbb P(sick)$ of $\\frac{3}{7} \\approx 43\\%.$\n\n## Generality\n\nThe [odds](https://arbital.com/p/1x5) and [proportional](https://arbital.com/p/1zm) forms of Bayes' rule talk about the *relative* probability of two hypotheses $H_i$ and $H_j.$ In the particular example of Diseasitis it happens that [every patient is either sick or not-sick](https://arbital.com/p/1rd), so that we can [normalize](https://arbital.com/p/1rk) the final odds 3 : 4 to probabilities of $\\frac{3}{7} : \\frac{4}{7}.$ However, the proof above shows that even if we were talking about two different possible diseases and their total prevalances did not sum to 1, the equation above would still hold between the *relative* prior odds for $\\frac{\\mathbb P(H_i)}{\\mathbb P(H_j)}$ and the *relative* posterior odds for $\\frac{\\mathbb P(H_i\\mid e)}{\\mathbb P(H_j\\mid e)}.$\n\nThe above proof can be specialized to the probabilistic case; see [https://arbital.com/p/56j](https://arbital.com/p/56j).", "date_published": "2016-07-10T19:24:01Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "1xr"} {"id": "19312e354d04a74942121c05229eeeb4", "title": "Belief revision as probability elimination", "url": "https://arbital.com/p/bayes_rule_elimination", "source": "arbital", "source_type": "text", "text": "One way of understanding the reasoning behind [Bayes' rule](https://arbital.com/p/1lz) is that the process of updating $\\mathbb P$ in the face of new evidence can be interpreted as the elimination of probability mass from $\\mathbb P$ (namely, all the probability mass inconsistent with the evidence).\n\n%todo: we have a request to use a not-Diseasitis problem here, because it was getting 
repetitive.%\n\n%%!if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nWe'll use as a first example the [Diseasitis](https://arbital.com/p/22s) problem:\n\n> You are screening a set of patients for a disease, which we'll call Diseasitis. You expect that around 20% of the patients in the screening population start out with Diseasitis. You are testing for the presence of the disease using a tongue depressor with a sensitive chemical strip. Among patients with Diseasitis, 90% turn the tongue depressor black. However, 30% of the patients without Diseasitis will also turn the tongue depressor black. One of your patients comes into the office, takes your test, and turns the tongue depressor black. Given only that information, what is the probability that they have Diseasitis?\n%%\n\n%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nWe'll again start with the [https://arbital.com/p/22s](https://arbital.com/p/22s) example: a population has a 20% prior prevalence of Diseasitis, and we use a test with a 90% true-positive rate and a 30% false-positive rate.\n%%\n\nIn the situation with a single, individual patient, before observing any evidence, there are four possible worlds we could be in:\n\n%todo: change LaTeX to show Sick in red, Healthy in blue%\n\n$$\\begin{array}{l|r|r}\n& Sick & Healthy \\\\\n\\hline\nTest + & 18\\% & 24\\% \\\\\n\\hline\nTest - & 2\\% & 56\\%\n\\end{array}$$\n\nTo *observe* that the patient gets a positive result, is to eliminate from further consideration the possible worlds where the patient gets a negative result, and vice versa:\n\n![bayes elimination](https://i.imgur.com/LGeGIzW.png?0)\n\nSo Bayes' rule says: to update your beliefs in the face of evidence, simply throw away the probability mass that was inconsistent with the evidence.\n\n# Example: Socks-dresser problem\n\nRealizing that *observing evidence* corresponds to *eliminating probability mass* and concerning ourselves only with the probability mass that remains, is the key to solving the [sock-dresser search](https://arbital.com/p/55b) problem:\n\n> You left your socks somewhere in your room. You think there's a 4/5 chance that they've been tossed into some random drawer of your dresser, so you start looking through your dresser's 8 drawers. After checking 6 drawers at random, you haven't found your socks yet. What is the probability you will find your socks in the next drawer you check?\n\n%todo: request for a picture here%\n\n(You can optionally try to solve this problem yourself before continuing.)\n\n%%hidden(Answer):\n\nWe initially have 20% of the probability mass in \"Socks outside the dresser\", and 80% of the probability mass for \"Socks inside the dresser\". This corresponds to 10% probability mass for each of the 8 drawers (because each of the 8 drawers is equally likely to contain the socks).\n\nAfter eliminating the probability mass in 6 of the drawers, we have 40% of the original mass remaining, 20% for \"Socks outside the dresser\" and 10% each for the remaining 2 drawers.\n\nSince this remaining 40% probability mass is now our whole world, the effect on our probability distribution is like amplifying the 40% until it expands back up to 100%, aka [renormalizing the probability distribution](https://arbital.com/p/1rk). 
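As a minimal sketch (Python; purely illustrative), the same eliminate-and-renormalize step looks like this:

```python
# Prior: 20% outside the dresser, 10% for each of the 8 drawers.
prior = {"outside": 0.2, **{f"drawer_{i}": 0.1 for i in range(1, 9)}}

# Observing "not in drawers 1 through 6" throws away that probability mass...
checked = {f"drawer_{i}" for i in range(1, 7)}
remaining = {h: p for h, p in prior.items() if h not in checked}

# ...and we renormalize what's left so it sums to 1 again.
total = sum(remaining.values())                      # 0.4
posterior = {h: p / total for h, p in remaining.items()}

print(round(posterior["drawer_7"], 2))   # 0.25 -- the next drawer we check
print(round(posterior["outside"], 2))    # 0.5
```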
Within the remaining prior probability mass of 40%, the \"outside the dresser\" hypothesis has half of it (prior 20%), and the two drawers have a quarter each (prior 10% each).\n\nSo the probability of finding our socks in the next drawer is 25%.\n\nFor some more flavorful examples of this method of using Bayes' rule, see [The ups and downs of the hope function in a fruitless search](https://arbital.com/p/https://www.gwern.net/docs/statistics/1994-falk).\n\n%%\n\n# Extension to subjective probability\n\nOn the Bayesian paradigm, this idiom of *belief revision as conditioning a [probability distribution](https://arbital.com/p/probability_distribution) on evidence* works both in cases where there are statistical populations with objective frequencies corresponding to the probabilities, _and_ in cases where our uncertainty is _subjective._\n\nFor example, imagine being a king thinking about a uniquely weird person who seems around 20% likely to be an assassin. This doesn't mean that there's a population of similar people of whom 20% are assassins; it means that you weighed up your uncertainty and guesses and decided that you would bet at odds of 1 : 4 that they're an assassin.\n\nYou then estimate that, if this person is an assassin, they're 90% likely to own a dagger — so far as your subjective uncertainty goes; if you imagine them being an assassin, you think that 9 : 1 would be good betting odds for that. If this particular person is not an assassin, you feel like the probability that she ought to own a dagger is around 30%.\n\nWhen you have your guards search her, and they find a dagger, then (according to students of Bayes' rule) you should update your beliefs in the same way you update your belief in the Diseasitis setting — where there _is_ a large population with an objective frequency of sickness — despite the fact that this maybe-assassin is a unique case. 
According to a Bayesian, your brain can track the probabilities of different possibilities regardless, even when there are no large populations and objective frequencies anywhere to be found, and when you update your beliefs using evidence, you're not \"eliminating people from consideration,\" you're eliminating _probability mass_ from certain _possible worlds_ represented in your own subjective belief state.\n\n%todo: add answer check for assassin%", "date_published": "2016-10-08T16:59:55Z", "authors": ["Alexei Andreev", "Logan L", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "1y6"} {"id": "3b1e95291a381a3b1c162e7abd926740", "title": "Probability notation for Bayes' rule", "url": "https://arbital.com/p/bayes_probability_notation", "source": "arbital", "source_type": "text", "text": "[Bayes' rule](https://arbital.com/p/1lz) relates [prior](https://arbital.com/p/1rm) belief and the [likelihood](https://arbital.com/p/1rq) of evidence to [posterior](https://arbital.com/p/1rp) belief.\n\nThese quantities are often written using [conditional probabilities](https://arbital.com/p/1rj):\n\n- [Prior](https://arbital.com/p/1rm) belief in the hypothesis: $\\mathbb P(H).$\n- [Likelihood](https://arbital.com/p/1rq) of evidence, conditional on the hypothesis: $\\mathbb P(e \\mid H).$\n- [Posterior](https://arbital.com/p/1rp) belief in hypothesis, after seeing evidence: $\\mathbb P(H \\mid e).$\n\nFor example, Bayes' rule in the [odds form](https://arbital.com/p/1x5) describes the relative belief in a hypothesis $H_1$ vs an alternative $H_2,$ given a piece of evidence $e,$ as follows:\n\n$$\\dfrac{\\mathbb P(H_1)}{\\mathbb P(H_2)} \\times \\dfrac{\\mathbb P(e \\mid H_1)}{\\mathbb P(e \\mid H_2)} = \\dfrac{\\mathbb P(H_1\\mid e)}{\\mathbb P(H_2\\mid e)}.$$", "date_published": "2016-07-10T19:10:01Z", "authors": ["Alonzo Church", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["[Bayes' rule](https://arbital.com/p/1lz) relates [prior](https://arbital.com/p/1rm) belief and the [likelihood](https://arbital.com/p/1rq) of evidence to [posterior](https://arbital.com/p/1rp) belief.\n\nThese quantities are often denoted using [conditional probabilities](https://arbital.com/p/1rj):\n\n- [Prior](https://arbital.com/p/1rm) belief in hypothesis: $\\mathbb P(H).$\n- [Likelihood](https://arbital.com/p/1rq) of evidence, conditional on hypothesis: $\\mathbb P(e \\mid H).$\n- [Posterior](https://arbital.com/p/1rp) belief: $\\mathbb P(H \\mid e).$"], "tags": ["Stub"], "alias": "1y9"} {"id": "77f149dbc359711f0521e7cf4b501edd", "title": "Probability notation for Bayes' rule: Intro (Math 1)", "url": "https://arbital.com/p/bayes_probability_notation_math1", "source": "arbital", "source_type": "text", "text": "To denote some of the quantities used in Bayes' rule, we'll need [conditional probabilities](https://arbital.com/p/1rj). The conditional probability $\\mathbb{P}(X\\mid Y)$ means \"The [probability](https://arbital.com/p/1rf) of $X$ given $Y$.\" That is, $\\mathbb P(\\mathrm{left}\\mid \\mathrm{right})$ means \"The probability that $\\mathrm{left}$ is true, assuming that $\\mathrm{right}$ is true.\"\n\n$\\mathbb P(\\mathrm{yellow}\\mid \\mathrm{banana})$ is the probability that a banana is yellow - if we know something to be a banana, what is the probability that it is yellow? 
$\\mathbb P(\\mathrm{banana}\\mid \\mathrm{yellow})$ is the probability that a yellow thing is a banana - if the right, known side is yellowness, then we ask the question on the left: what is the probability that this is a banana?\n\nIn probability theory, the definition of \"conditional probability\" is that the conditional probability of $L,$ given $R,$ is found by looking at the probability of possibilities with both $L$ *and* $R$ *within* all possibilities with $R.$ Using $L \\wedge R$ to denote the logical proposition \"L and R both true\":\n\n$\\mathbb P(L\\mid R) = \\frac{\\mathbb P(L \\wedge R)}{\\mathbb P(R)}$\n\nSuppose you have a bag containing objects that are either red or blue, and either square or round:\n\n$$\\begin{array}{l|r|r}\n& Red & Blue \\\\\n\\hline\nSquare & 1 & 2 \\\\\n\\hline\nRound & 3 & 4\n\\end{array}$$\n\nIf you reach in and feel a round object, the conditional probability that it is red is:\n\n$\\mathbb P(\\mathrm{red} \\mid \\mathrm{round}) = \\dfrac{\\mathbb P(\\mathrm{red} \\wedge \\mathrm{round})}{\\mathbb P(\\mathrm{round})} \\propto \\dfrac{3}{3 + 4} = \\frac{3}{7}$\n\nIf you look at the object nearest the top, and can see that it's blue, but not see the shape, then the conditional probability that it's a square is:\n\n$\\mathbb P(\\mathrm{square} \\mid \\mathrm{blue}) = \\dfrac{\\mathbb P(\\mathrm{square} \\wedge \\mathrm{blue})}{\\mathbb P(\\mathrm{blue})} \\propto \\dfrac{2}{2 + 4} = \\frac{1}{3}$\n\n![conditional probabilities bag](https://i.imgur.com/zscEdLj.png?0)\n\n# Updating as conditioning\n\nBayes' rule is useful because the process of *observing new evidence* can be interpreted as *conditioning a probability distribution.*\n\nAgain, the Diseasitis problem:\n\n> 20% of the patients in the screening population start out with Diseasitis. Among patients with Diseasitis, 90% turn the tongue depressor black. 30% of the patients without Diseasitis will also turn the tongue depressor black. Among all the patients with black tongue depressors, how many have Diseasitis?\n\nConsider a single patient, before observing any evidence. There are four possible worlds we could be in, the product of (sick vs. healthy) times (positive vs. negative result):\n\n$$\\begin{array}{l|r|r}\n& Sick & Healthy \\\\\n\\hline\nTest + & 18\\% & 24\\% \\\\\n\\hline\nTest - & 2\\% & 56\\%\n\\end{array}$$\n\nTo actually *observe* that the patient gets a positive result is to eliminate from further consideration the possible worlds where the patient gets a negative result:\n\n![bayes elimination](https://i.imgur.com/LGeGIzW.png?0)\n\nOnce we observe the result $\\mathrm{positive}$, all of our future reasoning should take place, not in our old $\\mathbb P(\\cdot),$ but in our new $\\mathbb P(\\cdot \\mid \\mathrm{positive}).$ This is why, after observing \"$\\mathrm{positive}$\" and revising our probability distribution, when we ask about the probability the patient is sick, we are interested in the new probability $\\mathbb P(\\mathrm{sick}\\mid \\mathrm{positive})$ and not the old probability $\\mathbb P(\\mathrm{sick}).$\n\n## Example: Socks-dresser problem\n\nRealizing that *observing evidence* corresponds to [eliminating probability mass](https://arbital.com/p/1y6) and concerning ourselves only with the probability mass that remains, is the key to solving the [sock-dresser search](https://arbital.com/p/55b) problem:\n\n> You left your socks somewhere in your room. 
You think there's a 4/5 chance that they're in your dresser, so you start looking through your dresser's 8 drawers. After checking 6 drawers at random, you haven't found your socks yet. What is the probability you will find your socks in the next drawer you check?\n\nWe initially have 20% of the probability mass in \"Socks outside the dresser\", and 80% of the probability mass for \"Socks inside the dresser\". This corresponds to 10% probability mass for each of the 8 drawers.\n\nAfter eliminating the probability mass in 6 of the drawers, we have 40% of the original mass remaining, 20% for \"Socks outside the dresser\" and 10% each for the remaining 2 drawers.\n\nSince this remaining 40% probability mass is now our whole world, the effect on our probability distribution is like amplifying the 40% until it expands back up to 100%, aka [renormalizing the probability distribution](https://arbital.com/p/1rk). This is why we divide $\\mathbb P(L \\wedge R)$ by $\\mathbb P(R)$ to get the new probabilities.\n\nIn this case, we divide \"20% probability of being outside the dresser\" by 40%, and then divide the 10% probability mass in each of the two drawers by 40%. So the new probabilities are 1/2 for outside the dresser, and 1/4 each for the 2 drawers. Or more simply, we could observe that, among the remaining probability mass of 40%, the \"outside the dresser\" hypothesis has half of it, and the two drawers have a quarter each.\n\nSo the probability of finding our socks in the next drawer is 25%.\n\nNote that as we open successive drawers, we both become more confident that the socks are not in the dresser at all (since we eliminated several drawers they could have been in), *and* also expect more that we might find the socks in the next drawer we open (since there are so few remaining).\n\n# Priors, likelihoods, and posteriors\n\nBayes' theorem is generally inquiring about some question of the form $\\mathbb P(\\mathrm{hypothesis}\\mid \\mathrm{evidence})$ - the $\\mathrm{evidence}$ is known or assumed, so that we are now mentally living in the revised probability distribution $\\mathbb P(\\cdot\\mid \\mathrm{evidence}),$ and we are asking what we infer or guess about the $hypothesis.$ This quantity is the **[posterior probability](https://arbital.com/p/1rp)** of the $\\mathrm{hypothesis}.$\n\nTo carry out a Bayesian revision, we also need to know what our beliefs were before we saw the evidence. (E.g., in the Diseasitis problem, the chance that a patient who hasn't been tested yet is sick.) This is often written $\\mathbb P(\\mathrm{hypothesis}),$ and the hypothesis's probability isn't being conditioned on anything because it is our **[prior](https://arbital.com/p/1rm)** belief.\n\nThe remaining pieces of key information are the **[likelihoods](https://arbital.com/p/1rq)** of the evidence, given each hypothesis. To interpret the meaning of the positive test result as evidence, we need to imagine ourselves in the world where the patient is sick - *assume* the patient to be sick, as if that were known - and then ask, just as if we hadn't seen any test result yet, what we think the probability of the evidence would be in that world. And then we have to do a similar operation again, this time mentally inhabiting the world where the patient is healthy. 
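Putting the prior, the two likelihoods, and the posterior together for the Diseasitis numbers, a minimal Python sketch (variable names are just illustrative) of the whole update is:

```python
# Prior belief, before seeing any test result.
p_sick = 0.20
p_healthy = 1 - p_sick

# Likelihoods: probability of a blackened depressor in each hypothetical world.
p_black_given_sick = 0.90
p_black_given_healthy = 0.30

# Joint probabilities and the total probability of the evidence.
p_sick_and_black = p_sick * p_black_given_sick            # 0.18
p_healthy_and_black = p_healthy * p_black_given_healthy   # 0.24
p_black = p_sick_and_black + p_healthy_and_black          # 0.42

# Posterior, by the definition of conditional probability: P(sick | blackened).
p_sick_given_black = p_sick_and_black / p_black
print(p_sick_given_black)   # 0.428... ≈ 3/7
```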
And unfortunately, it so happens that the standard notation are such as to make this idea be denoted $\\mathbb P(\\mathrm{evidence}\\mid \\mathrm{hypothesis})$ - looking deceptively like the notation for the posterior probability, but written in the reverse order. Not surprisingly, this trips people up a bunch until they get used to it. (You would at least hope that the standard symbol $\\mathbb P(\\cdot \\mid \\cdot)$ wouldn't be *symmetrical,* but it is. Alas.)\n\n## Example\n\nSuppose you're Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.\n\nThe Scotland Yard detective says, \"Aha! Then it's Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have left a red hair there. $\\mathbb P(\\mathrm{redhair}\\mid \\mathrm{Scarlet}) = 99\\%,$ let's say, which is a near-certain conviction, so we're done.\"\n\n\"But no,\" replies Sherlock Holmes. \"You see, but you do not correctly track the meaning of the [conditional probabilities](https://arbital.com/p/1rj), detective. The knowledge we require for a conviction is not $\\mathbb P(\\mathrm{redhair}\\mid \\mathrm{Scarlet}),$ the chance that Miss Scarlet would leave a red hair, but rather $\\mathbb P(\\mathrm{Scarlet}\\mid \\mathrm{redhair}),$ the chance that this red hair was left by Scarlet. There are other people in this city who have red hair.\"\n\n\"So you're saying...\" the detective said slowly, \"that $\\mathbb P(\\mathrm{redhair}\\mid \\mathrm{Scarlet})$ is actually much lower than $1$?\"\n\n\"No, detective. I am saying that just because $\\mathbb P(\\mathrm{redhair}\\mid \\mathrm{Scarlet})$ is high does not imply that $\\mathbb P(\\mathrm{Scarlet}\\mid \\mathrm{redhair})$ is high. It is the latter probability in which we are interested - the degree to which, *knowing* that a red hair was left at the scene, we *infer* that Miss Scarlet was the murderer. The posterior, as the Bayesians say. This is not the same quantity as the degree to which, *assuming* Miss Scarlet was the murderer, we would *guess* that she might leave a red hair. That is merely the likelihood of the evidence, conditional on Miss Scarlet having done it.\"\n\n## Visualization\n\nUsing the [waterfall](https://arbital.com/p/1wy) for the [Diseasitis problem](https://arbital.com/p/22s):\n\n![waterfall labeled probabilities](https://i.imgur.com/f9E0ltp.png?0)", "date_published": "2016-08-03T14:30:21Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "1yc"} {"id": "a74741134ac2179542110b39e9c3b063", "title": "Bayes' rule: Vector form", "url": "https://arbital.com/p/bayes_rule_multiple", "source": "arbital", "source_type": "text", "text": "%todo: This page conflates two concepts: (1) You can perform a Bayesian update on multiple hypotheses at once, by representing hypotheses via vectors; and (2) you can perform multiple Bayesian updates by multiplying by all the likelihood functions (and only normalizing once at the end). We should probably have one page for each concept, and we should possibly split this page in order to make them. (It's not yet clear whether we want one unified page for both ideas, as this one currently is.)%\n%comment: Comment from ESY: it seems to me that these two concepts are sufficiently closely related, and sufficiently combined in their demonstration, that we want to explain them on the same page. 
They could arguably have different concept pages, though.%\n\n[Bayes' rule](https://arbital.com/p/1lz) in the [odds form](https://arbital.com/p/1x5) says that for every pair of hypotheses, their relative prior odds, times the relative likelihood of the evidence, equals the relative posterior odds.\n\nLet $\\mathbf H$ be a vector of hypotheses $H_1, H_2, \\ldots$ Because Bayes' rule holds between every pair of hypotheses in $\\mathbf H,$ we can simply multiply an odds vector by a likelihood vector in order to get the correct posterior vector:\n\n$$\\mathbb O(\\mathbf H) \\times \\mathcal L_e(\\mathbf H) = \\mathbb O(\\mathbf H \\mid e)$$\n\n%comment: Comment from EN: It seems to me that the dot product would be more appropriate.%\n\nwhere $\\mathbb O(\\mathbf H)$ is the vector of relative prior odds between all the $H_i$, $\\mathcal L_e(\\mathbf H)$ is the vector of relative likelihoods with which each $H_i$ predicted $e,$ and $\\mathbb O(\\mathbf H \\mid e)$ is the relative posterior odds between all the $H_i.$\n\nIn fact, we can keep multiplying by likelihood vectors to perform multiple updates at once:\n\n$$\\begin{array}{r}\n\\mathbb O(\\mathbf H) \\\\\n\\times\\ \\mathcal L_{e_1}(\\mathbf H) \\\\\n\\times\\ \\mathcal L_{e_2}(\\mathbf H \\wedge e_1) \\\\\n\\times\\ \\mathcal L_{e_3}(\\mathbf H \\wedge e_1 \\wedge e_2) \\\\\n= \\mathbb O(\\mathbf H \\mid e_1 \\wedge e_2 \\wedge e_3)\n\\end{array}$$\n\nFor example, suppose there's a bathtub full of coins. Half of the coins are \"fair\" and have a 50% probability of producing heads on each coinflip. A third of the coins are biased towards heads and produce heads 75% of the time. The remaining coins are biased against heads, which they produce only 25% of the time. You pull out a coin at random, flip it 3 times, and get the result THT. What's the chance that this was a fair coin?\n\nWe have three hypotheses, which we'll call $H_{fair},$ $H_{heads},$ and $H_{tails}$ respectively, with relative odds of $(1/2 : 1/3 : 1/6).$ The relative likelihoods that these three hypotheses assign to a coin landing heads are $(2 : 3 : 1)$; the relative likelihoods that they assign to a coin landing tails are $(2 : 1 : 3).$ Thus, the posterior odds for all three hypotheses are:\n\n$$\\begin{array}{rll}\n(1/2 : 1/3 : 1/6) = & (3 : 2 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n\\times & (2 : 3 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n= & (24 : 6 : 9) & = (8 : 2 : 3) = (8/13 : 2/13 : 3/13)\n\\end{array}$$\n\n...so there is an 8/13 or ~62% probability that the coin is fair.\n\nIf you were only familiar with the [probability form](https://arbital.com/p/554) of Bayes' rule, which only works for one hypothesis at a time and which only uses probabilities (and so [normalizes](https://arbital.com/p/1rk) the odds into probabilities at every step)...\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e\\mid H_i)P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k)P(H_k)}$$\n\n...then you might have had some gratuitous difficulty solving this problem.\n\nAlso, if you hear the idiom of \"convert to odds, multiply lots and lots of things, convert back to probabilities\" and think \"hmm, this sounds like a place where [transforming into log-space](https://arbital.com/p/4h0) (where all multiplications become additions) might yield efficiency gains,\" then congratulations, you just invented the [log-odds form of Bayes' rule](https://arbital.com/p/1zh). 
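Here is a minimal Python sketch of the bathtub-coins update (the `update_odds` helper is illustrative), including the log-space version in which the repeated multiplications become additions:

```python
from math import log2

def update_odds(odds, likelihoods):
    """One application of the odds form of Bayes' rule."""
    return [o * l for o, l in zip(odds, likelihoods)]

# Hypotheses: fair, heads-biased (75% heads), tails-biased (25% heads).
odds = [3, 2, 1]                                  # prior (1/2 : 1/3 : 1/6) = (3 : 2 : 1)
likelihood = {"H": [2, 3, 1], "T": [2, 1, 3]}     # relative likelihoods per flip

for flip in "THT":
    odds = update_odds(odds, likelihood[flip])

print(odds)                  # [24, 6, 9], i.e. (8 : 2 : 3)
print(odds[0] / sum(odds))   # 8/13 ≈ 0.615 probability that the coin is fair

# The same update in log space: adding log-likelihoods instead of multiplying odds.
log_odds = [log2(o) for o in [3, 2, 1]]
for flip in "THT":
    log_odds = [lo + log2(l) for lo, l in zip(log_odds, likelihood[flip])]
print([round(2 ** lo, 6) for lo in log_odds])   # back to [24.0, 6.0, 9.0]
```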
Not only is it efficient, it also gives rise to a natural unit of measure for \"strength of evidence\" and \"strength of belief\".\n\n# Naive Bayes\n\nMultiplying an array of odds by an array of likelihoods is the idiom used in Bayesian spam filters. Suppose that there are three categories of email, \"Business\", \"Personal\", and \"Spam\", and that the user hand-labeling the last 100 emails has labeled 50 as Business, 30 as Personal, and 20 as spam. The word \"buy\" has appeared in 10 Business emails, 3 Personal emails, and 10 spam emails. The word \"rationality\" has appeared in 30 Business emails, 15 Personal emails, and 1 spam email.\n\nFirst, we assume that the frequencies in our data are representative of the 'true' frequencies. (Taken literally, if we see a word we've never seen before, we'll be multiplying by a zero probability. [Good-Turing frequency estimation](https://en.wikipedia.org/wiki/Good%E2%80%93Turing_frequency_estimation) would do better.)\n\nSecond, we make the *naive Bayes* assumption that a spam email which contains the word \"buy\" is no more or less likely than any other spam email to contain the word \"rationality\", and so on with the other categories.\n\nThen we'd filter a message containing the phrase \"buy rationality\" as follows:\n\n[Prior](https://arbital.com/p/1rm) odds: $(5 : 3 : 2)$\n\n[Likelihood ratio](https://arbital.com/p/1rq) for \"buy\": \n\n$$\\left(\\frac{10}{50} : \\frac{3}{30} : \\frac{10}{20}\\right) = \\left(\\frac{1}{5} : \\frac{1}{10} : \\frac{1}{2}\\right) = (2 : 1 : 5)$$\n\nLikelihood ratio for \"rationality\": \n\n$$\\left(\\frac{30}{50} : \\frac{15}{30} : \\frac{1}{20}\\right) = \\left(\\frac{3}{5} : \\frac{1}{2} : \\frac{1}{20}\\right) = (12 : 10 : 1)$$\n\nPosterior odds:\n\n$$(5 : 3 : 2) \\times (2 : 1 : 5) \\times (12 : 10 : 1) = (120 : 30 : 10) = \\left(\\frac{12}{16} : \\frac{3}{16} : \\frac{1}{16}\\right)$$\n\n%%comment: 12/16 is intentionally not in lowest form so that the 12 : 3 : 1 ratio can be clear.%%\n\nThis email would be 75% likely to be a business email, if the Naive Bayes assumptions are true. They're almost certainly *not* true, for reasons discussed in more detail below. But while Naive Bayes calculations are usually quantitatively wrong, they often point in the right qualitative direction - this email may indeed be more likely than not to be a business email.\n\n(An actual implementation should add [log-likelihoods](https://arbital.com/p/1zh) rather than multiplying by ratios, so as not to risk floating-point overflow or underflow.)\n\n# Non-naive multiple updates\n\nTo do a multiple update less naively, we must do the equivalent of asking about the probability that a Business email contains the word \"rationality\", *given* that it contained the word \"buy\".\n\nAs a real-life example, in a certain [rationality workshop](http://rationality.org/), one participant was observed to have taken another participant to a museum, and also, on a different day, to see their workplace. A betting market soon developed on whether the two were romantically involved. One participant argued that, as an eyeball estimate, someone was 12 times as likely to take a fellow participant to a museum, or to their workplace, if they were romantically involved, vs. just being strangers. 
They then multiplied their prior odds by a 12 : 1 likelihood ratio for the museum trip and another 12 : 1 likelihood ratio for the workplace trip, and concluded that these two were almost certainly romantically attracted.\n\nIt later turned out that the two were childhood acquaintances who were not romantically involved. What went wrong?\n\nIf we want to update hypotheses on multiple pieces of evidence, we need to mentally stay inside the world of each hypothesis, and condition the likelihood of future evidence on the evidence already observed. Suppose the two are *not* romantically attracted. We observe them visit a museum. Arguendo, we might indeed suppose that this has a probability of, say, 1% (we don't usually expect strangers to visit museums together) which might be about 1/12 the probability of making that observation if the two were romantically involved.\n\nBut after this, when we observe the workplace visit, we need to ask about the probability of the workplace visit, *given* that the two were *not* romantically attracted *and* that they visited a museum. This might suggest that if two non-attracted people visit a museum together *for whatever reason*, they don't just have the default probability that a non-attracted couple would make a workplace visit. In other words:\n\n$$\\mathbb P({workplace}\\mid \\neg {romance} \\wedge {museum}) \\neq \\mathbb P({workplace}\\mid \\neg {romance})$$\n\nNaive Bayes, in contrast, would try to approximate the quantity $\\mathbb P({museum} \\wedge {workplace} \\mid \\neg {romance})$ as the product of $\\mathbb P({museum}\\mid \\neg {romance}) \\cdot \\mathbb P({workplace}\\mid \\neg {romance}).$ This is what the participants did when they multiplied by a 1/12 likelihood ratio twice.\n\nThe result was a kind of double-counting of the evidence — they took into account the prior improbability of a random non-romantic couple \"going places together\" twice in a row, for the two pieces of evidence, and ended up performing a total update that was much too strong. \n\nNaive Bayes spam filters often end up assigning ludicrously extreme odds, on the order of googols to one, that an email is spam or personal; and then they're sometimes wrong anyways. If an email contains the phrase \"pharmaceutical\" and \"pharmacy\", a spam filter will double-count the improbability of a personal email talking about pharmacies, rather than considering that if I actually do get a personal email talking about a pharmacy, it is much more likely to contain the word \"pharmaceutical\" as well. So because of the Naive Bayes assumption, naive Bayesian spam filters are not anything remotely like well-calibrated, and they update much too extremely on the evidence. On the other hand, they're often extreme in the correct qualitative direction — something assigned googol-to-one odds of being spam isn't *always* spam but it might be spam, say, 99.999% of the time.\n\nTo do non-naive Bayesian updates on multiple pieces of evidence, just remember to mentally inhabit the world *where the hypothesis is true*, and then ask about the likelihood of each successive piece of evidence, *in* the world where the hypothesis is true *and* the previous pieces of evidence were observed. 
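To see the double-counting numerically, here is a sketch in Python using made-up numbers -- every probability below is hypothetical, chosen only to illustrate the effect, not taken from the actual anecdote:

```python
# Hypothetical prior odds of romance vs. no romance: 1 : 4.
prior_odds = 1 / 4

# Hypothetical per-observation probabilities giving roughly the 12 : 1 likelihood ratio.
p_museum_given_rom, p_museum_given_not = 0.12, 0.01

# Naive assumption: the workplace visit is treated as independent of the museum visit.
p_work_given_rom, p_work_given_not = 0.12, 0.01
naive = prior_odds * (p_museum_given_rom / p_museum_given_not) \
                   * (p_work_given_rom / p_work_given_not)
print(naive)      # ≈ 36, i.e. roughly 36 : 1 in favor of romance

# Non-naive: once a non-couple has visited a museum together for *some* reason,
# a workplace visit is much less surprising (again, a made-up number).
p_work_given_not_and_museum = 0.08
nonnaive = prior_odds * (p_museum_given_rom / p_museum_given_not) \
                      * (p_work_given_rom / p_work_given_not_and_museum)
print(nonnaive)   # ≈ 4.5 -- a far weaker update
```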
Don't ask, \"What is the likelihood that a non-romantic couple would visit one person's workplace?\" but \"What is the likelihood that a non-romantic couple which previously visited a museum for some unknown reason would also visit the workplace?\"\n\nIn our example with the coins in the bathtub, the likelihoods of the evidence were independent on each step - *assuming* a coin to be fair, it's no more or less likely to produce heads on the second flip after producing heads on the first flip. So in our bathtub-coins example, the Naive Bayes assumption was actually true.", "date_published": "2017-05-22T22:31:28Z", "authors": ["Eric Bruylant", "Ryan White", "Patrick LaVictoire", "gia dang", "Alexei Andreev", "Eliezer Yudkowsky", "Nate Soares", "Nick Jordan", "Francis Marineau", "Erik Nash", "Adom Hartell", "Connor Flexman", "Nate Windwood", "Fedor Belolutskiy"], "summaries": ["The [odds form of Bayes' rule](https://arbital.com/p/1x5) works for odds ratios between more than two hypotheses, and applying multiple pieces of evidence. Suppose there's a bathtub full of coins. 1/2 of the coins are fair and have a 50% probability of producing heads on each coinflip; 1/3 of the coins produce 25% heads; and 1/6 produce 75% heads. You pull out a coin at random, flip it 3 times, and get the result HTH. You may calculate:\n\n$$\\begin{array}{rll}\n(1/2 : 1/3 : 1/6) = & (3 : 2 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n\\times & (2 : 3 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n= & (24 : 6 : 9) & = (8 : 2 : 3)\n\\end{array}$$"], "tags": ["C-Class"], "alias": "1zg"} {"id": "8c0af231f56c9b457e092d46bfdb78bb", "title": "Bayes' rule: Log-odds form", "url": "https://arbital.com/p/bayes_log_odds", "source": "arbital", "source_type": "text", "text": "[The odds form](https://arbital.com/p/1x5) of [Bayes's Rule](https://arbital.com/p/1lz) states that the [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/odds_ratio) times the [likelihood ratio](https://arbital.com/p/56t) equals the [posterior odds](https://arbital.com/p/1rp). We can take the log of both sides of this equation, yielding an equivalent equation which uses addition instead of multiplication.\n\nLetting $H_i$ and $H_j$ denote hypotheses and $e$ denote evidence, the log-odds form of Bayes' rule states:\n\n$$\n\\log \\left ( \\dfrac\n {\\mathbb P(H_i\\mid e)}\n {\\mathbb P(H_j\\mid e)}\n\\right )\n=\n\\log \\left ( \\dfrac\n {\\mathbb P(H_i)}\n {\\mathbb P(H_j)}\n\\right )\n+\n\\log \\left ( \\dfrac\n {\\mathbb P(e\\mid H_i)}\n {\\mathbb P(e\\mid H_j)}\n\\right ).\n$$\n\nThis can be numerically efficient for when you're carrying out [lots of updates one after another](https://arbital.com/p/1zg). But a more important reason to think in log odds is to get a better grasp on the notion of 'strength of evidence'.\n\n# Logarithms of likelihood ratios\n\nSuppose you're visiting your friends Andrew and Betty, who are a couple. They promised that one of them would pick you up from the airport when you arrive. You're not sure which one is in fact going to pick you up ([prior odds](https://arbital.com/p/1rm) of 50:50), but you do know three things:\n\n1. They have both a blue car and a red car. Andrew prefers to drive the blue car, Betty prefers to drive the red car, but the correlation is relatively weak. (Sometimes, which car they drive depends on which one their child is using.) Andrew is 2x as likely to drive the blue car as Betty.\n2. Betty tends to honk the horn at you to get your attention. Andrew does this too, but less often. 
Betty is 4x as likely to honk as Andrew.\n3. Andrew tends to run a little late (more often than Betty). Betty is 2x as likely to have the car already at the airport when you arrive.\n\nAll three observations are independent as far as you know (that is, you don't think Betty's any more or less likely to be late if she's driving the blue car, and so on).\n\nLet's say we see a blue car, already at the airport, which honks.\n\nThe odds form of this calculation would be a $(1 : 1)$ prior for Betty vs. Andrew, times likelihood ratios of $(1 : 2) \\times (4 : 1) \\times (2 : 1),$ yielding posterior odds of $(1 \\times 4 \\times 2 : 2 \\times 1 \\times 1) = (8 : 2) = (4 : 1)$, so it's 4/5 = 80% likely to be Betty.\n\nHere's the log odds form of the same calculation, using 1 **bit** to denote each factor of $2$ in belief or evidence:\n\n- Prior belief in Betty of $\\log_2 (\\frac{1}{1}) = 0$ bits.\n- Evidence of $\\log_2 (\\frac{1}{2}) = {-1}$ bits against Betty.\n- Evidence of $\\log_2 (\\frac{4}{1}) = {+2}$ bits for Betty.\n- Evidence of $\\log_2 (\\frac{2}{1}) = {+1}$ bit for Betty.\n- Posterior belief of $0 + {^-1} + {^+2} + {^+1} = {^+2}$ bits that Betty is picking us up.\n\nIf your posterior belief is +2 bits, then your posterior odds are $(2^{+2} : 1) = (4 : 1),$ yielding a posterior probability of 80% that Betty is picking you up.\n\nEvidence and belief represented this way is *additive,* which can make it an easier fit for intuitions about \"strength of credence\" and \"strength of evidence\"; we'll soon develop this point in further depth.\n\n# The log-odds line\n\nImagine you start out thinking that the hypothesis $H$ is just as likely as $\\lnot H,$ its negation. Then you get five separate independent $2 : 1$ updates in favor of $H.$ What happens to your probabilities?\n\nYour odds (for $H$) go from $(1 : 1)$ to $(2 : 1)$ to $(4 : 1)$ to $(8 : 1)$ to $(16 : 1)$ to $(32 : 1).$\n\nThus, your probabilities go from $\\frac{1}{2} = 50\\%$ to $\\frac{2}{3} \\approx 67\\%$ to $\\frac{4}{5} = 80\\%$ to $\\frac{8}{9} \\approx 89\\%$ to $\\frac{16}{17} \\approx 94\\%$ to $\\frac{32}{33} \\approx 97\\%.$\n\nGraphically representing these changing probabilities on a line that goes from 0 to 1:\n\n![0 updates](https://i.imgur.com/pJOVvl1.png)\n![3 updates](https://i.imgur.com/9AxEpQA.png)\n![5 updates](https://i.imgur.com/HHeyAzO.png)\n\nWe observe that the probabilities approach 1 but never get there — they just keep stepping across a fraction of the remaining distance, eventually getting all scrunched up near the right end.\n\nIf we instead convert the probabilities into log-odds, the story is much nicer. 50% probability becomes 0 bits of credence, and every independent $(2 : 1)$ observation in favor of $H$ shifts belief by one unit along the line.\n\n![the log-odds line](https://i.imgur.com/TquetiU.png)\n\n(As for what happens when we approach the end of the line, there isn't one! 
0% probability becomes $-\\infty$ bits of credence and 100% probability becomes $+\\infty$ bits of credence.%%%knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)):%%note: This un-scrunching of the [https://arbital.com/p/-interval](https://arbital.com/p/-interval) $(0,1)$ into the entire [https://arbital.com/p/-real_line](https://arbital.com/p/-real_line) is done by an application of the [inverse logistic](https://arbital.com/p/inverse_logistic_function) function.%% %%%)\n\n# Intuitions about the log-odds line\n\nThere are a number of ways in which this infinite log-odds line is a better place to anchor your intuitions about \"belief\" than the usual $[0, 1]$ probability interval. For example:\n\n- Evidence you are twice as likely to see if the hypothesis is true as if it is false is ${+1}$ bits of evidence and a ${^+1}$-bit update, regardless of how confident or unconfident you were to start with--the strength of new evidence, and the distance we update, shouldn't *depend on* our prior belief.\n- If your credence in something is 0 bits--neither positive nor negative belief--then you think the odds are 1:1.\n- The distance between $0.01$ and $0.000001$ is much greater than the distance between $0.11$ and $0.100001.$\n\nTo expand on the final point: on the 0-1 probability line, the difference between 0.01 (a 1% chance) and 0.000001 (a 1 in a million chance) is roughly the same as the distance between 11% and 10%. This doesn't match our sense for the intuitive strength of a claim: The difference between \"1 in 100!\" and \"1 in a million!\" feels like a far bigger jump than the difference between \"11% probability\" and \"a hair over 10% probability.\"\n\nOn the log-odds line, a 1 in 100 credibility is ${^-2}$ orders of magnitude, and a \"1 in a million\" credibility is ${^-6}$ orders of magnitude. The distance between them is minus 4 orders of magnitude, that is, $\\log_{10}(10^{-6}) - \\log_{10}(10^{-2})$ yields ${^-4}$ magnitudes, or roughly ${^-13.3}$ bits. On the other hand, 11% to 10% is $\\log_{10}(\\frac{0.10}{0.90}) - \\log_{10}(\\frac{0.11}{0.89}) \\approx {^-0.954}-{^-0.907} \\approx {^-0.046}$ magnitudes, or ${^-0.153}$ bits.\n\nThe log-odds line doesn't compress the vast differences available near the ends of the probability spectrum. Instead, it exhibits a \"belief bar\" carrying on indefinitely in both directions--every time you see evidence with a likelihood ratio of $2 : 1,$ it adds one more bit of credibility.\n\nThe [Weber-Fechner law](https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law) says that most human sensory perceptions are logarithmic, in the sense that a factor-of-2 intensity change *feels* like around the same amount of increase no matter where you are on the scale. Doubling the physical intensity of a sound feels to a human like around the same amount of change in that sound whether the initial sound was 40 decibels or 60 decibels. That's why there's an exponential decibel scale of sound intensities in the first place!\n\nThus the log-odds form should be, in a certain sense, the *most* intuitive variant of Bayes' rule to use: Just add the evidence-strength to the belief-strength!
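As a minimal sketch of that addition in code (Python; the helper functions and the reuse of the airport example's numbers are my own illustration, not part of the original text):

```python
import math

def prob_to_bits(p):
    # Convert a probability into log-odds measured in bits.
    return math.log2(p / (1 - p))

def bits_to_prob(bits):
    # Convert log-odds in bits back into a probability.
    odds = 2 ** bits
    return odds / (1 + odds)

# Airport example from above: 1 : 1 prior, then -1, +2, +1 bits of evidence
# (blue car, honking, car already waiting at the airport).
belief = prob_to_bits(0.5)           # 0 bits
for evidence_bits in (-1, +2, +1):
    belief += evidence_bits          # just add the evidence-strength

print(belief)                        # 2.0 bits
print(bits_to_prob(belief))          # 0.8, i.e. 80% that it's Betty
```

All of the updating happens as addition; converting back to a probability is only needed when you want to report the final answer.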
If you can make your feelings of evidence-strength and belief-strength be proportional to the logarithms of ratios, that is.\n\nFinally, the log-odds representation gives us an even easier way to see how [extraordinary claims require extraordinary evidence](https://arbital.com/p/21v): If your prior belief in $H$ is -30 bits, and you see evidence on the order of +5 bits for $H$, then you're going to wind up with -25 bits of belief in $H$, which means you still think it's far less likely than the alternatives.\n\n# Example: Blue oysters\n\nConsider the [blue oyster](https://arbital.com/p/54v) example problem:\n\nYou're collecting exotic oysters in Nantucket, and there are two different bays from which you could harvest oysters.\n\n- In both bays, 11% of the oysters contain valuable pearls and 89% are empty.\n- In the first bay, 4% of the pearl-containing oysters are blue, and 8% of the non-pearl-containing oysters are blue.\n- In the second bay, 13% of the pearl-containing oysters are blue, and 26% of the non-pearl-containing oysters are blue.\n\nWould you rather have a blue oyster from the first bay or the second bay? Well, we first note that the likelihood ratio from \"blue oyster\" to \"full vs. empty\" is $1 : 2$ in either case, so both kinds of blue oyster are equally valuable. (Take a moment to reflect on how obvious this would *not* seem before learning about Bayes' rule!)\n\nBut what's the chance of (either kind of) a blue oyster containing a pearl? Hint: this would be a good time to convert your credences into bits (factors of 2).\n\n%%hidden(Answer):\n89% is around 8 times as much as 11%, so we start out with ${^-3}$ bits of belief that a random oyster contains a pearl.\n\nFull oysters are 1/2 as likely to be blue as empty oysters, so seeing that an oyster is blue is ${^-1}$ bits of evidence against it containing a pearl.\n\nPosterior belief should be around ${^-4}$ bits or $(1 : 16)$ against, or a probability of 1/17... so a bit more than 5% (1/20) maybe? (Actually 5.88%.)\n%%\n\n# Real-life example: HIV test\n\n\nA study of Chinese blood donors %note: Citation needed% found that roughly 1 in 100,000 of them had HIV (as determined by a very reliable gold-standard test). The non-gold-standard test used for initial screening had a *sensitivity* of 99.7% and a *specificity* of 99.8%, meaning respectively that $\\mathbb P({positive}\\mid {HIV}) = .997$ and $\\mathbb P({negative}\\mid \\neg {HIV}) = .998$, i.e., $\\mathbb P({positive} \\mid \\neg {HIV}) = .002.$\n\nThat is: the prior odds are $1 : 100,000$ against HIV, and a positive result in an initial screening favors HIV with a likelihood ratio of $500 : 1.$\n\nUsing log base 10 (because those are [easier to do in your head](https://arbital.com/p/416)):\n\n- The prior belief in HIV was about -5 magnitudes.\n- The evidence was a tad less than +3 magnitudes strong, since 500 is less than 1,000. ($\\log_{10}(500) \\approx 2.7$).\n\nSo the posterior belief in HIV is a tad underneath -2 magnitudes, i.e., less than a 1 in 100 chance of HIV.\n\nEven though the screening had a $500 : 1$ likelihood ratio in favor of HIV, someone with a positive screening result _really_ should not panic!\n\nAdmittedly, this setup had people being screened randomly, in a relatively non-AIDS-stricken country. You'd need separate statistics for people who are getting tested for HIV because of specific worries or concerns, or in countries where HIV is highly prevalent. 
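To make that arithmetic concrete, here is a minimal sketch of the same screening calculation (Python; the variable names and the rounding are mine, while the numbers are the ones quoted above):

```python
import math

prior_magnitudes = math.log10(1 / 100_000)          # -5.0, i.e. 1 : 100,000 against HIV
likelihood_ratio = 0.997 / 0.002                    # ~498.5 : 1 favoring HIV
evidence_magnitudes = math.log10(likelihood_ratio)  # ~+2.7

posterior_magnitudes = prior_magnitudes + evidence_magnitudes
posterior_odds = 10 ** posterior_magnitudes
posterior_probability = posterior_odds / (1 + posterior_odds)

print(round(posterior_magnitudes, 2))    # -2.3
print(round(posterior_probability, 4))   # ~0.005, i.e. roughly 1 in 200
```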
*Nonetheless,* the points that \"only a tiny fraction of people have illness X\" and that \"preliminary observations Y may not have *correspondingly tiny* false positive rates\" are worth remembering for many illnesses X and observations Y.\n\n# Exposing infinite credences\n\nThe log-odds representation exposes the degree to which $0$ and $1$ are very unusual among the classical probabilities. For example, [if you ever assign probability absolutely 0 or 1 to a hypothesis, then no amount of evidence can change your mind about it, ever](https://arbital.com/p/4mq).\n\nOn the log-odds line, credences range from $-\\infty$ to $+\\infty,$ with the infinite extremes corresponding to probability $0$ and $1$ which can thereby be seen as \"infinite credences\". That's not to say that $0$ and $1$ probabilities should never be used. For an ideal reasoner, the probability $\\mathbb P(X) + \\mathbb P(\\lnot X)$ should be 1 (where $\\lnot X$ is the logical negation of $X$).%note:For us mere mortals, [consider avoiding extreme probabilities even then](https://arbital.com/p/http://lesswrong.com/lw/jr/how_to_convince_me_that_2_2_3/).% Nevertheless, these infinite credences of 0 and 1 behave like 'special objects' with a qualitatively different behavior from the ordinary credence spectrum. Statements like \"After seeing a piece of strong evidence, my belief should never be exactly what it was previously\" are false for extreme credences, just as statements like \"subtracting 1 from a number produces a lower number\" are false if you insist on regarding %%knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)): [$\\aleph_0$](https://arbital.com/p/aleph_0) %% %%!knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)): infinity %% as a number.\n\n# Evidence in decibels\n\nE.T. Jaynes, in *Probability Theory: The Logic of Science* (section 4.2), reports that using decibels of evidence makes them easier to grasp and use by humans.\n\nIf an hypothesis has a likelihood ratio of $o$, then its evidence in decibels is given by the formula $e = 10\\log_{10}(o)$.\n\nIn this scheme, multiplying the likelihood ratio by 2 means approximately adding 3dB. Multiplying by 10 means adding 10dB.\n\nJayne reports having used decimal logarithm first, for their ease of calculation and having tried to switch to natural logarithms with the advent of pocket calculators. But decimal logarithms were found to be easier to grasp.", "date_published": "2017-07-09T22:25:14Z", "authors": ["Szymon Slawinski", "Joe Zeng", "Alexei Andreev", "Eric Rogstad", "Viktor Gregor", "Pierre Thierry", "Nate Soares", "John Schmitt", "Lars Øvlisen", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["[Bayes's Rule](https://arbital.com/p/1x5) states that the [prior odds](https://arbital.com/p/1rm) times the [likelihood ratio](https://arbital.com/p/56t) equals the [posterior odds](https://arbital.com/p/1rp). We can take the logarithms of both sides of the equation to get an equivalent rule which uses addition instead of multiplication. 
Letting $H_i$ and $H_j$ denote hypotheses and $e$ denote evidence:\n\n$$\n\\log \\left ( \\dfrac\n {\\mathbb P(H_i\\mid e)}\n {\\mathbb P(H_j\\mid e)}\n\\right )\n=\n\\log \\left ( \\dfrac\n {\\mathbb P(H_i)}\n {\\mathbb P(H_j)}\n\\right )\n+\n\\log \\left ( \\dfrac\n {\\mathbb P(e\\mid H_i)}\n {\\mathbb P(e\\mid H_j)}\n\\right ).\n$$\n\nTo use a real-life example, a study of Chinese blood donors found that roughly 1 in 100,000 of them had HIV (as determined by a very reliable gold-standard test). The non-gold-standard test used for initial screening had a sensitivity of 99.7% and a specificity of 99.8%, meaning that it was 500 times as likely to return positive for infected as non-infected patients. Then our prior belief is -5 orders of magnitude against HIV, and if we then observe a positive test result, this is evidence of strength +2.7 orders of magnitude for HIV. Our posterior belief is -2.3 orders of magnitude, or odds of less than 1 to a 100, against HIV."], "tags": ["B-Class", "Proposed A-Class"], "alias": "1zh"} {"id": "63ae8813b27dd846cf5dd66e6893e040", "title": "Bayes' rule: Functional form", "url": "https://arbital.com/p/bayes_rule_functional", "source": "arbital", "source_type": "text", "text": "[Bayes' rule](https://arbital.com/p/1lz) generalizes to continuous [functions](https://arbital.com/p/3jy), and states, \"The [posterior](https://arbital.com/p/1rp) probability [density](https://en.wikipedia.org/wiki/Probability_density_function) is *proportional* to the [likelihood](https://arbital.com/p/1rq) function times the [prior](https://arbital.com/p/1rm) probability [density](https://en.wikipedia.org/wiki/Probability_density_function).\"\n\n$$\\mathbb P(H_x\\mid e) \\propto \\mathcal L_e(H_x) \\cdot \\mathbb P(H_x)$$\n\n## Example\n\nSuppose we have a biased coin with an unknown bias $b$ between 0 and 1 of coming up heads on each individual coinflip. Since the bias $b$ is a continuous variable, we express our beliefs about the coin's bias using a [probability density function](https://en.wikipedia.org/wiki/Probability_density_function) $\\mathbb P(b),$ where $\\mathbb P(b)\\cdot \\mathrm{d}b$ is the probability that $b$ is in the interval $[b, b + \\mathrm{d}b]$ for $\\mathrm db$ small. (Specifically, the probability that $b$ is in the interval $[a, b]$ is $\\int_a^b \\mathbb P(b) \\, \\mathrm db.$)\n\nBy hypothesis, we start out completely ignorant of the bias $b,$ meaning that all initial values for $b$ are equally likely. Thus, $\\mathbb P(b) = 1$ for all values of $b,$ which means that $\\mathbb P(b)\\, \\mathrm db = \\mathrm db$ (e.g., the chance of $b$ being found in the interval from 0.72 to 0.76 is 0.04).\n\n![plot y = 1 + x * 0, x = 0 to 1](https://i.imgur.com/jUHn9pq.png?0)\n\nWe then flip the coin, and observe it to come up tails. This is our first piece of evidence.
The likelihood $\\mathcal L_{t_1}(b)$ of observation $t_1$ given bias $b$ is a continuous function of $b$, equal to 0.4 if $b = 0.6,$ 0.67 if $b = 0.33,$ and so on (because $b$ is the probability of heads and the observation was tails).\n\nGraphing the likelihood function $\\mathcal L_{t_1}(b)$ as it takes in the fixed evidence $t_1$ and ranges over variable $b,$ we obtain the straightforward graph $\\mathcal L_{t_1}(b) = 1 - b.$\n\n![plot y = 1 - x, x = 0 to 1](https://i.imgur.com/piyKfWe.png?0)\n\nIf we multiply the likelihood function by the prior probability function as it ranges over $b$, we obtain a *relative probability* function on the posterior, $\\mathbb O(b\\mid t_1) = \\mathcal L_{t_1}(b) \\cdot \\mathbb P(b) = 1 - b,$ which gives us the same graph again:\n\n![plot y = 1 - x, x = 0 to 1](https://i.imgur.com/piyKfWe.png?0)\n\nBut this can't be our posterior *probability* function because it doesn't integrate to 1. $\\int_0^1 (1 - b) \\, \\mathrm db = \\frac{1}{2}.$ (The area under a triangle is half the base times the height.) [Normalizing](https://arbital.com/p/1rk) this relative probability function will give us the posterior probability function:\n\n$\\mathbb P(b \\mid t_1) = \\dfrac{\\mathbb O(b \\mid t_1)}{\\int_0^1 \\mathbb O(b \\mid t_1) \\, \\mathrm db} = 2 \\cdot (1 - b)$\n\n![plot y = 2(1 - x), x = 0 to 1](https://i.imgur.com/PtBSP6M.png?0)\n\nThe shapes are the same, and only the *y*-axis labels have changed to reflect the different heights of the pre-normalized and normalized function.\n\nSuppose we now flip the coin another two times, and it comes up heads then tails. We'll denote this piece of evidence $h_2t_3.$ Although these two coin tosses pull our beliefs about $b$ in opposite directions, they don't cancel out — far from it! In fact, one value of $b$ (\"the coin is always tails\") is completely eliminated by this evidence, and many extreme values of $b$ (\"almost always heads\" and \"almost always tails\") are hit badly. That is, while the heads and the tails pull our beliefs in opposite directions, they don't pull with the same strength on all possible values of $b.$\n\nWe multiply the old belief\n\n![plot y = 2(1 - x), x = 0 to 1](https://i.imgur.com/PtBSP6M.png?0)\n\nby the additional pieces of evidence\n\n![](https://i.imgur.com/aOx1avR.png?0)\n\nand\n\n![](https://i.imgur.com/piyKfWe.png?0)\n\nand obtain the posterior *relative* density\n\n![plot y = 2(1 - x)x(1 - x), x = 0 to 1](https://i.imgur.com/Gi7VqGo.png?0)\n\nwhich is proportional to the [normalized](https://arbital.com/p/1rk) posterior probability\n\n![plot y = 12(1 - x)x(1 - x), x = 0 to 1](https://i.imgur.com/tQueclr.png?0)\n\nWriting out the whole operation from scratch:\n\n$$\\mathbb P(b \\mid t_1h_2t_3) = \\frac{\\mathcal L_{t_1h_2t_3}(b) \\cdot \\mathbb P(b)}{\\mathbb P(t_1h_2t_3)} = \\frac{(1 - b) \\cdot b \\cdot (1 - b) \\cdot 1}{\\int_0^1 (1 - b) \\cdot b \\cdot (1 - b) \\cdot 1 \\, \\mathrm{d}b} = {12\\cdot b(1 - b)^2}$$\n\nNote that it's okay for a posterior probability density to be greater than 1, so long as the total probability *mass* isn't greater than 1. If there's probability density 1.2 over an interval of 0.1, that's only a probability of 0.12 for the true value to be found in that interval.\n\nThus, intuitively, Bayes' rule \"just works\" when calculating the posterior probability density from the prior probability density function and the (continuous) likelihood ratio function.
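For readers who would rather check that result numerically than by doing the integral, here is a small grid-based sketch (Python with NumPy; the grid resolution and variable names are arbitrary choices of mine):

```python
import numpy as np

b = np.linspace(0.0, 1.0, 100_001)      # grid over possible biases
db = b[1] - b[0]

prior = np.ones_like(b)                 # uniform prior density P(b) = 1
likelihood = (1 - b) * b * (1 - b)      # likelihood of the sequence t1 h2 t3

unnormalized = prior * likelihood
posterior = unnormalized / (unnormalized.sum() * db)   # normalize to area 1

closed_form = 12 * b * (1 - b) ** 2
print(float(np.abs(posterior - closed_form).max()))    # ~0, up to grid error
```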
A proof is beyond the scope of this guide; refer to [Proof of Bayes' rule in the continuous case](https://arbital.com/p/).", "date_published": "2016-10-10T23:21:34Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["[Bayes' rule](https://arbital.com/p/1lz) generalizes to continuous [functions](https://arbital.com/p/3jy), and says, \"The [posterior](https://arbital.com/p/1rp) probability [density](https://en.wikipedia.org/wiki/Probability_density_function) is *proportional* to the [likelihood](https://arbital.com/p/1rq) function times the [prior](https://arbital.com/p/1rm) probability [density](https://en.wikipedia.org/wiki/Probability_density_function).\"\n\n$$\\mathbb P(H_x\\mid e) \\propto \\mathcal L_e(H_x) \\cdot \\mathbb P(H_x)$$"], "tags": ["B-Class"], "alias": "1zj"} {"id": "87bad4f8e1cb6629b145324dc87776f8", "title": "Bayes' rule: Proportional form", "url": "https://arbital.com/p/bayes_rule_proportional", "source": "arbital", "source_type": "text", "text": "If $H_i$ and $H_j$ are hypotheses and $e$ is a piece of evidence, [Bayes' rule](https://arbital.com/p/1lz) states:\n\n$$\\dfrac{\\mathbb P(H_i)}{\\mathbb P(H_j)} \\times \\dfrac{\\mathbb P(e\\mid H_i)}{\\mathbb P(e\\mid H_j)} = \\dfrac{\\mathbb P(H_i\\mid e)}{\\mathbb P(H_j\\mid e)}$$\n\n%%if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nIn the [Diseasitis problem](https://arbital.com/p/22s), we use this form of Bayes' rule to justify calculating the posterior odds of sickness via the calculation $(1 : 4) \\times (3 : 1) = (3 : 4).$\n%%\n\n%%!if-after([https://arbital.com/p/55z](https://arbital.com/p/55z)):\nIn the [Diseasitis problem](https://arbital.com/p/22s), 20% of the patients in a screening population have Diseasitis, 90% of sick patients will turn a chemical strip black, and 30% of healthy patients will turn a chemical strip black. We can use the form of Bayes' rule above to justify solving this problem via the calculation $(1 : 4) \\times (3 : 1) = (3 : 4).$\n%%\n\nIf instead of treating the ratios as odds, we actually calculate out the numbers for each term of the equation, we instead get the calculation $\\frac{1}{4} \\times \\frac{3}{1} = \\frac{3}{4},$ or $0.25 \\times 3 = 0.75.$\n\nIf we try to directly interpret this, it says: \"If a patient starts out 0.25 times as likely to be sick as healthy, and we see a test result that is 3 times as likely to occur if the patient is sick as if the patient is healthy, we conclude the patient is 0.75 times as likely to be sick as healthy.\"\n\nThis is valid reasoning, and we call it the *proportional* form of Bayes' rule. To get the probability back out, we reason that if there's 0.75 sick patients to every 1 healthy patient in a bag, the bag comprises 0.75/(0.75 + 1) = 3/7 = 43% sick patients.\n\n# Spotlight visualization\n\nOne way of looking at this result is that, since odds ratios are equivalent under multiplication by a positive constant, we can fix the right side of the odds ratio as equaling 1 and ask about what's on the left side. This is what we do when seeing the calculation as $(0.25 : 1) \\cdot (3 : 1) = (0.75 : 1),$ the form suggested by the theorem proved above.\n\nWe could visualize Bayes' rule as a pair of spotlights with different starting intensities, that go through lenses that amplify or reduce each incoming unit of light by a fixed multiplier. 
In the [Diseasitis](https://arbital.com/p/22s) case, if we fix the right-side blue beam as having a starting intensity of 1 and a multiplying lens of 1, and we fix the left-side beam of having a starting intensity of 0.25 and a multiplying lens of 3x, then the result gives us a visualization of the calculation prescribed by Bayes' rule:\n\n![bayes lights](https://i.imgur.com/BIgoE87.png?0)\n\nNote the similarity to a [waterfall diagram](https://arbital.com/p/1wy). The main thing the spotlight visualization adds is that we can imagine varying the absolute intensities of the lights and lenses, while preserving their relative intensities, in such a way as to make the right-side beams and lenses equal 1.\n\n[draw the pre-proportional, odds form of the spotlight visualization.](https://arbital.com/p/fixme:)\n\n%todo: add example problem in proportional/spotlight form%\n\n# Usefulness in informal argument\n\nThe proportional form of Bayes' rule is perhaps the fastest way of describing Bayesian reasoning that sounds like it ought to be true. If you were having a fictional character suddenly give a Bayesian argument in the middle of a story being read by many people who'd never heard of Bayes' rule, you might have them [say](http://hpmor.com/chapter/86):\n\n> \"Suppose the Dark Mark is certain to continue while the Dark Lord's sentience lives on, but a priori we'd only have guessed a twenty percent chance of the Dark Mark continuing to exist after the Dark Lord dies. Then the observation, \"The Dark Mark has not faded\" is five times as likely to occur in worlds where the Dark Lord is alive as in worlds where the Dark Lord is dead. Is that really commensurate with the prior improbability of immortality? Let's say the prior odds were a hundred-to-one against the Dark Lord surviving. If a hypothesis is a hundred times as likely to be false versus true, and then you see evidence five times more likely if the hypothesis is true versus false, you should update to believing the hypothesis is twenty times as likely to be false as true.\"\n\nSimilarly, if you were a doctor trying to explain the meaning of a positive test result to a patient, you might say: \"If we haven't seen any test results, patients like you are a thousand times as likely to be healthy as sick. This test is only a hundred times as likely to be positive for sick as for healthy patients. So now we think you're ten times as likely to be healthy as sick, which is still a pretty good chance!\"\n\n[Visual diagrams](https://arbital.com/p/1wy) and special notation for [odds](https://arbital.com/p/1x5) and [relative likelihoods](https://arbital.com/p/1rq) might make Bayes' rule more intuitive, but the proportional form is probably the most valid-*sounding* thing that *is* quantitatively correct that you can say in three sentences.\n\n[write a from-scratch Standalone Intro of the proportional form of Bayes' rule in particular, using the Diseasitis example and going from frequency diagram to waterfall to spotlight, with no proofs, just to justify the proportional form. 
add to Main a statement that if you can phrase things in proportional form, there exists a Standalone Intro that justifies it quickly.](https://arbital.com/p/fixme:)", "date_published": "2016-10-10T22:24:58Z", "authors": ["Viktor Riabtsev", "Alexei Andreev", "Eric Rogstad", "Himanshu Chaturvedi", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky", "V N"], "summaries": ["Suppose that Professor Plum and Miss Scarlet are two suspects in a murder, and that we start out thinking that Professor Plum is twice as likely to have committed the murder as Miss Scarlet. We then discover that the victim was poisoned. We think that Professor Plum is around one-fourth as likely to use poison as Miss Scarlet. Then after observing the victim was poisoned, we should think Plum is around half as likely to have committed the murder as Scarlet: $2 \\times \\dfrac{1}{4} = \\dfrac{1}{2}.$ This reasoning is valid by [Bayes' rule](https://arbital.com/p/1lz)."], "tags": ["C-Class"], "alias": "1zm"} {"id": "390aa450a95c238ccebbf1686863546a", "title": "Bayes' rule: Guide", "url": "https://arbital.com/p/bayes_rule_guide", "source": "arbital", "source_type": "text", "text": "Bayes' rule or Bayes' theorem is the law of probability governing *the strength of evidence* - the rule saying *how much* to revise our probabilities (change our minds) when we learn a new fact or observe new evidence.\n\nYou may want to learn about Bayes' rule if you are:\n\n- A professional who uses statistics, such as a scientist or doctor;\n- A computer programmer working in machine learning;\n- A human being.\n\nAs Philip Tetlock found when studying \"superforecasters\", people who were especially good at predicting future events:\n\n> The superforecasters are a numerate bunch: many know about Bayes' theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes' theorem is Bayes' core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.

— Philip Tetlock and Dan Gardner, [_Superforecasting_](https://arbital.com/p/https://en.wikipedia.org/wiki/Superforecasting)\n\n## Learning Bayes' rule\n\nThis guide to Bayes' rule uses Arbital's technology to allow for multiple flavors of introduction. They vary by technical level, speed, and topics covered. After you pick your path, remember that you can still switch between pages, in particular by using the \"Say what?\" and \"Go faster\" buttons.\n\n[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)\n\n%%%wants-requisite([https://arbital.com/p/62d](https://arbital.com/p/62d)):\n%%wants-requisite([https://arbital.com/p/62f](https://arbital.com/p/62f)):\n%box:\n*Your path will go over all forms of Baye's Rule, along with developing deep appreciation for its scientific usefulness. Your path will contain 12 pages:*\n\n * *Frequency diagrams: A first look at Bayes*\n * *Waterfall diagrams and relative odds*\n * *Introduction to Bayes' rule: Odds form*\n * *Bayes' rule: Proportional form*\n * *Extraordinary claims require extraordinary evidence*\n * *Ordinary claims require ordinary evidence*\n * *Bayes' rule: Log-odds form*\n * *Shift towards the hypothesis of least surprise*\n * *Bayes' rule: Vector form*\n * *Belief revision as probability elimination*\n * *Bayes' rule: Probability form*\n * *Bayesian view of scientific virtues*\n%\n\n%start-path([https://arbital.com/p/61b](https://arbital.com/p/61b))%\n%%\n%%!wants-requisite([https://arbital.com/p/62f](https://arbital.com/p/62f)):\n%box:\nNo time to waste! Let's plunge directly into [a single-page abbreviated introduction to Bayes' rule](https://arbital.com/p/693).\n%\n%%\n%%%\n\n%%%!wants-requisite([https://arbital.com/p/62d](https://arbital.com/p/62d)):\n%%wants-requisite([https://arbital.com/p/62f](https://arbital.com/p/62f)):\n%box:\n*Your path will teach you the basic odds form of Bayes' rule at a reasonable pace and then delve into the deep mysteries of the Bayes' Rule! Your path will contain 8 pages:*\n\n* *Frequency diagrams: A first look at Bayes*\n* *Waterfall diagrams and relative odds*\n* *Introduction to Bayes' rule: Odds form*\n* *Belief revision as probability elimination*\n* *Extraordinary claims require extraordinary evidence*\n* *Ordinary claims require ordinary evidence*\n* *Shift towards the hypothesis of least surprise*\n* *Bayesian view of scientific virtues*\n%\n\n%start-path([https://arbital.com/p/62f](https://arbital.com/p/62f))%\n%%\n\n%%!wants-requisite([https://arbital.com/p/62f](https://arbital.com/p/62f)):\n%box:\n*Your path will teach you the basic odds form of Bayes' rule at a reasonable pace. It will contain 3 pages:*\n\n* *Frequency diagrams: A first look at Bayes*\n* *Waterfall diagrams and relative odds*\n* *Introduction to Bayes' rule: Odds form*\n%\n\n%start-path([https://arbital.com/p/62c](https://arbital.com/p/62c))%\n%%\n%%%", "date_published": "2016-10-25T21:43:14Z", "authors": ["Joe Zeng", "Alexei Andreev", "Eric Rogstad", "Tim Huegerich", "Nate Soares", "John John", "Eric Bruylant", "Kira Kutscher", "Chris Cooper", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Guide", "A-Class"], "alias": "1zq"} {"id": "361b589c7efc0a42c66a7a859ea05ee2", "title": "Path: Multiple angles on Bayes's Rule", "url": "https://arbital.com/p/bayes_rule_details", "source": "arbital", "source_type": "text", "text": "Congratulations! You've now learned multiple angles and approaches to Bayes' rule. 
Please continue on to the next step of this path!", "date_published": "2016-07-10T21:46:05Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A page with all the details of Bayes' rule as requisites, to help autogenerate a sequence that includes all the requisites."], "tags": ["Just a requisite", "B-Class"], "alias": "207"} {"id": "da12a718d2e7b4c1bb0276ab64220c95", "title": "Ignorance prior", "url": "https://arbital.com/p/ignorance_prior", "source": "arbital", "source_type": "text", "text": "An ignorance prior is a a [prior](https://arbital.com/p/1rm) [probability function](https://arbital.com/p/1zj) on some problem of interest, usually with the intended properties of being simple to describe and facilitating good learning from the evidence. A classic example would be the [inductive prior](https://arbital.com/p/21b) for [Laplace's Rule of Succession](https://arbital.com/p/21c).", "date_published": "2016-03-03T22:10:51Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "219"} {"id": "4be6d0067799c6a8e5d1dd5884de2827", "title": "Inductive prior", "url": "https://arbital.com/p/inductive_prior", "source": "arbital", "source_type": "text", "text": "An \"inductive prior\" is a state of belief, before seeing any evidence, which is conducive to learning when the evidence finally appears. A classic example would be observing a coin come up heads or tails many times. If the coin is biased to come up heads 1/4 of the time, the inductive prior from [Laplace's Rule of Succession](https://arbital.com/p/21c) will start predicting future flips to come up tails with 3/4 probability. The [maximum entropy prior](https://arbital.com/p/) for the coin, which says that every coinflip has a 50% chance of coming up heads and that all sequences of heads and tails are equally probable, will never start to predict that the next flip will be heads, even after observing the coin come up heads thirty times in a row.\n\nThe prior in [https://arbital.com/p/11w](https://arbital.com/p/11w) is another example of an inductive prior - far more powerful, far more complicated, and [entirely unimplementable on physically possible hardware](https://arbital.com/p/1mk).", "date_published": "2016-03-03T04:10:29Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "21b"} {"id": "383fd59a20c715b47d619543964dbb82", "title": "Laplace's Rule of Succession", "url": "https://arbital.com/p/laplace_rule_of_succession", "source": "arbital", "source_type": "text", "text": "# Theorem and proof\n\nSuppose a sequence $X_1, \\dots, X_n$ of binary values (0 or 1), e.g., a potentially non-fair coin which comes up heads or tails on each flip.\n\nLaplace's Rule of succession says that if:\n\n- 1. We think each coinflip $X_i$ has an independent, unknown likelihood $f$ of coming up heads, and\n- 2. 
We think that all values of $f$ between 0 and 1 have equal probability density *a priori*...\n\nThen, after observing $M$ heads and $N$ tails, the expected probability of heads on the next coinflip is:\n\n$\\dfrac{M + 1}{M + N + 2}$\n\nProof:\n\nFor a hypothetical value of $f$, each coinflip observed has a [likelihood](https://arbital.com/p/1rq) of $f$ if heads or $1 - f$ if tails.\n\nThe [prior](https://arbital.com/p/1rm) is uniform between 0 and 1, so a prior density of 1 everywhere.\n\nBy [Bayes's Rule](https://arbital.com/p/1zj), after seeing M heads and N tails, the [posterior](https://arbital.com/p/1rp) probability density over $f$ is *proportional* to $1 \\cdot f^M(1 - f)^N.$\n\nThen the [normalizing constant](https://arbital.com/p/1rk) is: $\\int_0^1 f^M(1 - f)^N \\operatorname{d}\\!f = \\frac{M!N!}{(M + N + 1)!}.$\n\nSo the posterior probability density function is $f^M(1 - f)^N \\frac{(M + N + 1)!}{M!N!}.$\n\nIntegrating this function, times $f,$ from 0 to 1, will yield the marginal probability of getting heads on the next flip.\n\nThe answer is thus:\n\n$\\dfrac{(M+1)!N!}{(M + N + 2)!} \\cdot \\dfrac{(M + N + 1)!}{M!N!} = \\dfrac{M + 1}{M + N + 2}.$\n\n# Simpler proof by combinatorics\n\nAlthough Laplace's Rule of Succession was originally proved (by Thomas Bayes) by finding the posterior probability density and integrating, and the proof of Laplace's Rule illustrates the core idea of an [inductive prior](https://arbital.com/p/21b) in Bayesianism, a simpler intuition for the proof also exists.\n\nConsider the problem originally posed by Thomas Bayes: An initial billiard is rolled back and forth between the left and right edges of an ideal billiards table until friction brings it to a halt. We then roll M + N additional billiard balls, and observe that M halt to the left of the initial billiard, and N halt to the right of it. If this is all we know, what is the probability the next ball halts on the left, or right?\n\nSuppose that we rolled a total of 5 additional billiards, and 2 halted to the left of the original, and 3 halted to the right. Then, using **|** to symbolize the initial billiard, the billiards would have come to rest in the order:\n\n- **LL|RRR**\n\nSuppose we now roll a new billiard, symbolized by **+**, until it comes to a halt. It's equally likely to appear at:\n\n- **+LL|RRR**\n- **L+L|RRR**\n- **LL+|RRR**\n- **LL|+RRR**\n- **LL|R+RR**\n- **LL|RR+R**\n- **LL|RRR+**\n\nThis means there are 3 ways the ball could be ordered on the left of the **|**, and 4 ways it could be ordered on the right. Since all left-to-right orderings of 7 randomly rolled billiard balls are equally likely a priori, we assign 3/7 probability that the ball comes to a rest on the left of the original ball's position.\n\nSee also the [Wikipedia page](https://en.wikipedia.org/wiki/Rule_of_succession).\n\n# Use and abuse\n\nLaplace's Rule of Succession assumes that all prior values of the frequency $f$ are undistinguished *a priori* in our subjective knowledge.\n\nFor example, Laplace used the rule to estimate a probability of the sun rising tomorrow, given that it had risen every day for the past 5000 years, and arrived at odds of around 1826251:1. But today when we have physical knowledge of the Sun's operation, not every possible 'rate at which the Sun rises each day' is undistinguished. 
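As an aside, Laplace's 1826251 : 1 figure is easy to reproduce from the rule itself; here is a minimal sketch (Python; reading '5000 years' as 5000 × 365.25 observed sunrises with no failures is my assumption):

```python
from fractions import Fraction

def rule_of_succession(successes, failures):
    # Expected probability of success on the next trial, given a uniform prior.
    return Fraction(successes + 1, successes + failures + 2)

sunrises = int(5000 * 365.25)        # 1,826,250 observed sunrises
p_rise = rule_of_succession(sunrises, 0)

print(p_rise)                        # 1826251/1826252
print(p_rise / (1 - p_rise))         # odds of 1826251 : 1 for another sunrise
```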
Furthermore, even in Laplace's time, he should have perhaps thought it especially likely a priori that \"the Sun always rises\" and \"the Sun never rises\" were distinguished as unusually likely frequencies of the Sun rising, a priori. \n\nThe Rule of Succession follows from assuming approximate ignorance about prior frequencies. It does not, of itself, justify this assumption. Variations of the rule of succession are obtainable by taking different priors, corresponding to different views of what should count as uninformative. See discussion on the Wikipedia page on [non-informative priors](https://www.wikipedia.com/en/Prior_probability#/Uninformative_priors). For example if starting with the Jeffreys' prior, then after observing $M$ heads and $N$ tails, the expected probability of heads on the next coinflip is:\n\n$\\dfrac{M + \\dfrac{1}{2}}{M + N + 1}$\n\n# Nomenclature\n\nLaplace's Rule of Succession was the famous problem proved by Thomas Bayes in \"An Essay towards solving a Problem in the Doctrine of Chances\", read to the Royal Society in 1763, after Bayes's death. Pierre-Simon Laplace, the first systematizer of what we now know as Bayesian reasoning, was so impressed by this theorem that he named the central theorem of his new discipline after Thomas Bayes. The original theorem proven by Bayes was popularized by Laplace in arguments about the problem of induction, and so became known as Laplace's Rule of Succession.", "date_published": "2016-10-31T11:29:12Z", "authors": ["Alexei Andreev", "Jakub Łopuszański", "Owen Cotton-Barratt", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["Suppose you roll an initial billiards ball on a billiards table, letting it bounce off the left and right sides until it comes to a halt. You mark down where the billiard ball halted. You then roll additional billiards, and [observe](https://arbital.com/p/1ly) that M come to rest to the left of the original billiard, and N halt to the right of the original billiard. If this is all someone knows, what probability should they assign that the next billiard comes to halt on the left, or the right? Laplace's Rule of succession says that the [odds](https://arbital.com/p/1rb) are (M + 1) : (N + 1) for left vs. right. If we flip a coin with an unknown bias and observe 2 heads and 7 tails, our probability of getting heads next time is (2 + 1)/(2 + 7 + 2) = 3/11. On the very first flip, it's 1 : 1 or 1/2."], "tags": ["C-Class"], "alias": "21c"} {"id": "8254755688e51b85c3a9bd95242d3157", "title": "Extraordinary claims require extraordinary evidence", "url": "https://arbital.com/p/bayes_extraordinary_claims", "source": "arbital", "source_type": "text", "text": "[Bayes' rule](https://arbital.com/p/1lz) tells us how strong a piece of evidence has to be in order to support a given hypothesis. This lets us see whether a piece of evidence is sufficient, or insufficient, to drive the probability of a hypothesis to over 50%.\n\nFor example, consider the [sparking widgets](https://arbital.com/p/559) problem:\n\n> 10% of widgets are bad and 90% are good. 4% of good widgets emit sparks, and 12% of bad widgets emit sparks. Can you calculate in your head what percentage of sparking widgets are bad?\n\nThe prior odds are 1 : 9 for bad widgets vs. good widgets.\n\n12% of bad widgets and 4% of good widgets emit sparks, so that's a likelihood ratio of 3 : 1 for sparking (bad widgets are three times as likely to emit sparks).\n\n$(1 : 9 ) \\times (3 : 1) \\ = \\ (3 : 9) \\ \\cong \\ (1 : 3)$ posterior odds for bad vs. 
good sparking widgets. So 1/4 of sparking widgets are bad. \n\nBad widgets started out relatively rare: 1 in 10. We applied a test — looking for sparks — that was only 3 times more likely to identify bad widgets as opposed to good ones. *The evidence was weaker than the prior improbability of the claim.*\n\nThis doesn't mean we toss out the evidence and ignore it. It does mean that, after updating on the observation of sparkiness, we only gave 25% posterior probability to the widget being bad — the probability didn't go over 50%.\n\nWhat would need to change to drive the probability of widget badness to over 50%? We would need evidence with a more extreme likelihood ratio, more extreme than the (9 : 1) prior odds. For example, if instead bad widgets were 50% likely to spark and good widgets were 5% likely to spark, the posterior odds would go to (10 : 9) or 53%.\n\nIn other words: For a previously implausible proposition $X$ to end up with a high posterior probability, the likelihood ratio for the new evidence favoring $X$ over its alternatives, needs to be *more extreme* than the prior odds against $X.$\n\nThis is the quantitative argument behind the qualitative statement that \"extraordinary claims require extraordinary evidence\" (a claim popularized by Carl Sagan, which dates back to at least Pierre-Simon Laplace).\n\nThat is: an \"[extraordinary claim](https://arbital.com/p/26y)\" is one with a low [prior probability](https://arbital.com/p/1rm) in advance of considering the evidence, and \"extraordinary evidence\" is evidence with an extreme [likelihood ratio](https://arbital.com/p/1rq) favoring the claim over its alternatives.\n\n# What makes evidence extraordinary?\n\nThe likelihood ratio is defined as:\n\n$$\\text{Likelihood ratio} = \\dfrac{\\text{Probability of seeing the evidence, assuming the claim is true}}{\\text{Probability of seeing the evidence, assuming the claim is false}}$$\n\nTo obtain an extreme likelihood ratio, the bottom of the fraction has to be very low. The top of the fraction being very high doesn't help much. If the top of the fraction is 99% and the bottom is 70%, that's still not a very extreme ratio, and it doesn't help much if the top is 99.9999% instead.\n\nSo to get extremely strong evidence, we need to see an observation which is very _improbable,_ given \"business as usual,\" but fairly likely according to the extraordinary claim. This observation would be deserving of the title, \"extraordinary evidence\".\n\n# Example of an extraordinary claim and ordinary evidence: Bookcase aliens.\n\nConsider the following hypothesis: What if there are Bookcase Aliens who teleport into our houses at night and drop off bookcases?\n\nBob offers the following evidence for this claim: \"Last week, I visited my friend's house, and there was a new bookcase there. If there were no bookcase aliens, I wouldn't have expected that my friend would get a new bookcase. But if there are Bookcase Aliens, then the probability of my finding a new bookcase there was much higher. Therefore, my observation, 'There is a new bookcase in my friend's house,' is strong evidence supporting the existence of Bookcase Aliens.\"\n\nIn an intuitive sense, we have a notion that Bob's evidence \"There is a new bookcase in my friend's house\" is not as extraordinary as the claim \"There are bookcase aliens\" - that the evidence fails to lift the claim. 
Bayes's Rule makes this statement precise.\n\nBob is, in fact, correct that his observation, \"There's a new bookcase in my friend's house\", is indeed evidence favoring the Bookcase Aliens. Depending on how long it's been since Bob last visited that house, there might ceteris paribus be, say, a 1% chance that there would be a new bookcase there. On the other hand, the Bookcase Aliens hypothesis might assign, say, 50% probability that the Bookcase Aliens would target this particular house among others. If so, that's a [likelihood ratio](https://arbital.com/p/1rq) of 50:1 favoring the Bookcase Aliens hypothesis.\n\nHowever, a reasonable prior on Bookcase Aliens would assign this a very *low* [prior probability](https://arbital.com/p/1rm) given our other, previous observations of the world. Let's be conservative and assign odds of just 1 : 1,000,000,000 against Bookcase Aliens. Then to raise our [posterior belief](https://arbital.com/p/1rp) in Bookcase Aliens to somewhere in the \"pragmatically noticeable\" range of 1 : 100, we'd need to see evidence with a cumulative likelihood ratio of 10,000,000 : 1 favoring the Bookcase Aliens. 50 : 1 won't cut it.\n\nWhat would need to change for the observation \"There's a new bookcase in my friend's house\" to be convincing evidence of Bookcase Aliens, compared to the alternative hypothesis of \"business as usual\"?\n\nAs suggested by the Bayesian interpretation of [strength of evidence](https://arbital.com/p/22x), what we need to see is an observation which is nigh-impossible if there are *not* bookcase aliens. We would have to believe that, [conditional](https://arbital.com/p/1rj) on \"business as usual\" being true, the [likelihood](https://arbital.com/p/1rq) of seeing a bookcase was on the order of 0.00000001%. That would then take the [likelihood ratio](https://arbital.com/p/1rq), aka strength of evidence, into the rough vicinity of a billion to one favoring Bookcase Aliens over \"business as usual\".\n\nWe would still need to consider whether there might be *other* alternative hypotheses besides Bookcase Aliens and \"business as usual\", such as a human-operated Bookcase Conspiracy. But at least we wouldn't be dealing with an observation that was so unsurprising (conditional on business as usual) as to be unable to support *any* kind of extraordinary claim.\n\nHowever, if instead we suppose that Bookcase Aliens are allegedly 99.999999% probable to add a bookcase to Bob's friend's house, very little changes - the likelihood ratio is 99.99999% : 1% or 100 : 1 instead of 50 : 1. To obtain an extreme likelihood ratio, we mainly need a tiny denominator rather than a big numerator. In other words, \"extraordinary evidence\".\n\n# What makes claims extraordinary?\n\nAn obvious next question is what makes a claim 'extraordinary' or 'ordinary'. 
This is a [deep separate topic](https://arbital.com/p/26y), but as an example, consider the claim that the Earth is becoming warmer due to carbon dioxide being added to its atmosphere.\n\nTo evaluate the ordinariness or extraordinariness of this claim:\n\n- We _don't_ ask whether the future consequences of this claim seem extreme or important.\n- We _don't_ ask whether the policies that would be required to address the claim are very costly.\n- We ask whether \"carbon dioxide warms the atmosphere\" or \"carbon dioxide fails to warm the atmosphere\" seems to conform better to the *deep, causal* generalizations we already have about carbon dioxide and heat.\n- *If* we've already considered the deep causal generalizations like those, we don't ask about generalizations *causally downstream* of the deep causal ones we've already considered. (E.g., we don't say, \"But on every observed day for the last 200 years, the global temperature has stayed inside the following range; it would be 'extraordinary' to leave that range.\")\n\nThese tests suggest that \"Large amounts of added carbon dioxide will incrementally warm Earth's atmosphere\" would have been an 'ordinary' claim in advance of trying to find any evidence for or against it - it's just how you would expect a greenhouse gas to work, more or less. Thus, one is not entitled to demand a prediction made by this hypothesis that is wildly unlikely under any other hypothesis before believing it.\n\n# Incremental updating\n\nA key feature of the Bookcase Aliens example is that the followers of Bayes' rule acknowledges the observation of a new bookcase as being, locally, a single piece of evidence with a 50 : 1 likelihood ratio favoring Bookcase Aliens. The Bayesian doesn't toss the observation out the window because it's *insufficient* evidence; it just gets accumulated into the pool. If you visit house after house, and see new bookcase after new bookcase, the Bayesian slowly, incrementally, begins to wonder if something strange is going on, rather than dismissing each observation as 'insufficient evidence' and then forgetting it.\n\nThis stands in contrast to the instinctive way humans often behave, where, having concluded that they should not believe in Bookcase Aliens on the basis of the evidence in front of them, they discard that evidence entirely, denounce it, and say that it was never any evidence at all. (This is \"[treating arguments like soldiers](https://arbital.com/p/https://wiki.lesswrong.com/wiki/Arguments_as_soldiers)\" and acting like any evidence in favor of a proposition has to be \"defeated.\")\n\nThe Bayesian just says \"yes, that is evidence in favor of the claim, but it's not quantitatively enough evidence.\" This idiom also stands in contrast to the practice of treating any concession an opponent makes as a victory. If true claims are supposed to have all their arguments upheld and false claims are supposed to have all their enemy arguments defeated, then a single undefeated claim of *support* stands as a proof of victory, no matter how strong or weak the evidence that it provides. 
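To put numbers on the 'house after house' accumulation described above, here is a minimal sketch (Python), using the purely hypothetical 1 : 1,000,000,000 prior and the 50 : 1 per-bookcase likelihood ratio from the Bookcase Aliens example:

```python
# Odds for : against Bookcase Aliens, starting from the hypothetical prior.
odds_for, odds_against = 1, 1_000_000_000

for houses in range(1, 7):
    odds_for *= 50    # one more house found with an unexplained new bookcase
    print(houses, odds_for, ':', odds_against)

# After 4 bookcases the odds are 6,250,000 : 1,000,000,000, still about 1 : 160 against.
# After 6 bookcases they are 15,625,000,000 : 1,000,000,000, roughly 16 : 1 in favor.
```

Each observation moves the odds by the same factor; no single one of them comes close to carrying the claim on its own.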
Not so with Bayesians — a Bayesian considers the bookcase observation to be locally a piece of evidence favoring Bookcase Aliens, just massively insufficient evidence.\n\n# Overriding evidence\n\nIf you think that a proposition has prior odds of 1 to a $10^{100}$, and then somebody presents evidence with a likelihood ratio of $10^{94}$ to one favoring the proposition, you shouldn't say, \"Oh, I guess the posterior odds are 1 to a million.\" You should instead question whether either (a) you were wrong about the prior odds or (b) the evidence isn't as strong as you assessed.\n\nIt's not that hard to end up believing a hypothesis that had very low prior odds. For example, whenever you look at the exact pattern of 10 digits generated by a random number generator, you're coming to believe a hypothesis that had prior odds on the order of ten billion to 1 against it.\n\nBut this should only happen with _true_ hypotheses. It's much rarer to find strong support for false hypotheses. Indeed, \"strong evidence\" is precisely \"that sort of evidence we almost *never* see, when the proposition turns out to be false\".\n\nImagine tossing a fair coin at most 300 times, and asking how often the sequence of heads and tails that it generates along the way, ever supports the false hypothesis \"this coin comes up heads 3/4ths of the time\" strongly over the true hypothesis \"this coin is fair\". As you can verify using [this code](https://arbital.com/p/https://gist.github.com/Soares/941bdb13233fd0838f1882d148c9ac14), the sequence of coinflips will at some point support the false hypothesis at the 10 : 1 level on about 8% of runs; it will at some point support the false hypothesis at the 100 : 1 level on about 0.8% of runs, and it will at some point support the false hypothesis at the 1000 : 1 level on about 0.08% of runs. (Note that we are less and less likely to be more and more deceived.)\n\nSeeing evidence with a strength of $(10^{94} : 1)$ / 94 orders of magnitude / 312 [bits of evidence](https://arbital.com/p/1zh) supporting a false hypothesis should only happen to you, on average, once every IT DIDN'T HAPPEN.\n\nWitnessing an observation that truly has a $10^{-94}$ probability of occurring if the hypothesis is false, in a case where the hypothesis *is in fact false,* is something that will not happen to anyone even once over the expected lifetime of this universe.\n\nSo if you think that the prior odds for a coin being unfair are $(1 : 10^{100})$ against, and then you see the coin flipped 312 times and coming up heads each time... 
you do not say, \"Well, my new posterior odds are $(1 : 10^6)$ against the coin being unfair.\" You say, \"I guess I was wrong about the prior odds being that low.\"", "date_published": "2016-10-08T02:52:18Z", "authors": ["Alexei Andreev", "sdfsdf sdfsdf", "Nate Soares", "Grady Simon", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Extraordinary claims", "Needs summary", "B-Class"], "alias": "21v"} {"id": "65637a28d936968a19f073cf7ba4e3ff", "title": "Bayesian view of scientific virtues", "url": "https://arbital.com/p/bayes_science_virtues", "source": "arbital", "source_type": "text", "text": "A number of scientific virtues are explained intuitively by Bayes' rule, including:\n\n- __Falsifiability:__ A good scientist should say what they do *not* expect to see if a theory is true.\n- __Boldness:__ A good theory makes bold experimental predictions (that we wouldn't otherwise expect)\n- __Precision:__ A good theory makes precise experimental predictions (that turn out correct)\n- __Falsificationism:__ Acceptance of a scientific theory is always provisional; rejection of a scientific theory is pretty permanent.\n- __Experimentation:__ You find better theories by making observations, and then updating your beliefs.\n\n# Falsifiability\n\n\"Falsifiability\" means saying which events and observations should definitely *not* happen if your theory is true.\n\nThis was first popularized as a scientific virtue by Karl Popper, who wrote, in a famous critique of Freudian psychoanalysis:\n\n> Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning him (a case of repression) could not possibly be predicted or excluded by Freud’s theory; *the theory was compatible with everything that could happen.*\n\nIn a Bayesian sense, we can see a hypothesis's falsifiability as a requirement for obtaining strong [likelihood ratios](https://arbital.com/p/1rq) in favor of the hypothesis, compared to, e.g., the alternative hypothesis \"I don't know.\"\n\nSuppose you're a very early researcher on gravitation, named Grek. Your friend Thag is holding a rock in one hand, about to let it go. You need to predict whether the rock will move downward to the ground, fly upward into the sky, or do something else. That is, you must say how your theory $Grek$ assigns its probabilities over $up, down,$ and $other.$\n\nAs it happens, your friend Thag has his own theory $Thag$ which says \"Rocks do what they want to do.\" If Thag sees the rock go down, he'll explain this by saying the rock wanted to go down. If Thag sees the rock go up, he'll say the rock wanted to go up. Thag thinks that the Thag Theory of Gravitation is a very good one because it can explain any possible thing the rock is observed to do. 
This makes it superior compared to a theory that could *only* explain, say, the rock falling down.\n\nAs a Bayesian, however, you realize that since $up, down,$ and $other$ are [mutually exclusive and exhaustive](https://arbital.com/p/1rd) possibilities, and *something* must happen when Thag lets go of the rock, the [conditional probabilities](https://arbital.com/p/1rj) $\\mathbb P(\\cdot\\mid Thag)$ must sum to $\\mathbb P(up\\mid Thag) + \\mathbb P(down\\mid Thag) + \\mathbb P(other\\mid Thag) = 1.$\n\nIf Thag is \"equally good at explaining\" all three outcomes - if Thag's theory is equally compatible with all three events and produces equally clever explanations for each of them - then we might as well call this $1/3$ probability for each of $\\mathbb P(up\\mid Thag), \\mathbb P(down\\mid Thag),$ and $\\mathbb P(other\\mid Thag)$. Note that Thag theory's is isomorphic, in a probabilistic sense, to saying \"I don't know.\"\n\nBut now suppose Grek make falsifiable prediction! Grek say, \"Most things fall down!\"\n\nThen Grek not have all probability mass distributed equally! Grek put 95% of probability mass in $\\mathbb P(down\\mid Grek)!$ Only leave 5% probability divided equally over $\\mathbb P(up\\mid Grek)$ and $\\mathbb P(other\\mid Grek)$ in case rock behave like bird.\n\nThag say this bad idea. If rock go up, Grek Theory of Gravitation disconfirmed by false prediction! Compared to Thag Theory that predicts 1/3 chance of $up,$ will be likelihood ratio of 2.5% : 33% ~ 1 : 13 against Grek Theory! Grek embarrassed!\n\nGrek say, she is confident rock *does* go down. Things like bird are rare. So Grek willing to stick out neck and face potential embarrassment. Besides, is more important to learn about if Grek Theory is true than to save face.\n\nThag let go of rock. Rock fall down.\n\nThis evidence with likelihood ratio of 0.95 : 0.33 ~ 3 : 1 favoring Grek Theory over Thag Theory.\n\n\"How you get such big likelihood ratio?\" Thag demand. \"Thag never get big likelihood ratio!\"\n\nGrek explain is possible to obtain big likelihood ratio because Grek Theory stick out neck and take probability mass *away* from outcomes $up$ and $other,$ risking disconfirmation if that happen. This free up lots of probability mass that Grek can put in outcome $down$ to make big likelihood ratio if $down$ happen.\n\nGrek Theory win because falsifiable and make correct prediction! If falsifiable and make wrong prediction, Grek Theory lose, but this okay because Grek Theory not Grek.\n\n# Advance prediction\n\nOn the next experiment, Thag lets go of the rock, watches it fall down, and then says, \"Thag Theory assign 100% probability to $\\mathbb P(down\\mid Thag)$.\"\n\nGrek replies, \"Grek think if Thag see rock fly up instead, Thag would've said $\\mathbb P(up\\mid Thag) = 1.$ Thag engage in [hindsight bias](https://en.wikipedia.org/wiki/Hindsight_bias).\"\n\n\"Grek can't prove Thag biased,\" says Thag. \"Grek make ad hominem argument.\"\n\n\"New rule,\" says Grek. \"Everyone say probability assignment *before* thing happens. That way, no need to argue afterwards.\"\n\nThag thinks. \"Thag say $\\mathbb P(up\\mid Thag) = 1$ and $\\mathbb P(down\\mid Thag) = 1$.\"\n\n\"Thag violate [probability axioms](https://en.wikipedia.org/wiki/Probability_axioms),\" says Grek. \"Probability of all [mutually exclusive](https://arbital.com/p/1rd) outcomes must sum to $1$ or less. But good thing Thag say in advance so Grek can see problem.\"\n\n\"That not fair!\" objects Thag. 
\"Should be allowed to say afterwards so nobody can tell!\"\n\n## Formality\n\nThe rule of advance prediction is much more pragmatically important for informal theories than formal ones; and for these purposes, a theory is 'formal' when the theory's predictions are produced in a sufficiently mechanical and determined way that anyone can plug the theory into a computer and get the same answer for what probability the theory assigns.\n\nWhen Newton's Theory of Gravitation was proposed, it was considered not-yet-fully-proven because retrodictions such as the tides, elliptical planetary orbits, and Kepler's Laws, had all been observed *before* Newton proposed the theory. Even so, a pragmatic Bayesian would have given Newton's theory a lot of credit for these retrodictions, because unlike, say, a psychological theory of human behavior, it was possible for *anyone* - not just Newton - to sit down with a pencil and derive exactly the same predictions from Newton's Laws. This wouldn't completely eliminate the possibility that Newton's Theory had in some sense been overfitted to Kepler's Laws and the tides, and would then be incapable of further correct new predictions. But it did mean that, as a formal theory, there could be less pragmatic worry that Thagton was just saying, \"Oh, well, of course my theory of 'Planets go where they want' would predict elliptical orbits; elliptical orbits look nice.\"\n\n## Asking a theory's adherents what the theory says.\n\nThag picks up another rock. \"I say in advance that Grek Theory assign 0% probability to rock going down.\" Thag drops the rock. \"Thag disprove Grek Theory!\"\n\nGrek shakes her head. \"Should ask advocates of Grek Theory what Grek Theory predicts.\" Grek picks up another rock. \"I say Grek Theory assign $\\mathbb P(down\\mid Grek) = 0.95$.\"\n\n\"I say Grek Theory assign $\\mathbb P(down\\mid Grek) = 0$,\" counters Thag.\n\n\"That not how science work,\" replies Grek. \"Thag should say what Thag's Theory says.\"\n\nThag thinks for a moment. \"Thag Theory says rock has 95% probability of going down.\"\n\n\"What?\" says Grek. \"Thag just copying Grek Theory! Also, Thag not say that *before* seeing rocks go down!\"\n\nThag smiles smugly. \"Only Thag get to say what Thag Theory predict, right?\"\n\nAgain for pragmatic reasons, we should *first* ask the adherents of an informal theory to say what the theory predicts (a formal theory can simply be operated by anyone, and if this is not true, we will not call the theory 'formal').\n\nFurthermore, since you can find a fool following any cause, you should ask the smartest or most technically adept advocates of the theory. If there's any dispute about who those are, ask separate representatives from the leading groups. Fame is definitely not the key qualifier; you should ask Murray Gell-Mann and not Deepak Chopra about quantum mechanics, even if more people have heard of Deepak Chopra's beliefs about quantum mechanics than have heard about Murray Gell-Mann. 
If you really can't tell the difference, ask them both, don't ask only Chopra and then claim that Chopra gets to be *the* representative because he is most famous.\n\nThese types of courtesy rules would not be necessary if we were dealing with a sufficiently advanced Artificial Intelligence or ideal rational agent, but it makes sense for human science where people may be motivated to falsely construe another theory's probability assignments.\n\nThis informal rule has its limits, and there may be cases where it seems really obvious that a hypothesis's predictions ought not to be what the hypothesis's adherents claim, or that the theory's adherents are just stealing the predictions of a more successful theory. But there ought to be a *large* (if defeasible) bias in favor of letting a theory's adherents say what that theory predicts.\n\n# Boldness\n\nA few minutes later, Grek is picking up another rock. \"$\\mathbb P(down\\mid Grek) = 0.95$,\" says Grek.\n\n\"$\\mathbb P(down\\mid Thag) = 0.95$,\" says Thag. \"See, Thag assign high probability to outcomes observed. Thag win yet?\"\n\n\"No,\" says Grek. \"[Likelihood ratios](https://arbital.com/p/1rq) 1 : 1 all time now, even if we believe Thag. Thag's theory not pick up advantage. Thag need to make *bold* prediction other theories not make.\"\n\nThag frowns. \"Thag say... rock will turn blue when you let go this time? $\\mathbb P(blue\\mid Thag) = 0.90$.\"\n\n\"That very bold,\" Grek says. \"Grek Theory not say that (nor any other obvious 'common sense' or 'business as usual' theories). Grek think that $\\mathbb P(blue\\mid \\neg Thag) < 0.01$ so Thag prediction definitely has virtue of boldness. Will be big deal if Thag prediction come true.\"\n\n$\\dfrac{\\mathbb P(Thag\\mid blue)}{\\mathbb P(\\neg Thag\\mid blue)} > 90 \\cdot \\dfrac{\\mathbb P(Thag)}{\\mathbb P(\\neg Thag)}$\n\n\"Thag win now?\" Thag says.\n\nGrek lets go of the rock. It falls down to the ground. It does not turn blue.\n\n\"Bold prediction not correct,\" Grek says. \"Thag's prediction virtuous, but not win. Now Thag lose by 1 : 10 likelihood ratio instead. Very science, much falsification.\"\n\n\"Grek lure Thag into trap!\" yells Thag.\n\n\"Look,\" says Grek, \"whole point is to set up science rules so *correct* theories can win. If wrong theorists lose quickly by trying to be scientifically virtuous, is feature rather than bug. But if Thag try to be good and loses, we shake hands and everyone still think well of Thag. Is normative social ideal, anyway.\"\n\n# Precision\n\nAt a somewhat later stage in the development of gravitational theory, the Aristotelian synthesis of Grek and Thag's theories, \"Most things have a final destination of being at the center of the Earth, and try to approach that final destination\" comes up against Galileo Galilei's \"Most unsupported objects accelerate downwards, and each second of passing time the object gains another 9.8 meters per second of downward speed; don't ask me why, I'm just observing it.\"\n\n\"You're not just predicting that rocks are observed to move downward when dropped, are you?\" says Aristotle. \"Because I'm already predicting that.\"\n\n\"What we're going to do next,\" says Galileo, \"is predict *how long* it will take a bowling ball to fall from the Leaning Tower of Pisa.\" Galileo takes out a pocket stopwatch. \"When my friend lets go of the ball, you hit the 'start' button, and as soon as the ball hits the ground, you hit the 'stop' button. 
We're going to observe exactly what number appears on the watch.\"\n\nAfter some further calibration to determine that Aristotle has a pretty consistent reaction time for pressing the stopwatch button if Galileo snaps his fingers, Aristotle looks up at the bowling ball being held from the Leaning Tower of Pisa.\n\n\"I think it'll take anywhere between 0 and 5 seconds inclusive,\" Aristotle says. \"Not sure beyond that.\"\n\n\"Okay,\" says Galileo. \"I measured this tower to be 45 meters tall. Now, if air resistance is 0, after 3 seconds the ball should be moving downward at a speed of 9.8 * 3 = 29.4 meters per second. That speed increases continuously over the 3 seconds, so the ball's *average* speed will have been 29.4 / 2 = 14.7 meters per second. And if the ball moves at an average speed of 14.7 meters per second, for 3 seconds, it will travel downward 44.1 meters. So the ball should take just a little more than 3 seconds to fall 45 meters. Like, an additional 1/29th of a second or so.\"\n\n![tower cartoon](https://i.imgur.com/kUeMG6w.png?0)\n\n\"Hm,\" says Aristotle. \"This pocketwatch only measures whole seconds, so your theory puts all its probability mass on 3, right?\"\n\n\"Not literally *all* its probability mass,\" Galileo says. \"It takes you some time to press the stopwatch button once you see the ball start to fall, but it also takes you some time to press the button after you see the ball hit the ground. Those two sources of measurement error should *mostly* cancel out, but maybe they'll happen not to on this particular occasion. We don't have all *that* precise or well-tested an experimental setup here. Like, if the stopwatch breaks and we observe a 0, then that will be a *defeat* for Galilean gravity, but it wouldn't imply a *final* refutation - we could get another watch and make better predictions and make up for the defeat.\"\n\n\"Okay, so what probabilities do you assign?\" says Aristotle. \"I think my own theory is about equally good at explaining any falling time between 0 and 5 seconds.\"\n\n![aristotle probability](https://i.imgur.com/LQNDTki.png?0)\n\nGalileo ponders. \"Since we haven't tested this setup yet... I think I'll put something like 90% of my probability mass on a falling time between 3 and 4 seconds, which corresponds to an observable result of the watch showing '3'. Maybe I'll put another 5% probability on air resistance having a bigger effect than I think it should over this distance, so between 4 and 5 seconds or an observable of '4'. Another 4% probability on this watch being slower than I thought, so 4% for a *measured* time between 2 and 3 and an observation of '2'. 0.99% probability on the stopwatch picking this time to break and show '1' (or '0', but that we both agree shouldn't happen), and 0.01% probability on an observation of '5' which basically shouldn't happen for any reason I can think of.\"\n\n![galileo probability](https://i.imgur.com/4CoLkuW.png?0)\n\n\"Well,\" says Aristotle, \"your theory certainly has the scientific virtue of *precision,* in that, by concentrating most of its probability density on a narrow region of possible precise observations, it will gain a great likelihood advantage over *vaguer* theories like mine, which roughly say that 'things fall down' and have made 'successful predictions' each time things fall down, but which don't predict exactly how long they should take to fall. 
If your prediction of '3' comes true, that'll be a 0.9 : 0.2 or 4.5 : 1 likelihood ratio favoring Galilean over Aristotelian gravity.\" \n\n\"Yes,\" says Galileo. \"Of course, it's not enough for the prediction to be precise, it also has to be correct. If the watch shows '4' instead, that'll be a likelihood ratio of 0.05 : 0.20 or 1 : 4 against my theory. It's better to be vague and right than to be precise and wrong.\"\n\nAristotle nods. \"Well, let's test it, then.\"\n\nThe bowling ball is dropped.\n\nThe stopwatch shows 3 seconds.\n\n\"So do you believe my theory yet?\" says Galileo.\n\n\"Well, I believe it somewhere in the range of four and a half times as much as I did previously,\" says Aristotle. \"But that part where you're plugging in numbers like 9.8 and calculations like the *square* of the time strikes me as kinda complicated. Like, if I'm allowed to plug in numbers that precise, and do things like square them, there must be hundreds of different theories I could make which would be that complicated. By the [quantitative form of Occam's Razor](https://arbital.com/p/11w), we need to [penalize the prior probability](https://arbital.com/p/) of your theory for its [algorithmic complexity](https://arbital.com/p/5v). One observation with a likelihood ratio of 4.5 : 1 isn't enough to support all that complexity. I'm not going to believe something that complicated because I see a stopwatch showing '3' just that one time! I need to see more objects dropped from various different heights and verify that the times are what you say they should be. If I say the prior complexity of your theory is, say, [20 bits](https://arbital.com/p/1zh), then 9 more observations like this would do it. Of course, I expect you've already made more observations than that in private, but it only becomes part of the public knowledge of humankind after someone replicates it.\"\n\n\"But of course,\" says Galileo. \"I'd like to check your experimental setup and especially your calculations the first few times you try it, to make sure you're not measuring in feet instead of meters, or forgetting to halve the final speed to get the average speed, and so on. It's a formal theory, but in practice I want to check to make sure you're not making a mistake in calculating it.\"\n\n\"Naturally,\" says Aristotle. \"Wow, it sure is a good thing that we're both Bayesians and we both know the governing laws of probability theory and how they motivate the informal social procedures we're following, huh?\"\n\n\"Yes indeed,\" says Galileo. \"Otherwise we might have gotten into a heated argument that could have lasted for hours.\"\n\n# Falsificationism\n\nOne of the reasons why Karl Popper was so enamored of \"falsification\" was the observation that falsification, in science, is more definite and final than confirmation. A classic parable along these lines is Newtonian gravitation versus General Relativity (Einsteinian gravitation) - despite the tons and tons of experimental evidence for Newton's theory that had accumulated up to the 19th century, there was no sense in which Newtonian gravity had been *finally* verified, and in the end it was finally discarded in favor of Einsteinian gravity. Now that Newton's gravity has been tossed on the trash-heap, though, there's no realistic probability of it *ever* coming back; the discard, unlike the adoption, is final.\n\nWorking in the days before Bayes became widely known, Popper put a *logical* interpretation on this setup. 
Suppose $H \\rightarrow E,$ hypothesis H logically implies that evidence E will be observed. If instead we observe $\\neg E$ we can conclude $\\neg H$ by the law of the contrapositive. On the other hand, if we observe $E,$ we can't logically conclude $H.$ So we can logically falsify a theory, but not logically verify it.\n\nPragmatically, this often isn't how science works.\n\nIn the nineteenth century, anomalies were accumulating in the observation of Uranus's orbit. After taking into account all known influences from all other planets, Uranus still was not *exactly* where Newton's theory said it should be. On the logical-falsification view, since Newtonian gravitation said that Uranus ought to be in a certain precise place and Uranus was not there, we ought to have become [infinitely certain](https://arbital.com/p/) that Newton's theory was false. Several theorists did suggest that Newton's theory might have a small error term, and so be false in its original form.\n\nThe actual outcome was that Urbain Le Verrier and John Couch Adams independently suspected that the anomaly in Uranus's orbit could be accounted for by a previously unobserved eighth planet. And, rather than *vaguely* say that this was their hypothesis, in a way that would just spread around the probability mass for Uranus's location and cause Newtonian mechanics to be not *too* falsified, Le Verrier and Adams independently went on to calculate where the eighth planet ought to be. In 1846, Johann Galle observed Neptune, based on Le Verrier's calculations - a tremendous triumph for Newtonian mechanics.\n\nIn 1859, Urbain Le Verrier recognized another problem: Mercury was not exactly where it should be. While Newtonian gravity did predict that Mercury's orbit should precess (the point of closest approach to the Sun should itself slowly rotate around the Sun), Mercury was precessing by 38 arc-seconds per century more than it ought to be (later revised to 43). This anomaly was harder to explain; Le Verrier thought there was [a tiny planetoid orbiting the Sun inside the orbit of Mercury](https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)).\n\nA bit more than half a century later, Einstein, working on the equations for General Relativity, realized that Mercury's anomalous precession was exactly explained by the equations in their simplest and most elegant form.\n\nAnd that was the end of Newtonian gravitation, permanently.\n\nIf we try to take Popper's logical view of things, there's no obvious difference between the anomaly with Uranus and the anomaly with Mercury. In both cases, the straightforward Newtonian prediction seemed to be falsified. If Newtonian gravitation could bounce back from one logical disconfirmation, why not the other?\n\nFrom a Bayesian standpoint, we can see the difference as follows:\n\nIn the case of Uranus, there was no attractive alternative to Newtonian mechanics that was making better predictions. The current theory seemed to be [strictly confused](https://arbital.com/p/227) about Uranus, in the sense that the current Newtonian model was making confident predictions about Uranus that were much wronger than the theory expected to be on average. This meant that there ought to be *some* better alternative. It didn't say that the alternative had to be a non-Newtonian one. 
The low $\\mathbb P(UranusLocation\\mid currentNewton)$ created a potential for some modification of the current model to make a better prediction with higher $\\mathbb P(UranusLocation\\mid newModel)$, but it didn't say what had to change in the new model.\n\nEven after Neptune was observed, though, this wasn't a *final* confirmation of Newtonian mechanics. While the new model assigned very high $\\mathbb P(UranusLocation\\mid Neptune \\wedge Newton),$ there could, for all anyone knew, be some unknown Other theory that would assign equally high $\\mathbb P(UranusLocation\\mid Neptune \\wedge Other).$ In this case, Newton's theory would have no likelihood advantage versus this unknown Other, so we could not say that Newton's theory of gravity had been confirmed over *every other possible* theory.\n\nIn the case of Mercury, when Einstein's formal theory came along and assigned much higher $\\mathbb P(MercuryLocation\\mid Einstein)$ compared to $\\mathbb P(MercuryLocation\\mid Newton),$ this created a huge likelihood ratio for Einstein over Newton and drove the probability of Newton's theory very low. Even if someday some other theory turns out to be better than Einstein, to do equally well at $\\mathbb P(MercuryLocation\\mid Other)$ and also get even better $\\mathbb P(newObservation\\mid Other),$ the fact that Einstein's theory did do much better than Newton on Mercury tells us that it's *possible* for simple theories to do much better on Mercury, in a simple way, that's definitely not Newtonian. So whatever Other theory comes along will also do better on Mercury than $\\mathbb P(MercuryLocation\\mid Newton)$ in a non-Newtonian fashion, and Newton will just be at a new, huge likelihood disadvantage against this Other theory.\n\nSo - from a Bayesian standpoint - after explaining Mercury's orbital precession, we can't be sure Einstein's gravitation is correct, but we *can* be sure that Newton's gravitation is wrong.\n\nBut this doesn't reflect a logical difference between falsification and verification - everything takes place inside a world of probabilities.\n\n## Possibility of permanent confirmation\n\nIt's worth noting that although Newton's theory of gravitation was false, something *very much like it* was true. So while the belief \"Planets move *exactly* like Newton says\" could only be provisionally accepted and was eventually overturned, the belief, \"All the kind of planets we've seen so far, in the kind of situations we've seen so far, move *pretty much* like Newtonian gravity says\" was much more strongly confirmed.\n\nThis implies that, contra Popper's rejection of the very notion of confirmation, *some* theories can be finally confirmed, beyond all reasonable doubt. E.g., the DNA theory of biological reproduction. No matter what we wonder about quarks, there's no plausible way we could be wrong about the existence of molecules, or about there being a double helix molecule that encodes genetic information. It's reasonable to say that the theory of DNA has been forever confirmed beyond a reasonable doubt, and will never go on the trash-heap of science no matter what new observations may come.\n\nThis is possible because DNA is a non-fundamental theory, given in terms like \"molecules\" and \"atoms\" rather than quarks. 
Even if quarks aren't exactly what we think, there will be something enough *like* quarks to underlie the objects we call protons and neutrons and the existence of atoms and molecules above that, which means the objects we call DNA will still be there in the new theory. In other words, the biological theory of DNA has a \"something *sort of like this* must be true\" theory *underneath* it. The hypothesis that what Joseph Black called 'fixed air' and we call 'carbon dioxide', is in fact made up of one carbon atom and two oxygen atoms, has been permanently confirmed in a way that Newtonian gravity was not permanently confirmed.\n\nThere's some amount of observation which would convince us that all science was a lie and there were fairies in the garden, but short of that, carbon dioxide is here to stay.\n\nNonetheless, in ordinary science when we're trying to figure out controversies, working to Bayes' rule implies that a virtuous scientist should think like Karl Popper suggested:\n\n- Treat disconfirmation as stronger than confirmation;\n- Only provisionally accept hypotheses that have a lot of favorable-seeming evidence;\n- Have some amount of disconfirming evidence and prediction-failures that makes you permanently put a hypothesis on the trash-heap and give up hope on its resurrection;\n- Require a qualitatively *more* powerful kind of evidence than that, with direct observation of the phenomenon's parts and processes in detail, before you start thinking of a theory as 'confirmed'.\n\n# Experimentation\n\nWhatever the likelihood for $\\mathbb P(observation\\mid hypothesis)$, it doesn't change your beliefs unless you actually execute the experiment, learn whether $observation$ or $\\neg observation$ is true, and [condition your beliefs in order to update your probabilities](https://arbital.com/p/1y6).\n\nIn this sense, Bayes' rule can also be said to motivate the experimental method. Though you don't necessarily need a lot of math to realize that drawing an accurate map of a city requires looking at the city. Still, since the experimental method wasn't quite obvious for a lot of human history, it could maybe use all the support it can get - including the central Bayesian idea of \"Make observations to update your beliefs.\"", "date_published": "2017-02-16T17:22:52Z", "authors": ["Eric Bruylant", "Patrick LaVictoire", "Otto Mossberg", "Alexei Andreev", "Eric Rogstad", "Dewi Morgan", "Nate Soares", "Adom Hartell", "Eliezer Yudkowsky"], "summaries": ["The classic scientific virtues of falsifiability (saying what we should *not* expect to see if a hypothesis is true), making bold experimental predictions (that aren't predicted by other theories), and precision (making narrow predictions about quantitative measurements) can be seen in the light of [Bayes' rule](https://arbital.com/p/1lz) as properties which allow theories to gain stronger evidence (greater [likelihood ratios](https://arbital.com/p/1rq)) in their favor when their predictions are correct. We can also interpret traditional ideas like falsificationism - acceptance being provisional, and rejection being more forceful - as features that arise from the probability theory of Bayes' rule."], "tags": ["B-Class"], "alias": "220"} {"id": "3f811bf6caf02987a01bd183c709bf4e", "title": "Strictly confused", "url": "https://arbital.com/p/strictly_confused", "source": "arbital", "source_type": "text", "text": "A hypothesis is \"strictly confused\" by the data if the hypothesis does much worse at predicting the data than it expected to do. 
If, on average, you expect to assign around 1% likelihood to the exact observation you see, and you actually see something to which you assigned 0.000001% likelihood, you are strictly confused.\n\n%%knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)):\nI.e., letting $H$ be a hypothesis and $e_0$ be the data observed from some set $E$ of possible observations, we say that $H$ is \"strictly confused\" when\n\n$$ \\log \\mathbb P(e_0 \\mid H) \\ll \\sum_{e \\in E} \\mathbb P(e \\mid H) \\cdot \\log \\mathbb P(e \\mid H)$$\n\n%%\n\n# Motivation and examples\n\nIn Bayesian reasoning, the main reason to reject a hypothesis is when we find a better hypothesis. Suppose we think a coin is fair, and we flip it 100 times, and we see that the coin comes up \"HHHHHHH...\" or all heads. After doing this 100 times, the hypothesis \"This is a double-headed coin\" has a likelihood ratio of $2^{100} : 1$ favoring it over the \"fair coin\" hypothesis, and the \"double-headed coin\" hypothesis isn't *more* improbable than $2^{-100}$ a priori.\n\nBut this relies on the insight that there's a simple / a priori plausible *alternative* hypothesis that does better. What if the coin is producing TTHHTTHHTTHH and we just never happen to think of 'alternating pairs of tails and heads' as a hypothesis? It's possible to do better by thinking of a better hypothesis, but so far as the 'fair coin' hypothesis sees the world, TTHHTTHH... is no more or less likely than any other possible sequence it could encounter; the first eight coinflips have a probability of $2^{-8}$ and this would have been true no matter which eight coinflips were observed. After observing 100 coinflips, the fair coin hypothesis will assign them a collective probability of $2^{-100},$ and in this sense, no sequence of 100 coinflips is any more 'surprising' or 'confusing' than any other from *within* the perspective of the fair coin hypothesis.\n\nWe can't say that we're 'confused' or 'surprised' on seeing a long sequence of coinflips to which we assigned some very low probability on the order of $2^{-100} \\approx 10^{-30},$ because we expected to assign a probability that low.\n\nOn the other hand, suppose we think that a coin is biased to produce 90% heads and 10% tails, and we flip it 100 times and get some fair-looking sequence like \"THHTTTHTTTTHTHTHHH...\" (courtesy of random.org). Then we *expected* to assign the observed sequence a probability in the range of $0.9^{90} \\cdot 0.1^{10} \\approx 7\\cdot 10^{-15},$ but we *actually* saw a sequence we assigned probability around $0.9^{50} \\cdot 0.1^{50} \\approx 5 \\cdot 10^{-53}.$ We don't need to consider any other hypotheses to realize that we are very confused. We don't need to have *invented* the concept of a 'fair coin', or know that the 'fair coin' hypothesis would have assigned a much higher likelihood in the region of $7 \\cdot 10^{-31},$ to realize that there's something wrong with the current hypothesis.\n\nIn the case of the supposed fair coin that produces HHHHHHH, we only do poorly relative to a better hypothesis 'all heads' that makes a superior prediction. 
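\n\nIn code form, the comparison that defines being strictly confused (the log-likelihood a hypothesis actually assigns to the data, versus the log-likelihood it expected to assign) can be sketched as follows. This is an added illustration rather than anything from the original presentation; the helper names and the example sequence are made up for the purpose.\n\n\timport math\n\n\tdef log_likelihood(sequence, p_heads):\n\t    # log-probability the '90% heads' hypothesis assigns to this exact sequence\n\t    return sum(math.log(p_heads if flip == 'H' else 1 - p_heads) for flip in sequence)\n\n\tdef expected_log_likelihood(num_flips, p_heads):\n\t    # what the hypothesis expects that log-probability to be, on average, for data it generates itself\n\t    return num_flips * (p_heads * math.log(p_heads) + (1 - p_heads) * math.log(1 - p_heads))\n\n\tfair_looking = 'HT' * 50   # a 100-flip sequence with half heads, half tails\n\tprint(log_likelihood(fair_looking, 0.9))    # roughly -120, i.e. around 5e-53 as a probability\n\tprint(expected_log_likelihood(100, 0.9))    # roughly -33, i.e. around 7e-15 as a probability\n\nThe actual score falling far below the expected score is the signal of strict confusion, and notice that no alternative hypothesis had to be mentioned in order to compute it.\n\n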
In the case of the supposed 90%-heads coin that produces a random-looking sequence, we do worse than we expected to do from inside the 90%-heads hypothesis, so we are doing poorly in an absolute, non-relative sense.\n\nBeing strictly confused is a sign that tells us to look for *some* alternative hypothesis in advance of our having any idea whatsoever what that alternative hypothesis might be.\n\n# Distinction from frequentist p-values\n\nThe classical frequentist test for rejecting the null hypothesis involves considering the probability assigned to particular 'obvious'-seeming partitions of the data, and asking if we ended up inside a low-probability partition.\n\nSuppose you think some coin is fair, and you flip the coin 100 times and see a random-looking sequence \"THHTTTHTT...\"\n\nSomeone comes along and says, \"You know, this result is very surprising, given your 'fair coin' theory. You really didn't expect that to happen.\"\n\n\"How so?\" you reply.\n\nThey say, \"Well, among all sequences of 100 coins, only 1 in 16 such sequences start with a string like THHT TTHTT, a palindromic quartet followed by a palindromic quintet. You confidently predicted that had a 15/16 chance of *not* happening, and then you were surprised.\"\n\n\"Okay, look,\" you reply, \"if you'd written down that *particular* prediction in advance and not a lot of others, I might be interested. Like, if I'd already thought that way of partitioning the data — namely, 'palindrome quartet followed by palindrome quintet' vs. '*not* palindrome quartet followed by palindrome quintet' — was a specially interesting and distinguished one, I might notice that I'd assigned the second partition 15/16 probability and then it failed to actually happen. As it is, it seems like you're really reaching.\"\n\nWe can think of the frequentist tests for rejecting the fair-coin hypothesis as a *small* set of 'interesting partitions' that were written down in advance, which are supposed to have low probability given the fair coin. For example, if a coin produces HHHHH HTHHH HHTHH, a frequentist says, \"*Partitioning by number of heads*, the fair coin hypothesis says that on 15 flips we should get between 12 and 3 heads, inclusive, with a probability of 99.3%. You are therefore surprised because this event you assigned 99.3% probability failed to happen. And yes, we're just checking the number of heads and a few other obvious things, not for palindromic quartets followed by palindromic quintets.\"\n\nPart of the point of being a Bayesian, however, is that we try to only reason on the data we actually observed, and not put that data into particular partitions and reason about those partitions. The partitioning process introduces potential subjectivity, especially in an academic setting fraught with powerful incentives to produce 'statistically significant' data - the equivalent of somebody insisting that palindromic quartets and quintets are special, or that counting heads isn't special.\n\nE.g., if we flip a coin six times and get HHHHHT, this is \"statistically significant p < 0.05\" if the researcher decided to flip coins until they got at least one T and then stop, in which case a fair coin has only a 1/32 probability of requiring six or more steps to produce a T. 
If on the other hand the researcher decided to flip the coin six times and then count the number of tails, the probability of getting 1 or fewer T in six flips is 7/64 which is not 'statistically significant'.\n\nThe Bayesian says, \"If I use [the Rule of Succession](https://arbital.com/p/21c) to denote the hypothesis that the coin has an unknown bias between 0 and 1, then the sequence HHHHHT is assigned 1/42 probability by the Rule of Succession and 1/64 probability by 'fair coin', so this is evidence with a likelihood ratio of ~ 1.5 : 1 favoring the hypothesis that the coin is biased - not enough to [overcome](https://arbital.com/p/) any significant [prior improbability](https://arbital.com/p/).\"\n\nThe Bayesian arrives at this judgment by only considering the particular, exact data that was observed, and not any larger partitions of data. To compute the probability flow between two hypotheses $H_1$ and $H_2$ we only need to know the likelihoods of our *exact* observation given those two hypotheses, not the likelihoods the hypotheses assign to any partitions into which that observation can be put, etcetera.\n\nSimilarly, the Bayesian looks at the sequence HHHHH HTHHH HHTHH and says: this specific, exact data that we observed gives us a likelihood ratio of (1/1680 : 1/32768) ~ (19.5 : 1) favoring \"[The coin has an unknown bias](https://arbital.com/p/21c) between 0 and 1\" over \"The coin is fair\". With that already said, the Bayesian doesn't see any need to talk about the total probability of the fair coin hypothesis producing data inside a partition of similar results that could have been observed but weren't.\n\nBut even though Bayesians usually try to avoid thinking in terms of rejecting a null hypothesis using partitions, saying \"I'm strictly confused!\" gives a Bayesian a way of saying \"Well, I know *something's* wrong...\" that doesn't require already having the insight to propose a better alternative, or even the insight to realize that some particular partitioning of the data is worth special attention.", "date_published": "2016-07-04T02:08:40Z", "authors": ["Alexei Andreev", "Roland G", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky"], "summaries": ["A hypothesis is \"strictly confused\" by the data if the hypothesis does much worse at predicting the data than it expected to do. If, on average, you expect to assign around 1% likelihood to the exact observation you see, and you actually see something to which you assigned 0.000001% likelihood, you are strictly confused."], "tags": ["Non-standard terminology", "B-Class"], "alias": "227"} {"id": "40f2550821e7c4b447bb542416c0df1c", "title": "Diseasitis", "url": "https://arbital.com/p/diseasitis", "source": "arbital", "source_type": "text", "text": "A nurse is screening a student population for a certain illness, Diseasitis (lit. 
\"inflammation of the disease\").\n\n- Based on prior epidemiological studies, you expect that around 20% of the students in a screening population like this one will actually have Diseasitis.\n\nYou are testing for the presence of the disease using a color-changing tongue depressor with a sensitive chemical strip.\n\n- Among students with Diseasitis, 90% turn the tongue depressor black.\n- 30% of the students without Diseasitis will also turn the tongue depressor black.\n\nOne of your students comes into the office, takes your test, and turns the tongue depressor black.\n\nGiven only that information, what is the probability that they have Diseasitis?\n\nThis problem is used as a central example in several introductions to Bayes's Rule, including all paths in the [Arbital Guide to Bayes' Rule](https://arbital.com/p/1zq) and the [High-Speed Intro to Bayes' Rule](https://arbital.com/p/693). A simple, unnecessarily difficult calculation of the answer can be found in [https://arbital.com/p/55z](https://arbital.com/p/55z).", "date_published": "2016-10-01T04:01:55Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["A nurse is screening a set of patients for an illness, Diseasitis, using a tongue depressor that tends to turn black in the presence of the disease.\n\n- On average, around 20% of the patients in a population like this one will actually have Diseasitis. (20% prior prevalence.)\n- Among the patients with Diseasitis, 90% turn the tongue depressor black. (90% true positives.)\n- 30% of patients without Diseasitis also turn the tongue depressor black. (30% false positives.)\n\nThe probability that a patient with a blackened tongue depressor has Diseasitis can be found using [Bayes' rule](https://arbital.com/p/693)."], "tags": ["B-Class", "Example problem"], "alias": "22s"} {"id": "8c4ea053be69de18d042a66d0141fe31", "title": "Just a requisite", "url": "https://arbital.com/p/requisite_meta_tag", "source": "arbital", "source_type": "text", "text": "A tag which indicates that a page is mainly meant to serve as an anonymous requisite inside Arbital's requisite system.", "date_published": "2016-02-22T20:06:05Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "22t"} {"id": "c1d6b932b1294dd2b690eaf90893e80b", "title": "Introductory Bayesian problems", "url": "https://arbital.com/p/22w", "source": "arbital", "source_type": "text", "text": "Introductory problems:\n\n1. [https://arbital.com/p/22s](https://arbital.com/p/22s)\n2. [https://arbital.com/p/55b](https://arbital.com/p/55b)\n3. [https://arbital.com/p/559](https://arbital.com/p/559)\n4. [https://arbital.com/p/54v](https://arbital.com/p/54v)\n\n(Hover or click to read them.)", "date_published": "2016-07-10T19:53:09Z", "authors": ["Alexei Andreev", "Nate Soares", "Eric Bruylant", "Eliezer Yudkowsky", "Vojta Kovarik"], "summaries": [], "tags": ["Start", "Needs summary"], "alias": "22w"} {"id": "b364171ef8e1fbe1c1c59fa628c64b4a", "title": "Strength of Bayesian evidence", "url": "https://arbital.com/p/bayes_strength_of_evidence", "source": "arbital", "source_type": "text", "text": "From a Bayesian standpoint, the strength of evidence can be identified with its [relative likelihood](https://arbital.com/p/1rq). 
See the [log odds form of Bayes' rule](https://arbital.com/p/1zh) for a representation which makes this unusually clear.", "date_published": "2016-07-08T13:27:37Z", "authors": ["Nate Soares", "Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "22x"} {"id": "86ca78b962421fce4ced27f19587ab8d", "title": "Path: Insights from Bayesian updating", "url": "https://arbital.com/p/bayes_update_details", "source": "arbital", "source_type": "text", "text": "You've finished reading our current collection of Bayesian insights! There aren't many here yet, but there might be more later. Please continue on to the next part of this path!", "date_published": "2016-03-03T23:30:25Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky"], "summaries": ["A small page that collects the currently explained list of Bayesian insights as requisites, to help Arbital auto-generate a learning path that includes all those requisites."], "tags": ["Just a requisite", "Stub"], "alias": "25y"} {"id": "43626c6a66d7062a02de4b95d1067410", "title": "Finishing your Bayesian path on Arbital", "url": "https://arbital.com/p/bayes_guide_end", "source": "arbital", "source_type": "text", "text": "You've now reached the end of whatever you asked the [Bayes' rule guide](https://arbital.com/p/1zq) to explain to you! \n\nIf you'd like to join in and help create more math content like this, we'd [love to have you](https://arbital.com/p/4d6)!\n\n%%!knows-requisite([https://arbital.com/p/25y](https://arbital.com/p/25y)): If you'd like to keep reading about Bayesian reasoning here on Arbital, there's a couple of less polished pages on real-world insights derived from Bayes' rule — [click here to start on a path to learn them](https://arbital.com/p/https://arbital.com/learn/?path=bayes_update_details,bayes_guide_end).%%\n\nYou might also be interested in reading [this explanation of Arbital's goals.](https://arbital.com/p/58p)\n\nThank you!", "date_published": "2016-08-04T12:21:21Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "25z"} {"id": "bc61474a50464202fc8402500c61ac6c", "title": "Extraordinary claims", "url": "https://arbital.com/p/extraordinary_claims", "source": "arbital", "source_type": "text", "text": "# Summary\n\nWhat makes something count as an 'extraordinary claim' for the purposes of determining whether it [requires extraordinary evidence](https://arbital.com/p/21v)?\n\nBroadly speaking:\n\n- Not-yet-supported complexity or precise details; by [Occam's Razor](https://arbital.com/p/occams_razor), this requires added support from the evidence.\n- Violation of (deep, causal) generalizations; behaving in a way that's out of character for (the lowest level of) the universe.\n\nSome properties that do *not* make a claim inherently 'extraordinary' in the above sense:\n\n- Having future consequences that are important, or extreme-sounding, or that would imply we need to take costly policies.\n- Whether few or many people already believe it.\n- Whether there might be bad reasons to believe it.\n\nThe ultimate grounding for the notion of 'extraordinary claim' would come from [Solomonoff induction](https://arbital.com/p/11w), or some generalization of Solomonoff induction to handle more [https://arbital.com/p/-naturalistic](https://arbital.com/p/-naturalistic) reasoning about the world. 
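\n\nTo attach rough numbers to the first bullet above (an added sketch; the 20-bit figure is an arbitrary stand-in, not something derived on this page): under an Occam-style prior, a claim carrying some number of bits of not-yet-supported detail starts out at correspondingly long odds, and needs a likelihood ratio worth about the same number of bits just to reach even odds.\n\n\timport math\n\n\tunsupported_detail_bits = 20                      # hypothetical complexity penalty for a claim\n\tprior_odds = 2.0 ** -unsupported_detail_bits      # about 1e-06\n\tlikelihood_ratio_needed = 1 / prior_odds          # about 1e+06 to reach 1 : 1 odds\n\tprint(math.log2(likelihood_ratio_needed))         # 20.0 bits of evidence required\n\n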
Since this page is a [Work In Progress](https://arbital.com/p/4v), it currently only lists out the derived heuristics, rather than trying to explain in any great detail how those heuristics might follow from Solomonoff-style reasoning.\n\n# Example 1: Anthropogenic global warming\n\nConsider the idea of [anthropogenic global warming](https://en.wikipedia.org/wiki/Global_warming) as it might have been analyzed in *advance* of observing the actual temperature record. Would the claim, \"Adding lots of carbon dioxide to the atmosphere will result in increased global temperatures and climate change\", be an 'extraordinary' claim requiring extraordinary evidence to verify, or an ordinary claim not requiring particularly strong evidence? We assume in this case you can do all the physics reasoning or ecological reasoning you want, but you can't actually look at the temperature record yet.\n\nThe core argument (in *advance* of looking at the temperature record) will be: \"Carbon dioxide is a greenhouse gas, so adding sufficient amounts of it to the atmosphere ought to trap more heat, which ought to raise the equilibrium temperature of the Earth.\"\n\nTo evaluate the ordinariness or extraordinariness of this claim:\n\nWe *don't* ask whether the future consequences of the claim are extreme or important. Suppose that adding carbon dioxide actually did trap more heat; would the Standard Model of physics think to itself, \"Uh oh, that has some extreme consequences\" and decide to let the heat radiate away anyway? Obviously not; the laws of physics have no privileged tendency to avoid consequences that are, on a human scale, very important in a positive or negative direction.\n\nWe *don't* ask whether the policies required to address the claim are very costly - this isn't something that would prevent the causal mechanisms behind the claim from operating, and more generally, reality doesn't try to avoid inconveniencing us, so it doesn't affect the prior probability we assign to a claim in advance of seeing any evidence.\n\nWe *don't* ask whether someone has a motive to lie to us about the claim, or if they might be inclined to believe it for crazy reasons. If someone has a motive to lie to us about the evidence, this affects the strength of evidence, rather than lowering the [prior probability](https://arbital.com/p/1rm). Suppose somebody said, \"Hey, I own an apartment in New York, and I'll rent it to you for $2000/month.\" They might be lying and trying to trick you out of the money, but this doesn't mean \"I own an apartment in New York\" is an extraordinary claim. Lots of people own apartments in New York. It happens all the time, even. The monetary stake means that the person might have a motive to lie to you, but this affects the likelihood ratio, not the prior odds. If we're just considering their unsupported word, the probability that they'll say \"I own an apartment in New York\", given that they *don't* own an apartment in New York, might be unusually high because they could be trying to run a rent scam. But this doesn't mean we have to call in physicists to check out whether the apartment is really there - we just need stronger, but ordinary, evidence. 
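\n\nA small made-up calculation (added here for illustration; none of these probabilities appear on the page) shows the division of labor being described: a possible motive to lie raises the probability of hearing the claim even when it is false, which weakens the likelihood ratio of the testimony, while the prior probability of owning an apartment in New York stays exactly where it was.\n\n\tdef posterior(prior, p_claim_if_true, p_claim_if_false):\n\t    # odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio\n\t    odds = (prior / (1 - prior)) * (p_claim_if_true / p_claim_if_false)\n\t    return odds / (1 + odds)\n\n\tprior_owns = 0.5                 # ordinary claim, ordinary prior (illustrative)\n\tp_claim_if_owns = 0.95           # an actual owner would usually say so\n\tp_false_claim_no_motive = 0.02   # a random honest person rarely claims this falsely\n\tp_false_claim_scammer = 0.30     # someone running a rent scam says it regardless\n\n\tprint(posterior(prior_owns, p_claim_if_owns, p_false_claim_no_motive))   # about 0.98\n\tprint(posterior(prior_owns, p_claim_if_owns, p_false_claim_scammer))     # about 0.76\n\n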
Similarly, even if there was someone tempted to lie about global warming, we'd consider this as a potential weakness of the evidence they offer us, but not a weakness in the prior probability of the proposition \"Adding carbon dioxide to the atmosphere heats it up.\"\n\n(Similarly, wanting strong evidence about a subject doesn't always coincide with the underlying claim being improbable. Maybe you're considering *buying* a house in San Francisco, and *millions* of dollars are at stake. This implies a high [value of information](https://arbital.com/p/value_of_information) and you might want to invest in extra-strong evidence like having a third party check the title to the house. But this isn't because it's a Lovecraftian anomaly for anyone to own a house in San Francisco. The money at stake just means that we're willing to pay more to eliminate small residues of improbability from this very ordinary claim.)\n\nWe *do* ask whether \"adding carbon dioxide warms the atmosphere\" or \"carbon dioxide doesn't warm the atmosphere\" seems more consonant with the previously observed behavior of carbon dioxide.\n\nAfter we finish figuring out how carbon dioxide molecules and infrared photons usually behave, we *don't* give priority to generalizations like, \"For as long as we've observed it, the average summer temperature in Freedonia has never gone over 30C.\" It's true that the predicted consequences of carbon dioxide behaving like it usually does, are violating another generalization about how Freedonia usually behaves. But we generally give priority to *deeper* generalizations continuing, i.e., generalizations that are *lower-level* or *closer to the start of causal chains.*\n\n- The behavior of carbon dioxide is lower-level - it's generalizing over a class of molecules that we can observe in very great detail and make very precise generalizations about. The weight of a carbon dioxide molecule (with the standard isotopes in both cases), or the amount of infrared light the corresponding gas allows to pass, is something that varies much less than the summer temperature in Freedonia - it's a very precise, very strong generalization.\n- The behavior of carbon dioxide is closer to the start of the chain of cause and effect. The summer temperature in Freedonia is something that's caused by, or happens as a result of, a particular level of carbon dioxide in Freedonia's atmosphere. We'd expect changes that happened toward the start of the causal chain to produce changes in the effects at the end of the causal chain. Conversely, it would be very surprising if the Freedonia-surface-temperature generalization can reach back and force carbon dioxide to have a different permeability to infrared.\n\nWe *don't* consider whether lots of prestigious scientists believe in global warming. If you expect that lots of prestigious scientists usually won't believe in a proposition like global warming in worlds where global warming is false, then observing an apparent scientific consensus might be moderately strong *evidence* favoring the claim. But that isn't part of the *prior probability* before seeing any evidence. For that, we want to ask about how complicated the claim is, and whether it violates or obeys generalizations we already know about.\n\nAnother way of looking at a test of extraordinariness is to ask whether the claim's truth or falsity would imply learning more about the universe that we didn't already know. 
If you'd never observed the temperature record, and had only guessed *a priori* that adding carbon dioxide would warm the atmosphere, you wouldn't be too surprised to go look at the temperature record and find that nothing seemed to be happening. In this case, rather than imagining that you were wrong about the behavior of infrared light, you might suspect, for example, that plants were growing more and absorbing the carbon dioxide, keeping the total atmospheric level in equilibrium. But in this case you would have learned *a new fact not already known to you (or science)* which explained why global temperatures were not rising. So to expect that outcome in *advance* would be a more extraordinary claim than to not expect it. If we can imagine some not-too-implausible ways that a claim could be wrong, but they'd all require us to postulate new facts we don't solidly know, then this doesn't make the original claim 'extraordinary'. It's still a very ordinary claim that we'd start believing in after seeing an ordinary amount of evidence.", "date_published": "2016-03-04T05:16:35Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["To determine whether something is an 'extraordinary claim', for purposes of deciding whether it [requires extraordinary evidence](https://arbital.com/p/21v), we consider / don't consider these factors:\n\n- Do consider: Whether the claim requires lots of complexity that isn't already implied by other evidence.\n- Don't consider: Whether the claim has future consequences that are extreme or important-sounding.\n- Don't consider: Whether the claim's truth would imply that we ought to do costly or painful things.\n- Do consider: Whether the claim violates low-level or causal generalizations.\n- Don't consider: Whether the claim, if locally true, would imply downstream effects that violate high-level or surface generalizations.\n\nWe don't automatically *believe* claims that pass such tests; we just consider them as ordinary claims which we'd believe on obtaining ordinary evidence."], "tags": ["C-Class"], "alias": "26y"} {"id": "cb7adf260e98f8570669a1b23e3f7ee1", "title": "Prior", "url": "https://arbital.com/p/bayesian_prior", "source": "arbital", "source_type": "text", "text": "Our (potentially rich or complex) state of knowledge and *propensity to learn,* before seeing the evidence, expressed as a [probability function](https://arbital.com/p/1zj). This is a deeper and more general concept than '[https://arbital.com/p/-1rm](https://arbital.com/p/-1rm)'. A prior probability is like guessing the chance that it will be cloudy outside, in advance of looking out a window. The more general notion of a Bayesian prior would include probability distributions that answered the question, \"*Suppose* I saw the Sun rising on 999 successive days; would I afterwards think the probability of the Sun rising on the next day was more like 1000/1001, 1/2, or 1 - 10^-6?\" In a sense, a baby can be said to have a 'prior' before it opens its eyes, and then to develop a model of the world by updating on the evidence it sees after that point. The baby's 'prior' expresses not just its current ignorance, but the different kinds of worlds the baby would end up believing in, depending on what sensory evidence they saw over the rest of their lives. 
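\n\nTo make the sunrise question concrete (a short added sketch, not part of the original stub; the function below is just the standard Laplace formula): a prior that treats the Sun's rising as a coin with unknown bias, uniform over all possible biases, answers via Laplace's Rule of Succession.\n\n\tdef rule_of_succession(successes, trials):\n\t    # Laplace's answer for the probability of one more success after the observed run\n\t    return (successes + 1) / (trials + 2)\n\n\tprint(rule_of_succession(999, 999))   # 1000/1001, about 0.999\n\nA prior that concentrated more of its mass on stable, lawlike worlds (more in the spirit of Solomonoff induction) could answer with something closer to 1 - 10^-6, while a prior that insisted each day was an independent 50/50 coinflip would go on answering 1/2; which of these you get is exactly what the choice of prior determines.\n\n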
Key subconcepts include [ignorance priors](https://arbital.com/p/219) and [inductive priors](https://arbital.com/p/21b), and key examples are [Laplace's Rule of Succession](https://arbital.com/p/21c) and [Solomonoff induction](https://arbital.com/p/11w).", "date_published": "2016-03-04T05:11:28Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "27p"} {"id": "4911b8ef8474cea32f481fb8e52f61bf", "title": "Kevin Clancy", "url": "https://arbital.com/p/KevinClancy", "source": "arbital", "source_type": "text", "text": "summary: Nothing here yet.\n\nAutomatically generated page for \"Kevin Clancy\" user.\nIf you are the owner, click [here to edit](https://arbital.com/edit/299).", "date_published": "2016-03-04T00:00:00Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": [], "alias": "299"} {"id": "6c67cae7cb78b80442b563bb0783864e", "title": "Travis Rivera", "url": "https://arbital.com/p/TravisRivera", "source": "arbital", "source_type": "text", "text": "summary: Machine Learning and statistics expert.", "date_published": "2017-09-11T19:51:14Z", "authors": ["Travis Rivera"], "summaries": [], "tags": [], "alias": "2cl"} {"id": "94755a14d316833ea5df58500d481273", "title": "Uncountability", "url": "https://arbital.com/p/uncountable", "source": "arbital", "source_type": "text", "text": "Sizes of infinity fall into two broad classes: *countable* infinities, and *uncountable* infinities.\n\nA set $S$ is *uncountable* if it is not [countable](https://arbital.com/p/6f8).", "date_published": "2016-10-21T09:05:50Z", "authors": ["Alexei Andreev", "Malcolm McCrimmon", "Patrick Stevens", "Jason Gross", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": [], "tags": ["C-Class", "Proposed B-Class", "Concept"], "alias": "2w0"} {"id": "90fd177a626147e026192fd151abcc28", "title": "Uncountability: Intro (Math 1)", "url": "https://arbital.com/p/2w1", "source": "arbital", "source_type": "text", "text": "[Collections of things](https://arbital.com/p/3jz) which are the same [size](https://arbital.com/p/4w5) as or smaller than the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.\n\nAll uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity). There is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite collections of things %%note: Specifically, mathematical systems which use the [https://arbital.com/p/69b](https://arbital.com/p/69b), see the [technical](https://arbital.com/p/4zp) page for details.%%. To demonstrate this, we'll use a 2d grid.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## Real and Rational numbers\n\n[Real numbers](https://arbital.com/p/4bc) are numbers with a [decimal expansion](https://arbital.com/p/4sl), for example 1, 2, 3.5, $\\pi$ = 3.14159265... Every real number has an infinite decimal expansion, for example 1 = 1.0000000000..., 2 = 2.0000000000..., 3.5 = 3.5000000000... Recall that the rational numbers are [fractions](https://arbital.com/p/fraction) of [integers](https://arbital.com/p/48l), for example $1 = \\frac11$, $\\frac32$, $\\frac{100}{101}$, $\\frac{22}{7}$. The positive integers are the integers greater than zero (i.e. 1, 2, 3, 4, ..). 
\n\nThere is a [https://arbital.com/p/-theorem](https://arbital.com/p/-theorem) in math that states that the rational numbers are *countable* %%note: You can see the theorem [here](https://arbital.com/p/511).%%, that is, that the set of rational numbers is the same size as the set of positive integers, and another theorem which states that the real numbers are *uncountable*, that is, that the set of real numbers is strictly bigger. By \"same size\" and \"strictly bigger\", we mean that it is possible to match every rational number with some positive integer in a way so that there are no rational numbers, nor positive integers, left unmatched, but that any matching you make between real numbers and positive integers leaves some real numbers not matched with anything. \n\n## Rational grid\n\nIf you imagine laying the rational numbers out on a two-dimensional grid, so that the number $p / q$ falls at $(p, q)$, then we may match the positive integers with the rational numbers by walking in a spiral pattern out from zero, skipping over numbers that we have already counted (or that are undefined, such as a number divided by zero). The beginning of this sequence is $\\frac01$, $\\frac11$, $\\frac12$, $\\frac{-1}{2}$, $\\frac{-1}{1}$, ... Graphically, this is:\n\n![A counting of the rational numbers](//i.imgur.com/OS5hr4U.png?1)\n\nThis shows that the rational numbers are countable.\n\n## Reals are uncountable\n\nThe real numbers, however, cannot be matched with the positive integers. I show this by [contradiction](https://arbital.com/p/46z). %%note: That is to say, I show that if there is such a matching, then we can conclude nonsensical statements (and if making a new assumption allows us to conclude nonsense, then the assumption itself must be nonsense).%%\n\nSuppose we have such a matching. We can construct a new real number that differs in its $n^\\text{th}$ decimal digit from the real number matched with $n$.\n\nFor example, if we were given a matching that matched 1 with 1.8, 2 with 1.26, 3 with 5.758, 4 with 1, and 5 with $\\pi$, then our new number could be 0.11111, which differs from 1.8 in the first decimal place (the 0.1 place), 1.26 in the second decimal place (the 0.01 place), and so on. It is clear that this number cannot be matched with any positive integer under the matching we are given, because, if it were matched with $n$, then it would differ from itself in the $n^\\text{th}$ decimal digit, which is nonsense. Thus, there is no matching between the real numbers and the positive integers.\n\n## See also\n\nIf you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!\n\nArbital is made by people like you, if you think you can explain a mathematical concept then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)!", "date_published": "2016-10-26T19:27:53Z", "authors": ["Eric Bruylant", "Mark Chimes", "Patrick Stevens", "Jason Gross"], "summaries": ["[Collections of things](https://arbital.com/p/3jz) which are the same [size](https://arbital.com/p/4w5) as or smaller than the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.\n\nAll uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity). 
There is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite collections of things, and some infinite collections are larger than others."], "tags": ["C-Class"], "alias": "2w1"} {"id": "eaae48f38c51132ce84125823d4ce75b", "title": "Uncountability: Intuitive Intro", "url": "https://arbital.com/p/uncountability_intuitive", "source": "arbital", "source_type": "text", "text": "[Collections ](https://arbital.com/p/3jz) which have less than or the same [number of items](https://arbital.com/p/4w5) than the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.\n\nAll uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity) and some infinite collections are larger than others %%note: At least, within mathematical systems which include the [https://arbital.com/p/69b](https://arbital.com/p/69b), see the [technical](https://arbital.com/p/4zp) page for details.%%. To demonstrate this, we'll explore a graphical demonstration with tiles and paths.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## Tiles and paths\n\n![A colored sidewalk, the top row red, the bottom row blue](//i.imgur.com/8HDHt19.png)\n\nConsider, as shown above, a sidewalk that goes on forever in one direction, which is made up of equal-sized square tiles. The sidewalk is two squares across. Consider a person who walks forever on it, obeying the following rule: Each step the person takes must be to one of the two tiles immediately in front of that person; no going backwards, no skipping tiles, no going sideways, no standing in place forever. The following is the beginning of one possible path:\n![A zig-zagging path that begins on blue.](//i.imgur.com/1oO6YeD.png)\n\nNow let's ask two questions:\n\n 1. How many tiles are there?\n 2. How many possible paths are there?\n\nIn both cases, you could just say that there are infinitely many, and that would be correct. But now let's consider a third question:\n\n 3. Is the number of tiles the same as the number of possible paths?\n\nIt turns out that there is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite [collections of things](https://arbital.com/p/3jz), and some infinite collections are larger than others. In particular, some infinite collections are *countable* (like the [set](https://arbital.com/p/3jz) of all [https://arbital.com/p/-45h](https://arbital.com/p/-45h)s), while others are *uncountable* (like the set of all [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc)s). As we will see, it can be shown that the number of tiles on our infinite sidewalk is countable, but that the number of possible paths one could take, following the rules above, is uncountable. So there are in fact *more* possible paths than there are tiles.\n\nLet's dig into exactly what this means and why it's true.\n\n## Pairing off\n\nWe say that two collections of things are the \"same size\" if you can match the items together completely: you can pair each of the things in the first collection with exactly one of the things in the second collection, in such a way that there is nothing left unpaired. For example, given two sets of three things each, we may pair them. 
Here is an example of such a pairing:\n\n![Example pairing showing that 3 = 3: Three things on each side, all matched up: a cow (http://www.faqs.org/photo-dict/phrase/348/cow.html) matched to a racecar (http://www.zcars.com.au/wrc/), an airplane (http://www.penziononyx.cz/) matched to a watermelon (http://www.free-extras.com/tags/1/watermelon.htm),the earth (http://www.treehugger.com/2010/04/18-week/) matched to a computer (http://www.sb.fsu.edu/~xray/Xrf/anaconda.html)](//i.imgur.com/JphY27q.png)\n \nYou might think it obvious, then, that the number of paths our person can walk is bigger than the number of tiles. We can match each tile with the path that starts on a tile the same color as it, and changes to the other color after it hits this tile. For example, we would match the third red tile with the path\n![An example of a path](//i.imgur.com/s7yjGBX.png)\n\nIt is important to note, however, that it is not sufficient that we find some matching that leaves things left over. We must show that *every* matching leaves things left over. For example, an infinite sidewalk that is one tile across has just as many tiles as an infinite sidewalk that is two tiles across, as we can see from the picture below by matching the 1R on top with the 1R on bottom, the 1B on top with the 1B on bottom, the 2R on top with the 2R on bottom, and so on.\n\n![A two-tile-wide sidewalk, with each column having a B tile and an R tile](//i.imgur.com/4YecTkO.png) \n![A one-tile-wide sidewalk, alternating B and R tiles](//i.imgur.com/DDGBX7G.png) \n\nIn fact, if we were only to require that *some* matching leave extra tiles, then the number of tiles in a sidewalk that is one tile wide would not be equal to itself, for we could match the first tile with 1B (in the bottom picture above), the second tile with 2B, and so on, and we would leave over half the tiles!\n\nIn fact, even if we had a *field* of tiles that is infinite in every direction, we would still have no more tiles than if we had only a sidewalk that is one tile across. The following matching shows this:\n\n![A one-tile-wide sidewalk, with each tile numbered starting at 1](//i.imgur.com/P4b0CPc.png)\n![A field of tiles](//i.imgur.com/glPTsSG.png)\n\n## An unpairable path\n\nYou might wonder, given that there are so many different ways to match up infinitely many things, how we can know that there is no matching that catches everything. I will now prove that, no matter how you try to match paths (ways of walking) and tiles, you will miss some paths. Since we have already seen that the number of tiles in a sidewalk two tiles wide is the same as the number of tiles in a sidewalk one tile wide, I will show that any matching between paths and tiles in a sidewalk one tile wide misses some paths. I will do this by creating a path that does not match the path we have chosen for any tile %%note: This type of proof is known as a [https://arbital.com/p/-46z](https://arbital.com/p/-46z).%%.\n\nSuppose we are given a matching between tiles and paths. Since we have numbered the tiles in a sidewalk one tile wide ($\\fbox{1}\\fbox{2}\\fbox{3}\\fbox{4}\\fbox{5}\\fbox{6}\\fbox{7}\\fbox{8}\\overline{\\underline{\\vphantom{1234567890}\\cdots}}$), we also have a numbering of the paths in our matching. Consider a new path that differs from the [$n^\\text{th}$](https://arbital.com/p/nth) path in our matching on the $n^\\text{th}$ tile, that is, the $n^\\text{th}$ step that you take. 
For example, if our first eight paths are\n\n![8 paths: 1: BRBRBRBR, 2: BBBBBBBB, 3: RRRRRRRR, 4: BRRRRRBB, 5: RBBRBRRB, 6: RBRRBBRR, 7: BRRRRRRB, 8: BRRBBBBB](//i.imgur.com/0Snexa1.png)\n\nthen our new path is\n\n![The path RRBBRRBR](//i.imgur.com/s7yjGBX.png)\n\nClearly, this path is not any of the ones in the matching, because it differs from every single path at some point (in particular, it differs from the $n^\\text{th}$ path on the $n^\\text{th}$ tile, the $n^\\text{th}$ step you take, which is highlighted in yellow).\n \nBecause we can repeat this exercise no matter what matching we're given, that means *any* possible matching will always leave out at least one path. Thus, the number of paths a person can take must be strictly larger than the number of tiles in the sidewalk.\n\n## See also\n\nIf you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!\n\nArbital is made by people like you, if you think you can explain a mathematical concept then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)!", "date_published": "2016-11-26T17:32:51Z", "authors": ["Malcolm McCrimmon", "Eric Rogstad", "Jason Gross", "Eric Bruylant", "Joe Zeng"], "summaries": ["[Collections of things](https://arbital.com/p/3jz) which are the same [size](https://arbital.com/p/4w5) as or smaller than the collection of all [natural numbers](https://arbital.com/p/-45h) are called *countable*, while larger collections (like the set of all [real numbers](https://arbital.com/p/-4bc)) are called *uncountable*.\n\nAll uncountable collections (and some countable collections) are [infinite](https://arbital.com/p/infinity). There is a meaningful and [well-defined](https://arbital.com/p/5ss) way to compare the sizes of different infinite collections of things, and some infinite collections are larger than others."], "tags": ["Math 0", "B-Class", "Proposed A-Class"], "alias": "2w7"} {"id": "c1a4ba80f42446ee9fc70ea9f21ad8b0", "title": "Gödel encoding and self-reference", "url": "https://arbital.com/p/godel_codes", "source": "arbital", "source_type": "text", "text": "A Gödel encoding is a way of using the formal machinery of proving theorems in arithmetic to discuss that very machinery. [Kurt Gödel](https://en.wikipedia.org/wiki/Kurt_G%C3%B6del) rocked the world of mathematics in 1931 by showing that this was possible, and that it had surprising consequences for the sort of things that could ever be formally proven in a given system.\n\nThe original formulation of this self-referential framework was really messy (involving prime factorizations of huge numbers), but we can think about it more easily now by thinking about computer programs. In order to get mathematical statements that refer to themselves, we can first show that we can talk interchangeably about mathematical proofs and computer programs, and then show how to write computer programs that refer to themselves.\n\nTo be specific about the machinery of proving theorems, we'll use the formal system of [Peano Arithmetic](https://arbital.com/p/3ft).\n\nWhat kind of self-reference do we want for our computer program? We would like to be able to write a program that takes its own source code and performs computations on it. 
But it doesn't count for the program to access its own location in memory; we'd like something we could write directly into an interpreter, with no other framework, and get the same result.\n\nThis kind of program is known as a [quine](https://arbital.com/p/322).\n\nGödel encoding is equivalent to an encoding of a computer program as a binary string.\n\nGiven any conjecture in mathematics that can be stated in terms of arithmetic, we can write a program that loops over all possible strings, checks whether any of them is a valid proof of the conjecture, and halts if and only if it finds one. In the other direction, we can imagine proving a theorem about whether a particular computer program ever halts.", "date_published": "2016-05-06T15:58:15Z", "authors": ["Patrick LaVictoire"], "summaries": [], "tags": [], "alias": "31z"} {"id": "3c5b4dd6d7b925908166b8e6604dce09", "title": "Nate Soares", "url": "https://arbital.com/p/NateSoares", "source": "arbital", "source_type": "text", "text": "Automatically generated page for \"Nate Soares\" user. Click [here to edit](https://arbital.com/edit/32).", "date_published": "2015-09-04T00:00:00Z", "authors": ["Nate Soares"], "summaries": [], "tags": [], "alias": "32"} {"id": "498d6e7f382906c2f09d8fe3877ec058", "title": "Quine", "url": "https://arbital.com/p/quine", "source": "arbital", "source_type": "text", "text": "A quine is a computer program that 'knows' its own entire source code via indirect self-reference, rather than by getting it as input. The classic exercise in this domain is writing a (non-empty) program that takes no input and which outputs its own entire source code. But the same trick can be used to do more than that; for any program that takes a string as input and performs some operations on it, we can write a quining program that takes no input and performs the same operations on the program's own source code.\n\nThe trick to writing a quining program is to take a recipe for substituting a string into various places in a program, and then to apply this recipe to itself. If you've not encountered this idea before, but you know a programming language, it is a nice exercise to try and write a quine which prints its own source code.\n\nAn example of a quine in Python (due to [https://arbital.com/p/32](https://arbital.com/p/32)) which prints its own source code:\n\n\ttemplate = 'template = {hole}\\nprint(template.format(hole=repr(template)))'\n\tprint(template.format(hole=repr(template)))\n\nWikipedia has a [list of examples of quining programs](https://en.wikipedia.org/wiki/Quine_%28computing%29#Examples) in various languages.\n\n
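The same template trick generalizes beyond printing. As a further sketch (not part of the original article, reusing the pattern above), here is a program that computes the length of its own source code rather than printing it:

    # The operation applied to the program's own source is len() instead of print().
    template = 'template = {hole}\nsource = template.format(hole=repr(template))\nprint(len(source))'
    source = template.format(hole=repr(template))
    print(len(source))

Here `source` ends up equal to the program's own three lines of text, so any operation that can be performed on a string can be performed by a program on its own source.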
Named after the logician [W.V.O. Quine](https://en.wikipedia.org/wiki/Willard_Van_Orman_Quine), as coined by [Douglas Hofstadter](https://en.wikipedia.org/wiki/Douglas_Hofstadter) in [Gödel, Escher, Bach](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach), in the informal context of English phrases like\n\n> \"Yields falsehood when preceded by its quotation\" yields falsehood\n> when preceded by its quotation.\n\nQuining has been used for practical purposes and even seen in the wild, most famously in [Ken Thompson's illustration of a C compiler Trojan backdoor](http://en.wikipedia.org/wiki/Backdoor_%28computing%29#Compiler_backdoors).", "date_published": "2016-05-08T18:27:17Z", "authors": ["Nate Soares", "Jaime Sevilla Molina", "Patrick LaVictoire"], "summaries": [], "tags": [], "alias": "322"} {"id": "5de1174226d8bd0b1a10b9fa2a1b5e5d", "title": "Math playpen", "url": "https://arbital.com/p/playpen_mathematics", "source": "arbital", "source_type": "text", "text": "$$\\mathcal{P}e^{i \\oint_C A_\\mu dx^\\mu} \\to g(x) \\mathcal{P}e^{i \\oint_C A_\\mu dx^\\mu} g^{-1}(x)\\,$$\n\nNumbers, woo!\n\nCan I bork the page? Trying to bork the page… 3\n\nasdf", "date_published": "2016-08-12T21:29:42Z", "authors": ["Eric Rogstad", "Kevin Clancy", "Patrick Stevens", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "3cj"} {"id": "e7418c5d21df093f5cc3f8c800b317ab", "title": "Order theory", "url": "https://arbital.com/p/order_theory", "source": "arbital", "source_type": "text", "text": "Introduction\n==========\n\nOrder pervades mathematics and the sciences. Often, a reader's intuitive notion of order is all that is necessary for understanding a particular invocation of the notion of order. However, a deeper examination of order reveals a rich taxonomy over the types of orderings that can arise and leads to powerful theorems with applications in mathematics, computer science, the natural sciences, and the social sciences.\n\nThe following are examples of orders that one might encounter in life:\n\n- In a work place, employees may have a *rank* that determines which other employees they have authority over. IT workers report to the IT manager, marketers report to the marketing manager, managers report to the CEO, etc.\n\n- In a grocery store, someone planning to buy corn flake cereal may consider the ordering on the *prices* of various brands of corn flakes to decide which to purchase.\n\n- At lunchtime, someone with a craving for burritos may order the nearby burrito restaurants by their *distance* from her workplace. She may also order these burrito joints by *quality*, and consider both orderings in her decision.\n\nNow that we've seen some concrete examples of order, we can begin working toward a rigorous mathematical definition. In each of the above examples, we have some *underlying set* of objects that we are comparing: employees, corn flake brands, and burrito restaurants, respectively. We also have an *ordering*, which is a [binary relation](https://arbital.com/p/3nt) determining whether or not one object is \"less than\" another. This suggests that an order is essentially a pair of a set and some binary relation defined on it. Is this all we need to capture the notion of order?\n\nHere are a few examples of binary relations which may or may not be orderings. Do they agree with your intuitive notion of ordering?\n\n* 1.) The relation which includes all pairs of people $(a,b)$ such that $a$ and $b$ are friends.\n* 2.) The relation of pairs of people $(a,b)$ such that $a$ is a genetic descendant of $b$.\n* 3.) 
The relation on days of the week which contains exactly the pairs $(a,b)$ such that $a$ directly precedes $b$: $\\{ (Monday, Tuesday), (Tuesday, Wednesday), ... \\}$\n\nIt turns out that only item 2 agrees with the mathematical definition of ordering. Intuitively, item 1 is not an ordering because of its symmetry: a friendship between two people does not distinguish one friend as being \"less than\" the other (not a healthy friendship, at least). Friendship is actually an instance of another important class of binary relations called the [equivalence relations](https://arbital.com/p/equivalence_relation_mathematics). Item 3 is not an ordering because it lacks transitivity: Monday directly precedes Tuesday and Tuesday directly precedes Wednesday, but Monday does not directly precede Wednesday.\n\nPosets\n=====\n\nWe're now ready to provide a formal, mathematical definition for a class of objects called *posets* (a contraction of partially ordered set), which captures the idea of ordering.\n\n--\n\n__Definition: Poset__\n\nA *poset* is a pair $\\langle P, \\leq \\rangle$ of a set $P$ and a binary relation $\\leq$ on $P$ such that for all $p,q,r \\in P$, the following properties are satisfied: \n\n- Reflexivity: $p \\leq p$\n- Transitivity: $p \\leq q$ and $q \\leq r$ implies $p \\leq r$\n- Anti-Symmetry: $p \\leq q$ and $q \\leq p$ implies $p = q$\n\n$P$ is referred to as the poset's *underlying set* and $\\leq$\n is referred to as its *order relation*.\n\n--\n\nThe above definition may strike some readers as more general than expected. Indeed, in both mathematics and conversational English, when someone states that a set of objects is ordered, they often mean that it is totally ordered. A total order is a poset for which any two elements $a$ and $b$ of the underlying set are *comparable*; that is, $a \\leq b$ or $b \\leq a$. But our definition of a poset allows the possibility that two elements are *incomparable*. Recall our example of employees in a work place. In our *reports-to* relation, both an IT manager and a marketing manager report to the CEO; however, it would not make sense for an IT manager to report to a marketing manager or vice versa. The marketing and IT managers are thus incomparable. We write $a \\parallel b$ to state that two poset elements $a$ and $b$ are incomparable.\n\nAnother important distinction must be made. Partial orders as we have described them are not *strict* orders. From any poset $\\langle P, \\leq \\rangle$, we can derive a strict order relation $<$, which includes exactly those pairs of $\\leq$ relating two elements of $P$ that are distinct. It should be noted that, while strict orders are transitive and vacuously anti-symmetric, they are *not* partial orders because they are not reflexive. In everyday situations, strict orders seem to be more useful, but in mathematics non-strict orderings are of more use, and so non-strictness is built into the definition of poset.", "date_published": "2016-05-27T00:35:53Z", "authors": ["Kevin Clancy", "Joe Zeng", "Alexei Andreev"], "summaries": ["The study of binary relations that are reflexive, transitive, and anti-symmetric."], "tags": ["Work in progress"], "alias": "3dt"} {"id": "91af3d7936fd41735f07bb801f0d159d", "title": "Group theory", "url": "https://arbital.com/p/group_theory", "source": "arbital", "source_type": "text", "text": "Group theory is the study of the [algebraic structures](https://arbital.com/p/3gx) known as \"[groups](https://arbital.com/p/3gd)\". 
A group $G$ is a collection of elements $X$ paired with an operator $\\bullet$ that combines elements of $X$ while obeying certain laws. Roughly speaking, $\\bullet$ treats elements of $X$ as composable, invertible actions.\n\nGroup theory has many applications. Historically, groups first appeared in mathematics as groups of \"substitutions\" of mathematical functions; for example, the group of [integers](https://arbital.com/p/48l) $\\mathbb{Z}$ [acts](https://arbital.com/p/3t9) on the set of functions $f : \\mathbb{R} \\to \\mathbb{R}$ via the substitution $n : f(x) \\mapsto f(x - n)$, which corresponds to translating the graph of $f$ $n$ units to the right. The functions which are invariant under this group action are precisely the functions which are periodic with period $1$, and group theory can be used to explain how this observation leads to the expansion of such a function as a [Fourier series](https://arbital.com/p/Fourier_series) $f(x) = \\sum \\left( a_n \\cos 2 \\pi n x + b_n \\sin 2 \\pi n x \\right)$. \n\nGroups are used as a building block in the formalization of many other mathematical structures, including [fields](https://arbital.com/p/481), [vector spaces](https://arbital.com/p/3w0), and [integers](https://arbital.com/p/48l). Group theory has various [applications to physics](https://arbital.com/p/group_theory_and_physics). For a list of example groups, see the [examples page](https://arbital.com/p/algebraic_group_examples). For a list of the key theorems in group theory, see the [main theorems page](https://arbital.com/p/algebraic_group_theorems).\n\n# Interpretations, visualizations, and uses\n\nGroup theory abstracts away from the elements in the [underlying set](https://arbital.com/p/3gz) $X$: the group axioms do not care about what sort of thing is in $X$; they care only about the way that $\\bullet$ relates them. $X$ tells us how many elements $G$ has; all other information resides in the choices that $\\bullet$ makes to combine two group elements into a third group element. Thus, group theory can be seen as the study of possible relationships between objects that obey the group laws; regardless of what the objects themselves are. To visualize a group, we need only visualize the way that the elements relate to each other via $\\bullet$. This is the approach taken by [group multiplication tables](https://arbital.com/p/group_multiplication_table) and [group diagrams](https://arbital.com/p/cayley_diagram).\n\nGroup theory is interesting in part because the constraints on $\\bullet$ are at a \"sweet spot\" between \"too lax\" and \"too restrictive.\" Group structure crops up in many areas of physics and mathematics, but the group axioms are still restrictive enough to make groups fairly easy to work with and reason about. For example, if the [order](https://arbital.com/p/3gg) of $G$ is prime then there is only one possible group that $G$ can be (up to isomorphism). There are only 2, 2, 5, 2, and 2 groups of order 4, 6, 8, 9, and 10 (respectively). There are only 16 groups of order 100. If a group structure can be found in an object, this makes the behavior of the object fairly easy to analyze (especially if the order of the group is small). Group structure is relatively common in math and physics; for example, the solutions to a polynomial equation are acted on by a group called the [Galois group](https://arbital.com/p/Galois_group) (a fact from which the [unsolvability of quintic polynomials by radicals](https://arbital.com/p/unsolvability_of_the_quintic) was proven). 
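One reason small groups are so tractable is that the group axioms can be checked mechanically. The following sketch is not part of the original article; the function name `is_group` is made up, and the test case is the set {0, 1, 2, 3} under addition modulo 4.

    def is_group(elements, op):
        elements = list(elements)
        # Closure: combining two elements must yield another element of the set.
        if any(op(a, b) not in elements for a in elements for b in elements):
            return False
        # Associativity: grouping must not matter.
        if any(op(op(a, b), c) != op(a, op(b, c))
               for a in elements for b in elements for c in elements):
            return False
        # Identity: some element must leave every element unchanged.
        identities = [e for e in elements
                      if all(op(e, x) == x == op(x, e) for x in elements)]
        if not identities:
            return False
        e = identities[0]
        # Inverses: every element must combine with some element to give the identity.
        return all(any(op(x, y) == e == op(y, x) for y in elements) for x in elements)

    print(is_group(range(4), lambda a, b: (a + b) % 4))      # True
    print(is_group(range(1, 4), lambda a, b: (a * b) % 4))   # False: 2 * 2 = 0 is outside the set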
Group theory is thus a useful tool for figuring out how various mathematical and physical objects behave. For more on this idea, see the page on [group actions](https://arbital.com/p/3t9).\n\nRoughly speaking, the constraints on $\\bullet$ can be interpreted as saying that $\\bullet$ has to treat the elements like they are a \"complete set of transformations\" of a physical object. The axiom of identity then says \"one of the elements has to be interpreted as a transformation that does nothing\"; the axiom of inversion says \"each transformation must be invertible;\" and so on. In this light, group theory can be seen as the study of possible complete sets of transformations. Because the set of possible groups is relatively limited and well-behaved, group theory is a useful tool for studying the relationship between transformations of physical objects. For more discussion of this interpretation of group theory, see [https://arbital.com/p/groups_and_transformations](https://arbital.com/p/groups_and_transformations).\n\nGroups can also be seen as a tool for studying symmetry. Roughly speaking, given a set $X$ and some redundant structure atop $X$, a symmetry of $X$ is a function from $X$ to itself that preserves the given structure. For example, a physicist might build a model of two planets interacting in space, where an arbitrary point is picked out as the origin. The behavior of the planets in relation to each other shouldn't depend on which point they label $(0, 0, 0)$, but many questions (such as \"where is the planet now?\") depend on the choice of origin. To study facts that are true regardless of where the origin is (rather than being artifacts of the representation), they might let $X$ be the possible states of the model, and then ask what properties of $X$ are preserved when the origin is shifted — i.e., they ask \"what facts about this model would still be true if I had picked a different origin?\" Translation of the origin is a \"symmetry\" of those properties: It takes one state of $X$ to another state of $X$, and preserves all properties of the physical system that are independent of the choice of origin. When a model has multiple symmetries (such as rotation invariance as well as translation invariance), those symmetries form a group under composition. Group theory tells us about how those symmetries must relate to one another. For more on this topic, see [https://arbital.com/p/groups_and_symmetries](https://arbital.com/p/groups_and_symmetries).\n\nIn fact, laws of physics can be interpreted as \"properties of the universe that are true at all places and all times, regardless of how you label things.\" Thus, the laws of physics can be studied by considering the properties of the universe that are preserved under certain transformations (such as changing labels). Group theory puts constraints on how these symmetries in the laws of physics relate to one another, and thus helps constrain the search for possible laws of physics. For more on this topic, see [https://arbital.com/p/groups_and_physics](https://arbital.com/p/groups_and_physics).\n\nFinally, in a different vein, groups can be seen as a building block for other mathematical concepts. As an example, consider numbers. 
\"Number\" is a fairly fuzzy concept; the term includes the integers [$\\mathbb Z$](https://arbital.com/p/48l), the rational numbers [$\\mathbb Q$](https://arbital.com/p/4zq) (which include fractions), the real numbers [$\\mathbb R$](https://arbital.com/p/4bc) (which include numbers such as $\\sqrt{2}$), and so on. The arrow from $\\mathbb Z \\to \\mathbb Q \\to \\mathbb R$ points in the direction of _specialization:_ Each step adds constraints (such as \"now you must include fractions\") and thus narrows the scope of study (there are fewer things in the world that act like real numbers than there are that act like integers). Group theory is the result of following the arrow in the opposite direction: groups are like integers but _more permissive._ Groups have an identity element that corresponds to $0$ and an operation that corresponds to $+$, but they don't need to be infinitely large and they don't need to have anything corresponding to multiplication. From this point of view, \"group\" is a generalization of \"integer\", and from that generalization, we can build up many different mathematical objects, including [rings](https://arbital.com/p/3gq), [fields](https://arbital.com/p/481), and [vector spaces](https://arbital.com/p/3w0). Empirically, many interesting and useful mathematical objects use groups as one of their building blocks, and thus, some familiarity with group theory is helpful when learning about other mathematical structures. For more on this topic, see the [tree of algebraic structures](https://arbital.com/p/5dz).", "date_published": "2016-07-25T06:36:17Z", "authors": ["Stephanie Zolayvar", "Eric Rogstad", "Tsvi BT", "Alexei Andreev", "Patrick Stevens", "Nate Soares", "Qiaochu Yuan"], "summaries": ["Group theory is the study of the [algebraic structures](https://arbital.com/p/3gx) known as \"[groups](https://arbital.com/p/3gd)\". A group $G$ is a collection of elements $X$ paired with an operator $\\bullet$ that combines elements of $X$ while obeying certain laws. Roughly speaking, $\\bullet$ treats elements of $X$ as composable, invertible actions. Group theory has many applications in pure and applied mathematics, as well as in the hard sciences like physics and chemistry. Groups are used as a building block in the formalization of many other mathematical structures, including [vector spaces](https://arbital.com/p/3w0) and [integers](https://arbital.com/p/48l)."], "tags": [], "alias": "3g8"} {"id": "cb0400bd6881eb732758d231565afab4", "title": "Group", "url": "https://arbital.com/p/group_mathematics", "source": "arbital", "source_type": "text", "text": "summary(technical): A group $G$ is a pair $(X, \\bullet)$ where $X$ is a [set](https://arbital.com/p/3jz) and $\\bullet$ is a [operation](https://arbital.com/p/3h7) obeying the following laws:\n\n 1. **[Closure](https://arbital.com/p/3gy):** The operation is a function. For all $x, y$ in* $X$, $x \\bullet y$ is defined and in $X$. We abbreviate $x \\bullet y$ as $xy$.\n 2. **[Associativity](https://arbital.com/p/3h4):** $x(yz) = (xy)z$ for all $x, y, z \\in X$.\n 3. **[Identity](https://arbital.com/p/54p):** There is an element $e$ such that $xe=ex=x$ for all $x \\in X$.\n 4. 
**[Inverses](https://arbital.com/p/-inverse_element):** For each $x$ in $X$, there is an element $x^{-1} \\in X$ such that $xx^{-1}=x^{-1}x=e$.\n\nThe operation need not be [commutative](https://arbital.com/p/-3jb), but if it is then the group is called [abelian](https://arbital.com/p/-3h2).\n\n\nA group is an abstraction of a collection of symmetries of an object. The collection of symmetries of a triangle (rotating by $120^\\circ$ or $240^\\circ$ and flipping), rearrangements of a collection of objects (permutations), or rotations of a sphere, are all examples of groups. A group abstracts from these examples by forgetting what the symmetries are symmetries of, and only considers how symmetries behave. \n\nA group $G$ is a pair $(X, \\bullet)$ where:\n\n - $X$ is a [set](https://arbital.com/p/3jz), called the \"underlying set.\" By abuse of notation, $X$ is usually denoted by the same symbol as the group $G$, which we will do for the rest of the article.\n - $\\bullet : G \\times G \\to G$ is a binary [operation](https://arbital.com/p/3h7). That is, a function that takes two elements of a set and returns a third. We will abbreviate $x \\bullet y$ by $xy$ when not ambiguous. This operation is subject to the following axioms: \n- **[Closure](https://arbital.com/p/3gy):** $\\bullet$ is a function. For all $x, y$ in $X$, $x \\bullet y$ is defined and in $X$. We abbreviate $x \\bullet y$ as $xy$.\n- **[Identity](https://arbital.com/p/-54p):** There is an element $e$ such that $xe=ex=x$ for all $x \\in X$.\n- **[Inverses](https://arbital.com/p/-inverse_element):** For each $x$ in $X$, there is an element $x^{-1} \\in X$ such that $xx^{-1}=x^{-1}x=e$.\n- **[Associativity](https://arbital.com/p/3h4):** $x(yz) = (xy)z$ for all $x, y, z \\in X$.\n\n1) The set $X$ is the collection of abstract symmetries that this group represents. \"Abstract,\" because these elements aren't necessarily symmetries *of* something, but almost all examples will be.\n\n2) The operation $\\bullet$ is the abstract composition operation.\n\n3) The axiom of closure is redundant, since $\\bullet$ is defined as a function $G \\times G \\to G$, but it is useful to emphasize this, as sometimes one can forget to check that a given subset of symmetries of an object is closed under composition.\n\n4) The axiom of identity says that there is an element $e$ in $G$ that is a do-nothing symmetry: If you apply $\\bullet$ to $e$ and $x$, then $\\bullet$ simply returns $x$. The identity is unique: Given two elements $e$ and $z$ that satisfy the axiom of identity, we have $z = ze = e$ (using first that $e$ and then that $z$ is an identity), so $e = z.$ Thus, we can speak of \"the identity\" $e$ of $G$. This justifies the use of $e$ in the axiom of inverses: the axioms of closure and identity ensure that $e$ exists and is unique, so we can refer to it there.\n\n$e$ is often written $1$ or $1_G$, because $\\bullet$ is often treated as an analog of multiplication on the set $X$, and $1$ is the multiplicative [identity](https://arbital.com/p/54p). (Sometimes, e.g. in the case of [rings](https://arbital.com/p/3gq), $\\bullet$ is treated as an analog of addition, in which case the identity is often written $0$ or $0_G$.)\n\n5) The axiom of inverses says that for every element $x$ in $X$, there is some other element $y$ that $\\bullet$ treats like the opposite of $x$, in the sense that $xy = e$ and vice versa. The inverse of $x$ is usually written $x^{-1}$, or sometimes $(-x)$ in cases where $\\bullet$ is analogous to addition.\n\n6) The axiom of associativity says that $\\bullet$ behaves like composition of functions. 
When composing a bunch of functions, it doesn't matter what order the individual compositions are computed in. When composing $f$, $g$, and $h$, we can compute $g \\circ f$, and then compute $h \\circ (g \\circ f)$, or we can compute $h \\circ g$ and then compute $(h \\circ g) \\circ f$, and we will get the same result.\n\n%%%knows-requisite([https://arbital.com/p/3h3](https://arbital.com/p/3h3)):\nEquivalently, a group is a [monoid](https://arbital.com/p/3h3) which satisfies \"every element has an inverse\".\n%%%\n\n%%%knows-requisite([https://arbital.com/p/4c7](https://arbital.com/p/4c7)):\nEquivalently, a group is a category with exactly one object, which satisfies \"every arrow has an inverse\"; the arrows are viewed as elements of the group. This justifies the intuition that groups are collections of symmetries. The object of this category can be thought of an abstract object that the isomorphisms are symmetries of. A functor from this category into the category of sets associates this object with a set, and each of the morphisms a permutation of that set.\n%%%\n\n# Examples\n\nThe most familiar example of a group is perhaps $(\\mathbb{Z}, +)$, the integers under addition. To see that it satisfies the group axioms, note that:\n\n1. (a) $\\mathbb{Z}$ is a set, and (b) $+$ is a function of type $\\mathbb Z \\times \\mathbb Z \\to \\mathbb Z$\n2. $(x+y)+z=x+(y+z)$\n3. $0+x = x = x + 0$\n4. Every element $x$ has an inverse $-x$, because $x + (-x) = 0$.\n\nFor more examples, see the [examples page](https://arbital.com/p/3t1).\n\n# Notation\n\nGiven a group $G = (X, \\bullet)$, we say \"$X$ forms a group under $\\bullet$.\" $X$ is called the [underlying set](https://arbital.com/p/3gz) of $G$, and $\\bullet$ is called the _group operation_.\n\n$x \\bullet y$ is usually abbreviated $xy$.\n\n$G$ is generally allowed to substitute for $X$ when discussing the group. For example, we say that the elements $x, y \\in X$ are \"in $G$,\" and sometimes write \"$x, y \\in G$\" or talk about the \"elements of $G$.\"\n\nThe [order of a group](https://arbital.com/p/3gg), written $|G|$, is the size $|X|$ of its underlying set: If $X$ has nine elements, then $|G|=9$ and we say that $G$ has order nine.\n\n# Resources\n\nGroups are a ubiquitous and useful algebraic structure. Whenever it makes sense to talk about symmetries of a mathematical object, or physical system, groups pop up. For a discussion of group theory and its various applications, refer to the [group theory](https://arbital.com/p/3g8) page.\n\nA group is a [monoid](https://arbital.com/p/3h3) with inverses, and an associative [loop](https://arbital.com/p/algebraic_loop). For more on how groups relate to other [algebraic structures](https://arbital.com/p/3gx), refer to the [tree of algebraic structures](https://arbital.com/p/5dz).", "date_published": "2016-12-31T13:05:14Z", "authors": ["Daniel Satanove", "m g", "Dylan Hendrickson", "Eric Rogstad", "Patrick Stevens", "Louis Paquin", "Nate Soares", "Qiaochu Yuan", "Eric Bruylant", "Mark Chimes", "Joe Zeng"], "summaries": ["A group is an abstraction of a collection of symmetries of an object. The collection of symmetries of a triangle (rotating by $120^\\circ$ or $240^\\circ$ degrees and flipping), rearrangements of a collection of objects (permutations), or rotations of a sphere, are all examples of groups.\n\nA group abstracts from these examples by forgetting what the symmetries are symmetries *of*, and only considers how symmetries behave.\n\n- Any two symmetries can be composed. 
For the symmetries of flipping and rotating a triangle, the whole action of rotating, and then flipping a triangle at once is a symmetry. So there is an **[operation](https://arbital.com/p/3h7)** on a group which represents composition of symmetries.\n- There is always a do-nothing symmetry. When composed with another symmetry, the do-nothing symmetry doesn't change that symmetry. So the operation has an **[identity](https://arbital.com/p/-54p)**.\n- Any symmetry can be reversed. That is, for any symmetry, there is another symmetry which when composed with the first, gives the do-nothing symmetry. Flipping a triangle, and then flipping it again, is the same as doing nothing. So the operation has an **[inverse](https://arbital.com/p/-inverse_mathematics)** operation.\n- Like any collection of functions, when composing a bunch of functions, it doesn't matter what order the individual compositions are computed in, as long as the *overall* composition ends up in the right order. When composing $f$, $g$, and $h$, we can compute $g \\circ f$, and then compute $h \\circ (g \\circ f)$, or we can compute $h \\circ g$ and then compute $(h \\circ g) \\circ f$, and we will get the same result. So the operation is **[associative](https://arbital.com/p/-3h4)**.\n\nNote that it is not necessarily the case that the operation is [commutative](https://arbital.com/p/-3jb). Flipping and then rotating a triangle will give different symmetry than rotating and then flipping. If it is commutative, then the group is called [abelian](https://arbital.com/p/-3h2)."], "tags": ["Needs brief summary", "B-Class"], "alias": "3gd"} {"id": "c8c97d0f252f725872ddbbc6ae6714a3", "title": "Order of a group", "url": "https://arbital.com/p/group_order", "source": "arbital", "source_type": "text", "text": "The order $|G|$ of a [group](https://arbital.com/p/3gd) $G$ is the size of its [underlying set](https://arbital.com/p/3gz). For example, if $G=(X,\\bullet)$ and $X$ has nine elements, we say that $G$ has order $9$. If $X$ is infinite, we say $G$ is infinite; if $X$ is finite, we say $G$ is finite.\n\nThe [order of an element](https://arbital.com/p/4cq) $g \\in G$ of a group is the smallest nonnegative integer $n$ such that $g^n = e$, or $\\infty$ if there is no such integer. The relationship between this usage of order and the above usage of order is that the order of $g \\in G$ in this sense is the order of the [https://arbital.com/p/subgroup](https://arbital.com/p/subgroup) $\\langle g \\rangle = \\{ 1, g, g^2, \\dots \\}$ of $G$ [generated by](https://arbital.com/p/generating_set) $g$ in the above sense.", "date_published": "2016-06-16T18:52:17Z", "authors": ["Alexei Andreev", "Patrick Stevens", "Nate Soares", "Qiaochu Yuan", "Eric Bruylant"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "3gg"} {"id": "3ed23ad1a6160a90b0ba8f3f62305ae4", "title": "Ring", "url": "https://arbital.com/p/algebraic_ring", "source": "arbital", "source_type": "text", "text": "summary(Technical): A ring $R$ is a triple $(X, \\oplus, \\otimes)$ where $X$ is a [set](https://arbital.com/p/3jz) and $\\oplus$ and $\\otimes$ are binary [operations](https://arbital.com/p/set_theory_operation) subject to the ring axioms. We write $x \\oplus y$ for the application of $\\oplus$ to $x, y \\in X$, which must be defined, and similarly for $\\otimes$. 
Terminology varies across sources; our rings will have both operations [commutative](https://arbital.com/p/3jb) and will have an [https://arbital.com/p/-54p](https://arbital.com/p/-54p) under multiplication, denoted $1$.\n\nA ring $R$ is a triple $(X, \\oplus, \\otimes)$ where $X$ is a [set](https://arbital.com/p/3jz) and $\\oplus$ and $\\otimes$ are binary [operations](https://arbital.com/p/set_theory_operation) subject to the ring axioms. We write $x \\oplus y$ for the application of $\\oplus$ to $x, y \\in X$, which must be defined, and similarly for $\\otimes$. It is standard to abbreviate $x \\otimes y$ as $xy$ when $\\otimes$ can be inferred from context. The ten ring axioms (which govern the behavior of $\\oplus$ and $\\otimes$) are as follows:\n\n1. $X$ must be a [commutative group](https://arbital.com/p/3h2) under $\\oplus$. That means:\n * $X$ must be [closed](https://arbital.com/p/3gy) under $\\oplus$.\n * $\\oplus$ must be [associative](https://arbital.com/p/associative_function).\n * $\\oplus$ must be [commutative](https://arbital.com/p/commutative_function).\n * $\\oplus$ must have an identity, which is usually named $0$.\n * Every $x \\in X$ must have an inverse $(-x) \\in X$ such that $x \\oplus (-x) = 0$.\n2. $X$ must be a [monoid](https://arbital.com/p/3h3) under $\\otimes$. That means:\n * $X$ must be [closed](https://arbital.com/p/3gy) under $\\otimes$.\n * $\\otimes$ must be [associative](https://arbital.com/p/associative_function).\n * $\\otimes$ must have an identity, which is usually named $1$.\n3. $\\otimes$ must [distribute](https://arbital.com/p/distributive_property) over $\\oplus$. That means:\n * $a \\otimes (x \\oplus y) = (a\\otimes x) \\oplus (a\\otimes y)$ for all $a, x, y \\in X$.\n * $(x \\oplus y)\\otimes a = (x\\otimes a) \\oplus (y\\otimes a)$ for all $a, x, y \\in X$.\n \nThough the axioms are many, the idea is simple: A ring is a [commutative group](https://arbital.com/p/3h2) equipped with an additional operation, under which the ring is a [monoid](https://arbital.com/p/3h3), and the two operations play nice together (the monoid operation [distributes](https://arbital.com/p/distributive_property) over the group operation).\n\nA ring is an [algebraic structure](https://arbital.com/p/3gx). To see how it relates to other algebraic structures, refer to the [tree of algebraic structures](https://arbital.com/p/5dz).\n\n# Examples\n\nThe integers $\\mathbb{Z}$ form a ring under addition and multiplication.\n\n[Add more example rings.](https://arbital.com/p/fixme:)\n\\[in progress.\\](https://arbital.com/p/work)\n\n# Notation\n\nGiven a ring $R = (X, \\oplus, \\otimes)$, we say \"$R$ forms a ring under $\\oplus$ and $\\otimes$.\" $X$ is called the [underlying set](https://arbital.com/p/3gz) of $R$. $\\oplus$ is called the \"additive operation,\" $0$ is called the \"additive identity\", $-x$ is called the \"additive inverse\" of $x$. 
$\\otimes$ is called the \"multiplicative operation,\" $1$ is called the \"multiplicative identity\", and a ring does not necessarily have multiplicative inverses.\n\n# Basic properties\n\n[Add the basic properties of rings.](https://arbital.com/p/fixme:)\n\\[in progress.\\](https://arbital.com/p/work)\n\n# Interpretations, Visualizations, and Applications\n\n[Add ](https://arbital.com/p/fixme:)\n\\[in progress.\\](https://arbital.com/p/work)", "date_published": "2016-07-28T11:14:11Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Nate Soares", "Patrick Stevens"], "summaries": ["A ring is a kind of [https://arbital.com/p/-3gx](https://arbital.com/p/-3gx) which we obtain by considering [groups](https://arbital.com/p/3gd) as being \"things with addition\" and then endowing them with a multiplication operation which must interact appropriately with the pre-existing addition. Terminology varies across sources; we will take \"ring\" to refer to \"commutative ring with $1$\"."], "tags": ["Start", "Work in progress", "Needs clickbait"], "alias": "3gq"} {"id": "d565cc582884f2eca36dfeb1f2df3b11", "title": "Algebraic structure", "url": "https://arbital.com/p/algebraic_structure", "source": "arbital", "source_type": "text", "text": "Roughly speaking, an algebraic structure is a set $X$, known as the [underlying set](https://arbital.com/p/3gz), paired with a collection of [operations](https://arbital.com/p/3h7) that obey a given set of laws. For example, a [group](https://arbital.com/p/3gd) is a set paired with a single binary operation that satisfies the four group axioms, and a [ring](https://arbital.com/p/3gq) is a set paired with two binary operations that satisfy the ten ring axioms.\n\nIn fact, algebraic structures can have more than one underlying set. Most have only one (including [monoids](https://arbital.com/p/3h3), [groups](https://arbital.com/p/3gd), [rings](https://arbital.com/p/3gq), [fields](https://arbital.com/p/algebraic_field), [lattices](https://arbital.com/p/algebraic_lattice), and [arithmetics](https://arbital.com/p/algebraic_arithmetic)), and differ in how their associated operations work. More complex algebraic structures (such as [algebras](https://arbital.com/p/algebraic_algebra), [modules](https://arbital.com/p/algebraic_module), and [vector spaces](https://arbital.com/p/3w0)) have two underlying sets. For example, vector spaces are defined using both an underlying [field](https://arbital.com/p/algebraic_field) of scalars and an underlying [commutative group](https://arbital.com/p/3h2) of vectors.\n\nFor a map of algebraic structures and how they relate to each other, see the [tree of algebraic structures](https://arbital.com/p/algebraic_structure_tree).", "date_published": "2016-06-07T03:58:30Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Needs lenses", "Formal definition", "Needs clickbait"], "alias": "3gx"} {"id": "ccc2fbb487846b8d590047fbca2fef63", "title": "Closure", "url": "https://arbital.com/p/closure_mathematics", "source": "arbital", "source_type": "text", "text": "A [set](https://arbital.com/p/3jz) $S$ is _closed_ under an [operation](https://arbital.com/p/3h7) $f$ if, whenever $f$ is fed elements of $S$, it produces another element of $S$. 
For example, if $f$ is a trinary operation (i.e., a function of three arguments) then \"$S$ is closed under $f$\" means \"if $x, y, z \\in S$ then $f(x, y, z) \\in S$\".\n\nFor instance, the set [$\\mathbb Z$](https://arbital.com/p/integer) is closed under addition (because adding two integers yields another integer), but the set $\\mathbb Z_5 = \\{0, 1, 2, 3, 4\\}$ is not (because $1 + 4$ is not in $\\mathbb Z_5$).", "date_published": "2016-06-07T03:57:29Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "3gy"} {"id": "42abd4f342ff294a6cb579ba3c7e0be3", "title": "Underlying set", "url": "https://arbital.com/p/underlying_set", "source": "arbital", "source_type": "text", "text": "What do a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd), a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb), and a [topological space](https://arbital.com/p/) have in common? 
Each is a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) with some structure built on top of it, and in each case, we call the set the *underlying set*."], "tags": ["Math 2", "Start", "Needs clickbait"], "alias": "3gz"} {"id": "2d00c50341de8d81d6b93085b35ca955", "title": "Abstract algebra", "url": "https://arbital.com/p/abstract_algebra", "source": "arbital", "source_type": "text", "text": "Abstract algebra is the study of [algebraic structures](https://arbital.com/p/3gx), including [groups](https://arbital.com/p/3gd), [rings](https://arbital.com/p/3gq), [fields](https://arbital.com/p/algebraic_field), [modules](https://arbital.com/p/algebraic_module), [vector spaces](https://arbital.com/p/vector_space), [lattices](https://arbital.com/p/algebraic_lattice), [arithmetics](https://arbital.com/p/algebraic_arithmetic), and [algebras](https://arbital.com/p/algebraic_algebra).\n\nThe main idiom of abstract algebra is [abstracting away from the objects](https://arbital.com/p/abstract_over_objects): Abstract algebra concerns itself with the manipulation of objects, by focusing not on the objects themselves but on the relationships between them.\n\nIf you find any collection of objects that are related to each other in a manner that follows the laws of some algebraic structure, then those relationships are governed by the corresponding theorems, regardless of what the objects are. An abstract algebraist does not ask \"what are numbers, really?\"; rather, they say \"I see that the operations of 'adding apples to the table' and 'removing apples from the table' follow the laws of numbers (in a limited domain), thus, theorems about numbers can tell what to expect as I add or remove apples (in that limited domain).\"\n\nFor a map of algebraic structures and how they relate to each other, see the [tree of algebraic structures](https://arbital.com/p/algebraic_structures_tree).", "date_published": "2016-05-14T18:20:39Z", "authors": ["Eric Rogstad", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "3h0"} {"id": "b4cd2e443d0ca89a25d04aaba3f41043", "title": "Abelian group", "url": "https://arbital.com/p/abelian_group", "source": "arbital", "source_type": "text", "text": "An abelian group is a [group](https://arbital.com/p/3gd) $G=(X, \\bullet)$ where $\\bullet$ is [commutative](https://arbital.com/p/3jb). In other words, the group operation satisfies the five axioms:\n\n1. [Closure](https://arbital.com/p/3gy): For all $x, y$ in $X$, $x \\bullet y$ is defined and in $X$. We abbreviate $x \\bullet y$ as $xy$.\n2. [Associativity](https://arbital.com/p/3h4): $x(yz) = (xy)z$ for all $x, y, z$ in $X$.\n3. Identity: There is an element $e$ such that for all $x$ in $X$, $xe=ex=x$.\n4. Inverses: For each $x$ in $X$ there is an element $x^{-1}$ in $X$ such that $xx^{-1}=x^{-1}x=e$.\n5. [Commutativity](https://arbital.com/p/3jb): For all $x, y$ in $X$, $xy=yx$.\n\nThe first four are the standard [group axioms](https://arbital.com/p/3gd); the fifth is what distinguishes abelian groups from groups in general. \n\nCommutativity gives us license to re-arrange chains of elements in formulas about commutative groups. For example, if in a commutative group with elements $\\{1, a, a^{-1}, b, b^{-1}, c, c^{-1}, d\\}$, we have the claim $aba^{-1}db^{-1}=d^{-1}$, we can shuffle the elements to get $aa^{-1}bb^{-1}d=d^{-1}$ and reduce this to the claim $d=d^{-1}$. This would be invalid for a nonabelian group, because $aba^{-1}$ doesn't necessarily equal $aa^{-1}b$ in general.\n\n
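As a quick computational check (a sketch, not part of the original article; the names `rotate`, `flip`, and `compose` are made up), addition of integers modulo 4 commutes, while composing two symmetries of a triangle, written as permutations of its vertices, does not:

    # Addition mod 4 is commutative:
    print(all((a + b) % 4 == (b + a) % 4 for a in range(4) for b in range(4)))  # True

    # Two symmetries of a triangle, as permutations of its vertices 0, 1, 2:
    rotate = {0: 1, 1: 2, 2: 0}   # rotate by a third of a turn
    flip = {0: 0, 1: 2, 2: 1}     # flip across the axis through vertex 0

    def compose(f, g):
        # Apply g first, then f.
        return {x: f[g[x]] for x in g}

    print(compose(rotate, flip) == compose(flip, rotate))  # False: these symmetries do not commute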
Abelian groups are very well-behaved groups, and they are often much easier to deal with than their non-commutative counterparts. For example, every [https://arbital.com/p/576](https://arbital.com/p/576) of an abelian group is [normal](https://arbital.com/p/4h6), and all finitely generated abelian groups are a [direct product](https://arbital.com/p/group_theory_direct_product) of [cyclic groups](https://arbital.com/p/47y) (the [structure theorem for finitely generated abelian groups](https://arbital.com/p/structure_theorem_for_finitely_generated_abelian_groups)).", "date_published": "2016-07-18T15:59:32Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Qiaochu Yuan"], "summaries": ["An abelian group is a [group](https://arbital.com/p/3gd) where the operation is [commutative](https://arbital.com/p/3jb). That is, an abelian group $G$ is a pair $(X, \\bullet)$ where $X$ is a [set](https://arbital.com/p/3jz) and $\\bullet$ is a binary [operation](https://arbital.com/p/3h7) obeying the four group axioms plus an axiom of commutativity:\n\n1. [Closure](https://arbital.com/p/3gy): For all $x, y$ in $X$, $x \\bullet y$ is defined and in $X$. We abbreviate $x \\bullet y$ as $xy$.\n2. [Associativity](https://arbital.com/p/3h4): $x(yz) = (xy)z$ for all $x, y, z$ in $X$.\n3. Identity: There is an element $e$ such that for all $x$ in $X$, $xe=ex=x$.\n4. Inverses: For each $x$ in $X$ there is an element $x^{-1}$ in $X$ such that $xx^{-1}=x^{-1}x=e$.\n5. [Commutativity](https://arbital.com/p/3jb): For all $x, y$ in $X$, $xy=yx$.\n \nAbelian groups are very \"well-behaved\" groups that are often easier to deal with than their non-commuting counterparts."], "tags": [], "alias": "3h2"} {"id": "fba8e042d6b78edcb7e725ee8a7396c4", "title": "Monoid", "url": "https://arbital.com/p/algebraic_monoid", "source": "arbital", "source_type": "text", "text": "A monoid $M$ is a pair $(X, \\diamond)$ where $X$ is a [set](https://arbital.com/p/set_theory_set) and $\\diamond$ is an [associative](https://arbital.com/p/associative_function) binary [operator](https://arbital.com/p/3h7) with an identity. $\\diamond$ is often interpreted as concatenation; data structures that support concatenation and have an \"empty element\" (such as lists, strings, and the natural numbers under addition) are examples of monoids.\n\nMonoids are [algebraic structures](https://arbital.com/p/3gx). We write $x \\diamond y$ for the application of $\\diamond$ to $x, y \\in X$, which must be defined. $x \\diamond y$ is commonly abbreviated $xy$ when $\\diamond$ can be inferred from context. The monoid axioms (which govern the behavior of $\\diamond$) are as follows.\n\n1. (Closure) For all $x, y$ in $X$, $xy$ is also in $X$.\n2. (Associativity) For all $x, y, z$ in $X$, $x(yz) = (xy)z$.\n3. (Identity) There is an $e$ in $X$ such that, for all $x$ in $X$, $xe = ex = x.$\n\nThe axiom of closure says that $x \\diamond y \\in X$, i.e. that combining two elements of $X$ using $\\diamond$ yields another element of $X$. 
In other words, $X$ is [closed](https://arbital.com/p/3gy) under $\\diamond$.\n\nThe axiom of associativity says that $\\diamond$ is an [associative](https://arbital.com/p/3h4) operation, which justifies omitting parentheses when describing the application of $\\diamond$ to many elements in sequence.\n\nThe axiom of identity says that there is some element $e$ in $X$ that $\\diamond$ treats as \"empty\": If you apply $\\diamond$ to $e$ and $x$, then $\\diamond$ simply returns $x$. The identity is unique: Given two elements $e$ and $z$ that satisfy the axiom of identity, we have $ze = e = ez = z.$ Thus, we can speak of \"the identity\" $e$ of $M$. $e$ is often written $1$ or $1_M$.\n\n%%%knows-requisite([https://arbital.com/p/4c7](https://arbital.com/p/4c7)):\nEquivalently, a monoid is a category with exactly one object.\n%%%\n\nMonoids are [semigroups](https://arbital.com/p/algebraic_semigroup) equipped with an identity. [Groups](https://arbital.com/p/3gd) are monoids with inverses. For more on how monoids relate to other [algebraic structures](https://arbital.com/p/algerbraic_structure), refer to the [tree of algebraic structures](https://arbital.com/p/algebraic_structure_tree).\n\n# Notation\n\nGiven a monoid $M = (X, \\diamond)$, we say \"$X$ forms a monoid under $\\diamond$.\" For example, the set of finite bitstrings forms a monoid under concatenation: The set of finite bitstrings is closed under concatenation; concatenation is an associative operation; and the empty bitstring is a finite bitstring that acts like an identity under concatenation. \n\n$X$ is called the [underlying set](https://arbital.com/p/3gz) of $M$, and $\\diamond$ is called the _monoid operation_. $x \\diamond y$ is usually abbreviated $xy$. $M$ is generally allowed to substitute for $X$ when discussing the monoid. For example, we say that the elements $x, y \\in X$ are \"in $M$,\" and sometimes write \"$x, y \\in M$\" or talk about the \"elements of $M$.\"\n\n# Examples\n\nBitstrings form a monoid under concatenation, with the empty string as the identity.\n\nThe set of finite lists of elements drawn from $Y$ (for any set $Y$) forms a monoid under concatenation, with the empty list as the identity.\n\nThe natural numbers [$\\mathbb N$](https://arbital.com/p/45h) form a monoid under addition, with $0$ as the identity.\n\nMonoids have found some use in functional programming languages such as [Haskell](https://en.wikipedia.org/wiki/Haskell_%28programming_language%29) and [Scala](https://en.wikipedia.org/wiki/Scala_%28programming_language%29), where they are used to generalize over data types in which values can be \"combined\" (by some operation $\\diamond$) and which include an \"empty\" value (the identity).", "date_published": "2016-06-15T09:21:48Z", "authors": ["Nate Soares", "Eric Bruylant", "Louis Paquin", "Patrick Stevens"], "summaries": [], "tags": ["Needs clickbait"], "alias": "3h3"} {"id": "af4e0a03c90776e82c05dfd1955490fb", "title": "Associative operation", "url": "https://arbital.com/p/associative_operation", "source": "arbital", "source_type": "text", "text": "An **associative operation** $\\bullet : X \\times X \\to X$ is a [binary](https://arbital.com/p/3kb) [operation](https://arbital.com/p/3h7) such that for all $x, y, z$ in $X$, $x \\bullet (y \\bullet z) = (x \\bullet y) \\bullet z$. For example, $+$ is an associative function, because $(x + y) + z = x + (y + z)$ for all values of $x, y,$ and $z$. When an associative function is used to combine many elements in a row, parentheses can be dropped, because the order of application is irrelevant.\n\n
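For instance (a sketch, not part of the original article), folding a list with an associative operation gives the same answer however the applications are grouped, while a non-associative operation such as subtraction does not:

    from functools import reduce

    xs = [2, 3, 4, 5]
    left_to_right = reduce(lambda a, b: a + b, xs)   # ((2 + 3) + 4) + 5
    right_to_left = 2 + (3 + (4 + 5))
    print(left_to_right == right_to_left)            # True: + is associative

    print((2 - 3) - 4 == 2 - (3 - 4))                # False: subtraction is not associative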
Imagine that you're trying to use $f$ to combine 3 elements $x, y,$ and $z$ into one element, via two applications of $f$. $f$ is associative if $f(f(x, y), z) = f(x, f(y, z)),$ i.e., if the result is the same regardless of whether you apply $f$ to $x$ and $y$ first (and then apply that result to $z$), or whether you apply $f$ to $y$ and $z$ first (and then apply $x$ to that result).\n\nVisualizing $f$ as a [physical mechanism](https://arbital.com/p/3mb), there are two different ways to hook up two copies of $f$ together to create a function $f_3 : X \\times X \\times X \\to X,$ which takes three inputs and produces one output:\n\n![Two ways of combining f](http://i.imgur.com/Ezs1P8l.png)\n\nAn associative function $f$ is one where the result is the same no matter which way the functions are hooked up, which means that the result of using $f$ twice to turn three inputs into one output yields the same output regardless of the order in which we combine adjacent inputs.\n\n![3-arity f](http://i.imgur.com/WCT9HaA.png)\n\nBy similar argument, an associative operator $f$ also gives rise (unambiguously) to functions $f_4, f_5, \\ldots,$ meaning that [associative functions can be seen as a family of functions on lists](https://arbital.com/p/3ms).\n\nThis justifies the omission of parentheses when writing expressions where an associative operator $\\bullet$ is applied to many inputs in turn, because the order of application does not matter. For example, multiplication is associative, so we can write expressions such as $2 \\cdot 3 \\cdot 4 \\cdot 5$ without ambiguity. It makes no difference whether we compute the result by first multiplying 2 by 3, or 3 by 4, or 4 by 5.\n\nBy contrast, the function `prependx` that sticks its inputs together and puts an `x` on the front is not associative: `prependx(prependx(\"a\",\"b\"),\"c\") = \"xxabc\"`, but `prependx(\"a\",prependx(\"b\",\"c\")) = \"xaxbc\"`.", "date_published": "2016-07-08T20:17:26Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Dylan Hendrickson", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "B-Class"], "alias": "3h4"} {"id": "507b8646ce8e52c8f029b2cd3c465cff", "title": "Operator", "url": "https://arbital.com/p/operator_mathematics", "source": "arbital", "source_type": "text", "text": "An operation $f$ on a [set](https://arbital.com/p/3jz) $S$ is a function that takes some values from $S$ and produces a new value. An operation can take any number of values from $S$, including zero (in which case $f$ is simply a constant) or infinitely many (in which case we call $f$ an \"infinitary operation\"). Common operations take a finite non-zero number of parameters. Operations often produce a value that is also in $S$ (in which case we say $S$ is [closed](https://arbital.com/p/3gy) under $f$), but that is not always the case.\n\nFor example, the function $+$ is a binary operation on [$\\mathbb N$](https://arbital.com/p/45h), meaning it takes two values from $\\mathbb N$ and produces another. Because $+$ produces a value that is also in $\\mathbb N$, we say that $\\mathbb N$ is closed under $+$.\n\nThe function $\\operatorname{neg}$ that maps $x$ to $-x$ is a unary operation on [$\\mathbb Z$](https://arbital.com/p/48l): It takes one value from $\\mathbb Z$ as input, and produces an output in $\\mathbb Z$ (namely, the negation of the input). 
$\\operatorname{neg}$ is also a unary operation on $\\mathbb N$, but $\\mathbb N$ is not closed under $\\operatorname{neg}$ (because $\\operatorname{neg}(3)=-3$ is not in $\\mathbb N$).\n\nThe number of values that the operator takes as input is called the [arity](https://arbital.com/p/3h8) of the operator. For example, the function $\\operatorname{zero}$ which takes no inputs and returns $0$ is a zero-arity operator; and the operator $f(a, b, c, d) = ac - bd$ is a four-arity operator (which can be used on any [ring](https://arbital.com/p/3gq), if we interpret multiplication and subtraction as ring operations).", "date_published": "2016-06-14T10:29:12Z", "authors": ["Eric Bruylant", "Nate Soares", "Patrick Stevens"], "summaries": [], "tags": ["Math 2", "C-Class"], "alias": "3h7"} {"id": "9e1cfac5fcde481d84b26c2819f7d989", "title": "Arity (of a function)", "url": "https://arbital.com/p/function_arity", "source": "arbital", "source_type": "text", "text": "The arity of a [function](https://arbital.com/p/3jy) is the number of parameters that it takes. For example, the function $f(a, b, c, d) = ac - bd$ is a function with arity 4, and $+$ is a function with arity 2; 2-arity functions are known as [binary functions](https://arbital.com/p/3kb).\n\nA function is said to take multiple parameters when its [domain](https://arbital.com/p/3js) is the product of multiple sets. For example, consider the function `is_older_than` that takes (as input) a person and an age and returns `yes` if the person is older than that age, and `no` otherwise. The domain of `is_older_than` is the set of all pairs of people and ages, which we might write as $(\\mathrm{People} \\times \\mathrm{Ages})$. Because this set is a product of two sets, we say that `is_older_than` is a function of two parameters, and that it has arity 2.", "date_published": "2016-05-13T21:37:27Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "3h8"} {"id": "5c265539fb0ea8ad49c1eb3974ada72d", "title": "Commutative operation", "url": "https://arbital.com/p/commutative_operation", "source": "arbital", "source_type": "text", "text": "A commutative function $f$ is a [function](https://arbital.com/p/3jy) that takes multiple inputs from a [set](https://arbital.com/p/3jz) $X$ and produces an output that does not depend on the ordering of the inputs. For example, the binary operation $+$ is commutative, because $3 + 4 = 4 + 3.$ The string concatenation function [`concat`](https://arbital.com/p/3jv) is not commutative, because `concat(\"3\",\"4\")=\"34\"` does not equal `concat(\"4\",\"3\")=\"43\"`.", "date_published": "2016-07-17T19:33:35Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait"], "alias": "3jb"} {"id": "18cef11090dd0deb39b8078d26ba4902", "title": "Commutativity: Intuition", "url": "https://arbital.com/p/commutativity_intuition", "source": "arbital", "source_type": "text", "text": "# Commutativity as an artifact of notation\n\nInstead of thinking of a commutative function $f(x, y)$ as a function that takes an ordered pair of inputs, we can think of $f$ as a function that takes an unordered [bag](https://arbital.com/p/3jk) of inputs, and therefore _can't_ depend on their order. On this interpretation, the fact that functions are always given inputs in a particular order is an artifact of our definitions, not a fundamental property of functions themselves. 
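To make this concrete (a sketch, not part of the original article; the names `combine` and `concat` are made up), a function whose input is an unordered multiset has no argument order to depend on, whereas a function of an ordered pair can treat its two arguments differently:

    from collections import Counter

    def combine(bag):
        # The input is a Counter (a multiset), so there is no argument order to depend on.
        return sum(value * count for value, count in bag.items())

    print(combine(Counter([3, 4])) == combine(Counter([4, 3])))  # True: both calls receive the same bag

    def concat(x, y):
        # An ordered-pair function can tell its first and second arguments apart.
        return x + y

    print(concat('3', '4') == concat('4', '3'))  # False: '34' is not '43'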
If we had notation for functions applied to arguments in no particular order, then commutative functions would be the norm, and non-commutative functions would require additional structure imposed on their inputs.\n\nIn a world of linear left-to-right notation, where $f(x, y)$ means \"$f$ applied to $x$ first and $y$ second\", commutativity looks like a constraint. In an alternative world where functions are applied to their inputs in parallel, with none of them distinguished as \"first\" by default, commutativity is the natural state of affairs.\n\n# Commutativity as symmetry in the output\n\nCommutativity can be seen as a form of symmetry in the output of a [binary](https://arbital.com/p/3kb) [function](https://arbital.com/p/3jy). Imagine a binary function as a physical mechanism of wheels and gears, that takes two inputs in along conveyer belts (one on the left, one on the right), manipulates those inputs (using mechanical sensors and manipulators and tools), and produces a result that is placed on an outgoing conveyer belt.\n\nThe output of a commutative function is symmetric in the way it relates to the inputs. Consider a function that takes two wooden blocks and glues them together. The function might _manipulate_ them in a symmetric fashion (if both blocks are picked up simultaneously, and have glue applied simultaneously, and are pushed together simultaneously), but the output is not symmetric: If a red block comes in on the left and a blue block comes in on the right, then the resulting block is red on the left and blue on the right, and the function is not commutative (though it is [associative](https://arbital.com/p/3h4)).\n\nBy contrast, a function that mixes the red and blue together — producing, for example, the uniform color purple or the unordered [set](https://arbital.com/p/3jz) $\\{b, d, e, l, u, r\\}$ — would be commutative, because the way that the output relates to each input is independent of which side the input came in on.\n\nA function is probably commutative if you can visualize the output itself as left/right symmetric, even if the left input was very different from the right input. For example, we can use the following visualization to show that multiplication is commutative. Imagine a function that takes in two stacks of poker chips, one on the left conveyer belt and one on the right conveyer belt. The function has a square of wet plaster in the middle. On each conveyer belt, an arm removes chips from the stack of poker chips until the stack only has one chip left, and for each chip that is removed, a perpendicular cut is put in the plaster (as in the diagram below). The plaster is then allowed to set, and the plaster pieces are then shaken out into a big bag. A third arm removes plaster pieces from the bag one at a time, and adds a poker chip to the outgoing conveyer belt for each one. 
To visualize this, see the diagram below:\n\n![Why multiplication commutes](http://i.imgur.com/Q3QBUT6.png)\n\nIf the input conveyer belts had 4 and 6 poker chips on them (respectively), then the output belt will have 24 chips on it (because 24 different chunks of plaster will have fallen into the bag), but the chips don't necessarily come from any one particular side: The output relates to each input in a manner that doesn't depend on which side they came in.\n\n%%%knows-requisite([https://arbital.com/p/1r6](https://arbital.com/p/1r6)):\nIn mathematics, a [symmetry](https://arbital.com/p/symmetry_mathematics) is some structure on a set that is preserved under some transformation of that set. In our case, the structure is the value of the output, which is preserved under the transformation of changing which side the inputs come in on. Formally, let $X^2$ be the set of all pairs of two values from the set $X;$ each point in $X^2$ is a pair $(x_1, x_2).$ $X^2$ can be visualized as an [$|X|$](https://arbital.com/p/4w5) by $|X|$ grid. Consider a function $f : X^2 \\to Y$ as a structure on $X^2$ that assigns a value $f(x_1, x_2)$ to each point $(x_1, x_2);$ this can be visualized as a terrain atop $X^2$ induced by $f$. Consider the transformation $\\operatorname{swap} : X^2 \\to X^2$ that maps $(x_1, x_2)$ to $(x_2, x_1),$ and the set $\\operatorname{swap}(X^2)$ generated by applying $\\operatorname{swap}$ to all points in $X^2$. This can be visualized as reflecting $X^2$ along a diagonal. $f$ also induces a structure on $\\operatorname{swap}(X^2).$ If the structure of $f$ on $X^2$ is identical to the structure of $f$ on $\\operatorname{swap}(X^2),$ then $f$ is a symmetry of $X^2$ under the transformation $\\operatorname{swap}$. This occurs exactly when $f(x_1, x_2)=f(x_2, x_1)$ for all $(x_1, x_2)$ pairs, which is the formal definition of commutativity.\n%%%", "date_published": "2016-08-27T12:09:10Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["We can think of commutativity either as an artifact of notation, or as a symmetry in the output of a function (with respect to the ordering of the inputs)."], "tags": ["Needs clickbait", "B-Class"], "alias": "3jj"} {"id": "3043bb47c4a090b1e0be3a52cc6c2a33", "title": "Bag", "url": "https://arbital.com/p/bag_mathematics", "source": "arbital", "source_type": "text", "text": "In mathematics, a \"bag\" is an unordered list. A bag differs from a [set](https://arbital.com/p/set_mathematics) in that it can contain the same value more than once. A bag differs from a list in that its elements are not ordered. For example, $\\operatorname{Bag}(1, 1, 2, 3) = \\operatorname{Bag}(2, 1, 3, 1) \\neq \\operatorname{Bag}(1, 2, 3).$", "date_published": "2016-05-12T00:17:03Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Stub"], "alias": "3jk"} {"id": "bf25aa6531191203d990abfefabb3c3c", "title": "String (of text)", "url": "https://arbital.com/p/text_string", "source": "arbital", "source_type": "text", "text": "A string (of text) is a series of letters (often denoted by quote marks), such as `\"abcd\"` or `\"hello\"` or `\"this is a string\"`. 
For example, the function [concat](https://arbital.com/p/3jv) takes two strings as inputs and puts them together, i.e., `concat(\"one\",\"two\")=\"onetwo\"`.", "date_published": "2016-05-12T21:17:27Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Definition"], "alias": "3jr"} {"id": "44ff3f25dce5c8a1acf66482f58c5852", "title": "Domain (of a function)", "url": "https://arbital.com/p/function_domain", "source": "arbital", "source_type": "text", "text": "The domain $\\operatorname{dom}(f)$ of a [function](https://arbital.com/p/3jy) $f : X \\to Y$ is $X$, the set of valid inputs for that function. For example, the domain of $+$ is the set of all pairs of numbers, and the domain of the division operation is the set of all pairs $(x, y)$ of numbers where $y$ is non-zero.\n\nVisualizing a function as a map that takes every point in an input set to one point in an output set, the domain is the input set (pictured on the left in red in the image below).\n\n![Domain, Codomain, and Image](http://i.imgur.com/ZZgVHaI.png)", "date_published": "2016-05-14T03:12:53Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "3js"} {"id": "56f584177f49c2c442487aa05bfd425c", "title": "concat (function)", "url": "https://arbital.com/p/string_concatenation", "source": "arbital", "source_type": "text", "text": "The string concatenation function `concat` puts two [strings](https://arbital.com/p/3jr) together, i.e., `concat(\"one\",\"two\")=\"onetwo\"`.", "date_published": "2016-05-12T21:16:32Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3jv"} {"id": "faa7d406ea14a304fc6386c082a3bf91", "title": "Function", "url": "https://arbital.com/p/function", "source": "arbital", "source_type": "text", "text": "Intuitively, a function $f$ is a procedure (or machine) that takes an input and performs some operations to produce an output. For example, the function \"+\" takes a pair of numbers as input and produces their sum as output: on input (3, 6) it produces 9 as output, on input (2, 18) it produces 20 as output, and so on.\n\nFormally, in mathematics, a function $f$ is a relationship between a [set](https://arbital.com/p/3jz) $X$ of inputs and a set $Y$ of outputs, which relates each input to exactly one output. For example, $-$ is a function that relates the pair $(4, 3)$ to $1,$ and $(19, 2)$ to $17,$ and so on. In this case, the input set is all possible pairs of [numbers](https://arbital.com/p/number), and the output set is numbers. We write [$f : X \\to Y$](https://arbital.com/p/3vl) (and say \"$f$ has the [type](https://arbital.com/p/type_mathematics) $X$ to $Y$\") to denote that $f$ is some function that relates inputs from the set $X$ to outputs from the set $Y$. For example, $- : (\\mathbb N \\times \\mathbb N) \\to \\mathbb N,$ which is read \"subtraction is a function from [natural number](https://arbital.com/p/natural_number)-pairs to natural numbers.\"\n\n$X$ is called the [domain](https://arbital.com/p/3js) of $f.$ $Y$ is called the [codomain](https://arbital.com/p/3lg) of $f$. 
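For a finite example, the requirement that a function relate each input to exactly one output can be checked directly. The sketch below (Python; the particular sets and table are made up for illustration) represents a function $f : X \\to Y$ as a lookup table:\n\n```python\n# A finite function f : X -> Y, represented as a lookup table.\nX = {'red', 'green', 'blue'}            # the domain\nY = {0, 1}                              # the codomain\nf = {'red': 1, 'green': 0, 'blue': 0}   # each input is related to exactly one output\n\n# Verify that f really is a function from X to Y:\nassert set(f.keys()) == X                        # every element of the domain has an output\nassert all(value in Y for value in f.values())   # every output lies in the codomain\n```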
We can visualize a function as a mapping between domain and codomain that takes every element of the domain to exactly one element of the codomain, as in the image below.\n\n![Domain, Codomain, and Image](http://i.imgur.com/ZZgVHaI.png)\n\n[Talk about how they're a pretty dang fundamental concept.](https://arbital.com/p/fixme:)\n[Talk about how we can think of functions as mechanisms.](https://arbital.com/p/fixme:)\n[Add pages on set theoretic and type theoretic formalizations.](https://arbital.com/p/fixme:)\n[Talk about their relationships to programming.](https://arbital.com/p/fixme:)\n[Talk about generalizations including partial functions, multifunctions, etc.](https://arbital.com/p/fixme:)\n[Give some history, e.g. Church-Turing, Ackerman, recursion, etc. ](https://arbital.com/p/fixme:)\n\n# Examples\n\nThere is a function $f : \\mathbb{R} \\to \\mathbb{R}$ from the [real numbers](https://arbital.com/p/real_number) to the real numbers which sends every real number to its square; symbolically, we can write $f(x) = x^2$. \n\n(TODO)", "date_published": "2016-06-08T17:21:49Z", "authors": ["Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Mark Chimes"], "summaries": [], "tags": ["Start", "Needs lenses", "Work in progress", "Needs clickbait"], "alias": "3jy"} {"id": "94798c4015751d28c5bcab6847823aac", "title": "Set", "url": "https://arbital.com/p/set_mathematics", "source": "arbital", "source_type": "text", "text": "A __set__ is an unordered collection of distinct [objects](https://arbital.com/p/-5xx). The objects in a set are typically referred to as its __elements__. A set is usually denoted by listing all of its elements between braces. For example, $\\{1, 3, 2\\}$ denotes a set of three numbers, and because sets are unordered, $\\{3, 2, 1\\}$ denotes the same set. $\\{1, 2, 2, 3, 3, 3\\}$ does not denote a set, because the elements of a set must be distinct.\n\nAnother way to denote sets is the so-called abstraction method, in which the members of a set are given an explicit description, leaving no need for listing them. For example, the set from the example above $\\{1, 3, 2\\}$ can be described as the set of all natural numbers $x$ which are less than $4$. Formally that is denoted as $\\{x \\mid (x < 4) \\text{ and } (x \\text{ is a natural number})\\}$.\n\n[I am new to using Latex for editing, so I am not sure if this is the best way to display what's above. Also, \"x is a natural number\" is the same as x∈N but we did not get to the membership operator yet.](https://arbital.com/p/comment:)\n\nUsing the abstraction method allows for denoting sets with infinitely many elements, which would be impossible by listing them all. For example, the set $\\{x \\mid x = 2n \\text{ for some natural } n \\}$ is the set of all even numbers.\n\nA set doesn't need to contain things of the same type, nor does it need to contain things that can all be brought to the same place: We could define a set $S$ that contains the apple nearest you, the left shoe which you wore last, the number 17, and London (though it's not clear why you'd want to). Rather, a set is an arbitrary boundary that we draw around some collection of objects, for our own purposes.\n\nThe use of sets is that we can manipulate representations of sets and study relationships between sets without concern for the actual objects in the sets. 
For example, we can say \"there are 35 ways to choose three objects from a set of seven objects\" regardless of whether the objects in the set are apples, people, or abstract concepts. %%note: This is an example of [abstracting over the objects](https://arbital.com/p/abstract_over_objects).%%\n\nIt's also worth noting that a set of elements is itself a single distinct object, different from the things it contains, and in fact, one set can contain other sets among its elements. For example, $\\{1,2,\\{1,2\\}\\}$ is a set containing three elements: $1$, $2$ and $\\{1,2\\}$.\n\n##Examples\n\n - $\\{1,5,8,73\\}$ is a set, containing numbers $1$, $5$, $8$ and $73$;\n - $\\{\\{0,-3,8\\}\\}$ is a set, containing __one__ element — the set $\\{0,-3,8\\}$;\n - $\\{\\text{Mercury}, \\text{Venus}, \\text{Earth}, \\text{Mars} \\}$ is a set of four planets;\n - $\\{x \\mid x \\text{ is a human, born on 01.01.2000} \\}$ is a set all people whose age is the same as the current year number;\n - $\\{\\text{author's favorite mug}, \\text{Arbital's main page}, 73, \\text{the tallest man born in London}\\}$ is a set of four seemingly random objects.\n\n## Set membership\n*Main page: [Set membership](https://arbital.com/p/set_membership)*\n\nSet membership can be stated using the symbols $∈$ and $∉$. They describe the contents of a set. $∈$ indicates what is in a set. $∉$ indicates what is **not** in a set. For example, \"$x ∈ A$\" translated into English is \"$x$ is a member of the set $A$.\" and \"$x ∉ A$\" translates to \"$x$ is **not** in the set $A$.\"\n\n##Set cardinality\n*Main page: [https://arbital.com/p/4w5](https://arbital.com/p/4w5)*\n\nThe size of a set is called its __cardinality__. If $A$ is a finite set then the cardinality of $A$, denoted $|A|$, is the number of elements $A$ contains. When $|A| = n$, we say that $A$ is a set of cardinality $n$. There exists a [bijection](https://arbital.com/p/499) from any finite set of cardinality $n$ to the set $\\{0, ..., (n-1)\\}$ containing the first $n$ natural numbers. We can generalize this idea to infinite sets: we say that two infinite sets have the same cardinality if there exists a bijection between them. Any set in bijective correspondence with [$\\mathbb N$](https://arbital.com/p/45h) is called __countably infinite__, while any infinite set that is not in bijective correspondence with $\\mathbb N$ is call __uncountably infinite__. All countably infinite sets have the same cardinality, whereas there are multiple distinct uncountably infinite cardinalities.\n\n## See also\n\n* [https://arbital.com/p/5s5](https://arbital.com/p/5s5) - Common [operations](https://arbital.com/p/3jy) in set theory.\n* [https://arbital.com/p/sets_examples](https://arbital.com/p/sets_examples) - Examples of significant sets.\n* [https://arbital.com/p/2w0](https://arbital.com/p/2w0) - A concrete exercise in comparing the cardinalities of infinite sets", "date_published": "2016-08-29T00:54:56Z", "authors": ["Kevin Clancy", "Ilia Zaichuk", "Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Travis Rivera"], "summaries": [], "tags": ["Needs clickbait", "C-Class"], "alias": "3jz"} {"id": "b8c0050059a0641d200f9cf39e7a5fb0", "title": "Binary function", "url": "https://arbital.com/p/binary_function", "source": "arbital", "source_type": "text", "text": "A binary function $f$ is a [function](https://arbital.com/p/3jy) of two inputs (i.e., a function with [arity](https://arbital.com/p/3h8) 2). 
For example, $+,$ $-,$ $\\times,$ and $\\div$ are all binary functions.", "date_published": "2016-05-13T21:50:16Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Math 2", "Needs clickbait", "Stub"], "alias": "3kb"} {"id": "4875392fa378d60e236abc3d9e98348c", "title": "Associativity: Intuition", "url": "https://arbital.com/p/associativity_intuition", "source": "arbital", "source_type": "text", "text": "Associative functions can be interpreted as families of functions that reduce lists down to a single output by combining adjacent elements in any order. Alternatively, associativity can be seen as a generalization of \"listyness,\" which captures and generalizes the \"it doesn't matter whether you added [a, b] to c or a to [b, c], the result is [a, b, c] regardless\" aspect of lists.\n\nThere are many different ways for a function to be associative, so it is difficult to provide a single litmus test for looking at a function and telling whether it associates (aside from just checking the associative axiom directly). However, a few heuristics can be used to make a good guess.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Associative operators as natural functions on lists\n\nThe [generalized associative law](https://arbital.com/p/3ms) says that an [associative](https://arbital.com/p/3h4) [function](https://arbital.com/p/3jy) $f : X \\times X \\to X$ gives rise to a method for combining any non-empty list of $X$ elements into a single output, where the order in which adjacent elements are combined doesn't affect the result. We can flip that result around, and interpret associative operators as the pairwise versions of a certain class of \"natural\" functions for combining the elements of a list.\n\nOn this interpretation, we start by noting that some methods for reducing a list down to a single element can be broken down into pairwise combinations of adjacent elements, while others can't. For example, when we're trying to compute $3 + 4 + 5 + 6,$ we can pick any two adjacent elements and start by combining those using the binary version of $+$. But when we're trying to compute `adjacent_ones([0, 0, 0, 1, 1, 0])` to check whether the list has any two adjacent ones, we're going to run into trouble if we only look for adjacent ones in the pairs (0, 0), (0, 1), and (1, 0).\n\nThe lists that can be reduced by pairwise combination of adjacent pairs have a nice locality property; the result can be computed only by looking at adjacent elements (without worrying about the global structure). Locality is a common idiom in physics and mathematics, so we might start by asking what sort of functions on lists have this locality property. The answer is \"any function that reduces a list by pairwise combinations of adjacent elements such that the order doesn't matter.\" If we formalize that notion, we get the result that any function on lists with this locality property corresponds (in a one-to-one fashion) to an associative operation. Thus, we can view associativity as the mathematical formalization of this nice \"locality\" property on lists.\n\nEmpirically, this locality property turns out to be quite useful for math, physics, and in computer programming, as evidenced by the commonality of associative operators.
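To see this locality property concretely, the sketch below (Python; the helper `reduce_adjacent` and the combiner `both_ones` are invented for this illustration) reduces a list by repeatedly combining a randomly chosen pair of adjacent elements. For an associative operator like $+$, the answer never depends on those choices, whereas a pairwise attempt at `adjacent_ones` cannot recover the global structure it needs:\n\n```python\nimport random\n\ndef reduce_adjacent(f, xs):\n    # Repeatedly combine a randomly chosen pair of adjacent elements.\n    xs = list(xs)\n    while len(xs) > 1:\n        i = random.randrange(len(xs) - 1)\n        xs[i:i + 2] = [f(xs[i], xs[i + 1])]\n    return xs[0]\n\n# Addition is associative: every combination order gives the same result.\nassert all(reduce_adjacent(lambda a, b: a + b, [3, 4, 5, 6]) == 18 for _ in range(100))\n\n# A local combiner that only reports whether the two elements it sees are both ones\n# cannot compute adjacent_ones: the list below does contain adjacent ones (the right\n# answer is 1), but every pairwise combination order yields 0.\nboth_ones = lambda a, b: 1 if (a == 1 and b == 1) else 0\nassert all(reduce_adjacent(both_ones, [0, 0, 0, 1, 1, 0]) == 0 for _ in range(100))\n```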
See, for example, [https://arbital.com/p/3g8](https://arbital.com/p/3g8), or the pages on [semigroups](https://arbital.com/p/algebraic_semigroup) and [monoids](https://arbital.com/p/3h3).\n\n# Associativity as a generalization of \"list\"\n\nThe above interpretation gives primacy to lists, and interprets associative operators in terms of natural functions on lists. We can invert that argument by treating associativity as a _generalization_ of what it means for something to act \"list-like.\"\n\nA list $[a, b, c, d, \\ldots]$ is a set of elements that have been combined by some \"combiner\" function, where the order of the elements matters, but the order _in which they were combined_ does not matter. For example, if we combine $a$ with $b$ (forming $[a, b]$) and then combine that with $c$, then we get the same list as if we combine $b$ and $c$ into $[b, c]$ first, and then combine $a$ with that.\n\nThe very fact that we can unambiguously say \"the list $[a, b, c]$\" without worrying about the order that the elements were combined in means that lists are built out of an associative \"combination\" operator. On this interpretation, associativity is capturing part of the essence of listyness, and associativity _in general_ generalizes this notion. For example, associative operators are allowed to be a little forgetful about what exact elements you combined (e.g., 3 + 4 = 2 + 5) so long as you retain the \"it doesn't matter what order you combine the things in\" property. In other words, we can view associativity as \"part of what it means to be list-like.\"\n\n(One particularly important property of lists — namely, that they can be empty — is not captured by associativity alone. Associative operators on sets that have an element that acts like an \"empty list\" are called \"monoids.\" For more on the idea of generalizing the notion of \"list\", refer to the page on [monoids](https://arbital.com/p/3h3).)\n\n# Associative mechanisms\n\nThe above two interpretations give an intuition for what it _means_ that a function is associative. This still leaves open the question of _how_ a function can be associative. Imagine $f : X \\times X \\to X$ as a [physical mechanism](https://arbital.com/p/3mb) of wheels and gears. Someone says \"$f$ is associative.\" What does that _mean,_ in terms of the function's physical mechanisms? What should we expect to see when we pop the function open, given the knowledge that it \"is associative\"?\n\nRecall that associativity says that the two methods for combining two instantiations of the function yield the same output:\n\n![Associative paths](http://i.imgur.com/Ezs1P8l.png)\n\nThus, the ultimate physical test of associativity is hooking up two instantiations of $f$ as in the left diagram, and then checking whether dragging the mechanisms of the lower-right instantiation above the mechanisms of the upper-left instantiation (thereby reconfiguring the system according to the diagram on the right) causes the behavior of the overall system to change. What happens when the right-hand-side instantiation is given access to the middle input first versus second? Does that affect the behavior at all? If not, $f$ is associative.\n\nThis is not always an easy property to check by looking at the mechanisms of $f$ alone, and sometimes functions that appear non-associative (at first glance) turn out to be associative by apparent coincidence.
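When the set of possible inputs is small and finite, there is no need to inspect the mechanism at all: one can simply check the associativity axiom on every triple. A minimal sketch (Python; `is_associative` is an invented helper, and arithmetic modulo 5 is just a convenient finite example):\n\n```python\nfrom itertools import product\n\ndef is_associative(f, elements):\n    # Check f(f(x, y), z) == f(x, f(y, z)) for every triple of elements.\n    return all(f(f(x, y), z) == f(x, f(y, z))\n               for x, y, z in product(elements, repeat=3))\n\nelements = range(5)\nassert is_associative(lambda x, y: (x + y) % 5, elements)      # addition mod 5 associates\nassert not is_associative(lambda x, y: (x - y) % 5, elements)  # subtraction mod 5 does not\n```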
In other words, there are many different ways for a function to be associative, so it is difficult to give a single simple criterion for determining associativity by looking at the internals of the function. However, we can use a few heuristics that help one distinguish associative functions from non-associative ones.\n\n## Heuristic: Can two copies of the function operate in parallel?\n\n$f$ is associative if, when two copies of $f$ are used to reduce three inputs to one output, changing whether the right-hand copy gets access to the middle tape first or second does not affect the output. One heuristic for checking whether this is the case is to check whether both copies of $f$ can make use of the middle input at the same time, without getting in each other's way. If so, $f$ is likely associative.\n\nFor example, consider an implementation of $+$ that gets piles of poker chips as input (where a pile of $n$ chips represents the number $n$) and computes the output by simply sweeping all the poker chips from its input belts onto its output belt. To make a function that adds three piles of chips together, you could set up two two-pile adders in the configuration of the diagram on the left, but you could also have two two-tape sweepers operating on three tapes in parallel, such that they both sweep the middle tape. This parallelization wouldn't change the output, and thus $+$ is associative.\n\nBy contrast, consider a mechanism that takes wooden blocks as input, and glues them together, and nails silver-colored caps on either end of the glued block. For example, if you put in a red block on the left and a blue block on the right, you get a silver-red-blue-silver block in the output. You could set up two copies of these like the diagram on the left, but if you tried to parallelize them, you'd get into trouble — each mechanism would be trying to nail one of its caps into the place that the other mechanism was attempting to apply glue. And indeed, this mechanism is non-associative.\n\nThis heuristic is imperfect. Some mechanisms that seem difficult to parallelize are still associative. For example, consider the multiplier mechanism, which takes two poker piles as input and puts a copy of the left pile onto the output tape for every chip in the right pile. It would be difficult to parallelize two copies of this function: One would be trying to count the chips in the middle pile while the other was attempting to copy the chips in the middle pile, and the result might not be pretty. However, multiplication _is_ associative, because a pile of $x$-many copies of a ($y$ copies of $z$)-many poker chips has the same number of chips as a pile of ($x$ copies of $y$)-many copies of $z$-many poker chips. \n\n## Heuristic: Does the output interpretation match both input interpretations?\n\nAnother (vaguer) heuristic is to ask whether the output of the function should _actually_ be treated as the same sort of thing as the input to the function. For example, recall the `adjacent_ones` function from above, which checks a list for adjacent ones, and returns `1` if it finds some and `0` otherwise. The inputs to `adjacent_ones` are 0 and 1, and the output is 0 or 1, but the output interpretation doesn't quite match the input interpretation: Intuitively, the output is _actually_ intended to mean \"yes there were adjacent ones\" or \"no, there weren't adjacent ones\", and so applying `adjacent_ones` to the output of `adjacent_ones` is possible but ill-advised.
If there is a mismatch between the output interpretation and at least one of the input interpretations, then the function probably isn't associative.\n\nFor example, imagine a person who is playing a game that works as follows. The board has three positions: red, green, and blue. The player's objective is to complete as many clockwise red-green-blue cycles as possible, without ever backtracking in the counter-clockwise direction.\n\n![3-cycle board](http://i.imgur.com/bxCnXUs.png)\n\nEach turn, the game offers them a choice of one of the three spaces, and they get to choose whether to travel to that square or stay where they are. Clearly, their preferences depend on where they currently are: If they're on \"red\", \"green\" is a good move and \"blue\" is a bad one; but if they're on \"blue\" then choosing \"green\" is ill-advised. We can consider a binary function $f$ which takes their current position on the left and the proposed position on the right, and returns the position that the player prefers. For example, $f(red,blue)=red,$ $f(red,green)=green,$ $f(blue,blue)=blue,$ $f(blue,green)=blue.$ In this case, the interpretation of the left input is a \"player position,\" the interpretation of the right input is an \"offered move\", and the interpretation of the output is the resulting \"player position.\" The output interpretation mismatches one of the input interpretations, which implies that $f$ probably isn't associative, and indeed it is not: $f(f(red, green), blue)=blue,$ whereas $f(red, f(green, blue))=red.$ The former expression can be interpreted as \"where the player would be if they started at red, and were then offered green, and were then offered blue.\" The latter expression doesn't have a great interpretation, because it's feeding the output of $f(green, blue)$ (a player position) in as an \"offered move.\"\n\nIf the interpretation of the output (in this case, \"player position\") mismatches the interpretations of at least one of the inputs, then the function likely isn't associative. However, this heuristic is also imperfect: The most obvious interpretations of the inputs and outputs to the subtraction function are \"they're all just numbers,\" and subtraction still fails to associate.\n\n## Further discussion\n\nThere are many different ways for a function to be associative, so it is difficult to give a simple litmus test. The ultimate test is always to imagine using two copies of $f$ to combine three inputs into one, and check whether the result changes depending on whether the left-hand copy of $f$ gets to run first (in which case it gets to access the second input belt at the source) or second (in which case its right-hand input is the right-hand copy's output). For examples of functions that pass or fail this ultimate test, refer to the [examples page](https://arbital.com/p/3mt).", "date_published": "2016-08-19T21:22:18Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["Associative functions can be interpreted as families of functions that reduce lists down to a single output by combining adjacent elements in any order.
Alternatively, associativity can be seen as a generalization of \"listyness,\" which captures and generalizes the \"it doesn't matter whether you added [a, b] to c or a to [b, c], the result is [a, b, c] regardless\" aspect of lists.\n\nThere are many different ways for a function to be associative, so it is difficult to provide a single litmus test for looking at a function and telling whether it associates (aside from just checking the associative axiom directly). However, a few heuristics can be used to make a good guess."], "tags": ["Needs clickbait", "B-Class"], "alias": "3kc"} {"id": "646d508e45e254d57e208c96c793af46", "title": "List", "url": "https://arbital.com/p/list_mathematics", "source": "arbital", "source_type": "text", "text": "A list is an ordered collection of objects, such as `[0, 1, 2, 3]` or `[\"red\", \"blue\", 0, \"shoe\"]`. Lists are generally denoted by square brackets, with elements separated by commas. Lists allow repetition (unlike [sets](https://arbital.com/p/3jz)) and are ordered (unlike [bags](https://arbital.com/p/3jk)).\n\nSometimes lists are typed, in which case they contain only one type of element. For example, `[0, 1, 2, 3]` is a well-typed list of numbers, whereas the list `[\"red\", \"blue\", 0, \"shoe\"]` is not well-typed (because it contains both numbers and [strings](https://arbital.com/p/3jr)). The set of all lists of type $X$ is usually denoted $[X]$ or $X^{\\le \\omega}$, where $\\omega$ is the smallest infinite [limit ordinal](https://arbital.com/p/limit_ordinal). The set of all finite lists of type $X$ is usually denoted $X^{< \\omega}$, and the set of all infinite lists of type $X$ is usually denoted $X^{\\omega}$.", "date_published": "2016-05-12T22:04:40Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Math 2"], "alias": "3kd"} {"id": "4293fee2305bc0f705cecd3933f8ebc0", "title": "Codomain (of a function)", "url": "https://arbital.com/p/function_codomain", "source": "arbital", "source_type": "text", "text": "The codomain $\\operatorname{cod}(f)$ of a [function](https://arbital.com/p/3jy) $f : X \\to Y$ is $Y$, the set of possible outputs for the function. For example, the codomain of [concat](https://arbital.com/p/3jv) is the set of all [strings](https://arbital.com/p/3jr), and the codomain of the function $+$ is the set of all numbers.\n\nVisualizing a function as a map that takes every point in an input set to one point in an output set, the codomain is the output set (pictured on the right in blue in the image below).\n\n![Domain, Codomain, and Image](http://i.imgur.com/ZZgVHaI.png)\n\nThe codomain of a function is not to be confused with the [image](https://arbital.com/p/3lh) of a function, which is the set of points in $Y$ that can actually be reached by following $f$, and which may not include the whole set $Y$. For example, consider all the functions that take a [real number](https://arbital.com/p/real_number) as input and produce another real number. Many of those functions cannot be made to produce every possible real number: For example, the function $\\operatorname{square} : \\mathbb R \\to \\mathbb R$ only produces non-negative numbers.
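In programming terms, the codomain behaves like a declared return type, while the image is the set of values the function actually produces. A small sketch of the $\\operatorname{square}$ example (Python; the finite sample of inputs merely stands in for the real domain $\\mathbb R$):\n\n```python\ndef square(x: float) -> float:\n    # The annotation 'float' plays the role of the codomain: it names the kind\n    # of output the function produces, not which particular values it reaches.\n    return x * x\n\n# Sampling part of the domain shows that the image is a smaller set:\nsample_domain = [x / 10 for x in range(-30, 31)]\nimage_sample = {square(x) for x in sample_domain}\nassert all(y >= 0 for y in image_sample)   # square never reaches the negative reals\n```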
For more on the distinction, see the page on [codomain vs image](https://arbital.com/p/3lv).\n\n[Add a lens talking about how codomains are arbitrary but often natural/useful. Use example like how we can consider $+$ to have codomain $\\mathbb N$, $\\mathbb Z$, etc., and examples like Ackerman where the codomain \"nat\" makes by far the most sense ](https://arbital.com/p/fixme:)", "date_published": "2016-05-14T03:12:41Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Definition", "Needs accessible summary"], "alias": "3lg"} {"id": "04de1087ede6971ba7c2a7b56ecfe337", "title": "Image (of a function)", "url": "https://arbital.com/p/function_image", "source": "arbital", "source_type": "text", "text": "The image $\\operatorname{im}(f)$ of a [function](https://arbital.com/p/3jy) $f : X \\to Y$ is the set of all possible outputs of $f$, which is a subset of $Y$. Using [set builder notation](https://arbital.com/p/3lj), $\\operatorname{im}(f) = \\{f(x) \\mid x \\in X\\}.$\n\nVisualizing a function as a map that takes every point in an input set to one point in an output set, the image is the set of all places where $f$-arrows land (pictured as the yellow subset of $Y$ in the image below).\n\n![Domain, Codomain, and Image](http://i.imgur.com/ZZgVHaI.png)\n\nThe image of a function is not to be confused with the [codomain](https://arbital.com/p/3lg), which is the _type_ of output that the function produces. For example, consider the [Ackermann function](https://arbital.com/p/43x), which is a very fast-growing (and difficult to compute) function. When someone asks what sort of thing the Ackermann function produces, the natural answer is not \"something from a sparse and hard-to-calculate set of numbers that I can't tell you off the top of my head\"; the natural answer is \"it outputs a number.\" In this case, the codomain is \"number\", while the image is the sparse and hard-to-calculate subset of numbers. For more on this distinction, see the page on [codomain vs image](https://arbital.com/p/3lv).", "date_published": "2016-06-10T14:45:50Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "3lh"} {"id": "8d1bd0cc14608fab7fb0cf0e4a2b81e3", "title": "Set builder notation", "url": "https://arbital.com/p/set_builder_notation", "source": "arbital", "source_type": "text", "text": "$\\{ 2n \\mid n \\in \\mathbb N \\}$ denotes the set of all even numbers, using set builder notation. Set builder notation involves an expression on the left and a series of constraints on the right, separated by a pipe and placed between curly braces. The expression on the left makes use of variables that are introduced and constrained on the right. The result denotes the set of all possible values on the left-hand side that obey the constraints on the right-hand side. 
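Readers who program may recognize set builder notation as the ancestor of set comprehensions. As a side note, the even-numbers example above can be written almost verbatim in Python, restricted to a finite range of $n$ since a program cannot enumerate an infinite set:\n\n```python\n# { 2n | n is a natural number }, restricted here to n < 10:\nevens = {2 * n for n in range(10)}\nassert evens == {0, 2, 4, 6, 8, 10, 12, 14, 16, 18}\n\n# { x | x < 4 and x is a natural number }, restricted to a finite search range:\nsmall_naturals = {x for x in range(100) if x < 4}\nassert small_naturals == {0, 1, 2, 3}\n```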
For example, $\\{ (x, y) \\mid x \\in \\mathbb R, y \\in \\mathbb R, x \\cdot y = 1 \\}$ is the set of all pairs of real numbers whose product is 1.", "date_published": "2016-05-13T21:58:21Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Definition"], "alias": "3lj"} {"id": "44f00f88aaefcecdad61331cc3ff70c9", "title": "Range (of a function)", "url": "https://arbital.com/p/function_range", "source": "arbital", "source_type": "text", "text": "The \"range\" of a function is an ambiguous term that is generally used to refer to either the function's [codomain](https://arbital.com/p/3lg) or [image](https://arbital.com/p/3lh).", "date_published": "2016-05-13T21:39:55Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Definition"], "alias": "3lm"} {"id": "75272a01fafb8c61495e5f275e502b48", "title": "Codomain vs image", "url": "https://arbital.com/p/codomain_vs_image", "source": "arbital", "source_type": "text", "text": "A function $f : X \\to Y$ with [domain](https://arbital.com/p/3js) $X$ and [codomain](https://arbital.com/p/3lg) $Y$ cannot necessarily produce all values in $Y$. For example, consider the function that takes a [real number](https://arbital.com/p/real_number) and squares it. We could say that this function has codomain $\\mathbb R$, because it produces a real number, even though its image is the set of non-negative real numbers.\n\n![Domain, Codomain, and Image](http://i.imgur.com/EHBsE1C.png)\n\nIf $f$ maps $X$ to a subset $I$ of $Y$, why not just say that its codomain is $I$ and do away with the distinction between codomain and image? There are at least two cases where the distinction is useful.\n\n1. It is useful to distinguish codomain from image in cases where the _type of thing_ that the function produces is easy to specify, but the _specific values_ are hard to calculate. Consider, for instance, the [Ackermann function](https://arbital.com/p/43x). It definitely produces numbers (i.e. its codomain is $\\mathbb N$), but it's not easy to figure out what numbers (exactly) it produces. When talking about what sort of outputs to expect from the Ackermann function, the natural answer is \"a number,\" rather than \"either 1 through 7, 9, 11, 13, 29, 61, 125, 65533, $2^{65536} − 3$, or some other ungodly large numbers that I lack the resources to calculate.\" For this purpose, the codomain can be thought of as a natural/simple boundary drawn around a complex image.\n\n2. Cases where we're considering a set of all functions that map into a certain set. For example, if we're discussing all possible functions from the [natural numbers](https://arbital.com/p/natural_number) to the set $\\{0, 1\\},$ we want to be able to talk about \"functions with codomain $\\{0, 1\\}$\" even if some of those functions are things like \"always return 0\" (which have an image smaller than $\\{0, 1\\}$). 
For this purpose, the codomain can be thought of as a context-dependent tool that's useful when considering (or making use of) a bunch of different functions that are all producing objects of the same type.", "date_published": "2016-06-10T14:46:07Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "Patrick Stevens"], "summaries": ["It is useful to distinguish codomain from image both (a) when the type of thing that the function produces is simple to describe, but the specific objects produced are complicated and hard to identify; and (b) when talking about (or making use of) a collection of functions from a set $X$ to a set $Y$, many of which have an image which is a subset of $Y$."], "tags": ["Needs clickbait"], "alias": "3lv"} {"id": "38fa832a63176192af9e3e1db8ece17a", "title": "Function: Physical metaphor", "url": "https://arbital.com/p/function_physical_metaphor", "source": "arbital", "source_type": "text", "text": "Many functions can be visualized as physical mechanisms of wheels and gears, that take their inputs on conveyor belts, manipulate them (using mechanical sensors and tools), and produce an output which is placed on an outgoing conveyor belt.\n\n![A binary function](http://i.imgur.com/lKxAXcK.png)\n\nFor example, we can visualize the function $+$ as a function that takes two stacks of poker chips in as input (one on the left conveyor belt and one on the right conveyor belt) and produces its output by dumping both stacks onto the output belt. We can visualize the function $\\times$ as a function that also gets a stack of poker chips on each input belt, and which puts a copy of the left stack onto the output tape for each poker chip in the right stack.\n\nAll [computable](https://arbital.com/p/computable_function) functions can (in principle) be implemented by a physical system of wheels and gears (or circuits and wires, etc.), and it is conjectured that the computable functions are the _only_ functions that can be implemented by physical systems. (This conjecture is known as the [Church-Turing thesis](https://arbital.com/p/church_turing_thesis).)", "date_published": "2016-05-15T05:55:52Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["Many functions can be visualized as physical mechanisms of wheels and gears, that take their inputs on conveyor belts, manipulate them (using mechanical sensors and tools), and produce an output which is placed on an outgoing conveyor belt.\n\n![A binary function](http://i.imgur.com/lKxAXcK.png)\n\nFor example, we can visualize the function $+$ as a function that takes two stacks of poker chips in as input (one on the left conveyor belt and one on the right conveyor belt) and produces its output by dumping both stacks onto the output belt."], "tags": ["Needs clickbait"], "alias": "3mb"} {"id": "4974327384eb96b49b6406cadc446920", "title": "Generalized associative law", "url": "https://arbital.com/p/generalized_associative_law", "source": "arbital", "source_type": "text", "text": "Given an [associative](https://arbital.com/p/3h4) [operator](https://arbital.com/p/3h7) $\\cdot$ and a [list](https://arbital.com/p/3kd) $[a, b, c, \\ldots]$ of parameters, all ways of reducing the list to a single element by combining adjacent pairs yield the same result.
In other words, while associativity only talks about the behavior of $f$ when combining three elements, this is sufficient to ensure that $f$ has similar good behavior when combining arbitrarily many elements.\n\nFor example, consider the list $[a, b, c, d, e],$ and abbreviate $a \\cdot b$ as $ab.$ $((ab)c)(de)$ then denotes the output created by first combining $a$ and $b$, and then combining that with $c$, and then combining $d$ and $e$, and then putting the two results together. $a(b(c(de)))$ denotes the result of combining $d$ and $e$ first, and then combining $c$ with that, and then combining $b$ with that, and then combining $a$ with that. The generalized associative law says that all possible orders of reducing the list (by combining adjacent elements) yield the same result, which means we are justified in dropping all parentheses (and just writing $abcde$ for the result of reducing the list $[a, b, c, d, e]$ using $\\cdot$), because the order of application does not matter.\n\nVisually, given an associative [function](https://arbital.com/p/3jy) $f$, there are five ways to use three copies of $f$ to combine four inputs into one output:\n\n![Five ways of hooking up three instantiations of f](http://i.imgur.com/OvLsjqr.png)\n\nThe generalized associative law says that all five yield the same result. Because all five combinations are equivalent, there is only one function $f_4$ that uses three copies of $f$ to combine four inputs into one output. Similarly, there is only one function $f_5$ that combines five elements into one using four applications of $f,$ and so on.\n\n# Proof sketch\n\nAssociativity of $\\cdot$ lets us shift parentheses to the left or right when manipulating expressions that use $\\cdot$ multiple times in a row, because associativity says that $(x\\cdot y) \\cdot z = x \\cdot (y \\cdot z).$ Abbreviating $x \\cdot y$ as $xy,$ we can use this rule to prove the generalized associative law.\n\nLet a \"reduction path\" of a list $[a, b, c, d]$ be an expression of the form $a(b(cd)),$ where the parentheses tell us explicitly how to use applications of $\\cdot$ on adjacent elements to combine the elements of the list into a single output. To prove the generalized associative law for lists of length 4, observe that any reduction path can be turned into any other, by repeatedly shifting the parentheses left or right. For example:\n\n$a(b(cd))=a((bc)d)=(a(bc))d=((ab)c)d=(ab)(cd).$\n\nIn this case, the expression is transformed by shifting the inner parentheses left, then shifting the outer parentheses left, then shifting the inner parentheses left again, then shifting the outer parentheses right, with each step allowed by associativity. By a similar method (of \"walking\" the parentheses all over the reduction path one step at a time), any two reduction paths (of the same list) can be shown equivalent.\n\n# Implications\n\nAn associative function $f : X \\times X \\to X$ gives rise to a single natural function $f_n$ for reducing a list of $n$ inputs to a single output, given any number $n \\ge 1$. ($f_1$ is the function that uses zero applications of $f,$ and thus leaves the input untouched.)
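Concretely, the whole family can be written down as soon as the binary $f$ is given. The sketch below (Python; `f` and `f_n` are invented names, with addition standing in for an arbitrary associative operator) uses a left-to-right fold as one arbitrary choice of combination order:\n\n```python\nfrom functools import reduce\n\ndef f(x, y):\n    return x + y          # stand-in for any associative binary function\n\ndef f_n(xs):\n    # The n-ary member of the family: fold the non-empty list with f, left to right.\n    # For a one-element list this returns the element untouched (this is f_1).\n    return reduce(f, xs)\n\nassert f_n([7]) == 7                 # f_1 leaves its input alone\nassert f_n([3, 4, 5, 6]) == 18       # f_4, with no need for parentheses\n\n# Reducing two lists separately and then combining gives the same answer as\n# reducing the concatenated list (a property discussed further below).\nleft, right = [1, 2, 3], [4, 5]\nassert f(f_n(left), f_n(right)) == f_n(left + right)\n```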
This means that we can see associative functions not as just binary functions, but as a family of functions for reducing lists to a single output.\n\nFurthermore, whenever we want to use an associative function to reduce a list to a single element, we can do so in a \"local\" fashion, by repeatedly combining adjacent elements in the list, without worrying about the order in which we do so. This is certainly not true for all functions that reduce a list to a single output! For example, consider the function `adjacent_ones` which takes a list of ones and zeroes and produces a `1` if the list contains any adjacent ones, and a `0` otherwise. `adjacent_ones([0,0,0,1,1,0])=1`, but we can't compute this result by just combining adjacent elements pairwise willy-nilly: If we use the order ((00)(01))(10) we get the answer `0`, when the correct answer is `1`.\n\nAssociative functions are precisely those that give rise to a family of functions for reducing lists where the result _can_ be computed by just combining adjacent elements pairwise willy-nilly. Thus, associative operators describe list operations that can be computed _locally,_ only ever looking at two adjacent elements, without worrying that some important global structure (such as \"are there two ones in a row?\") will be destroyed.\n\nThis family of functions also has the nice property that if you reduce one list $[a, b, c, \\ldots]$ to a single element $\\alpha,$ and another list $[x, y, z, \\ldots]$ to a single element $\\chi,$ then $f(\\alpha, \\chi)$ is the same as reducing $[a, b, c, \\ldots, x, y, z, \\ldots]:$ in other words, we can reduce two lists separately and then combine the results, and get the same answer as we would have gotten by combining the lists first and then reducing them to a single result.\n\n# What about empty lists?\n\nAny associative function $f$ can be used to reduce a non-empty finite list to a single element, by repeatedly applying $f$ to adjacent elements, without worrying about the order. This works for one-element lists, despite the fact that $f$ takes two parameters, because for one-element lists we can just use zero applications of $f$. But what about zero-element lists?\n\nTo create a family $f_n : X^n \\to X$ of functions on lists with the nice locality properties discussed above for all $n \\ge 0,$ we need some \"canonical\" element $0_X$ in $X$ that $f_0$ can return given an empty list, such that $0_X$ \"acts like an empty element\" with respect to $f.$ [Monoids](https://arbital.com/p/3h3) formalize this notion of \"an associative operator plus an element that acts like an empty element,\" and thus, they capture the idea of a family of functions that can reduce any list (of finite size) to a single output, regardless of what order adjacent elements are combined in.", "date_published": "2016-05-16T23:38:37Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": ["Given an [associative](https://arbital.com/p/3h4) [operator](https://arbital.com/p/3h7) $\\cdot$ and a [list](https://arbital.com/p/3kd) $[a, b, c, \\ldots]$ of parameters, all ways of reducing the list to a single element by combining adjacent pairs yield the same result.
In other words, while associativity only talks about the behavior of $f$ when combining three elements, this is sufficient to ensure that $f$ has similar good behavior when combining arbitrarily many elements. Thus, we can unambiguously drop parentheses when using $\\cdot$ to combine multiple elements. In other words, an associative function $f : X \\times X \\to X$ gives rise to a method for combining any non-empty finite list of elements of $X$ into a single output, such that the result can be computed by combining adjacent elements in any order."], "tags": ["Needs clickbait"], "alias": "3ms"} {"id": "17d9701e9bc5343934434750edf40914", "title": "Associativity: Examples", "url": "https://arbital.com/p/associativity_examples", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## Positive examples\n\n### Addition\n\n$(x + y) + z = x + (y + z)$ for all numbers $x, y,$ and $z.$ Thus, addition associates. One easy way to see this fact is to consider a [physical system](https://arbital.com/p/3mb) that implements addition, e.g., by taking two piles of poker chips (where a pile of $n$ chips represents the number $n$) in on two input belts, and producing a pile of poker chips on the output belt. This function can be implemented by simply shoving both input piles onto the output pile. Clearly, when combining three piles, it doesn't matter which piles get shoved onto the output tape in which order, so addition is associative.\n\n### Multiplication\n\n$(x \\times y) \\times z = x \\times (y \\times z)$ for all numbers $x, y,$ and $z.$ Thus, multiplication associates. It is, however, a little harder to see why this must be the case for multiplication.
A blue stack is placed on the output in the second case for each of those yellow chips, so the number of blue stacks in each case is the same.\n\n### Concatenation\n\nThe concatenation of [strings](https://arbital.com/p/3jr) is another associative function: `concat(\"a\",concat(\"b\",\"c\")) = concat(concat(\"a\",\"b\"),\"c\") = \"abc\"`.\n\nConcatenation is an example of an associative function that is not [commutative](https://arbital.com/p/3jb): When reducing a list of strings to a single string, it doesn't matter what order you combine adjacent elements in, but it _does_ matter that you leave the elements in their original order.\n\nIn fact, any function that just sticks its inputs together (possibly with some extra stuff in the middle) is associative: A function which takes wooden blocks as input, and glues them together (with a small white block in the middle) is associative, because if you put in blue, yellow, and red blocks then you get a blue-white-yellow-white-red block out the end, regardless of the order you combine adjacent blocks in.\n\n## Negative examples\n\n### Functions with a history\n\nThe function `xconcat` that concatenates its inputs and puts an \"x\" on the front: `xconcat(\"A\", xconcat(\"B\",\"C\")) = \"xAxBC\"`, but `xconcat(xconcat(\"A\", \"B\"), \"C\") = \"xxABC\"`. The problem in this case is that the output carries a trace of which adjacent elements were combined in which order, which makes the function non-associative.\n\nIn fact, any function that carries a history of the application order with it can't be associative. Thus, if the inputs carry history about which inputs were combined with which other inputs, and the output preserves (and adds to) that history, the function can't be associative. Associativity is, fundamentally, about the output not depending on which path was taken to get to it.\n\n### Subtraction\n\nSubtraction does not associate, as $(5-3)-2=0$ but $5-(3-2)=4.$ To see what went wrong, first notice that all inputs contribute either positively or negatively to the result. Label all inputs (in their original positive states) with up-arrows; we will track whether that input has a positive or negative impact on the final output. Subtraction flips its right-hand input upside down before combining it with its left-hand input, so given inputs labeled $\\uparrow$ and $\\uparrow$ we should label its output $\\uparrow\\downarrow.$ When combining three inputs, we can either combine the left two first or the right two first. If we combine the left two first, then the second subtraction is run on inputs labeled $\\uparrow\\downarrow$ and $\\uparrow,$ and produces an output $\\uparrow\\downarrow\\downarrow,$ in which the first input contributes positively and the other inputs contribute negatively. But if we combine the right two inputs first, then the second subtraction is run on inputs labeled $\\uparrow$ and $\\uparrow\\downarrow,$ and produces an output $\\uparrow\\downarrow\\uparrow,$ in which the first and third inputs contribute positively and the second contributes negatively. Thus, the contribution of the third input depends on which adjacent elements are combined in which order, so subtraction does not associate.\n\n### Cyclical preferences\n\nConsider a person who is playing a game that works as follows. The board has three positions: red, green, and blue. 
The player's objective is to complete as many clockwise red-green-blue cycles as possible, without ever backtracking in the counter-clockwise direction.\n\n![3-cycle board](http://i.imgur.com/bxCnXUs.png)\n\nEach turn, the game offers them a choice of one of the three spaces, and they get to choose whether to travel to that square or stay where they are. Clearly, their preferences depend on where they currently are: If they're on \"red\", \"green\" is a good move and \"blue\" is a bad one; but if they're on \"blue\" then choosing \"green\" is ill-advised.\n\nWe can consider a binary operation $?$ which takes their current position on the left and the proposed position on the right, and returns the position that the player prefers. Specifically:\n\n\\begin{align}\n& red \\ ?\\ red \\ &= red\\\\\n& red \\ ?\\ green \\ &= green\\\\\n& red \\ ?\\ blue \\ &= red\\\\\n\\end{align}\n\\begin{align}\n& green \\ ?\\ red \\ &= green\\\\\n& green \\ ?\\ green \\ &= green\\\\\n& green \\ ?\\ blue \\ &= blue\\\\\n\\end{align}\n\\begin{align}\n& blue \\ ?\\ red \\ &= red\\\\\n& blue \\ ?\\ green \\ &= blue\\\\\n& blue \\ ?\\ blue \\ &= blue\n\\end{align}\n\nThis function does not associate, because $(red\\ ?\\ green)\\ ?\\ blue = blue$ but $red\\ ?\\ (green\\ ?\\ blue)=red.$ To show that a function is not associative, it is sufficient to simply do out the whole function table and then find any one case that violates the axiom of associativity, as above.", "date_published": "2016-08-19T21:28:56Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["Yes: [https://arbital.com/p/Addition](https://arbital.com/p/Addition), [https://arbital.com/p/multiplication](https://arbital.com/p/multiplication), [string concatenation](https://arbital.com/p/3jv). No: [https://arbital.com/p/subtraction](https://arbital.com/p/subtraction), [https://arbital.com/p/division](https://arbital.com/p/division), a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) that concatenates strings and then puts an \"x\" on the front."], "tags": ["B-Class"], "alias": "3mt"} {"id": "03ed6e07651b113eb5417c9689e68b9c", "title": "Commutativity: Examples", "url": "https://arbital.com/p/commutativity_examples", "source": "arbital", "source_type": "text", "text": "## Positive examples\n\n### Addition\n\n$x+y = y+x$ for all numbers $x$ and $y,$ so addition commutes. One easy way to see this fact is to consider a physical system that implements addition, e.g., by taking two piles of poker chips (where a pile of $n$ chips represents the number $n$) in on two input belts, and producing a pile of poker chips on the output belt. This function can be implemented by simply shoving both input piles onto the output pile. Because a pile of chips represents the same number regardless of how those chips are arranged on the output belt, it doesn't matter which pile comes in on which input belt, so addition is commutative.\n\n### Multiplication\n\n$x \\times y = y \\times x$ for all numbers $x$ and $y,$ so multiplication also commutes. It is a bit harder to see why this must be so in the case of multiplication, as $x \\times y$ can be interpreted as \"take the $x$ input and stretch it out according to the $y$ input,\" and it's not entirely obvious that \"stretching the $x$ input according to $y$\" should yield the same result as \"stretching the $y$ input according to $x.$\"\n\nTo see that multiplication commutes, it is helpful to visualize _copying_ $x$ (rather than stretching it out) into $y$ many rows.
This allows us to visualize $x \\times y$ as a square divided into $x$ parts along one side and $y$ parts along the other side, which makes it clearer that multiplication commutes.\n\n![Multiplicaiton commutes](http://i.imgur.com/Q3QBUT6.png)\n\nIt is worth noting that the concept of \"multiplication\" generalizes beyond the concept of \"numbers,\" and in those contexts, \"stretching $x$ out according to $y$\" and \"stretching $y$ out according to $x$\" might be quite different operations. For example, [multiplication of matrices](https://arbital.com/p/matrix_multiplication) does not commute.\n\n### Maximum and minimum\n\nThe [max](https://arbital.com/p/maximum_function) and [min](https://arbital.com/p/minimum_function) functions commute, because \"which of these elements is largest/smallest?\" is a question that doesn't depend upon the order that the elements are presented in.\n\n### Rock, Paper, Scissors\n\nThe question of who won in a game of [rock-paper-scissors](https://arbital.com/p/rock_paper_scissors) is commutative. Writing $r$ for rock, $p$ for paper, $s$ for scissors, and $?$ for the binary operator saying who won, we have $r ? p = p,$ $r ? s = r,$ and $p ? s = s,$ and so on. This function is commutative: who won a game of rock-paper-scissors does not depend upon which side of the judge they were standing on.\n\nThis provides an example of a commutative function that is not [associative](https://arbital.com/p/3h4): $r?p=p?r$ and so on, but $(r?p)?s=s$ while $r?(p?s)=r.$ In other words, who won a game of rock-paper-scissors does not depend on which side of the judge they were standing on, but if you have a line of people throw \"rock,\" \"paper,\" or \"scissors,\" then the question \"who won?\" _does_ depend on which end of the line you start from.\n\n## Negative examples\n\n### Concatenation\n\nAny function that includes its inputs in its outputs _in order_ is not commutative. For example, the function `pair` which puts its inputs into a pair (such that `pair(x, y) = (x, y)`) is not commutative. The function [concat](https://arbital.com/p/3jv), which also sticks its inputs together in order, is also not commutative, though it is associative.\n\n### Division\n\nDivision is not commutative, because $x / y$ does not equal $y / x$ in general. In this case, the function is using $x$ and $y$ in a very different way. One is interpreted as a numerator that will be cut up, the other is treated as a denominator that says how many times to cut the numerator up. Numerator and denominator are not exchangeable concepts: Division needs to know which input is intended to be the numerator and which is intended to be the denominator.\n\n### Matrix multiplication\n\n[Matrix multiplication](https://arbital.com/p/matrix_multiplication) is not only non-commutative, it may fail to even make sense. Two [matrices](https://arbital.com/p/linear_algebra_matrix) can be multiplied if and only if the number of columns in the left-hand matrix is equal to the number of rows in the right-hand matrix. Thus, a $2 \\times 3$ matrix can be multiplied with a $3 \\times 5$ matrix if and only if the $2 \\times 3$ matrix is on the left. Clearly, then, matrix multiplication does not commute.", "date_published": "2016-05-15T11:23:17Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["Yes: addition, multiplication, maximum, minimum, rock-paper-scissors. 
No: subtraction, division, string concatenation, matrix multiplication."], "tags": ["Needs clickbait"], "alias": "3mv"} {"id": "b4cecce4b24baec4487ddfb6026d1ad2", "title": "Associativity vs commutativity", "url": "https://arbital.com/p/associativity_vs_commutativity", "source": "arbital", "source_type": "text", "text": "[Associativity](https://arbital.com/p/3h4) and [commutativity](https://arbital.com/p/3jb) are often confused, because they are both constraints on how a [function](https://arbital.com/p/3jy) treats its inputs, and they are both constraints saying \"these two paths for calculating the output must yield the same answer.\" The difference is that commutativity says \"this function shouldn't care what order its own arguments come in,\" while associativity says \"different ways of using this function multiple times should yield the same results.\"\n\nCommutativity is about an invariance in _one_ call to the function: If you run the function once with inputs $x$ and $y,$ you'll get the same result as if you run it once with inputs $y$ and $x.$\n\nAssociativity is about an invariance in _multiple_ calls to the function: If you combine four elements using three function calls in the order $a \\cdot (b \\cdot (c \\cdot d)),$ you'll get the same answer as if you use the three function calls in the order $((a \\cdot b) \\cdot c) \\cdot d.$\n\nCommutativity means you can swap the order of the elements. Associativity means you can drop parenthesis when combining multiple elements in a row.\n\nIf an operator $\\cdot$ is both commutative and associative, then when you're combining multiple elements in a row, you're welcome to combine them in _any_ order: For example, when adding $3 + 2 + (-7) + 5 + (-2) + (-3) + 7,$ we can re-arrange the list to get $3 - 3 + 2 - 2 + 7 - 7 + 5 = 5,$ which makes calculation easy. To see that this is the case, note that the result won't change if we swap any two adjacent elements, because associativity means that we can pick any two adjacent elements to start with and commutativity says that the order of those elements doesn't matter. Then, we can re-arrange the list however we like using a series of two-element swaps.\n[Perhaps conditionalize this parenthetical on something like \"cares about abstract algebra\" or \"wants tidbits they don't understand to point them towards cool things\" or something.](https://arbital.com/p/fixme:)\n(This is why [commutative groups](https://arbital.com/p/3h2) are especially easy to work with.)\n\n## Examples\n\nAddition is both commutative and associative (same with multiplication). Subtraction is neither commutative nor associative (same with division).\n\n[String concatenation](https://arbital.com/p/3jv) is associative but not commutative: If you're sticking a bunch of [strings](https://arbital.com/p/3jr) together end on end, then it doesn't matter which adjacent pairs you combine first, but `\"onetwo\"` is a very different string than `\"twoone\"`.\n\nRock-paper-scissors is commutative but not associative. 
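These claims are easy to spot-check mechanically. Here is a minimal Python sketch (not part of the original article; the helper names `commutes`, `associates`, and `rps` are ours) that brute-forces both properties on small sample inputs for the operations just mentioned. A check over finitely many inputs can only refute a law, not prove it, but it illustrates the difference between the two properties:

    def commutes(op, xs):
        return all(op(a, b) == op(b, a) for a in xs for b in xs)

    def associates(op, xs):
        return all(op(a, op(b, c)) == op(op(a, b), c)
                   for a in xs for b in xs for c in xs)

    # "Who won?" for rock-paper-scissors; a tie just returns the common throw.
    def rps(a, b):
        beats = {('rock', 'scissors'), ('scissors', 'paper'), ('paper', 'rock')}
        return a if a == b or (a, b) in beats else b

    nums = [-2, -1, 0, 1, 2, 3]
    strs = ["a", "b", "ab", ""]
    throws = ['rock', 'paper', 'scissors']

    add = lambda x, y: x + y        # "+" also concatenates strings in Python
    sub = lambda x, y: x - y

    print(commutes(add, nums), associates(add, nums))      # True True
    print(commutes(sub, nums), associates(sub, nums))      # False False
    print(commutes(add, strs), associates(add, strs))      # False True
    print(commutes(rps, throws), associates(rps, throws))  # True False

The sampled results match the claims above: addition passes both tests, subtraction fails both, concatenation is associative but not commutative, and the rock-paper-scissors "who won?" operator is commutative but not associative.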
The winner of \"rock v scissors\" is rock, and the winner of \"scissors v rock\" is rock, and so on; but if you have a bunch of people in a line make rock, paper, or scissors signs, then who is the winer depends on which end of the line you start from: If the line has three people, and they throw [paper, scissors](https://arbital.com/p/rock,), then the winner will be whoever threw scissors if you start from the left (by finding the winner between rock and paper, and then playing that against scissors) or rock if you start on the right (by finding the winner between paper and scissors, and then playing that against rock).\n\nThe function `pair` which puts its inputs into a pair (such that `pair(x, y) = (x, y)`) is neither commutative nor associative: `(x, y)` does not equal `(y, x)`, and `(x, (y, z))` does not equal `((x, y), z)`. The output of `pair` preserves the ordering and structure of all its inputs, and leaves a trace that allows one to distinguish which inputs it was called on when. Both commutativity and associativity require the function to be \"forgetful\" about some aspect of how it was called: Commutative functions need to be insensitive to the ordering of their arguments; associative functions need to leave no trace that can be used to figure out which inputs were combined in what order. Thus, any function that bakes a lot of history about what it was called with into the output is unlikely to be commutative or associative.\n\n## Mnemonics\n\nIf it helps you to remember which is which, consider these two mnemonics:\n\nFor \"commutativity,\" imagine that the two parameters to the function each live on one side of town and work on the other side. Each morning, they pass each other on their morning _commute,_ as they swap places.\n\nFor \"associativity,\" imagine a bunch of _associates_ all standing in a line, ranked according to their hierarchy in a law firm. Any two adjacent people are willing to converse with each other, although people on one end of the line might not be willing to associate with people on the other end.", "date_published": "2016-05-15T11:13:51Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait"], "alias": "3mw"} {"id": "99cbde297bf4c5c58b0c42d466c8f479", "title": "Eigenvalues and eigenvectors", "url": "https://arbital.com/p/eigenvalues_and_eigenvectors", "source": "arbital", "source_type": "text", "text": "Consider a linear transformation represented by a matrix $A$, and some vector $v$. If $Av = \\lambda v$, we say that $v$ is an _eigenvector_ of $A$ with corresponding eigenvalue $\\lambda$. Intuitively, this means that $A$ doesn't rotate or change the direction of $v$; it can only stretch it ($|\\lambda| > 1$) or squash it ($|\\lambda| < 1$) and maybe flip it ($\\lambda < 0$). While this notion may initially seem obscure, it turns out to have many useful applications, and many fundamental properties of a linear transformation can be characterized by its eigenvalues and eigenvectors.", "date_published": "2016-06-20T21:30:23Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Zack M. 
Davis", "Nate Soares", "Ryan Neufeld", "Qiaochu Yuan", "Eric Bruylant"], "summaries": [], "tags": [], "alias": "3n3"} {"id": "65961170c571e442ebea9dc12bc8491d", "title": "Logarithm", "url": "https://arbital.com/p/logarithm", "source": "arbital", "source_type": "text", "text": "summary(Technical): $\\log_b(n)$ is defined to be the number $x$ such that $b^x = n.$ Thus, logarithm functions satisfy the following properties, among others:\n\n- $\\log_b(1) = 0$\n- $\\log_b(b) = 1$\n- $\\log_b(x\\cdot y) = log_b(x) + \\log_b(y)$\n- $\\log_b(\\frac{x}{y}) = \\log_b(x) - \\log_b(y)$\n- $\\log_b(x^n) = n\\log_b(x)$\n- $\\log_b(\\sqrt[https://arbital.com/p/n](https://arbital.com/p/n){x}) = \\frac{\\log_b(x)}{n}$\n- $\\log_b(n) = \\frac{\\log_a(n)}{\\log_a(b)}$\n\nsummary(Inverse exponentials): Logarithms are the inverse of [exponentials](https://arbital.com/p/exponential). That is, for any base $b$ and number $n,$ $\\log_b(b^n) = n$ and $b^{\\log_b(n)} = n.$\n\nsummary(Measure of data): A message that singles out one thing from a set of $n$ carries $\\log(n)$ units of data, where the unit of information depends on the base of the logarithm. For example, a message singling out one thing from 1024 carries about three [decits](https://arbital.com/p/3wn) of data (because $\\log_{10}(1024) \\approx 3$), or exactly ten [bits](https://arbital.com/p/3p0) of data (because $\\log_2(1024)=10$). For details, see [https://arbital.com/p/+3p0](https://arbital.com/p/+3p0).\n\nsummary(Generalized lengths): A quick way of approximating the logarithm base 10 is to look at the length of a number: 103 is a 3-digit number but it's _almost_ a 2-digit number, so its logarithm (base ten) is a little higher than 2 (it's about 2.01). 981 is also a three-digit number, and it's using nearly all three of those digits, so its logarithm (base ten) is just barely lower than 3 (it's about 2.99). In this way, logarithms generalize the notion of \"length,\" and in particular, $\\log_b(n)$ measures the generalized length of the number $n$ when it's written in $b$-ary notation.\n\nThe logarithm base $b$ of a number $n,$ written $\\log_b(n),$ is the answer to the question \"how many times do you have to multiply 1 by $b$ to get $n$?\" For example, $\\log_{10}(100)=2,$ and $\\log_{10}(316) \\approx 2.5,$ because $316 \\approx$ $10 \\cdot 10 \\cdot \\sqrt{10},$ and [multiplying by $\\sqrt{10}$ corresponds to multiplying by 10 \"half a time\"](https://arbital.com/p/).\n\nIn other words, $\\log_b(x)$ counts the number of $b$-factors in $x$. For example, $\\log_2(100)$ counts the number of \"doublings\" in the number 100, and $6 < \\log_2(100) < 7$ because scaling an object up by a factor of 100 requires more than 6 (but less than 7) doublings. For an introduction to logarithms, see the [Arbital logarithm tutorial](https://arbital.com/p/3wj). For an advanced introduction, see the [advanced logarithm tutorial](https://arbital.com/p/advanced_log_tutorial).\n\nFormally, $\\log_b(n)$ is defined to be the number $x$ such that $b^x = n,$ where $b$ and $n$ are numbers. $b$ is called the \"base\" of the logarithm, and has a relationship to the [base of a number system](https://arbital.com/p/number_base). For a discussion of common and useful bases for logarithms, see the page on [logarithm bases](https://arbital.com/p/logarithm_bases). $x$ is unique if by \"number\" we mean [$\\mathbb R$](https://arbital.com/p/4bc), but may not be unique if by \"number\" we mean [$\\mathbb C$](https://arbital.com/p/complex_number). 
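Since $\log_b(n)$ is just the $x$ with $b^x = n,$ the figures quoted above are easy to poke at numerically. The following short Python snippet (ours, not part of the original page) uses the standard-library `math.log(n, b)`:

    import math

    print(math.log(100, 10))        # 2.0
    print(math.log(316, 10))        # ~2.4997: 316 is roughly "two and a half" factors of 10
    print(math.log(100, 2))         # ~6.64: between 6 and 7 doublings
    print(10 ** math.log(316, 10))  # ~316.0: exponentiating undoes the logarithm

    # Counting factors directly: how many whole doublings fit inside 100?
    doublings, x = 0, 1
    while x * 2 <= 100:
        x *= 2
        doublings += 1
    print(doublings)                # 6, so log_2(100) is a little more than 6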
For details, see the page on [complex logarithms](https://arbital.com/p/complex_logarithm).\n\n# Basic properties\n\nLogarithms satisfy a number of desirable properties, including:\n\n- $\\log_b(1) = 0$ for any $b$\n- $\\log_b(b) = 1$ for any $b$\n- $\\log_b(x\\cdot y) = log_b(x) + \\log_b(y)$\n- $\\log_b(x^n) = n\\log_b(x)$\n- $\\log_a(n) = \\frac{\\log_b(n)}{\\log_b(a)}$\n\nFor an expanded list of properties, explanations of what they mean, and the reasons for why they hold, see [https://arbital.com/p/+3wp](https://arbital.com/p/+3wp).\n\n# Interpretations\n\n- Logarithms can be interpreted as a generalization of the notion of the [length of a number](https://arbital.com/p/number_length): 103 and 981 are both three digits long, but, intuitively, 103 is only barely using three digits, whereas 981 is pushing its three digits to the limit. Logarithms quantify this intuition: the [common logarithm](https://arbital.com/p/common_logarithm) of 103 is approximately 2.01, and the common log of 981 is approximately 2.99. Logarithms give rise to a notion of exactly how many digits a number is \"actually\" making use of, and give us a notion of \"fractional digits.\" For more on this interpretation (and why it is 316, not 500, that is two and a half digits long), see [https://arbital.com/p/+416](https://arbital.com/p/+416).\n\n- Logarithms can be interpreted as a measure of how much data it takes to carry a message. Imagine that you and I are both facing a collection of 100 different objects, and I'm thinking of one of them in particular. If I want to tell you which one I'm thinking of, how many digits do I need to transmit to you? The answer is $\\log_{10}(100)=2,$ assuming that by \"digit\" we mean \"some method of encoding one of the symbols 0-9 in a physical medium.\" Measuring data in this way is the cornerstone of [information theory](https://arbital.com/p/3qq).\n\n- Logarithms are the inverse of exponentials. The function $\\log_b(\\cdot)$ inverts the function $b^{\\ \\cdot}.$ In other words, $\\log_b(n) = x$ implies that $b^x = n,$ so $\\log_b(b^x)=x$ and $b^{\\log_b(n)}=n.$ Thus, logarithms give us tools for analyzing anything that grows exponentially. If a population of bacteria doubles each day, then logarithms measure days in terms of bacteria — that is, they can tell you how long it will take for the population to reach a certain size. For more on this idea, see [https://arbital.com/p/+3wr](https://arbital.com/p/+3wr).\n\n# Applications\n\nLogarithms are ubiquitous in many fields, including mathematics, physics, computer science, cognitive science, and artificial intelligence, to name a few. For example:\n\n- In mathematics, the most natural logarithmic base is [$e$](https://arbital.com/p/mathematics_e) ([Why?](https://arbital.com/p/log_e_is_natural)) and the log base $e$ of $x$ is written $\\ln(x)$, pronounced \"[natural log](https://arbital.com/p/natural_logarithm) of x.\" The natural logarithm of a number gives one notion of the \"intrinsic length\" of a number, a concept that proves useful when reasoning about other properties of that number. For example, the quantity of prime numbers smaller than $x$ is approximately $\\frac{x}{\\ln(x)},$ this is the [prime number theorem](https://arbital.com/p/prime_number_theorem).\n\n- Logarithms also give us tools for measuring the runtime (or memory usage) of algorithms. 
When an algorithm uses a [divide and conquer](https://arbital.com/p/divide_and_conquer) approach, the amount of time (or memory) used by the algorithm increases logarithmically as the input size grows linearly. For example, the amount of time that it takes to perform a [binary search](https://arbital.com/p/binary_search) through $n$ possibilities is $\\log_2(n),$ which means that the search takes one unit longer to run every time the set of things to search through doubles in size.\n\n- Logarithms give us tools for studying the tools we use to represent numbers. For example, humans tend to use ten different symbols to represent numbers (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9), while computers tend to use two digits (0 and 1). Are some representations better or worse than others? What are the pros and cons of using more or fewer symbols? For more on these questions, see [Number bases](https://arbital.com/p/number_base).\n\n- The human brain encodes various perceptions logarithmically. For example, the perceived tone of a sound goes up by one octave every time the frequency of air vibrations doubles. Your perception of tone is proportional to the logarithm (base 2) of the frequency at which the air is vibrating. See also [Hick's law](https://arbital.com/p/https://en.wikipedia.org/wiki/Hick%27s_law).", "date_published": "2016-06-20T21:56:48Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": ["The logarithm base $b$ of a number $n,$ written $\\log_b(n),$ is the answer to the question \"how many times do you have to multiply 1 by $b$ to get $n$?\" For example, $\\log_{10}(1000)=3,$ because $10 \\cdot 10 \\cdot 10 = 1000,$ and $\\log_2(16)=4$ because $2 \\cdot 2 \\cdot 2 \\cdot 2 = 16.$"], "tags": [], "alias": "3nd"} {"id": "ec92b11db18d9778bc5ea6f9401c1fa2", "title": "Relation", "url": "https://arbital.com/p/relation_mathematics", "source": "arbital", "source_type": "text", "text": "%%comment:\nI do not want to be shortened. The motivation for this is that I would prefer that someone has the ability to learn everything they need to know about relations just by reading the popup summary.\n%%\n\nsummary: A **relation** is a [set](https://arbital.com/p/3jz) of [tuples](https://arbital.com/p/tuple_mathematics), all of which have the same [arity](https://arbital.com/p/tuple_arity). The inclusion of a tuple in a relation indicates that the components of the tuple are related. A set of $n$-tuples is called an $n$*-ary relation*. Sets of pairs are called binary relations, sets of triples are called ternary relations, etc.\n\nExamples of binary relations include the equality relation on natural numbers $\\{ (0,0), (1,1), (2,2), ... \\}$ and the predecessor relation $\\{ (0,1), (1,2), (2,3), ... \\}$. When a symbol is used to denote a specific binary relation ($R$ is commonly used for this purpose), that symbol can be used with infix notation to denote set membership: $xRy$ means that the pair $(x,y)$ is an element of the set $R$.\n\nA **relation** is a [set](https://arbital.com/p/3jz) of [tuples](https://arbital.com/p/tuple_mathematics), all of which have the same [arity](https://arbital.com/p/tuple_arity). The inclusion of a tuple in a relation indicates that the components of the tuple are related. A set of $n$-tuples is called an $n$*-ary relation*. Sets of pairs are called binary relations, sets of triples are called ternary relations, etc.\n\nExamples of binary relations include the equality relation on natural numbers $\\{ (0,0), (1,1), (2,2), ... 
\\}$ and the predecessor relation $\\{ (0,1), (1,2), (2,3), ... \\}$. When a symbol is used to denote a specific binary relation ($R$ is commonly used for this purpose), that symbol can be used with infix notation to denote set membership: $xRy$ means that the pair $(x,y)$ is an element of the set $R$.", "date_published": "2016-07-07T15:11:14Z", "authors": ["Kevin Clancy", "Alexei Andreev", "Dylan Hendrickson", "Nate Soares", "Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Formal definition"], "alias": "3nt"} {"id": "3a3dd9bdc126d385f21c30ea630f11fc", "title": "Bit (of data)", "url": "https://arbital.com/p/data_bit", "source": "arbital", "source_type": "text", "text": "A bit of data (not to be confused with a [https://arbital.com/p/shannon](https://arbital.com/p/shannon), an [abstract bit](https://arbital.com/p/3y3), or a [bit of evidence](https://arbital.com/p/evidence_bit)) is the amount of data required to single out one message from a set of two. If you want to single out one of the hands of one of your biological grandparents (such as the left hand of your paternal grandmother), you can do that with three bits of data: One to single out \"left hand\" or \"right hand\", one to single out \"maternal\" or \"paternal\", and one to single out \"grandfather\" or \"grandmother\". In order for someone to look at the data and know which hand you singled out, they need to know what method you used to [encode](https://arbital.com/p/encoding) the message into the data.\n\nData can be stored in physical systems which can be put into multiple states: A coin can be used to store a single bit of data (by placing it either heads or tails); a normal six-sided die can be used to store a little over [2.5 bits](https://arbital.com/p/3ty) of data. In general, a message that singles out one thing from a set of $n$ is defined to contain [$\\log_2](https://arbital.com/p/3nd) bits of data.\n\n\"Bit\" is a portmanteau of \"binary digit\": \"binary\" because it is the amount of information required to single out one message from a set of 2. The 2 is arbitrary; analogous units of data exist for every other number. For example, a [https://arbital.com/p/3ww](https://arbital.com/p/3ww) is the amount of data required to single out one message from a set of three, a [https://arbital.com/p/3wn](https://arbital.com/p/3wn) is the amount of data it takes to single out one message from a set of ten, and so on. It is easy to convert between units, for example, a decit is $\\log_2(10) \\approx 3.32$ bits, because it takes a little over three bits to pick one thing out from a set of 10. See also the pages on [converting between units of data](https://arbital.com/p/) and [fractional bits](https://arbital.com/p/3ty).\n\n[Talk about how we want to define data such that two objects hold twice as much.](https://arbital.com/p/fixme:)\n\nThe amount of data you can transmit (or store) grows [exponentially](https://arbital.com/p/exponential) in the number of bits at your disposal. For example, consider a punch card that can hold ten bits of data. 
You can use one punch card to pick out a single thing from a set of $2^{10}=1024.$ From two punch cards you can pick out a single thing from a set of $2^{20}=1048576.$ The number of _things you can distinguish_ using two punch cards is 1024 times larger than the number of things you can distinguish with one punch card, and the amount of data you can encode using two punch cards is precisely twice as much (20 bits) as the amount of information you can encode using one punch card (10 bits). In other words, you can single out one object from a collection of $n$ (or store a number between 1 and $n$) using [$\\log_2](https://arbital.com/p/3nd) bits of data. For more on why the amount of data is logarithmic in the number of possible messages, see [Data is logarithmic](https://arbital.com/p/).\n\n[Add a formalization section.](https://arbital.com/p/fixme:)", "date_published": "2016-06-02T19:07:48Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": ["A bit of data is the amount of data required to single out one message from a set of two. Equivalently, it is the amount of data required to cut the set of possible messages in half. If you want to single out one of the hands of one of your biological grandparents (such as the left hand of your paternal grandmother), you can do that by transmitting three bits of data: One to single out \"left hand\" or \"right hand\", one to single out \"maternal\" or \"paternal\", and one to single out \"grandfather\" or \"grandmother\". We can also speak of storing data in physical systems which can be put into multiple states: A coin can be used to store a single bit of data (by placing it either heads or tails); a normal six-sided die can be used to store a little over [2.5 bits](https://arbital.com/p/3ty) of data. In general, a message that singles out one message from a set of $n$ is defined to contain [$\\log_2](https://arbital.com/p/3nd) bits of data."], "tags": ["Math 1", "Work in progress", "C-Class", "Proposed B-Class"], "alias": "3p0"} {"id": "63e62d239c33b59dabaf45807e73217d", "title": "Information theory", "url": "https://arbital.com/p/information_theory", "source": "arbital", "source_type": "text", "text": "The study (and quantificaiton) of information, and its communication and storage.", "date_published": "2016-05-20T06:35:10Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "jorgejuan lafuente"], "summaries": [], "tags": ["Needs clickbait", "Stub"], "alias": "3qq"} {"id": "5ec6a1158a5f86923dc59d5f501d0e34", "title": "Partially ordered set", "url": "https://arbital.com/p/poset", "source": "arbital", "source_type": "text", "text": "A **partially ordered set** (also called a **poset**) is a pair $\\langle P, \\leq \\rangle$ of a set $P$ and a [binary relation](https://arbital.com/p/3nt) $\\leq$ on $P$ such that for all $p,q,r \\in P$, the following properties are satisfied: \n\n- [Reflexivity](https://arbital.com/p/5dy): $p \\leq p$\n- [Transitivity](https://arbital.com/p/573): $p \\leq q$ and $q \\leq r$ implies $p \\leq r$\n- [Anti-symmetry](https://arbital.com/p/5lt): $p \\leq q$ and $q \\leq p$ implies $p = q$\n\n$P$ is referred to as the poset's underlying set and $\\leq$\n is referred to as its order relation. 
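For a finite set, the three axioms above can be checked directly by treating $\leq$ as a set of ordered pairs. Here is a small Python sketch (ours, not from the original page; `is_poset` and the divisibility example are just illustrations):

    def is_poset(P, leq):
        reflexive = all((p, p) in leq for p in P)
        transitive = all((p, s) in leq
                         for (p, q) in leq for (r, s) in leq if q == r)
        antisymmetric = all(p == q for (p, q) in leq if (q, p) in leq)
        return reflexive and transitive and antisymmetric

    # Divisibility on {1, 2, 3, 4, 6, 12} is a partial order...
    P = {1, 2, 3, 4, 6, 12}
    divides = {(a, b) for a in P for b in P if b % a == 0}
    print(is_poset(P, divides))   # True

    # ...but strict "<" on the same set is not, because it fails reflexivity.
    less = {(a, b) for a in P for b in P if a < b}
    print(is_poset(P, less))      # False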
Posets are the central object of study in [order theory](https://arbital.com/p/3dt).\n\nNotations and fundamentals\n=======================\n\nGreater than\n--------------------\n\nA greater than relation $\\geq$ can be derived as the transpose of a poset's less than relation: $a \\geq b$ means that $b \\leq a$. \n\nIncomparability\n-------------------------\n\nFor poset elements $p$ and $q$, if neither $p \\leq q$ nor $q \\leq p$ then we say that $p$ and $q$ are *incomparable*, and write $p \\parallel q$. \n\nStrict orders\n--------------------\n\nFrom any poset $\\langle P, \\leq \\rangle$, we can derive an underlying strict order $<$. For $p, q \\in P$, $p < q$ means that the following conditions are satisfied:\n\n- 1.) $p \\leq q$\n- 2.) $p \\neq q$\n\n\nThe cover relation\n-----------------------------\n\nFrom any poset $\\langle P, \\leq \\rangle$, we can derive an underlying *cover relation* $\\prec$, defined such that for $p,q \\in P$, $p \\prec q$ whenever the following two conditions are satisfied:\n\n- 1.) $p < q$\n- 2.) For all $r \\in P$, $p \\leq r < q$ implies $p = r$\n\nSimply put, $p \\prec q$ means that $q$ is the smallest element of $P$ which is strictly greater than $p$.\n$p \\prec q$ is pronounced \"$p$ is covered by $q$\", or \"$q$ covers $p$\", and $q$ is said to be a *cover* of $p$. \n\nHasse diagrams\n--------------------------\n\nLet $\\langle P, \\leq \\rangle$ be the poset such that $P = \\{ p, q, r \\}$ and $\\leq = \\{(p,p),(p,q)(p,r),(q,q),(q,r),(r,r) \\}$. The order relation $\\leq$, being binary, can be depicted as a directed graph, though the resulting image leaves something to be desired.\n\n![Naive first attempt at representing P with a directed graph](http://i.imgur.com/fOkTwXW.png)\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n rankdir = BT;\n p -> q;\n q -> r;\n p -> p;\n r -> r;\n q -> q;\n p -> r;\n}\n%%%\n\nIncluding an edge for every pair in $\\leq$ results in a rather noisy looking graph. In particular, as long as we are under assumption that the graph we are viewing depicts a poset, placing a reflexive edge on each node is tedious and redundant. Our first step toward cleaning up this graph is to remove all reflexive edges, instead leaving reflexivity as an implicit assumption. Likewise, we can leave those edges which are implied by transitivity — $(p,r)$ is the only such edge in this poset — implicit. As a final step, we remove the arrow heads from our edges, leaving their directions implied by the y-coordinates of the nodes they connect: an edge between two nodes means that the poset element corresponding to the lower node is less than the poset element corresponding to the higher node. The simplified depiction of $\\langle P, \\leq \\rangle$ then follows.\n\n![Hasse diagram for P](http://i.imgur.com/W8erVrC.png)\n\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n rankdir = BT;\n p -> q;\n q -> r;\n}\n%%%\n\n\nSuch a depiction is called a *Hasse diagram*; drawing a Hasse diagram is the standard method for visually depicting a poset. Now that we understand what Hasse diagrams are, we can provide a concise definition: a Hasse diagram is a graph-based visual depiction of a poset $\\langle P, \\leq \\rangle$, where elements of $P$ are represented as nodes, and for all $(p,q) \\in P^2$ such that $p \\prec q$, an upward edge is drawn from $p$ to $q$. 
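Computing the cover relation, and hence the edge set of a Hasse diagram, is mechanical. The sketch below (ours, in Python, not part of the original page) recovers the two edges of the three-element example above by discarding reflexive pairs and any pair that has another element strictly between its endpoints:

    def covers(P, leq):
        strict = {(p, q) for (p, q) in leq if p != q}
        # keep p < q only when no r lies strictly between p and q
        return {(p, q) for (p, q) in strict
                if not any((p, r) in strict and (r, q) in strict for r in P)}

    P = {'p', 'q', 'r'}
    leq = {('p', 'p'), ('p', 'q'), ('p', 'r'),
           ('q', 'q'), ('q', 'r'), ('r', 'r')}
    print(sorted(covers(P, leq)))   # [('p', 'q'), ('q', 'r')]; the transitive edge ('p', 'r') is dropped

Drawing the diagram is then just a matter of placing each element higher than everything it covers.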
\n\n$p \\leq q$ requires that $p$ must appear lower in a Hasse diagram than $q$. However, the converse is not true. In the following Hasse diagram, $t \\parallel r$ even though $t$ is positioned lower than $r$.\n\n![t is lower than r but incomparable to r](http://i.imgur.com/hmyUZv9.png)\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n rankdir = BT;\n p -> t;\n p -> r;\n t -> q;\n q -> s;\n r -> s;\n}\n%%%\n \n\nDuality\n-----------\n\nEvery poset $\\langle P, \\leq \\rangle$ has a corresponding *dual* poset $\\langle P^{\\partial}, \\geq \\rangle$, where $P^{\\partial}$ (pronounced \"P op\") is a set whose elements are in correspondence with those of $P$, and $\\geq$ is the transpose of $\\leq$ discussed previously. The Hasse diagram of $P^{\\partial}$ is obtained by flipping the Hasse diagram of $P$ upside down. Whenever $\\phi$ is a propsition about posets, we obtain $\\phi$'s *dual proposition* $\\phi^{\\partial}$ by replacing all occurences of $\\leq$ with $\\geq$ and all occurrences of $\\geq$ with $\\leq$. \n\nThe existence of dual posets and dual propositions gives rise to the *duality principle*: if a proposition $\\forall P. \\phi$, quantified over all posets, is true then its dual $\\forall P. \\phi^{\\partial}$ is also true. The duality principle works because instantiating $\\forall P. \\phi$ with a poset $P$ is equivalent to instantiating $\\forall P. \\phi^{\\partial}$ with $P^{\\partial}$. This is due to the fact that $a \\leq b$ in $P$ iff $a \\geq b$ in $P^{\\partial}$. Theorems involving posets often come with a free dual theorem thanks to the duality principle. \n\n%%%knows-requisite([https://arbital.com/p/4c7](https://arbital.com/p/4c7)):\nPoset as category\n==============\n\nAny poset $\\langle P, \\leq \\rangle$ can be viewed as a [category](https://arbital.com/p/category) in which the objects are the elements of $P$, and for all $p,q \\in P$, there is a unique morphism from $p$ to $q$ iff $p \\leq q$. This category has closure under morphism composition due to the transitivity of $\\leq$ and identity morphisms due to the reflexivity of $\\leq$. Associativity of morphism composition stems from the uniqueness of arrows between any pair of objects. 
The functors between poset categories are [monotone maps](https://arbital.com/p/poset_monotone_map).\n%%%\n\nAdditional Material\n----------------------------------\n\nFor some additional examples of posets, see [https://arbital.com/p/43s](https://arbital.com/p/43s).\nTo test your knowledge of posets, try these [exercises](https://arbital.com/p/4l1).\nThe most useful operators on elements of a poset are called the [join and meet](https://arbital.com/p/3rc).\nWe can relate the elements of one poset to another through the use of [monotone functions](https://arbital.com/p/5jg).", "date_published": "2017-01-29T22:34:08Z", "authors": ["Kevin Clancy", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Joe Zeng"], "summaries": ["A partially ordered set (also called a poset) is a pair $\\langle P, \\leq \\rangle$ of a set $P$ and a [binary relation](https://arbital.com/p/3nt) $\\leq$ on $P$ such that for all $p,q,r \\in P$, the following properties are satisfied: \n\n- [Reflexivity](https://arbital.com/p/5dy): $p \\leq p$\n- [Transitivity](https://arbital.com/p/573): $p \\leq q$ and $q \\leq r$ implies $p \\leq r$\n- [Anti-symmetry](https://arbital.com/p/5lt): $p \\leq q$ and $q \\leq p$ implies $p = q$\n\n$P$ is referred to as the poset's underlying set and $\\leq$\n is referred to as its order relation. When the order relation of a poset $\\langle P, \\leq \\rangle$ is clear from context, the poset can be referred to merely as $P$. Posets are the central object of study in [order theory](https://arbital.com/p/3dt)."], "tags": ["Math 2", "B-Class"], "alias": "3rb"} {"id": "2ecdf4d31e3c17adac58466ffb6c515c", "title": "Join and meet", "url": "https://arbital.com/p/math_join", "source": "arbital", "source_type": "text", "text": "Let $\\langle P, \\leq \\rangle$ be a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb), and let $S \\subseteq P$. The **join** of $S$ in $P$, denoted by $\\bigvee_P S$, is an element $p \\in P$ satisfying the following two properties:\n\n* p is an *upper bound* of $S$; that is, for all $s \\in S$, $s \\leq p$.\n* For all upper bounds $q$ of $S$ in $P$, $p \\leq q$.\n\n$\\bigvee_P S$ does not necessarily exist, but if it does then it is unique. The notation $\\bigvee S$ is typically used instead of $\\bigvee_P S$ when $P$ is clear from context. Joins are often called *least upper bounds* or *supremums*. For $a, b$ in $P$, the join of $\\{a,b\\}$ in $P$ is denoted by $a \\vee_P b$, or $a \\vee b$ when $P$ is clear from context.\n\nThe dual concept of the join is that of the meet. The **meet** of $S$ in $P$, denoted by $\\bigwedge_P S$, is defined an element $p \\in P$ satisfying.\n\n* p is a *lower bound* of $S$; that is, for all $s$ in $S$, $p \\leq s$.\n* For all lower bounds $q$ of $S$ in $P$, $q \\leq p$.\n\nMeets are also called *infimums*, or *greatest lower bounds*. The notations $\\bigwedge S$, $p \\wedge_P q$, and $p \\wedge q$ are all have meanings that are completely analogous to the aforementioned notations for joins. \n\nBasic example\n--------------------------\n\n![Joins Failing to exist in a finite lattice](http://i.imgur.com/sx1Ss9w.png)\n\nThe above Hasse diagram represents a poset with elements $a$, $b$, $c$, and $d$. $\\bigvee \\{a,b\\}$ does not exist because the set $\\{a,b\\}$ has no upper bounds. $\\bigvee \\{c,d\\}$ does not exist for a different reason: although $\\{c, d\\}$ has upper bounds $a$ and $b$, these upper bounds are incomparable, and so $\\{c, d\\}$ has no *least* upper bound. 
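Reading the order relation off the diagram (so that $c$ and $d$ each sit below both $a$ and $b$, with no other comparabilities between distinct elements), the existence or non-existence of these joins can be checked mechanically. A minimal Python sketch follows (ours, not part of the original page); `join` returns `None` when no least upper bound exists:

    def join(S, P, leq):
        ubs = [p for p in P if all((s, p) in leq for s in S)]
        least = [p for p in ubs if all((p, q) in leq for q in ubs)]
        return least[0] if least else None

    P = {'a', 'b', 'c', 'd'}
    leq = {(x, x) for x in P} | {('c', 'a'), ('c', 'b'), ('d', 'a'), ('d', 'b')}

    print(join({'a', 'b'}, P, leq))   # None: {a, b} has no upper bound at all
    print(join({'c', 'd'}, P, leq))   # None: the upper bounds a and b are incomparable
    print(join({'a', 'c'}, P, leq))   # a

The meet is computed dually, by flipping the direction of every comparison.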
There do exist subsets of this poset which possess joins; for example, $a \\vee c = a$, $\\bigvee \\{b,c,d\\} = b$, and $\\bigvee \\{c\\} = c$.\n\nNow for some examples of meets. $\\bigwedge \\{a, b, c, d\\}$ does not exist because $c$ and $d$ have no common lower bounds. However, $\\bigwedge \\{a,b,d\\} = d$ and $a \\wedge c = c$.\n\nAdditional Material\n---------------------------------\n\n* [Examples](https://arbital.com/p/3v4)\n* [Exercises](https://arbital.com/p/4ll)\n\nFurther reading\n---------------\n* [Lattices](https://arbital.com/p/46c)", "date_published": "2016-12-21T04:42:35Z", "authors": ["Eric Bruylant", "Kevin Clancy"], "summaries": ["Let $\\langle P, \\leq \\rangle$ be a [poset](https://arbital.com/p/3rb), and let $S \\subseteq P$. The **join** of $S$ in $P$, denoted by $\\bigvee_P S$, is an element $p \\in P$ satisfying the following two properties:\n\n* p is an *upper bound* of $S$; that is, for all $s \\in S$, $s \\leq p$.\n* For all upper bounds $q$ of $S$ in $P$, $p \\leq q$.\n\n$\\bigvee_P S$ does not necessarily exist, but if it does then it is unique. The notation $\\bigvee S$ is typically used instead of $\\bigvee_P S$ when $P$ is clear from context. Joins are often called *least upper bounds* or *supremums*. For $a, b$ in $P$, the join of $\\{a,b\\}$ in $P$ is denoted by $a \\vee_P b$, or $a \\vee b$ when $P$ is clear from context. **Meets** are greatest lower bounds, and are related to joins by duality."], "tags": ["Math 2", "B-Class"], "alias": "3rc"} {"id": "6308f6f859cc6b76ea490274e54f936f", "title": "Start", "url": "https://arbital.com/p/start_meta_tag", "source": "arbital", "source_type": "text", "text": "Start-Class pages are visibly incomplete, or need significant work to be useful. This covers pages missing essential information, pages without coherent structure, pages lacking [summaries](https://arbital.com/p/1kl) (or first paragraphs that serve as a summary), pages with sufficiently severe prose or stylistic issues, and any other problem that prevents pages from being useful to readers or fitting into their domains. 
Start-Class pages have significant content, but are still in the category \"should be seen by editors\" rather than \"should be seen by readers.\"\n\nContrast:\n\n- [https://arbital.com/p/72](https://arbital.com/p/72), for tiny pages containing only a paragraph or a couple of sentences of text.\n- [https://arbital.com/p/4gs](https://arbital.com/p/4gs), for pages which only contain a formal definition and don't try to explain the topic.\n\n**Also** use the tag [https://arbital.com/p/4v](https://arbital.com/p/4v) if the page is being actively edited.\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-08-03T20:19:29Z", "authors": ["Kevin Clancy", "Joe Zeng", "Alexei Andreev", "Eric Rogstad", "Stephanie Zolayvar", "Nate Soares", "Eric Bruylant", "Jaime Sevilla Molina", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "3rk"} {"id": "d6806f8cbbbd75dc9807f82a8c52605e", "title": "Type theory", "url": "https://arbital.com/p/type_theory", "source": "arbital", "source_type": "text", "text": "Hub page for type theory explanations", "date_published": "2016-05-25T18:53:54Z", "authors": ["Alexei Andreev", "Jack Gallagher"], "summaries": [], "tags": ["Work in progress", "Stub"], "alias": "3sz"} {"id": "87091663f1dad0c1ed884be852450c91", "title": "Group: Examples", "url": "https://arbital.com/p/group_examples", "source": "arbital", "source_type": "text", "text": "# The symmetric groups\n\nFor every positive integer $n$ there is a group $S_n$, the [symmetric group](https://arbital.com/p/497) of order $n$, defined as the group of all permutations (bijections) $\\{ 1, 2, \\dots n \\} \\to \\{ 1, 2, \\dots n \\}$ (or any other [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) with $n$ elements). The symmetric groups play a central role in group theory: for example, a [group action](https://arbital.com/p/3t9) of a group $G$ on a set $X$ with $n$ elements is the same as a [homomorphism](https://arbital.com/p/47t) $G \\to S_n$. \n\nUp to [conjugacy](https://arbital.com/p/4bj), a permutation is determined by its [cycle type](https://arbital.com/p/4cg). \n\n# The dihedral groups\n\nThe [dihedral groups](https://arbital.com/p/4cy) $D_{2n}$ are the collections of symmetries of an $n$-sided regular polygon. It has a [presentation](https://arbital.com/p/5j9) $\\langle r, f \\mid r^n, f^2, (rf)^2 \\rangle$, where $r$ represents rotation by $\\tau/n$ degrees, and $f$ represents reflection. \n\nFor $n > 2$, the dihedral groups are non-commutative.\n\n# The general linear groups\n\nFor every [field](https://arbital.com/p/481) $K$ and positive integer $n$ there is a group $GL_n(K)$, the [general linear group](https://arbital.com/p/general_linear_group) of order $n$ over $K$. Concretely, this is the group of all invertible $n \\times n$ [matrices](https://arbital.com/p/matrix) with entries in $K$; more abstractly, this is the [automorphism group](https://arbital.com/p/automorphism) of a [vector space](https://arbital.com/p/3w0) of [dimension](https://arbital.com/p/vector_space_dimension) $n$ over $K$. 
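Concretely, small symmetric groups are easy to play with on a computer. The Python sketch below (ours, not from the original page) stores a permutation of $\{1, 2, 3\}$ as the tuple of its values and composes permutations like functions, which also shows that $S_3$ is non-commutative:

    from itertools import permutations

    def compose(p, q):
        """(p o q)(i) = p(q(i)); a permutation p is stored as the tuple (p(1), ..., p(n))."""
        return tuple(p[q[i - 1] - 1] for i in range(1, len(p) + 1))

    S3 = list(permutations((1, 2, 3)))
    print(len(S3))                              # 6 elements, as expected for S_3

    identity = (1, 2, 3)
    swap12 = (2, 1, 3)                          # the transposition exchanging 1 and 2
    cycle = (2, 3, 1)                           # the 3-cycle 1 -> 2 -> 3 -> 1

    print(compose(swap12, cycle))               # (1, 3, 2)
    print(compose(cycle, swap12))               # (3, 2, 1): composition order matters
    print(compose(swap12, swap12) == identity)  # True: a transposition is its own inverse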
\n\nIf $K$ is [algebraically closed](https://arbital.com/p/algebraically_closed_field), then up to conjugacy, a matrix is determined by its [Jordan normal form](https://arbital.com/p/Jordan_normal_form).", "date_published": "2016-10-21T15:25:45Z", "authors": ["Daniel Satanove", "Patrick Stevens", "Qiaochu Yuan", "Eric Bruylant", "Mark Chimes"], "summaries": ["Examples of [groups](https://arbital.com/p/-3gd), including the [symmetric groups](https://arbital.com/p/-497) and [general linear groups](https://arbital.com/p/-general_linear_group)."], "tags": [], "alias": "3t1"} {"id": "814b38cb532f4606381f6a171e916e69", "title": "Sample space", "url": "https://arbital.com/p/sample_space_probability", "source": "arbital", "source_type": "text", "text": "Motivation\n===\n\nIf we are uncertain about some part of the world, then there are multiple different things that might happen there, as far as we know. \nThe sample space for a part of the world is the set of things that could possibly happen there. We can use a sample space to reason using [probability theory](https://arbital.com/p/1bv), so that we assign probabilities to [events](https://arbital.com/p/event_probability) in a consistent way. \n\n\nDefinition\n===\n\nA sample space is a [set](https://arbital.com/p/3jz), usually denoted $\\Omega$.", "date_published": "2016-05-25T20:31:39Z", "authors": ["Tsvi BT", "Stephanie Zolayvar"], "summaries": ["A sample space is the [set](https://arbital.com/p/3jz) $\\Omega$ of possible ways that a part of the world might be, as far as you know."], "tags": ["Stub"], "alias": "3t4"} {"id": "852d395a22e630d70b58cf7733fe9ba8", "title": "Group theory: Examples", "url": "https://arbital.com/p/group_theory_examples", "source": "arbital", "source_type": "text", "text": "# Even and odd functions\n\nRecall that a function $f : \\mathbb{R} \\to \\mathbb{R}$ is [even](https://arbital.com/p/even_function) if $f(-x) = f(x)$, and [odd](https://arbital.com/p/odd_function) if $f(-x) = - f(x)$. A typical example of an even function is $f(x) = x^2$ or $f(x) = \\cos x$, while a typical example of an odd function is $f(x) = x^3$ or $f(x) = \\sin x$. \n\nWe can think about evenness and oddness in terms of [group theory](https://arbital.com/p/3g8) as follows. There is a group called the [cyclic group](https://arbital.com/p/cyclic_group) $C_2$ of [order](https://arbital.com/p/3gg) $2$ acting on the set of functions $\\mathbb{R} \\to \\mathbb{R}$: in other words, each element of the group describes a function of [type](https://arbital.com/p/3sz)\n\n$$ (\\mathbb{R} \\to \\mathbb{R}) \\to (\\mathbb{R} \\to \\mathbb{R}) $$\n\nmeaning that it takes as input a function $\\mathbb{R} \\to \\mathbb{R}$ and returns as output another function $\\mathbb{R} \\to \\mathbb{R}$.\n\n$C_2$ has two elements which we'll call $1$ and $-1$. $1$ is the identity element: it acts on functions by sending a function $f(x)$ to the same function $f(x)$ again. $-1$ sends a function $f(x)$ to the function $f(-x)$, which visually corresponds to reflecting the graph of $f(x)$ through the y-axis. The group multiplication is what the names of the group elements suggests, and in particular $(-1) \\times (-1) = 1$, which corresponds to the fact that $f(-(-x)) = f(x)$. \n\nAny time a group $G$ [acts](https://arbital.com/p/3t9) on a set $X$, it's interesting to ask what elements are [invariant](https://arbital.com/p/invariant_under_a_group_action) under that group action. 
Here the invariants of functions under the action of $C_2$ above are the even functions, and they form a [https://arbital.com/p/subspace](https://arbital.com/p/subspace) of the [vector space](https://arbital.com/p/vector_space) of all functions. \n\nIt turns out that every function is uniquely the sum of an even and an odd function, as follows:\n\n$$f(x) = \\underbrace{\\frac{f(x) + f(-x)}{2}}_{\\text{even}} + \\underbrace{\\frac{f(x) - f(-x)}{2}}_{\\text{odd}}.$$\n\nThis is a special case of various more general facts in [representation theory](https://arbital.com/p/3tn), and in particular can be thought of as the simplest case of the [discrete Fourier transform](https://arbital.com/p/discrete_Fourier_transform), which in turn is a [toy model](https://arbital.com/p/mathematical_toy_model) of the theory of [Fourier series](https://arbital.com/p/Fourier_series) and the [Fourier transform](https://arbital.com/p/Fourier_transform). \n\nIt's also interesting to observe that the cyclic group $C_2$ shows up in lots of other places in mathematics as well. For example, it is also the group describing how even and odd numbers add1 (where even corresponds to $1$ and odd corresponds to $-1$); this is the simplest case of [modular arithmetic](https://arbital.com/p/modular_arithmetic). \n\n1That is: an even plus an even make an even, an odd plus an odd make an even, and an even plus an odd make an odd.", "date_published": "2016-05-25T20:34:31Z", "authors": ["Eric Rogstad", "Qiaochu Yuan"], "summaries": [], "tags": [], "alias": "3t6"} {"id": "0f23178ffd1e43e2424a825044b0bb4e", "title": "Programming in Dependent Type Theory", "url": "https://arbital.com/p/prog_dep_typ", "source": "arbital", "source_type": "text", "text": "All code in this article was written in the Lean theorem prover, which means you can copy any of it and paste it [here](https://leanprover.github.io/tutorial/?live) to try it out.\n\n## Arithmetic and Interaction\nWhile Lean is nominally an interactive theorem prover, much of the power of type theory comes from the fact that you can treat it like a programming language.\n\nThere are a few different commands for interacting with Lean.\nThe one I make the most use of is the `check` command, which prints out the type of the expression following it.\nSo, for example, `check 3` outputs `num`, and `check (3 : nat)` outputs `nat`.\n\nWe can also make definitions\n\n definition five : num := 3 + 2\n\nThis declares a new constant `five` of type `num` to which we give the value `3 + 2`.\nWe can define functions with similar syntax\n\n definition num.const₀ : num → num → num := λ x y, x -- this is a comment\n -- Lean can infer types whenever they're unique\n definition num.const₁ := λ (x y : num), x\n -- we can also name the arguments in the definition rather than the function body\n definition num.const₂ (x y : num) := x\n\nThe definition of polymorphic functions becomes the first point where we get a hint about what makes programming in dependent type theory different from, say, Haskell.\nIn dependent type theory, the term and type languages are unified, so in order to write a polymorphic function we must take the type as an argument.\n\n definition poly_id (A : Type) := λ (a : A), a\n -- or, equivalently\n definition poly_id₁ := λ A (a : A), a\n -- applied to arguments\n check poly_id num 1 -- num\n check poly_id (num → num → num) num.const -- num → num → num\n\nExercise: write a polymorphic version of `num.const₀`.\n\n%%hidden(Show solution):\n definition poly_const (A B : Type) (a : A) (b : 
B) := a\n%%\n\nHaving to explicitly indicate types everywhere is a pain.\nIn order to get around that, most proof assistants provide support for implicit arguments, which let you leave out arguments that only have one valid value.\nIn Lean, the syntax for implicits looks like this:\n\n definition id {A : Type} := λ (a : A), a\n\n## Inductive types ##\nOf course, none of this would be that useful if we couldn't define *new* types.\nThere are lots of ways to craft new types in dependent type theory, but among the most fundamental is the creation of inductive types.\n\nTo define a new inductive type, you give a list of constructor tags, each associated with a type representing the arguments it takes.\nThe simplest ones are just enumerations. For example, the days of the week:\n\n inductive weekday : Type :=\n | mon : weekday\n | tue : weekday\n | wed : weekday\n | thu : weekday\n | fri : weekday\n | sat : weekday\n | sun : weekday\n\nThis creates a new type `weekday`, and seven new constants (`weekday.mon`, `weekday.tue`, `weekday.wed`...) of type `weekday`.\nIf you're familiar with Haskell, you'll correctly notice that this looks an awful lot like GADT declarations.\n\nJust like in Haskell, we can parametrize our types over other types, making new types like `either`:\n\n inductive either (A B : Type) : Type :=\n | inl {} : A → either A B``\n | inr {} : B → either A B\n\nFrom this declaration, we get a new constant `either : Type → Type → Type`.\nThis represents the union of the types `A` and `B`, the type of values that belong either to `A` or `B`.\n\nWe can also define recursive types, such as natural numbers\n\n inductive nat : Type :=\n | zero : nat\n | succ : nat → nat\n\nThe easiest way to define functions over `nat`s is recursively.\nFor example, we can define addition as\n\n definition add (n : nat) : nat -> nat\n | nat.zero := n\n | (nat.succ m) := nat.succ (add m) -- n is constant at every recursive call\n\nBringing both of these together we can define the type of linked lists\n\n inductive list (A : Type) : Type := \n | nil {} : list A\n | cons : A → list A → list A\n\nWe can also define functions over lists by pattern matching\n\n definition map {A B : Type} (f : A → B) : list A → list B\n | list.nil := list.nil\n | (list.cons x xs) := list.cons (f x) (map xs) -- f is constant at every recursive call\n\nExercise: write `foldr` and `foldl` by pattern matching\n\n%%hidden(Show solution):\n definition foldr {A B : Type} (r : A → B → B) (vnil : B) : list A → B\n | list.nil := vnil\n | (list.cons x xs) := r x (foldr xs)\n\n definition foldl {A B : Type} (r : B → A → B) : B → list A → B :=\n | b list.nil := b\n | b (list.cons x xs) := foldl (r b x) xs\n%%", "date_published": "2016-05-26T03:21:54Z", "authors": ["Jack Gallagher"], "summaries": [], "tags": ["Type theory", "Work in progress"], "alias": "3t7"} {"id": "96c88468f19898ec4208a4aae73d25bb", "title": "Group action", "url": "https://arbital.com/p/group_action", "source": "arbital", "source_type": "text", "text": "An action of a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$ on a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) $X$ is a function $\\alpha : G \\times X \\to X$ ([colon-to notation](https://arbital.com/p/3vl)), which is often written $(g, x) \\mapsto gx$ ([mapsto notation](https://arbital.com/p/3vm)), with $\\alpha$ omitted from the notation, such that\n\n1. $ex = x$ for all $x \\in X$, where $e$ is the identity, and\n2. 
$g(hx) = (gh)x$ for all $g, h \\in G, x \\in X$, where $gh$ implicitly refers to the group operation in $G$ (also omitted from the notation).\n\nEquivalently, via [https://arbital.com/p/currying](https://arbital.com/p/currying), an action of $G$ on $X$ is a [group homomorphism](https://arbital.com/p/47t) $G \\to \\text{Aut}(X)$, where $\\text{Aut}(X)$ is the [automorphism group](https://arbital.com/p/automorphism_group) of $X$ (so for sets, the group of all bijections $X \\to X$, but phrasing the definition this way makes it natural to generalize to other [categories](https://arbital.com/p/category_theory)). It's a good exercise to verify this; Arbital [has a proof](https://arbital.com/p/49c).\n\nGroup actions are used to make precise the notion of \"symmetry\" in mathematics. \n\n# Examples\n\nLet $X = \\mathbb{R}^2$ be the [Euclidean plane](https://arbital.com/p/Euclidean_geometry). There's a group acting on $\\mathbb{R}^2$ called the [Euclidean group](https://arbital.com/p/Euclidean_group) $ISO(2)$ which consists of all functions $f : \\mathbb{R}^2 \\to \\mathbb{R}^2$ preserving distances between two points (or equivalently all [isometries](https://arbital.com/p/isometry)). Its elements include translations, rotations about various points, and reflections about various lines.", "date_published": "2016-06-14T15:04:49Z", "authors": ["Eric Rogstad", "Patrick Stevens", "Qiaochu Yuan"], "summaries": [], "tags": [], "alias": "3t9"} {"id": "d16c1c456b8c539aba63e1eb530e0652", "title": "Probability distribution (countable sample space)", "url": "https://arbital.com/p/probability_distribution_countable", "source": "arbital", "source_type": "text", "text": "Definition\n===\n\nA probability distribution on a countable [https://arbital.com/p/3t4](https://arbital.com/p/3t4) $\\Omega$ is a [https://arbital.com/p/3jy](https://arbital.com/p/3jy) $\\mathbb{P}: \\Omega \\to [https://arbital.com/p/0,1](https://arbital.com/p/0,1)$ such that $\\sum_{\\omega \\in \\Omega} \\mathbb{P}(\\omega) = 1$. \n\nIntuition\n=====\n\nWe express a belief that \"$x\\in \\Omega$ happens with probability $r$\" by setting $\\mathbb{P}(x) = r$. So a probability distribution divides up our anticipation of what will happen, out of the set $\\Omega$ of things that might possibly happen.", "date_published": "2016-06-11T04:40:48Z", "authors": ["Tsvi BT"], "summaries": ["A probability distribution on a countable [https://arbital.com/p/3t4](https://arbital.com/p/3t4) $\\Omega$ is a [https://arbital.com/p/3jy](https://arbital.com/p/3jy) $\\mathbb{P}: \\Omega \\to [https://arbital.com/p/0,1](https://arbital.com/p/0,1)$ such that $\\sum_{\\omega \\in \\Omega} \\mathbb{P}(\\omega) = 1$."], "tags": ["Stub"], "alias": "3tb"} {"id": "2adb6eb2f1c3a456a8cee272c693fb28", "title": "Sample spaces are too large", "url": "https://arbital.com/p/sample_spaces_are_too_large", "source": "arbital", "source_type": "text", "text": "When we are reasoning about an uncertain world using [probability theory](https://arbital.com/p/1bv), we have a [sample space](https://arbital.com/p/3t4) of possible ways the world could be, and we assign\nprobabilities to outcomes in the sample space with a [probability distribution](https://arbital.com/p/3tb). Unfortunately, often the sample space (which is supposed to contain every possible way the world might turn out) is very large and possibly infinite.\n\nFor example, if we are thinking about a box with some particles in it, our sample space is the enormous set of all possible arrangements of the particles. 
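As a toy illustration (ours, not drawn from the articles above), here is a probability distribution on a four-element sample space, together with a hint of how quickly a raw "table over every arrangement" blows up; the two-coin space and the particles-in-cells model are just stand-ins:

    from fractions import Fraction

    omega = ['HH', 'HT', 'TH', 'TT']            # sample space for two coin flips
    P = {w: Fraction(1, 4) for w in omega}      # the uniform distribution on omega
    print(sum(P.values()) == 1)                 # True: the probabilities sum to 1

    # A "box" of n particles, each in one of k cells, already needs k**n table entries.
    n_particles, n_cells = 50, 10
    print(n_cells ** n_particles)               # 10**50 entries, far too many to write down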
\nTo reason usefully about the box, we'd have to do computations using a gigantic table specifying a separate number for every single distinct arrangement of particles. \nTwo arrangements that are pretty much the same, as far as we care, take up just as much of our computational resources as two importantly different arrangements.\nSo, even though we could in principle reason consistently under uncertainty just using a probability distribution over a sample space, we cannot do so easily,\nbecause we are trying to keep track of too much.", "date_published": "2016-05-25T20:08:44Z", "authors": ["Tsvi BT"], "summaries": ["Sample spaces enumerate everything that could possibly happen, so that we can reason consistently under uncertainty. But this enumeration usually enumerates too many things, making it difficult to do computations using the raw sample space."], "tags": [], "alias": "3tg"} {"id": "1046ac367095b447ab989920bf55ca9c", "title": "Group: Exercises", "url": "https://arbital.com/p/group_exercises", "source": "arbital", "source_type": "text", "text": "# Preliminaries\n\n1. Show that the identity element in a group is unique. That is, if $G$ is a group and two elements $e_1, e_2 \\in G$ both satisfy the axioms describing the identity element, then $e_1 = e_2$. \n\n%%hidden(Show solution):\nBy definition, an identity element $e$ satisfies $eg = ge = g$ for all $g \\in G$. Hence if $e_1$ is an identity, then $e_1 e_2 = e_2 e_1 = e_1$. And if $e_2$ is an identity, then $e_2 e_1 = e_1 e_2 = e_2$. Hence $e_1 = e_2$. Note that this argument makes no use of inverses, and so is valid for [monoids](https://arbital.com/p/3h3). \n%%\n\n2. Show that inverses are also unique. That is, if $g \\in G$ is an element of a group and $h_1, h_2 \\in G$ both satisfy the axioms describing the inverse of $g$, then $h_1 = h_2$. \n\n%%hidden(Show solution):\nBy definition, an inverse $h$ of $g$ satisfies $hg = gh = e$. So $h_1 g = g h_1 = e$ and $h_2 g = g h_2 = e$. Hence, on the one hand,\n\n$$h_1 g h_2 = (h_1 g) h_2 = (e) h_2 = h_2$$\n\nand, on the other hand,\n\n$$h_1 g h_2 = h_1 (g h_2) = h_1 (e) = h_1.$$\n\nHence $h_1 = h_2$.\n%%\n\n# Examples involving numbers\n\nDetermine whether the following sets equipped with the specified binary operations are groups. If so, describe their identity elements (which by the previous exercise must be unique) and how to take inverses. \n\n1. The real numbers $\\mathbb{R}$ together with the addition operation $(x, y) \\mapsto x + y$. \n\n%%hidden(Show answer):\nYes, this is a group. The identity element is $0$, and inverse is given by $x \\mapsto -x$. \n%%\n\n2. The real numbers $\\mathbb{R}$ together with the multiplication operation $(x, y) \\mapsto xy$. \n\n%%hidden(Show answer): \nNo, this is not a group. $0 \\in \\mathbb{R}$ has the property that $0 \\times x = 0$ for all real numbers $x$, so it can't be invertible no matter what the identity is. \n%%\n\n3. The positive real numbers $\\mathbb{R}_{>0}$ together with the multiplication operation $(x, y) \\mapsto xy$. \n\n%%hidden(Show answer): \nYes, this is a group. The identity is $1$, and inverse is given by $x \\mapsto \\frac{1}{x}$. In fact this group is [isomorphic](https://arbital.com/p/49x) to $(\\mathbb{R}, +)$; can you name the isomorphism? \n%%\n\n4. The real numbers $\\mathbb{R}$ together with the operation $(x, y) \\mapsto x + y - 1$.\n\n%%hidden(Show answer):\nYes, this is a group (in fact [isomorphic](https://arbital.com/p/49x) to $(\\mathbb{R}, +)$; can you name the isomorphism?). 
The identity element is $1$, and inverse is given by $x \\mapsto 2 - x$ (can you explain why, conceptually?). \n%%\n\n5. The real numbers $\\mathbb{R}$ together with the operation $(x, y) \\mapsto \\frac{x + y}{1 + xy}$. \n\n%%hidden(Show answer):\nNo, this is not a group. It's easy to be tricked into thinking it is, because if you just work through the algebra, it seems that all of the group axioms hold. However, this \"operation\" is not actually a binary operation on $\\mathbb{R}$! It's not defined if the denominator is $0$, because then we'd be [dividing by zero](https://arbital.com/p/division_by_zero). \n\nThis operation is interesting and useful, though, when it is defined. It shows up in [special relativity](https://arbital.com/p/special_relativity), where it describes how velocities add relativistically (in units where the speed of light is $1$). \n%%", "date_published": "2016-07-01T01:15:34Z", "authors": ["Patrick Stevens", "Qiaochu Yuan", "Eric Bruylant", "Mark Chimes", "Colin Benner"], "summaries": ["Test your understanding of the definition of a [group](https://arbital.com/p/-3gd) with these exercises."], "tags": ["Exercise "], "alias": "3tj"} {"id": "5706f72c696399cb75feb704025e4806", "title": "Uncountable sample spaces are way too large", "url": "https://arbital.com/p/uncountable_sample_spaces_are_too_large", "source": "arbital", "source_type": "text", "text": "If the sample space $\\Omega$ is [uncountable](https://arbital.com/p/2w0), then in general we can't even define a probability distribution over $\\Omega$ in the same way we defined\n[probability distributions](https://arbital.com/p/3tb) over countable sample spaces, i.e. by just assigning numbers to each point in the sample space. Any function $f: \\Omega \\to [0,1]$ with $\\sum_{\\omega \\in \\Omega} f(\\omega) = 1$ can only assign positive values to at most countably many elements of $\\Omega$. But this means we can't, for example, talk about a\n[uniform distribution](https://arbital.com/p/uniform_distribution) over the interval $[0,2]$, which intuitively should assign equal probability to everywhere in the interval.", "date_published": "2016-06-16T07:05:14Z", "authors": ["Tsvi BT"], "summaries": ["If the sample space $\\Omega$ is [uncountable](https://arbital.com/p/2w0), then in general we can't even define a probability distribution over $\\Omega$ in the same way we defined\n[probability distributions](https://arbital.com/p/3tb) over countable sample spaces, i.e. by just assigning numbers to each point in the sample space. 
Any function $f: \\Omega \\to [0,1]$ with $\\sum_{\\omega \\in \\Omega} f(\\omega) = 1$ can only assign positive values to at most countably many elements of $\\Omega$."], "tags": [], "alias": "3tl"} {"id": "b6d7fbf1c75952197de81c3707f8cace", "title": "Linear algebra", "url": "https://arbital.com/p/linear_algebra", "source": "arbital", "source_type": "text", "text": "The study of [linear transformations](https://arbital.com/p/linear_transformation) and [vector spaces](https://arbital.com/p/vector_space).", "date_published": "2016-05-25T22:40:09Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "3tr"} {"id": "c82bf333d2372a85a6527146fd2fae61", "title": "Bit (of data): Examples", "url": "https://arbital.com/p/bit_examples", "source": "arbital", "source_type": "text", "text": "In the game \"20 questions\", one player (the \"leader\") thinks of a concept, and the other players ask 20 yes-or-no questions in attempts to guess that concept. In this game, the players extract 20 bits of data from the leader, which (if the questions are selected wisely) is enough data to single out one concept from a set of roughly one million concepts (1048576, to be precise).\n\nThree bits of data are required to specify one of the hands of one of your biological grandparents: One bit for \"maternal\" or \"paternal\", one bit for \"male\" or \"female,\" one bit for \"left\" or \"right.\"\n\nThe president of the United States of America is selected from a set of about 320,000,000 people. This selection encodes roughly 28 bits of data. Where do those bits come from? Between 1/4 and 1/2 of the population is eligible for the presidency, so 1-2 bits come from constraints on who is allowed to be president. Another ~1 bit comes from the voters in the presidential election (when one candidate is selected from a set of two finalists). 2-4 more bits come from the primary elections (when the two finalists are selected from a set of ~4-32). However, most of the bits that go into choosing the president — 21 or more of them — come from self-selection and private politicking.", "date_published": "2016-05-31T04:19:18Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Work in progress", "Stub"], "alias": "3tx"} {"id": "ed593cb97c58a0414d9f6093be16e8ab", "title": "Fractional bits", "url": "https://arbital.com/p/fractional_bit", "source": "arbital", "source_type": "text", "text": "It takes $\\log_2(8) = 3$ [bits](https://arbital.com/p/3p0) of data to [carry](https://arbital.com/p/3y1) one message from a set of 8 possible messages. Similarly, it takes $\\log_2(1024) = 10$ bits to carry one message from a set of 1024 possibilities. How many bits does it take to carry one message from a set of 3 possibilities? By the definition of \"bit,\" the answer is $\\log_2(3) \\approx 1.58.$ What does this mean? That it takes about one and a half yes-or-no questions to single out one thing from a set of three? What is \"half of a yes-or-no question\"?\n\nFractional bits can be interpreted as telling us about the [expected](https://arbital.com/p/4b5) cost of transmitting information: If you want to single out one thing from a set of 3 using bits (in, e.g., the [https://arbital.com/p/+3tz](https://arbital.com/p/+3tz) thought experiment) then you'll have to purchase two bits, but sometimes you'll be able to sell one of them back, which means you can push your expected cost lower than 2. How low can you push it? 
The lower bound is $\\log_2(3),$ which is the minimum expected cost of adding a [3-message](https://arbital.com/p/3v9) to a long message when encoding your message as cleverly as possible. For more on this interpretation, see [https://arbital.com/p/+3v2](https://arbital.com/p/+3v2).\n\nFractional bits can also be interpreted as conversion rates between types of information: \"A 3-message carries about 1.58 bits\" can be interpreted as \"one [https://arbital.com/p/3ww](https://arbital.com/p/3ww) is worth about 1.58 bits.\" To understand this, see [https://arbital.com/p/427](https://arbital.com/p/427), or [How many bits is a trit?](https://arbital.com/p/)\n\nFractional units of data can also be interpreted as a measure of how much we're using the digits allocated to encoding a number. For example, working with fractional [decits](https://arbital.com/p/3wn) instead of fractional bits, it only takes about 2.7 decits to carry a [500-message](https://arbital.com/p/3v9), despite the fact that the number 500 clearly requires 3 decimal digits to write down. What's going on here? Well, we could declare that all numbers that start with a digit $n \\ge 5$ will be interpreted as if they start with the digit $n - 5.$ Then we have two ways of representing each number (for example, 132 can be represented as both 132 and 632). Thus, if we have 3 decits but we only need to encode a 500-message, we have one bit to spare: We can encode one extra bit in our message according to whether we use the low representation or the high representation of the intended number. Thus, the amount of data it takes to communicate a 500-message is one bit lower than the amount of data it takes to encode a 1000-message — for a total cost of 3 decits minus one bit, which comes out to about 2.70 decits (or just short of 9 bits). For more on this interpretation, see [https://arbital.com/p/+44l](https://arbital.com/p/+44l).", "date_published": "2016-06-24T02:18:43Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": [], "tags": ["Needs summary"], "alias": "3ty"} {"id": "aa27c0ecd03537f1a16af7ac849a358e", "title": "GalCom", "url": "https://arbital.com/p/galcom", "source": "arbital", "source_type": "text", "text": "summary(Rules):\n\n1. It costs 1 galcoin per bit to reserve on-peak bits in advance. Galcoins are very expensive.\n2. You can reprogram your machine on Vega in advance, using cheap, off-peak bits. Don't worry about the cost of those.\n3. You can resell reserved bits that you don't use (including fractional bits).\n4. Any decrease in the expected cost of transmitting a message, no matter how small, is worthwhile.\n\n_The GalCom hypothetical is a thought experiment where sending a bit of information is a clearly defined and very expensive action, which makes it useful for understanding various concepts in [https://arbital.com/p/-3qq](https://arbital.com/p/-3qq) and [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv)._\n\nThe year is 21026. Humanity has become a flourishing inter-stellar civilization. Capitalism never died, and stock markets are thriving in various star systems. 
Due to the light-speed limitations, the markets are out of sync, and there's a lot of money to be made by setting up an automated trader in one star system, moving to another star system, and sending information about the stock market to your automated trader the instant that that information becomes available.\n\nYou set up your own automated trader in the Vega star system, and you currently make a living by transmitting market data from the Deneb star system. To transmit that data, you use the deep-space Galactic Communications network known as \"GalCom.\"\n\nYou have a lot of data to transmit, and sending information on GalCom isn't cheap. Fortunately, it _is_ efficient: GalCom is a highly optimized network used by trillions of citizens to send and receive pulses of light across the cosmos. To send information via GalCom, you purchase [bits](https://arbital.com/p/3p0). For each bit you purchase, you are allowed to control a single pulse in the GalCom signal, making it either be present (representing a 1) or absent (representing a 0).\n\nIt's relatively cheap to buy bits on non-peak hours (to, e.g., reprogram your machine on Vega), but it's very expensive to reserve bits during peak hours (i.e., just after a juicy earnings report is published). You make your money by being among the very first people to send information about the stock market, so you have to reserve those very expensive precisely-timed bits in advance. This means that you have to know what messages you might want to send in advance, and reserve enough bits for the longest possible message that you might send. Fortunately, there's a secondary market on peak-hour bits, so if you end up reserving 5 bits and then only using 3 of them, you can resell the last two bits.\n\nFor example, say you know that you're going to send one of the following messages: `\"buy\"`, `\"sell\"`, or `\"hold\"`. GalCom would require you to purchase 4 bytes of information (enough to transmit four letters), and then if you actually end up transmitting `\"buy\"` you can sell the last byte back (because you only used 3 of the 4).\n\nOf course, you can send that message using much less than 4 bytes, if you're clever, both by developing efficient [encoding schemes](https://arbital.com/p/) and by [accounting for the fact that some outcomes are more likely than others](https://arbital.com/p/). Your profit margins depend on you finding ways to transmit information as efficiently as possible. To do that, you can use [https://arbital.com/p/-3qq](https://arbital.com/p/-3qq).", "date_published": "2016-05-28T12:51:26Z", "authors": ["Nate Soares"], "summaries": ["In the GalCom thought experiment, you live in the future, and make your money by living in the Deneb star system and sending information to an automated stock trader you own in the Vega star system. Sending interstellar information is possible, but very expensive — and you have a lot of data to send. If you're clever, you can reprogram the computer on Vega in the low-cost off-hours, but to send the key market data (at the critical moment) you have to reserve high-cost precisely-timed bits far in advance. When reserving those bits you have to pick a fixed set of messages and buy enough bits to cover the longest message that you might want to send. If you end up sending a shorter message you can sell some of the bits back. 
You send a very high volume of data, so small decreases in the average cost of sending a bit are worth a lot of effort."], "tags": ["Thought experiment", "Non-standard terminology"], "alias": "3tz"} {"id": "716d8dff85b8a4b2852b2f9540bca2e2", "title": "GalCom: Rules", "url": "https://arbital.com/p/3v0", "source": "arbital", "source_type": "text", "text": "1. It costs 1 galcoin per bit to reserve on-peak bits in advance. (Galcoins are very expensive.)\n2. You can reprogram your machine on Vega in advance, using slow, off-peak bits. The costs of reprogramming your machine on Vega to make more efficient use of the on-peak bits are inconsequential compared to the gains when you send on-peak bits, that is, you don't need to worry about the cost of reprogramming your machine.\n3. If you wind up not needing all the on-peak bits that you reserved, you can resell the bits you did not use (assuming your computer on Vega uses the appropriate protocols, which is the sort of thing you can set up in advance).\n4. The market for bits is very, very competitive; you are allowed to resell [fractional bits](https://arbital.com/p/3ty) (including [trits](https://arbital.com/p/trit), [decits](https://arbital.com/p/decit), and so on).\n5. You make your living by sending lots and lots of data, and your profits depend on your efficiency. Due to the volume of information you transmit, you are risk-neutral in the cost of bits, and thus, very small decreases in the [expected cost](https://arbital.com/p/expectation) of transmitting a message are worth quite a bit of money to you.", "date_published": "2016-05-27T12:02:07Z", "authors": ["Nate Soares"], "summaries": [], "tags": [], "alias": "3v0"} {"id": "3cc76de665c32d0c325354bad28ccf6d", "title": "Real analysis", "url": "https://arbital.com/p/real_analysis", "source": "arbital", "source_type": "text", "text": "Real analysis is a branch of mathematics that deals with real numbers, real-valued functions, sequences, series, differentiation, integration, and related subjects.", "date_published": "2016-05-26T22:04:10Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": ["Stub"], "alias": "3v1"} {"id": "27faabd36341c03cc8b57614cf1e94ef", "title": "Fractional bits: Expected cost interpretation", "url": "https://arbital.com/p/fractional_bits_as_expected_cost", "source": "arbital", "source_type": "text", "text": "In the [GalCom](https://arbital.com/p/3tz) thought experiment, you regularly have to send large volumes of information through deep space to your automated stock trader in the Vega star system, and every bit of information is very expensive. Imagine that a very important earnings report is coming up for Company X, and you want to transmit instructions to your automated trader in the Vega system the instant that the information becomes available. You've decided that you're capable of distinguishing between 7 different outcomes, which each appear equally likely to you. You label them -3, -2, -1, 0, +1, +2, and +3 (according to how the company performed compared to your median estimate). How many bits do you need to reserve on GalCom to ensure that you can send one of those 7 messages the moment that the earnings report comes out?\n\n$\\log_2(7)$ is about 2.807, which implies that two bits won't do — with two bits, you could only single out one message from a set of four. So you have to purchase at least 3 bits. 
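As a quick check of the arithmetic above, here is a minimal sketch (standard library only; the list of set sizes is an illustrative assumption) that computes how many whole bits must be reserved for an $n$-outcome message:

```python
import math

# Bits that must be reserved to pick out one of n equally likely outcomes:
# the smallest whole number of bits whose 2**bits codes cover all n outcomes.
for n in (2, 3, 4, 7, 8):
    print(n, round(math.log2(n), 3), math.ceil(math.log2(n)))
```

For $n = 7$ this prints 2.807 and 3, matching the reasoning above.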
(In general, if you want to send a message on GalCom that distinguishes between $n$ outcomes, you have to reserve at least [$\\lceil \\log_2(n) \\rceil$](https://arbital.com/p/3vc) bits.)\n\nOnce you've purchased 3 bits, there are 8 different signals that you'll be able to send via GalCom to the Vega system: 000, 001, 010, 011, 100, 101, 110, and 111. You can program your automated trader to interpret the first seven of these signals as -3, -2, -1, 0, +1, +2, and +3 respectively, and then you have one code — 111 — left over. Is there any way to make money off the fact that you have an extra code?\n\n(You're invited to pause and attempt to answer this question yourself. Is there any way to program the computer on Vega to use your 8 codes in such a way that you sometimes get to sell one bit back to GalCom?)\n\n
\n
\n
\n
\n
\n\nYou can recoup some of the losses as follows: Program your machine on Vega to recognize both 110 _and_ 111 as +3. Then, if the earnings report for company X comes out as \"very good,\" you get to resell one bit to GalCom. Why? Because both 110 and 111 will be interpreted as +3, so you don't care which one gets sent. Therefore, you can choose to send 110 if the next GalCom bit was going to be a 0, and 111 if the next GalCom bit was going to be a 1, and thereby shave one bit off of the next person's message (assuming your automated trader is programmed to forward that one bit where it needs to go, which is the sort of thing you can set up ahead of time).\n\n![](http://i.imgur.com/mbJ4Apo.png)\n\n_3 bits are enough to single out a leaf in a [binary tree](https://arbital.com/p/binary_tree) of depth 3. You only have seven messages you want to encode, which means one of the codes gets doubled up — in this case, both 110 and 111 are assigned to +3. Thus, your computer on Vega already knows that the answer is +3 by the time GalCom has transmitted 11, so if the earnings report comes up +3 you can repurpose the third bit._\n\nBecause each of the seven outcomes is equally likely, your [expected cost](https://arbital.com/p/4b5) of sending the message is about 2.857 bits: 6/7ths of the time you pay for 3 bits, but 1/7th of the time (when the report comes up +3) you only pay for 2 bits. If you're sending one message from a set of 7, you have to set aside at least 3 bits at the beginning, but every so often you get to sell one of them back.\n\nIf you send many, many [7-messages](https://arbital.com/p/3v9) using the encoding above, then on average, each 7-message costs about 2.857 bits.\n\nBut $\\log_2(7) \\neq 2.857,$ it's actually about 2.807. What gives?\n\nIn short, the above encoding is not the most efficient way to pack 7-messages. Or, rather, it's the most efficient way to pack one 7-message in isolation, but if we have lots of 7-messages, we can do better on average. Imagine two people teaming up to send two 7-messages. Two 7-messages is the same as one 49-message, where $(m, n)$ is encoded as $7m + n,$ i.e., 17 means \"2\" to the first person and \"3\" to the second. Sending a 49-message requires reserving $\\lceil \\log_2(49) \\rceil = 6$ bits on GalCom, thereby purchasing 64 different possible signals (000000 through 111111). $64 - 49 = 15$ of those signals will have a double meaning, which means that 15/49 of the time, one bit can be resold to GalCom. Thus, the expected cost to send two messages from two sets of seven is $6 - \\frac{15}{49} \\approx 5.694$ bits. When the two parties split the cost, they each pay only half that, or roughly 2.847 bits in expectation — slightly better than the roughly 2.857 they would have paid sending one 7-message in isolation.\n\nIf you pack three 7-messages together, the expected cost comes out to $(9 - \\frac{169}{343})\\approx 8.507$ bits, for an average cost of $\\approx 2.836$ bits per 7-message: slightly closer still to the $2.807$ bound. In the limit, as you pack more and more 7-messages together, $\\log_2(7)$ is the smallest average cost you can get. 
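The packing arithmetic above is easy to check mechanically. Here is a minimal sketch (standard library only; the list of bundle sizes is an illustrative assumption) that computes the expected bits per 7-message when $k$ of them are bundled into one $7^k$-message and spare codes are used to resell a bit, following the same accounting as above:

```python
import math

def expected_bits_per_message(outcomes, k):
    """Average bits per message when k equally likely `outcomes`-messages are
    packed together and every spare code lets one bit be resold."""
    messages = outcomes ** k               # e.g. 7**2 = 49 combined messages
    bits = math.ceil(math.log2(messages))  # bits reserved, e.g. 6
    spare = 2 ** bits - messages           # doubled-up codes, e.g. 15
    return (bits - spare / messages) / k   # resell one bit with prob spare/messages

for k in (1, 2, 3, 10, 100):
    print(k, round(expected_bits_per_message(7, k), 3))
print("lower bound:", round(math.log2(7), 3))
```

The per-message cost falls from about 2.857 toward the $\\log_2(7) \\approx 2.807$ floor, matching the figures worked out above.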
For more on why this is the lower bound, see [How many bits is a trit?](https://arbital.com/p/trit_in_bits), [https://arbital.com/p/427](https://arbital.com/p/427), and [https://arbital.com/p/44l](https://arbital.com/p/44l).\n\nThus, we can interpret \"a 7-message carries about 2.807 bits of information\" as \"sending lots of 7-messages costs about 2.807 bits on average, if you pack your message as cleverly as possible.\"\n\nMore generally, the fractional portion of the amount of information carried by a message can be interpreted as the lower bound for the expected cost of sending that message. If we have to send a single $n$-message across GalCom in isolation, then we have to reserve at least $\\lceil \\log_2(n) \\rceil$ bits. But if we append our message to a longer message, and pack our message carefully, then we can push our expected cost lower and lower. What's the lower bound on the expected cost? $\\log_2(n).$\n\nWhy is $\\log_2(n)$ the lower bound? The proof has two parts. First, we have to show that you can in fact get closer and closer to an average cost of $\\log_2(n)$ as you send more and more messages. Second, imagine you could send a $b$-message for $x < \\log_2(b)$ bits, on average. Then you could use $b$-messages to encode lots and lots of 2-messages, which (by the same method) would cost on average $\\log_b(2) \\cdot x$ bits per $2$-message — $\\log_b(2)$ $b$-messages per $2$-message, times $x$ bits per $b$-message. But $\\log_b(2) \\cdot \\log_2(b) = 1$ for all $b,$ which means that, if you could send a $b$-message for less than $\\log_2(b)$ bits, then you could use bits to send more than one 2-message per bit!", "date_published": "2016-06-24T02:29:24Z", "authors": ["Nate Soares"], "summaries": [], "tags": [], "alias": "3v2"} {"id": "da5cc54823ea8bddd84fb169dbda2b88", "title": "Join and meet: Examples", "url": "https://arbital.com/p/join_examples", "source": "arbital", "source_type": "text", "text": "A union of sets and the least common multiple of a set of natural numbers can both be viewed as joins. In addition, joins can be useful to the designers of statically typed programming languages.\n\n%%%knows-requisite([https://arbital.com/p/3v1](https://arbital.com/p/3v1)):\nThe real numbers\n------------------------------\n\nConsider the [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) $\\langle \\mathbb{R}, \\leq \\rangle$ of all real numbers ordered by the standard comparison relation. For any non-empty $X \\subseteq \\mathbb{R}$, $\\bigvee X$ exists if and only if $X$ has an upper bound; this fact falls directly out of the definition of the set of real numbers.\n%%%\n\n\nSubtyping\n------------------\n\nStatically typed programming languages often define a poset of types ordered by the subtyping relation; Scala is one such language. Consider the following Scala program.\n\n![A scala program with class inheritance](http://i.imgur.com/IUkzT7A.png)\n\nWhen a programmer defines a class hierarchy in an object-oriented language, they are actually defining a poset of types. The above program defines the simple poset shown in the following Hasse diagram.\n\n![Hasse diagram for the nominal subtyping relation defined by the above program](http://i.imgur.com/dk1EHuA.png)\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [width = 0.1, height = 0.1];\n edge [arrowhead = \"none\"];\n rankdir = BT;\n Dog -> Animal;\n Cat -> Animal;\n}\n%%%\n\nNow consider the expression *if (b) dog else cat*. 
If b is true, then it evaluates to a value of type Dog. If b is false, then it evaluates to a value of type Cat. What type, then, should *if (b) dog else cat* have? Its type should be the join of Dog and Cat, which is Animal.\n\nPower sets\n-------------------\n\nLet $X$ be a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz). Consider the [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) $\\langle \\mathcal{P}(X), \\subseteq \\rangle$, the power set of $X$ ordered by inclusion. In this poset, joins are unions: for all $A \\subseteq \\mathcal{P}(X)$, $\\bigvee A = \\bigcup A$. This can be shown as follows. Let $A \\subseteq \\mathcal{P}(X)$. Then $\\bigcup A$ is an upper bound of $A$ because a union contains each of its constituent sets. Furthermore, $\\bigcup A$ is the *least* upper bound of $A$. For let $Z$ be an upper bound of $A$. Then $x \\in \\bigcup A$ implies $x \\in Y$ for some $Y \\in A$, and since $Y \\subseteq Z$, we have $x \\in Y \\subseteq Z$. Since $x \\in \\bigcup A$ implies $x \\in Z$, we have $\\bigcup A \\subseteq Z$. Hence, $\\bigvee A = \\bigcup A$.\n\nDivisibility\n------------------\n\nConsider the poset $\\langle \\mathbb Z_+, | \\rangle$ of divisibility on the positive integers. In this poset, the upper bounds of an integer are exactly its multiples. Thus, the join of a set of positive integers in $\\langle \\mathbb Z_+, | \\rangle$ is their [least common multiple](https://arbital.com/p/least_common_multiple). Dually, the meet of a set of positive integers in $\\langle \\mathbb Z_+, | \\rangle$ is their [greatest common divisor](https://arbital.com/p/greatest_common_divisor).", "date_published": "2016-06-20T01:57:20Z", "authors": ["Eric Rogstad", "Kevin Clancy"], "summaries": [], "tags": [], "alias": "3v4"} {"id": "2ba4c4e5b021b6365bc80abc399bfc40", "title": "n-message", "url": "https://arbital.com/p/n_message", "source": "arbital", "source_type": "text", "text": "A message singling out one thing from a set of $n$ is sometimes called an $n$-message. For example, a message singling out one thing from a set of 13 is a 13-message. An $n$-message carries $\\log_2(n)$ [bits](https://arbital.com/p/3p0) of information. For a formal account of what it means for a message to \"single out one thing from a set of $n$,\" see [https://arbital.com/p/3xd](https://arbital.com/p/3xd).", "date_published": "2016-06-07T03:44:19Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": ["A message singling out one thing from a set of $n$ is sometimes called an $n$-message. For example, a message singling out one thing from a set of 13 is a 13-message. 
An $n$-message carries $\\log_2(n)$ [bits](https://arbital.com/p/3p0) of information."], "tags": ["Non-standard terminology", "Definition"], "alias": "3v9"} {"id": "f1bebe00339f0da03fe148a3a5f2d461", "title": "Thought experiment", "url": "https://arbital.com/p/thought_experiment", "source": "arbital", "source_type": "text", "text": "Meta-tag for thought experiments.", "date_published": "2016-05-27T12:08:22Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Meta tags"], "alias": "3vb"} {"id": "0c608bc805797496ed624dadc1838074", "title": "Ceiling", "url": "https://arbital.com/p/mathematics_ceiling", "source": "arbital", "source_type": "text", "text": "The ceiling of a [real number](https://arbital.com/p/real_number) $x,$ denoted $\\lceil x \\rceil$ or sometimes $\\operatorname{ceil}(x),$ is the first [https://arbital.com/p/integer](https://arbital.com/p/integer) $n \\ge x.$ For example, $\\lceil 3.72 \\rceil = 4, \\lceil 4 \\rceil = 4,$ and $\\lceil -3.72 \\rceil = -3.$ In other words, the ceiling function rounds its input up to the nearest integer.\n\nFor the function that rounds its input down to the nearest integer, see [https://arbital.com/p/floor](https://arbital.com/p/floor). Ceiling and floor are not to be confused with [fix](https://arbital.com/p/fix_towards_zero) and [https://arbital.com/p/ceilfix](https://arbital.com/p/ceilfix), which round towards and away from zero (respectively).\n\nFormally, ceiling is a [function](https://arbital.com/p/3jy) of type $\\mathbb R \\to \\mathbb Z.$ The ceiling function can also be defined on [complex numbers](https://arbital.com/p/complex_number).", "date_published": "2016-05-27T13:27:47Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Math 2", "Needs clickbait", "C-Class"], "alias": "3vc"} {"id": "04ce6eeb599a2637f6c45d318d8df4bf", "title": "Colon-to notation", "url": "https://arbital.com/p/colon_to_notation", "source": "arbital", "source_type": "text", "text": "In mathematics, the notation $f : X \\to Y$ (here, \"colon-to notation,\" because the arrow $\\to$ is written \"\\to\" in [https://arbital.com/p/LaTeX](https://arbital.com/p/LaTeX)) means that $f$ is a [function](https://arbital.com/p/3jy) with [domain](https://arbital.com/p/3js) $X$ and [codomain](https://arbital.com/p/3lg) $Y$. It can be read \"$f$, a function from $X$ to $Y$.\" \n\nThis can be thought of as ascribing a function type to the value $f$. The use of a colon to express that a given value has a given type, as is done in type theory, is a generalization of this notation.\n\n# Examples\n\n$f : \\mathbb{R} \\to \\mathbb{R}$ means that $f$ is a function from the real numbers to the real numbers, such as $x \\mapsto x^2$ ([mapsto notation](https://arbital.com/p/3vm)). \n\n$f : \\mathbb{R} \\times \\mathbb{R} \\to \\mathbb{R}$ means that $f$ is a function from pairs of real numbers to real numbers. 
The $\\times$ here refers to the [Cartesian product](https://arbital.com/p/3xb) of [sets](https://arbital.com/p/3jz).", "date_published": "2016-08-04T14:54:29Z", "authors": ["Kevin Clancy", "Dylan Hendrickson", "Eric Rogstad", "Izaak Meckler", "Qiaochu Yuan"], "summaries": [], "tags": ["Non-standard terminology"], "alias": "3vl"} {"id": "d20b90bd342b7e9041537d5892f55f86", "title": "Mapsto notation", "url": "https://arbital.com/p/mapsto_notation", "source": "arbital", "source_type": "text", "text": "In mathematics, the arrow $\\mapsto$ (which in [https://arbital.com/p/LaTeX](https://arbital.com/p/LaTeX) is called \"\\mapsto\") is used to describe what a [function](https://arbital.com/p/3jy) does to an arbitrary input. It is commonly used in combination with [colon-to notation](https://arbital.com/p/3vl), which describes what a function's [domain](https://arbital.com/p/3js) and [codomain](https://arbital.com/p/3lg) are. \n\nBy itself, $\\mapsto$ can be used to describe a function without naming it, and so is a way to describe [anonymous functions](https://arbital.com/p/anonymous_function) in mathematics. \n\n# Examples\n\nThe function $f(x) = x^2$ from the [real numbers](https://arbital.com/p/4bc) to the real numbers can be described using a combination of colon-to notation and mapsto notation as\n\n$$f : \\mathbb{R} \\to \\mathbb{R}$$\n$$x \\mapsto x^2.$$\n\nIn one line, although this is less common,\n\n$$f : \\mathbb{R} \\ni x \\mapsto x^2 \\in \\mathbb{R}.$$\n\n(see [in notation](https://arbital.com/p/3vq)).", "date_published": "2016-07-23T12:44:19Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Kevin Clancy", "Qiaochu Yuan"], "summaries": [], "tags": [], "alias": "3vm"} {"id": "8977275a3482edbaf1f0ec3f0133d7e3", "title": "In notation", "url": "https://arbital.com/p/in_notation", "source": "arbital", "source_type": "text", "text": "In mathematics, the notation $x \\in X$ (where $\\in$ is written \"\\in\" in [https://arbital.com/p/5xw](https://arbital.com/p/5xw)) means that $X$ is a [set](https://arbital.com/p/3jz) and $x$ is an element of that set.\n\n# Examples\n\n$r \\in \\mathbb{R}$ means that $r$ is a [real number](https://arbital.com/p/4bc).", "date_published": "2016-08-27T12:02:06Z", "authors": ["Eric Bruylant", "Kevin Clancy", "Qiaochu Yuan"], "summaries": [], "tags": [], "alias": "3vq"} {"id": "936fc03b1a0172d1f9afa94642f125df", "title": "Vector space", "url": "https://arbital.com/p/vector_space", "source": "arbital", "source_type": "text", "text": "A vector space is a [field](https://arbital.com/p/algebraic_field) $F$ paired with a [https://arbital.com/p/3gd](https://arbital.com/p/3gd) $V$ and a function $\\cdot : F \\times V \\to V$ (called \"scalar multiplication\") such that $1 \\cdot v = v$ and such that scalar multiplication distributes over addition. 
Elements of the field are called \"scalars,\" elements of the group are called \"vectors.\"\n\nAlso there are some nice geometric interpretations and stuff.", "date_published": "2016-06-13T14:10:50Z", "authors": ["Nate Soares", "Patrick Stevens"], "summaries": [], "tags": ["Stub"], "alias": "3w0"} {"id": "2d60c56463a9e41215a17311a063dabf", "title": "Subspace", "url": "https://arbital.com/p/vector_subspace", "source": "arbital", "source_type": "text", "text": "A subspace $U=(F_U, V_U)$ of a [https://arbital.com/p/3w0](https://arbital.com/p/3w0) $W=(F_W, V_W)$ is a vector space where $F_U = F_W$ and $V_U$ is a [https://arbital.com/p/subgroup](https://arbital.com/p/subgroup) of $V_W,$ and $V_U$ is [closed](https://arbital.com/p/) under scalar multiplication.", "date_published": "2016-05-27T17:54:25Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3w1"} {"id": "f9d1f35d87425515b221fddd7039a085", "title": "Sum of vector spaces", "url": "https://arbital.com/p/vector_space_sum", "source": "arbital", "source_type": "text", "text": "The sum of two [vector spaces](https://arbital.com/p/3w0) $U$ and $W,$ written $U + W,$ is a vector space where the set of vectors is all possible $u + w$ for every $u \\in U$ and $w \\in W.$", "date_published": "2016-05-27T17:57:02Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3w2"} {"id": "d0c37e9b5b10264c2aa2e0cbf364c928", "title": "Direct sum of vector spaces", "url": "https://arbital.com/p/vector_space_direct_sum", "source": "arbital", "source_type": "text", "text": "The direct sum of two [vector spaces](https://arbital.com/p/3w0) $U$ and $W,$ written $U \\oplus W,$ is just the [sum](https://arbital.com/p/3w2) of $U$ and $W,$ but it can only be applied when $U$ and $W$ are [linearly independent](https://arbital.com/p/).", "date_published": "2016-05-27T17:58:14Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3w3"} {"id": "415b133637af3d2aa1bd550605c038d5", "title": "Logarithm: Examples", "url": "https://arbital.com/p/log_examples", "source": "arbital", "source_type": "text", "text": "$\\log_{10}(100)=2.$ $\\log_2(4)=2.$ $\\log_2(3)\\approx 1.58.$ (TODO)", "date_published": "2016-05-28T01:20:41Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3wg"} {"id": "4a0492c7cab116a9a0c03a14c12aa2cc", "title": "Logarithm: Exercises", "url": "https://arbital.com/p/log_exercises", "source": "arbital", "source_type": "text", "text": "Without using a calculator: What is $\\log_{10}(4321)$? What integer is it larger than, what integer is it smaller than, and which integer is it closer to? How do you know? (TODO)", "date_published": "2016-05-27T19:19:24Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Exercise ", "Stub"], "alias": "3wh"} {"id": "eac67fe60d89e577a67ccceebd1f0471", "title": "Decit", "url": "https://arbital.com/p/decit", "source": "arbital", "source_type": "text", "text": "A decit, or \"decimal digit\" (also known as a \"dit\", \"ban\", or \"hartley\") is the amount of [information](https://arbital.com/p/3xd) required to single out one thing from a set of ten. 
That is, a decit is a measure of information equal to $\\log_2(10)\\approx 3.32$ [shannons](https://arbital.com/p/shannon).", "date_published": "2016-06-07T03:44:52Z", "authors": ["Arthur Milchior", "Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "3wn"} {"id": "aa36c10e1f1541c38a4b273e0222823d", "title": "Logarithmic identities", "url": "https://arbital.com/p/log_identities", "source": "arbital", "source_type": "text", "text": "Recall that [$\\log_b(n)$](https://arbital.com/p/3nd) is defined to be the (possibly fractional) number of times that you have to multiply 1 by $b$ to get $n.$ Logarithm functions satisfy the following properties, for any base $b$:\n\n- [Inversion of exponentials](https://arbital.com/p/): $b^{\\log_b(n)} = \\log_b(b^n) = n.$\n- [Log of 1 is 0](https://arbital.com/p/): $\\log_b(1) = 0$\n- [Log of the base is 1](https://arbital.com/p/): $\\log_b(b) = 1$\n- [Multiplication is addition in logspace](https://arbital.com/p/): $\\log_b(x\\cdot y) = \\log_b(x) + \\log_b(y).$\n- [Exponentiation is multiplication in logspace](https://arbital.com/p/): $\\log_b(x^n) = n\\log_b(x).$\n- [Symmetry across log exponents](https://arbital.com/p/): $x^{\\log_b(y)} = y^{\\log_b(x)}.$\n- [Change of base](https://arbital.com/p/): $\\log_a(n) = \\frac{\\log_b(n)}{\\log_b(a)}$", "date_published": "2016-05-28T01:16:13Z", "authors": ["Nate Soares"], "summaries": ["- [Inversion of exponentials](https://arbital.com/p/): $b^{\\log_b(n)} = \\log_b(b^n) = n.$\n- [Log of 1 is 0](https://arbital.com/p/): $\\log_b(1) = 0$\n- [Log of the base is 1](https://arbital.com/p/): $\\log_b(b) = 1$\n- [Multiplication is addition in logspace](https://arbital.com/p/): $\\log_b(x\\cdot y) = \\log_b(x) + \\log_b(y).$\n- [Exponentiation is multiplication in logspace](https://arbital.com/p/): $\\log_b(x^n) = n\\log_b(x).$\n- [Symmetry across log exponents](https://arbital.com/p/): $x^{\\log_b(y)} = y^{\\log_b(x)}.$\n- [Change of base](https://arbital.com/p/): $\\log_b(n) = \\frac{\\log_a(n)}{\\log_a(b)}$"], "tags": ["Start", "Needs lenses"], "alias": "3wp"} {"id": "5bf5cd17b68c7060401c3db5e593cf5e", "title": "Needs lenses", "url": "https://arbital.com/p/needs_lenses_meta_tag", "source": "arbital", "source_type": "text", "text": "A tag for pages that have a technical/non-intuitive main lens, and which need lenses that can explain the concept less formally.", "date_published": "2016-06-24T23:04:13Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev", "Stephanie Zolayvar"], "summaries": [], "tags": [], "alias": "3wq"} {"id": "ec339e25fb9644fa6e6813f820927ff8", "title": "Logarithms invert exponentials", "url": "https://arbital.com/p/log_inverts_exp", "source": "arbital", "source_type": "text", "text": "The function [$\\log_b$](https://arbital.com/p/3nd) inverts the function $b^{(\\cdot)}.$ In other words, $\\log_b(n) = x$ implies that $b^x = n,$ so $\\log_b(b^x)=x$ and $b^{\\log_b(n)}=n.$ (For example, $\\log_2(2^3) = 3$ and $2^{\\log_2(8)} = 8.$) Thus, logarithms give us tools for analyzing anything that grows [exponentially](https://arbital.com/p/4ts). If a population of bacteria grows exponentially, then logarithms can be used to answer questions about how long it will take the population to reach a certain size. If your wealth is accumulating interest, logarithms can be used to ask how long it will take until you have a certain amount of wealth. 
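For instance, here is a minimal sketch (standard library only; the starting values, targets, and growth rates are illustrative assumptions) of using a logarithm to answer "how many periods until an exponentially growing quantity reaches a target?":

```python
import math

def periods_to_reach(start, target, growth_factor):
    """Solve start * growth_factor**t = target for t using a logarithm."""
    return math.log(target / start, growth_factor)

# A bacterial population doubling each period, starting at 1,000 cells:
print(periods_to_reach(1_000, 1_000_000, 2))    # ~9.97 doublings
# Savings growing at 5% interest per year, waiting for them to double:
print(periods_to_reach(100, 200, 1.05))         # ~14.2 years
```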
(TODO)", "date_published": "2016-07-04T19:14:19Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "Joe Zeng"], "summaries": [], "tags": ["Needs clickbait", "Stub"], "alias": "3wr"} {"id": "82a543dd0e34156f674791cb9b6d0686", "title": "How many bits to a trit?", "url": "https://arbital.com/p/bits_in_a_trit", "source": "arbital", "source_type": "text", "text": "There are [$\\log_2(3) \\approx 1.58$](https://arbital.com/p/3nd) [bits](https://arbital.com/p/3p0) to a [https://arbital.com/p/3ww](https://arbital.com/p/3ww). This can be interpreted a few different ways:\n\n1. If you multiply the number of messages you might want to send by 3, then the cost of encoding the message will go up by 1.58 bits on average. See [https://arbital.com/p/+marginal_message_cost](https://arbital.com/p/+marginal_message_cost) for more on this interpretation.\n2. If you pack $n$ independent and equally likely [3-messages](https://arbital.com/p/3v9) together into one giant $3^n$ message, then the cost (in bits) per individual 3-message drops as $n$ grows, ultimately converging to $\\log_2(3)$ bits per 3-message as $n$ gets very large. For more on this, see [https://arbital.com/p/+average_message_cost](https://arbital.com/p/+average_message_cost) and the [GalCom example of encoding trits using bits](https://arbital.com/p/3zn).\n3. The infinite expansion of $\\log_2(3) = 1.58496250072\\ldots$ tells us not just how many bits it takes to send one 3-message $(\\approx \\lceil 1.585 \\rceil = 2)$ but also how long it takes to send any number of 3-messages put together. For example, it costs 2 bits to send one 3-message; 16 bits to send 10; 159 bits to send 100; 1585 to send 1,000; 15850 to send 10,000; 158497 to send 100,000; and so on. For more on this interpretation, see the [\"series of ceilings\"](https://arbital.com/p/log_series_of_ceilings) interpretation of logarithms.", "date_published": "2016-06-02T23:04:18Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": ["$\\log_2(3) \\approx 1.585.$ This can be interpreted a few different ways:\n\n1. If you multiply the number of messages you might want to send by 3, then the cost of encoding the message will go up by 1.58 bits on average.\n2. If you pack $n$ independent and equally likely [3-messages](https://arbital.com/p/3v9) together into one giant $3^n$ message, then the cost (in bits) per individual 3-message drops as $n$ grows, ultimately converging to $\\log_2(3)$ bits per 3-message as $n$ gets very large.\n3. The infinite expansion of $\\log_2(3) = 1.58496250072\\ldots$ tells us not just how many bits it takes to send one 3-message $(\\approx \\lceil 1.585 \\rceil = 2)$ but also how long it takes to send any number of 3-messages put together. For example, it costs 2 bits to send one 3-message; 16 bits to send 10; 159 bits to send 100; 1585 to send 1,000; 15850 to send 10,000; 158497 to send 100,000; and so on."], "tags": ["Needs lenses", "Work in progress"], "alias": "3wv"} {"id": "a78eaf17aebf8fab0a372a09f3bfaa0c", "title": "Trit", "url": "https://arbital.com/p/trit", "source": "arbital", "source_type": "text", "text": "A trit is the trinary analog of the [https://arbital.com/p/3y2](https://arbital.com/p/3y2): Where a bit distinguishes between 2 possibilities, a trit distinguishes between 3. \"Trit,\" like \"bit,\" is overloaded:\n\n- An abstract trit (analog of the [abstract bit](https://arbital.com/p/3y3)) is an element of a [set](https://arbital.com/p/3jz) with three elements. Thus, there are three abstract trits. 
They are often denoted 0, 1, and 2.\n- A trit of data (analog of a [bit of data](https://arbital.com/p/3p0)) is the amount of data that can be carried by a physical object that can be placed in any one of three states.\n- A trit of information (analog of a [https://arbital.com/p/shannon](https://arbital.com/p/shannon)) is the amount of information someone receives when they start out maximally uncertain about which one of three outcomes happened, and then they learn which one happened. A trit of information is $\\log_2(3)\\approx 1.58$ bits of information.\n- A trit of evidence (analog of a [bit of evidence](https://arbital.com/p/evidence_bit)) in favor of a hypothesis is an observation that is three times as likely if the hypothesis is true (versus if the hypothesis is false).\n\n\"Trit\" is a portmanteau of \"trinary digit.\"", "date_published": "2016-06-07T03:56:27Z", "authors": ["Nate Soares"], "summaries": ["The trinary analog of the [https://arbital.com/p/3y2](https://arbital.com/p/3y2). Depending on the context, a trit can mean any of the following things:\n\n1. Abstract trit: one thing from a set of three\n2. Trit of data: the amount of data carried by an object that can be put into three states\n3. Trit of information: $\\log_2(3) \\approx 1.58$ [Sh](https://arbital.com/p/shannon) of information\n4. Trit of evidence: $3:1$ evidence in favor of a specific hypothesis"], "tags": ["Stub"], "alias": "3ww"} {"id": "0569cdd7be60067629d4cc4d434cf120", "title": "Intradependent encodings can be compressed", "url": "https://arbital.com/p/3x3", "source": "arbital", "source_type": "text", "text": "Given an encoding scheme $E$ which gives an [https://arbital.com/p/3x4](https://arbital.com/p/3x4) of a message $m,$ we can in principle design a more efficient coding $E^\\prime$ that gives a shorter encoding of $m.$ For example, $E$ encodes 8-letter English words as a series of letters, $m$ is aardvark, then $E(m)$ is intradependent (because you already know what the message is by the time you've seen \"aardv\"), so you can define an alternative encoding $E^\\prime$ which encodes \"aardvark\" as just \"aardv\", thereby saving three letters.\n\nThe practice of taking an intradependent encoding and finding a more efficient one is known as [https://arbital.com/p/compression](https://arbital.com/p/compression).", "date_published": "2016-05-29T14:38:50Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "3x3"} {"id": "389d5898847f56955a7b8738b7037147", "title": "Intradependent encoding", "url": "https://arbital.com/p/intradependent_encoding", "source": "arbital", "source_type": "text", "text": "We call an [https://arbital.com/p/encoding](https://arbital.com/p/encoding) $E(m)$ of a message $m$ \"intradependent\" if the fact that $E(m)$ encodes $m$ can be deduced without looking at the whole encoding. For example, imagine that you know you're going to see an 8-letter English word, and you see the letters \"aardv\". You can then deduce that the message is \"aardvark\" without looking at the last three letters, because that's the only valid message that starts with \"aardv\".\n\n(Seeing \"aard\" is not enough, because [aardwolfs](https://arbital.com/p/https://en.wikipedia.org/wiki/Aardwolf) exist.)\n\nIn an intradependent encoding, some parts of the encoding carry information about the other parts. 
For example, once you've seen \"aard\", there are $26^4 = 456976$ possible combinations of the next four letters, but \"aard\" cuts the space down to just two possibilities — \"vark\" and \"wolf\". This means that the first four letters carry $\\log_2(26^4) - 1 \\approx 17.8$ bits of information about the last four. (The fifth letter carries one final bit of information, in the choice between \"v\" and \"w\". The last three letters carry no new information.)\n\nFormally, the intradependence of an encoding is defined to be the sum of [mutual information](https://arbital.com/p/mutual_information) between each symbol in the encoding and all the previous symbols. Given an intradependent encoding, it is possible in principle to design a more efficient encoding; see [https://arbital.com/p/+intradependent_compression](https://arbital.com/p/+intradependent_compression).", "date_published": "2016-05-29T13:18:16Z", "authors": ["Nate Soares"], "summaries": ["An encoding $E(m)$ of a message $m$ is intradependent if the fact that $E(m)$ encodes $m$ can be deduced without looking at the whole encoding. For example, if you know you're going to see an 8-letter English word, and you see \"aardv\", you know that the message is \"aardvark\", even without looking at the last three letters.\n\nFormally, intradependence is defined in terms of the mutual information shared by symbols in the encoding. Given that an encoding is intradependent, it's possible (in principle) to develop a more efficient encoding that is less intradependent."], "tags": ["Start", "Non-standard terminology"], "alias": "3x4"} {"id": "a3d77e46fc24e1cd8bc085c908331446", "title": "Non-standard terminology", "url": "https://arbital.com/p/nonstandard_terminology_meta_tag", "source": "arbital", "source_type": "text", "text": "A tag for terminology that is Arbital-specific, Arbital-originated, or just not very common outside of Arbital.", "date_published": "2016-07-06T04:56:06Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "3x5"} {"id": "f6bce6388d65fac7924d12f9c705a5b5", "title": "Dependent messages can be encoded cheaply", "url": "https://arbital.com/p/encoding_dependent_messages", "source": "arbital", "source_type": "text", "text": "Say you want to transmit a [2-message](https://arbital.com/p/3v9), a 4-message, and a 256-message to somebody. For example, you might want to tell them which way a coin came up, a cardinal direction, and a letter (encoded as one byte of [ASCII](https://arbital.com/p/https://en.wikipedia.org/wiki/ASCII) text). How many [bits](https://arbital.com/p/3p0) of information does it take to transmit those three messages?\n\nAt most, it takes 11 bits: One for the coin, two for the direction, and 8 for the ASCII byte. For example, (heads, north, A) might be encoded as 00001000001, where 0 means \"heads\", 00 means \"north\", and 01000001 means \"A\" in ASCII.\n\nBut what if the messages depend on each other? What if the way that the cardinal direction was picked was by looking at the coin (such that you always say north if the coin lands heads, and south if the coin lands tails), and then the letter is picked by looking at the direction (such that you always say A for north, B for east, Z for south, and Y for west). Then how many bits does it take to transmit the message (heads, north, A)? Only one! Why? 
Because there are now only two possible messages: (heads, north, A) and (tails, south, Z). Given two people who know the links between the three messages, all you need to tell them is how the coin came up, and they can figure out the entire message.\n\nFormally, if you want to send multiple messages, and those messages share [mutual information](https://arbital.com/p/mutual_information), then the amount of [https://arbital.com/p/information](https://arbital.com/p/information) it takes to encode all of the messages together is less than the amount of information it takes to encode each one separately. (In fact, the amount of information you can save by encoding them together is at most the amount of mutual information between them).\n\nLooking at the collection of multiple messages as a single message, this fact is an immediate consequence of the fact that [some messages being more likely than others means you can develop more efficient encodings for them](https://arbital.com/p/).\n\nAlternatively, this fact can be seen as a corollary of the fact that [intradependent encodings can be compressed](https://arbital.com/p/intradependent_compression): Given three messages $m_1, m_2, m_3$ and an encoding scheme $E$, the encoding $E(m_1)E(m_2)E(m_3)$ made by simply putting all three encodings together can be interpreted as a single [intradependent encoding](https://arbital.com/p/3x4) of the triplet $(m_1, m_2, m_3)$.", "date_published": "2016-05-29T16:56:20Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "3x6"} {"id": "7da1ff1a6badb5486e3b869dc630cb02", "title": "Cartesian product", "url": "https://arbital.com/p/cartesian_product", "source": "arbital", "source_type": "text", "text": "The Cartesian product of two sets $A$ and $B,$ denoted $A \\times B,$ is the set of all [ordered pairs](https://arbital.com/p/ordered_pair) $(a, b)$ such that $a \\in A$ and $b \\in B.$ For example, $\\mathbb B \\times \\mathbb N$ is the set of all pairs of a [https://arbital.com/p/boolean](https://arbital.com/p/boolean) with a [natural number](https://arbital.com/p/natural_number); it contains elements like (true, 0), (false, 17), (true, 17), (true, 100), and (false, 101).\n\nCartesian products are often referred to as just \"products.\"\n\nCartesian products can be constructed from more than two sets, for example, $\\mathbb B^3 = \\mathbb B \\times \\mathbb B \\times \\mathbb B$ is the set of all [https://arbital.com/p/boolean](https://arbital.com/p/boolean) [3-tuples](https://arbital.com/p/tuple). (The $\\times$ operator is [associative](https://arbital.com/p/3h4), so we don't need to write parentheses when using it on a whole chain of sets.) A product of $n$ sets is called an $n$-ary product.", "date_published": "2016-05-29T21:19:35Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Start", "Needs image", "Needs clickbait"], "alias": "3xb"} {"id": "81be3aeb6c1ea1858ffec4fd20ab0859", "title": "Information", "url": "https://arbital.com/p/information", "source": "arbital", "source_type": "text", "text": "summary(Technical): Given a probability distribution $\\mathrm P$ over a set $O$ of possible observations, an observation $o \\in O$ is said to carry $\\log \\frac{1}{\\mathrm P(o)}$ units of information (with respect to $\\mathrm P$), where the base of the logarithm determines the unit of information. 
The standard choice is log base 2, in which case the information is measured in [shannons](https://arbital.com/p/shannon).\n\nInformation is a measure of how much a message grants an observer the ability to predict the world. For a formal description of what this means, see [Information: Formalization](https://arbital.com/p/). Information is observer-dependent: If someone tells both of us your age, you'd learn nothing, while I'd learn how old you are. [https://arbital.com/p/+3qq](https://arbital.com/p/+3qq) gives us tools for quantifying and studying information.\n\nInformation is measured in [shannons](https://arbital.com/p/shannon), which are also the units used to measure [https://arbital.com/p/uncertainty](https://arbital.com/p/uncertainty) and [https://arbital.com/p/entropy](https://arbital.com/p/entropy). Given that you're about to observe a coin that certainly came up either heads or tails, one shannon is the difference between utter uncertainty about which way the coin came up, and total certainty that the coin came up heads. Specifically, the amount of information in an observation is quantified as the [https://arbital.com/p/3nd](https://arbital.com/p/3nd) of the [https://arbital.com/p/reciprocal](https://arbital.com/p/reciprocal) of the [https://arbital.com/p/1rf](https://arbital.com/p/1rf) that the observer assigned to that observation.\n\nFor a version of the previous sentence written in English, see [Measuring information](https://arbital.com/p/measuring_information). For a discussion of why this quantity in particular is called \"information,\" see [Information: Intro](https://arbital.com/p/).\n\n# Information vs Data\n\nThe word \"information\" has a precise, technical meaning within the field of [https://arbital.com/p/3qq](https://arbital.com/p/3qq). Information is not to be confused with [https://arbital.com/p/data](https://arbital.com/p/data), which is a measure of how many messages a communication medium can distinguish in principle. For example, a series of three ones and zeros is 3 [bits of data](https://arbital.com/p/3p0), but the amount of information those three bits carry depends on the observer. Unfortunately, in colloquial usage (and even in some texts on information theory!) the word \"information\" is used interchangeably with the word \"data\"; matters are not helped by the fact that the standard unit of information is sometimes called a \"bit\" (the name for the standard unit of data), despite the fact that these units are distinct. The proper name for a binary unit of information is a \"[https://arbital.com/p/shannon](https://arbital.com/p/shannon).\"\n\nThat said, there are many links between information and data (and between shannons and bits). For instance:\n\n* An object with a [https://arbital.com/p/3y1](https://arbital.com/p/3y1) of $n$ bits can carry anywhere between 0 and $\\infty$ [shannons](https://arbital.com/p/shannon) of information (depending on the observer), but the maximum amount of information an observer can consistently [_expect_](https://arbital.com/p/expectation) from observing the object is $n$ shannons. For details, see [https://arbital.com/p/+expected_info_capacity](https://arbital.com/p/+expected_info_capacity).\n* The number of shannons an observer gets from an observation is equal to the number of bits in the [encoding](https://arbital.com/p/encoding) for that observation in their [https://arbital.com/p/ideal_encoding](https://arbital.com/p/ideal_encoding). 
In other words, shannons measure the number of bits of data you would use to communicate that observation to someone who knows everything you know (except for that one observation). For details, see [https://arbital.com/p/+ideal_encoding](https://arbital.com/p/+ideal_encoding) and [Information as encoding length](https://arbital.com/p/).\n\nFor more on the difference between information and data, see [https://arbital.com/p/+info_vs_data](https://arbital.com/p/+info_vs_data).\n\n# Information and Entropy\n\n[https://arbital.com/p/+entropy](https://arbital.com/p/+entropy) is a measure on probability distributions which, intuitively, measures the total uncertainty of that distribution. Specifically, entropy measures the number of shannons of information that the distribution [expects](https://arbital.com/p/expectation) to gain by being told the actual state of the world. As such, it can be interpreted as a measure of how much information the distribution says it is missing about the world. See also [Information and entropy](https://arbital.com/p/).\n\n# Formalization\n\nThe amount of information an observation carries to an observer is the [https://arbital.com/p/3nd](https://arbital.com/p/3nd) of the [https://arbital.com/p/reciprocal](https://arbital.com/p/reciprocal) of the [https://arbital.com/p/1rf](https://arbital.com/p/1rf) that they assigned to that observation. In other words, given a probability distribution $\\mathrm P$ over a set $O$ of possible observations, an observation $o \\in O$ is said to carry $\\log_2\\frac{1}{\\mathrm P(o)}$ shannons of information with respect to $\\mathrm P$. A different choice for the base of the logarithm corresponds to a different unit of information; see also [Converting between units of information](https://arbital.com/p/). For a full formalization, see [Information: Formalization](https://arbital.com/p/). For an understanding of why information is logarithmic, see [Information is logarithmic](https://arbital.com/p/). For a full understanding of why we call this quantity in particular \"information,\" see [Information: Intro](https://arbital.com/p/).", "date_published": "2016-05-31T14:32:54Z", "authors": ["Nate Soares"], "summaries": ["Information is a measure of how much a message grants an observer the ability to predict the world. Information is observer-dependent: If someone tells both of us your age, you'd learn nothing, while I'd learn how old you are. Information is measured in [shannons](https://arbital.com/p/shannon). If you are about to observe a coin that came up either \"heads\" or \"tails,\" one shannon is the difference between utter uncertainty about which way the coin came up, and total certainty that it came up \"heads.\" Information can be used to quantify both (a) how much uncertainty a person has; and (b) how much uncertainty a message resolves."], "tags": ["Start", "Work in progress"], "alias": "3xd"} {"id": "7ab5f686f107366419cf89069831f822", "title": "Data capacity", "url": "https://arbital.com/p/data_capacity", "source": "arbital", "source_type": "text", "text": "The data capacity of an object is defined to be the [https://arbital.com/p/-3nd](https://arbital.com/p/-3nd) of the number of different distinguishable states the object can be placed into. For example, a coin that can be placed heads or tails has a data capacity of $\\log(2)$ units of [https://arbital.com/p/-data](https://arbital.com/p/-data). 
The choice of [base](https://arbital.com/p/-logarithm_base) for the logarithm determines the unit of data; common units include the [https://arbital.com/p/-3p0](https://arbital.com/p/-3p0), [https://arbital.com/p/-nat](https://arbital.com/p/-nat), and [https://arbital.com/p/-3wn](https://arbital.com/p/-3wn) (corresponding to base 2, e, and 10). For example, the coin has a data capacity of $\log_2(2)=1$ bit, and a pair of dice (which can be placed into 36 distinguishable states) has a data capacity of $\log_2(36) \approx 5.17$ bits. Note that the data capacity of an object depends on the ability of an observer to distinguish its different states: If the coin is a penny, and I'm able to tell whether you placed the image of Abraham Lincoln facing North, South, West, or East (regardless of whether the coin is heads or tails), then that coin has a data capacity of $\log_2(8) = 3$ bits when used to transmit a message from you to me.\n\nThe data capacity of an object is closely related to the [channel capacity](https://arbital.com/p/channel_capacity) of a [communications channel](https://arbital.com/p/communications_channel). The difference is that the channel capacity is the amount of data the channel can transmit per unit time (measured, e.g., in bits per second), while the data capacity of an object is the amount of data that can be encoded by putting the object into a particular state (measured, e.g., in bits).\n\nWhy is data capacity defined as the logarithm of the number of states an object can be placed into? Intuitively, because a 2GB hard drive is supposed to carry twice as much data as a 1GB hard drive. More concretely, note that if you have $n$ copies of a physical object that can be placed into $b$ different states, then you can use those to encode $b^n$ different messages. For example, with three coins, you can encode eight different messages: HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT. The number of messages that the objects can encode grows exponentially with the number of copies. Thus, if we want to define a unit of message-carrying-capacity that grows linearly in the number of copies of an object (such that 3 coins hold 3x as much data as 1 coin, and a 9GB hard drive holds 9x as much data as a 1GB hard drive, and so on) then data [must](https://arbital.com/p/47k) grow logarithmically with the number of messages.\n\nThe data capacity of an object bounds the length of the message that you can send using that object. For example, it takes about 5 bits of data to encode a single letter A-Z, so if you want to transmit an 8-letter word to somebody, you need an object with a data capacity of $5 \cdot 8 = 40$ bits. In other words, if you have 40 coins in a coin jar on your desk, and if we worked out an [encoding scheme](https://arbital.com/p/encoding_scheme) (such as [ASCII](https://arbital.com/p/https://en.wikipedia.org/wiki/ASCII)) ahead of time, then you can tell me any 8-letter word using those coins.\n\nWhat does it mean to say that an object \"can\" be placed into different states, and what does it mean for those states to be \"distinguishable\"? [https://arbital.com/p/+3qq](https://arbital.com/p/+3qq) is largely agnostic about the answer to those questions. Rather, _given_ a set of states that you claim you could put an object into, which you claim I can distinguish, information theory can tell you how to use those objects in a clever way to send messages to me. 
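To make the capacity arithmetic above concrete, here is a short Python sketch (the helper name is made up for this example; it is not an Arbital or standard-library function):

```python
import math

def data_capacity_bits(distinguishable_states: int) -> float:
    """Data capacity in bits: log2 of the number of distinguishable states."""
    return math.log2(distinguishable_states)

print(data_capacity_bits(2))    # a coin: 1.0 bit
print(data_capacity_bits(36))   # a pair of dice: ~5.17 bits
print(data_capacity_bits(8))    # a penny with 2 faces x 4 orientations: 3.0 bits

# 40 coins (40 bits) are enough for any 8-letter word, one of 26 letters per position:
print(26 ** 8 <= 2 ** 40)       # True
```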
For more on what it means for two physical subsystems to communicate with each other by encoding messages into their environment, see [Communication](https://arbital.com/p/communication).\n\nNote that the amount of [https://arbital.com/p/-3xd](https://arbital.com/p/-3xd) carried by an object is not generally equal to the data capacity of the object. For example, say a trusted mutual friend of ours places a coin \"heads\" if your eyes are blue and \"tails\" otherwise. If I already know what color your eyes are, then the state of the coin doesn't carry any information for me. If instead I was very sure that your eyes are blue, but actually you've been wearing blue contact lenses and your eyes are really brown, then the coin may carry significantly more than 1 [https://arbital.com/p/-452](https://arbital.com/p/-452) of information for me. See also [https://arbital.com/p/+3xd](https://arbital.com/p/+3xd) and [Information is subjective](https://arbital.com/p/). The amount of information carried by an object to me (measured in [shannons](https://arbital.com/p/452)) may be either more or less than its data capacity — More if the message is very surprising to me, less if I already knew the message in advance. The relationship between the data capacity of an object and the amount of information it carries to me is that the maximum amount of information I can [expect](https://arbital.com/p/4b5) to gain by observing the object is equal to the data capacity. For more, see [Information and data capacity](https://arbital.com/p/).", "date_published": "2016-06-24T02:11:04Z", "authors": ["Nate Soares"], "summaries": [], "tags": [], "alias": "3y1"} {"id": "62640cbfcd912a26ee735002d27f61ff", "title": "Bit", "url": "https://arbital.com/p/bit", "source": "arbital", "source_type": "text", "text": "summary(set): An element of the set [$\\mathbb B$](https://arbital.com/p/boolean), which has two elements. These elements are sometimes called 0 and 1, or true and false, or yes and no. See [Bit ](https://arbital.com/p/3y3).\n\nsummary(data): A unit of [https://arbital.com/p/-data](https://arbital.com/p/-data), namely, the amount of data required to single out one message from a set of two, or, equivalently, the amount of data required to cut a set of possible messages in half. See [Bit of data](https://arbital.com/p/3p0).\n\nsummary(information): A unit of [information](https://arbital.com/p/subjective_information), namely, the difference in certainty between having no idea which way a coin is going to come up, and being entirely certain that it's going to come up heads. While this unit of information is colloquially known as a \"bit\" (for historical reasons), it is more properly known as a [https://arbital.com/p/452](https://arbital.com/p/452).\n\nsummary(evidence): A unit of [evidence](https://arbital.com/p/-bayesian_evidence), namely, a $2 : 1$ [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq) in favor of one outcome over another. See [Bit of evidence](https://arbital.com/p/evidence_bit) and [https://arbital.com/p/1zh](https://arbital.com/p/1zh).\n\nThe term \"bit\" is [https://arbital.com/p/-overloaded](https://arbital.com/p/-overloaded). It can mean any of the following:\n\n1. An element of the set [$\\mathbb B$](https://arbital.com/p/boolean), which has two elements. These elements are sometimes called 0 and 1, or true and false, or yes and no. See [Bit ](https://arbital.com/p/3y3).\n2. 
A unit of [https://arbital.com/p/data](https://arbital.com/p/data), namely, the amount of data required to single out one message from a set of two, or, equivalently, the amount of data required to cut a set of possible messages in half. See [Bit of data](https://arbital.com/p/3p0).\n3. A unit of [](https://arbital.com/p/subjective_information), namely, the difference in certainty between having no idea which way a coin is going to come up, and being entirely certain that it's going to come up heads. While this unit of information is colloquially known as a \"bit\" (for historical reasons), it is more properly known as a [https://arbital.com/p/452](https://arbital.com/p/452).\n4. A unit of [evidence](https://arbital.com/p/bayesian_evidence), namely, a $2 : 1$ [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq) in favor of one outcome over another. See [Bit of evidence](https://arbital.com/p/evidence_bit) and/or [https://arbital.com/p/1zh](https://arbital.com/p/1zh).\n\nThe common theme across all the uses listed above is the number 2. An abstract bit is one of two values. A bit of data is a portion of a representation of a message that cuts the set of possible messages in half (i.e., the $\\log_2$ of the number of possible messages). A bit of information (aka a [https://arbital.com/p/452](https://arbital.com/p/452)) is the amount of information in the answer to a yes-or-no question about which the observer was maximally uncertain (i.e., the $\\log_2$ of a probability). A bit of evidence in favor of A over B is an observation which provides twice as much support for A as it does for B (i.e., the $\\log_2$ of a [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq)). Thus, if you see someone using bits as units (a bit _of data,_ or a bit _of evidence,_ etc.) you can bet that they took a $\\log_2$ of something somewhere along the way.\n\nUnfortunately, abstract bits break this pattern, so if you see someone talking about \"bits\" without disambiguating what sort of bit they mean, the most you can be sure of is that they're talking about something that has to do with the number 2. Unless they're just using the word \"bit\" to mean \"small piece,\" in which case you're in a bit of trouble.", "date_published": "2016-06-24T01:39:49Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["The term \"bit\" refers to different concepts in different fields. The common theme across all the uses is the number 2. Usually, if you see something measured in bits (such as a bit of data, a bit of evidence, or a bit of information) it means that someone is counting the number of 2-factors in something (the number of possibilities, a likelihood ratio, a probability) using [$\\log_2$](https://arbital.com/p/3nd)."], "tags": ["B-Class", "Disambiguation"], "alias": "3y2"} {"id": "3e2ea43ddf38099d778e164a30606dfd", "title": "Bit (abstract)", "url": "https://arbital.com/p/abstract_bit", "source": "arbital", "source_type": "text", "text": "An abstract bit is an element of the set [$\\mathbb B$](https://arbital.com/p/boolean), which has two elements. An abstract bit is to $\\mathbb B$ as a number is to [$\\mathbb N$](https://arbital.com/p/natural_number). The set $\\mathbb N$ contains a bunch of \"numbers\", and these numbers sometimes have many different labels, such as \"three\" and \"3\". Similarly, the set $\\mathbb B$ contains a pair of \"bits\", which sometimes have many different labels. The abstract bits are sometimes labeled 0 and 1; true and false; yes and no; or + and -. 
The labels don't matter; the point is that the set $\mathbb B$ contains two elements, and we call those elements (abstract) bits.", "date_published": "2016-05-31T04:04:28Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "3y3"} {"id": "95ff05b49fdb6d609fd8ecd1b614eeea", "title": "Compressing multiple messages", "url": "https://arbital.com/p/multiple_compression", "source": "arbital", "source_type": "text", "text": "How many [bits of data](https://arbital.com/p/3p0) does it take to encode an [$n$-message](https://arbital.com/p/3v9)? Naively, the answer is $\lceil \log_2(n) \rceil$ ([why?](https://arbital.com/p/n_message_bit_length)): For example, it takes 5 bits to encode a 21-message, because 4 bits are only enough to encode 16 different messages, but 5 bits are enough to encode 32. The use of the [https://arbital.com/p/3vc](https://arbital.com/p/3vc) function implies an inefficiency: 2 bits are required to encode a 3-message, but 2 bits are enough to distinguish between four different possibilities. One of those possibilities is being wasted. That inefficiency can be reduced by encoding multiple $n$-messages at the same time. For example, while an individual 3-message requires 2 bits to encode, a series of 10 3-messages requires at most 16 bits to encode: $3^{10} < 2^{16}.$\n\nWhy is it that encoding ten 3-messages together (using bits) is cheaper than encoding ten 3-messages separately? Naively, there are three different factors that allow the combined encoding to be shorter than the sum of the separate encodings: The messages could have different likelihoods ([allowing the combined message to be compressed in expectation](https://arbital.com/p/expected_compression)); the messages could be dependent on each other ([meaning they can be compressed](https://arbital.com/p/compressing_dependent_messages)); and the mismatch between bits and 3-messages gets washed out as we put more three-messages together (see [https://arbital.com/p/+3wv](https://arbital.com/p/+3wv)).\n\nIn fact, the first two factors are equivalent: 10 3-messages are equivalent to one $3^{10}$-message, and in general, [$k$ $n$-messages are equivalent to one $n^k$-message](https://arbital.com/p/n_k_messages). If the individual $n$-messages are dependent on each other, then different $n^k$-messages have different likelihoods: For example, if message 3 never follows message 2, then in the combined message, \"32\" never appears as a substring.\n\nThus, there are two different ways that an encoding of $k$ $n$-messages can be shorter than $k$ times the encoding of an $n$-message: The various combined messages can have different likelihoods, and the efficiency of the coding might increase. To study the effect of different likelihoods on the encoding length in isolation, we can [assume that the codings are maximally efficient](https://arbital.com/p/assume_maximum_efficiency) and see how much additional [https://arbital.com/p/compression](https://arbital.com/p/compression) the different likelihoods get us. To study code efficiency in isolation, we can [assume each message is equally likely](https://arbital.com/p/assume_equal_likelihood_messages) and see how much additional compression we get as we put more $n$-messages together. 
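A quick sanity check of the numbers above, in Python (an illustrative sketch, not part of the original page):

```python
import math

n, k = 3, 10  # ten separate 3-messages

bits_separately = k * math.ceil(math.log2(n))   # 2 bits per 3-message:  20 bits
bits_together = math.ceil(math.log2(n ** k))    # one 3^10-message:      16 bits
print(bits_separately, bits_together)           # 20 16
```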
In practice, real compression involves using both techniques at once.", "date_published": "2016-06-02T23:08:29Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": [], "tags": ["Work in progress"], "alias": "3zg"} {"id": "465a2276cb72aa0895abbba5de5817d9", "title": "Communication: magician example", "url": "https://arbital.com/p/magician_message", "source": "arbital", "source_type": "text", "text": "Imagine that you and I are both magicians, performing a trick where I think of a card from a deck of cards and then leave the room, at which point you enter the room and have to guess which card I picked. To perform this trick, I leave some [https://arbital.com/p/3xd](https://arbital.com/p/3xd) in the room that allows you to figure out which card I picked. I do this by [https://arbital.com/p/encoding](https://arbital.com/p/encoding) a [https://arbital.com/p/message](https://arbital.com/p/message) into the environment (using objects that can carry [https://arbital.com/p/data](https://arbital.com/p/data)). Let's say that our props table includes (among other things) a coin and two dice (one red, one blue). Assume I have the ability to (without the audience noticing) place the coin and the two dice however I like before leaving the room. Those three objects have a [https://arbital.com/p/3y1](https://arbital.com/p/3y1) of $\log_2(2 \times 6 \times 6) \approx 6.17$ bits. I can use those objects to [represent](https://arbital.com/p/physical_encoding) the encoding of the message I want to send to you.\n\nThe messages that I want to send are messages saying which card I'm thinking of. Example messages include the message saying that I'm thinking of the ace of spades, or the message saying that I'm thinking of the king of hearts.\n\nThe usual way that humans encode messages like these is to use physical objects like pressure waves in air or pixels on a screen or ink on paper arranged to look like the following patterns: \"my card is the ace of spades\" or \"my card is the king of hearts.\" ([Don't confuse the message with the encoding.](https://arbital.com/p/))\n\nThose sentences are pretty long, though: 28 and 29 characters (counting spaces), respectively. A coin and two dice can't carry 29 characters. Fortunately, we don't need all those characters: I can easily encode a message about which card I'm thinking of using only two characters, a rank and a suit, such as $A♠$ or $K♡,$ and so on. However, the coin and dice don't naturally carry rank and suit symbols on them.\n\nStill, there are only 52 possible messages I might want to encode, and the coin and dice can be put into $2 \cdot 6 \cdot 6 = 72$ different configurations. Thus, if we're clever, and [choose a decoding scheme ahead of time](https://arbital.com/p/), then I can set those objects in such a way that when you see them, you know which card I'm thinking of. Let's say we've agreed on the following [decoding scheme](https://arbital.com/p/):\n\n1. If the coin is heads, the card is red. If the coin is tails, the card is black.\n2. If the red die is a 1, 2, or 3 then the suit is either spades or hearts. Otherwise, the suit is either clubs or diamonds.\n3. If the red die is a 1 or 4, the card is either A, 2, 3, 4, 5, or 6. If the red die is a 2 or 5, the card is either 7, 8, 9, 10, J, or Q. If the red die is a 3 or a 6, the card is a K.\n4. 
If the blue die is 1, the card is either A or 7 or K; if the blue die is 2, the card is either 2 or 8 or K; If the blue die is 3, the card is either 3 or 9 or K; and so on.\n\nWhen you walk in the room, you can glance at the coin and the two dice and know which card I picked. If you see that the coin is tails-up, the red die is 1-up, and the blue die is 2-up, then you know that the card is black, either spades or hearts, between A and 6, and either 2 or 8 — in other words, the card is the 2 of spades.", "date_published": "2016-06-02T19:42:25Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Start", "Work in progress"], "alias": "3zh"} {"id": "09b3885e5815d9a891be846ee7266e91", "title": "Encoding trits with GalCom bits", "url": "https://arbital.com/p/trits_with_galcom_bits", "source": "arbital", "source_type": "text", "text": "There are [$\\log_2](https://arbital.com/p/3nd) [bits](https://arbital.com/p/3p0) to a [https://arbital.com/p/3ww](https://arbital.com/p/3ww). Why is it that particular value? Consider the [https://arbital.com/p/+3tz](https://arbital.com/p/+3tz) thought experiment, in which you make money by sending information about the stock market across the celestial reaches. Let's say that Company X is about to publish a big earnings report, and that you want to send a message to your stock trader in the Vega star system that says one of three things: \"Performance was in the top 1/3 of what I expected,\" \"Performance was in the middle 1/3 of what I expected,\" or \"Performance was in the bottom 1/3 of what I expected.\" You obviously don't want to send whole sentences; it suffices to transmit a single [https://arbital.com/p/3ww](https://arbital.com/p/3ww) of information.\n\nThat is, if you had control of a communication channel that could hold one of three signals (say, a wire that could be positively charged, negatively charged, or uncharged) which was connected to your computer on Vega, then you could program the computer to recognize a positive charge as \"top 1/3 performance\", a neutral charge as \"middle 1/3 performance\", and a negative charge as \"bottom 1/3 performance.\" You could then program it to buy, hold, or sell respectively, and make your money.\n\nUnfortunately, you don't control a wire connected to your computer on Vega. The computer on Vega is in a different star system, and the only way to communicate with it is through GalCom, the galactic communications network. And GalCom doesn't sell trits (which let you transmit one of three messages) it sells [bits](https://arbital.com/p/3y2) (which let you transmit one of two messages).\n\nSo, how many bits on GalCom will it cost you to send your trit?\n\n(You're invited to pause and find some method for encoding a trit using bits.)\n\n
\n
\n
\n
\n
\n\nYou can easily transmit a trit via two bits. If you reserve two bits on GalCom, you get to control two of the pulses of light on the sender, by choosing whether they are present or absent. This affects whether the receiver writes down 0 or 1, and those two binary digits will be sent to your machine on Vega. Thus, if you purchase two bits on GalCom, you can send one of four messages to your machine on Vega: 00, 01, 10, or 11. This is more than enough to encode a trit: You can program the machine such that 00 causes it to sell, 01 causes it to hold, and 10 causes it to buy. Then you have one code left over, namely 11, that you'll never send.\n\n![One trit encoded, inefficiently, using 2 bits](http://i.imgur.com/fKPRo2s.png)\n\n_To encode a trit using bits, you could purchase two bits and then make use of only 3 of the 4 available codes._\n\nHowever, this is inefficient: You purchased one code (11) that you are never using. Is there any way to repurpose this code to make some of your money back?\n\n(You are invited to pause and find some method of encoding a trit using two bits such that, sometimes, you're able to sell one bit back to GalCom and recover some of your losses.)\n\n
\n
\n
\n
\n
\n\nHere's one thing you could do: Program your machine on Vega such that it recognizes both 10 and 11 as \"top 1/3 performance.\"\n\n![Encoding a trit using two bits, such that 1/3 of the time you can sell one bit back to GalCom.](http://i.imgur.com/AF49bHM.png)\n\nThen, by the time you've transmitted a 1, the machine on Vega already knows to start purchasing stock: It doesn't need the next bit. This means that it doesn't matter (to you) whether you send 10 or 11; these are two different ways of representing the same message. Thus, if Company X has performance in the top 1/3 of your expectations, you can sell one bit back to GalCom: Send 10 if the next bit they were going to send was a 0, and 11 if the next bit they were going to send was a 1. You've just shaved one bit off of the next message that was going to come through GalCom!\n\n(The above exposition implies that you need to know the first bit of the next message, and that your computer on Vega needs to be set up to forward the second bit to where it needs to go. In practice, this sort of thing would be standard operating procedure on GalCom, which could use [prefix free codes](https://arbital.com/p/) to avoid the need for seeing/forwarding bits of following messages.)\n\nIf you encode your trit this way, then you reserve 2 bits in advance, and 1/3 of the time you get to sell one back. Thus, if you send lots and lots of trits using this encoding, you only pay $2 - \\frac{1}{3} \\approx 1.67$ galcoins per trit on average (assuming, [as we should](https://arbital.com/p/assume_independent_equally_likely_messages), that each message occurs with equal frequency).\n\nCan we do better? Yes. (TODO)", "date_published": "2016-06-02T20:31:13Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Start", "Work in progress"], "alias": "3zn"} {"id": "e1633ab98749ffa3c84fedbd3d9ba393", "title": "Logarithm tutorial overview", "url": "https://arbital.com/p/log_tutorial_overview", "source": "arbital", "source_type": "text", "text": "The following topics are covered in Arbital's [introductory guide to logarithms](https://arbital.com/p/3wj):\n\n## 1. Definition of the logarithm\n\nWhat is a logarithm function, anyway? You may have been told that it's a function with a graph that looks like this:\n\n![](https://upload.wikimedia.org/wikipedia/commons/thumb/1/17/Binary_logarithm_plot_with_ticks.svg/408px-Binary_logarithm_plot_with_ticks.svg.png \"By Krishnavedala. Own work, CC BY-SA 3.0, goo.gl/VzKMzu\")\n\nwhich is true, but not the whole story. This tutorial begins by asking: [https://arbital.com/p/40j](https://arbital.com/p/40j).\n\n## 2. Log as length\n\nLogarithms measure how long a number is, for a specific notion of \"length\" where fractional lengths are allowed. I don't know what $\\log_{10}(\\text{2,310,426})$ is, but I can immediately tell you that it's between 6 and 7, because it takes 7 digits to write 2,310,426:\n\n$$\\underbrace{\\text{2,310,426}}_\\text{7 digits}$$\n\nWhy is it $\\log_{10}$ that measures length? What does it mean for a measure of \"length\" to be fractional? Why do logarithms say that it is 316 (rather than 500) that is 2.5 digits long? These questions and others are answered in [https://arbital.com/p/416](https://arbital.com/p/416).\n\n## 3. Logs measure data\n\nLogarithms measure how hard it is to represent messages using physical objects in the world. Let's say that we're magicians, and I want to tell you what card I'm thinking of by using sleight-of-hand to arrange some dice. 
How many 6-sided dice do I need to set (without the audience noticing) to send you a message that tells you which card I'm thinking of, assuming there are 52 possibilities? The lower bound is $\\log_6(52).$ Why? The answer reveals a link between logarithms and the \"data capacity\" of physical objects.\n\nThis idea is explored across three pages: [https://arbital.com/p/427](https://arbital.com/p/427), [https://arbital.com/p/44l](https://arbital.com/p/44l), and [https://arbital.com/p/45q](https://arbital.com/p/45q).\n\n## 4. The characteristic of the logarithm\n\n Any time you see a function $f$ such that $f(x \\cdot y) = f(x) + f(y)$ for all $x, y \\in$ [$\\mathbb R^+$](https://arbital.com/p/positive_reals), $f$ is a logarithm — more or less. [https://arbital.com/p/47k](https://arbital.com/p/47k) describes the main idea, and the optional [https://arbital.com/p/4bz](https://arbital.com/p/4bz) page takes you through the proofs and discusses some technicalities.\n\n## 5. There is only one logarithm\n\nWhile there are many logarithm functions (one for each positive number except 1), there is a sense in which they're all doing exactly the same thing: Tapping into an intricate \"[logarithm lattice](https://arbital.com/p/4gp)\".\n\n## 6. Life in log-space\n\nEmpirically, logarithms have proved quite a useful tool for people and machines who have to do lots and lots of multiplications. Scientists and engineers used to use giant pre-computed tables of the logs of common numbers, and use those to make their calculations. Today, many modern learning algorithms (such as [AlphaGo](https://arbital.com/p/https://en.wikipedia.org/wiki/AlphaGo)) manipulate the logs of probabilities instead of manipulating probabilities directly. Why? This tutorial concludes by exploring when and why it is easier to deal with the logarithms of things than it is to deal with the things themselves, on the page [https://arbital.com/p/4h0](https://arbital.com/p/4h0).\n\n
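As a small illustration of why log-space is convenient (a generic sketch; nothing here is specific to AlphaGo or to any particular system): multiplying many small probabilities directly can underflow to zero, while adding their logarithms stays well-behaved.

```python
import math

probs = [1e-5] * 100                            # 100 independent unlikely events

direct = math.prod(probs)                       # underflows in double precision
log_total = sum(math.log10(p) for p in probs)   # just adds 100 numbers

print(direct)      # 0.0
print(log_total)   # approximately -500, still perfectly representable
```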
\n\n---\n\n
\n\nThese six concepts are nowhere near all there is to say on the topic of logarithms. Logarithms have applications to many domains, including physics, computer science, calculus, number theory, and psychology. Logarithms have many interesting properties, such as nice [derivatives](https://arbital.com/p/47d), nice [integrals](https://arbital.com/p/integral_calculus), and interesting [approximation algorithms](https://arbital.com/p/calculating_logs). One of the bases of the logarithm, $\\log_e,$ is \"special.\" The logarithm gets [quite a bit more interesting](https://arbital.com/p/complex_log) when extended to the [complex plane](https://arbital.com/p/complex_plane). Those are all more advanced topics, which aren't covered in this tutorial. If you want to learn more about those sorts of topics, see Arbital's [advanced logarithm tutorial](https://arbital.com/p/advanced_log_tutorial).\n\nThis guide focuses on building a solid intuition for what the logarithm does, and why it has its most important properties.", "date_published": "2016-07-01T14:06:07Z", "authors": ["Nate Soares", "Patrick Stevens"], "summaries": ["The logarithm tutorial covers the following six subjects:\n\n1. What are logarithms?\n2. Logarithms as a measure of length.\n3. Logarithms as a measure of data and information.\n4. The main charactristics and properties of logarithm functions.\n4. All logarithm functions are the same (up to a multiplicative constant)\n5. Why do logarithms make some calculations more efficient?"], "tags": ["Work in progress"], "alias": "3zv"} {"id": "6434e98f9c495d12c928e0814fbdcf08", "title": "What is a logarithm?", "url": "https://arbital.com/p/log_definition", "source": "arbital", "source_type": "text", "text": "Logarithms are a group of [functions](https://arbital.com/p/3jy) that map numbers to numbers. There is a logarithm function for every positive number $b$ [except 1](https://arbital.com/p/41m), and the logarithm function for $b$ (called \"the logarithm base $b$\" or just \"log base $b$\") takes an input $x$ and counts how many times you have to multiply $1$ by $b$ to get $x$.\n\nFor example, the log base 10 of 1000 is 3, because you have to multiply 1 by 10 three times to get 1000, and the log base 2 of 16 is 4, because you have to multiply 1 by 2 four times to get 16. The log base $b$ of $x$ is written $\\log_b(x).$\n\nThe logarithm function grows slowly. For example, $\\log_{10}(x)$ is less than 1 until $x$ reaches 10, and it's less than 2 until $x$ reaches 100, and it's less than 6 until $x$ reaches 1,000,000 and so on. You've got to put a lot of oomph (specifically, a whole factor of 10) into the input in order to make the output go up by 1. Here's a graph of $\\log_{10}(x)$ inching its way towards a measly 2 as $x$ makes it all the way out to 100:\n\n![log10](http://i.imgur.com/UhUBlTt.png)\n\n[Make this graph interactive.](https://arbital.com/p/fixme:)\n\nLogarithms may grow slowly, but they never stop growing: For every number $n$, no matter how large, there is an input $x$ such that $\\log_{10}(x) > n.$\n\n## Examples\n\n1. $\\log_{10}(10000) = 4,$ because you have to multiply 1 by 10 four times to get ten thousand: $1 \\cdot 10 \\cdot 10 \\cdot 10 \\cdot 10 = 10000.$\n3. $\\log_2(8) = 3,$ because $1 \\cdot 2 \\cdot 2 \\cdot 2 = 8.$\n4. $\\log_3(9) = 2,$ because $1 \\cdot 3 \\cdot 3 = 9.$\n5. $\\log_{b}(1) = 0$ for any $b$, because no matter what $b$ is, you don't need to multiply 1 by $b$ any times to get 1.\n6. 
$\\log_{b}(b) = 1$ for any $b$, because no matter what $b$ is, if you multiply 1 by $b$ once, you get $b.$\n7. $\\log_{1.5}(3.375) = 3,$ because $1 \\cdot 1.5 \\cdot 1.5 \\cdot 1.5 = 3.375.$\n\nQuestion: What's $\\log_3(27)$?\n\n%%hidden(Answer):\nAnswer: 3, because $1 \\cdot 3 \\cdot 3 \\cdot 3 = 27$.\n%%\n\nQuestion: What's $\\log_4(16)$?\n\n%%hidden(Answer):\nAnswer: 2, because $1 \\cdot 4 \\cdot 4 = 16$.\n%%\n\nQuestion: What's $\\log_{10}(\\text{1,000,000})$?\n\n%%hidden(Answer):\nAnswer: 6, because $1 \\cdot 10 \\cdot 10 \\cdot 10 \\cdot 10 \\cdot 10 \\cdot 10 = 1,000,000$.\n%%\n\nAll of the examples above are cases where the answer is a whole number, because $x$ is a [https://arbital.com/p/-power](https://arbital.com/p/-power) of $b.$ Logarithms also give answers in cases where the answer is not a whole number. For example, $\\log_{10}(500) \\approx 2.7$.\n\nWhat's _that_ supposed to mean? That if you multiply 1 by 10 about 2.7 times, you get 500?\n\nYes, precisely! If you multiply 1 by ten once, and multiply the result by ten another time, and then multiply that by ten 0.7 times, the result is roughly 500. What does it mean to multiply a number by ten 0.7 times? Well, is there a number $x$ such that if we multiply 1 by $x$ ten times, it's the same as multiplying 1 by 10 seven times? Yes! That number is roughly 5:\n\n$$\\underbrace{5 \\cdot 5 \\cdot \\ldots 5}_\\text{10 times} \\approx \\underbrace{10 \\cdot 10 \\cdot \\ldots 10}_\\text{7 times}$$\n\nMultiplying by 5 (ten times) is about the same as multiplying by 10 (seven times), so multiplying by five (once) is about the same as multiplying by 10 (seven-tenths of a time). Thus, multiplying 1 by ten 2.7 times means multiplying it by 10 twice, then multiplying it by approximately 5, for a result of approximately 500.\n\nThis is why $\\log_{10}(500) \\approx 2.7$: that's how many times you need to multiply 1 by 10 to get 500.\n\nIf that answer isn't satisfactory yet, don't worry: Throughout the tutorial, we'll explore both (a) what this means, and (b) why this is the \"right way\" to multiply a number by $b$ fractionally-many times.\n\nYou might be saying to yourself, \"Ok, I can see how 500 is kinda sorta one times ten 2.7 times, but how do you _figure that out?_ As far as I can tell, you pulled that \"$5^{10}$ is approximately $10^7$\" thing out of thin air. Where did that come from? How is the logarithm actually calculated?\"\n\nThe annoying answer is that I calculated $\\log_{10}(500)$ by typing \"log10(500)\" into [https://arbital.com/p/http://www.wolframalpha.com/input/?i=log10](https://arbital.com/p/http://www.wolframalpha.com/input/?i=log10). In more seriousness, in the coming posts, you'll learn how to find things like $\\log_{10}(500)$ on your own.\n\nThat said, the outputs of logarithms are generally difficult to find, but useful once you've found them. Historically, scientists and engineers would pay money for giant tables of pre-computed logarithm values, because those sped up their practical calculations considerably. Why are the outputs of logarithm functions so convenient to use? That question will be answered later in this tutorial.\n\nYou might also be saying to yourself, \"ok, I buy that there's a way to see 500 as '1 times 10, 2.7 times,' but what does that _mean?_ What's the point of looking at 500 that way?\"\n\nIt's hard to give a short answer to that question, but only because there are so many good answers. 
One answer goes something like, \"if you think of 500 as 2.7 tens, and you think of 8000 as 3.9 tens, then you can multiply the numbers on the left by adding the representations on the right\". For example, $500 \\cdot 8000$ is approximately $2.7 + 3.9 = 6.6$ tens.\n\nBut that's getting ahead of ourselves a bit. The most intuitive interpretation to start with is probably this one: Logarithms are measuring how \"long\" a number is, in terms of how many digits it takes to write that number down, for a generalized notion of \"length.\"\n\nIt may seem that numbers always take a whole number of digits to write down: 139 and 931 are both written using three digits ('9', '3', and '1'). However, as we will see, there's a sense in which 139 is barely using its third digit, while 931 is using all three of its digits to almost their full extent. Logarithms quantify this intuition, and the precise answer that they give reveals some interesting facts about how many digits it costs to write a given number down.", "date_published": "2016-09-14T23:31:20Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Abdikarim Aweis Abo", "Nate Soares", "Manoj Neelapu", "Eric Bruylant", "Jeremy Schlatter", "Joe Zeng"], "summaries": ["Logarithms are a group of [functions](https://arbital.com/p/3jy) that take a number as input and produce another number. There is a logarithm function for every positive number $b$ [except 1](https://arbital.com/p/41m), and the logarithm function for $b$ (called \"the logarithm base $b$\" or just \"log base $b$\") takes an input $x$ and counts how many times you have to multiply $1$ by $b$ to get $x$. For example, the log base 10 of 1000 is 3, because you have to multiply 1 by 10 three times to get 1000. The log base $b$ of $x$ is written $\\log_b(x).$"], "tags": ["B-Class"], "alias": "40j"} {"id": "0bcb23d00e47abab6242c85a272ca8bd", "title": "Log as generalized length", "url": "https://arbital.com/p/log_as_length", "source": "arbital", "source_type": "text", "text": "Here are a handful of examples of how the [https://arbital.com/p/-3nd](https://arbital.com/p/-3nd) base 10 behaves. Can you spot the pattern?\n$$\n\\begin{align}\n\\log_{10}(2) &\\ \\approx 0.30 \\\\\n\\log_{10}(7) &\\ \\approx 0.85 \\\\\n\\log_{10}(22) &\\ \\approx 1.34 \\\\\n\\log_{10}(70) &\\ \\approx 1.85 \\\\\n\\log_{10}(139) &\\ \\approx 2.14 \\\\\n\\log_{10}(316) &\\ \\approx 2.50 \\\\\n\\log_{10}(123456) &\\ \\approx 5.09 \\\\\n\\log_{10}(654321) &\\ \\approx 5.82 \\\\\n\\log_{10}(123456789) &\\ \\approx 8.09 \\\\\n\\log_{10}(\\underbrace{987654321}_\\text{9 digits}) &\\ \\approx 8.99\n\\end{align}\n$$\n\nEvery time the input gets one digit longer, the output goes up by one. In other words, the output of the logarithm is roughly the length — measured in digits — of the input. ([Why?](https://arbital.com/p/437))\n\nWhy is it the log base 10 (rather than, say, the log base 2) that roughly measures the length of a number? Because numbers are normally represented in [https://arbital.com/p/-4sl](https://arbital.com/p/-4sl), where each new digit lets you write down ten times as many numbers. The logarithm base 2 would measure the length of a number if each digit only gave you the ability to write down twice as many numbers. 
In other words, the log base 2 of a number is roughly the length of that number when it's represented in [https://arbital.com/p/-56q](https://arbital.com/p/-56q) (where $13$ is written $\texttt{1101}$ and so on):\n\n$$\n\begin{align}\n\log_2(3) = \log_2(\texttt{11}) &\ \approx 1.58 \\\n\log_2(7) = \log_2(\texttt{111}) &\ \approx 2.81 \\\n\log_2(13) = \log_2(\texttt{1101}) &\ \approx 3.70 \\\n\log_2(22) = \log_2(\texttt{10110}) &\ \approx 4.46 \\\n\log_2(70) = \log_2(\texttt{1000110}) &\ \approx 6.13 \\\n\log_2(139) = \log_2(\texttt{10001011}) &\ \approx 7.12 \\\n\log_2(316) = \log_2(\texttt{100111100}) &\ \approx 8.30 \\\n\log_2(1000) = \log_2(\underbrace{\texttt{1111101000}}_\text{10 digits}) &\ \approx 9.97\n\end{align}\n$$\n\nIf you aren't familiar with the idea of numbers represented in other number bases besides 10, and you want to learn more, see the [https://arbital.com/p/-number_base_tutorial](https://arbital.com/p/-number_base_tutorial).\n\nHere's an interactive visualization which shows the link between the length of a number expressed in base $b$, and the logarithm base $b$ of that number:\n\n[https://arbital.com/p/visualization](https://arbital.com/p/visualization)\n\nAs you can see, if $b$ is an [https://arbital.com/p/-48l](https://arbital.com/p/-48l) greater than 1, then the logarithm base $b$ of $x$ is pretty close to the number of digits it takes to write $x$ in base $b.$\n\nPretty close, but not exactly. The most obvious difference is that the outputs of logarithms generally have a fractional portion: the logarithm of $x$ always falls a little short of the length of $x.$ This is because, insofar as logarithms act like the \"length\" function, they generalize the notion of length, making it [continuous](https://arbital.com/p/continuity).\n\nWhat does this fractional portion mean? Roughly speaking, logarithms measure not only how long a number is, but also how much that number is _really_ using its digits. 12 and 97 are both two-digit numbers, but intuitively, 12 is \"barely\" two digits long, whereas 97 is \"nearly\" three digits. Logarithms formalize this intuition, and tell us that 12 is really only using about 1.08 digits, while 97 is using about 1.99.\n\nWhere are these fractions coming from? Also, looking at the examples above, notice that $\log_{10}(316) \approx 2.5.$ Why is it 316, rather than 500, that logarithms claim is \"2.5 digits long\"? What would it even _mean_ for a number to be 2.5 digits long? It very clearly takes 3 digits to write down \"316,\" namely, '3', '1', and '6'. What would it mean for a number to use \"half a digit\"?\n\nWell, here's one way to approach the notion of a \"partial digit.\" Let's say that you work in a warehouse recording data using digit wheels like they used to have on old desktop computers.\n\n![A digit wheel](http://www.cl.cam.ac.uk/~djg11/howcomputerswork/mechanical-counter3.jpg)\n\nLet's say that one of your digit wheels is broken, and can't hold numbers greater than 4 — every notch 5-9 has been stripped off, so if you try to set it to a number between 5 and 9, it just slips down to 4. Let's call the resulting digit a [5-digit](https://arbital.com/p/4sj), because it can still be stably placed into 5 different states (0-4). We could easily call this 5-digit a \"partial 10-digit.\"\n\nThe question is, how much of a partial 10-digit is it? Is it half a 10-digit, because it can store 5 out of 10 values that a \"full 10-digit\" can store? 
That would be a fine way to measure fractional digits, but it's not the method used by logarithms. Why? Well, consider a scenario where you have to record lots and lots of numbers on these digits (such that you can tell someone how to read off the right data later), and let's say also that you have to pay me one dollar for every digit that you use. Now let's say that I only charge you 50¢ per 5-digit. Then you should do all your work in 5-digits! Why? Because two 5-digits can be used to store 25 different values (00, 01, 02, 03, 04, 10, 11, ..., 44) for \$1, which is way more data-stored-per-dollar than you would have gotten by buying a 10-digit.%note:You may be wondering, are two 5-digits really worth more than one 10-digit? Sure, you can place them in 25 different configurations, but how do you encode \"9\" when none of the digits have a \"9\" symbol written on them? If so, see [The symbols don't matter](https://arbital.com/p/).%\n\nIn other words, there's a natural exchange rate between $n$-digits, and a 5-digit is worth more than half a 10-digit. (The actual price you'd be willing to pay is a bit short of 70¢ per 5-digit, for reasons that we'll explore shortly). A 4-digit is also worth a bit more than half a 10-digit (two 4-digits let you store 16 different numbers), and a 3-digit is worth a bit less than half a 10-digit (two 3-digits let you store only 9 different numbers).\n\nWe now begin to see what the fractional answer that comes out of a logarithm actually means (and why 316 is closer to 2.5 digits long than 500 is). The logarithm base 10 of $x$ is not answering \"how many 10-digits does it take to store $x$?\" It's answering \"how many digits-of-various-kinds does it take to store $x$, where as many digits as possible are 10-digits; and how big does the final digit have to be?\" The fractional portion of the output describes how large the final digit has to be, using this natural exchange rate between digits of different sizes.\n\nFor example, the number 200 can be stored using only two 10-digits and one 2-digit. $\log_{10}(200) \approx 2.301,$ and a 2-digit is worth about 0.301 10-digits. In fact, a 2-digit is worth _exactly_ $(\log_{10}(200) - 2)$ 10-digits. As another example, $\log_{10}(500) \approx 2.7$ means \"to record 500, you need two 10-digits, and also a digit worth at least $\approx$70¢\", i.e., two 10-digits and a 5-digit.\n\nThis raises a number of additional questions:\n\n---\n\n__Question:__ Wait, there _is_ no digit that's worth 50¢. As you said, a 3-digit is worth less than half a 10-digit (because two 3-digits can only store 9 things), and a 4-digit is worth more than half a 10-digit (because two 4-digits store 16 things). If $\log_{10}(316) \approx 2.5$ means \"you need two 10-digits and a digit worth at least 50¢,\" then why not just have the $\log_{10}$ of everything between 301 and 400 be 2.60? They're all going to need two 10-digits and a 4-digit, aren't they?\n\n__Answer:__ The natural exchange rate between digits is actually _way more interesting_ than it first appears. If you're trying to store either \"301\" or \"400\", and you start with two 10-digits, then you have to purchase a 4-digit in both cases. But if you start with a 10-digit and an 8-digit, then the digit you need to buy is different in the two cases. In the \"301\" case you can still make do with a 4-digit, because the 10, 8, and 4-digits together give you the ability to store any number up to $10\cdot 8\cdot 4 = 320$. 
But in the \"400\" case you now need to purchase a 5-digit instead, because the 10, 8, and 4 digits together aren't enough. The logarithm of a number tells you about _every_ combination of $n$-digits that would work to encode the number (and more!). This is an idea that we'll explore over the next few pages, and it will lead us to a much better understanding of logarithms.\n\n---\n\n__Question:__ Hold on, where did the 2.60 number come from above? How did you know that a 5-digit costs 70¢? How are you calculating these exchange rates, and what do they mean?\n\n__Answer:__ Good question. In [https://arbital.com/p/427](https://arbital.com/p/427), we'll explore what the natural exchange rate between digits is, and why.\n\n---\n\n__Question:__ $\log_{10}(100)=2,$ but clearly, 100 is 3 digits long. In fact, $\log_b(b^k)=k$ for any integers $b$ and $k$, but $k+1$ digits are required to represent $b^k$ in base $b$ (as a one followed by $k$ zeroes). Why is the logarithm making these off-by-one errors?\n\n__Answer:__ Secretly, the logarithm of $x$ isn't answering the question \"how hard is it to write $x$ down?\", it's answering something more like \"how many digits does it take to record a whole number less than $x$?\" In other words, the $\log_{10}$ of 100 is the number of 10-digits you need to be able to name any one of a hundred numbers, and that's two digits (which can hold anything from 00 to 99).\n\n---\n\n__Question:__ Wait, but what about when the _input_ has a fractional portion? How long is the number 100.87? And also, $\log_{10}(100.87249072)$ is just a hair higher than 2, but 100.87249072 is way harder to write down than 100. How can you say that their \"lengths\" are almost the same?\n\n__Answer:__ Great questions! The length interpretation on its own doesn't shed any light on how logarithm functions handle fractional inputs. We'll soon develop a second interpretation of logarithms which does explain the behavior on fractional inputs, but we aren't there yet.\n\nMeanwhile, note that the question \"how hard is it to write down an integer between 0 and $x$ using digits?\" is _very different_ from the question of \"how hard is it to write down $x$\"? For example, 3 is easy to write down using digits, while [$\pi$](https://arbital.com/p/49r) is very difficult to write down using digits. Nevertheless, the log of $\pi$ is very close to the log of 3. The concept for \"how hard is this number to write down?\" goes by the name of [https://arbital.com/p/-complexity](https://arbital.com/p/-complexity); see the [https://arbital.com/p/+Kolmogorov_complexity_tutorial](https://arbital.com/p/+Kolmogorov_complexity_tutorial) to learn more on this topic.\n\n---\n\n__Question:__ Speaking of fractional inputs, if $0 < x < 1$ then the logarithm of $x$ is _negative._ How does _that_ square with the length interpretation? What would it even mean for the length of the number $\frac{1}{10}$ to be $-1$?\n\n__Answer:__ Nice catch! The length interpretation crashes and burns when the inputs are less than one.\n\n---\n\nThe \"logarithms measure length\" interpretation is imperfect. The connection is still useful to understand, because you _already_ have an intuition for how slowly the length of a number grows as the number gets larger. 
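One way to check the "log is roughly length" intuition above for yourself is a few lines of Python (purely illustrative):

```python
import math

for x in [2, 70, 316, 123456, 987654321]:
    # The log base 10 is always a little less than the number of decimal digits.
    print(x, len(str(x)), round(math.log10(x), 2))
```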
The \"length\" interpretation is one of the easiest ways to get a gut-level intuition for what logarithmic growth _means._ If someone says \"the amount of time it takes to search my database is logarithmic in the number of entries,\" you can get a sense for what this means by remembering that logarithmic growth is like how the length of a number grows with the magnitude of that number:\n\n[https://arbital.com/p/visualization](https://arbital.com/p/visualization)\n\nThe interpretation doesn't explain what's going on when the input is fractional, but it's still one of the fastest ways to make logarithms start feeling like a natural property on numbers, rather than just some esoteric function that \"[inverts exponentials](https://arbital.com/p/3wr).\" Length is the quick-and-dirty intuition behind logarithms.\n\nFor example, I don't know what the logarithm base 10 of 2,310,426 is, but I know it's between 6 and 7, because 2,310,426 is seven digits long.\n\n$$\\underbrace{\\text{2,310,426}}_\\text{7 digits}$$\n\nIn fact, I can also tell you that $\\log_{10}(\\text{2,310,426})$ is between 6.30 and 6.48. How? Well, I know it takes six 10-digits to get up to 1,000,000, and then we need something more than a 2-digit and less than a 3-digit to get to a number between 2 and 3 million. The natural exchange rates for 2-digits and 3-digits (in terms of 10-digits) are 30¢ and 48¢ respectively, so the cost of 2,310,426 in terms of 10-digits is between \\$6.30 and \\$6.48.\n\nNext up, we'll be exploring this idea of an exchange rate between different types of digits, and building an even better interpretation of logarithms which helps us understand what they're doing on fractional inputs (and why).", "date_published": "2016-09-14T23:38:09Z", "authors": ["Nate Soares", "Michael Keenan", "Alexei Andreev"], "summaries": ["139 and 981 are both three digits long, but 139 is \"almost\" a 2-digit number and 981 is \"almost\" a 3-digit number. In a sense, 139 is using its three digits to a lesser extent. Logarithms formalize and quantify this intuition. You can estimate the logarithm (base 10) of a number by counting how many digits it has, and logarithms give us a notion of how much a number is actually using those digits."], "tags": ["B-Class"], "alias": "416"} {"id": "06c80d426548851c977b209b458b381e", "title": "Logarithm base 1", "url": "https://arbital.com/p/log_base_1", "source": "arbital", "source_type": "text", "text": "There is no [https://arbital.com/p/-3nd](https://arbital.com/p/-3nd) base 1, because no matter how many times you multiply 1 by 1, you get 1. If there _were_ a log base 1, it would send 1 to 0 (because $\\log_b(1)=0$ for every $b$), and it would also send 1 to 1 (because $\\log_b(b)=1$ for every $b$), which demonstrates some of the difficulties with $\\log_1.$ In fact, it would need to send 1 to every number, because $\\log(1 \\cdot 1) = \\log(1) + \\log(1)$ and so on. And it would need to send every $x > 1$ to $\\infty$, and every $0 < x < 1$ to $-\\infty,$ and those aren't numbers, so there's no logarithm base 1.\n\nBut if you _really_ want a logarithm base $1$, you can define $\\log_1$ to be a multifunction [from](https://arbital.com/p/3js) [$\\mathbb R^+$](https://arbital.com/p/positive_real_numebrs) [to](https://arbital.com/p/3lg) $\\mathbb R \\cup \\{ \\infty, -\\infty \\}.$ On the input $1$ it outputs $\\mathbb R$. On every input $x > 1$ it outputs $\\{ \\infty \\}$. On every input $0 < x < 1$ it outputs $\\{ -\\infty \\}$. 
This multifunction can be made to satisfy all the basic [properties of the logarithm](https://arbital.com/p/4bz), if you interpret $=$ as $\\in$, $1^{\\{\\infty\\}}$ as the [interval](https://arbital.com/p/interval_notation) $(1, \\infty)$, and $1^{\\{-\\infty\\}}$ as the interval $(0, 1)$. For example, $0 \\in \\log_1(1)$, $1 \\in \\log_1(1)$, and $\\log_1(1) + \\log_1(1) \\in \\log_1(1 \\cdot 1)$. $7 \\in log_1(1^7)$, and $15 \\in 1^{\\log_1(15)}$. This is not necessarily the best idea ever, but hey, the [final form](https://arbital.com/p/complex_log) of the logarithm was already a multifunction, so whatever. See also [https://arbital.com/p/log_is_a_multifunction](https://arbital.com/p/log_is_a_multifunction).\n\nWhile you're thinking about weird logarithms, see also [https://arbital.com/p/4c8](https://arbital.com/p/4c8).", "date_published": "2016-09-14T23:27:41Z", "authors": ["Nate Soares"], "summaries": ["There is no [https://arbital.com/p/-3nd](https://arbital.com/p/-3nd) base 1, because no matter how many times you multiply 1 by 1, you get 1. If there _were_ a log base 1, it would send 1 to 0 (because $\\log_b(1)=0$ for every $b$), and it would also send 1 to 1 (because $\\log_b(b)=1$ for every $b$), which demonstrates some of the difficulties with $\\log_1.$ In fact, it would need to send 1 to every number, because $\\log(1 \\cdot 1) = \\log(1) + \\log(1)$ and so on. And it would need to send every $x > 1$ to $\\infty$, and every $0 < x < 1$ to $-\\infty,$ and those aren't numbers, so there's no logarithm base 1.\n\nBut _if there was,_ it would be a [https://arbital.com/p/-multifunction](https://arbital.com/p/-multifunction) with values in the [extended real numbers](https://arbital.com/p/extended_reals). This is actually a perfectly valid way to define $\\log_1,$ though doing so is not necessarily a good idea."], "tags": ["Start"], "alias": "41m"} {"id": "e482bd475e31ffabbaf44b0077ecbb0e", "title": "Exchange rates between digits", "url": "https://arbital.com/p/digit_exchange_rates", "source": "arbital", "source_type": "text", "text": "Let's say that you want to [store a lot of data](https://arbital.com/p/storing_numbers), using physical objects such as coins, dice, or [digit wheels](https://arbital.com/p/42d). For concreteness, let's say that you're going to be given an arbitrary number between one and $2^\\text{3,000,000,000,000}$ ([a.k.a. a terabyte of data](https://arbital.com/p/numbers_are_data)), and you need to make sure you have enough physical objects to write that number down.\n\nLet's say that a coin costs 1¢. Because [$n$ coins can be used to encode any one of $2^n$ different possibilities](https://arbital.com/p/), you could get the job done with three trillion coins, which would let you encode the number (in [binary](https://arbital.com/p/binary_notation)) for the low low cost of thirty billion dollars. (At the time of writing this, three trillion bits of data will actually run you about \\$60, but leave that aside for now. Pretend you can't use modern transistors, and you have to use comparatively large objects such as coins to store data.)\n\nInstead of buying coins, you could also buy [digit wheels](https://arbital.com/p/42d), and use those instead. Because digit wheels can store more values than coins, it takes fewer digit wheels than coins to do the job. If digit wheels also cost 1¢, you should just buy digit wheels; that will cost a mere $9B. If instead digit wheels cost a million dollars apiece, you should probably buy coins instead. 
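The dollar figures above can be checked in a couple of lines of Python (a rough sketch that just re-does the arithmetic, using the same 1¢-per-object assumption):

```python
import math

bits_needed = 3 * 10 ** 12                              # coins needed for 2^(3 trillion) possibilities
wheels_needed = math.ceil(bits_needed * math.log10(2))  # decimal digits needed for the same range

print(bits_needed * 0.01)     # 30000000000.0  -> about $30 billion in coins
print(wheels_needed * 0.01)   # roughly 9.03e9 -> about $9 billion in digit wheels
```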
Somewhere in between those two prices, digit wheels switch from being a good deal to being a bad deal. The question is, if coins are worth 1¢, then how much is a digit wheel worth? At what price should you buy wheels instead of coins, and at what price should you buy coins instead of wheels?\n\nAt a first glance, it may seem like the fair price is 5¢ per wheel. A coin can store two values (by being placed 'heads' or 'tails'), while a digit wheel can store ten. Ten is five times larger than 2, so perhaps the price of a digit wheel should be five times the price of a coin. However, if digit wheels cost 5¢, you should ignore digit wheels entirely and buy coins. Why? Because for only 4¢, you could buy four coins, which can hold $2^4=16$ different values. Four coins store a [16-digit](https://arbital.com/p/4sj), and a 16-digit is worth more than a 10-digit, so you would have no reason to even consider buying digit wheels at a price of 5¢ per wheel.\n\nSo what is the fair price for a digit wheel? 4¢ is still too high, because that makes a 10-digit cost the same as a 16-digit, and the 16-digit is always better at that price. What about 3¢? At that price, the answer is much less clear. On the one hand, spending 3¢ on coins gets you the ability to write down only 8 possibilities, while spending 3¢ on wheels lets you write down 10 different possibilities. On the other hand, if you're trying to store the number 101, you need either 7 coins (7¢, because $2^6 < 101 < 2^7$) or 3 wheels (one per digit, because a wheel stores a digit, for a total of 9¢), in which case the coins are better.\n\nGiven that digit wheels can store ten values and cost 3¢ each, and coins can store two values and cost 1¢ each:\n\n__Question:__ When storing the number 8000, should you buy coins or wheels?\n\n%%hidden(Answer):\nWheels. To store 8000, you need 13 coins (because $2^{12} < 8000 < 2^{13}$) for a cost of 13¢, but only 4 wheels (because 8000 is 4 digits long) for a cost of 12¢.\n%%\n\n__Question:__ When storing the number 15,000, should you buy coins or wheels?\n\n%%hidden(Answer):\nCoins. Storing 15,000 costs 14 coins ($2^{13} < 15,000 < 2^{14}$) or 5 wheels; given that wheels cost 3¢ and coins cost 1¢, the 14 coins are cheaper.\n%%\n\n__Question:__ When storing the number 230,000, should you buy coins or wheels?\n%%hidden(Answer):\nIt doesn't matter. Storing 230,000 takes either 18 coins or 6 wheels, which costs 18¢ either way.\n%%\n\nAt 3¢ per digit wheel, whether you should buy coins or wheels depends on which number you want to store.\n\nBut _once the number you're storing gets large enough,_ the digit wheels become a better deal, and stay that way. In this case, the final transition happens when the number you're storing is 11 digits long or longer. Why? Say that the number $x$ you want to store is $n$ digits long in decimal notation. $n$ digit wheels can always encode more possibilities than $3n$ coins (because $10^n > 2^{3n}$ for all $n$), but sometimes $x$ can be written using $3n$ or fewer coins (as in the case of 15,000). This stops happening once $n$ is so large that $10^{n-1} > 2^{3n}$ (equivalently, once $10^n$ is more than ten times larger than $2^{3n}$), at which point even the smallest $n$-digit number needs more than $3n$ coins. This happens once $n \ge 11,$ i.e., once $x$ is at least 11 digits long. ([Proof.](https://arbital.com/p/))\n\nThis probably isn't too surprising: For 3¢ you can either buy 3 coins or one digit wheel. 
The coins store 8 possibilities, the wheel stores 10, and it's no surprise that a huge collection of 10-digits is unambiguously better than an equally sized collection of 8-digits. \n\nSo digit wheels are worth more than 3x as much as a coin, and less than 4x. How can we find the right price exactly? We could use exactly the same argument as above to check whether 3.5¢ is too high or too low, except that it's not clear what it would mean to buy \"three and a half coins.\" We can get around that problem by considering ten-packs of coins vs ten-packs of digit wheels. If wheels cost 3.5¢ and coins cost 1¢, then with 35¢, you could either buy 10 wheels or 35 coins. Which encodes more possibilities? The coins, because $10^{10} < 2^{35}.$ Thus, by a [generalized version of the argument above](https://arbital.com/p/), for numbers sufficiently large, the coins are a better deal than the wheels — at that price, you should buy wheels.\n\nAs you can verify, $2^{33} < 10^{10} < 2^{34},$ so the fair price must be between 3.3¢ and 3.4¢. And $2^{332} < 10^{100} < 2^{333},$ so it must be between 3.32¢ and 3.33¢. We could keep going, and keep getting digits of the fair price. And we could keep going forever, because the fair price is [https://arbital.com/p/-irrational](https://arbital.com/p/-irrational). ([Why?](https://arbital.com/p/logs_are_usually_irrational))\n\nIn general, if digit wheels cost $p$ times as much as coins, the [general argument](https://arbital.com/p/) we've been using says that if you're storing a large number and $2^p > 10$ then you should buy coins, whereas if $2^p < 10$ then you should buy digit wheels. Thus, the fair price must be the number $p$ such that $2^p = 10,$ and its first three digits are 3.32.\n\nIn other words, while a digit wheel holds 5 times as many values as a coin, it is only worth about 3.32 times as much as a coin (in terms of its ability to store data). Why? _Because it takes about 3.32 coins to emulate a digit-wheel._ What does that mean? Well, 3 coins is not enough to emulate a digit wheel (3 give you only 8 possibilities), 4 coins is more than enough (4 give you 16 possibilities). And 33 coins is not enough to emulate 10 digit wheels, but 34 coins are more than enough. And 332 coins are not enough to emulate 100 digit wheels, but 333 coins are more than enough.\n\nA digit wheel isn't worth 5 times as much as a coin because, when you buy multiple coins, their data storage capacity _multiplies,_ rather than adding. Each new coin lets you say _twice as many things,_ not just two more things. With five coins you can store $2 \\cdot 2 \\cdot 2 \\cdot 2 \\cdot 2 = 32$ different values, not $2 + 2 + 2 + 2 + 2 = 10.$ When finding the exchange rate between coins and digit wheels, the question is not \"How many more values does a digit wheel have written on it?\" Rather, the question is \"How many coins does it take to emulate a digit wheel?\" The answer to _that_ question is \"More than 332 coins per hundred digit wheels, less than 333 per hundred.\"\n\nThe fair price $p$ of digit wheels (in terms of coins) is the number $p$ such that $2^p = 10$ exactly. We have a name for this number, and it's $\\log_2(10),$ and the argument described above is a [simple algorithm for computing logarithms](https://arbital.com/p/). (Logarithm outputs are \"difficult to calculate but useful to have,\" in general.)\n\nThus, we can look at $\\log_b(x)$ as describing the fair price of $x$-digits in terms of $b$-digits. 
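That digit-by-digit squeeze is simple enough to run directly. Here is a rough sketch of it in Python (the function name is mine), bracketing the fair price log_2(10) between k/n and (k+1)/n for larger and larger n:

    def bracket_fair_price(base=2, target=10, rounds=4):
        # Find k with base**k <= target**n < base**(k+1) for n = 1, 10, 100, ...
        # The price of a target-digit in base-digits then lies between k/n and (k+1)/n.
        n = 1
        for _ in range(rounds):
            power = target ** n
            k, p = 0, base
            while p <= power:
                p *= base
                k += 1
            print(f"{base}^{k} <= {target}^{n} < {base}^{k+1}: price between {k/n} and {(k+1)/n}")
            n *= 10

    bracket_fair_price()
    # Reproduces the brackets above: (3, 4), then (3.3, 3.4), then (3.32, 3.33), then (3.321, 3.322).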
For example, if coins are worth 1¢, then dice are worth $\\log_2(6) \\approx 2.58$ cents, as you can verify by checking that $2^2 < 6 < 2^3$ and $2^{25} < 6^{10} < 2^{26}$ and $2^{258} < 6^{100} < 2^{259}.$ (Exercise: Find the next digit of $\\log_2(6)$ using this method.)\n\nThis interpretation is a great intuition pump for what $\\log_b(x)$ \"really means\" if $b$ and $x$ are whole-numbers. Imagine two sets of objects, one (say, weird dice) that can be placed in $b$ different states, and another (say, weird wheels) that can be placed in $x$ different states. Imagine you're storing a really really large number, and imagine that the $b$-objects cost 1¢. At what price of $x$-objects would you be indifferent between using $x$-objects versus $b$-objects to store data? In other words, how many $b$-objects does it take to [\"emulate\"](https://arbital.com/p/emulating_digits) an $x$-object?\n\nProbing this interpretation also explains many of the properties of the logarithm. For example, the fact that $\\log_b(x)$ is the fair price of $x$-digits in terms of $b$-digits immediately implies that $\\log_x(b) = \\frac{1}{\\log_b(x)}$, because if an $x$-digit is worth three $b$-digits then a $b$-digit must be worth a third of an $x$-digit.\n\nThis interpretation still doesn't shed light onto what logarithms are doing when their inputs are not whole numbers. For example, what's $\\log_{1.5}(2.5)$? What the heck would an object that can be placed into \"1.5 different states\" even be? As we will see shortly, this notion of there being a \"natural exchange rate\" between digits reveals an interpretation of the logarithm that explains what it's doing with fractional inputs.", "date_published": "2016-06-24T03:02:48Z", "authors": ["Eric Rogstad", "Nate Soares", "Alexei Andreev"], "summaries": ["There is a natural exchange rate between types of digits. For example, a 100-digit is worth exactly twice as much as a 10-digit, because you can build a 100-digit out of two 10-digits. Furthermore, a 10-digit is worth between 3x and 4x as much as a 2-digit, because you can emulate a [https://arbital.com/p/-42d](https://arbital.com/p/-42d) using between 3 and 4 coins. The cost of an $n$-digit in terms of $b$-digits is $\\log_b(n).$"], "tags": ["B-Class"], "alias": "427"} {"id": "fd054887ff2501df2ea261f9bd7fe5d5", "title": "Digit wheel", "url": "https://arbital.com/p/digit_wheel", "source": "arbital", "source_type": "text", "text": "A mechanical device for storing a number from 0 to 9. Devices such as these were often attached to the desks of engineers, as part of historical \"desktop computers.\"\n\n![](http://www.cl.cam.ac.uk/~djg11/howcomputerswork/mechanical-counter3.jpg)\n\nEach wheel can stably rest in any one of ten states. Digit wheels are often useful when it comes to thinking about how much data a physical object can store, as they were physical objects that were clearly designed for storing data. See also [https://arbital.com/p/3y1](https://arbital.com/p/3y1).", "date_published": "2016-06-07T22:29:02Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": ["A mechanical device for storing a number from 0 to 9.\n\n![](http://www.cl.cam.ac.uk/~djg11/howcomputerswork/mechanical-counter3.jpg)"], "tags": [], "alias": "42d"} {"id": "76827dfab5f9a652a8181b07d954a750", "title": "Graham's number", "url": "https://arbital.com/p/grahams_number", "source": "arbital", "source_type": "text", "text": "Graham's number is a... rather large number. 
Letting $f(x) = 3\\uparrow^n 3$ (in [https://arbital.com/p/+knuth_up_arrow_notation](https://arbital.com/p/+knuth_up_arrow_notation)) and $f^n(x) = \\underbrace{f(f(f(\\cdots f(f(x)) \\cdots ))}_{n\\text{ applications of }f}$, Graham's number is defined to be $f^{64}(4).$\n\nThe result is sizable. For an explanation of how large this is and why, see Tim Urban's [explanation](https://arbital.com/p/http://waitbutwhy.com/2014/11/1000000-grahams-number.html) at Wait but Why.", "date_published": "2016-06-07T23:06:08Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": ["Graham's number is a... rather large number. Tim Urban of _Wait but Why_ gives a [good introduction](https://arbital.com/p/http://waitbutwhy.com/2014/11/1000000-grahams-number.html).\n\n![](http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2014/11/grahams-number.png)"], "tags": ["Stub"], "alias": "42j"} {"id": "5ab9c1f1b411bfaab5625a14e0c49816", "title": "A googolplex", "url": "https://arbital.com/p/googolplex", "source": "arbital", "source_type": "text", "text": "The number $10^{10^{100}}$ is named \"a googolplex.\" That is, a googolplex is the number represented (in [https://arbital.com/p/-decimal_notation](https://arbital.com/p/-decimal_notation)) as a 1 followed by a [https://arbital.com/p/-42m](https://arbital.com/p/-42m) zeroes, i.e., $10^{googol}$, which is $ 10^{10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000}.$", "date_published": "2016-06-08T18:52:18Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "42l"} {"id": "1156da9aa23dae5e1542526470ee70f5", "title": "A googol", "url": "https://arbital.com/p/googol", "source": "arbital", "source_type": "text", "text": "$10^{100},$ i.e., 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.\n\nThe search engine Google is named after the number googol. (The difference for the spelling, as the apocryphal story goes, comes from the fact that one of the early Google investors misspelled the name on a check.)", "date_published": "2016-06-07T22:39:38Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Stub"], "alias": "42m"} {"id": "8b08198b4845ecc30027148282ad8370", "title": "Why is log like length?", "url": "https://arbital.com/p/why_is_log_like_length", "source": "arbital", "source_type": "text", "text": "If a number $x$ is $n$ digits long (in [https://arbital.com/p/-decimal_notation](https://arbital.com/p/-decimal_notation)), then its logarithm (base 10) is between $n-1$ and $n$. This follows directly from the [definition of the logarithm](https://arbital.com/p/40j): $\\log_{10}(x)$ is the number of times you have to multiply 1 by 10 to get $x;$ and each new digit lets you write down ten times as many numbers. In other words, if you have one digit, you can write down any one of ten different things (0-9); if you have two digits you can write down any one of a hundred different things (00-99); if you have three digits, you can write down any one of a thousand different things (000-999); and in general, each new digit lets you write down ten times as many things. Thus, the number of digits you need to write $x$ is close to the number of times you have to multiply 1 by 10 to get $x$. 
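As a quick illustration (this snippet is mine, not part of the original text): the base-10 length of a whole number x is always floor(log10(x)) + 1, so log10(x) always falls between the length minus one and the length.

    import math

    for x in (7, 99, 100, 12345, 1_000_000):
        print(x, len(str(x)), math.log10(x))
    # 7: 1 digit, log10 about 0.845;   99: 2 digits, about 1.996;
    # 100: 3 digits, exactly 2.0;      12345: 5 digits, about 4.09;
    # 1000000: 7 digits, exactly 6.0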
The only difference is that, when computing logs, you multiply 1 by 10 exactly as many times as it takes to get $x$, which might require [multiplying by 10 a fraction of a time](https://arbital.com/p/fractional_exponent) (if x is not a power of 10), whereas the number of digits in the base 10 representation of x is always a whole number.", "date_published": "2016-06-08T17:05:04Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "437"} {"id": "0355c9739a5ef99cb2d62bb9505c6fbf", "title": "Poset: Examples", "url": "https://arbital.com/p/poset_examples", "source": "arbital", "source_type": "text", "text": "The standard $\\leq$ relation on integers, the $\\subseteq$ relation on sets, and the $|$ (divisibility) relation on natural numbers are all examples of poset orders.\n\nInteger Comparison\n================\n\nThe set $\\mathbb Z$ of integers, ordered by the standard \"less than or equal to\" operator $\\leq$ forms a poset $\\langle \\mathbb Z, \\leq \\rangle$. This poset is somewhat boring however, because all pairs of elements are comparable; such posets are called chains or [totally ordered sets](https://arbital.com/p/540). Here is its Hasse diagram.\n\n![Truncated Hasse diagram](http://i.imgur.com/STsyMfJ.png)\n%%%comment:\ndot source (doctored in GIMP)\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n a [= \"-3\"](https://arbital.com/p/label)\n b [= \"-2\"](https://arbital.com/p/label)\n c [= \"-1\"](https://arbital.com/p/label)\n d [= \"0\"](https://arbital.com/p/label)\n e [= \"1\"](https://arbital.com/p/label)\n f [= \"2\"](https://arbital.com/p/label)\n g [= \"3\"](https://arbital.com/p/label)\n rankdir = BT;\n a -> b\n b -> c\n c -> d\n d -> e\n e -> f\n f -> g\n}\n%%%\nPower sets\n=========\n\nFor any set $X$, the power set of $X$ ordered by the set inclusion relation $\\subseteq$ forms a poset $\\langle \\mathcal{P}(X), \\subseteq \\rangle$. $\\subseteq$ is clearly [reflexive](https://arbital.com/p/5dy), since any set is a subset of itself. For $A,B \\in \\mathcal{P}(X)$, $A \\subseteq B$ and $B \\subseteq A$ combine to give $x \\in A \\Leftrightarrow x \\in B$ which means $A = B$. Thus, $\\subseteq$ is [antisymmetric](https://arbital.com/p/5lt). Finally, for $A, B, C \\in \\mathcal{P}(X)$, $A \\subseteq B$ and $B \\subseteq C$ give $x \\in A \\Rightarrow x \\in B$ and $x \\in B \\Rightarrow x \\in C$, and so the [transitivity](https://arbital.com/p/573) of $\\subseteq$ follows from the transitivity of $\\Rightarrow$.\n\nNote that the strict subset relation $\\subset$ is the strict ordering derived from the poset $\\langle \\mathcal{P}(X), \\subseteq \\rangle$.\n\nDivisibility on the natural numbers\n===========================\n\nLet [$\\mathbb N$](https://arbital.com/p/45h) be the set of natural numbers including zero, and let $|$ be the divides relation, where $a|b$ whenever there exists an integer $k$ such that $ak=b$. Then $\\langle \\mathbb{N}, | \\rangle$ is a poset. $|$ is reflexive because, letting k=1, any natural number divides itself. To see that $|$ is anti-symmetric, suppose $a|b$ and $b|a$. Then there exist integers $k_1$ and $k_2$ such that $a = k_1b$ and $b = k_2a$. By substitution, we have $a = k_1k_2a$. Thus, if either $k$ is $0$, then both $a$ and $b$ must be $0$. Otherwise, both $k$'s must equal $1$ so that $a = k_1k_2a$ holds. Either way, $a = b$, and so $|$ is anti-symmetric. To see that $|$ is transitive, suppose that $a|b$ and $b|c$. 
This implies the existence of integers $k_1$ and $k_2$ such that $a = k_1b$ and $b = k_2c$. Since by substitution $a = k_1k_2c$, we have $a|c$.", "date_published": "2016-12-06T20:34:09Z", "authors": ["Eric Rogstad", "Kevin Clancy", "Joe Zeng"], "summaries": [], "tags": [], "alias": "43s"} {"id": "810e49ae913077c59cb8d71d8766783a", "title": "Ackermann function", "url": "https://arbital.com/p/ackermann_function", "source": "arbital", "source_type": "text", "text": "The Ackermann function works as follows:\n\nOne may have noticed that addition, multiplication, and exponentiation seem to increase in \"power\", that is, when pitted against each other, it is easier to produce enormous numbers with exponentiation than with multiplication, and so on.\n\nThe Ackermann function produces a hierarchy of such growth operations, and ascends one step higher in the hierarchy each time.\n\nAddition is the first operator in the hierarchy (though if we wanted, we could define [https://arbital.com/p/-addition_as_repeated_succession](https://arbital.com/p/-addition_as_repeated_succession) and declare addition the second operator in the hierarchy). The next operator in the hierarchy is produced by iterating the previous element in the hierarchy. For instance, the next few operators in the hierarchy are multiplication, exponentiation, and tetration:\n\nMultiplication is iterated addition: $A \\cdot B = \\underbrace{A + A + \\ldots A}_{B \\text{ copies of } A}$\n\nExponentiation is iterated multiplication: $A^B = \\underbrace{A \\times A \\times \\ldots A}_{B \\text{ copies of } A}$.\n\n($A ^ B$ is written $A \\uparrow B$ in [https://arbital.com/p/knuth_up_arrow_notation](https://arbital.com/p/knuth_up_arrow_notation).)\n\nTetration is iterated exponentiation. $A \\uparrow\\uparrow B = \\underbrace{A^{A^{\\ldots^A}}}_{B \\text{ copies of } A}$ times.\n\n$\\uparrow^n$ just means $n$ up arrows, so we can also write tetration as: $A \\uparrow^2 B = \\underbrace{A \\uparrow^1 (A \\uparrow^1 (\\ldots A))}_{B \\text{ copies of } A}$\n\nNow the pattern can be noticed and generalized, following the rule: $A \\uparrow^n B = \\underbrace{A \\uparrow^{n-1} (A \\uparrow^{n-1} (\\ldots A))}_{B \\text{ copies of } A}$\n\nAnd now we can define the Ackermann function as $A(n) = n \\uparrow^n n$.\n\nThis definition is relatively small, but the functions grow at incredible rates. [Wait But Why](http://waitbutwhy.com/2014/11/1000000-grahams-number.html) has a good intuitive description of how incredibly fast tetration, pentation, and hexation grow, but by the time we get to $A(6)$, the Ackermann function already grows far more quickly than that. $A(1)=1$, $A(2)=4$, and $A(3)$ cannot be written in the universe, as it has enormously more digits than our universe has subatomic particles.\n\nInterestingly enough, the Ackermann function grows faster than all primitive recursive functions. 
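Here is a rough sketch of that recurrence in Python (my code; it bottoms out at the built-in ** for a single arrow, and only the very smallest inputs are feasible):

    def up(a, n, b):
        # a "up-arrow-n" b: one arrow is exponentiation, and n arrows are defined by
        # iterating (n-1) arrows with b copies of a, following the rule above.
        if n == 1:
            return a ** b
        if b == 1:
            return a
        return up(a, n - 1, up(a, n, b - 1))

    def A(n):
        # The A(n) = n up^n n function defined above.
        return up(n, n, n)

    print(up(2, 2, 3))   # 2^^3 = 2^(2^2) = 16
    print(up(3, 2, 2))   # 3^^2 = 3^3 = 27
    print(A(1), A(2))    # 1 4, matching the values stated above
    # A(3) = 3^^^3 is already far too large to compute, let alone write down.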
This is related to a rather deep connection between the consistency strength of a mathematical theory (which sorts of results they are capable of proving) and the slowest-growing function for which the theory cannot prove \"this function is defined on all integers.\"", "date_published": "2016-06-10T14:56:04Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Chris Barnett", "Alex Appel", "Nate Soares", "Eric Bruylant"], "summaries": [], "tags": ["Start", "Needs summary"], "alias": "43x"} {"id": "c1ba6b513cd9258dff34ee9bce43299c", "title": "Fractional digits", "url": "https://arbital.com/p/fractional_digits", "source": "arbital", "source_type": "text", "text": "When $b$ and $x$ are [integers](https://arbital.com/p/48l), $\\log_b(x)$ has a few good interpretations. It's roughly the [length of $x$](https://arbital.com/p/416) when written in [base $b$](https://arbital.com/p/number_bases). More precisely, it's the [fair price of an $x$-digit](https://arbital.com/p/427) in terms of [$b$-digits](https://arbital.com/p/4sj). But what about when $x$ or $b$ is not a whole number? $\\log_{3.16}(5.62) \\approx 1.5$; how are we supposed to interpret this fact?\n\nWe could appeal to the definition of the logarithm, observe that $3.16^{1.5} \\approx 5.62,$ and call it a day. But what does that mean? That 3.16 multiplied by itself one-and-a-half times is roughly 5.62? That 5.62, written in base 3.16, is about 1.5 digits long? That a 5.62-digit is worth about 1.5 times as much as a 3.16-digit? What does it mean to multiply a number by itself \"half a time\"? How would you even write a number down in \"base 3.16\"? What would a 3.16-digit even look like?\n\nLet's say that you want to [store a lot of data](https://arbital.com/p/storing_numbers), and [digit wheels](https://arbital.com/p/42d) cost \\$1. At that price, an object that can store 3 different values costs about 48¢, whereas an object that stores 4 different values costs about 60¢. What storage medium, then, costs exactly 50¢? I can think of two:\n\n1. You and I could buy a digit wheel, split the costs, and share it.\n2. You and I could both pay 50¢ each for a digit wheel, and then toss a coin to see who gets to use it.\n\nHow could we split the digit wheel? The digit wheel can be put into any one of ten different states, labeled 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Here's one way to use it to encode two different messages: I choose a number $a$ between 0 and 1, then you choose a number $b$ between 0 and 4. Then we put the digit wheel into the state $5a + b.$ Thus, if you choose 0, the wheel will either say 0 or 5 (depending on which number I choose). If you choose 3, the wheel will either say 3 or 8 (depending on which number I chose). Each number you want to record (0-4) has two representations, and I choose whether we use the high one or the low one. Thus, the digit wheel records two messages: 7 means (high, 2) and so on.\n\nOf course, this is not a fair way to split the digit wheel — you get to store a [5-message](https://arbital.com/p/3v9) whereas I only get to store a 2-message.%%note:Thus we see that a [10-digit](https://arbital.com/p/4sj) is worth a 5-digit plus a 2-digit, that is, $\\log_{10}(5) + \\log_{10}(2) = 1$.%% Can we do something fairer?\n\nWell, the methodology can be used to share arbitrary $n$-digits between two people. 
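Here's the sharing scheme from above written out as code (a small sketch; the function names are mine):

    def share_encode(a, b):
        # Store a 2-message a (0 or 1) and a 5-message b (0 to 4) on one ten-state wheel.
        assert a in (0, 1) and 0 <= b <= 4
        return 5 * a + b

    def share_decode(wheel):
        # Read both messages back off the wheel's state (0 to 9).
        return wheel // 5, wheel % 5

    print(share_encode(1, 2))                          # 7, i.e. (high, 2) as described above
    print(sorted(share_decode(d) for d in range(10)))  # every (a, b) pair shows up exactly once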
If $x$ and $y$ are whole numbers such that $x \\cdot y \\le n$ then an $n$-digit can be used to store both an $x$-message and a $y$-message at once: You choose the [https://arbital.com/p/-quotient](https://arbital.com/p/-quotient) of $n$ divided by $x$, I choose the [https://arbital.com/p/-remainder](https://arbital.com/p/-remainder).\n\n__Exercise:__ How would you use an $18$-digit to store both a $3$-message and a $6$-message? How do you store (2, 4) using that encoding?\n%%hidden(Answer):\nHere's one way: Let $a$ be the value of the 3-message (between 0 and 2) and $b$ be the value of the 6-message (between 0 and 5). Record $6a+b.$ Using this encoding, (2, 4) is stored as the number 16.\n\nYou can verify for yourself that there is exactly one way to store each (3-message, 6-message) pair in an 18-message under this encoding.\n%%\n\n(From this we can see that, when $n = x \\cdot y,$ the cost of a $n$-digit is exactly equal to the cost of an $x$-digit plus the cost of a $y$-digit. In other words, $n = x \\cdot y$ then $\\log_b(x) + \\log_b(y) = \\log_b(n),$ no matter what base $b$ we're using.)\n\nThe fair split of the 10-digit, then, is to let us both store a 3-message: 3 is the highest integer $x$ such that $x \\cdot x < 10.$ This isn't maximally efficient, though; that encoding only uses 9 of the 10 possibilities that we paid for, and therefore it's not worth our money: Why buy what is effectively a 3-message for 50¢ when the fair price of 3-messages is 48¢? This might be worth knowing if the market isn't selling objects that carry 3-messages, and both you and I just need a tiny bit more storage, but in general it's not worth it.\n\nHowever, it becomes more worthwhile the more 10-digits we split. You might expect that, when splitting three digit wheels, we'd each get one digit wheel to ourselves plus a 3-message each from the third wheel, giving us 30 different possible messages each. However, we can do better than that if we're clever. If you pick a number $a$ between 0 and 30 (for a total of 31 different possibilities), and I also pick a number $b$ between 0 and 30, then the number $31a + b$ can always be stored on 3 digit wheels (because $31 \\cdot 30 + 30 = 960 \\le 999$), so if we're splitting three digit wheels, we can actually use them to store two 31-messages. ([Why?](https://arbital.com/p/)) This still isn't maximally efficient (the values from 961 to 999 are wasted), but it's a little better. And we can keep increasing the efficiency if we keep going: Five digit wheels can hold two 316-messages; 7 digit wheels can hold two 3162-messages, and so on. (You can verify those numbers yourself.)\n\nAs you can see, when splitting an $n$-digit, the fair split occurs at the largest whole number $x$ such that $x \\cdot x \\le n$, in which case we both get to record one $x$-message. This is why it is 316, and not 500, that can most naturally be seen as \"about 2.5 decimal digits long:\" The number that is 2.5 decimal digits long is the largest number $x$ such that, given five digits, both you and I can use those 5 digits to store independent $x$-messages. If you store a 500-message, that only leaves enough space for me to store a 200-message. If you only store a 300-message, that leaves enough space for me to store a 333-message. 
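The fair split is easy to compute: for a given number of wheels it is just the integer square root of the total number of states. A quick sketch (code mine):

    import math

    def fair_split(n_wheels):
        # Largest whole number x such that two independent x-messages fit on
        # n_wheels ten-state wheels, i.e. the largest x with x * x <= 10**n_wheels.
        return math.isqrt(10 ** n_wheels)

    for n in (1, 3, 5, 7, 9):
        print(n, fair_split(n))
    # 1: 3,  3: 31,  5: 316,  7: 3162,  9: 31622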
$x=316$ is the point where we can both store an $x$-message, because 316 is the largest whole number such that $x^2 \\le 100000.$\n\n(From this we can deduce that $\\log_b(316) \\approx \\frac{5\\log_b(10)}{2}$, no matter what the base: A 316-digit is worth about half as much as five 10-digits put together.)\n\nThe more 10-digits you split, the more efficient it gets — if we split 7 digit wheels, each of us gets to record a 3162-message; if we split 9 wheels, we each get to record a 31622-message, and so on. But the split is never maximally efficient, because 10 is not a [https://arbital.com/p/-square_number](https://arbital.com/p/-square_number): An $n$-digit can be split perfectly in half if and only if there is an $x$ such that $x \\cdot x = n,$ and this can't happen if $n$ is an odd power of 10. Thus, splitting an odd number of digit wheels never makes the final digit wheel be worth _exactly_ 50¢.\n\nYou know what storage medium is worth exactly 50¢? A 50% chance of a 10-digit.\n\nLet's say we pay 50¢ each for a digit wheel, and then toss a coin to see who gets to use it. How much does this decrease your storage costs? Half the time, you get to [reduce the number you have left to store](https://arbital.com/p/) by a factor of 10; the other half the time, you reduce the number you have left to store by a factor of 1 (i.e., not at all). How much does that reduce the number you have left to store, [on average](https://arbital.com/p/4b5)?\n\nOn average, your number gets reduced by the [https://arbital.com/p/-geometric_mean](https://arbital.com/p/-geometric_mean) of 1 and 10. Geometric means \"average out\" multiplications: Let's say that you have a thing and you're going to multiply it by 2, 12 and 9. At the end of the day, your thing is going to be 216 times larger, and you're going to have multiplied it by three numbers. Geometric means ask how big each multiplication was \"on average\" — instead of multiplying your thing by three different numbers, can we find one number $y$ such that $y \\cdot y \\cdot y = 216,$ such that multiplying your thing by $y$ three times is the same as multiplying by 2, 12, and 9 separately? The answer is yes, and $y = \\sqrt[https://arbital.com/p/3](https://arbital.com/p/3){2 \\cdot 12 \\cdot 9} = 6$, as you can verify. Thus, multiplying your thing by 6 is equivalent, [in expectation](https://arbital.com/p/4b5), to multiplying your thing by a value selected [uniformly at random](https://arbital.com/p/uniform_random) from the list [12, 9](https://arbital.com/p/2,). (For more on this, see [Geometric mean: Intuition](https://arbital.com/p/).)\n\nSimilarly, if we might reduce the size of the number we want to store by a factor of 1, and we might reduce it by a factor of 10, then \"on average\" we're reducing it by a factor of $\\sqrt[https://arbital.com/p/2](https://arbital.com/p/2){1 \\cdot 10} \\approx 3.16.$ Multiplying by $\\sqrt[https://arbital.com/p/2](https://arbital.com/p/2){10}$ twice is the same as multiplying by 1 once and 10 once, so it's equivalent [in expectation](https://arbital.com/p/4b5) to multiplying by values selected uniformly at random from the list [10](https://arbital.com/p/1,).\n\nIf _three_ people are sharing a 10-digit, then the amount of digits each one gets to use in expectation is $\\sqrt[https://arbital.com/p/3](https://arbital.com/p/3){1 \\cdot 1 \\cdot 10} \\approx 2.15.$ So that's what a 2.15 digit looks like — a 10-digit shared between three people. 
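Those averages are quick to check with a small geometric-mean sketch (code and names mine):

    def geometric_mean(factors):
        # The single factor that, applied once per entry in the list, multiplies a
        # quantity by the same total amount as applying all the listed factors.
        product = 1
        for f in factors:
            product *= f
        return product ** (1 / len(factors))

    print(geometric_mean([2, 12, 9]))    # about 6: three factors of 6 make a factor of 216
    print(geometric_mean([1, 10]))       # about 3.162: a 10-digit you get to use half the time
    print(geometric_mean([1, 1, 10]))    # about 2.154: a 10-digit shared three ways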
This can be generalized to any probability, and any $n$-digit for $1 < n \\le 10$ can be interpreted as a digit wheel that you have some probability of getting to use ([How?](https://arbital.com/p/)).\n\nThere are a few different things we can learn from this interpretation. It allows us to give a physical interpretation to facts like $\\log_{3.16}(5.62) \\approx 1.5$, which says that a 5.62-digit is worth one and a half times as much as a 3.16 digit. What's a 3.16 digit? It's (approximately) a 10-digit shared between two people, i.e., a 10-digit that you have a 50% chance of getting to use yourself. What's a 5.62-digit? It's a 10-digit that you have a 75% chance of getting to use yourself. It's 1.5x as valuable, because 75% is 1.5x as much as 50%. The upgrade from a 3.16-digit to a 5.62-digit constitutes 1.5x increase in my chance of getting to use the digit wheel.\n\nThe other thing that this interpretation tells us is that the digit worth exactly half an $n$-digit is the $\\sqrt{n}$-digit, both because (1) if you want to use an $n$-digit to store two $x$-messages for some $x$, you need $x \\cdot x$ to be at most $n$; and (2) if you have a 50% chance of getting to use an $n$-digit and a 50% chance of [nothing](https://arbital.com/p/1digit), then that's equivalent [on average](https://arbital.com/p/4b5) to a certainty of a $\\sqrt{n}$-digit.\n\nIn case you haven't noticed, we've been implicitly using the fact that doubling a digit squares the number of possibilities you have available to you: With one 10-digit you can store 10 possibilities, with two you can store $10^2 = 100.$ Because two $n$-digits [emulate](https://arbital.com/p/) an $n^2$-digit, it's no surprise that half an $n$-digit is a $\\sqrt{n}$-digit.\n\nYou might be wondering how these square root signs weaseled into a discussion about logarithms. You also might be starting to suspect that roots, logarithms, and exponentials are secretly three sides of the same coin. If so, [you're right](https://arbital.com/p/exp_log_root), though that's a topic for another day.\n\nLest you think that this gives us a complete interpretation of logarithms, note that it still doesn't explain what's going on when the inputs to the logarithm function are between 0 and 1. You might think, \"well, now that I know what a 3.16-digit is, I can use that intuition to figure out what a 0.16-digit is\" — but it's not quite as easy as that. A 3.16-digit can be interpreted as probabilistic access to a 10-digit, and the same is true for $x$-digits whenever $x > 1.$ But no matter how many people share a 10-digit, it's never worth _less_ than a 1-digit ([which is worthless](https://arbital.com/p/1digit)); $\\sqrt[https://arbital.com/p/n](https://arbital.com/p/n){10} > 1$ no matter how large $n$ gets. To understand $x$-digits for $0 < x < 1,$ we need one final puzzle piece, covered in the next post.", "date_published": "2016-07-30T00:56:30Z", "authors": ["Malcolm McCrimmon", "Nate Soares", "Eric Rogstad", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "44l"} {"id": "0bb1f8a280c141d802131976b202f6a3", "title": "Shannon", "url": "https://arbital.com/p/shannon", "source": "arbital", "source_type": "text", "text": "The shannon (Sh) is a unit of [https://arbital.com/p/-3xd](https://arbital.com/p/-3xd). 
One shannon is the difference in [entropy](https://arbital.com/p/info_entropy) between a [https://arbital.com/p/-probability_distribution](https://arbital.com/p/-probability_distribution) on a [https://arbital.com/p/-binary_variable](https://arbital.com/p/-binary_variable) that assigns 50% to each value, and one that assigns 100% to one of the two values.", "date_published": "2016-07-04T04:24:14Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Work in progress", "Needs clickbait", "Stub"], "alias": "452"} {"id": "c0920b3c26645d9730bc1a349b4b75d6", "title": "Natural number", "url": "https://arbital.com/p/natural_number", "source": "arbital", "source_type": "text", "text": "A **natural number** is a number like 0, 1, 2, 3, 4, 5, 6, ... which can be used to represent the count of an object. The set of natural numbers is $\\mathbb N.$ Not all sources include 0 in $\\mathbb N.$\n\nNatural numbers are perhaps the simplest type of number. They don't include [negative numbers](https://arbital.com/p/48l), [fractional numbers](https://arbital.com/p/4zq), [irrational numbers](https://arbital.com/p/4bc), [imaginary numbers](https://arbital.com/p/4zw), or any of those complexities.\n\nThanks to their simplicity, natural numbers are often the first mathematical concept taught to children. Natural numbers are equipped with a notion of [https://arbital.com/p/-addition](https://arbital.com/p/-addition) ($2 + 3 = 5$ and so on) and [https://arbital.com/p/-multiplication](https://arbital.com/p/-multiplication) ($2 \\cdot 3 = 6$ and so on), these are among the first mathematical operations taught to children.\n\nDespite their simplicity, the natural numbers are a ubiquitous and useful mathematical object. They're quite useful for counting things. They represent all the possible [cardinalities](https://arbital.com/p/4w5) of finite [sets](https://arbital.com/p/3jz). They're also a useful [data structure](https://arbital.com/p/data_structure), in that numbers can be used to [encode](https://arbital.com/p/numeric_encoding) all sorts of data. Almost all of modern mathematics can be built out of natural numbers.", "date_published": "2016-12-22T05:39:50Z", "authors": ["Eric Rogstad", "Patrick Stevens", "Nate Soares", "Martin Epstein", "Eric Bruylant", "Jaime Sevilla Molina", "Joe Zeng"], "summaries": ["0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ...\n\nThe natural numbers are the numbers we use to count things."], "tags": ["Start", "Definition"], "alias": "45h"} {"id": "02e7260050c1f1065ddcdee77944372e", "title": "Log as the change in the cost of communicating", "url": "https://arbital.com/p/log_as_comm_cost", "source": "arbital", "source_type": "text", "text": "When interpreting logarithms as [a generalization of the notion of \"length\"](https://arbital.com/p/416) and as [digit exchange rates](https://arbital.com/p/427), in both cases, multiplying the input to the logarithm base 10 by a factor of 10 caused the output to go up by one. Multiplying a number by 10 makes it one digit longer. If a [https://arbital.com/p/-42d](https://arbital.com/p/-42d) is worth \\$1, then a 1000-digit is worth exactly \\$1 more than a 100-digit, because you can [build](https://arbital.com/p/emulating_digits) a 1000-digit out of a 100-digit and a 10-digit. 
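That last claim is easy to check directly. Here is a minimal sketch (code mine) that builds a 1000-digit out of a 100-digit and a 10-digit and confirms that every one of the 1000 possible values shows up exactly once:

    def combine(d100, d10):
        # Combine a 100-digit (a value from 0 to 99) and a 10-digit (a value from
        # 0 to 9) into a single 1000-digit (a value from 0 to 999).
        assert 0 <= d100 < 100 and 0 <= d10 < 10
        return 10 * d100 + d10

    values = {combine(a, b) for a in range(100) for b in range(10)}
    print(len(values), min(values), max(values))   # 1000 0 999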
Thus, by symmetry _dividing_ an input to the logarithm base 10 by 10 makes the output go down by one: If you divide a number by 10, it gets one digit shorter; and any $n$-digit is worth \\$1 more than a $\\frac{n}{10}$-digit, because you can build an $n$-digit out of a $\\frac{n}{10}$-digit and a 10-digit.\n\nThis strongly implies that $\\log_{10}(\\frac{1}{10})$ should equal $-1$. If a 1000-digit costs \\$3, and a 100-digit costs \\$2, and a 10-digit costs \\$1, and a [1-digit is worthless](https://arbital.com/p/1digit), then, extrapolating the pattern, a $\\frac{1}{10}$ should cost $-\\$1.$ But what does that mean? What sort of digit is worth negative money? Can we give this extrapolation a physical intuition?\n\nYes, we can, by thinking in terms of how difficult it is to communicate information. Let's say that you and I are in separate rooms, connected only by a conveyor belt, upon which I can place physical objects like coins, dice, and [digit wheels](https://arbital.com/p/42d) that you can read. Let's imagine also that a third party is going to show me the whodunit of a game of [Clue](https://arbital.com/p/https://en.wikipedia.org/wiki/Cluedo), and then let me put some objects on the conveyor belt, and then send those objects into your room, and then ask you for the information. If you can reproduce it successfully, then we both win a lot of money. However, I have to pay for every object that I put on the conveyor belt, using the [fair prices](https://arbital.com/p/427).\n\nConsider how much I have to pay to tell you the result of a clue game. The \"whodunit\" in a clue game consists of three pieces of information:\n\n1. The name of the murderer, which is one of: Miss Scarlett, Professor Plum, Mrs. Peacock, Reverend Green, Colonel Mustard, or Mrs. White.\n2. The room in which the murder occurred, which is either the kitchen, the ballroom, the conservatory, the dining room, the cellar, the billiard room, the library, the lounge, the hall, or the study.\n3. The murder weapon, which is either the candlestick, the dagger, the lead pipe, poison, the revolver, the rope, or the wrench.\n\nThus, a typical whodunit might look like \"Professor Plum, in the conservatory, with the revolver.\" That sentence is 55 letters long, so one way for me to transmit the message would be to purchase fifty five 29-digits (capable of holding any one of 26 letters, or a space, or a comma, or a period), and send you that sentence directly. However, that might be a bit excessive, as there are in fact only $6 \\cdot 10 \\cdot 7 = 420$ different possibilities (six possible murderers, ten possible locations, seven possible weapons). As such, I only actually need to buy a 6-digit, a 10-digit, and a 7-digit. Equivalently, I could purchase a single 420-digit (if such things are on sale). We have to agree in advance what the digits mean — for example, \"the 6-digit corresponds to the murderer, in the order listed above; the 10-digit corresponds to the room, in the order listed above; the 7-digit corresponds to the weapon, in the order listed above;\" but assuming we do, I can get away with much less than fifty five 29-digits.\n\n__Exercise:__ If the only storage devices on sale are coins, how many do I need to buy to communicate the whodunit?\n%%hidden(Answer):\nNine. 8 coins only gets you 256 possibilities, and we need at least 420.\n%%\n\n__Exercise:__ If the only storage devices on sale are dice, how many do I need to buy?\n%%hidden(Answer):\nFour. 
$6^3 < 420 < 6^4.$\n%%\n\n__Exercise:__ If I have to choose between all coins or all dice, which should I choose, at the fair prices?\n%%hidden(Answer):\nThe coins. Four dice cost as much as $\\log_2(6) * 4 \\approx 10.33$ coins, and we can do the job with nine coins instead.\n%%\n\n__Exercise:__ If I can mix coins, dice, and digit wheels, what's the cheapest way to communicate the whodunit?\n%%hidden(Answer):\nOne coin and three dice let you send the message at a cost of only $\\log_2(2) + 3\\cdot \\log_2(6) \\approx 8.75$ coins.\n%%\n\nNow, consider what happens when the third party tells you \"Actually, in order to win, you also have to communicate the name of my favorite Clue suspect, which is Colonel Mustard. I already told the person in the other room that you need to communicate two suspects, and that you'll communicate my favorite Clue suspect second. I didn't tell them who my favorite Clue suspect was, though.\"\n\nNow, the space of possible messages has gone up by a factor of six: There are 420 possible whodunits, and each can be paired with one of six possible \"favorite suspects,\" for a total of 2520 possible messages. How does this impact my cost of communicating with you? My cost goes up by 1 die ($= \\log_2(6)$ coins $= \\log_{10}(6)$ digit wheels). When the space of possibilities goes up by a factor of 6, my costs of communication (measured, say, in coins) go up by $\\log_2(6).$\n\nNow let's say that the third party comes back in the room and tells you \"Actually, I gave the person in the other room a logic puzzle that told them which room the murder happened in; they solved it, and now they know that the murder happened in the conservatory.\"\n\nThis _reduces_ the space of possible messages I need to send, by a factor of 10. Now that both you and I know that the murder happened in the conservatory, I only need to transmit the murderer, the weapon, and the favorite suspect — one of 252 possibilities. The space of possibilities was cut into a tenth of its former size, and my cost of communicating dropped by 1 digit wheel ($= \\log_6(10)$ dice $= \\log_2(10)$ coins).\n\nOn this interpretation, logarithms are measuring how much it costs to transmit information, in terms of some \"base\" medium (such as coins, dice, or digit wheels). Every time the space of possibilities increases by a factor of $n$, my communication costs increase by $\\log_2(n)$ coins. Every time the space of possibilities decreases by a factor of $n$, my communication costs _drop_ by $\\log_2(n)$ coins.\n\nThis is the physical interpretation of logarithms that you can put your weight on: $\\log_b(x)$ measures how much more or less costly it will be to send a message (in terms of $b$-digits) when the space of possible messages changes by a factor of $x$. Paired with a physical interpretation of [fractional digits](https://arbital.com/p/44l), it can explain most of the [basic properties of the logarithm](https://arbital.com/p/log_properties):\n\n1. $\\log_b(1) = 0,$ because increasing (or decreasing) the space of possible messages by a factor of 1 doesn't affect your communication costs at all.\n2. $\\log_b(b) = 1,$ because increasing the space of possible messages by a factor of $b$ will increase your communication costs by exactly one $b$-digit.\n3. $\\log_b\\left(\\frac{1}{b}\\right) = -1,$ because decreasing the space of possible messages by a factor of $b$ saves you one $b$-digit worth of communication costs.\n4. 
$\\log_b(x\\cdot y) = \\log_b(x) + \\log_b(y),$ because if $n = x \\cdot y$ then one $n$-digit is exactly large enough to store one $x$-message and one $y$-message. Thus, when communicating, an $x\\cdot y$-digit is worth the same amount as one $x$-digit plus one $y$-digit.\n5. $\\log_b(x^n) = n \\cdot log_b(x),$ because $n$ $x$-digits can be used to [emulate](https://arbital.com/p/emulating_digits) one $x^n$-digit.\n\nYou might be thinking to yourself:\n\n> Wait, what does it mean for the space of possible messages to go up or down by a factor of $x$? This isn't always clear. What if you're really good at guessing who people's favorite suspect is? For that matter, what if we haven't established a convention like \"0 = Miss Scarlett; 1 = Professor Plum; ...\"? If I see an observation, the amount by which it changes the space of possible messages is subjective; it depends on my beliefs and on the beliefs of the person I'm communicating with and on the conventions that we set up beforehand. How do you actually formalize this idea?\n\nThose are great questions. Down that path lies [https://arbital.com/p/3qq](https://arbital.com/p/3qq), a field which measures communication costs using logarithms, and which lets us formalize (and quantize) ideas such as the amount of information carried by a message (to a given observer). See the [https://arbital.com/p/information_theory_tutorial](https://arbital.com/p/information_theory_tutorial) for more on this subject.\n\nWith regard to logarithms, the key idea here is an interpretation of what $\\log_b(x)$ is \"really doing.\" Given an input like \"how many possible messages are there,\" such that your costs go up by 1 unit every time the input space increases by a factor of $b$, $\\log_b(x)$ measures the change in cost when the input space increases by a factor of $x$. As we will see next, this idea generalizes beyond the domain of \"set of possible messages vs cost of communicating,\" to _any_ scenario where some measure $\\mu$ increases by $1$ every time some object scales by a factor of $b$, in which case $\\log_b(x)$ measures the change in $\\mu$ when the object scales by a factor of $x$. This is the defining characteristic of logarithms, and now that we have some solid physical interpretations of what it means, we're ready to start exploring logarithms in the abstract.", "date_published": "2016-06-12T14:40:16Z", "authors": ["Eric Rogstad", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "45q"} {"id": "b89b14e3ebafa257dc7d158e505f2e8d", "title": "Lattice (Order Theory)", "url": "https://arbital.com/p/order_lattice", "source": "arbital", "source_type": "text", "text": "A **lattice** is a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) that is closed under binary [joins and meets](https://arbital.com/p/3rc). \n\nLet $L$ be a lattice. Then for all $p,q,r \\in L$ the following properties are necessarily satisfied.\n\n* [Associativity](https://arbital.com/p/3h4) of joins and meets: $(p \\vee q) \\vee r = p \\vee (q \\vee r)$, and $(p \\wedge q) \\wedge r = p \\wedge (q \\wedge r)$\n* [Commutativity](https://arbital.com/p/3jb) of joins and meets: $p \\vee q = q \\vee p$ and $p \\wedge q = q \\wedge p$\n* Idempotency of joins and and meets: $p \\vee p = p$ and $p \\wedge p = p$\n* Absorption: $p \\vee (p \\wedge q) = p$ and $p \\wedge (p \\vee q) = p$ \n\n%%hidden(Proofs):\n\nLemma 1: Let $P$ be a poset, $S \\subseteq P$, and $p \\in P$. 
If both $\\bigvee S$ and $(\\bigvee S) \\vee p$ exist then $\\bigvee (S \\cup \\{p\\})$ exists as well, and $(\\bigvee S) \\vee p = \\bigvee (S \\cup \\{p\\})$.\n\nProof: See the *Join fu* exercise in [https://arbital.com/p/4ll](https://arbital.com/p/4ll).\n\n## Associativity\nLet $L$ be a lattice and $p,q,r,s \\in L$ such that $s = p \\vee (q \\vee r)$. We apply the above lemma, along with commutativity and closure of lattices under binary joins, to get \n$$p \\vee (q \\vee r) = (q \\vee r) \\vee p = (\\bigvee \\{q, r\\}) \\vee p = \\bigvee (\\{q, r\\} \\cup \\{p\\}) =$$\n$$\\bigvee \\{ q, r, p \\} = \\bigvee (\\{p, q\\} \\cup \\{r\\}) = (\\bigvee \\{p, q\\}) \\vee r = (p \\vee q) \\vee r.$$\n\nBy duality, we also have the associativity of binary meets.\n\n## Commutativity\n\nLet $L$ be a lattice and $p,q \\in L$. Then $p \\vee q = \\bigvee \\{ p, q \\} = q \\vee p$. Binary joins are therefore commutative. By duality, binary meets are also commutative.\n\n## Idempotency\n\nLet $L$ be a lattice and $p \\in L$. Then $p \\vee p = \\bigvee \\{ p \\} = p$. The property that for all $p \\in L$, $p \\vee p = p$ is called *idempotency*. By duality, we also have the idempotency of meets: for all $p \\in L$, $p \\wedge p = p$.\n\n## Absorption\n\nSince $p \\wedge q$ is the greatest *lower bound* of $\\{p,q\\}$, $p \\wedge q \\leq p$. Because $p \\leq p$ and $(p \\wedge q) \\leq p$, $p$ is an upper bound of $\\{p, p \\wedge q\\}$, and so $p \\vee (p \\wedge q) \\leq p$. On the other hand, $p \\vee (p \\wedge q)$ is the least *upper bound* of $\\{p, p \\wedge q\\}$, and so $p \\leq p \\vee (p \\wedge q)$. By anti-symmetry, $p = p \\vee (p \\wedge q)$.\n\n%%\n\nClosure under finite joins and meets\n--------------------------------------------------------------\n\nLet $L$ be a lattice and $S = \\{ s_1, ..., s_n \\}$ be some finite subset of $L$. Then an inductive argument shows that $\\bigvee S$ exists. \n\n%%hidden(Proof):\nHere again, we will need Lemma 1, stated in the proofs of the four lattice properties.\n\nOur proof proceeds by induction on the cardinality of $S$.\n\nThe base case is $\\bigvee \\{ s_1 \\} = s_1 \\in L$. For the inductive step, we suppose that $\\bigvee \\{s_1, ..., s_i \\}$ exists. Then, applying lemma 1, we have $\\bigvee \\{s_1, ..., s_{i+1} \\} = \\bigvee \\{s_1, ..., s_i \\} \\vee s_{i+1}$. Applying our inductive hypothesis and closure under binary joins, we have $\\bigvee \\{s_1, ..., s_i \\} \\vee s_{i+1}$ exists. Lattices are therefore closed under all *finite* joins, not just binary ones. 
Dually, lattices are closed under all finite meets.\n%%\n\nBasic positive examples\n--------------\n\nHere are two Hasse diagrams of posets which are lattices.\n\n![A diamond shaped lattice](http://i.imgur.com/OlQnU07.png)\n\n%%%comment:\n\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n a [= \"\"](https://arbital.com/p/label)\n b [= \"\"](https://arbital.com/p/label)\n c [= \"\"](https://arbital.com/p/label)\n d [= \"\"](https://arbital.com/p/label)\n\n rankdir = BT;\n a -> b\n a -> c\n b -> d\n c -> d\n}\n%%%\n\n![A cube shaped lattice](http://i.imgur.com/L0x074n.png)\n\n%%%comment:\n\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n a [= \"\"](https://arbital.com/p/label)\n b [= \"\"](https://arbital.com/p/label)\n c [= \"\"](https://arbital.com/p/label)\n d [= \"\"](https://arbital.com/p/label)\n e [= \"\"](https://arbital.com/p/label)\n f [= \"\"](https://arbital.com/p/label)\n g [= \"\"](https://arbital.com/p/label)\n h [= \"\"](https://arbital.com/p/label)\n\n rankdir = BT;\n a -> b\n a -> c\n a -> d\n b -> e\n b -> f\n c -> e\n c -> g\n d -> f\n d -> g\n e -> h\n f -> h\n g -> h\n}\n\n%%%\n\nBasic negative examples\n------------------------------------\n\nHere are two Hasse diagrams of posets which are *not* lattices.\n\n![A simple non-lattice](http://i.imgur.com/DAeuYz0.png)\n\n%%%comment:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n a [= \"\"](https://arbital.com/p/label)\n b [= \"\"](https://arbital.com/p/label)\n c [= \"\"](https://arbital.com/p/label)\n d [= \"\"](https://arbital.com/p/label)\n e [= \"\"](https://arbital.com/p/label)\n f [= \"\"](https://arbital.com/p/label)\n g [= \"\"](https://arbital.com/p/label)\n h [= \"\"](https://arbital.com/p/label)\n\n rankdir = BT;\n a -> b\n a -> c\n a -> d\n b -> e\n b -> f\n c -> e\n c -> g\n d -> f\n d -> g\n e -> h\n f -> h\n g -> h\n}\n\n%%%\n\nIn the above diagram, the two bottom elements have no common lower bounds. Therefore they have no meet, and so the depicted poset is not a lattice. However, it should be easy to verify that this poset is closed under binary joins.\n\n![Another simple non-lattice](http://i.imgur.com/5Vqk87u.png)\n\n%%%comment:\n\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n a [= \"\"](https://arbital.com/p/label)\n b [= \"\"](https://arbital.com/p/label)\n c [= \"\"](https://arbital.com/p/label)\n d [= \"\"](https://arbital.com/p/label)\n\n rankdir = BT;\n a -> b\n c -> d\n}\n\n%%%\n\nThe Hasse diagram of this poset has two connected components. No element from the left component will have a meet or a join with any element from the right component. The depicted poset is therefore *not* a lattice. \n\nThe connecting lemma\n--------------------\n\nThe connecting lemma states that for any lattice $L$ and $p,q \\in L$, $p \\vee q = p \\Leftrightarrow q \\leq p$ and dually, $p \\wedge q = p \\Leftrightarrow q \\geq p$. This simple but important lemma is so named because it establishes a connection between the lattice's join operator and its underlying poset order.\n\n%%hidden(Proof):\n\nWe prove $p \\vee q = p \\Leftrightarrow q \\leq p$; the other part follows from duality. 
If $p \\vee q = p$, then $p$ is an upper bound of both $p$ and $q$, and so $q \\leq p$. Going the other direction, suppose $q \\leq p$. Since $p$ is and upper bound of itself by reflexivity, it then follows that $p$ is an upper bound of $\\{p, q\\}$. There cannot be a lesser upper bound of $\\{p, q\\}$ because it would not be an upper bound of $p$. Hence, $p \\vee q = p$.\n\n%%\n\nLattices as algebraic structures\n-------------------------------------\n\nIt's also possible to formulate lattices as [algebraic structures](https://arbital.com/p/3gx) $\\langle L, \\vee, \\wedge \\rangle$ satisfying the associativity, commutativity, idempotency, and absorption laws described above. A poset $\\langle L, \\leq \\rangle$ can then be defined such that for $p, q \\in L$, $p \\leq q$ whenever $p \\vee q = q$. It can be shown that this poset is closed under binary meets and joins, and that these meets and joins are equal to the corresponding meets and joins of the algebraic lattice.\n\nAdditional material\n-------------------------------\n\nFor more examples of lattices, see [https://arbital.com/p/574](https://arbital.com/p/574).\nFor some exercises involving the concepts introduced on this page, see [https://arbital.com/p/5ff](https://arbital.com/p/5ff).", "date_published": "2016-12-06T14:07:55Z", "authors": ["Eric Rogstad", "Kevin Clancy", "Alexei Andreev"], "summaries": ["A **lattice** is a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) that is closed under binary [joins and meets](https://arbital.com/p/3rc). It follows from this definition that lattices are closed under all *finite* joins and meets, and that for all lattices $L$ and elements $p, q \\in L$, the following algebraic properties are satisfied:\n\n* [Associativity](https://arbital.com/p/3h4) of joins and meets: $(p \\vee q) \\vee r = p \\vee (q \\vee r)$, and $(p \\wedge q) \\wedge r = p \\wedge (q \\wedge r)$\n* [Commutativity](https://arbital.com/p/3jb) of joins and meets: $p \\vee q = q \\vee p$ and $p \\wedge q = q \\wedge p$\n* Idempotency of joins and and meets: $p \\vee p = p$ and $p \\wedge p = p$\n* Absorption: $p \\vee (p \\wedge q) = p$ and $p \\wedge (p \\vee q) = p$"], "tags": [], "alias": "46c"} {"id": "a7f5216e8cafff6bf8fc095bf1f19669", "title": "Uncomputability", "url": "https://arbital.com/p/uncomputability", "source": "arbital", "source_type": "text", "text": "Imagine you were tasked with the mission of finding out which things can and cannot be calculated with a computer, given that you had infinite memory and infinite time.\n\nGiven the generality of computers, you might be tempted to say everything, but it turns out that this is not the case.\n\nTo begin tackling this problem, we need a formal notion of what it means to compute something. We are going to focus on computing **natural functions**; that is, [functions](https://arbital.com/p/3jy) which transform [natural numbers](https://arbital.com/p/45h) to natural numbers.\n\nAs natural numbers can be used to encode pretty much anything, this class of computations is far wider than one might initially think. For example, we can [https://arbital.com/p/-encode](https://arbital.com/p/-encode) finite sequences of arbitrary length of natural numbers in a single natural number, and we can encode words in sequences of numbers.\n\nWe are going to say that a computer program computes a certain natural function if on input $n$ it returns the result of applying such function to $n$, for every natural $n$. 
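As a minimal illustration of these definitions (the two toy programs below are mine, not the article's): the first program computes a total function, while the second computes a partial one.

    def double(n):
        # Halts on every input, so it computes the total function f(n) = 2n.
        return 2 * n

    def halve_or_hang(n):
        # Halts only on even inputs, so the function it computes is partial:
        # it is undefined on every odd n.
        while n % 2 == 1:
            pass   # loop forever
        return n // 2

    print(double(21))          # 42
    print(halve_or_hang(10))   # 5
    # halve_or_hang(7) would run forever, so its function is undefined at 7.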
We note that programs do not have to halt on inputs; they may just run forever. In that case, we will say that the program is computing a partial function, which is not defined on some inputs.\n\n%%%knows-requisite([https://arbital.com/p/2w0](https://arbital.com/p/2w0)):\nNow, the first thing to notice is that the set of possible functions from $\\mathbb{N}$ to $\\mathbb{N}$ is not [https://arbital.com/p/-enumerable](https://arbital.com/p/-enumerable), while the set of possible programs, which are finite sequences of a finite quantity of symbols, is enumerable. Thus, there cannot be a one-to-one correspondence between functions and programs; there must necessarily exist a function which is not computed by any program.\n\nSo computers cannot compute everything!\n%%%\n\n----\n##The diagonal function\n\nWe are going to construct a concrete example of a function which is not computable.\n\nImagine an infinite list of every possible Python program, associated with the infinite outputs each would produce if we feed it the sequence of natural numbers $1,2,3,...$.\n\nThus, we may end up with a table such as this:\n\nprogram #1: 0->1, 1->X, 2->3,...\n\nprogram #2:\n\netc...\n\nWhere an X means that the program does not halt on that input or does not return an output, and thus the function it represents is not defined there.\n\nNow, we are going to construct an explicit function which is not computable using this table.\n\nLet $DIAG$ be such that:\n$$\nDIAG(n) = \\left\\{\n \\begin{array}{lr}\n 1\\ if\\ M_n(n) = 0\\\\\n 0\\ if\\ M_n(n)\\mbox{ is undefined or defined and greater than 0}\n \\end{array}\n\\right.\n$$\n\nThe notation $M_n$ means *\"the $n$th program in our enumeration of programs\"*. \nLet's see what we have done there. We are looking at the $n$th entry of every program and saying that this function on input $n$ outputs something different that the $n$th program. This technique is called [https://arbital.com/p/-diagonalization](https://arbital.com/p/-diagonalization), and it is ubiquitous in computability and logic.\n\nThe function $DIAG$, known as the [https://arbital.com/p/-diagonal_function](https://arbital.com/p/-diagonal_function), is guaranteed to disagree with every program on at least one input. Thus, there cannot be a program which computes $DIAG$!\n\n----\n##The halting function\nThe reader may at this point be however not satisfied with such an unnatural example of an uncomputable function. After all, who is going to want to compute such a weird thing? Do not dismay, for there is a much better example: the halting function.\n\nLet $HALT$ be defined as:\n$$\nHALT(n,x) = \\left\\{\n \\begin{array}{lr}\n 1\\ if\\ M_n(x)\\mbox{ halts}\\\\\n 0\\ if\\ M_n(x)\\mbox{ does not halt }\n \\end{array}\n\\right.\n$$\n\nThat is, the function $HALT$ decides whether a given program is going to halt on a particular input. Now that is interesting.\n\nTo cite one imaginative use of the halting function, one could for example code a program which on any input simply ignores the input and starts looking for a counterexample of the Collatz conjecture, halting [https://arbital.com/p/-46m](https://arbital.com/p/-46m) it finds one. Then we could use the halting function to see whether the conjecture is true or false.\n\nSadly, $HALT$ is not computable. We are going to give two proofs of this fact, one by **reduction** to the diagonal function and other by **diagonalization**.\n\n###Proof by reduction\nProofs by **reduction** are quite common in computability. 
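As an aside, here is a hedged sketch (entirely my own, not from the original text) of the kind of use described just after the definition of HALT: a program that halts if and only if some famous conjecture is false, so that a single call to HALT would settle it. To keep each candidate check finite, this sketch uses Goldbach's conjecture (every even number greater than 2 is a sum of two primes) rather than Collatz.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_goldbach_counterexample(n):
        # A finite check: n is an even number > 2 that is not a sum of two primes.
        if n <= 2 or n % 2 == 1:
            return False
        return not any(is_prime(p) and is_prime(n - p) for p in range(2, n))

    def search_for_counterexample(ignored_input):
        # Ignores its input and halts if and only if Goldbach's conjecture is false.
        n = 4
        while not is_goldbach_counterexample(n):
            n += 2
        return n

    # If HALT were computable, applying it to this program (on any input) would tell
    # us whether the conjecture is true, without running the search forever.
    # (Do not actually call search_for_counterexample!)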
We start by supposing that the function we are dealing with, in this case $HALT$, is computable, and then we proceed to use this assumption to show that if that were the case, we could compute another function which we know is uncomputable.\n\nSuppose we have a program, which we are also going to call $HALT$, that computes the halting function.\n\nThen we could write a program such as this one:\n\n    DIAGONAL(n):\n        if HALT(n,n):\n            return 1 if M_n(n) == 0 else 0\n        else:\n            return 0\n\n\nThis program computes the diagonal function. As we know such a program cannot possibly exist, we conclude by *reductio ad absurdum* that $HALT$ is not computable.\n\n###Exercise\nProve that $HALT$ is reducible to $DIAGONAL$; i.e., that if $DIAGONAL$ were computable we could compute $HALT$.\n\n%%hidden(Show solution):\nSuppose we want to know whether the program $PROG$ halts on input $n$. Then we first code the auxiliary program:\n\n    AUX(x):\n        PROG(n)\n        return 0\n\n\nThat is, $AUX$ is the program which on any input first runs $PROG$ on input $n$ and then outputs $0$ if $PROG$ ever halts.\n\nThen $DIAG(\\ulcorner AUX\\urcorner)==1$ iff $PROG$ halts on input $n$.\n%%\n\n\n\n###Proof by diagonalization\nAnother nifty proof of uncomputability comes from diagonalization.\n\nAs before, let's suppose $HALT$ is computable. Then we could write a program such as this one:\n\n    prog(n):\n        if HALT(n,n) == 1:\n            loop forever\n        else:\n            return 0\n\nThe program halts on input $n$ iff $M_n$ does not halt on input $n$.\n\nNow, let $\\ulcorner prog\\urcorner$ denote the position that the program $prog$ occupies in our list of programs. Here comes the question: does $prog$ halt when its input is $\\ulcorner prog \\urcorner$?\n\nSupposing either yes or no leads to a contradiction, so we conclude that such a program cannot possibly exist, and it follows from the contradiction that $HALT$ is not computable.\n\n---\nSo now you are ready to start your journey through the realm of computability!\n\nSuggested next steps include examining other uncomputable functions such as the [busy beaver sequence](https://arbital.com/p/), or proceeding to [complexity theory](https://arbital.com/p/).", "date_published": "2016-06-14T20:59:38Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "46h"} {"id": "a19e3fd036fd309a89b178f30af87d2f", "title": "Iff", "url": "https://arbital.com/p/iff", "source": "arbital", "source_type": "text", "text": "Iff is a shorthand for \"if and only if\". Its logical symbol is $\\leftrightarrow$.\n\n\"A iff B\" ($A \\leftrightarrow B$) is quite distinct from \"if A then B\" ($A \\rightarrow B$). Consider the stipulation \"If the dog barks, then it will soon bite\". This would not obligate the dog to bark a warning before biting. The \"if\" relation isn't symmetrical. As such, the dog might sometimes bite spontaneously, with no barking at all.\n\nIf we wanted to ensure that biting is always forewarned by barking, we would instead stipulate \"**Iff** the dog barks, then it will soon bite\". 
This is equivalent to\n\n* \"the dog barks *if and only if* it will soon bite\"\n\n* \"If the dog barks then it will soon bite, and if the dog bites it will have barked beforehand\".\n\n* \"The dog barks only when it will soon bite\"\n\nWith \"iff\", the implication runs in both directions.", "date_published": "2016-10-04T19:23:12Z", "authors": ["Eric Bruylant", "M Yass", "Alexei Andreev"], "summaries": [], "tags": ["Formal definition", "Stub"], "alias": "46m"} {"id": "0130a717ad748990ff93f90fe4bf0a0b", "title": "Proof by contradiction", "url": "https://arbital.com/p/proof_by_contradiction", "source": "arbital", "source_type": "text", "text": "A proof by contradiction (a.k.a. *reductio ad absurdum*, reduction to absurdity) is a [strategy](https://arbital.com/p/5xz) used by mathematicians to show that a mathematical statement is true by proving that the [https://arbital.com/p/-negation](https://arbital.com/p/-negation) of that statement leads to being able to prove that two opposite statements are simultaneously true (a [https://arbital.com/p/-contradiction](https://arbital.com/p/-contradiction)). \n\n##Outline\nThe outline of the strategy is as follows:\n\n1. Suppose that what you want to prove is false.\n2. Derive a contradiction from it.\n3. Conclude that the supposition is wrong.\n\n##Examples\nTo illustrate the concept, we will do a simple, non rigorous reasoning. Imagine yourself in the next situation:\n\nYou are a defense lawyer. Your client is accused of stealing the cookie from the cookie jar. You want to prove her innocence. Lets say you have evidence that the jar is still sealed. Reason as follows:\n\n1. Assume she stole the cookie from the cookie jar.\n2. Then she would have had to open the jar.\n3. The jar is still sealed.\n4. For the jar to be sealed and for her to have opened it is a contradiction.\n5. Hence the assumption in 1 is false (given the deductions below it are true).\n6. Hence she did not steal the cookie from the cookie jar.\n\n\nNow we will work through an actual mathematical example: we will show that $\\sqrt 2$ is not [rational](https://arbital.com/p/4zq); that is, it cannot be expressed as the division of two [natural numbers](https://arbital.com/p/45h).\n\n1. We suppose that $\\sqrt 2$ is rational, which means that there exist $a,b\\in\\mathbb{N}$ such that $\\sqrt 2 = \\frac{a}{b}$. Without loss of generality, we can suppose that $a$ and $b$ [have no common divisors](https://arbital.com/p/coprime), since otherwise we can just divide both numbers by their greatest common divisor to obtain a pair of numbers which satisfy both properties.\n2. If this was the case, $b\\sqrt2=a$. by squaring both sides, we arrive to $2b^2=a^2$. But then $2$ divides $a$, so we can express $a$ as $2n$ for some $n\\in\\mathbb{N}$. Substituing in the previous expression, we arrive to $2b^2 = 4n^2\\implies b^2 =2 n^2$. By the same reasoning, $2$ must divide $b$, but then $a$ and $b$ have a common divisor! We have a contradiction.\n3. We conclude that our original assumption that $\\sqrt 2$ is rational must be false, and thus $\\sqrt 2$ is not rational.\n\n##When to use it\nProof by contradiction is one of the most useful techniques one can use to prove anything.\n\nIn particular, if you get stuck while doing a proof, resorting to proof by contradiction is a great way to keep exploring a problem from a different perspective. 
Even if you do not get to solve the problem, you may get a useful insight about the problem when performing the procedure of proof by contradiction.\n\nAlso, trying to do proof by contradiction may result in a [counterexample](https://arbital.com/p/), which dissolves the problem in question.", "date_published": "2016-08-20T11:27:57Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina", "Joe Zeng"], "summaries": [], "tags": ["Math 1"], "alias": "46z"} {"id": "9406a41469cc426d246e297fe3b9c295", "title": "Derivative", "url": "https://arbital.com/p/derivative_calculus", "source": "arbital", "source_type": "text", "text": "The derivative of $y$ with respect to $x$ describes the [https://arbital.com/p/-rate](https://arbital.com/p/-rate) at which $y$ changes, given a change in $x$. In particular, we consider how tiny changes in one variable affect another variable. To take the derivative of a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy), we can draw a line that is [https://arbital.com/p/-tangent](https://arbital.com/p/-tangent) to a graph of the function. The slope of the tangent line is the value of the derivative at that point. The derivative of a function $f(x)$ is itself a function: it returns, for any $x$, the slope of the line that is tangent to $f(x)$ at the point $(x, f(x))$.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## Examples ##\n\n - The time-derivative of your car's mileage is your car's speed (because your car's speed is how quickly your car's mileage changes *over time*)\n\n - The time-derivative of your car's speed is your acceleration (because acceleration means how quickly your *speed* is changing over time)\n\n - The time-derivative of human population size is the birth rate minus the death rate (because if we take the birth rate minus the death rate, that tells how quickly the population is changing over time)\n\n - The time-derivative of wealth is income minus spending (because if we take income minus spending, that tells us how your wealth is changing over time)\n\n - The time-derivative of blood pumped through your heart is the flow rate through the aortic valve (rates in general describe how something changes over time)\n\n - The time-derivative of charge on a capacitor is the current flowing to it (currents in general are time-derivatives of how much total stuff has flowed)\n\n - The time-derivative of how good of a life you'll have lived is how happy you are right now (according to [hedonic utilitarians](https://foundational-research.org/hedonistic-vs-preference-utilitarianism/))\n\nOkay, let's take another stab at this. Time-derivatives, or derivatives \"with respect to time,\" describe how things change over time. 
We can take derivatives with respect to other things too.\n\n - The derivative of your car's mileage with respect to how much fuel you've burned is your miles per gallon (because your mpg describes how your mileage changes per unit of fuel you burn)\n\n - The derivative of the temperature of a pot of water with respect to how much heat you blast it with is called the heat capacity of water\n\n - The derivative of [potential energy](https://en.wikipedia.org/wiki/Potential_energy) with respect to altitude is gravitational force (a higher up object has some stored energy that it would release if it fell; a lower down object has less \"potential energy.\" The change in potential energy as you change the altitude is why there's a force in the first place.)\n\nThere were two goals with all those examples, one explicit, and one covert. The explicit one was to give you a sense for what derivatives are. The covert one was to quietly suggest that you will never understand the the way the world works unless you understand derivatives. But hey, look at you! You kind of understand derivatives already! Let's get to the math now, shall we?\n\n## Setting Up The Math ##\n\nYou just got your new car.\n\n![](http://o.aolcdn.com/commerce/autodata/images/USC30TSC021B021001.jpg)\nIt's a [Tesla](http://waitbutwhy.com/2015/06/how-tesla-will-change-your-life.html) because you care about the environment almost as much as you care about looking awesome. Your mileage is sitting at 0. The world is your oyster. At time $t = 0$, you put your foot on the accelerator. For the next few seconds, your mileage will be $4.7 t^2$, where your mileage is in meters, and $t$ is in seconds since you pressed your foot on the accelerator. Now the first question is this: if that equation tells us how many meters we've traveled after how many seconds, how fast are we going at any given point in time?\n\nThe astute reader will have noticed that this was the first example of a time-derivative that we gave: the time-derivative of your car's mileage is your car's speed. Let's think about this *sans* math for a second. If we know where we are at any time, we should be able to figure out how fast we're going. There isn't any extra information we need. The only question is how. Well, we take the derivative of the mileage with respect to time to get our speed. In other words:\n\n$$\\frac{\\mathrm{d}}{\\mathrm{d} t} mileage = speed$$\n\nThat means the derivative with respect to $t$, where $t$ is the time in seconds. But we know what the mileage is, *in terms of $t$*. Our mileage is just $4.7 t^2$. So we can write:\n\n$$\\frac{\\mathrm{d}}{\\mathrm{d} t} 4.7 t^2 = speed$$\n\n## Solving The Math ##\n\nSorry to leave you hanging for a sec, but we're going to start with something a little simpler.\n\n![](http://i66.tinypic.com/1p87pl.png)\n$$distance\\ traveled = 2t$$\n\nIf this is the graph of how far someone has traveled after how many seconds, we can see that every second they go 2 more meters. In other words, they are traveling 2 meters per second, which you might notice is the slope of this line. In general, the derivative of a function is like the slope of the function when you graph it out. \n\nThis works fine if our function is something like $distance\\ traveled = 2t$. What if our function isn't a line though. What if it's $distance\\ traveled = t^2$?\n\n![](http://i64.tinypic.com/htu0pv.png)\n\nThings that aren't lines don't have slopes. 
So if this is a graph of our distance traveled over time, it's not as easy to see how fast we were going. But let's say we want to see how fast we were going at $t=1$. If we zoom in enough on that curve, it will start to flatten out into a straight line until we can't tell the difference. The slope of *that* line is what gives us our speed. The process of taking a curve like this one and getting the \"slope\" at any given point is called \"taking the derivative.\"\n\nLet's take the derivative of $d = t^2$, where $d$ is the distance and $t$ is the time. (We'll take the derivative with respect to $t$). Prepare yourself; now take a look at the graph down there.\n\n![](http://i67.tinypic.com/mcxloj.png)\n\nWe know how to find the slope of a line if we're given two points, so we're going to do that, and then slowly move the points together until they're on top of each other. The coordinates of the points are shown above, and we can calculate the slope pretty easily by doing $\\frac{\\Delta d}{\\Delta t}$. This gives us a slope of 2.\n\nNow let's say that our first point is at $(t,t^2)$, and that our second point is $h$ units to the right, so its coordinates are $((t+h),(t+h)^2)$. Now we have:\n$$∆d=(t+h)^2-t^2$$\n$$∆t=(t+h) - t$$\nAlgebradabra:\n$$∆d=2ht + h^2$$\n$$∆t=h$$\n$$\\frac{\\Delta d}{\\Delta t}=\\frac{2ht + h^2}{h}=2t+h$$\n\nNow as we make $h$ really small, the points get closer and closer together, and the slope of the line becomes $2t$. So when $t$ is $1$, the slope is $2$. And when $t$ is $5$, the slope is $10$.\n\nWe say the derivative of $t^2$ is $2t$. With similar logic, you can show that the derivative of $4.7t^2$ is $9.4t$. And *that* means that if you put your foot on the accelerator of your new Tesla at time $t=0$, your speed after $t$ seconds will be $9.4t$. After 1 second, you'll be traveling 9.4 meters per second. After 3 seconds, you'll be going 28.2 meters per second (or about 63 mph).\n\n## Concluding ##\n\nThat's what derivatives are. The Tesla case was just one example of actually finding the derivative of something. Obviously, if our distance traveled had been some totally different function of time, the derivative would have been different. We found that the derivative of $t^2$ is $2t$. Below is a list of other derivatives. You can imagine that if the function on the left was our distance traveled after a time $t$, the function on the right would be our speed at a time $t$. (All of the $c$'s and $n$'s are constants.)\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}c=0$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}ct=c$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}ct^2=2ct$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}ct^3=3ct^2$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}ct^n=nct^{n-1}$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}e^t=e^t$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}\\sin(t)=\\cos(t)$$\n$$\\frac{\\mathrm{d} }{\\mathrm{d} t}\\cos(t)=-\\sin(t)$$\n\nIf you're up for it, try to use the method we showed for solving derivatives to verify some of these. 
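\n\nIf you would rather check them numerically, here is a small Python sketch that approximates a derivative the same way we did above: compute the slope between two nearby points and make $h$ small (the step size $10^{-6}$ is an arbitrary choice).\n\n    import math\n\n    def approx_derivative(f, t, h=1e-6):\n        # Slope of the line through (t, f(t)) and (t + h, f(t + h)) for a small h.\n        return (f(t + h) - f(t)) / h\n\n    print(approx_derivative(lambda t: t**2, 5))         # close to 2*5 = 10\n    print(approx_derivative(lambda t: 4.7 * t**2, 3))   # close to 9.4*3 = 28.2\n    print(approx_derivative(math.sin, 1))               # close to cos(1), about 0.54\n\n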
Good luck!\n\n## See also\n\nIf you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!\n\nArbital is made by people like you; if you think you can explain a mathematical concept then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)!", "date_published": "2016-10-24T17:47:09Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Michael Cohen", "Alexei Andreev"], "summaries": [], "tags": ["Math 1", "B-Class"], "alias": "47d"} {"id": "8aa137c107959d47ffabdaab2185d990", "title": "The characteristic of the logarithm", "url": "https://arbital.com/p/log_characteristic", "source": "arbital", "source_type": "text", "text": "Consider the interpretation of logarithms as [the cost of communicating a message](https://arbital.com/p/45q). Every time the number of possible messages to send doubles, your communication costs increase by the price of a coin, or whatever cheaper [https://arbital.com/p/-storage_medium](https://arbital.com/p/-storage_medium) you have that can communicate one of two messages. It doesn't matter whether the number of possible messages goes from 4 to 8 or whether it goes from 4096 to 8192; in both cases, your costs go up by the price of a coin. It is the factor by which the set grew (or shrank) that affects the cost, not the absolute number of messages added (or removed) from the space of possibilities. If the space of possible messages halves, your costs go down by one coin, regardless of how many possibilities there were before the halving.\n\nAlgebraically, writing $c$ for the function that measures your costs, $c(x \\cdot 2) = c(x) + c(2),$ and, in general, $c(x \\cdot y) = c(x) + c(y),$ where we can interpret $x$ as the number of possible messages before the increase, $y$ as the factor by which the possibilities increased, and $x \\cdot y$ as the number of possibilities after the increase.\n\nThis is the key characteristic of the logarithm: It says that, when the input goes up by a factor of $y$, the quantity measured goes up by a fixed amount (that depends on $y$). When you see this pattern, you can bet that $c$ is a logarithm function. Thus, whenever something you care about goes up by a fixed amount every time something else doubles, you can measure the thing you care about by taking the logarithm of the growing thing. For example:\n\n- Consider the problem of checking whether a date is contained in a gigantic [sorted list](https://arbital.com/p/sorted_list) of dates. You can do this by jumping to the middle of the list, seeing whether your date is earlier or later than the date in the middle, and thereby cutting the search space in half. Each time you do this, you cut the list of dates you're searching through in half, and so the total number of elements you need to look at goes up by one every time the size of the list doubles. Thus, the cost of searching an ordered list grows logarithmically in the size of the list. See also [https://arbital.com/p/binary_search](https://arbital.com/p/binary_search).\n- Consider a colony of bacteria where each bacterium in the colony reproduces once per day. Thus, the size of the colony roughly doubles each day. If you care about how long this colony of bacteria has been growing, you can measure the days by taking the logarithm of the number of bacteria in the colony. 
The logarithm (base 2) counts how many times the colony has doubled (and the log base 3 counts how many times it has tripled, and so on).\n- The length of a number in [https://arbital.com/p/4sl](https://arbital.com/p/4sl) grows more-or-less logarithmically in the magnitude of the number: When the magnitude of the number goes up by a factor of 10, the number of digits it takes to write the number down grows by 1. However, this analogy is not perfect: Sometimes, multiplying a number by two does not increase its length (consider the number 300), and sometimes, dividing a number by 10 does not decrease its length by one digit (consider the number 1). See also [Length isn't quite logarithmic](https://arbital.com/p/log_v_length).\n\nConversely, whenever you see a $\\log_2$ in an equation, you can deduce that someone wants to measure some sort of thing by counting the number of doublings that another sort of thing has undergone. For example, let's say you see an equation where someone takes the $\\log_2$ of a [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq). What should you make of this? Well, you should conclude that there is some quantity that someone wants to measure which can be measured in terms of the number of doublings in that likelihood ratio. And indeed there is! It is known as [](https://arbital.com/p/bayesian_evidence), and the key idea is that the strength of evidence for a hypothesis $A$ over its negation $\\lnot A$ can be measured in terms of $2 : 1$ updates in favor of $A$ over $\\lnot A$. (For more on this idea, see [What is evidence?](https://arbital.com/p/)).\n\nIn fact, a given function $f$ such that $f(x \\cdot y) = f(x) + f(y)$ is almost guaranteed to be a logarithm function — modulo a few technicalities.\n\n[https://arbital.com/p/checkbox](https://arbital.com/p/checkbox)\n\n[Conditional text depending what's next on the path.](https://arbital.com/p/fixme:)", "date_published": "2016-10-19T19:34:50Z", "authors": ["Eric Rogstad", "Adom Hartell", "Nate Soares", "Alexei Andreev"], "summaries": ["Any time you find an output that adds whenever the input multiplies, you're probably looking at a (roughly) logarithmic relationship. For example, imagine storing a number using [digit wheels](https://arbital.com/p/42d). Every time the number goes up by a factor of 10, you need one additional digit wheel: It takes 3 wheels to store the number 500; 4 to store the number 5000; 5 to store the number 50000; and so on. Thus, the relationship between the magnitude of a number and the number of digits it takes to write down is logarithmic. 
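\n\nAs a quick sanity check of this pattern, here is a tiny Python sketch (the particular numbers and bases are arbitrary choices):\n\n    import math\n\n    x, y = 4096, 2\n    # Multiplying the input by y adds a fixed amount (here 1) to the output.\n    print(math.log2(x * y), math.log2(x) + math.log2(y))  # both 13.0\n\n    # The number of decimal digits of n grows roughly like log10(n).\n    for n in [500, 5000, 50000]:\n        print(n, len(str(n)), math.floor(math.log10(n)) + 1)\n\n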
This pattern is the key characteristic of the logarithm, and whenever you see an output adding when the input multiplies, you can measure the output using logarithms."], "tags": ["B-Class"], "alias": "47k"} {"id": "69d19c4e176ed6a73290873821770ee3", "title": "Group homomorphism", "url": "https://arbital.com/p/group_homomorphism", "source": "arbital", "source_type": "text", "text": "summary(Technical): Formally, given two groups $(G, +)$ and $(H, *)$ (which hereafter we will abbreviate as $G$ and $H$ respectively), a group homomorphism from $G$ to $H$ is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) $f$ from the underlying set $G$ to the underlying set $H$, such that $f(a) * f(b) = f(a+b)$ for all $a, b \\in G$.\n\nA group homomorphism is a function between [groups](https://arbital.com/p/3gd) which \"respects the group structure\".\n\n#Definition\n\nFormally, given two groups $(G, +)$ and $(H, *)$ (which hereafter we will abbreviate as $G$ and $H$ respectively), a group homomorphism from $G$ to $H$ is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) $f$ from the underlying set $G$ to the underlying set $H$, such that $f(a) * f(b) = f(a+b)$ for all $a, b \\in G$.\n\n#Examples\n\n - For any group $G$, there is a group homomorphism $1_G: G \\to G$, given by $1_G(g) = g$ for all $g \\in G$. This homomorphism is always [bijective](https://arbital.com/p/499).\n - For any group $G$, there is a (unique) group homomorphism into the group $\\{ e \\}$ with one element and the only possible group operation $e * e = e$. This homomorphism is given by $g \\mapsto e$ for all $g \\in G$. This homomorphism is usually not [injective](https://arbital.com/p/4b7): it is injective if and only if $G$ is the group with one element. (Uniqueness is guaranteed because there is only one *function*, let alone group homomorphism, from any set $X$ to a set with one element.)\n - For any group $G$, there is a (unique) group homomorphism from the group with one element into $G$, given by $e \\mapsto e_G$, the identity of $G$. This homomorphism is usually not [surjective](https://arbital.com/p/4bg): it is surjective if and only if $G$ is the group with one element. (Uniqueness is guaranteed this time by the property proved below that the identity gets mapped to the identity.)\n - For any group $(G, +)$, there is a bijective group homomorphism to another group $G^{\\mathrm{op}}$ given by taking inverses: $g \\mapsto g^{-1}$. The group $G^{\\mathrm{op}}$ is defined to have underlying set equal to that of $G$, and group operation $g +_{\\mathrm{op}} h := h + g$.\n - For any pair of groups $G, H$, there is a homomorphism between $G$ and $H$ given by $g \\mapsto e_H$.\n - There is only one homomorphism between the group $C_2 = \\{ e_{C_2}, g \\}$ with two elements and the group $C_3 = \\{e_{C_3}, h, h^2 \\}$ with three elements; it is given by $e_{C_2} \\mapsto e_{C_3}, g \\mapsto e_{C_3}$. For example, the function $f: C_2 \\to C_3$ given by $e_{C_2} \\mapsto e_{C_3}, g \\mapsto h$ is *not* a group homomorphism, because if it were, then $e_{C_3} = f(e_{C_2}) = f(gg) = f(g) f(g) = h h = h^2$, which is not true. (We have used that the identity gets mapped to the identity.)\n\n# Properties\n\n- The identity gets mapped to the identity. ([Proof.](https://arbital.com/p/49z))\n- The inverse of the image is the image of the inverse. ([Proof.](https://arbital.com/p/4b1))\n- The [image](https://arbital.com/p/3lh) of a group under a homomorphism is another group. 
([Proof.](https://arbital.com/p/4b4))\n- The composition of two homomorphisms is a homomorphism. ([Proof.](https://arbital.com/p/4b6))", "date_published": "2016-06-22T16:47:46Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["A group homomorphism is a function between [groups](https://arbital.com/p/3gd) which \"respects the group structure\"."], "tags": [], "alias": "47t"} {"id": "2f290241350370cebfd494cb73da5aaf", "title": "Cyclic group", "url": "https://arbital.com/p/cyclic_group", "source": "arbital", "source_type": "text", "text": "summary(technical): A [group](https://arbital.com/p/3gd) $G$ is **cyclic** if it has a single [generator](https://arbital.com/p/generator_mathematics): there is one [element](https://arbital.com/p/element_mathematics) $g$ such that every element of the group is a [power](https://arbital.com/p/) of $g$.\n\n# Definition\n\nA cyclic group is a group $(G, +)$ (hereafter abbreviated as simply $G$) with a single generator, in the sense that there is some $g \\in G$ such that for every $h \\in G$, there is $n \\in \\mathbb{Z}$ such that $h = g^n$, where we have written $g^n$ for $g + g + \\dots + g$ (with $n$ terms in the summand).\nThat is, \"there is some element such that the group has nothing in it except powers of that element\".\n\nWe may write $G = \\langle g \\rangle$ if $g$ is a generator of $G$.\n\n# Examples\n\n - $(\\mathbb{Z}, +) = \\langle 1 \\rangle = \\langle -1 \\rangle$\n - The group with two elements (say $\\{ e, g \\}$ with identity $e$ with the only possible group operation $g^2 = e$) is cyclic: it is generated by the non-identity element. Note that there is no requirement that the powers of $g$ be distinct: in this case, $g^2 = g^0 = e$.\n - The integers [modulo](https://arbital.com/p/modular_arithmetic) $n$ form a cyclic group under addition, for any $n$: it is generated by $1$ (or, indeed, by $n-1$).\n - The [symmetric groups](https://arbital.com/p/497) $S_n$ for $n > 2$ are *not* cyclic. This can be deduced from the fact that they are not [abelian](https://arbital.com/p/3h2) (see below).\n\n# Properties\n\n## Cyclic groups are [abelian](https://arbital.com/p/3h2)\nSuppose $a, b \\in G$, and let $g$ be a generator of $G$. Suppose $a = g^i, b = g^j$. Then $ab = g^i g^j = g^{i+j} = g^{j+i} = g^j g^i = ba$.\n\n## Cyclic groups are [countable](https://arbital.com/p/2w0)\nThe elements of a cyclic group are nothing more nor less than $\\{ g^0, g^1, g^{-1}, g^2, g^{-2}, \\dots \\}$, which is an enumeration of the group (possibly with repeats).", "date_published": "2016-07-10T06:04:34Z", "authors": ["Eric Bruylant", "Mark Chimes", "Patrick Stevens"], "summaries": ["The cyclic [groups](https://arbital.com/p/3gd) are the simplest kind of group; they are the groups which can be made by simply \"repeating a single element many times\". For example, the rotations of a polygon."], "tags": [], "alias": "47y"} {"id": "e11c00b67dba32bb2c5397ca847c5c7a", "title": "Integer", "url": "https://arbital.com/p/integer", "source": "arbital", "source_type": "text", "text": "An **integer** is a [https://arbital.com/p/-54y](https://arbital.com/p/-54y) that can be represented as either a [https://arbital.com/p/-45h](https://arbital.com/p/-45h) or its [https://arbital.com/p/-additive_inverse](https://arbital.com/p/-additive_inverse). -4, 0, and 1,003 are examples integers. 499.99 is not an integer. 
Integers are real numbers; they are [rational numbers](https://arbital.com/p/-4zq); and they are not [fractions](https://arbital.com/p/-fraction) or [decimals](https://arbital.com/p/-decimal).\n\n## A Mathier Definition ##\n\nInstead of describing the properties of an integer we'll describe the membership rules for the [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) $\\mathbb{Z}$. After we're done, anything that's been allowed into $\\mathbb{Z}$ counts as an integer.\n\nStart by putting $0$ and $1$ into $\\mathbb{Z}$. Now, pick an element of $\\mathbb{Z}$, pick another element of $\\mathbb{Z}$, and add them together (you can pick the same element twice). Is that number in $\\mathbb{Z}$ yet? No?! Well let's put it in there fast. We can do the same thing as before except instead of adding, we subtract, and if the difference isn't in $\\mathbb{Z}$ yet, we put it in there. Anything that could be let into $\\mathbb{Z}$ with these procedures is an integer.\n\nThis is not an efficient algorithm for building out $\\mathbb{Z}$, but it does show the primary motivation for having integers in the first place. Natural numbers (positive integers) are closed under addition, meaning that if you add any two elements in the set, the sum will be in the set, but natural numbers are not closed under subtraction. Integers are what you get when you expand natural numbers to make a set that is closed under subtraction as well.\n\n## Formal construction\n\nGiven access to the set $\\mathbb{N}$ of [natural numbers](https://arbital.com/p/45h), we may construct $\\mathbb{Z}$ as follows.\nTake the collection of all pairs $(a, b)$ of natural numbers, and take the [https://arbital.com/p/-quotient](https://arbital.com/p/-quotient) by the [https://arbital.com/p/-53y](https://arbital.com/p/-53y) $\\sim$ such that $(a,b) \\sim (c,d)$ if and only if $a+d = b+c$.\n(The intuition is that the pair $(a,b)$ stands for the integer $a-b$, and we take the quotient so that any given integer has just one representative.)\n\nWriting $[a,b]$ for the equivalence class of the pair $(a,b)$, we define the [https://arbital.com/p/-55j](https://arbital.com/p/-55j) structure as:\n\n- $[a,b] + [c,d] = [a+c,b+d]$\n- $[a,b] \\times [c,d] = [ac+bd,bc+ad]$\n- $[a,b] \\leq [c,d]$ if and only if $a+d \\leq b+c$.\n\nThis does define the structure of a totally ordered ring ([proof](https://arbital.com/p/)).", "date_published": "2016-07-07T16:51:31Z", "authors": ["Michael Cohen", "Alexei Andreev", "Dylan Hendrickson", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Needs clickbait"], "alias": "48l"} {"id": "8cdf13d457438e375de34d8758688da3", "title": "Square visualization of probabilities on two events", "url": "https://arbital.com/p/496", "source": "arbital", "source_type": "text", "text": "$$\n\\newcommand{\\true}{\\text{True}}\n\\newcommand{\\false}{\\text{False}}\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\n\n\nsummary: \n$$\n\\newcommand{\\true}{\\text{True}}\n\\newcommand{\\false}{\\text{False}}\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\n\nWe can represent a [probability distribution](https://arbital.com/p/joint_probability_distribution_on_event) 
$\\bP(A,B)$ over two [events](https://arbital.com/p/event_probability) $A$ and $B$ as a square:\n\n\n\nWe could also represent $\\bP$ by [factoring](https://arbital.com/p/factoring_probability), so using $\\bP(A,B) = \\bP(A)\\; \\bP(B \\mid A)$ we'd make this picture:\n\n\n\n\n\nSay we have two [events](https://arbital.com/p/event_probability), $A$ and $B$, and a [probability distribution](https://arbital.com/p/joint_probability_distribution_on_event) $\\bP$ over whether or not they happen. We can represent $\\bP$ as a square:\n\n\n\nSo for example, the [probability](https://arbital.com/p/1rh) $\\bP(A,B)$ of both $A$ and $B$ occurring is the ratio of \\[area of the dark red region\\](https://arbital.com/p/the) to \\[area of the entire square\\](https://arbital.com/p/the):\n\n\n\n\nVisualizing probabilities in a square is neat because we can draw simple pictures that highlight interesting facts about our probability distribution.\n\nBelow are some pictures illustrating:\n\n* [independent events](https://arbital.com/p/4cf) (What happens if the columns and the rows in our square *both* line up?)\n\n* [marginal probabilities](https://arbital.com/p/marginal_probability) (If we're looking at a square of probabilities, where's the probability $\\bP(A)$ of $A$ or the probability $\\bP(\\neg B)$?)\n\n* [conditional probabilities](https://arbital.com/p/1rj) (Can we find in the square the probability $\\bP(B \\mid A)$ of $B$ if we condition on seeing $A$? What about the conditional probability $\\bP(A \\mid B)$?)\n\n* [factoring a distribution](https://arbital.com/p/factoring_probability) (Can we always write $\\bP$ as a square? Why do the columns line up but not the rows?)\n\n* the process of computing [joint probabilities](https://arbital.com/p/1rh) from [factored probabilities](https://arbital.com/p/factoring_probability)\n\nIndependent events\n===\n\nHere's a picture of the joint distribution of [two independent events](https://arbital.com/p/4cf) $A$ and $B$:\n\n\n\nNow the rows for $\\bP(B)$ and $\\bP(\\neg B)$ line up across the two columns. This is because $\\bP(B \\mid A) = \\bP(B) = \\bP(B \\mid \\neg A)$. When $A$ and $B$ are independent, updating on $A$ or $\\neg A$ doesn't change the probability of $B$.\n\nFor more on this visualization of independent events, see the aptly named [https://arbital.com/p/4cl](https://arbital.com/p/4cl).\n\nMarginal probabilities\n===\n\nWe can see the [marginal probabilities](https://arbital.com/p/marginal_probability) of $A$ and $B$ by looking at some of the blocks in our square. For example, to find the probability $\\bP(\\neg A)$ that $A$ doesn't occur, we just need to add up all the blocks where $\\neg A$ happens: $\\bP(\\neg A) = \\bP(\\neg A, B) + \\bP(\\neg A, \\neg B)$. \n\nHere's the probability $\\bP(A)$ of $A$, and the probability $\\bP(\\neg A)$ of $\\neg A$:\n\n\n\nHere's the probability $\\bP(\\neg B)$ of $\\neg B$:\n\n\n\nIn these pictures we're dividing by the area of the whole square. Since the probability of anything at all happening is 1, we could just leave it out, but it'll be helpful for comparison while we think about conditionals next.\n\n\n\nConditional probabilities\n===\n\nWe can start with some probability $\\bP(B)$, and then *assume* that $A$ is true to get a [conditional probability](https://arbital.com/p/1rj) $\\bP(B \\mid A)$ of $B$. 
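\n\nNumerically, these marginal and conditional probabilities are just sums and ratios of the four blocks. Here is a small Python sketch, with made-up numbers for the joint distribution:\n\n    # Joint distribution P(A, B) over the four blocks of the square (made-up numbers).\n    P = {(True, True): 0.24, (True, False): 0.36,\n         (False, True): 0.30, (False, False): 0.10}\n\n    def marginal_A(a):\n        # P(A = a): add up the blocks in the corresponding column.\n        return sum(p for (x, y), p in P.items() if x == a)\n\n    def conditional_B_given_A(b, a):\n        # P(B = b | A = a): the proportion of the A = a column where B = b.\n        return P[(a, b)] / marginal_A(a)\n\n    print(marginal_A(True))                   # 0.6 (up to floating-point rounding)\n    print(conditional_B_given_A(True, True))  # 0.24 / 0.6 = 0.4\n\n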
Conditioning on $A$ being true is like restricting our whole attention to just the possible worlds where $A$ happens:\n\n\n\n\nThen the conditional probability of $B$ given $A$ is the proportion of these $A$ worlds where $B$ also happens:\n\n\n\nIf instead we condition on $\\neg A$, we get:\n\n\n\n\nSo our square visualization gives a nice way to see, at a glance, the conditional probabilities of $B$ given $A$ or given $\\neg A$:\n\n\n\n We don't get such nice pictures for $\\bP(A \\mid B)$: \n\n\n\n\nMore on this next.\n\nFactoring a distribution\n===\n\nRecall the square showing our joint distribution $\\bP$:\n\n\n\nNotice that in the above square, the reddish blocks for $\\bP(A,B)$ and $\\bP(A,\\neg B)$ are the same width and form a column; and likewise the blueish blocks for $\\bP(\\neg A,B)$ and $\\bP(\\neg A,\\neg B)$. This is because we chose to [factor](https://arbital.com/p/factoring_probability) our probability distribution starting with $A$:\n\n$$\\bP(A,B) = \\bP(A) \\bP( B \\mid A)\\ .$$\n\nLet's use the [equivalence](https://arbital.com/p/event_variable_equivalence) between [events](https://arbital.com/p/event_probability) and [binary random variables](https://arbital.com/p/binary_variable), so if we say $\\bP( B= \\true \\mid A= \\false)$ we mean $\\bP(B \\mid \\neg A)$. For any choice of truth values $t_A \\in \\{\\true, \\false\\}$ and $t_B \\in \\{\\true, \\false\\}$, we have \n\n$$\\bP(A = t_A,B= t_B) = \\bP(A= t_A)\\; \\bP( B= t_B \\mid A= t_A)\\ .$$\n\nThe first factor $\\bP(A = t_A)$ tells us how wide to make the red column $(\\bP(A = \\true))$ relative to the blue column $(\\bP(A = \\false))$. Then the second factor $\\bP( B= t_B \\mid A= t_A)$ tells us the proportions of dark $(B = \\true)$ and light $(B = \\false)$ within the column for $A = t_A$. \n\n\n\n\nWe could just as well have factored by $B$ first: \n\n$$\\bP(A = t_A,B= t_B) = \\bP(B= t_B)\\; \\bP( A= t_A \\mid B= t_b)\\ .$$\n\nThen we'd draw a picture like this:\n\n\n\nBy the way, earlier when we factored by $A$ first, we got simple pictures of the probabilities $\\bP(B \\mid A)$ for $B$ conditioned on $A$. Now that we're factoring by $B$ first, we have simple pictures for the conditional probability $\\bP(A \\mid B)$:\n\n\n\n\nand for the conditional probability $\\bP(A \\mid \\neg B)$:\n\n\n\n\n\n\nComputing joint probabilities from factored probabilities \n===\n\nLet's say we know the factored probabilities for $A$ and $B$, factoring by $A$. That is, we know $\\bP(A = \\true)$, and we also know $\\bP(B = \\true \\mid A = \\true)$ and $\\bP(B = \\true \\mid A = \\false)$. 
How can we recover the joint probability $\\bP(A = t_A, B = t_B)$ that $A = t_A$ is the case and also $B = t_B$ is the case?\n\n\nSince \n\n$$\\bP(B = \\false \\mid A = \\true) = \\frac{\\bP(A = \\true, B = \\false)}{\\bP(A = \\true)}\\ ,$$ \n\nwe can multiply the prior $\\bP(A)$ by the conditional $\\bP(\\neg B \\mid A)$ to get the joint $\\bP(A, \\neg B)$:\n\n$$\\bP(A = \\true)\\; \\bP(B = \\false \\mid A = \\true) = \\bP(A = \\true, B = \\false)\\ .$$ \n\nIf we do this at the same time for all the possible truth values $t_A$ and $t_B$, we get back the full joint distribution:\n\n", "date_published": "2016-06-19T08:37:06Z", "authors": ["Tsvi BT", "Eric Rogstad", "Team Arbital"], "summaries": [], "tags": [], "alias": "496"} {"id": "363519aa663c58390f7c6602890a0db0", "title": "Symmetric group", "url": "https://arbital.com/p/symmetric_group", "source": "arbital", "source_type": "text", "text": "The notion that group theory captures the idea of \"symmetry\" derives from the notion of the symmetric group, and the very important theorem due to Cayley that every group is a subgroup of a symmetric group.\n\n# Definition\n\nLet $X$ be a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz). A [bijection](https://arbital.com/p/499) $f: X \\to X$ is a *permutation* of $X$.\nWrite $\\mathrm{Sym}(X)$ for the set of permutations of the set $X$ (so its elements are functions).\n\nThen $\\mathrm{Sym}(X)$ is a group under the operation of composition of functions; it is the *symmetric group on $X$*.\n(It is also written $\\mathrm{Aut}(X)$, for the *automorphism group*.)\n\nWe write $S_n$ for $\\mathrm{Sym}(\\{ 1,2, \\dots, n\\})$, the *symmetric group on $n$ elements*.\n\n# Elements of $S_n$\n\nWe can represent a permutation of $\\{1,2,\\dots, n\\}$ in two different ways, each of which is useful in different situations.\n\n## Double-row notation\n\nLet $\\sigma \\in S_n$, so $\\sigma$ is a function $\\{1,2,\\dots,n\\} \\to \\{1,2,\\dots,n\\}$.\nThen we write $$\\begin{pmatrix}1 & 2 & \\dots & n \\\\ \\sigma(1) & \\sigma(2) & \\dots & \\sigma(n) \\\\ \\end{pmatrix}$$\nfor $\\sigma$.\nThis has the advantage that it is immediately clear where every element goes, but the disadvantage that it is quite hard to see the properties of an element when it is written in double-row notation (for example, \"$\\sigma$ cycles round five elements\" is hard to spot at a glance), and it is not very compact.\n\n## Cycle notation\n\n[Cycle notation](https://arbital.com/p/49f) is a different notation, which has the advantage that it is easy to determine an element's order and to get a general sense of what the element does.\nEvery element of $S_n$ [can be expressed in ](https://arbital.com/p/49k).\n\n## Product of transpositions\n\nIt is a useful fact that every permutation in a (finite) symmetric group [may be expressed](https://arbital.com/p/4cp) as a product of [transpositions](https://arbital.com/p/4cn).\n\n# Examples\n\n- The group $S_1$ is the group of permutations of a one-point set. It contains the identity only, so $S_1$ is the trivial group.\n- The group $S_2$ is isomorphic to the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) of order $2$. 
It contains the identity map and the map which interchanges $1$ and $2$.\n\nThose are the only two [abelian](https://arbital.com/p/3h2) symmetric groups.\nIndeed, in cycle notation, $(123)$ and $(12)$ do not commute in $S_n$ for $n \\geq 3$, because $(123)(12) = (13)$ while $(12)(123) = (23)$.\n\n- The group $S_3$ contains the following six elements: the identity, $(12), (23), (13), (123), (132)$. It is isomorphic to the [https://arbital.com/p/-4cy](https://arbital.com/p/-4cy) $D_6$ on three vertices. ([Proof.](https://arbital.com/p/group_s3_isomorphic_to_d6))\n\n# Why we care about the symmetric groups\n\nA very important (and rather basic) result is [Cayley's Theorem](https://arbital.com/p/49b), which states the link between group theory and symmetry.\n\n%%%knows-requisite([https://arbital.com/p/4bj](https://arbital.com/p/4bj)):\n# Conjugacy classes of $S_n$\n\nIt is a useful fact that the conjugacy class of an element in $S_n$ is precisely the set of elements which share its [cycle type](https://arbital.com/p/4cg). ([Proof.](https://arbital.com/p/4bh))\nWe can therefore [list the conjugacy classes](https://arbital.com/p/4bk) of $S_5$ and their sizes.\n%%%\n\n# Relationship to the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf)\n\nThe [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_n$ is defined as the collection of elements of $S_n$ which can be made by an even number of [transpositions](https://arbital.com/p/4cn). This does form a group ([proof](https://arbital.com/p/4hg)).\n\n%%%knows-requisite([https://arbital.com/p/4h6](https://arbital.com/p/4h6)):\nIn fact $A_n$ is a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $S_n$, obtained by taking the quotient by the [sign homomorphism](https://arbital.com/p/4hk).\n%%%", "date_published": "2016-06-17T13:13:31Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Needs summary"], "alias": "497"} {"id": "fdd553b3eef91d9ca4bf364e3b4e331c", "title": "Bijective function", "url": "https://arbital.com/p/bijective_function", "source": "arbital", "source_type": "text", "text": "A bijective function is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) which has an inverse. 
Equivalently, it is both [injective](https://arbital.com/p/injective_function) and [surjective](https://arbital.com/p/surjective_function).", "date_published": "2016-06-14T13:11:21Z", "authors": ["Eric Rogstad", "Mark Chimes", "Patrick Stevens"], "summaries": [], "tags": ["Definition", "Stub"], "alias": "499"} {"id": "c34c416412b8dba17dc1fefed02e88a1", "title": "Cayley's Theorem on symmetric groups", "url": "https://arbital.com/p/cayley_theorem_symmetric_groups", "source": "arbital", "source_type": "text", "text": "Cayley's Theorem states that every group $G$ appears as a certain subgroup of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $\\mathrm{Sym}(G)$ on the [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) of $G$.\n\n# Formal statement\n\nLet $G$ be a group.\nThen $G$ is [isomorphic](https://arbital.com/p/49x) to a subgroup of $\\mathrm{Sym}(G)$.\n\n# Proof\n\nConsider the left regular [action](https://arbital.com/p/3t9) of $G$ on $G$: that is, the function $G \\times G \\to G$ given by $(g, h) \\mapsto gh$.\nThis [induces a homomorphism](https://arbital.com/p/49c) $\\Phi: G \\to \\mathrm{Sym}(G)$ given by [https://arbital.com/p/-currying](https://arbital.com/p/-currying): $g \\mapsto (h \\mapsto gh)$.\n\nNow the following are equivalent:\n\n- $g \\in \\mathrm{ker}(\\Phi)$ the [kernel](https://arbital.com/p/49y) of $\\Phi$\n- $(h \\mapsto gh)$ is the identity map\n- $gh = h$ for all $h$\n- $g$ is the identity of $G$\n\nTherefore the kernel of the homomorphism is trivial, so it is injective.\nIt is therefore bijective onto its [image](https://arbital.com/p/3lh), and hence an isomorphism onto its image.\n\nSince [the image of a group under a homomorphism is a subgroup of the codomain of the homomorphism](https://arbital.com/p/4b4), we have shown that $G$ is isomorphic to a subgroup of $\\mathrm{Sym}(G)$.", "date_published": "2016-06-15T07:29:23Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "49b"} {"id": "9320a016b71508f1653f365bd159d043", "title": "Group action induces homomorphism to the symmetric group", "url": "https://arbital.com/p/group_action_induces_homomorphism", "source": "arbital", "source_type": "text", "text": "Just as we can [curry](https://arbital.com/p/currying) functions, so we can \"curry\" [homomorphisms](https://arbital.com/p/47t) and [actions](https://arbital.com/p/3t9).\n\nGiven an action $\\rho: G \\times X \\to X$ of group $G$ on set $X$, we can consider what happens if we fix the first argument to $\\rho$. 
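\n\nIn code, fixing the first argument is literally currying. Here is a minimal sketch, using as a stand-in example the left regular action of the integers mod 6 on themselves (addition mod 6):\n\n    n = 6\n    G = range(n)  # Z_6, the integers mod 6\n\n    def rho(g, x):\n        # The left regular action of Z_6 on itself: (g, x) -> g + x (mod 6).\n        return (g + x) % n\n\n    def rho_curried(g):\n        # Fix the first argument: each g gives a map from G to G.\n        return lambda x: rho(g, x)\n\n    for g in G:\n        # Each curried map just permutes (relabels) the elements of G.\n        print(g, [rho_curried(g)(x) for x in G])\n\n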
Writing $\\rho(g)$ for the induced map $X \\to X$ given by $x \\mapsto \\rho(g, x)$, we can see that $\\rho(g)$ is a [bijection](https://arbital.com/p/499).\n\nIndeed, we claim that $\\rho(g^{-1})$ is an inverse map to $\\rho(g)$.\nConsider $\\rho(g^{-1})(\\rho(g)(x))$.\nThis is precisely $\\rho(g^{-1})(\\rho(g, x))$, which is precisely $\\rho(g^{-1}, \\rho(g, x))$.\nBy the definition of an action, this is just $\\rho(g^{-1} g, x) = \\rho(e, x) = x$, where $e$ is the group's identity.\n\nWe omit the proof that $\\rho(g)(\\rho(g^{-1})(x)) = x$, because it is nearly identical.\n\nThat is, we have proved that $\\rho(g)$ is in $\\mathrm{Sym}(X)$, where $\\mathrm{Sym}$ is the [https://arbital.com/p/-497](https://arbital.com/p/-497); equivalently, we can view $\\rho$ as mapping elements of $G$ into $\\mathrm{Sym}(X)$, as well as our original definition of mapping elements of $G \\times X$ into $X$.\n\n# $\\rho$ is a homomorphism in this new sense\n\nIt turns out that $\\rho: G \\to \\mathrm{Sym}(X)$ is a homomorphism.\nIt suffices to show that $\\rho(gh) = \\rho(g) \\rho(h)$, where recall that the operation in $\\mathrm{Sym}(X)$ is composition of permutations.\n\nBut this is true: $\\rho(gh)(x) = \\rho(gh, x)$ by definition of $\\rho(gh)$; that is $\\rho(g, \\rho(h, x))$ because $\\rho$ is a group action; that is $\\rho(g)(\\rho(h, x))$ by definition of $\\rho(g)$; and that is $\\rho(g)(\\rho(h)(x))$ by definition of $\\rho(h)$ as required.", "date_published": "2016-06-14T15:05:26Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "49c"} {"id": "6efa7fb8e0b518dd8ca9194423356a97", "title": "Cycle notation in symmetric groups", "url": "https://arbital.com/p/cycle_notation_symmetric_group", "source": "arbital", "source_type": "text", "text": "There is a convenient way to represent the elements of a [https://arbital.com/p/-497](https://arbital.com/p/-497) on a finite set.\n\n# $k$-cycle\n\nA $k$-cycle is a member of $S_n$ which moves $k$ elements to each other cyclically.\nThat is, letting $a_1, \\dots, a_k$ be distinct in $\\{1,2,\\dots,n\\}$, a $k$-cycle $\\sigma$ is such that $\\sigma(a_i) = a_{i+1}$ for $1 \\leq i < k$, and $\\sigma(a_k) = a_1$, and $\\sigma(x) = x$ for any $x \\not \\in \\{a_1, \\dots, a_k \\}$.\n\nWe have a much more compact notation for $\\sigma$ in this case: we write $\\sigma = (a_1 a_2 \\dots a_k)$.\n(If spacing is ambiguous, we put in commas: $\\sigma = (a_1, a_2, \\dots, a_k)$.)\nNote that there are several ways to write this: $(a_1 a_2 \\dots a_k) = (a_2 a_3 \\dots a_k a_1)$, for example.\nIt is conventional to put the smallest $a_i$ at the start.\n\nNote also that a cycle's inverse is extremely easy to find: the inverse of $(a_1 a_2 \\dots a_k)$ is $(a_k a_{k-1} \\dots a_1)$.\n\nFor example, the double-row notation $$\\begin{pmatrix}1 & 2 & 3 \\\\ 2 & 3 & 1 \\\\ \\end{pmatrix}$$\nis written as $(123)$ or $(231)$ or $(312)$ in cycle notation.\n\nHowever, it is unclear without context which symmetric group $(123)$ lies in: it could be $S_n$ for any $n \\geq 3$.\nSimilarly, $(145)$ could be in $S_n$ for any $n \\geq 5$.\n\n# General elements, not just cycles\n\nNot every element of $S_n$ is a cycle. 
For example, the following element of $S_4$ has [order](https://arbital.com/p/4cq) $2$ so could only be a $2$-cycle, but it moves all four elements:\n$$\\begin{pmatrix}1 & 2 & 3 & 4 \\\\ 2 & 1 & 4 & 3 \\\\ \\end{pmatrix}$$\n\nHowever, it may be written as the composition of the two cycles $(12)$ and $(34)$: it is the result of applying one and then the other.\nNote that since the cycles are disjoint (having no elements in common), [it doesn't matter in which order we perform them](https://arbital.com/p/49g).\nIt is a very important fact that [every permutation may be written as the product of disjoint cycles](https://arbital.com/p/49k).\nIf $\\sigma$ is a permutation obtained by first doing cycle $c_1 = (a_1 a_2 \\dots a_k)$, then by doing cycle $c_2$, then cycle $c_3$, we write $\\sigma = c_3 c_2 c_1$; this is by analogy with function composition, indicating that the first permutation to apply is on the rightmost end of the expression.\n(Be aware that some authors differ on this.)\n\n## Order of an element\n\nFirstly, a cycle has [order](https://arbital.com/p/4cq) equal to its length.\nIndeed, the cycle $(a_1 a_2 \\dots a_k)$ has the effect of rotating $a_1 \\mapsto a_2 \\mapsto a_3 \\dots \\mapsto a_k \\mapsto a_1$, and if we do this $k$ times we get back to where we started.\n(And if we do it fewer times - say $i$ times - we can't get back to where we started: $a_1 \\mapsto a_{i+1}$.)\n\nNow, suppose we have an element in disjoint cycle notation: $(a_1 a_2 a_3)(a_4 a_5)$, say, where all the $a_i$ are different.\nThen the order of this element is $3 \\times 2 = 6$, because: \n\n- $(a_1 a_2 a_3)$ and $(a_4 a_5)$ are disjoint and hence commute, so $((a_1 a_2 a_3)(a_4 a_5))^n = (a_1 a_2 a_3)^n (a_4 a_5)^n$\n- $(a_1 a_2 a_3)^n (a_4 a_5)^n$ is the identity if and only if $(a_1 a_2 a_3)^n = (a_4 a_5)^n = e$ the identity, because otherwise (for instance, if $(a_1 a_2 a_3)^n$ is not the identity) it would move $a_1$.\n- $(a_1 a_2 a_3)^n$ is the identity if and only if $n$ is divisible by $3$, since $(a_1 a_2 a_3)$'s order is $3$.\n- $(a_4 a_5)^n$ is the identity if and only if $n$ is divisible by $2$.\n\nThis reasoning generalises: the order of an element in disjoint cycle notation is equal to the [https://arbital.com/p/-least_common_multiple](https://arbital.com/p/-least_common_multiple) of the lengths of the cycles.\n\n# Examples\n\n- The element $\\sigma$ of $S_5$ given by first performing $(123)$ and then $(345)$ is $(345)(123) = (12453)$. Indeed, the first application takes $1$ to $2$ and the second application does not affect the resulting $2$, so $\\sigma$ takes $1$ to $2$; the first application takes $2$ to $3$ and the second application takes the resulting $3$ to $4$, so $\\sigma$ takes $2$ to $4$; the first application does not affect $4$ and the second application takes $4$ to $5$, so $\\sigma$ takes $4$ to $5$; and so on.\n\nThis example suggests a general procedure for expressing a permutation which is already in cycle form, in *disjoint* cycle form. 
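\n\nHere is a sketch of that procedure in Python, representing a permutation of $\\{1, 2, \\dots, n\\}$ as a dictionary and following each element around until we return to where we started (the composition order matches the rightmost-first convention above):\n\n    def cycle_to_perm(cycle, n):\n        # e.g. the cycle (1 2 3) in S_5 becomes {1: 2, 2: 3, 3: 1, 4: 4, 5: 5}.\n        perm = {x: x for x in range(1, n + 1)}\n        for i, x in enumerate(cycle):\n            perm[x] = cycle[(i + 1) % len(cycle)]\n        return perm\n\n    def compose(sigma, tau):\n        # The permutation obtained by first applying tau, then sigma.\n        return {x: sigma[tau[x]] for x in tau}\n\n    def disjoint_cycles(perm):\n        # Follow each element round its cycle, skipping elements already visited.\n        seen, cycles = set(), []\n        for x in perm:\n            if x not in seen:\n                cycle, y = [x], perm[x]\n                seen.add(x)\n                while y != x:\n                    cycle.append(y)\n                    seen.add(y)\n                    y = perm[y]\n                if len(cycle) > 1:\n                    cycles.append(tuple(cycle))\n        return cycles\n\n    sigma = compose(cycle_to_perm((3, 4, 5), 5), cycle_to_perm((1, 2, 3), 5))\n    print(disjoint_cycles(sigma))  # [(1, 2, 4, 5, 3)], i.e. (12453)\n\n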
It turns out that [this can be done in an essentially unique way](https://arbital.com/p/49k).\n\n## Cycle type\n\nThe [https://arbital.com/p/-4cg](https://arbital.com/p/-4cg) is given by taking the list of lengths of the cycles in the disjoint cycle form.", "date_published": "2016-06-15T08:17:16Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "49f"} {"id": "b20eef2d1a9ea3a5fe7d87f941754f13", "title": "Disjoint cycles commute in symmetric groups", "url": "https://arbital.com/p/disjoint_cycles_commute_symmetric_group", "source": "arbital", "source_type": "text", "text": "Consider two [cycles](https://arbital.com/p/49f) $(a_1 a_2 \\dots a_k)$ and $(b_1 b_2 \\dots b_m)$ in the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$, where all the $a_i, b_j$ are distinct.\n\nThen it is the case that the following two elements of $S_n$ are equal:\n\n- $\\sigma$, which is obtained by first performing the permutation notated by $(a_1 a_2 \\dots a_k)$ and then by performing the permutation notated by $(b_1 b_2 \\dots b_m)$\n- $\\tau$, which is obtained by first performing the permutation notated by $(b_1 b_2 \\dots b_m)$ and then by performing the permutation notated by $(a_1 a_2 \\dots a_k)$\n\nIndeed, $\\sigma(a_i) = (b_1 b_2 \\dots b_m)[https://arbital.com/p/(](https://arbital.com/p/() = (b_1 b_2 \\dots b_m)(a_{i+1}) = a_{i+1}$ (taking $a_{k+1}$ to be $a_1$), while $\\tau(a_i) = (a_1 a_2 \\dots a_k)[https://arbital.com/p/(](https://arbital.com/p/() = (a_1 a_2 \\dots a_k)(a_i) = a_{i+1}$, so they agree on elements of $(a_1 a_2 \\dots a_k)$.\nSimilarly they agree on elements of $(b_1 b_2 \\dots b_m)$; and they both do not move anything which is not an $a_i$ or a $b_j$.\nHence they are the same permutation: they act in the same way on all elements of $\\{1,2,\\dots, n\\}$.\n\nThis reasoning generalises to more than two disjoint cycles, to show that disjoint cycles commute.", "date_published": "2016-06-14T14:53:51Z", "authors": ["Patrick Stevens"], "summaries": ["In a symmetric group, if we are applying a collection of permutations which are each disjoint cycles, we get the same result no matter the order in which we perform the cycles."], "tags": [], "alias": "49g"} {"id": "e6db08fdda06bc0155b06b59630af9c9", "title": "Disjoint cycle notation is unique", "url": "https://arbital.com/p/disjoint_cycle_notation_is_unique", "source": "arbital", "source_type": "text", "text": "", "date_published": "2016-06-14T14:34:15Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Stub"], "alias": "49k"} {"id": "85920bd6b70bccbf2e87f7a9fb71da82", "title": "Pi", "url": "https://arbital.com/p/pi", "source": "arbital", "source_type": "text", "text": "Pi, usually written $π$, is a number equal to the ratio of a circle's [https://arbital.com/p/-circumference](https://arbital.com/p/-circumference) to its [https://arbital.com/p/-diameter](https://arbital.com/p/-diameter). The value of $π$ is approximately $3.141593$.\n\n![](http://rlv.zcache.ca/mathematical_definition_of_pi_ceramic_tiles-r78c5d7b1adfa46d7b24e2c8bc3bb3f65_agtk1_8byvr_324.jpg)\n\nIf the length of a curve seems like an ill-defined concept to you (maybe you only understand how lines could have lengths), consider bigger and bigger [regular polygons](https://arbital.com/p/-regular_polygon) that make better and better approximations of the circle. 
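\n\nHere is a small Python sketch of that idea, using the classic trick (due to Archimedes) of repeatedly doubling the number of sides of a polygon inscribed in a circle, starting from a hexagon; note that no value of $π$ is assumed anywhere, only square roots.\n\n    import math\n\n    sides, s = 6, 1.0  # a regular hexagon inscribed in a circle of radius 1 has side length 1\n    for _ in range(10):\n        # Ratio of the polygon's perimeter to the circle's diameter (which is 2).\n        print(sides, sides * s / 2)\n        # Double the number of sides; the new side length follows from the old one.\n        s = math.sqrt(2 - math.sqrt(4 - s * s))\n        sides *= 2\n\n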
As the number of sides $N$ of the polygon goes to $∞$, the perimeter will approach a length of $π$ times the diameter.\n\nOne could also define $π$ to be the area of a circle divided by the area of a square whose edge is the radius of the circle.\n\n## What Kind of Number It Is ##\n\nIt's not an [https://arbital.com/p/48l](https://arbital.com/p/48l).\n\n![](https://sensemadehere.files.wordpress.com/2015/03/pi.png?w=584)\n\nIf the diameter here is 1, then the perimeter of the hexagon is 3, the perimeter of the square is 4, and the circumference of the circle is in between. There are no integers between 3 and 4.\n\nIt's [not rational](https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational), [nor is it algebraic](https://en.wikipedia.org/wiki/Lindemann%E2%80%93Weierstrass_theorem#Transcendence_of_e_and_.CF.80). It's transcendental.", "date_published": "2016-07-21T16:02:45Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Michael Cohen"], "summaries": [], "tags": [], "alias": "49r"} {"id": "37d26663af702784762e2d3397557c75", "title": "Complexity theory", "url": "https://arbital.com/p/complexity_theory", "source": "arbital", "source_type": "text", "text": "**Complexity theory** is the study of the efficiency of algorithms with respect to several metrics, usually time and memory usage. Complexity theorists aim to classify different problems into [classes of difficulty](https://arbital.com/p/complexity_class) and study the relations that hold between the classes.\n\nWhen studying [computability](https://arbital.com/p/), we are concerned with identifying that which is or is not computable in an ideal sense, without worrying about time or memory limitations.\n\nHowever, in practice we often have to be more pragmatic. A program which takes a [googol](https://arbital.com/p/42m) years to run is not going to see much use. If you need more [gigabytes](https://arbital.com/p/) of memory to solve a computational problem than there are atoms in the universe, you may as well go ahead and declare the problem unsolvable for all practical purposes.\n\n**Complexity theory** raises the standards of computability in drawing the boundary between that which you can do with a computer and that which you cannot. It concerns the study of the [asymptotic behavior](https://arbital.com/p/oh_notation) of programs when fed inputs of growing size, in terms of the resources they consume. The resources with which complexity theorists most often work are the *time* a program takes to finish and the highest *memory usage* %%note:the memory usage is frequently called **space complexity**%% at any given point of the execution.\n\nComplexity theory gives us a deeper understanding of what makes an algorithm efficient, which in turn allows us to develop better and faster algorithms. Surprisingly enough, it turns out that proving that some computational problems are hard to solve has incredibly important practical applications in [https://arbital.com/p/-cryptography](https://arbital.com/p/-cryptography).\n\n----\n\nThe abstract framework in which we develop this theory is that of [Turing machines](https://arbital.com/p/) and [decision problems](https://arbital.com/p/). 
In this context, the [time complexity](https://arbital.com/p/) is associated with the number of steps a TM takes to halt and return an output, while the [space complexity](https://arbital.com/p/) corresponds to the length of the tape we would need for the TM to never fall off when moving left or right.\n\nOne may worry that the complexity measures are highly dependent on the choice of computational framework used. After all, if we allow our TM to skip two spaces per step each time it moves, it is going to take potentially half the time to compute something. However, the asymptotic behavior of complexity is [surprisingly robust](https://arbital.com/p/robustness_of_tm), though there are [some caveats](https://arbital.com/p/failure_of_strong_CT).\n\nThe most interesting characterization of complexity comes in the form of [complexity classes](https://arbital.com/p/), which break down the family of [decision problems](https://arbital.com/p/) into sets of problems which can be solved with particular constraints.\n\nPerhaps the most important complexity class is [$P$](https://arbital.com/p/), the class of decision problems which can be efficiently computed %%note:For example, checking whether a graph is connected or not%%. The second best known class is [$NP$](https://arbital.com/p/), the class of problems whose solutions can be easily checked %%note:An example is factoring a number: it is hard to factor $221$, but easy to multiply $13$ and $17$ and check that $13 \\cdot 17 = 221$%%. It is an open problem whether those two classes are [one and the same](https://arbital.com/p/4bd); that is, whether every problem whose solutions are easy to check is also easy to solve.\n\nThere are many more important complexity classes, and it can be daunting to contemplate the sheer variety with which complexity theory deals. Feel free to take a guided tour through the [complexity zoo](https://arbital.com/p/4b9) if you want an overview of some of the most relevant ones.\n\nAn important concept is that of a [reduction](https://arbital.com/p/). Some complexity classes have problems such that if you were able to solve them efficiently, you could translate other problems in the class to this one and solve them efficiently. Those are called [complete problems of a complexity class](https://arbital.com/p/).\n\n-----\n\nThis page is meant to be a starting point to learn complexity theory from an entry level. If there is any concept which feels mysterious to you, try exploring the greenlinks in their order of appearance. 
If you feel like the concepts presented are too basic, try a different lens.", "date_published": "2016-10-08T15:56:04Z", "authors": ["Eric Bruylant", "Adom Hartell", "Eric Rogstad", "Jaime Sevilla Molina"], "summaries": ["Study of the computational resources needed to solve a problem, usually time and memory"], "tags": [], "alias": "49w"} {"id": "0e4add28e6c0c6e4aac88a2d6b57a6cf", "title": "Group isomorphism", "url": "https://arbital.com/p/group_isomorphism", "source": "arbital", "source_type": "text", "text": "A group isomorphism is a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) which is [bijective](https://arbital.com/p/499).\nWe say that two groups are *isomorphic* if there is an isomorphism between them.\n\nIt turns out that isomorphism is a much more useful concept than true equality of [groups](https://arbital.com/p/-3gd), and it captures the idea that \"these two objects are the same group\": the isomorphism shows us how to relabel the elements to see that they are indeed the same group.\n\nFor example, the trivial group is in some sense \"the only group with one element\", but it can be instantiated in many different ways: as $(\\{ a \\}, +_a)$, or $(\\{ b \\}, +_b)$, and so on (where $+_x$ is the [https://arbital.com/p/-3kb](https://arbital.com/p/-3kb) taking $(x, x)$ to $x$).\nThey all behave in exactly the same ways for the purpose of group theory, but they are not literally identical.\nThey are all isomorphic, though: the map $\\{a \\} \\to \\{ b \\}$ given by $a \\mapsto b$ is an isomorphism of the respective groups.\n\nTwo groups are isomorphic if and only if they have the same [Cayley table](https://arbital.com/p/cayley_table), possibly with rearrangement of rows/columns and with relabelling of elements.", "date_published": "2016-06-15T06:30:30Z", "authors": ["Eric Bruylant", "Mark Chimes", "Patrick Stevens"], "summaries": [], "tags": ["Math 2", "C-Class", "Proposed B-Class"], "alias": "49x"} {"id": "45ad5caf675bfd06f5dfc337c978ab68", "title": "Kernel of group homomorphism", "url": "https://arbital.com/p/kernel_of_group_homomorphism", "source": "arbital", "source_type": "text", "text": "The kernel of a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $f: G \\to H$ is the collection of all elements $g$ in $G$ such that $f(g) = e_H$ the identity of $H$.\n\nIt is important to note that the kernel of any group homomorphism $G \\to H$ is always a subgroup of $G$.\nIndeed:\n\n- if $f(g_1) = e_H$ and $f(g_2) = e_H$ then $e_H = f(g_1) f(g_2) = f(g_1 g_2)$, so the kernel is closed under $G$'s operation;\n- if $f(x) = e_H$ then $e_H = f(e_G) = f(x^{-1} x) = f(x^{-1}) f(x) = f(x^{-1})$ (where we have used that [the image of the identity is the identity](https://arbital.com/p/49z)), so inverses are contained in the putative subgroup;\n- $f(e_G) = e_H$ because the image of the identity is the identity, so the identity is contained in the putative subgroup.\n\nIt turns out that the notion of \"[https://arbital.com/p/-4h6](https://arbital.com/p/-4h6)\" coincides exactly with the notion of \"kernel of homomorphism\". 
([Proof.](https://arbital.com/p/4h7))\nThe \"kernel of homomorphism\" viewpoint of normal subgroups is much more strongly motivated from the point of view of [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7); Timothy Gowers [considers this to be the correct way](https://gowers.wordpress.com/2011/11/20/normal-subgroups-and-quotient-groups/) to introduce the teaching of normal subgroups in the first place.", "date_published": "2016-06-17T13:06:36Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Needs clickbait", "Definition"], "alias": "49y"} {"id": "1b5adddc1c243bd446e35420a846b442", "title": "Image of the identity under a group homomorphism is the identity", "url": "https://arbital.com/p/image_of_identity_under_group_homomorphism", "source": "arbital", "source_type": "text", "text": "For any [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $f: G \\to H$, we have $f(e_G) = e_H$ where $e_G$ is the identity of $G$ and $e_H$ the identity of $H$.\n\nIndeed, $f(e_G) f(e_G) = f(e_G e_G) = f(e_G)$, so premultiplying by $f(e_G)^{-1}$ we obtain $f(e_G) = e_H$.", "date_published": "2016-06-14T17:21:36Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "49z"} {"id": "0796e7704b28a88161136960efce6b00", "title": "Under a group homomorphism, the image of the inverse is the inverse of the image", "url": "https://arbital.com/p/group_homomorphism_image_of_inverse", "source": "arbital", "source_type": "text", "text": "For any [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $f: G \\to H$, we have $f(g^{-1}) = f(g)^{-1}$.\n\nIndeed, $f(g^{-1}) f(g) = f(g^{-1} g) = f(e_G) = e_H$, and similarly for multiplication on the left.", "date_published": "2016-06-14T17:26:14Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4b1"} {"id": "876b303b22ac6f89751ca041999ef0ee", "title": "The image of a group under a homomorphism is a subgroup of the codomain", "url": "https://arbital.com/p/image_of_group_under_homomorphism_is_subgroup", "source": "arbital", "source_type": "text", "text": "Let $f: G \\to H$ be a [https://arbital.com/p/-47t](https://arbital.com/p/-47t), and write $f(G)$ for the set $\\{ f(g) : g \\in G \\}$.\nThen $f(G)$ is a group under the operation inherited from $H$.\n\n# Proof\n\nTo prove this, we must verify the group axioms.\nLet $f: G \\to H$ be a group homomorphism, and let $e_G, e_H$ be the identities of $G$ and of $H$ respectively.\nWrite $f(G)$ for the image of $G$.\n\nThen $f(G)$ is closed under the operation of $H$: since $f(g) f(h) = f(gh)$, so the result of $H$-multiplying two elements of $f(G)$ is also in $f(G)$.\n\n$e_H$ is the identity for $f(G)$: it is $f(e_G)$, so it does lie in the image, while it acts as the identity because $f(e_G) f(g) = f(e_G g) = f(g)$, and likewise for multiplication on the right.\n\nInverses exist, by \"the inverse of the image is the image of the inverse\".\n\nThe operation remains associative: this is inherited from $H$.\n\nTherefore, $f(G)$ is a group, and indeed is a subgroup of $H$.", "date_published": "2016-06-14T17:30:27Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4b4"} {"id": "4a4fe38fe888bf3d3093737625ad39fc", "title": "Expected value", "url": "https://arbital.com/p/expected_value", "source": "arbital", "source_type": "text", "text": "The expected value of an action is the [https://arbital.com/p/-mean](https://arbital.com/p/-mean) numerical outcome of the possible results weighted by their 
[https://arbital.com/p/-1rf](https://arbital.com/p/-1rf). It may actually be impossible to get the expected value, for example, if a coin toss decides between you getting \\$0 and \\$10, then we say you get \"\\$5 in expectation\" even though there is no way for you to get \\$5.\n\nThe expectation of V (often shortened to \"the expected V\") is how much V you expect to get on average. For example, the expectation of a payoff, or an expected payoff, is how much money you will get on average; the expectation of the duration of a speech, or an expected duration, is how long the speech will last \"on average.\"\n\nSuppose V has discrete possible values, say $V = x_{1},$ or $V = x_{2}, ..., $ or $V = x_{k}$. Let $P(x_{i})$ refer to the probability that $V = x_{i}$. Then the expectation of V is given by:\n\n$$\\sum_{i=1}^{k}x_{i}P(x_{i})$$\n\nSuppose V has continuous possible values, x. For instance, let $x \\in \\mathbb{R}$. Let $P(x)$ be the continuous probability distribution, or $\\lim_{dx \\to 0}$ of the probability that $x 0 \\wedge x^2 > 2\\}$ and $A$ is the complement of $B$. In plainer language, $B$ consists of all the numbers greater than $\\sqrt{2}$, but because $\\sqrt{2}$ doesn't exist in the space of rational numbers, we can't use that to formulate our definition. Obviously every element of $A$ is less than every element of $B$, but $A$ has no greatest element either, because we can create a sequence of numbers in $A$ that gets bigger and bigger (as it approaches $\\sqrt{2}$) but never stops at a maximum value.\n\nFor each of these \"strict cuts\" where neither set has a \"boundary element\", we define a new irrational number to \"fill in the gap\", just like with the Cauchy sequences. For the Dedekind cuts where one of the sets does have a least or greatest element, we define a real number equal to that rational number.\n\nThis definition has the advantage that each real number is represented by a unique Dedekind cut, unlike the Cauchy sequences where multiple sequences can converge to the same number.", "date_published": "2016-08-16T20:23:25Z", "authors": ["Kevin Clancy", "M Yass", "Michael Cohen", "Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Start", "Needs summary"], "alias": "4bc"} {"id": "3b94b981139f727229f9f7d8a0746b34", "title": "P vs NP", "url": "https://arbital.com/p/P_vs_NP", "source": "arbital", "source_type": "text", "text": "The greatest currently open problem in complexity theory and computer science in general, which talks about the relationship between the class of efficiently solvable problems, [$P$](https://arbital.com/p/), and the class of problems with easily checkable solutions, [$NP$](https://arbital.com/p/).\n\nIt is a basic result that [$P\\subseteq NP$](https://arbital.com/p/). 
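\n\nTo make the \"solutions are easy to check\" half of this concrete, here is a minimal illustrative sketch (the function names and sample numbers are invented for the example), using the $NP$-complete subset-sum problem: verifying a proposed subset takes time polynomial in the input size, while the naive solver below may inspect all $2^n$ subsets.\n\n```python\nfrom itertools import combinations\n\ndef verify(numbers, target, candidate):\n    # Polynomial-time check: the candidate must be drawn from `numbers`\n    # (respecting multiplicity) and must sum to `target`.\n    pool = list(numbers)\n    for x in candidate:\n        if x in pool:\n            pool.remove(x)\n        else:\n            return False\n    return sum(candidate) == target\n\ndef brute_force_solve(numbers, target):\n    # Exhaustive search: tries every subset, so it takes exponential time.\n    for r in range(len(numbers) + 1):\n        for subset in combinations(numbers, r):\n            if sum(subset) == target:\n                return subset\n    return None\n\nnums = [3, 34, 4, 12, 5, 2]\nprint(brute_force_solve(nums, 9))   # (4, 5)\nprint(verify(nums, 9, (4, 5)))      # True\n```\n\n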
However, we still do not know whether the reverse inclusion, $P \\supseteq NP$, holds.\n\nA positive, constructive result would have profound implications throughout all the fields of science and math, not to mention the whole of society.", "date_published": "2016-06-15T14:20:07Z", "authors": ["Mark Chimes", "Jaime Sevilla Molina", "Alexei Andreev"], "summaries": ["Are problems with solutions easy to check also easy to solve?"], "tags": ["Stub"], "alias": "4bd"} {"id": "6a9393c6331eeb7e272ab4819a738fa4", "title": "P vs NP: Arguments against P=NP", "url": "https://arbital.com/p/arguments_against_P_NP", "source": "arbital", "source_type": "text", "text": "#Natural proofs\nThere are results showing that if $P \\neq NP,$ then lower bounds in complexity are inherently harder to prove in a [technical yet natural sense](https://arbital.com/p/natural_proof). In other words, if $P \\neq NP$ then proving $P \\neq NP$ is hard.\n\nThe opposite proposition for $P=NP$ is also expected to be true. That is, it would make sense that if $P=NP,$ then it should be easier to prove, since we could build far more efficient [theorem provers](https://arbital.com/p/).\n\n#Empirical argument\nWe have been trying to get a fast algorithm for [$NP$-complete](https://arbital.com/p/) problems since the 70s, without success. And take into account that \"we\" does not only comprise a small group of theoretical computer scientists, but a whole industry trying to get faster algorithms for commercial purposes.\n\nOne could also argue more weakly that if $P=NP$, then evolution could have made use of this advantage to speed up its search process, or create more efficient minds to solve $NP$-complete problems at thoughtspeed.\n\n#The arithmetical hierarchy argument\nSome authors have drawn an analogy between the [polynomial hierarchy](https://arbital.com/p/) and the [arithmetical hierarchy](https://arbital.com/p/1mg).\n\nThere are results showing that the [arithmetical hierarchy is strict](https://arbital.com/p/), and if the analogy holds then we would have that the polynomial hierarchy is strict as well, [which automatically implies $P \\neq NP$](https://arbital.com/p/collapse_of_the_polynomial_hierarchy).", "date_published": "2016-07-08T06:15:00Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "Jaime Sevilla Molina"], "summaries": ["- Proving $P \\neq NP$ is harder if true than proving $P=NP$ if true.\n - After 50 years we haven't found any efficient algorithm for $NP$-complete problems. \n - The polynomial hierarchy is compared to the arithmetical hierarchy, and the analogy implies $P \\neq NP$"], "tags": ["Stub"], "alias": "4bf"} {"id": "786a82266ced73550751e2c96bb36bdf", "title": "Surjective function", "url": "https://arbital.com/p/surjective_function", "source": "arbital", "source_type": "text", "text": "A function $f:A \\to B$ is *surjective* if every $b \\in B$ has some $a \\in A$ such that $f(a) = b$.\nThat is, its [codomain](https://arbital.com/p/3lg) is equal to its [image](https://arbital.com/p/3lh).\n\nThis concept is commonly referred to as being \"onto\", as in \"The function $f$ is onto.\"\n\n# Examples\n\n- The function $\\mathbb{N} \\to \\{ 6 \\}$ (where $\\mathbb{N}$ is the set of [natural numbers](https://arbital.com/p/45h)) given by $n \\mapsto 6$ is surjective. 
However, the same function viewed as a function $\\mathbb{N} \\to \\mathbb{N}$ is not surjective, because it does not hit the number $4$, for instance.\n- The function $\\mathbb{N} \\to \\mathbb{N}$ given by $n \\mapsto n+5$ is *not* surjective, because it does not hit the number $2$, for instance: there is no $a \\in \\mathbb{N}$ such that $a+5 = 2$.", "date_published": "2016-06-14T20:17:40Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Definition"], "alias": "4bg"} {"id": "316c107cb2ea36d47d7a4769a1e37922", "title": "Conjugacy class is cycle type in symmetric group", "url": "https://arbital.com/p/conjugacy_class_is_cycle_type_in_symmetric_group", "source": "arbital", "source_type": "text", "text": "In the [https://arbital.com/p/-497](https://arbital.com/p/-497) on a finite set, the [https://arbital.com/p/-4bj](https://arbital.com/p/-4bj) of an element is determined exactly by its [cycle type](https://arbital.com/p/4cg).\n\nMore precisely, two elements of $S_n$ are [conjugate](https://arbital.com/p/group_conjugate) in $S_n$ if and only if they have the same cycle type.\n\n# Proof\n\n## Same conjugacy class implies same cycle type\nSuppose $\\sigma$ has the cycle type $n_1, \\dots, n_k$; write $$\\sigma = (a_{11} a_{12} \\dots a_{1 n_1})(a_{21} \\dots a_{2 n_2}) \\dots (a_{k 1} a_{k 2} \\dots a_{k n_k})$$\nLet $\\tau \\in S_n$.\n\nThen $\\tau \\sigma \\tau^{-1}(\\tau(a_{ij})) = \\tau \\sigma(a_{ij}) = \\tau(a_{i (j+1)})$, where $a_{i (n_i+1)}$ is taken to be $a_{i 1}$.\n\nTherefore $$\\tau \\sigma \\tau^{-1} = (\\tau(a_{11}) \\tau(a_{12}) \\dots \\tau(a_{1 n_1}))(\\tau(a_{21}) \\dots \\tau(a_{2 n_2})) \\dots (\\tau(a_{k 1}) \\tau(a_{k 2}) \\dots \\tau(a_{k n_k}))$$\nwhich has the same cycle type as $\\sigma$ did.\n\n## Same cycle type implies same conjugacy class\n\nSuppose $$\\pi = (b_{11} b_{12} \\dots b_{1 n_1})(b_{21} \\dots b_{2 n_2}) \\dots (b_{k 1} b_{k 2} \\dots b_{k n_k})$$\nso that $\\pi$ has the same cycle type as the $\\sigma$ from the previous direction of the proof.\n\nThen define $\\tau(a_{ij}) = b_{ij}$, and insist that $\\tau$ does not move any other elements.\n\nNow $\\tau \\sigma \\tau^{-1} = \\pi$ by the final displayed equation of the previous direction of the proof, so $\\sigma$ and $\\pi$ are conjugate.\n\n# Example\nThis result makes it rather easy to [list the conjugacy classes](https://arbital.com/p/4bk) of $S_5$.", "date_published": "2016-06-16T18:38:51Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4bh"} {"id": "98cc311a1456e4ee82c612b53b1581e1", "title": "Conjugacy class", "url": "https://arbital.com/p/conjugacy_class", "source": "arbital", "source_type": "text", "text": "Given an element $g$ of a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$, the conjugacy class of $g$ in $G$ is $\\{ x g x^{-1} : x \\in G \\}$.\nIt is the collection of elements to which $g$ is [conjugate](https://arbital.com/p/4gk).", "date_published": "2016-06-20T07:06:50Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Formal definition", "Stub"], "alias": "4bj"} {"id": "e2badeb6fc5c40ba4ee1f1e288c3ce9e", "title": "Conjugacy classes of the symmetric group on five elements", "url": "https://arbital.com/p/symmetric_group_five_conjugacy_classes", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_5$ on five elements has size $5! 
= 120$ (where the exclamation mark denotes the [https://arbital.com/p/-factorial](https://arbital.com/p/-factorial) function).\nBy the result that [in a symmetric group, conjugacy classes coincide with cycle types](https://arbital.com/p/4bh), we can list the conjugacy classes of $S_5$ easily.\n\n# The table\n\n$$\\begin{array}{|c|c|c|c|}\n\\hline\n\\text{Representative}& \\text{Size of class} & \\text{Cycle type} & \\text{Order of element} \\\\ \\hline\n(12345) & 24 & 5 & 5 \\\\ \\hline\n(1234) & 30 & 4,1 & 4 \\\\ \\hline\n(123) & 20 & 3,1,1 & 3 \\\\ \\hline\n(123)(45) & 20 & 3,2 & 6 \\\\ \\hline\n(12)(34) & 15 & 2,2,1 & 2 \\\\ \\hline\n(12) & 10 & 2,1,1,1 & 2 \\\\ \\hline\ne & 1 & 1,1,1,1,1 & 1 \\\\ \\hline\n\\end{array}$$\n\n# Determining the list of cycle types and sizes\n\nThere are five elements to permute; we need to find all the possible ways of grouping them up into disjoint cycles.\nWe will go through this systematically.\nNote first that since there are only five elements to permute, there cannot be any element with a $6$-cycle or higher.\n\n## Those with largest cycle of length $5$\n\nIf there is a $5$-cycle, then it permutes everything, so its [cycle type](https://arbital.com/p/4cg) is $5$.\nThat is, we can take a representative $(12345)$, and this is the only conjugacy class with a $5$-cycle.\n\nRecall that every cycle of length $5$ may be written in five different ways: $(12345)$ or $(23451)$ or $(34512)$, and so on.\nWithout loss of generality, we may assume $1$ comes at the start (by cycling round the permutation if necessary).\n\nThen there are $4!$ ways to fill in the remaining slots of the cycle (where the exclamation mark denotes the [https://arbital.com/p/-factorial](https://arbital.com/p/-factorial) function).\n\nHence there are $24$ elements of this conjugacy class.\n\n## Those with largest cycle of length $4$\n\nIf there is a $4$-cycle, then it permutes everything except one element, so its cycle type must be $4,1$ (abbreviated as $4$).\nThat is, we can take a representative $(1234)$, and this is the only conjugacy class with a $4$-cycle.\n\nEither the element $1$ is permuted by the $4$-cycle, or it is not.\n\n- In the first case, we have $4$ possible ways to pick the image $a$ of $1$; then $3$ possible ways to pick the image $b$ of $a$; then $2$ possible ways to pick the image $c$ of $b$; then $c$ must be sent back to $1$.\nThat is, there are $4 \\times 3 \\times 2 = 24$ possible $4$-cycles containing $1$.\n- In the second case, the cycle does not contain $1$, so there are $3$ possible ways to pick the image $a$ of $2$; then $2$ possible ways to pick the image $b$ of $a$; then $1$ possible way to pick the image $c$ of $b$; then $c$ must be sent back to $2$.\nSo there are $3 \\times 2 \\times 1 = 6$ possible $4$-cycles not containing $1$.\n\nThis comes to a total of $30$ possible $4$-cycles in this conjugacy class.\n\n## Those with largest cycle of length $3$\n\nNow we have two possible conjugacy classes: $3,1,1$ and $3,2$.\n\n### The $3,1,1$ class\n\nAn example representative for this class is $(123)$.\n\nWe proceed with a slightly different approach to the $4,1$ case.\nUsing the notation for the [https://arbital.com/p/-binomial_coefficient](https://arbital.com/p/-binomial_coefficient), we have $\\binom{5}{3} = 10$ possible ways to select the numbers which appear in the $3$-cycle.\nEach selection has two distinct ways it could appear as a $3$-cycle: the selection $\\{1,2,3\\}$ can appear as $(123)$ (or the duplicate cycles $(231)$ and $(312)$), or as $(132)$ (or 
the duplicate cycles $(321)$ or $(213)$).\n\nThat is, we have $2 \\times 10 = 20$ elements of this conjugacy class.\n\n### The $3,2$ class\n\nAn example representative for this class is $(123)(45)$.\n\nAgain, there are $\\binom{5}{3} = 10$ possible ways to select the numbers which appear in the $3$-cycle; having made this selection, we have no further choice about the $2$-cycle.\n\nGiven a selection of the elements of the $3$-cycle, as before we have two possible ways to turn it into a $3$-cycle.\n\nWe are also given the selection of the elements of the $2$-cycle, but there is no choice about how to turn this into a $2$-cycle because, for instance, $(12)$ is equal to $(21)$ as cycles.\nSo this time the selection of the elements of the $3$-cycle has automatically determined the corresponding $2$-cycle.\n\nHence there are again $2 \\times 10 = 20$ elements of this conjugacy class.\n\n## Those with largest cycle of length $2$\n\nThere are two possible cycle types: $2,2,1$ and $2,1,1,1$.\n\n### The $2,2,1$ class\n\nAn example representative for this class is $(12)(34)$.\n\nThere are $\\binom{5}{2}$ ways to select the first two elements; then once we have done so, there are $\\binom{3}{2}$ ways to select the second two.\nHaving selected the elements moved by the first $2$-cycle, there is only one distinct way to make them into a $2$-cycle, since (for example) $(12)$ is equal to $(21)$ as permutations; similarly the selection of the elements determines the second $2$-cycle.\n\nHowever, this time we have double-counted each element, since (for example) the permutation $(12)(34)$ is equal to $(34)(12)$ by the result that [disjoint cycles commute](https://arbital.com/p/49g).\n\nTherefore there are $\\binom{5}{2} \\times \\binom{3}{2} / 2 = 15$ elements of this conjugacy class.\n\n### The $2,1,1,1$ class\n\nAn example representative for this class is $(12)$.\n\nThere are $\\binom{5}{2}$ ways to select the two elements which this cycle permutes.\nOnce we have selected the elements, there is only one distinct way to put them into a $2$-cycle, since (for example) $(12)$ is equal to $(21)$ as permutations.\n\nTherefore there are $\\binom{5}{2} = 10$ elements of this conjugacy class.\n\n## Those with largest cycle of length $1$\n\nThere is only the identity in this class, so it is of size $1$.", "date_published": "2016-06-15T13:25:24Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4bk"} {"id": "c08327750f196ede25996c5549eda438", "title": "Church-Turing thesis", "url": "https://arbital.com/p/CT_thesis", "source": "arbital", "source_type": "text", "text": "The **Church-Turing thesis** (often abbreviated **CT thesis**) states:\n> Every effectively computable function is Turing-computable\n\nThat is, for every function which can be computed by physical means %%note:So no [hypercomputers](https://arbital.com/p/)%% there exists a [Turing machine](https://arbital.com/p/) which computes exactly that.\n\nThe Church-Turing thesis is not a definite mathematical statement, but an inductive statement which affirms that every sensible model of computation we can come up with is equivalent or at least [reducible](https://arbital.com/p/reduction) to the model proposed by Turing. 
Thus, we cannot prove it in a mathematical sense, but we can [gather evidence for it](https://arbital.com/p/).\n\nFor example, this model was proven to coincide with Church's lambda calculus, another universal model of computation, and the equivalence between Church's lambda calculus and Turing's automatic machines is often taken as [evidence](https://arbital.com/p/4j7) that they correctly capture our intuition of \"effectively computable\".\n\nThere are many [consequences](https://arbital.com/p/) of the CT thesis for computer science in general, artificial intelligence, [epistemology](https://arbital.com/p/11w), and other fields of knowledge.", "date_published": "2016-07-21T21:12:21Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Eric Bruylant", "Connor Flexman", "Jaime Sevilla Molina"], "summaries": ["Every [https://arbital.com/p/-effectively_computable](https://arbital.com/p/-effectively_computable) [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) is [Turing-computable](https://arbital.com/p/Turing_machine)."], "tags": ["Needs summary", "Stub"], "alias": "4bw"} {"id": "6e01b36fb5cd2efc0066edab38220d6d", "title": "Properties of the logarithm", "url": "https://arbital.com/p/log_properties", "source": "arbital", "source_type": "text", "text": "With a [solid interpretation of logarithms](https://arbital.com/p/45q) under our belt, we are now in a position to look at the basic properties of the logarithm and understand what they are saying. The defining [characteristic of logarithm functions](https://arbital.com/p/47k) is that they are [https://arbital.com/p/-real_valued](https://arbital.com/p/-real_valued) functions $f$ such that\n\n__Property 1: Multiplying inputs adds outputs.__\n$$\nf(x \\cdot y) = f(x) + f(y) \\tag{1}\n$$\nfor all $x, y \\in$ [$\\mathbb R^+$](https://arbital.com/p/positive_reals). This says that whenever the input grows (or shrinks) by a factor of $y$, the output goes up (or down) by only a fixed amount, which depends on $y$. In fact, equation (1) alone tells us quite a bit about the behavior of $f,$ and from it, we can _almost_ guarantee that $f$ is a logarithm function. First, let's see how far we can get using equation (1) all by itself:\n\n---\n\n__Property 2: 1 is mapped to 0.__\n$$\nf(1) = 0. \\tag{2}\n$$\n\nThis says that the amount the output changes if the input grows by a factor of 1 is zero — i.e., the output does not change if the input changes by a factor of 1. This is obvious, as \"the input changed by a factor of 1\" means \"the input did not change.\"\n\n__Exercise:__ Prove (2) from (1).\n%%hidden(Proof):\n$$f(x) = f(x \\cdot 1) = f(x) + f(1),\\text{ so }f(1) = 0.$$\n%%\n\n---\n\n__Property 3: Reciprocating the input negates the output.__\n$$\nf(x) = -f\\left(\\frac{1}{x}\\right). \\tag{3}\n$$\n\nThis says that the way that growing the input by a factor of $x$ changes the output is exactly the opposite from the way that shrinking the input by a factor of $x$ changes the output. 
In terms of the \"communication cost\" interpretation, if doubling (or tripling, or $n$-times-ing) the possibility space increases costs by $c$, then halving (or thirding, or $n$-parts-ing) the space decreases costs by $c.$\n\n__Exercise:__ Prove (3) from (2) and (1).\n%%hidden(Proof):\n$$x \\cdot \\frac{1}{x} = 1,\\text{ so }f(1) = f\\left(x \\cdot \\frac{1}{x}\\right) = f(x) + f\\left(\\frac{1}{x}\\right).$$\n\n$$f(1)=0,\\text{ so }f(x)\\text{ and }f\\left(\\frac{1}{x}\\right)\\text{ must be opposites.}$$\n%%\n\n---\n\n__Property 4: Dividing inputs subtracts outputs.__\n\n$$\nf\\left(\\frac{x}{y}\\right) = f(x) - f(y). \\tag{4}\n$$\n\nThis follows immediately from (1) and (3).\n\n__Exercise:__ Give an interpretation of (4).\n%%hidden(Interpretation):\nThere are at least two good interpretations:\n\n1. $f\\left(x \\cdot \\frac{1}{y}\\right) = f(x) - f(y),$ i.e., shrinking the input by a factor of $y$ is the opposite of growing the input by a factor of $y.$\n2. $f\\left(z \\cdot \\frac{x}{y}\\right) = f(z) + f(x) - f(y),$ i.e., growing the input by a factor of $\\frac{x}{y}$ affects the output just like growing the input by $x$ and then shrinking it by $y.$\n\nTry translating these into the communication cost interpretation if it is not clear why they're true.\n%%\n\n---\n\n__Property 5: Exponentiating the input multiplies the output.__\n\n$$\nf\\left(x^n\\right) = n \\cdot f(x). \\tag{5}\n$$\n\nThis says that multiplying the input by $x$, $n$ times incurs $n$ identical changes to the output. In terms of the communication cost metaphor, this is saying that you can emulate an $x^n$ digit using $n$ different $x$-digits.\n\n__Exercise:__ Prove (5).\n%%hidden(Proof):\nThis is easy to prove when $n \\in \\mathbb N:$\n$$f\\left(x^n\\right) = f(\\underbrace{x \\cdot x \\cdot \\ldots x}_{n\\text{ times}}) = \\underbrace{f(x) + f(x) + \\ldots f(x)}_{n\\text{ times}} = n \\cdot f(x).$$\n\nFor $n \\in \\mathbb Q,$ this is a bit more difficult; we leave it as an exercise to the reader. Hint: Use the proof of (6) below, for $n \\in \\mathbb N,$ to bootstrap up to the case where $n \\in \\mathbb Q.$\n\nFor $n \\in \\mathbb R,$ this is actually _not_ provable from (1) alone; we need an additional assumption (such as [https://arbital.com/p/-continuity](https://arbital.com/p/-continuity)) on $f$.\n%%\n\nProperty 5 is actually _false,_ in full generality — it's possible to create a function $f$ that obeys (1), and obeys (5) for $n \\in \\mathbb Q,$ but which exhibits pathological behavior on irrational numbers. For more on this, see [pathological near-logarithms](https://arbital.com/p/).\n\nThis is the first place that property (1) fails us: 5 is true for $n \\in \\mathbb Q,$ but if we want to guarantee that it's true for $n \\in \\mathbb R,$ we need $f$ to be [continuous](https://arbital.com/p/continuity), i.e. we need to ensure that if $f$ follows 5 on the rationals it's not allowed to do anything insane on irrational numbers only.\n\n---\n\n__Property 6: Rooting the input divides the output.__\n\n$$\nf(\\sqrt[n]{x}) = \\frac{f(x)}{n}. \\tag{6}\n$$\n\nThis says that, to change the output one $n$th as much as you would if you multiplied the input by $x$, multiply the input by the $n$th root of $x$. 
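\n\nAs a quick numerical sanity check of (6), here is a minimal sketch with $f = \\log_2$, $x = 64$, and $n = 2$ (the particular values are chosen purely for illustration):\n\n```python\nimport math\n\nx, n = 64, 2\nlhs = math.log2(math.sqrt(x))   # log2 of the square root of 64, i.e. log2(8)\nrhs = math.log2(x) / n          # log2(64) divided by 2\nprint(lhs, rhs)                 # 3.0 3.0\n```\n\n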
See [https://arbital.com/p/44l](https://arbital.com/p/44l) for a physical interpretation of this fact.\n\n__Exercise:__ Prove (6).\n%%hidden(Proof):\n$$(\\sqrt[n]{x})^n = x,\\text{ so }f\\left((\\sqrt[n]{x})^n\\right)\\text{ has to equal }f(x).$$\n\n$$f\\left((\\sqrt[n]{x})^n\\right) = n \\cdot f(\\sqrt[n]{x}),\\text{ so }f(\\sqrt[n]{x}) = \\frac{f(x)}{n}.$$\n%%\n\nAs with (5), (6) is always true if $n \\in \\mathbb Q,$ but not necessarily always true if $n \\in \\mathbb R.$ To prove (6) in full generality, we additionally require that $f$ be continuous.\n\n---\n\n__Property 7: The function is either trivial, or sends some input to 1.__\n\n$$\n\\text{Either $f$ sends all inputs to $0$, or there exists a $b \\neq 1$ such that $f(b)=1.$}\\tag{7}\n$$\n\nThis says that either $f$ is very boring (and does nothing regardless of its inputs), or there is some particular factor $b$ such that when the input changes by a factor of $b$, the output changes by exactly $1$. In the communication cost interpretation, this says that if you're measuring communication costs, you've got to pick some unit (such as $b$-digits) with which to measure.\n\n__Exercise:__ Prove (7).\n%%%hidden(Proof):\nSuppose $f$ does not send all inputs to $0$, and let $x$ be an input that $f$ sends to some $y \\neq 0.$ Then $f(\\sqrt[y]{x}) = \\frac{f(x)}{y} = 1.$%%note: You may be wondering, \"what if $y$ is negative, or a fraction?\" If so, see [Strange roots](https://arbital.com/p/). Short version: $\\sqrt[-3/4]{x}$ is perfectly well-defined.%%\n\n$b$ is $\\sqrt[y]{x}.$ We know that $b \\neq 1$ because $f(b) = 1$ whereas, by (2), $f(1) = 0$.\n%%%\n\n---\n\n__Property 8: If the function is continuous, it is either trivial or a logarithm.__\n\n$$\n\\text{If $f(b)=1$ then } f(b^x) = x. \\tag{8}\n$$\n\nThis property follows immediately from (5). Thus, (8) is always true if $x$ is a [rational](https://arbital.com/p/4zq), and if $f$ is continuous then it's also true when $x$ is irrational.\n\nProperty (8) states that if $f$ is non-trivial, then it inverts [exponentials](https://arbital.com/p/4ts) with base $b.$ In other words, $f$ counts the number of $b$-factors in $x$. In other words, $f$ counts how many times you need to multiply $1$ by $b$ to get $x$. In other words, $f = \\log_b$!\n\n---\n\nMany texts take (8) to be the defining characteristic of the logarithm. As we just demonstrated, one can also define logarithms by (1) as [continuous](https://arbital.com/p/continuity) [non-trivial](https://arbital.com/p/trivial_mathematics) functions whose outputs grow by a constant (that depends on $y$) whenever their inputs grow by a factor of $y$. All other properties of the logarithm follow from that.\n\nIf you want to remove the \"continuous\" qualifier, you're still fine as long as you stick to [rational](https://arbital.com/p/4zq) inputs. If you want to remove the \"non-trivial\" qualifier, you can interpret the function $z$ that sends everything to zero as [$\log_\infty$](https://arbital.com/p/4c8). 
Allowing $\log_\infty$ and restricting ourselves to rational inputs, _every_ function $f$ that satisfies equation (1) is [isomorphic](https://arbital.com/p/4f4) to a logarithm function.\n\nIn other words, if you find a function whose output changes by a constant (that depends on $y$) whenever its input grows by a factor of $y$, there is basically only one way it can behave. Furthermore, that function only has one degree of freedom — the choice of $b$ such that $f(b)=1.$ As we will see next, even that degree of freedom is rather paltry: All logarithm functions behave in essentially the same way. As such, if we find any $f$ such that $f(x \\cdot y) = f(x) + f(y)$ (or any physical process well-modeled by such an $f$), then we immediately know quite a bit about how $f$ behaves.", "date_published": "2016-10-20T23:01:11Z", "authors": ["Eric Rogstad", "Adom Hartell", "Nate Soares", "Alexei Andreev"], "summaries": ["- $\\log_b(x \\cdot y) = \\log_b(x) + \\log_b(y)$ for any $b$, this is the defining characteristic of logarithms.\n- $\\log_b(1) = 0,$ because $\\log_b(1) = \\log_b(1 \\cdot 1) = \\log_b(1) + \\log_b(1).$\n- $\\log_b\\left(\\frac{1}{x}\\right) = -\\log_b(x),$ because $\\log_b(1) = \\log_b\\left(x \\cdot \\frac{1}{x}\\right) = \\log_b(x) + \\log_b\\left(\\frac{1}{x}\\right) = 0.$\n- $\\log_b\\left(\\frac{x}{y}\\right) = \\log_b(x) - \\log_b(y),$ which follows immediately from the above.\n- $\\log_b\\left(x^n\\right) = n \\cdot \\log_b(x),$ because $x^n$ = $\\underbrace{x \\cdot x \\cdot \\ldots x}_{n\\text{ times}}$.\n- $\\log_b\\left(\\sqrt[n]{x}\\right) = \\frac{\\log_b(x)}{n},$ because $\\log_b(x) = \\log_b\\left((\\sqrt[n]{x})^n\\right) = n \\cdot \\log_b(\\sqrt[n]{x}).$\n- For every $f$ that satisfies $f(x \\cdot y) = f(x) + f(y)$ for all $x, y \\in \\mathbb R^+,$ either $f$ sends every input to 0 or there exists some $b$ such that $f(b) = 1,$ in which case we call $f$ $\\log_b.$ Thus, $\\log_b(b) = 1.$\n- $\\log_b(b^n) = n,$ because $\\log_b(x^n) = n \\log_b(x)$ and $\\log_b(b) = 1.$"], "tags": ["B-Class"], "alias": "4bz"} {"id": "8afdae452ee5208eb24be958990fd1df", "title": "Category theory", "url": "https://arbital.com/p/category_theory", "source": "arbital", "source_type": "text", "text": "summary(technical):\nA **[category](https://arbital.com/p/-4cx)** consists of a\ncollection of [objects](https://arbital.com/p/-object_category_theory) and a collection of [morphisms](https://arbital.com/p/-4d8) (also called arrows). To each morphism $f$ is associated a pair of objects: $\\text{dom}(f)$ which is the **source** or [**domain**](https://arbital.com/p/domain_of_function) and $\\text{cod}(f)$ which is the **target** or [**codomain**](https://arbital.com/p/codomain_of_function) of $f$. If $\\text{dom}(f) = X$ and $\\text{cod}(f) = Y$, this is written $f: X \\rightarrow Y$.\n\nThese morphisms must satisfy three conditions:\n\n 1. [**Composition**](https://arbital.com/p/Composition_of_functions): For any two morphisms $f: X \\rightarrow Y$ and $g: Y \\rightarrow Z$, there exists a morphism $X \\rightarrow Z$, written as $g \\circ f$ or simply $gf$. \n 2. [**Associativity**](https://arbital.com/p/3h4): For any morphisms $f: X \\rightarrow Y$, $g: Y \\rightarrow Z$ and $h:Z \\rightarrow W$ composition is associative, i.e., $h(gf) = (hg)f$.\n 3. 
[**Identity**](https://arbital.com/p/identity_map): For any object $X$, there is a (unique) morphism, $1_X : X \\rightarrow X$ which, when composed with another morphism, leaves it unchanged. I.e., given $f:W \\rightarrow X$ and $g:X \\rightarrow Y$ it holds that: $1_X f = f$ and $g 1_X = g$.\n\n\nsummary(motivation):\nMany mathematical constructions (such as [products](https://arbital.com/p/-product_mathematics)) appear across different fields of mathematics, consisting of different ingredients but nevertheless capturing a similar idea (and often even under the same name). Category theory allows one to precisely describe the property that these different constructions all at once. This allows one to prove [theorems](https://arbital.com/p/-theorem) about all these structures at once. Hence, once you prove that a specific mathematical structure is, say, a product, then all the category-theoretic theorems about products are true for that structure. In fact, sometimes there are structures which non-obviously satisfy a category-theoretic property. Especially when [category-theoretic duality](https://arbital.com/p/duality_category_theory) is involved.\n\nIn addition, category theory allows the simple description of [functors](https://arbital.com/p/-functor), [natural transformations](https://arbital.com/p/-natural_transformation) and [adjunctions](https://arbital.com/p/-adjunction). These are mathematically powerful concepts which are very difficult to describe without the language of category theory. In fact, one of the founders of category theory, [Saunders Mac Lane](https://en.wikipedia.org/wiki/Saunders_Mac_Lane), has remarked that category theory was initially developed in order to provide a language in which to speak about natural transformations.\n\nPowerfully, functors and adjunctions between categories allow one to translate concepts from one mathematical theory to another. They provide a \"translation\" (either full or partial) that allows one type of object to be viewed as another, and theorems to be translated across domains. In fact, using [duality](https://arbital.com/p/-duality_cateory_theory), very non-obvious translations can be found because a theorem in one category can be translated to its \"opposite theory\" in the other category. Connections which are not obvious in the language of the mathematical theories themselves, become clear in the language of category theory. \n\n\n**Category theory** studies the abstraction of mathematical objects (such as [sets](https://arbital.com/p/3jz), [groups](https://arbital.com/p/3gd), and [topological spaces](https://arbital.com/p/topology)) in terms of the [morphisms](https://arbital.com/p/4d8) between them. Such a collection of objects and morphisms is a [category](https://arbital.com/p/4cx). Morphisms often represent [functions](https://arbital.com/p/3jy). For example, in the category of sets, morphisms represent all functions, in the category of groups they represent group homomorphism and in the category of topological spaces, they represent [continuous](https://arbital.com/p/continuous) maps.\n\nCategories are usually drawn as diagrams with the objects represented by variables or points with (labeled) arrows between them representing morphisms. For this reason, morphisms are also referred to as arrows.\n\nMorphisms do not have to represent functions. 
For example, any [partially ordered set](https://arbital.com/p/3rb) $(P, \\leq)$ may be seen as a category where the objects are the elements of the poset and there is a (unique) morphism $x \\rightarrow y$ between two elements $x$ and $y$ if and only if $x \\leq y$.\n\n## Definition ##\nA **category** consists of a\ncollection of [objects](https://arbital.com/p/-object_category_theory) and a collection of [morphisms](https://arbital.com/p/-4d8). A morphism $f$ goes from one object, say $X$, to another, say $Y$, and is drawn as an arrow from $X$ to $Y$. Note that $X$ may equal $Y$ (in which case $f$ is referred to as an [**endomorphism**](https://arbital.com/p/endomorphism)). The object $X$ is called the **source** or [**domain**](https://arbital.com/p/domain_of_function) of $f$ and $Y$ is called the **target** or [**codomain**](https://arbital.com/p/codomain_of_function) of $f$. This is written as $f: X \\rightarrow Y$.\n\nThese morphisms must satisfy three conditions:\n\n 1. [**Composition**](https://arbital.com/p/Composition_of_functions): For any two morphisms $f: X \\rightarrow Y$ and $g: Y \\rightarrow Z$, there exists a morphism $X \\rightarrow Z$, written as $g \\circ f$ or simply $gf$. \n 2. [**Associativity**](https://arbital.com/p/3h4): For any morphisms $f: X \\rightarrow Y$, $g: Y \\rightarrow Z$ and $h:Z \\rightarrow W$ composition is associative, i.e., $h(gf) = (hg)f$.\n 3. [**Identity**](https://arbital.com/p/identity_map): For any object $X$, there is a (unique) morphism, $1_X : X \\rightarrow X$ which, when composed with another morphism, leaves it unchanged. I.e., given $f:W \\rightarrow X$ and $g:X \\rightarrow Y$ it holds that: $1_X f = f$ and $g 1_X = g$.\n\nNote that composition is written 'backwards' since given an element $x \\in X$ and two functions $f: X \\rightarrow Y$ and $g: Y \\rightarrow Z$, the result of applying $f$ then $g$ is $g(f(x))$ which equals $(g \\circ f)(x)$.\n\n\n## Motivation ##\n\nMany mathematical constructions (such as [products](https://arbital.com/p/-product_mathematics)) appear across different fields of mathematics, consisting of different ingredients but nevertheless capturing a similar idea (and often even under the same name). Category theory allows one to precisely describe the property that these different constructions all at once. This allows one to prove [theorems](https://arbital.com/p/-theorem) about all these structures at once. Hence, once you prove that a specific mathematical structure is, say, a product, then all the category-theoretic theorems about products are true for that structure. In fact, sometimes there are structures which non-obviously satisfy a category-theoretic property. Especially when [category-theoretic duality](https://arbital.com/p/duality_category_theory) is involved.\n\nIn addition, category theory allows the simple description of [functors](https://arbital.com/p/-functor), [natural transformations](https://arbital.com/p/-natural_transformation) and [adjunctions](https://arbital.com/p/-adjunction). These are mathematically powerful concepts which are very difficult to describe without the language of category theory. In fact, one of the founders of category theory, [Saunders Mac Lane](https://en.wikipedia.org/wiki/Saunders_Mac_Lane), has remarked that category theory was initially developed in order to provide a language in which to speak about natural transformations.\n\nPowerfully, functors and adjunctions between categories allow one to translate concepts from one mathematical theory to another. 
They provide a \"translation\" (either full or partial) that allows one type of object to be viewed as another, and theorems to be translated across domains. In fact, using [duality](https://arbital.com/p/-duality_cateory_theory), very non-obvious translations can be found because a theorem in one category can be translated to its \"opposite theory\" in the other category. Connections which are not obvious in the language of the mathematical theories themselves, become clear in the language of category theory. \n\n##Categories Give an External View##\nAlthough the objects and morphisms of a category are intended to *represent* e.g. sets and functions, from the [point of view of the category](https://arbital.com/p/-personification_of_mathematics) the objects and morphisms have no internal structure. It is not possible to talk directly about the elements of an object or how a given morphism maps elements. Instead (from the viewpoint of the category) the information about the objects and morphisms is given completely by which objects are sources and targets for the morphisms and how the morphisms are composed.\n\nIn fact, this is the strength of category theory: abstracting away the internal details allows one to focus only on relevant information and also capture information about multiple similar types of structures that act in a certain way across different mathematical theories.\n\nThis is similar to the way that a group abstracts away what elements are whilst only capturing the information of how they are 'added' or 'multiplied'.\n\nIt is also somewhat similar to the concept of a program's API (or an interface in Java); we can't see inside the program or know how it implements something, but we know what kind of inputs and outputs programs have, and what kinds of inputs and outputs a composition of such programs has.\n\nNote that since it abstracts something away, a category does not always capture enough information for one's purposes. For example, there is addition of group homomorphisms defined pointwise. For this purpose, other structures such as [enriched categories](https://arbital.com/p/enriched_category) and [n-categories](https://arbital.com/p/n_category) may be used. However, for many purposes, categories are at a very good level of abstraction; in particular, category theory does not distinguish between isomorphic objects. This is considered a feature of category theory.\n\n##Common Symbols: Convention##\nDifferent texts make use of different conventions. This site makes use of the following common convention:\n\n - Categories are written in blackboard bold upper-case letters and are usually near the beginning of the alphabet. E.g. $\\mathbb{A}, \\mathbb{B}, \\mathbb{C}$.\n - Objects are written as upper-case letters usually near the beginning or end of the alphabet. E.g. $A, B, C, W, X, Y, Z$.\n - Morphisms are labelled with lower-case letters, usually near f or near u. E.g. $e, f, g, h, u, v, w$.\n - Elements of an object, where necessary, are written as lower-case letters, usually near the beginning or end of the alphabet. E.g. $a, b, c, x, y, z$\n - Functors are written as upper-case letters usually near F. E.g. $E, F, G, H$.\n - Natural transformations are written as Greek letters, usually near the beginning of the alphabet. E.g. $\\alpha, \\beta, \\gamma, \\delta$.\n - The morphisms forming part of a cone or cocone for a limit or colimit are often written as Greek letters with subscripts, usually $\\kappa$ or $\\lambda$.\n\nThese conventions are merely guidelines and far from universally followed. 
Check the definition for the symbol in question to see what it represents\n\n##[Isomorphisms in Category Theory](https://arbital.com/p/-isomorphism_category_theory)##\nIn category theory, [isomorphic](https://arbital.com/p/-4f4) objects are not distinguished. Many [https://arbital.com/p/-universal_constructions](https://arbital.com/p/-universal_constructions) do not pin down a [specific construction](https://arbital.com/p/-specific_construction_category_theory) but instead only specify it [up to isomorphism](https://arbital.com/p/-isomorphism_category_theory).\n\nDoing something in category theory which relies on a specific construction, that is, requiring objects to be equal instead of merely isomorphic, is colloquially referred to as [evil](https://arbital.com/p/-evil_category_theory).\n\n##[Universal Properties](https://arbital.com/p/-600)##\nOne of the most important concepts in category theory is that of a [universal property](https://arbital.com/p/600). An object in a category which satisfies a universal property is in a sense the 'best' (often meaning smallest or largest) object satisfying a certain property. This can often be used to describe in a universal way constructions like [products](https://arbital.com/p/4mj) which are defined for multiple distinct structures. In category theory, it is defined once without referring to a specific construction. This definition can then be applied to multiple categories.\n\nThe simplest [non-trivial](https://arbital.com/p/trivial) universal construction is the [terminal object](https://arbital.com/p/terminal_object). Given a category $\\mathbb{C}$, an object $T$ in $\\mathbb{C}$ is called a **terminal object** if, for any object $X$ in $\\mathbb{C}$, there is a [unique](https://arbital.com/p/unique) morphism $f: X \\rightarrow T$. In other words there is some $f: X \\rightarrow T$ and if there is also $g: X \\rightarrow T$ then $f=g$. In the category of sets, the terminal objects are exactly the one element sets. Given a one element set $\\{a\\}$, and any set $X$, there is a unique morphism $f: X \\rightarrow \\{a\\}$, namely the function taking every $x$ in $X$ to $a$. In the category of groups, terminal objects are exactly one-element groups. Note that terminal objects need not exist. Consider a [poset seen as a category](https://arbital.com/p/poset_as_category). If it has a largest element $T$, then each object is less than or equal to $T$. So from each object there is a unique morphism to $T$ and hence it is terminal. If, however, there is no largest element then the category has no terminal object.\n\nAs another example, [products](https://arbital.com/p/4mj) can be defined by a universal property: Given a pair of objects $X$ and $Y$, an object $P$ *along with a pair of morphisms* $f: P \\rightarrow X$ and $g: P \\rightarrow Y$ is called the **product** of $X$ and $Y$ if, given any other object $W$ and morphisms $u: W \\rightarrow X$ and $v:W \\rightarrow Y$ there is a *unique* morphism $h: W \\rightarrow P$ such that $fh = u$ and $gh = v$.\n\nThe above are both special cases of a very important and more general [universal construction](https://arbital.com/p/-universal_construction): the [limit](https://arbital.com/p/-limit_category_theory). 
This (along with the [colimit](https://arbital.com/p/-colimit_category_theory)) is described in more detail further below.\n\n##[Duality](https://arbital.com/p/-duality_category_theory)##\nFor any notion in a category, its **dual** is obtained by `reversing all the arrows' and 'reversing the order of composition'. If a statement is true in any category, then its dual is true in any category. As a [corollary](https://arbital.com/p/-corollary), if a statement is true in some categories, its dual is true in the duals of those categories.\n\nAs an example, consider the definition of a [terminal object](https://arbital.com/p/-terminal_object) given above. A statement about terminal objects is that any two terminal objects are isomorphic. Let's examine the exact statement. Assume $T$ is terminal. Then for any $X$ there is unique $f: X \\rightarrow T$. If we reverse the arrows, we get that for every $X$ there is unique $f: X \\leftarrow T$. This is the definition of an [initial object](https://arbital.com/p/-initial_object). Consider another terminal object $T'$. The statement that $T'$ is isomorphic to $T$ means that there is some $f: T \\rightarrow T'$ and $g: T' \\rightarrow T$ such that $gf = 1_T$ and $fg = 1_{T'}$. The dual of this is just the statement that there is some $f: T \\leftarrow T'$ and $g: T' \\leftarrow T$ such that $fg = 1_T$ and $gf = 1_{T'}$: this is exactly the same property! (The morphisms $f$ and $g$ have just been renamed). Hence, the dual of the statement that a terminal object is unique up to isomorphism is the statement that every initial object is unique up to isomorphism.\n\nSimilarly, if something is true for every category with an initial object, its dual will be true for every category with a terminal object.\n\nThe concept of duality can be a powerful way of obtaining new results which come easily within category theory, but which are not obvious in the theory to which category theory is being applied. As an advanced example, the category of [Boolean Algebras](https://arbital.com/p/-boolean_algebra) is dual to the category of [Stone Spaces](https://arbital.com/p/-stone_space). See [Stone Duality on Wikipedia](https://en.wikipedia.org/wiki/Stone's_representation_theorem_for_Boolean_algebras) for the motivation.\n\n\n\n\n##[Functors](https://arbital.com/p/-Functor)##\nA **functor** is a morphism between categories.\n\nGiven two categories $\\mathbb{A}$ and $\\mathbb{B}$, a functor $F$ from $\\mathbb{A}$ to $\\mathbb{B}$, written $F: \\mathbb{A} \\rightarrow \\mathbb{B}$, is defined as a pair of functions:\n\n - $F_0:$ Objects($\\mathbb{A}$) $\\rightarrow$ Objects($\\mathbb{B}$)\n - $F_1:$ Morphisms($\\mathbb{A}$) $\\rightarrow$ Morphisms($\\mathbb{B}$)\n\nWhich satisfy:\n\n 1.\nPreservation of domain and codomain: \nIf $f: X \\rightarrow Y$ then $F_1(f): F_0(X) \\rightarrow F_0(Y)$. (\nPut differently, \nDom($F_1(f)$) = $F_0$(Dom($f$)) and Cod($F_1(f)$) = $F_0$(Cod($f$)) for every morphism $f$. )\n\n 2. Preservation of Identity:\nIf the morphism $1_X: X \\rightarrow X$ is the identity on $X$, then the morphism $F_1(1_X): F_0(X) \\rightarrow F_0(X)$ is the identity on $F_0(X)$.\n\n 3. Preservation of composition:\nGiven morphisms $f: X \\rightarrow Y$ and $g: Y \\rightarrow Z$, then the composition of their images $F_1(g) \\circ F_1(f): F_0(X) \\rightarrow F_0(Z)$ is the image of their composition $F_1(g \\circ f): F_0(X) \\rightarrow F_0(Z)$.\n\nInstead of differentiating $F_0$ and $F_1$, they are usually both written simply as $F$. E.g. 
$F(f): F(X) \\rightarrow F(Y)$.\n\n##[Properties of Morphisms](https://arbital.com/p/-morphism_category_theory)##\nMorphisms are the central objects of study in category theory. For this reason, properties of morphisms can be very important. \nA morphism $f: X \\rightarrow Y$ is called the following if it satisfies the given property:\n\n - **[Isomorphism](https://arbital.com/p/-isomorphism_category_theory):**\n ([Self-dual](https://arbital.com/p/-self_dual_category_theory)) \n\nThere exists some $g: Y \\rightarrow X$ such that $gf = 1_X$ and $fg = 1_Y$.\n\nIntuitively, an isomorphism is a way of transforming from one object to another in a way that makes them indistinguishable using the information of the category.\n\n - **[Monomorphism](https://arbital.com/p/-monomorphism):** ([Dual](https://arbital.com/p/-duality_category_theory) to epimorphic) \n\nFor any object $W$ and morphisms $g,h: W \\rightarrow X$, if $fg = fh$ then $g = h$.\n\nIntuitively, $f$ being a monomorphism indicates that *all of the information captured by the collection of morphisms into $X$ is preserved when composing by $f$*. It generalizes the notion of an [injective function](https://arbital.com/p/4b7), since in most [concrete categories](https://arbital.com/p/-concrete_category) (like [sets](https://arbital.com/p/-3jz), [groups](https://arbital.com/p/-3gd), and [topological spaces](https://arbital.com/p/-topology)) every injective map is a monomorphism. However, even in concrete categories (and certainly more generally), monomorphisms need not be injective. \n\n\n - **[Epimorphism](https://arbital.com/p/-epimorphism):** (Dual to monomorphic) \n\nFor any object $Z$ and morphisms $g,h: X \\rightarrow Z$, if $gf = hf$ then $g = h$.\n\nIntuitively, $f$ being an epimorphism indicates that *all the information captured by the collection of morphisms out of $Y$ is preserved when composing by $f$.*. It generalizes the notion of a [surjective function](https://arbital.com/p/-4bg). However, in an even stronger sense than for monomorphisms, a function being epimorphic and a function being surjective are far from equivalent.\n\nProperties that more closely match surjectivity include [Section / Split Epimorphism](https://arbital.com/p/-section_category_theory), and [regular epimorphism](https://arbital.com/p/-regular_epimorphism). [strict epimorphism](https://arbital.com/p/-strict_epimorphism), [strong epimorphism](https://arbital.com/p/-strong_epimorphism), and [extremal epimorphism](https://arbital.com/p/-extremal_epimorphism). Note that despite the names, not all of these are necessarily epimorphisms, but are epimorphisms in \"nice\" categories. \n\n - **[Endomorphism](https://arbital.com/p/-endomorphism):** (Self-dual) \n\n$X = Y$, i.e., $f: X \\rightarrow X$.\n\nAn endomorphism is a morphism from an object to itself.\n\n - **[Automorphism](https://arbital.com/p/-automorphism):** (Self-dual) \n\nThe morphism $f$ is both an endomorphism and an isomorphism.\n\nAn automorphism is a morphism from a structure to itself that preserves all the information of the structure that is distinguishable by the category. 
Intuitively, it gives \"another view\" of a structure (say, by moving around its elements) that leaves it essentially unchanged.\n\n - **[Section / Split Monomorphism](https://arbital.com/p/-section_category_theory):** (Dual to split epimorphic) \n\nThere exists some $g: Y \\rightarrow X$ such that $gf = 1_X$.\n\nA morphism is a **section** (or split monomorphism) if its effect can be \"reversed\" or inverted by another morphism applied after it; the morphism which inverts it is a **retraction**. For example, in the category of sets, every injective map with nonempty domain is a section.\n\n - **[Retraction / Split Epimorphism](https://arbital.com/p/-retraction_category_theory):** (Dual to split monomorphic) \n\nThere exists some $g: Y \\rightarrow X$ such that $fg = 1_Y$.\n\nA morphism is a **retraction** (or split epimorphism) if it \"reverses\" the effect of some other morphism applied before it; the morphism which is inverted is a **section**.\n\n##[Limits](https://arbital.com/p/-limit_category_theory) and [Colimits](https://arbital.com/p/-colimit_category_theory)##\n\n##[Further Universal Constructions](https://arbital.com/p/-universal_construction)##\n\n##[Natural Transformations](https://arbital.com/p/-natural_transformation)##\n\n##[Constructions on Categories](https://arbital.com/p/-Constructions_on_categories)##\n\n##[Adjunctions and Adjoint Functors](https://arbital.com/p/-adjunction_category_theory)##", "date_published": "2016-10-20T22:30:34Z", "authors": ["Daniel Satanove", "Tsvi BT", "Patrick Stevens", "Nate Soares", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina"], "summaries": ["**Category theory** studies the abstraction of mathematical objects (such as [sets](https://arbital.com/p/3jz), [groups](https://arbital.com/p/3gd), and [topological spaces](https://arbital.com/p/topology)) in terms of the [morphisms](https://arbital.com/p/4d8) between them."], "tags": ["Group", "Function", "The composition of two group homomorphisms is a homomorphism", "Needs image", "Work in progress", "C-Class"], "alias": "4c7"} {"id": "2a8bd033a465b041a5f68a3123f19c92", "title": "Log base infinity", "url": "https://arbital.com/p/log_base_infinity", "source": "arbital", "source_type": "text", "text": "There is no $\\log_{\\infty},$ because $\\infty$ is not a [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc). Nevertheless, the function $z$ defined as $z(x) = 0$ for all $x \\in$ [$\\mathbb R^+$](https://arbital.com/p/positive_reals) can pretty readily be interpreted as $\\log_{\\infty}$.\n\nThat is, $z$ satisfies all of the basic [properties of the logarithm](https://arbital.com/p/4bz) except for the one that says there exists a $b$ such that $\\log(b) = 1.$ In the case of $\\log_\\infty,$ the logarithm base infinity claims \"well, if you gave me a $b$ that was _large enough_ I might return 1, but for all measly finite numbers I return 0.\" In fact, if you're feeling ambitious, you can define $\\log_\\infty$ to be a [https://arbital.com/p/-multifunction](https://arbital.com/p/-multifunction) which allows infinite inputs, and define $\\log_\\infty(\\infty)$ to return any positive real number that you'd like (1 included).
This requires a few hijinks (like defining $\\infty^0$ to also return any number that you'd like), but can be made to work and satisfy all the basic logarithm properties (if you strategically re-interpret some '$=$' signs as '[$\\in$](https://arbital.com/p/set_contains)' signs).\n\nThe moral of the story is that functions that send everything to zero are _almost_ logarithm functions, with the minor caveat that they utterly destroy all the intricate structure that logarithm functions tap into. (That's what happens when you choose \"0\" as your arbitrary scaling factor when tapping into [https://arbital.com/p/-4gp](https://arbital.com/p/-4gp).)", "date_published": "2016-06-24T02:44:24Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Start"], "alias": "4c8"} {"id": "580ea41a59cbe6d3edaa05cde07ae5fe", "title": "Two independent events", "url": "https://arbital.com/p/two_independent_events", "source": "arbital", "source_type": "text", "text": "$$\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\n\nsummary: \n$$\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\nWe say that two [events](https://arbital.com/p/event_probability), $A$ and $B$, are *independent* when learning that $A$ has occurred does not change your probability that $B$ occurs. That is, $\\bP(B \\mid A) = \\bP(B)$. \nAnother way to state independence is that $\\bP(A,B) = \\bP(A) \\bP(B)$. \n\n\n\n\nWe say that two [events](https://arbital.com/p/event_probability), $A$ and $B$, are *independent* when learning that $A$ has occurred does not change your probability that $B$ occurs. That is, $\\bP(B \\mid A) = \\bP(B)$. \nEquivalently, $A$ and $B$ are independent if $\\bP(A)$ doesn't change if you condition on $B$: $\\bP(A \\mid B) = \\bP(A)$. \n\nAnother way to state independence is that $\\bP(A,B) = \\bP(A) \\bP(B)$. 
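Here's the same pair of statements as a quick numerical check in Python. (The joint probabilities below are made up purely for illustration.)

```python
# Two equivalent ways to check independence of two binary events A and B,
# on an invented joint distribution.
joint = {
    (True, True):   0.06,  # P(A, B)
    (True, False):  0.24,  # P(A, not-B)
    (False, True):  0.14,  # P(not-A, B)
    (False, False): 0.56,  # P(not-A, not-B)
}

p_A = sum(p for (a, _), p in joint.items() if a)  # marginal P(A) = 0.3
p_B = sum(p for (_, b), p in joint.items() if b)  # marginal P(B) = 0.2
p_B_given_A = joint[(True, True)] / p_A           # P(B | A)      = 0.2

# Learning A doesn't change the probability of B, and P(A,B) = P(A) P(B):
assert abs(p_B_given_A - p_B) < 1e-9
assert abs(joint[(True, True)] - p_A * p_B) < 1e-9
```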
\n\nAll these definitions are equivalent: \n\n$$\\bP(A,B) = \\bP(A)\\; \\bP(B \\mid A)$$\n\nby the [chain rule](https://arbital.com/p/chain_rule_probability), so \n\n$$\\bP(A,B) = \\bP(A)\\; \\bP(B)\\;\\; \\Leftrightarrow \\;\\; \\bP(A)\\; \\bP(B \\mid A) = \\bP(A)\\; \\bP(B) \\ ,$$\n\nand similarly for $\\bP(B)\\; \\bP(A \\mid B)$.", "date_published": "2016-06-16T15:39:52Z", "authors": ["Tsvi BT", "Eric Rogstad"], "summaries": [], "tags": ["Stub"], "alias": "4cf"} {"id": "7a8c2154e2208463e40e9108916f5ca0", "title": "Cycle type of a permutation", "url": "https://arbital.com/p/cycle_type_of_a_permutation", "source": "arbital", "source_type": "text", "text": "Given an element $\\sigma$ of a [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$ on finitely many elements, we may express $\\sigma$ in [cycle notation](https://arbital.com/p/49f).\nThe cycle type of $\\sigma$ is then a list of the lengths of the cycles in $\\sigma$, where conventionally we omit length-$1$ cycles from the cycle type.\nConventionally we list the lengths in decreasing order, and the list is presented as a comma-separated collection of values.\n\nThe concept is well-defined because [https://arbital.com/p/-49k](https://arbital.com/p/-49k) up to reordering of the cycles.\n\n# Examples\n\n- The cycle type of the element $(123)(45)$ in $S_7$ is $3,2$, or (without the conventional omission of the cycles $(6)$ and $(7)$) $3,2,1,1$.\n- The cycle type of the identity element is the empty list.\n- The cycle type of a $k$-cycle is $k$, the list containing a single element $k$.", "date_published": "2016-06-15T06:25:27Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4cg"} {"id": "9730638f2cf21539452407145bfe3213", "title": "Two independent events: Square visualization", "url": "https://arbital.com/p/4cl", "source": "arbital", "source_type": "text", "text": "$$\n\\newcommand{\\true}{\\text{True}}\n\\newcommand{\\false}{\\text{False}}\n\\newcommand{\\bP}{\\mathbb{P}} \n$$\n\nsummary: \n$$\n\\newcommand{\\true}{\\text{True}}\n\\newcommand{\\false}{\\text{False}}\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\n\nSay $A$ and $B$ are independent [events](https://arbital.com/p/event_probability), so $\\bP(A, B) = \\bP(A)\\bP(B).$ Then we can draw their joint probability distribution using the using the [square visualization](https://arbital.com/p/496) of probabilities:\n\n\n\n\n\n\n\nThis is what independence looks like, using the [square visualization](https://arbital.com/p/496) of probabilities:\n\n\n\nWe can see that the [events](https://arbital.com/p/event_probability) $A$ and $B$ don't interact; we say that $A$ and $B$ are *independent*. Whether we look at the whole square, or just the red part of\nthe square where $A$ is true, the probability of $B$ stays the same. In other words, $\\bP(B \\mid A) = \\bP(B)$. That's what we mean by independence: the\nprobability of $B$ doesn't change if you condition on $A$.\n\nOur square of probabilities can be generated by multiplying together the probability of $A$ and the probability of $B$:\n\n\n\nThis picture demonstrates another way to define what it means for $A$ and $B$ to be independent:\n\n$$\\bP(A, B) = \\bP(A)\\bP(B)\\ .$$\n\n\n\nIn terms of factoring a joint distribution\n--\n\nLet's contrast independence with non-independence. 
Here's a picture of two ordinary, non-independent events $A$ and $B$:\n\n\n\n(If the meaning of this picture isn't clear, take a look at [https://arbital.com/p/496](https://arbital.com/p/496).)\n\nWe have the red blocks for $\\bP(A)$ and the blue blocks for $\\bP(\\neg A)$ lined up in columns. This means we've [factored](https://arbital.com/p/factoring_probability) our\nprobability distribution using $A$ as the first factor: \n\n$$\\bP(A,B) = \\bP(A) \\bP(B \\mid A)\\ .$$\n\nWe could just as well have factored by $B$ first: $\\bP(A,B) = \\bP(B) \\bP( A \\mid B)\\ .$ Then we'd draw a picture like this:\n\n\n\n\n\n\n\nNow, here again is the picture of [two independent events](https://arbital.com/p/4cf) $A$ and $B$:\n\n\n\n\n\nIn this picture, there's red and blue lined-up columns for $\\bP(A)$ and $\\bP(\\neg A)$, and there's *also* dark and light lined-up rows for $\\bP(B)$ and\n$\\bP(\\neg B)$. It looks like we somehow [factored](https://arbital.com/p/factoring_probability) our probability distribution $\\bP$ using both $A$ and \n$B$ as the first factor. \n\nIn fact, this is exactly what happened: since $A$ and $B$ are [independent](https://arbital.com/p/4cf), we have that $\\bP(B \\mid A) = \\bP(B)$. So the diagram\nabove is actually factored according to $A$ first: $\\bP(A,B) = \\bP(A) \\bP(B \\mid A)$. It's just that $\\bP(B \\mid A)= \\bP(B) = \\bP(B \\mid \\neg A)$, since $B$\nis independent from $A$. So we don't need to have different ratios of dark to light (a.k.a. conditional probabilities of $B$) in the left and right columns:\n\n\n\nIn this visualization, we can see what happens to the probability of $B$ when you condition on $A$ or on $\\neg A$: it doesn't change at all. The ratio of\n\\[the area where $B$ happens\\] to \\[the whole area\\] is the same as the ratio $\\bP(B \\mid A)$ where we only look at the area where $A$ happens, which is the\nsame as the ratio $\\bP(B \\mid \\neg A)$ where we only look at the area where $\\neg A$ happens. The fact that the probability of $B$ doesn't change when we\ncondition on $A$ is exactly what we mean when we say that $A$ and $B$ are independent.\n\nThe square diagram above is *also* factored according to $B$ first, using $\\bP(A,B) = \\bP(B) \\bP(A \\mid B)$. The red / blue ratios are the same in both rows\nbecause $\\bP(A \\mid B) = \\bP(A) = \\bP(A \\mid \\neg B)$, since $A$ and $B$ are independent:\n\n\n\nWe couldn't do any of this stuff if the columns and rows didn't both line up. (Which is good, because then we'd have proved the false statement that any two\nevents are independent!)\n\nIn terms of multiplying marginal probabilities\n---\n\nAnother way to say that $A$ and $B$ are independent variables %note:We're using the [equivalence](https://arbital.com/p/event_variable_equivalence) between [events](https://arbital.com/p/event_probability) and [binary variables](https://arbital.com/p/binary_random_variable).% is that for any truth values $t_A,t_B \\in \\{\\true, \\false\\},$\n\n$$\\bP(A = t_A, B= t_B) = \\bP(A = t_A)\\bP(B = t_B)\\ .$$\n\n\n\nSo the [joint probabilities](https://arbital.com/p/1rh) for $A$ and $B$ are computed by separately getting the probability of $A$ and the probability of $B$, and then\nmultiplying the two probabilities together. For example, say we want to compute the probability $\\bP(A, \\neg B) = \\bP(A = \\true, B = \\false)$.
We start with\nthe [marginal probability](https://arbital.com/p/marginal_probability) of $A$:\n\n\n\nand the probability of $\\neg B$:\n\n\n\nand then we multiply them:\n\n\n\n\nWe can get all the joint probabilities this way. So we can visualize the whole joint distribution as the thing that you get when you multiply two independent\nprobability distributions together. We just overlay the two distributions: \n\n\n\nTo be a little more mathematically elegant, we'd use the [topological product of two spaces](https://arbital.com/p/topological_product) shown earlier to draw the joint distribution\nas a product of the distributions of $A$ and $B$: \n\n", "date_published": "2016-06-16T14:55:54Z", "authors": ["Eric Rogstad", "Jaime Sevilla Molina", "Tsvi BT"], "summaries": [], "tags": [], "alias": "4cl"} {"id": "5d3f34fec2e732a3b29415719a899b53", "title": "Transposition (as an element of a symmetric group)", "url": "https://arbital.com/p/transposition_in_symmetric_group", "source": "arbital", "source_type": "text", "text": "In a [https://arbital.com/p/-497](https://arbital.com/p/-497), a transposition is a permutation which has the effect of swapping two elements while leaving everything else unchanged.\nMore formally, it is a permutation of [order](https://arbital.com/p/order_of_a_group_element) $2$ which fixes all but two elements.\n\n%%%knows-requisite([https://arbital.com/p/4cg](https://arbital.com/p/4cg)):\nA transposition is precisely an element with cycle type $2$.\n%%%\n\n# Example\n\nIn $S_5$, the permutation $(12)$ is a transposition: it swaps $1$ and $2$ while leaving all three of the elements $3,4,5$ unchanged.\nHowever, the permutation $(124)$ is not a transposition, because it has order $3$, not order $2$.", "date_published": "2016-06-15T07:50:47Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Definition"], "alias": "4cn"} {"id": "c3aeeb562751eba078eed6a16e977000", "title": "Every member of a symmetric group on finitely many elements is a product of transpositions", "url": "https://arbital.com/p/symmetric_group_is_generated_by_transpositions", "source": "arbital", "source_type": "text", "text": "Given a permutation $\\sigma$ in the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$, there is a finite sequence $\\tau_1, \\dots, \\tau_k$ of [transpositions](https://arbital.com/p/4cn) such that $\\sigma = \\tau_k \\tau_{k-1} \\dots \\tau_1$.\nEquivalently, symmetric groups are generated by their transpositions.\n\nNote that the transpositions might \"overlap\".\nFor example, $(123)$ is equal to $(23)(13)$, where the element $3$ appears in two of the transpositions.\n\nNote also that the sequence of transpositions is by no means uniquely determined by $\\sigma$.\n\n# Proof\n\nIt is enough to show that a [cycle](https://arbital.com/p/49f) is expressible as a sequence of transpositions.\nOnce we have this result, we may simply replace the successive cycles in $\\sigma$'s disjoint cycle notation by the corresponding sequences of transpositions, to obtain a longer sequence of transpositions which multiplies out to give $\\sigma$.\n\nIt is easy to verify that the cycle $(a_1 a_2 \\dots a_r)$ is equal to $(a_{r-1} a_r) (a_{r-2} a_r) \\dots (a_2 a_r) (a_1 a_r)$.\nIndeed, that product of transpositions certainly does not move anything that isn't some $a_i$; while if we ask it to evaluate $a_i$, then the $(a_1 a_r)$ does nothing to it, $(a_2 a_r)$ does nothing to it, and so on up to $(a_{i-1} a_r)$.\nThen $(a_i a_r)$ sends it to $a_r$; then $(a_{i+1} a_r)$ sends the 
resulting $a_r$ to $a_{i+1}$; then all subsequent transpositions $(a_{i+2} a_r), \\dots, (a_{r-1} a_r)$ do nothing to the resulting $a_{i+1}$.\nSo the output when given $a_i$ is $a_{i+1}$.\n\n# Why is this useful?\n\nIt can make arguments simpler: if we can show that some property holds for transpositions and that it is closed under products, then it must hold for the entire symmetric group.", "date_published": "2016-06-15T08:03:48Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4cp"} {"id": "f36ad5d7b3332f0c53ac21097ffa673a", "title": "Order of a group element", "url": "https://arbital.com/p/order_of_a_group_element", "source": "arbital", "source_type": "text", "text": "Given an element $g$ of group $(G, +)$ (which henceforth we abbreviate simply as $G$), the order of $g$ is the number of times we must add $g$ to itself to obtain the identity element $e$.\n\n%%%knows-requisite([https://arbital.com/p/3gg](https://arbital.com/p/3gg)):\nEquivalently, it is the order of the group $\\langle g \\rangle$ generated by $g$: that is, the order of $\\{ e, g, g^2, \\dots, g^{-1}, g^{-2}, \\dots \\}$ under the inherited group operation $+$.\n%%%\n\nConventionally, the identity element itself has order $1$.\n\n# Examples\n\n%%%knows-requisite([https://arbital.com/p/497](https://arbital.com/p/497)):\nIn the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_5$, the order of an element is the [https://arbital.com/p/-least_common_multiple](https://arbital.com/p/-least_common_multiple) of its [cycle type](https://arbital.com/p/4cg).\n%%%\n%%%knows-requisite([https://arbital.com/p/47y](https://arbital.com/p/47y)):\nIn the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_6$, the order of the generator is $6$.\nIf we view $C_6$ as being the integers [modulo](https://arbital.com/p/modular_arithmetic) $6$ under addition, then the element $0$ has order $1$; the elements $1$ and $5$ have order $6$; the elements $2$ and $4$ have order $3$; and the element $3$ has order $2$.\n%%%\n\nIn the group $\\mathbb{Z}$ of [integers](https://arbital.com/p/48l) under addition, every element except $0$ has infinite order. $0$ itself has order $1$, being the identity.", "date_published": "2016-06-15T08:14:47Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Formal definition", "Needs clickbait"], "alias": "4cq"} {"id": "e23ed3406771bf79a24c5df6f5b322a2", "title": "Strong Church Turing thesis", "url": "https://arbital.com/p/strong_Church_Turing_thesis", "source": "arbital", "source_type": "text", "text": "> Every realistic model of computation is [polynomial time reducible](https://arbital.com/p/) to [probabilistic Turing machines](https://arbital.com/p/)\n\nWhich amounts to saying that every computable process in the universe can be efficiently simulated by a probabilistic Turing machine.\n\nThe definition of *realistic model* appeals to intuition rather than a precise definition of realistic. A rule of thumb is that a computation model is realistic if it could be used to accurately model some physical process. For example, there is a clear relation between the model of [register machines](https://arbital.com/p/) and the inner workings of personal computers. On the other hand, a computational model which can access an [NP oracle](https://arbital.com/p/) does not have a physical counterpart.\n\nAs it happened with the standard [https://arbital.com/p/4bw](https://arbital.com/p/4bw), this is an inductive rather than a mathematically defined statement. 
However, unlike the standard CT thesis, it is [widely believed to be wrong](https://arbital.com/p/), primarily because [quantum computation](https://arbital.com/p/) stands as a highly likely counterexample to the thesis.\n\nWe remark that non-deterministic computation is not a candidate counterexample to the strong CT thesis, since it is not realistic. That said, we can find a relation between the two concepts: if [$P=NP$](https://arbital.com/p/4bd) and [$BQP\\subset NP$](https://arbital.com/p/) then $BQP = P$, so quantum computation can be efficiently simulated, and we lose the strongest contestant for a counterexample to Strong CT.\n\n#Consequences of falsehood\n\nThe falsehood of the Strong CT thesis opens up the possibility of the existence of physical processes that, while computable, cannot be modeled in a reasonable time with a classical computer. One consequence of this, that would also mean that if quantum computing is possible the speed .up it provides is non-trivial.\n\nEpistemic processes which assign priors using a penalty for computational time complexity are misguided if the Strong CT thesis is false. See for example [Levin search](https://arbital.com/p/).\n\n [probably not worth raising? human brains have crazy-low decoherence times.](https://arbital.com/p/comment:)", "date_published": "2016-06-16T10:46:01Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina", "Patrick Stevens"], "summaries": [], "tags": ["Start", "Needs summary"], "alias": "4ct"} {"id": "f4742cf8ff797d734ab2fb95c2e6527f", "title": "Category (mathematics)", "url": "https://arbital.com/p/category_mathematics", "source": "arbital", "source_type": "text", "text": "A **category** consists of a collection of objects with morphisms between them. A morphism $f$ goes from one object, say $X$, to another, say $Y$, and is drawn as an arrow from $X$ to $Y$. Note that $X$ may equal $Y$ (in which case $f$ is referred to as an [https://arbital.com/p/-endomorphism](https://arbital.com/p/-endomorphism)). The object $X$ is called the source or [domain](https://arbital.com/p/3js) of $f$ and $Y$ is called the target or [codomain](https://arbital.com/p/3lg) of $f$, though note that $f$ itself need not be a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) and $X$ and $Y$ need not be sets. This is written as $f: X \\rightarrow Y$.\n\nThese morphisms must satisfy three conditions:\n\n 1. [**Composition**](https://arbital.com/p/Composition_of_functions): For any two morphisms $f: X \\rightarrow Y$ and $g: Y \\rightarrow Z$, there exists a morphism $X \\rightarrow Z$, written as $g \\circ f$ or simply $gf$. \n 2. [**Associativity**](https://arbital.com/p/3h4): For any morphisms $f: X \\rightarrow Y$, $g: Y \\rightarrow Z$ and $h:Z \\rightarrow W$ composition is associative, i.e., $h(gf) = (hg)f$.\n 3. [**Identity**](https://arbital.com/p/identity_map): For any object $X$, there is a (unique) morphism, $1_X : X \\rightarrow X$ which, when composed with another morphism, leaves it unchanged. 
I.e., given $f:W \\rightarrow X$ and $g:X \\rightarrow Y$ it holds that: $1_X f = f$ and $g 1_X = g$.", "date_published": "2016-06-18T05:53:09Z", "authors": ["Eric Bruylant", "Mark Chimes", "Patrick Stevens"], "summaries": [], "tags": ["Work in progress"], "alias": "4cx"} {"id": "b156c0cb6663dd716ab7680f4f4afe7a", "title": "Dihedral group", "url": "https://arbital.com/p/dihedral_group", "source": "arbital", "source_type": "text", "text": "The dihedral group $D_{2n}$ is the group of symmetries of the $n$-vertex [https://arbital.com/p/-regular_polygon](https://arbital.com/p/-regular_polygon).\n\n# Presentation\nThe dihedral groups have very simple [presentations](https://arbital.com/p/group_presentation): $$D_{2n} \\cong \\langle a, b \\mid a^n, b^2, b a b^{-1} = a^{-1} \\rangle$$\nThe element $a$ represents a rotation, and the element $b$ represents a reflection in any fixed axis.\n\n\n# Properties\n\n- The dihedral groups $D_{2n}$ are all non-abelian for $n > 2$. ([Proof.](https://arbital.com/p/4d0))\n- The dihedral group $D_{2n}$ is a [https://arbital.com/p/-subgroup](https://arbital.com/p/-subgroup) of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$, generated by the elements $a = (123 \\dots n)$ and $b = (2, n)(3, n-1) \\dots (\\frac{n}{2}+1, \\frac{n}{2}+3)$ if $n$ is even, $b = (2, n)(3, n-1)\\dots(\\frac{n-1}{2}, \\frac{n+1}{2})$ if $n$ is odd.\n\n# Examples\n\n## $D_6$, the group of symmetries of the triangle\n\n\n\n\n# Infinite dihedral group\n\nThe infinite dihedral group has presentation $\\langle a, b \\mid b^2, b a b^{-1} = a^{-1} \\rangle$.\nIt is the \"infinite-sided\" version of the finite $D_{2n}$.\n\nWe may view the infinite dihedral group as being the subgroup of the group of [homeomorphisms](https://arbital.com/p/homeomorphism) of $\\mathbb{R}^2$ generated by a reflection in the line $x=0$ and a translation to the right by one unit.\nThe translation is playing the role of a rotation in the finite $D_{2n}$.", "date_published": "2016-06-16T18:38:00Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Definition", "Stub"], "alias": "4cy"} {"id": "63a4bf593bdd3a3c52ac566ff6f6d5c8", "title": "Dihedral groups are non-abelian", "url": "https://arbital.com/p/dihedral_groups_are_non_abelian", "source": "arbital", "source_type": "text", "text": "Let $n \\geq 3$. 
Then the [https://arbital.com/p/-4cy](https://arbital.com/p/-4cy) on $n$ vertices, $D_{2n}$, is not [abelian](https://arbital.com/p/3h2).\n\n# Proof\n\nThe most natural dihedral group [presentation](https://arbital.com/p/group_presentation) is $\\langle a, b \\mid a^n, b^2, bab^{-1} = a^{-1} \\rangle$.\nIn particular, $ba = a^{-1} b = a^{-2} a b$, so $ab = ba$ if and only if $a^2$ is the identity.\nBut $a$ is the rotation which has order $n > 2$, so $ab$ cannot be equal to $ba$.", "date_published": "2016-06-15T13:06:13Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Proof"], "alias": "4d0"} {"id": "6e46d9faf7086f3d55b7d4d64dd3f124", "title": "Needs image", "url": "https://arbital.com/p/needs_image_meta_tag", "source": "arbital", "source_type": "text", "text": "Meta tag for pages which would benefit from having images.\n\nIf a specific image has been described, use the [https://arbital.com/p/5v6](https://arbital.com/p/5v6) tag.", "date_published": "2016-08-12T21:30:49Z", "authors": ["M Yass", "Alexei Andreev", "Stephanie Zolayvar", "Patrick Stevens", "Eric Bruylant", "Mark Chimes"], "summaries": [], "tags": [], "alias": "4d3"} {"id": "f7d65b638334391f688f153ab0d8d6fd", "title": "Morphism", "url": "https://arbital.com/p/morphism", "source": "arbital", "source_type": "text", "text": "A morphism is the abstract representation of a [relation](https://arbital.com/p/3nt) between [mathematical objects](https://arbital.com/p/-mathematical_object).\n\nUsually, it is used to refer to [functions](https://arbital.com/p/3jy) mapping elements of one set to another, but it may represent a more general notion of a relation in [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7).\n\n##Isomorphisms##\n\nTo understand a morphism, it is easier to first understand the concept of an [https://arbital.com/p/-4f4](https://arbital.com/p/-4f4). Two mathematical structures (say two [groups](https://arbital.com/p/-3gd)) are called **isomorphic** if they are indistinguishable using the information of the language and theory under consideration. \n\nImagine you are the Count von Count. You care only about counting things. You don't care what it is you count, you just care how many there are. You decide that you want to collect objects you count into boxes, and you consider two boxes equal if there are the same number of elements in both boxes. How do you know if two boxes have the same number of elements? You pair them up and see if there are any left over in either box. If there aren't any left over, then the boxes are \"bijective\" and the way that you paired them up is a [bijection](https://arbital.com/p/499). A bijection is a simple form of an isomorphism and the boxes are said to be isomorphic.\n\nFor example, the theory of groups only talks about the way that elements are combined via the group operation (and whether they are the [https://arbital.com/p/-identity](https://arbital.com/p/-identity) or [inverses](https://arbital.com/p/-4sn), but that information is already given by how elements are combined under the group operation, hereafter called multiplication). The theory does not care in what order elements are put, or what they are labelled or even what they are.
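Here is a tiny illustration of that point in Python; the specific example (the integers mod 4 relabeled as powers of $i$) is ours, not part of the original page.

```python
# The cyclic group of order 4, written with two different sets of labels.
# Relabeling k -> i**k turns addition mod 4 into multiplication of the
# complex units {1, i, -1, -i}, and the two "multiplication tables" agree,
# so nothing expressible in the language of groups can tell them apart.
relabel = {0: 1, 1: 1j, 2: -1, 3: -1j}

for a in range(4):
    for b in range(4):
        assert relabel[(a + b) % 4] == relabel[a] * relabel[b]
print("Same table, different labels.")
```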
Hence, if you are using the language and theory of groups, you want to say two groups are essentially indistinguishable if their multiplication acts the same way.", "date_published": "2016-08-06T13:04:12Z", "authors": ["Patrick Stevens", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina", "Team Arbital"], "summaries": [], "tags": ["Category theory", "Work in progress"], "alias": "4d8"} {"id": "ff33d7c6c480b4c4259e3fbbe09002e9", "title": "Isomorphism", "url": "https://arbital.com/p/isomorphism", "source": "arbital", "source_type": "text", "text": "A pair of mathematical structures are **isomorphic** to each other if they are \"essentially the same\", even if they aren't necessarily equal. \n\nAn **isomorphism** is a [https://arbital.com/p/-4d8](https://arbital.com/p/-4d8) between isomorphic structures which translates one to the other in a way that preserves all the relevant structure. An important property of an isomorphism is that it can be 'undone' by its [inverse](https://arbital.com/p/-4sn) isomorphism. \n\nAn isomorphism from an object to itself is called an **[automorphism](https://arbital.com/p/automorphism)**. They can be thought of as symmetries: different ways in which an object can be mapped onto itself without changing it.\n\n##Equality and Identity##\nThe simplest isomorphism is equality: if two things are equal then they are actually the same thing (and so not actually *two* things at all). Anything is obviously indistinguishable from itself under whatever measure you might use (it has any property in common with itself) and so regardless of the theory or language, anything is isomorphic to itself. This is represented by the [identity](https://arbital.com/p/-identity_function) (iso)morphism.\n\n%%%knows-requisite([https://arbital.com/p/3gd](https://arbital.com/p/3gd)):\n##[Group Isomorphisms](https://arbital.com/p/49x)##\nFor a more technical example, the theory of groups only talks about the way that elements are combined via group operation. The theory does not care in what order elements are put, or what they are labelled or even what they are. Hence, if you are using the language and theory of groups, you want to say two groups are essentially indistinguishable if you can pair up the elements such that their group operations act the same way.\n%%%\n\n##Isomorphisms in Category Theory##\nIn [category theory](https://arbital.com/p/-4c7), an isomorphism is a morphism which has a two-sided [https://arbital.com/p/-4sn](https://arbital.com/p/-4sn). That is to say, $f:A \\to B$ is an isomorphism if there is a morphism $g: B \\to A$ where $f$ and $g$ cancel each other out.\n\nFormally, this means that both composites $fg$ and $gf$ are equal to identity morphisms (morphisms which 'do nothing' or declare an object equal to itself). 
That is, $gf = \\mathrm {id}_A$ and $fg = \\mathrm {id}_B$.", "date_published": "2016-10-20T22:07:16Z", "authors": ["Daniel Satanove", "Eric Rogstad", "Dylan Hendrickson", "Patrick Stevens", "Eric Bruylant", "Mark Chimes"], "summaries": [], "tags": ["Group isomorphism", "Category theory", "Morphism", "Needs exercises", "C-Class"], "alias": "4f4"} {"id": "56eca286cfd8534a39f79405dfd70641", "title": "Square visualization of probabilities on two events: (example) Diseasitis", "url": "https://arbital.com/p/4fh", "source": "arbital", "source_type": "text", "text": "$$\n\\newcommand{\\bP}{\\mathbb{P}}\n$$\n\nsummary: $$ \\newcommand{\\bP}{\\mathbb{P}} $$\n\nWe can visualize the [https://arbital.com/p/22s](https://arbital.com/p/22s):\n\n\n\n\nThen we can see at a glance that the probability $\\bP(D \\mid B)$ of the patient having Diseasitis given that the tongue depressor is black, despite some intuitions, isn't all that big: \n\n$$\\bP(D \\mid B) = \\frac{\\bP(B, D)}{ \\bP(B, D) + \\bP(B, \\neg D)} = \\frac{3}{7} $$\n\n\n\n\nFrom the [https://arbital.com/p/22s](https://arbital.com/p/22s): \n\n> You are screening a set of patients for a disease, which we'll call Diseasitis. Based on prior epidemiology, you expect that around 20% of the patients in the screening population will in fact have Diseasitis. You are testing for the presence of the disease using a tongue depressor containing a chemical strip. Among patients with Diseasitis, 90% turn the tongue depressor black. However, 30% of the patients without Diseasitis will also turn the tongue depressor black. Among all the patients with black tongue depressors, how many have Diseasitis? \n\nIt seems like, since Diseasitis very strongly predicts that the patient has a black tongue depressor, it should be the case that the [https://arbital.com/p/-1rj](https://arbital.com/p/-1rj) $\\bP( \\text{Diseasitis} \\mid \\text{black tongue depressor})$ is big. But actually, it turns out that a patient with a black tongue depressor is more likely than not to be completely Diseasitis-free.\n\nCan we see this fact at a glance? Below, we'll use the [https://arbital.com/p/-496](https://arbital.com/p/-496) to draw pictures and use our visual intuition.\n\n\nTo introduce some notation: our [prior probability](https://arbital.com/p/1rm) $\\bP(D)$ that the patient has Diseasitis is $0.2$. We think that if the patient is sick $(D)$, then it's 90% likely that the tongue depressor will turn black $(B)$: we assign conditional probability $\\bP(B \\mid D) = 0.9$. We assign conditional probability $\\bP(B \\mid \\neg D) = 0.3$ that the tongue depressor will be black even if the patient isn't sick. We want to know $\\bP(D \\mid B)$, the [posterior probability](https://arbital.com/p/1rp) that the patient has Diseasitis given that we've seen a black tongue depressor.\n\nIf we wanted to, we could solve this problem precisely using [https://arbital.com/p/1lz](https://arbital.com/p/1lz): \n\n$$\n\\begin{align}\n\\bP(D \\mid B) &= \\frac{\\bP(B \\mid D) \\bP(D)}{\\bP(B)}\\\\\n&= \\frac{0.9 \\times 0.2}{ \\bP(B, D) + \\bP(B, \\neg D)}\\\\\n&= \\frac{0.18}{ \\bP(D)\\bP(B \\mid D) + \\bP(\\neg D)\\bP(B \\mid \\neg D)}\\\\\n&= \\frac{0.18}{ 0.18 + 0.24}\\\\\n&= \\frac{0.18}{ 0.42} = \\frac{3}{7} \\approx 0.43\\ .\n\\end{align}\n$$\n\nSo even if we've seen a black tongue depressor, the patient is more likely to be healthy than not: $\\bP(D \\mid B) < \\bP(\\neg D \\mid B) \\approx 0.57$. 
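Here is the same arithmetic as a short Python script, using only the numbers given in the problem statement:

```python
# Posterior probability of Diseasitis given a black tongue depressor.
p_D = 0.2             # prior: P(Diseasitis)
p_B_given_D = 0.9     # P(black | sick)
p_B_given_notD = 0.3  # P(black | healthy)

p_B_and_D = p_D * p_B_given_D                 # 0.18
p_B_and_notD = (1 - p_D) * p_B_given_notD     # 0.24
p_D_given_B = p_B_and_D / (p_B_and_D + p_B_and_notD)

print(round(p_D_given_B, 3))  # 0.429, i.e. 3/7: more likely healthy than not
```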
\n\nNow, this calculation might be enlightening if you are a real expert at [https://arbital.com/p/1lz](https://arbital.com/p/1lz). A better calculation would probably be the [odds ratio form of Bayes's rule](https://arbital.com/p/1x9).\n\nBut either way, maybe there's still an intuition saying that, come on, if the tongue depressor is such a strong indicator of Diseasitis that $\\bP(B \\mid D) = 0.9$, it must be that $\\bP(D \\mid B) =big$.\n\nLet's use the [square visualization of probabilities](https://arbital.com/p/496) to make it really visibly obvious that $\\bP(D \\mid B) < \\bP(\\neg D \\mid B)$, and to figure out why $\\bP(B \\mid D) = big$ doesn't imply $\\bP(D \\mid B) =big$. \n\nWe start with the probability of $\\bP(D)$ (so we're [factoring](https://arbital.com/p/factoring_probability) our probabilities by $D$ first): \n\n\n\nNow let's break up the red column, where $D$ is true and the patient has Diseasitis, into a block for the probability $\\bP(B \\mid D)$ that $B$ is also true, and a block for the probability $\\bP(\\neg B \\mid D)$ that $B$ is false.\n\n>Among patients with Diseasitis, 90% turn the tongue depressor black.\n\nThat is, in 90% of the outcomes where $D$ happens, $B$ also happens. So $0.9$ of the red column will be dark ($B$), and $0.1$ will be light:\n\n\n\n\n>However, 30% of the patients without Diseasitis will also turn the tongue depressor black.\n\nSo we break up the blue $\\neg D$ column by $\\bP(B \\mid \\neg D) = 0.3$ and $\\bP(\\neg B \\mid \\neg D) = 0.7$: \n\n\n\n\nNow we would like to know the probability $\\bP(D \\mid B)$ of Diseasitis once we've observed that the tongue depressor is black. Let's break up our diagram by whether or not $B$ happens: \n\n\n\nConditioning on $B$ is like only looking at the part of our distribution where $B$ happens. So the probability $\\bP(D \\mid B)$ of $D$ conditioned on $B$ is the proportion of that area where $D$ also happens: \n\n\n\nHere we can see why $\\bP(D \\mid B)$ isn't all that big. It's true that $\\bP(B,D)$ is big relative to $\\bP(\\neg B,D)$, since we know that $\\bP(B \\mid D)$ is big (patients with Diseasitis almost always have black tongue depressors): \n\n\n\nBut this ratio doesn't really matter if we want to know $\\bP(D \\mid B)$, the probability that a patient with a black tongue depressor has Diseasitis. What matters is that we also assign a reasonably high probability $\\bP(B, \\neg D)$ to the patient having a black tongue depressor but nevertheless *not* suffering from Diseasitis:\n\n\n\nSo even when we see a black tongue depressor, there's still a pretty high chance the patient is healthy anyway, and our [https://arbital.com/p/-1rp](https://arbital.com/p/-1rp) $\\bP(D\\mid B)$ is not that high. Recall our square of probabilities: \n\n\n\nWhen asked about $\\bP(D\\mid B)$, we think of the really high probability $\\bP(B\\mid D) = 0.9$:\n\n\n\nReally, we should look at the part of our probability mass where $B$ happens, and see that a sizeable portion goes to places where $\\neg D$ happens, and the patient is healthy:\n\n\n\nSide note\n---\n\nThe square visualization is very similar to [frequency diagrams](https://arbital.com/p/1x1), except we can just think in terms of probability mass rather than specifically frequency. 
Also, see that page for [waterfall diagrams](https://arbital.com/p/1x1), another way to visualize updating probabilities.", "date_published": "2016-06-18T08:41:24Z", "authors": ["Tsvi BT", "Team Arbital"], "summaries": [], "tags": [], "alias": "4fh"} {"id": "b300f2a0a31331956b5e4d1208fbc36c", "title": "Computer Programming Familiarity", "url": "https://arbital.com/p/programming_familiarity", "source": "arbital", "source_type": "text", "text": "This page exists so that math explanations can be customized on the basis of whether or not the readers is familiar with coding. If you know how to code, then click the three dot icon and check the \"this requisite\" box.", "date_published": "2016-06-16T17:22:00Z", "authors": ["Kevin Clancy"], "summaries": ["This page exists so that math explanations can be customized on the basis of whether or not the readers is familiar with coding."], "tags": ["Stub"], "alias": "4fv"} {"id": "7d79287f83043d50b80118e1f90e88cc", "title": "Group conjugate", "url": "https://arbital.com/p/group_conjugate", "source": "arbital", "source_type": "text", "text": "Two elements $x, y$ of a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$ are *conjugate* if there is some $h \\in G$ such that $hxh^{-1} = y$.\n\n# Conjugacy as \"changing the worldview\"\n\nConjugating by $h$ is equivalent to \"viewing the world through $h$'s eyes\".\nThis is most easily demonstrated in the [https://arbital.com/p/-497](https://arbital.com/p/-497), where it [is a fact](https://arbital.com/p/4bh) that if $$\\sigma = (a_{11} a_{12} \\dots a_{1 n_1})(a_{21} \\dots a_{2 n_2}) \\dots (a_{k 1} a_{k 2} \\dots a_{k n_k})$$\nand $\\tau \\in S_n$, then $$\\tau \\sigma \\tau^{-1} = (\\tau(a_{11}) \\tau(a_{12}) \\dots \\tau(a_{1 n_1}))(\\tau(a_{21}) \\dots \\tau(a_{2 n_2})) \\dots (\\tau(a_{k 1}) \\tau(a_{k 2}) \\dots \\tau(a_{k n_k}))$$\n\nThat is, conjugating by $\\tau$ has \"caused us to view $\\sigma$ from the point of view of $\\tau$\".\n\nSimilarly, in the [https://arbital.com/p/-4cy](https://arbital.com/p/-4cy) $D_{2n}$ on $n$ vertices, conjugation of the rotation by a reflection yields the inverse of the rotation: it is \"the rotation, but viewed as acting on the reflected polygon\".\nEquivalently, if the polygon is sitting on a glass table, conjugating the rotation by a reflection makes the rotation act \"as if we had moved our head under the table to look upwards first\".\n\nIn general, if $G$ is a group which [acts](https://arbital.com/p/3t9) as (some of) the symmetries of a certain object $X$ %%note:Which [we can always view as being the case](https://arbital.com/p/49b).%% then conjugation of $g \\in G$ by $h \\in G$ produces a symmetry $hgh^{-1}$ which acts in the same way as $g$ does, but on a copy of $X$ which has already been permuted by $h$.\n\n# Closure under conjugation\n\nIf a subgroup $H$ of $G$ is closed under conjugation by elements of $G$, then $H$ is a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6).\nThe concept of a normal subgroup is extremely important in group theory.\n\n%%%knows-requisite([https://arbital.com/p/3t9](https://arbital.com/p/3t9)):\n# Conjugation action\n\nConjugation forms a [action](https://arbital.com/p/3t9).\nFormally, let $G$ act on itself: $\\rho: G \\times G \\to G$, with $\\rho(g, k) = g k g^{-1}$.\nIt is an exercise to show that this is indeed an action.\n%%hidden(Show solution):\nWe need to show that the identity acts trivially, and that products may be broken up to act individually.\n\n- $\\rho(gh, k) = (gh)k(gh)^{-1} = 
ghkh^{-1}g^{-1} = g \\rho(h, k) g^{-1} = \\rho(g, \\rho(h, k))$;\n- $\\rho(e, k) = eke^{-1} = k$.\n%%\n\nThe [stabiliser](https://arbital.com/p/group_stabiliser) of this action, $\\mathrm{Stab}_G(g)$ for some fixed $g \\in G$, is the set of all elements $k$ such that $kgk^{-1} = g$: that is, such that $kg = gk$.\nEquivalently, it is the [centraliser](https://arbital.com/p/group_centraliser) of $g$ in $G$: the subgroup of all elements which commute with $g$.\n\nThe [orbit](https://arbital.com/p/group_orbit) of the action, $\\mathrm{Orb}_G(g)$ for some fixed $g \\in G$, is the [https://arbital.com/p/-4bj](https://arbital.com/p/-4bj) of $g$ in $G$.\nBy the [https://arbital.com/p/-4l8](https://arbital.com/p/-4l8), this immediately gives that the size of a conjugacy class divides the [order](https://arbital.com/p/3gg) of the parent group.\n%%%", "date_published": "2016-06-20T07:05:54Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Math 2"], "alias": "4gk"} {"id": "0c8e1b23e178d18f2ef1ea613910688e", "title": "There is only one logarithm", "url": "https://arbital.com/p/only_one_log", "source": "arbital", "source_type": "text", "text": "When you learned that we use [https://arbital.com/p/-decimal_notation](https://arbital.com/p/-decimal_notation) to record numbers, you may have wondered: If we used a different number system, such as a number system with 12 symbols instead of 10, how would that change the cost of writing numbers down? Are some number systems more efficient than others? On this page, we'll answer that question, among others.\n\nIn [https://arbital.com/p/4bz](https://arbital.com/p/4bz), we saw that any $f$ with [domain](https://arbital.com/p/3js) [$\\mathbb R^+$](https://arbital.com/p/positive_reals) that satisfies the equation $f(x \\cdot y) = f(x) + f(y)$ for all $x$ and $y$ in its domain is either trivial (i.e., it sends all inputs to 0), or it is [isomorphic](https://arbital.com/p/4f4) to [$\\log_b$](https://arbital.com/p/3nd) for some base $b$. Thus, if we want a function's output to change by a constant that depends on $y$ every time its input changes by a factor of $y$, we only have one meaningful degree of freedom, and that's the choice of $b \\neq 1$ such that $f(b) = 1$. Once we choose which value of $b$ $f$ sends to 1, the entire behavior of the function is fully defined.\n\nHow much freedom does this choice give us? Almost none! To see this, let's consider the difference between choosing base $b$ as opposed to an alternative base $c.$ Say we have an input $x \\in \\mathbb R^+$ — what's the difference between $\\log_b(x)$ and $\\log_c(x)?$\n\nWell, $x = c^y$ for some $y$, because $x$ is positive. Thus, $\\log_c(x)$ $=$ $\\log_c(c^y)$ $=$ $y.$ By contrast, $\\log_b(x)$ $=$ $\\log_b(c^y)$ $=$ $y \\log_b(c).$ ([Refresher](https://arbital.com/p/4bz).) Thus, $\\log_c$ and $\\log_b$ disagree on $x$ only by a constant factor — namely, $\\log_b(c).$ And this is true for _any_ $x$ — you can get $\\log_c(x)$ by calculating $\\log_b(x)$ and dividing by $\\log_b(c):$ $$\\log_c(x) = \\frac{\\log_b(x)}{\\log_b(c)}.$$\n\nThis is a remarkable equation, in the sense that it's worth remarking on.
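If you like, you can check it numerically; here's a minimal sketch using Python's standard `math` module (the bases and inputs below are arbitrary choices):

```python
import math

# Change of base: log_c(x) = log_b(x) / log_b(c).
b, c = 10, 2
for x in [3, 42, 1e6]:
    assert math.isclose(math.log(x, c), math.log(x, b) / math.log(c, b))
print("log_c(x) and log_b(x) / log_b(c) agree.")
```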
No matter what base $b$ we choose for $f$, and no matter what input $x$ we put into $f$, if we want to figure out what we would have gotten if we chose base $c$ instead, all we need to do is calculate $f(c)$ and divide $f(x)$ by $f(c).$ In other words, the different logarithm functions actually aren't very different at all — each one has all the same information as all the others, and you can recover the behavior of $\\log_c$ using $\\log_b$ and a simple calculation!\n\n(By a symmetric argument, you can show that $\\log_b(x) = \\frac{\\log_c(x)}{\\log_c(b)},$ which implies that $\\log_b(c) = \\frac{1}{\\log_c(b)}$, a fact we already knew from from studying [https://arbital.com/p/-44l](https://arbital.com/p/-44l).)\n\nAbove, we asked:\n\n> When you learned that we use [https://arbital.com/p/-decimal_notation](https://arbital.com/p/-decimal_notation) to record numbers, you may have wondered: If we used a different number system, such as a number system with 12 symbols instead of 10, how would that change the cost of writing numbers down? Are some number systems more efficient than others?\n\nFrom the fact that the [length of a written number grows logarithmically with the magnitude of the number](https://arbital.com/p/416) and the above equation, we can see that, no matter how large a number is, its base 10 representation differs in length from its base 12 representation only by a factor of $\\log_{10}(12) \\approx 1.08$. Similarly, the [binary](https://arbital.com/p/binary_notation) representation of a number is always about $\\log_2(10) \\approx 3.32$ times longer than its decimal representation. Because there is only one logarithm function (up to a multiplicative constant), which number base you use to represent numbers only affects the size of the representation by a multiplicative constant.\n\nSimilarly, if you ever want to convert between logarithmic measurements in different bases, you only need to perform a single multiplication (or division). For example, if someone calculated how many hours it took for the bacteria colony to triple, and you want to know how long it took to double, all you need to do is multiply by $\\log_3(2) \\approx 0.63.$ There is essentially only one logarithm function; the base merely defines the unit of measure. Given measurements taken in one base, you can easily convert to another.\n\nWhich base of the logarithm is best to use? Well, by the arguments above, it doesn't really matter which base you pick — it's easy enough to convert between bases if you need to. However, in computer science, [https://arbital.com/p/-3qq](https://arbital.com/p/-3qq), and many parts of physics, it's common to choose base 2. Why? Two reasons. First, because if you're designing a physical system to store data, a system that only has two stable states is the simplest available. Second, because if you need to pick an arbitrary amount that a thing has grown to call \"one growth,\" then doubling (i.e., \"it's now once again as large as it was when it started\") is a pretty natural choice. (For example, many people find the 'half life' of a radioactive element to be a more intuitive notion than its 'fifth life,' even though both are equally valid measures of radioactivity.)\n\nIn other parts of physics and engineering, the log base 10 is more common, because it has a natural relationship to the way we represent numbers (using a base 10 representation). 
That is, $\\log_{10}$ is one of the easiest logarithms to calculate in your head (because you can just [count the number of digits in a number's representation](https://arbital.com/p/416)), so it's often used by engineers.\n\nAmong mathematicians, the most common base of the logarithm is [$e$](https://arbital.com/p/e) $\\approx 2.718,$ a [https://arbital.com/p/-trancendental_number](https://arbital.com/p/-trancendental_number) that cannot be fully written down. Why this strange base? That's an [advanced topic](https://arbital.com/p/base_e_is_natural) not covered in this tutorial. In brief, though, if you look at the [derivative](https://arbital.com/p/47d) of the logarithm (i.e., the way the function changes if you perturb the input), if the input was $x$ and you perturb it a tiny amount, the output changes by about a factor of $\\frac{1}{x}$ times a constant $y$ that depends on the base of the logarithm, and $e$ is the base that causes that constant to be $1.$ For this reason and a handful of others, there is a compelling sense in which $\\log_e$ is the \"true form\" of the logarithm. (So much so that it has its own symbol, $\\ln,$ which is called the \"natural logarithm.\")\n\nRegardless of which base you choose, converting between bases is easy — and as equation (1) shows, you only need to have one logarithm function in your toolkit in order to understand any other logarithm. Because if you have $\\log_b$ and you want to know $\\log_c(x)$, you can just calculate $\\log_b(x)$ and divide it by $\\log_b(c)$, and then you're done. In other words, there is really only one logarithm function, and it doesn't matter which one you pick.\n\n[A visualization of the graph of the logarithm where you can change the base, but it changes only the labels on the axes while the curve itself remains completely static.](https://arbital.com/p/fixme:)", "date_published": "2016-06-20T20:50:28Z", "authors": ["Eric Rogstad", "Nate Soares", "Patrick Stevens"], "summaries": [], "tags": ["Needs summary", "Work in progress"], "alias": "4gm"} {"id": "3883718ecbeebc0ae09fa3f0ee137722", "title": "The log lattice", "url": "https://arbital.com/p/log_lattice", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/45q](https://arbital.com/p/45q) and other pages give physical interpretations of what logarithms are really doing. Now it's time to understand the raw numerics. Logarithm functions often output irrational, [https://arbital.com/p/-transcendental](https://arbital.com/p/-transcendental) numbers. For example, $\\log_2(3)$ starts with\n\n1.5849625007211561814537389439478165087598144076924810604557526545410982277943585625222804749180882420909806624750591673437175524410609248221420839506216982994936575922385852344415825363027476853069780516875995544737266834624612364248850047581810676961316404807130823233281262445248670633898014837234235783662478390118977006466312634223363341821270106098049177472541357330110499026268818251703576994712157113638912494135752192998699040767081539505404488360\n\nWhat do these long strings of digits mean? Why these numbers, in particular? We already have the tools to answer those questions, all that remains is to put them together into a visualization. 
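(If you want to re-derive that digit string yourself, one way is Python's standard `decimal` module; the choice of 50 digits of precision below is arbitrary.)

```python
from decimal import Decimal, getcontext

# log_2(3) = ln(3) / ln(2), to 50 significant digits.
getcontext().prec = 50
print(Decimal(3).ln() / Decimal(2).ln())
# -> 1.58496250072115618145373894394781650875981440769...
```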
In [https://arbital.com/p/44l](https://arbital.com/p/44l), we saw why it is that $\\log_2(3)$ must be this number: The cost of a [3-digit](https://arbital.com/p/4sj) in terms of 2-digits is more than 1 and less than 2; and the cost of ten 3-digits = one $3^{10}$-digit in terms of 2-digits is more than 15 and less than 16, and the cost of a hundred 3-digits = one $3^{100}$-digit in terms of 2-digits is more than 158 and less than 159, and so on. The giant number above is telling us about the _entire sequence_ of $3^{n}$-digit costs, for every $n$. If we double it, we get the cost of a 9-digit in terms of 2-digits. If we triple it, we get the cost of a 27-digit in terms of 2-digits.\n\nIn fact, this number also interacts correctly with all the other outputs of $\\log_2.$ $\\log_2(2)=1,$ and if we add 1 to the number above, we get the cost of a 6-digit in terms of 2-digits.\n\nThat is, the outputs of the log base 2 form a gigantic lattice, with an often-irrational output corresponding to each input. The gigantic lattice is set up just so, such that whenever you start at $x$ on the left and go to $x \\cdot y,$ the output you get on the right is the output that corresponds to $x$ plus the output that corresponds to $y$.\n\n![The values of log2 on integers from 1 to 10](http://i.imgur.com/FvRymOk.png)\n\n_The values of the logarithm base 2 on integers from 1 to 10._\n\nThe log lattice is this giant lattice assigning a specific number to each number, such that multiplying on the left corresponds to adding on the right.\n\n![A demonstration that log2(3^10) is 10 log2(3)](http://i.imgur.com/tuTYlge.png)\n\n_An illustration of some of the connections between the logs base 2 of powers of 3._\n\nIt's an intricate lattice that preserves a huge amount of structure — all the structure of the numbers, in fact, given that $\\log$ is invertible. $\\log_2(3)$ is a number that is simultaneously $1$ less than $\\log_2(6),$ and half of $\\log_2(9)$, and a tenth of $\\log_2(3^{10}),$ and $\\log_2(3^9)$ less than $\\log_2(3^{10}).$ The value of $\\log_2(3)$ has to satisfy a _massive_ number of constraints, in order to be precisely the number such that multiplication on the left corresponds to addition on the right. It's no surprise, then, that [it has no concise decimal expansion](https://arbital.com/p/4n8).\n\n![A demonstration of the relationships between the log2 powers of 3](http://i.imgur.com/e650J5j.png)\n\n_Notice the connections between the logs base 2 of the powers of 3. The log base 2 of 3 is a tenth of the log base 2 of 3^10, which is in turn a tenth of the log base 2 of 3^100; that is twice the log base 2 of 3^50, which is twice the log base 2 of 3^25. And, of course, the log base 2 of 3^101 is about 1.58 units larger than the log base 2 of 3^100._\n\nIn fact, the constraints on this log lattice are so tight that there's only one way to do it, up to a multiplicative constant. You can multiply everything on the right side by a constant, and that's the only thing you can do without disturbing the structure. For example, here's part of the log lattice viewed in base 3:\n\n![The base 3 log lattice](http://i.imgur.com/yhcy9BZ.png)\n\nIt's exactly the same as the log lattice viewed in base 2, except that everything on the right has been divided by about 1.58 (i.e., multiplied by about 0.63).\n\nWhat are logarithms doing? Well, fundamentally, there is a way to transform numbers such that what was once multiplication is now addition.
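Concretely, in code (the particular numbers below are arbitrary):

```python
import math

# Multiply numbers directly, or add their logs and transform back at the end.
xs = [3, 7, 12, 0.25]

direct = 1.0
for x in xs:
    direct *= x                                # ordinary multiplication

via_logs = 2 ** sum(math.log2(x) for x in xs)  # addition on the right-hand side

assert math.isclose(direct, via_logs)          # both give 63.0
```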
If you were going to do multiplication to some numbers, you can transform them into this lattice, and then do addition, and then transform them back, and you'll get the same result. There's only one way to preserve all that structure, although if you want to view the structure, you need to (arbitrarily) choose some number on the left to call \"1\" on the right. Given that choice $b$ of translation, the log base $b$ taps into that intricate lattice.\n\n[https://arbital.com/p/visualization](https://arbital.com/p/visualization)", "date_published": "2016-07-30T01:12:49Z", "authors": ["Eric Rogstad", "Nate Soares", "Malcolm McCrimmon", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "4gp"} {"id": "21d15afffb06ba8aa76dd0f4bd0ac2d7", "title": "Formal definition", "url": "https://arbital.com/p/formal_definition_meta_tag", "source": "arbital", "source_type": "text", "text": "Meta tag for pages which give formal or brief jargon-heavy technical definitions of a concept. Formal definitions should be secondary [lenses](https://arbital.com/p/17b) when explanations are available.", "date_published": "2016-07-13T22:30:18Z", "authors": ["Eric Bruylant", "Jessica Taylor", "Patrick Stevens", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "4gs"} {"id": "f619ea58b5c666dbf9d4525ac5665e55", "title": "Life in logspace", "url": "https://arbital.com/p/logspace", "source": "arbital", "source_type": "text", "text": "[https://arbital.com/p/4gp](https://arbital.com/p/4gp) hints at the reason that engineers, scientists, and AI researchers find logarithms so useful.\n\nAt this point, you may have a pretty good intuition for what logarithms are doing, and you may be saying to yourself\n\n> Ok, I understand logarithms, and I see how they could be a pretty useful concept sometimes. Especially if, like, someone happens to have an out-of-control bacteria colony lying around. I can also see how they're useful for measuring representations of things (like the length of a number when you write it down, or the cost of communicating a message). If I squint, I can _sort of_ see why it is that the brain encodes many percepts logarithmically (such that the tone that I perceive is the $\\log_2$ of the frequency of vibration in the air), as it's not too surprising that evolution occasionally hit upon this natural tool for representing things. I guess logs are pretty neat.\n\nThen you start seeing logarithms in other strange places — say, you learn that [AlphaGo](https://en.wikipedia.org/wiki/AlphaGo) primarily manipulated [logarithms of the probability](https://arbital.com/p/log_probability) that the human would take a particular action instead of manipulating the probabilities directly. Then maybe you learn that scientists, engineers, and navigators of yore used to pay large sums of money for giant, pre-computed tables of logarithm outputs.\n\nAt that point, you might say\n\n> Hey, wait, why do these logs keep showing up all over the place? Why are they _that_ ubiquitous? What's going on here?\n\nThe reason here is that _addition is easier than multiplication,_ and this is true both for navigators of yore (who had to do lots of calculations to keep their ships on course), and for modern AI algorithms.\n\nLet's say you have a whole bunch of numbers, and you know you're going to have to perform lots and lots of operations on them that involve taking some numbers you already have and multiplying them together. 
There are two ways you could do this: One is that you could bite the bullet and multiply them all, which gets pretty tiresome (if you're doing it by hand) and/or pretty expensive (if you're training a gigantic neural network on millions of games of go). Alternatively, you could put all your numbers through a log function (a one-time cost), and then perform all your operations using addition (much cheaper!), and then transform them back using [https://arbital.com/p/-exponentiation](https://arbital.com/p/-exponentiation) when you're done (another one-time cost).\n\nEmpirically, the second method tends to be faster, cheaper, and more convenient (at the cost of some precision, given that most log outputs are [transcendental](https://arbital.com/p/transcendental_number)).\n\nThis is the last insight about logarithms contained in this tutorial, and it is a piece of practical advice: If you have to perform a large chain of multiplications, you can often do it cheaper by first transforming your problem into \"log space\", where multiplication acts like addition (and exponentiation acts like multiplication). Then you can perform a bunch of additions instead, and transform the answer back into normal space at the end.", "date_published": "2016-07-29T22:26:10Z", "authors": ["Malcolm McCrimmon", "Nate Soares", "Eric Rogstad", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "4h0"} {"id": "071623e979361c5b32b0debe4e8ed312", "title": "The End (of the basic log tutorial)", "url": "https://arbital.com/p/log_tutorial_end", "source": "arbital", "source_type": "text", "text": "That concludes our introductory tutorial on logarithms! You have made it to the end.\n\nThroughout this tutorial, we saw that the logarithm base $b$ of $x$ calculates the number of $b$-factors in $x.$ Hopefully, this claim now means more to you than it once did. We've seen a number of different ways of interpreting what logarithms are doing, including:\n\n- $\\log_b(x) = y$ means [\"it takes about $y$ digits to write $x$ in base $b$.\"](https://arbital.com/p/416)\n- $\\log_b(x) = y$ means [\"it takes about $y$ $b$-digits to emulate an $x$-digit.\"](https://arbital.com/p/44l)\n- $\\log_b(x) = y$ means [\"if the space of possible messages to send goes up by a factor of $x$, then the cost, in $b$-digits, goes up by a factor of $y$](https://arbital.com/p/45q)\n- And, simply, $\\log_b(x) = y$ means that if you start with 1 and grow it by factors of $b$, then after $y$ iterations of this your result will be $x.$\n\nFor example, $\\log_2(100)$ counts the number of doublings that constitute a factor-of-100 increase. (The answer is more than 6 doublings, but slightly less than 7 doublings).\n\nWe've also seen that any function $f$ whose output grows by a constant (that depends on $y$) every time its input grows by a factor of $y$ is [very likely a logarithm function](https://arbital.com/p/4bz), and that, in essence, [https://arbital.com/p/-4gm](https://arbital.com/p/-4gm) function.\n\nWe've glanced at the [underlying structure](https://arbital.com/p/4gp) that all logarithm functions tap into, and we've briefly discussed [what makes working with logarithms so dang useful](https://arbital.com/p/4h0).\n\nThere are also a huge number of questions about, applications for, and extensions of the logarithm that we _didn't_ explore. 
Those include, but are not limited to:\n\n- Why is $e$ the natural base of the logarithm?\n- What is up with the link between logarithms, exponentials, and roots?\n- What is the derivative of $\\log_b(x)$ and why is it proportional to $\\frac{1}{x}$?\n- How can logarithms be efficiently calculated?\n- What happens when we extend logarithms to complex numbers, and why is the result a [https://arbital.com/p/-multifunction](https://arbital.com/p/-multifunction)?\n\nAnswering these questions will require an advanced tutorial on logarithms. Such a thing does not exist yet, but you can help make it happen.", "date_published": "2016-09-20T23:27:16Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "4h2"} {"id": "48031479529c5f413eb60081680e5050", "title": "Normal subgroup", "url": "https://arbital.com/p/normal_subgroup", "source": "arbital", "source_type": "text", "text": "A *normal subgroup* $N$ of group $G$ is one which is closed under [conjugation](https://arbital.com/p/4gk): for all $h \\in G$, it is the case that $\\{ h n h^{-1} : n \\in N \\} = N$.\nIn shorter form, $hNh^{-1} = N$.\n\nSince [conjugacy is equivalent to \"changing the worldview\"](https://arbital.com/p/4gk), a normal subgroup is one which \"looks the same from the point of view of every element of $G$\".\n\nA subgroup of $G$ is normal if and only if it is the [kernel](https://arbital.com/p/49y) of some [https://arbital.com/p/-47t](https://arbital.com/p/-47t) from $G$ to some group $H$. ([Proof.](https://arbital.com/p/4h7))\n\n%%%knows-requisite([https://arbital.com/p/category_theory_equaliser](https://arbital.com/p/category_theory_equaliser)):\nFrom a category-theoretic point of view, the kernel of $f$ is an equaliser of an arrow $f$ with the zero arrow; it is therefore universal such that composition with $f$ yields zero.\n%%%", "date_published": "2016-06-18T13:43:43Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Definition", "Stub"], "alias": "4h6"} {"id": "307c3e97643d0bcd09743442236b0c5e", "title": "Subgroup is normal if and only if it is the kernel of a homomorphism", "url": "https://arbital.com/p/subgroup_normal_iff_kernel_of_homomorphism", "source": "arbital", "source_type": "text", "text": "Let $N$ be a [https://arbital.com/p/-subgroup](https://arbital.com/p/-subgroup) of [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$.\nThen $N$ is [normal](https://arbital.com/p/4h6) in $G$ if and only if there is a group $H$ and a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $\\phi:G \\to H$ such that the [kernel](https://arbital.com/p/49y) of $\\phi$ is $N$.\n\n# Proof\n\n## \"Normal\" implies \"is a kernel\"\nLet $N$ be normal, so it is closed under [conjugation](https://arbital.com/p/4gk).\nThen we may define the [https://arbital.com/p/-quotient_group](https://arbital.com/p/-quotient_group) $G/N$, whose elements are the [left cosets](https://arbital.com/p/group_coset) of $N$ in $G$, and where the operation is that $gN + hN = (g+h)N$.\nThis group is well-defined ([proof](https://arbital.com/p/quotient_by_subgroup_is_well_defined_iff_normal)).\n\nNow there is a homomorphism $\\phi: G \\to G/N$ given by $g \\mapsto gN$.\nThis is indeed a homomorphism, by definition of the group operation $gN + hN = (g+h)N$.\n\nThe kernel of this homomorphism is precisely $\\{ g : gN = N \\}$; that is simply $N$:\n\n- Certainly $N \\subseteq \\{ g : gN = N \\}$ (because $nN = N$ for all $n$, since $N$ is closed as a subgroup of $G$);\n- We have $\\{ g : gN = N \\} 
\\subseteq N$ because if $gN = N$ then in particular $g e \\in N$ (where $e$ is the group identity) so $g \\in N$.\n\n## \"Is a kernel\" implies \"normal\"\nLet $\\phi: G \\to H$ have kernel $N$, so $\\phi(n) = e$ if and only if $n \\in N$.\nWe claim that $N$ is closed under conjugation by members of $G$.\n\nIndeed, $\\phi(h n h^{-1}) = \\phi(h) \\phi(n) \\phi(h^{-1}) = \\phi(h) \\phi(h^{-1})$ since $\\phi(n) = e$.\nBut that is $\\phi(h h^{-1}) = \\phi(e)$, so $hnh^{-1} \\in N$.\n\nThat is, if $n \\in N$ then $hnh^{-1} \\in N$, so $N$ is normal.", "date_published": "2016-06-17T08:33:06Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4h7"} {"id": "3717223c08c4e6987d727c04dc1735c8", "title": "Quotient by subgroup is well defined if and only if subgroup is normal", "url": "https://arbital.com/p/quotient_by_subgroup_is_well_defined_iff_normal", "source": "arbital", "source_type": "text", "text": "Let $G$ be a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) and $N$ a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $G$.\nThen we may define the *quotient group* $G/N$ to be the set of [left cosets](https://arbital.com/p/4j4) $gN$ of $N$ in $G$, with the group operation that $gN + hN = (gh)N$.\nThis is well-defined if and only if $N$ is normal.\n\n# Proof\n\n## $N$ normal implies $G/N$ well-defined\n\nRecall that $G/N$ is well-defined if \"it doesn't matter which way we represent a coset\": whichever coset representatives we use, we get the same answer.\n\nSuppose $N$ is a normal subgroup of $G$.\nWe need to show that given two representatives $g_1 N = g_2 N$ of a coset, and given representatives $h_1 N = h_2 N$ of another coset, that $(g_1 h_1) N = (g_2 h_2)N$.\n\nSo given an element of $g_1 h_1 N$, we need to show it is in $g_2 h_2 N$, and vice versa.\n\nLet $g_1 h_1 n \\in g_1 h_1 N$; we need to show that $h_2^{-1} g_2^{-1} g_1 h_1 n \\in N$, or equivalently that $h_2^{-1} g_2^{-1} g_1 h_1 \\in N$.\n\nBut $g_2^{-1} g_1 \\in N$ because $g_1 N = g_2 N$; let $g_2^{-1} g_1 = m$.\nSimilarly $h_2^{-1} h_1 \\in N$ because $h_1 N = h_2 N$; let $h_2^{-1} h_1 = p$.\n\nThen we need to show that $h_2^{-1} m h_1 \\in N$, or equivalently that $p h_1^{-1} m h_1 \\in N$.\n\nSince $N$ is closed under conjugation and $m \\in N$, we must have that $h_1^{-1} m h_1 \\in N$;\nand since $p \\in N$ and $N$ is closed under multiplication, we must have $p h_1^{-1} m h_1 \\in N$ as required.\n\n## $G/N$ well-defined implies $N$ normal\n\nFix $h \\in G$, and consider $hnh^{-1} N + hN$.\nSince the quotient is well-defined, this is $(hnh^{-1}h) N$, which is $hnN$ or $hN$ (since $nN = N$, because $N$ is a subgroup of $G$ and hence is closed under the group operation).\nBut that means $hnh^{-1}N$ is the identity element of the quotient group, since when we added it to $hN$ we obtained $hN$ itself.\n\nThat is, $hnh^{-1}N = N$.\nTherefore $hnh^{-1} \\in N$.\n\nSince this reasoning works for any $h \\in G$, it follows that $N$ is closed under conjugation by elements of $G$, and hence is normal.", "date_published": "2016-06-20T06:58:24Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4h9"} {"id": "88f82ae09ffaaaa88fe95afddc3b584e", "title": "Alternating group", "url": "https://arbital.com/p/alternating_group", "source": "arbital", "source_type": "text", "text": "The *alternating group* $A_n$ is defined as a certain subgroup of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$: namely, the collection of all elements which can be made by 
multiplying together an even number of [transpositions](https://arbital.com/p/4cn).\nThis is a well-defined notion ([proof](https://arbital.com/p/4hg)).\n\n%%%knows-requisite([https://arbital.com/p/4h6](https://arbital.com/p/4h6)):\n$A_n$ is a [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $S_n$; it is the [quotient](https://arbital.com/p/quotient_group) of $S_n$ by the [sign homomorphism](https://arbital.com/p/4hk).\n%%%\n\n# Examples\n\n- A [cycle](https://arbital.com/p/49f) of even length is an odd permutation in the sense that it can only be made by multiplying an odd number of transpositions.\nFor example, $(1354)$ is equal to $(54)(34)(14)$.\n- A cycle of odd length is an even permutation, in that it can only be made by multiplying an even number of transpositions.\nFor example, $(132)$ is equal to $(13)(23)$.\n\n- The alternating group $A_4$ consists precisely of twelve elements: the identity, $(12)(34)$, $(13)(24)$, $(14)(23)$, $(123)$, $(124)$, $(134)$, $(234)$, $(132)$, $(143)$, $(142)$, $(243)$.\n\n# Properties\n\n%%%knows-requisite([https://arbital.com/p/4h6](https://arbital.com/p/4h6)):\nThe alternating group $A_n$ is of [index](https://arbital.com/p/index_of_a_subgroup) $2$ in $S_n$.\nTherefore $A_n$ is [normal](https://arbital.com/p/4h6) in $S_n$ ([proof](https://arbital.com/p/4hl)).\nAlternatively we may give the homomorphism explicitly of which $A_n$ is the [kernel](https://arbital.com/p/49y): it is the [sign homomorphism](https://arbital.com/p/4hk).\n%%%\n\n- $A_n$ is generated by its $3$-cycles. ([Proof.](https://arbital.com/p/4hs))\n- $A_n$ is [simple](https://arbital.com/p/4jc). ([Proof.](https://arbital.com/p/4jb))\n- The [conjugacy classes](https://arbital.com/p/4bj) of $A_n$ are [easily characterised](https://arbital.com/p/alternating_group_five_conjugacy_classes).", "date_published": "2016-06-18T10:15:05Z", "authors": ["Patrick Stevens", "Team Arbital"], "summaries": [], "tags": ["Definition"], "alias": "4hf"} {"id": "159775f0d8377136ab31b2d02faa7e51", "title": "The collection of even-signed permutations is a group", "url": "https://arbital.com/p/even_signed_permutations_form_a_group", "source": "arbital", "source_type": "text", "text": "The collection of elements of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$ which are made by multiplying together an even number of transpositions forms a subgroup of $S_n$.\n\nThis proves that the [https://arbital.com/p/-alternating_group](https://arbital.com/p/-alternating_group) $A_n$ is well-defined, if it is given as \"the subgroup of $S_n$ containing precisely that which is made by multiplying together an even number of transpositions\".\n\n# Proof\n\nFirstly we must check that \"I can only be made by multiplying together an even number of transpositions\" is a well-defined notion; this [is in fact true](https://arbital.com/p/4hh).\n\nWe must check the group axioms.\n\n- Identity: the identity is simply the product of no transpositions, and $0$ is even.\n- Associativity is inherited from $S_n$.\n- Closure: if we multiply together an even number of transpositions, and then a further even number of transpositions, we obtain an even number of transpositions.\n- Inverses: if $\\sigma$ is made of an even number of transpositions, say $\\tau_1 \\tau_2 \\dots \\tau_m$, then its inverse is $\\tau_m \\tau_{m-1} \\dots \\tau_1$, since a transposition is its own inverse.", "date_published": "2016-06-17T11:43:40Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4hg"} {"id":
"e0c9a9c0dc94332f8e4ed8d92b847488", "title": "The sign of a permutation is well-defined", "url": "https://arbital.com/p/sign_of_permutation_is_well_defined", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$ contains elements which are made up from [transpositions](https://arbital.com/p/4cn) ([proof](https://arbital.com/p/4cp)).\nIt is a fact that if $\\sigma \\in S_n$ can be made by multiplying together an even number of transpositions, then it cannot be made by multiplying an odd number of transpositions, and vice versa.\n\n%%%knows-requisite([https://arbital.com/p/47y](https://arbital.com/p/47y)):\nEquivalently, there is a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) from $S_n$ to the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_2 = \\{0,1\\}$, taking the value $0$ on those permutations which are made from an even number of permutations and $1$ on those which are made from an odd number.\n%%%\n\n# Proof\n\nLet $c(\\sigma)$ be the number of cycles in the [disjoint cycle decomposition](https://arbital.com/p/49f) of $\\sigma \\in S_n$, including singletons.\nFor example, $c$ applied to the identity yields $n$, while $c((12)) = n-1$ because $(12)$ is shorthand for $(12)(3)(4)\\dots(n-1)(n)$.\nWe claim that multiplying $\\sigma$ by a transposition either increases $c(\\sigma)$ by $1$, or decreases it by $1$.\n\nIndeed, let $\\tau = (kl)$.\nEither $k, l$ lie in the same cycle in $\\sigma$, or they lie in different ones.\n\n- If they lie in the same cycle, then $$\\sigma = \\alpha (k a_1 a_2 \\dots a_r l a_s \\dots a_t) \\beta$$ where $\\alpha, \\beta$ are disjoint from the central cycle (and [hence commute](https://arbital.com/p/49g) with $(kl)$).\nThen $\\sigma (kl) = \\alpha (k a_s \\dots a_t)(l a_1 \\dots a_r) \\beta$, so we have broken one cycle into two.\n- If they lie in different cycles, then $$\\sigma = \\alpha (k a_1 a_2 \\dots a_r)(l b_1 \\dots b_s) \\beta$$ where again $\\alpha, \\beta$ are disjoint from $(kl)$.\nThen $\\sigma (kl) = \\alpha (k b_1 b_2 \\dots b_s l a_1 \\dots a_r) \\beta$, so we have joined two cycles into one.\n\nTherefore $c$ takes even values if there are evenly many transpositions in $\\sigma$, and odd values if there are odd-many transpositions in $\\sigma$.\n\nMore formally, let $\\sigma = \\alpha_1 \\dots \\alpha_a = \\beta_1 \\dots \\beta_b$, where $\\alpha_i, \\beta_j$ are transpositions.\n%%%knows-requisite([https://arbital.com/p/modular_arithmetic](https://arbital.com/p/modular_arithmetic)):\n(The following paragraph is more succinctly expressed as: \"$c(\\sigma) \\equiv n+a \\pmod{2}$ and also $\\equiv n+b \\pmod{2}$, so $a \\equiv b \\pmod{2}$.\")\n%%%\nThen $c(\\sigma)$ flips odd-to-even or even-to-odd for each integer $1, 2, \\dots, a$; it also flips odd-to-even or even-to-odd for each integer $1, 2, \\dots, b$.\nTherefore $a$ and $b$ must be of the same [parity](https://arbital.com/p/even_odd_parity).", "date_published": "2016-06-28T12:14:29Z", "authors": ["Patrick Stevens", "AM AM"], "summaries": [], "tags": [], "alias": "4hh"} {"id": "7e3a5ca536d4af7014f6db5032c8f3b6", "title": "Isomorphism: Intro (Math 0)", "url": "https://arbital.com/p/Isomorphism_intro_math_0", "source": "arbital", "source_type": "text", "text": "If two things are essentially the same from a certain perspective, and they only differ in unimportant details, then they are **isomorphic**.\n\n##Comparing Amounts##\nConsider the Count von Count. He cares only about counting things. 
He doesn't care what they are, just how many there are. He decides that he wants to collect items into plastic crates, and he considers two crates equal if both contain the same number of items. \n\n![Equivalent Crates](http://i.imgur.com/sG7Qfyv.jpg)\n\nNow Elmo comes to visit, and he wants to impress the Count, but Elmo is not great at counting. Without counting them explicitly, how can Elmo tell if two crates contain the same number of items?\n\nWell, he can take one item out of each crate and put the pair to one side.\n\n![Pairing one pair of items](https://i.imgur.com/myTqIrRl.jpg)\n\n He continues pairing items up in this way and when one crate runs out he checks if there are any left over in the other crate. If there aren't any left over, then he knows there were the same number of items in both crates. \n\n![Pairing all items in one crate with items in the other](https://i.imgur.com/S87JO2nl.jpg)\n\nSince the Count von Count only cares about counting things, the two crates are basically equivalent, and might as well be the same crate to him. Whenever two objects are the same from a certain perspective, we say that they are **isomorphic**.\n\nIn this example, the way in which the crates were the same is that each item in one crate could be paired with an item in the other.\n\n![Pairing all items in one box with items in the other](https://i.imgur.com/S87JO2nl.jpg)\n\n![Bijection between crates](https://i.imgur.com/53YbraFl.jpg)\n\nThis wouldn't have been possible if the crates had different numbers of items in them.\n\n\n![Different numbers of items](https://i.imgur.com/F5rLwAsl.jpg)\n\n![No way to pair items](https://i.imgur.com/KCI9UOvl.jpg)\n\n\nWhenever you can match each item in one collection with exactly one item in another collection, we say that the collections are **[bijective](https://arbital.com/p/-499)** and the way you paired them is a **[bijection](https://arbital.com/p/-499)**. A bijection is a specific kind of isomorphism.\n\n![Bijection between crates](https://i.imgur.com/53YbraFl.jpg)\n\nNote that there might be many different bijections between two bijective things.\n\n![Another bijection between crates](http://i.imgur.com/Q6Si1ZX.png)\n\nIn fact, all that counting involves is pairing up the things you want to count, either with your fingers or with the concepts of 'numbers' in your head. If there are as many objects in one crate as there are numbers from one to seven, and there are as many objects in another crate as numbers from one to seven, then both crates contain the same number of objects.\n\n##Comparing Maps##\nNow imagine that you have a map of the London Underground. Such a map is not to scale, nor does it even show how the tracks bend or which station is in which direction compared to another. They only record which stations are connected. \n\n![Map of the London Underground](http://i.imgur.com/9d8oX2K.gif)\n\nYour Spanish friend is coming to visit and you want to get them a version of the map in Spanish. But on the Spanish maps, the shape of the tracks is different and you can't read Spanish. What's more, not all the maps are of the London Underground! What do you do? Well, given a Spanish map and your English map, you can try to match up the stations (through trial and error) and if the stations are all *connected to each other in the same ways* on both maps then you know they are both of the same train system. \n\nMore precisely, consider the following smaller (fictional) example. 
There are stations *Trafalgal*, *Marybone*, *Oxbridge*, *Eastweston* and *Charlesburrough* on the English map. *Marybone* is connected to all the other stations, and *Charlesburrough* is connected to *Eastweston*. (There are no other connections)\n\n![Fictional London Underground Map](http://i.imgur.com/J3p094x.png)\n\n%note: We don't want to worry about whether stations are connected to themselves. You can just assume no station is ever connected to itself.%\n\n%note: If one station is connected to another, then the second station is also connected to the first. So since *Marybone* is connected to *Eastweston* then *Eastweston* is connected to *Marybone*. If it seems silly to even mention this fact, then don't worry to much. It's just that we might just have easily decided that there's a one-way train running from *Marybone* to *Eastweston* but not in the other direction.% \n\nAssume the Spanish map has the following stations: *Patata*, *Huesto*, *Carbon*, *Esteoeste*, and *Puente de Buey*. Assume also that *Huesto* is connected to every other station, and that *Carbon* is connected to *Esteoeste*. (Again, there are no other connections).\n\n![Fictional Spanish Map](http://i.imgur.com/nENfDqk.png)\n\nThen the two maps are essentially the same for your purposes. They are **isomorphic** (as [graphs](https://arbital.com/p/-graph), in fact), and the way that you matched the stations on the one map with those on the other is an **isomorphism**.\n\n## Isomorphisms##\nImagine that you have a *London Underground Official Spanish-to-English Train Station Dictionary* that tells you how to translate the names of the train stations. So, for example, you can use it to convert *Patata* to *Trafalgal*. Then this dictionary is an **isomorphism** from the Spanish map to the English one. \n\nIn particular, the dictionary translates *Huesto* to *Marybone*, *Patata* to *Tralfalgal*, *Puento de Buey* to *Oxbridge*, *Esteoeste* to *Eastweston*, and *Carbon* to *Charlesburrough*.\n\nYou could also get a *London Underground Official* **English-to-Spanish** *Train Station Dictionary*. Then, if you were to use this dictionary to translate *Tralfalgal*, you'd get back *Patata*. Hence your first original translation of that station from English to Spanish has been undone.\n\nIn particular, the dictionary translates *Marybone* to *Huesto*, *Tralfalgal* to *Patata*. *Oxbridge* to *Puento de Buey*, *Eastweston* to *Esteoeste*, and *Charlesburrough* to *Carbon*.\n\n![English Stations Paired Up with Spanish Stations](http://i.imgur.com/sElv72C.png)\n\n\nIn fact, if you take the English map and translate all of the stations into Spanish using the one dictionary and then translate back, you'd get back to where you started. Similarly, if you translated the Spanish map from Spanish into English with the one dictionary and back to Spanish with the other you'd get the Spanish map back. Hence both of these dictionaries are complete [inverses](https://arbital.com/p/-inverse) of each other.\n\n\nIn fact, in [category theory](https://arbital.com/p/-4c7), this is exactly the definition of an isomorphism: if you have some translation ([https://arbital.com/p/-4d8](https://arbital.com/p/-4d8)) such that you can find a backwards translation (morphism in the opposite direction), and using the one translation after the other is for all intents and purposed the same as not having translated anything at all (i.e., no important information is lost in translation), then the original translation is an **isomorphism**. 
(In fact, both of them are isomorphisms).\n\nWhat if you had a different pair of dictionaries. \n\nIn particular, what if the Spanish-to-English dictionary translates *Huesto* to *Marybone*, *Puento de Buey* to *Tralfalgal*, *Patata* to *Oxbridge*, *Carbon* to *Eastweston*, and *Esteoeste* to *Charlesburrough*?\n\nThen if we translate from Spanish into English, and then translate back with the original English-to-Spanish dictionary, then *Patata* is first translated to *Oxbridge*, but then it is translated back into *Puente de Buey*. So it does not reverse the translation. Hence this is not an isomorphism.\n\nHowever, what if the English-to-Spanish Dictionary translates *Marybone* to *Huesto*, *Oxbridge* to *Patata*. *Eastweston* to *Puento de Buey*, *Tralfalgal* to *Esteoeste*, and *Charlesburrough* to *Carbon*? Then the translations are reverses of each other. Hence this is another isomorphism. There may be many isomorphisms between two isomorphic maps.\n\n![Another Way of Pairing English Stations with Spanish Stations](http://i.imgur.com/X6A6oNb.png)\n\n##Non-Isomorphic Maps##\nImagine you had the English map from above. As a reminder the stations were:\n*Marybone*, *Eastweston*, *Charlesburrough*, *Tralfalgal*, and *Oxbridge*.\n\nIf, now, you find a Spanish map with only four stations on it, it can't possibly be isomorphic to your English map; there would be some station appearing on the English map which isn't named on the Spanish one. \n\n![Spanish Map with Only Four Stations When English Map has Five](http://i.imgur.com/BkVZBWB.png)\n\nSimilarly, if the Spanish map has six stations, then they aren't isomorphic either since there is an extra station on the Spanish map not appearing on the English one.\n\n![Spanish Map with Six Stations When English Map has Five](http://i.imgur.com/EYYmzWi.png)\n\nWhat if there are five stations on the Spanish map. Is it then definitely isomorphic to the English one?\n\nRecall that:\n*Marybone* is connected to every other station, and *Eastweston* is connected to *Charlesburrough*.\n\nBut what if now instead on the Spanish map, *Huesto* is still connected to everything, but nothing else is connected to anything else. \n\n![Huesto is Connected to Everything With No Other Connections. English Map Still Has the Same Connections](http://i.imgur.com/4mQD0hK.png)\n\nThen the translation taking *Marybone* to *Huesto*, *Tralfalgal* to *Patata*. *Oxbridge* to *Puento de Buey*, *Eastweston* to *Esteoeste*, and *Charlesburrough* to *Carbon* is not an isomorphism. *Eastweston* is connected to *Charlesburrough* on the English map, but the corresponding stations on the Spanish map, *Esteoeste* and *Carbon*, are not connected to each other. Hence this translation is not an isomorphism, since under these translations, the maps represent different things.\n\nBut even though this way of pairing up the stations isn't an isomorphism, maybe there is another way of pairing them up which is? But no, even this is doomed to failure because the *number of connections* in both cases is different. For example, *Eastweston* is connected to two stations, but no station on the Spanish map is connected to two other stations. Hence there cannot be any translation that works. No isomorphism exists between the two maps.\n\nIf no isomorphism exists between two structures, then they are **non-isomorphic**.\n\nNotice, in fact, that the two maps do not have the same total number of connections. There are five connections on the English map, but only four on the Spanish map. 
Hence they cannot be isomorphic. \n\nWhat if the Spanish map has the following connections? *Patata* to everything except *Carbon*, and *Carbon* to *Huesto*and *Esteoeste*. Then there still cannot be an isomorphism, since, again, *Marybone* is connected to four other stations, but nothing on the Spanish map is connected to four stations. In this case, both maps have five connections. Hence even if both maps have the same total number of connections, they may still be non-isomorphic.\n\n![Both Maps Have Five Connections but Are Non-Isomorphic](http://i.imgur.com/BKXLPqv.png)\n\n\n##Comparing Weights##\nNot all isomorphisms need be mappings between structures. Consider if you work at the post-office and must weigh packages. You do not care about the size and shape of the packages, only their weight. Then you consider two packages isomorphic if their weights are equal. \n\nImagine, then, that you have two packages, say one containing a book %note: *The Official London Underground History of Train Stations*% and the other is a plastic crate %note: \"To the Count, with love\"%. \n\nYou also have a half-broken pair of brass scales: they have a pair of pans on which items can be placed. \n\n![Empty Balanced Scales](http://i.imgur.com/2wuFlTO.png)\n\n\n\nHowever, they can only tip to the left or remain flat.\n\n![Empty Scales Tilting Left](http://i.imgur.com/oz3m8vF.png)\n\n\nIf the item on the left is heavier than the one on the right, then the scales tilt left.\n\n![Scales With Heavy Object on Left, Scales Tilt Left](http://i.imgur.com/iSmYESb.png)\n\nOtherwise, if they are of equal weight or the item on the left is lighter than the one on the right, the scales remain level.\n\n![Scales With Same Object on Both Sides, Scales Flat](http://i.imgur.com/NZe1uQC.png)\n\n\n![Scales With Heavy Object on Right, Scales Flat](http://i.imgur.com/0BqznlG.png)\n\nPlace the book on the left pan of the scale, and the crate on the right. If the scales balance then either the book is lighter than the crate or it is the same weight as the crate. Now swap them. If they remain level, then either the crate is lighter than the book or it is the same weight as the book. Since the book cannot be lighter than the crate whilst the crate is simultaneously lighter than the book, they must be the same weight. Hence they are isomorphic.\n\nThis very act of balancing the scales is an isomorphism. It has an inverse: just swap the two packages around! Start with the book on the left pan and the crate on the right. Then place the crate on the left pan and the book on the right. The fact that the scales balance both times tells you (the obvious fact) that the book weighs the same as itself. Since you already know this, doing this actually tells you as much about the book's weight compared to itself as doing nothing at all.\n\nIf this last part seems silly or confusing, don't worry too much about it. It's just to illustrate how the idea of an isomorphism is intricately tied with having an inverse.\n\n##An Isomorphism Joke##\nA man walks into a bar. He is surprised at how the patrons are acting. One of them says a number, like “forty-two”, and the rest break into laughter. He asks the bartender what’s going on. The bartender explains that they all come here so often that they’ve memorized all of each other’s jokes, and instead of telling them explicitly, they just give each a number, say the number, and laugh appropriately. The man is intrigued, so he shouts “Two thousand!”. 
He is shocked to find everyone laughs uproariously, the loudest he's heard that evening. Perplexed, he turns to the bartender and says “They laughed so much more at mine than at any of the others.\" \"Well of course,\" the bartender answers matter-of-factly, \"they've never heard that one before!”", "date_published": "2016-07-13T19:32:29Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Mark Chimes", "Team Arbital"], "summaries": [], "tags": ["Math 0", "Bijective function", "B-Class"], "alias": "4hj"} {"id": "29d521d5b4ec1bae4cb014c60592c179", "title": "Sign homomorphism (from the symmetric group)", "url": "https://arbital.com/p/sign_homomorphism_symmetric_group", "source": "arbital", "source_type": "text", "text": "The sign [homomorphism](https://arbital.com/p/47t) is given by sending a permutation $\\sigma$ in the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$ to $0$ if we can make $\\sigma$ by multiplying together an even number of [transpositions](https://arbital.com/p/4cn), and to $1$ otherwise.\n\n%%%knows-requisite([https://arbital.com/p/modular_arithmetic](https://arbital.com/p/modular_arithmetic)):\nEquivalently, it is given by sending $\\sigma$ to the number of transpositions making it up, [modulo](https://arbital.com/p/modular_arithmetic) $2$.\n%%%\n\nThe sign homomorphism [is well-defined](https://arbital.com/p/4hh).\n\n%%%knows-requisite([https://arbital.com/p/quotient_group](https://arbital.com/p/quotient_group)):\nThe [https://arbital.com/p/-alternating_group](https://arbital.com/p/-alternating_group) is obtained by taking the [quotient](https://arbital.com/p/quotient_group) of the symmetric group by the sign homomorphism.\n%%%", "date_published": "2016-06-17T12:13:38Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Definition"], "alias": "4hk"} {"id": "6c4b49c1b8fd6dcf6bd93e6e571973c5", "title": "Alternating group is generated by its three-cycles", "url": "https://arbital.com/p/alternating_group_generated_by_three_cycles", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_n$ is generated by its $3$-[cycles](https://arbital.com/p/49f).\nThat is, every element of $A_n$ can be made by multiplying together $3$-cycles only.\n\n# Proof\n\nThe product of two [transpositions](https://arbital.com/p/4cn) is a product of $3$-cycles: \n\n- $(ij)(kl) = (ijk)(jkl)$\n- $(ij)(jk) = (ijk)$\n- $(ij)(ij) = e$.\n\nTherefore any permutation which is a product of evenly-many transpositions (that is, all of $A_n$) is a product of $3$-cycles, because we can group up successive pairs of transpositions.\n\nConversely, every $3$-cycle is in $A_n$ because $(ijk) = (ij)(jk)$.", "date_published": "2016-06-17T13:23:26Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4hs"} {"id": "b711a3a66acd43d12512e3c908642c1a", "title": "Group coset", "url": "https://arbital.com/p/group_coset", "source": "arbital", "source_type": "text", "text": "Given a subgroup $H$ of [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$, the *left cosets* of $H$ in $G$ are sets of the form $\\{ gh : h \\in H \\}$, for some $g \\in G$.\nThis is written $gH$ as a shorthand.\n\nSimilarly, the *right cosets* are the sets of the form $Hg = \\{ hg: h \\in H \\}$.\n\n# Examples\n%%%knows-requisite([https://arbital.com/p/497](https://arbital.com/p/497)):\n## Symmetric group\n\nIn $S_3$, the [https://arbital.com/p/-497](https://arbital.com/p/-497) on three elements, we can list the elements as $\\{ e, (123), (132), 
(12), (13), (23) \\}$, using [cycle notation](https://arbital.com/p/49f).\nDefine $A_3$ (which happens to have a name: the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf)) to be the subgroup with elements $\\{ e, (123), (132) \\}$.\n\nThen the coset $(12) A_3$ has elements $\\{ (12), (12)(123), (12)(132) \\}$, which is simplified to $\\{ (12), (23), (13) \\}$.\n\nThe coset $(123)A_3$ is simply $A_3$, because $A_3$ is a subgroup so is closed under the group operation. $(123)$ is already in $A_3$.\n%%%\n\n\n\n# Properties\n\n- The left cosets of $H$ in $G$ [partition](https://arbital.com/p/set_partition) $G$. ([Proof.](https://arbital.com/p/4j5))\n- For any pair of left cosets of $H$, there is a [bijection](https://arbital.com/p/499) between them; that is, all the cosets are all the same size. ([Proof.](https://arbital.com/p/4j8))\n\n# Why are we interested in cosets?\n\nUnder certain conditions (namely that the subgroup $H$ must be [normal](https://arbital.com/p/4h6)), we may define the [https://arbital.com/p/-quotient_group](https://arbital.com/p/-quotient_group), a very important concept; see the page on [\"left cosets partition the parent group\"](https://arbital.com/p/4j5) for a glance at why this is useful.\n\n\nAdditionally, there is a key theorem whose usual proof considers cosets ([Lagrange's theorem](https://arbital.com/p/lagrange_theorem_on_subgroup_size)) which strongly restricts the possible sizes of subgroups of $G$, and which itself is enough to classify all the groups of [order](https://arbital.com/p/3gg) $p$ for $p$ [prime](https://arbital.com/p/prime_number).\nLagrange's theorem also has very common applications in [https://arbital.com/p/-number_theory](https://arbital.com/p/-number_theory), in the form of the [Fermat-Euler theorem](https://arbital.com/p/fermat_euler_theorem).", "date_published": "2016-06-17T15:58:15Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Formal definition", "Needs clickbait"], "alias": "4j4"} {"id": "9c027a495b263865572d7eca86062f80", "title": "Left cosets partition the parent group", "url": "https://arbital.com/p/left_cosets_partition_parent_group", "source": "arbital", "source_type": "text", "text": "Given a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$ and a subgroup $H$, the [left cosets](https://arbital.com/p/4j4) of $H$ in $G$ [partition](https://arbital.com/p/set_partition) $G$, in the sense that every element of $g$ is in precisely one coset.\n\n# Proof\nFirstly, every element is in a coset: since $g \\in gH$ for any $g$.\nSo we must show that no element is in more than one coset.\n\nSuppose $c$ is in both $aH$ and $bH$.\nThen we claim that $aH = cH = bH$, so in fact the two cosets $aH$ and $bH$ were the same.\nIndeed, $c \\in aH$, so there is $k \\in H$ such that $c = ak$.\nTherefore $cH = \\{ ch : h \\in H \\} = \\{ akh : h \\in H \\}$.\n\nExercise: $\\{ akh : h \\in H \\} = \\{ ar : r \\in H \\}$.\n%%hidden(Show solution):\nSuppose $akh$ is in the left-hand side.\nThen it is in the right-hand side immediately: letting $r=kh$.\n\nConversely, suppose $ar$ is in the right-hand side.\nThen we may write $r = k k^{-1} r$, so $a k k^{-1} r$ is in the right-hand side; but then $k^{-1} r$ is in $H$ so this is exactly an object which lies in the left-hand side.\n%%\n\nBut that is just $aH$.\n\nBy repeating the reasoning with $a$ and $b$ interchanged, we have $cH = bH$; this completes the proof.\n\n# Why is this interesting?\n\nThe fact that the left cosets partition the group means that we 
can, in some sense, \"compress\" the group $G$ with respect to $H$.\nIf we are only interested in $G$ \"up to\" $H$, we can deal with the partition rather than the individual elements, throwing away the information we're not interested in.\n\nThis concept is most importantly used in defining the [https://arbital.com/p/-4tq](https://arbital.com/p/-4tq).\nTo do this, the subgroup must be [normal](https://arbital.com/p/4h6) ([proof](https://arbital.com/p/4h9)).\nIn this case, the collection of cosets itself inherits a group structure from the parent group $G$, and the structure of the quotient group can often tell us a lot about the parent group.", "date_published": "2016-06-28T07:29:03Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4j5"} {"id": "47c9435ffeb999cf2c5f4eb6cc4f1c95", "title": "Church-Turing thesis: Evidence for the Church-Turing thesis", "url": "https://arbital.com/p/evidence_for_CT_thesis", "source": "arbital", "source_type": "text", "text": "As the Church-Turing thesis is not a [proper mathematical sentence](https://arbital.com/p/) we cannot prove it. However, we can collect [https://arbital.com/p/-3s](https://arbital.com/p/-3s) to increase our confidence in its correctness.\n\n#The inductive argument\nEvery computational model we have seen so far is [reducible](https://arbital.com/p/) to Turing's model.\n\nIndeed, the thesis was originally formulated independently by Church and Turing in reference to two different computational models ([https://arbital.com/p/Turing_machines](https://arbital.com/p/Turing_machines) and [https://arbital.com/p/Lambda_Calculus](https://arbital.com/p/Lambda_Calculus) respectively). When they were shown to be [equivalent](https://arbital.com/p/equivilance_relation) it was massive evidence in favor of both of them.\n\nA non-exhaustive list of models which can be shown to be reducible to Turing machines are:\n\n* [https://arbital.com/p/Lambda_calculus](https://arbital.com/p/Lambda_calculus)\n* [https://arbital.com/p/Quantum_computation](https://arbital.com/p/Quantum_computation)\n* [Non-deterministic_Turing_machines](https://arbital.com/p/Nondeterministic_Turing_machines)\n* [https://arbital.com/p/Register_machines](https://arbital.com/p/Register_machines)\n* The set of [https://arbital.com/p/-recursive_functions](https://arbital.com/p/-recursive_functions)\n\n#Lack of counterexamples\nPerhaps the strongest argument we have for the CT thesis is that there is not a widely accepted candidate to a counterexample of the thesis. This is unlike the [https://arbital.com/p/-4ct](https://arbital.com/p/-4ct), where quantum computation stands as a likely counterexample.\n\nOne may wonder whether the computational models which use a source of [randomness](https://arbital.com/p/) (such as quantum computation or [https://arbital.com/p/-probabilistic_Turing_machines](https://arbital.com/p/-probabilistic_Turing_machines)) are a proper counterexample to the thesis: after all, Turing machines are fully [https://arbital.com/p/-deterministic](https://arbital.com/p/-deterministic), so they cannot simulate randomness.\n\nTo properly explain this issue we have to recall what it means for a quantum computer or a probabilistic Turing machine to compute something: we say that such a device computes a function $f$ if for every input $x$ the probability of the device outputting $f(x)$ is greater or equal to some arbitrary constant greater than $1/2$. 
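The simulation argument given in the next paragraph can be sketched in code. This is only a minimal illustration, under the assumption that the randomized machine is handed to us as an ordinary function `machine(x, bits)` using at most `r` random bits; the names and the toy machine below are invented for the example, not part of any standard library.

```python
from collections import Counter
from itertools import product

def derandomize(machine, x, r):
    # Enumerate every possible string of r random bits, run the machine on
    # each one, and return the most common output.  If the machine outputs
    # f(x) with probability greater than 1/2, then f(x) is a strict majority
    # over the 2**r equally likely bit-strings, so this returns f(x).
    outcomes = Counter(machine(x, bits) for bits in product([0, 1], repeat=r))
    return outcomes.most_common(1)[0][0]

# A toy randomized machine: it reports the parity of x, but lies whenever
# all of its random bits come up 1 (probability 1/2**r; 1/8 for r = 3 below).
def noisy_parity(x, bits):
    return (x % 2) ^ (1 if all(bits) else 0)

print(derandomize(noisy_parity, 7, 3))   # 1, the true parity of 7
```

Enumerating all $2^r$ branches is of course exponentially slow; the point here is computability, not efficiency.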
Thus we can compute $f$ in a classical machine, for there exists always the possibility of simulating every possible outcome of the randomness to deterministically compute the probability distribution of the output, and output as a result the possible outcome with greater probability.\n\nThus, randomness is reducible to Turing machines, and the CT thesis holds.", "date_published": "2016-06-20T17:59:25Z", "authors": ["Eric Bruylant", "Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "4j7"} {"id": "400da54ae4510760f5a636e14bd71598", "title": "Left cosets are all in bijection", "url": "https://arbital.com/p/left_cosets_biject", "source": "arbital", "source_type": "text", "text": "Let $H$ be a subgroup of $G$. \nThen for any two [left cosets](https://arbital.com/p/4j4) of $H$ in $G$, there is a [https://arbital.com/p/-499](https://arbital.com/p/-499) between the two cosets.\n\n# Proof\n\nLet $aH, bH$ be two cosets.\nDefine the function $f: aH \\to bH$ by $x \\mapsto b a^{-1} x$.\n\nThis has the correct [codomain](https://arbital.com/p/3lg): if $x \\in aH$ (so $x = ah$, say), then $ba^{-1} a x = bx$ so $f(x) \\in bH$.\n\nThe function is [injective](https://arbital.com/p/4b7): if $b a^{-1} x = b a^{-1} y$ then (pre-multiplying both sides by $a b^{-1}$) we obtain $x = y$.\n\nThe function is [surjective](https://arbital.com/p/4bg): given $b h \\in b H $, we want to find $x \\in aH$ such that $f(x) = bh$.\nLet $x = a h$ to obtain $f(x) = b a^{-1} a h = b h$, as required.", "date_published": "2016-06-17T16:00:59Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4j8"} {"id": "62247dccd97eb7668f9dc4d057d189cf", "title": "The alternating groups on more than four letters are simple", "url": "https://arbital.com/p/alternating_group_is_simple", "source": "arbital", "source_type": "text", "text": "Let $n > 4$ be a [https://arbital.com/p/-45h](https://arbital.com/p/-45h). Then the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_n$ on $n$ elements is [simple](https://arbital.com/p/simple_group).\n\n# Proof\n\nWe go by [induction](https://arbital.com/p/mathematical_induction) on $n$.\nThe base case of $n=5$ [we treat separately](https://arbital.com/p/4jf).\n\nFor the inductive step: let $n \\geq 6$, and suppose $H$ is a nontrivial [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) of $A_n$.\nUltimately we aim to show that $H$ contains the $3$-[cycle](https://arbital.com/p/49f) $(123)$; since $H$ [is a union of conjugacy classes](https://arbital.com/p/subgroup_normal_iff_union_of_conjugacy_classes), and since the $3$-cycles [form a conjugacy class](https://arbital.com/p/splitting_conjugacy_classes_in_alternating_group) in $A_n$, this means every $3$-cycle is in $H$ and [hence](https://arbital.com/p/4hs) $H = A_n$.\n\n## Lemma: $H$ contains a member of $A_{n-1}$\nTo start, we will show that at least $H$ contains *something* from $A_{n-1}$ (which we view as all the elements of $A_n$ which don't change the letter $n$). 
%%note:Recall that $A_n$ [acts](https://arbital.com/p/3t9) naturally on the set of \"letters\" $\\{1,2,\\dots, n \\}$ by permutation.%%\n(This approach has to be useful for an induction proof, because we need some way of introducing the simplicity of $A_{n-1}$.)\nThat is, we will show that there is some $\\sigma \\in H$ with $\\sigma \\not = e$, such that $\\sigma(n) = n$.\n\nLet $\\sigma \\in H$, where $\\sigma$ is not the identity.\n$\\sigma$ certainly sends $n$ somewhere; say $\\sigma(n) = i$, where $i \\not = n$ (since if it is, we're done immediately).\n\nThen if we can find some $\\sigma' \\in H$, not equal to $\\sigma$, such that $\\sigma'(n) = i$, we are done: $\\sigma^{-1} \\sigma'(n) = n$.\n\n$\\sigma$ must move something other than $i$ and $n$ (that is, there must be $j$ with $j \\not = i$ and $j \\not = n$ such that $\\sigma(j) \\not = j$): if it did not, then $\\sigma$ would fix every letter other than $n$ and $i$, so it would be the transposition $(n i)$, which is not in $A_n$ because it is an odd number of transpositions.\nHence we have $j$ with $\\sigma(j) \\not = j$, $j \\not = i$ and $j \\not = n$.\n\nNow, since $n \\geq 6$, we can pick $x, y$ distinct from $n, i, j, \\sigma(j)$.\nThen set $\\sigma' = (jxy) \\sigma (jxy)^{-1}$, which must lie in $H$ because $H$ is closed under [conjugation](https://arbital.com/p/4gk).\n\nThen $\\sigma'(n) = i$; and $\\sigma' \\not = \\sigma$, because $\\sigma'(j) = \\sigma(y)$ which is not equal to $\\sigma(j)$ (since $y \\not = j$).\nHence $\\sigma'$ and $\\sigma$ have different effects on $j$ so they are not equal.\n\n## Lemma: $H$ contains all of $A_{n-1}$\nWe have now shown that $H$ contains some member of $A_{n-1}$.\nMoreover, $H \\cap A_{n-1}$ is normal in $A_{n-1}$, because $H$ is normal in $A_n$ and it is [the intersection of a subgroup with a normal subgroup](https://arbital.com/p/4h6).\nTherefore by the inductive hypothesis, $H \\cap A_{n-1}$ is either the trivial group or is $A_{n-1}$ itself.\n\nBut $H \\cap A_{n-1}$ is certainly not trivial, because our previous lemma gave us a non-identity element in it; so $H$ must actually contain $A_{n-1}$.\n\n## Conclusion\n\nFinally, $H$ contains $A_{n-1}$ so it contains $(123)$ in particular; so we are done by the initial discussion.\n\n# Behaviour for $n \\leq 4$\n- $A_1$ is the trivial group so is vacuously not simple.\n- $A_2$ is also the trivial group.\n- $A_3$ is isomorphic to the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_3$ of order three, so it is simple: it has no nontrivial proper subgroups, let alone normal ones.\n- $A_4$ has the following normal subgroup (the [Klein four-group](https://arbital.com/p/klein_four_group)): $\\{ e, (12)(34), (13)(24), (14)(23) \\}$.
Therefore $A_4$ is not simple.", "date_published": "2016-06-17T17:19:24Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Stub"], "alias": "4jb"} {"id": "6b3025e57bbfcf556b8f9dec4acbb8bd", "title": "Simple group", "url": "https://arbital.com/p/simple_group", "source": "arbital", "source_type": "text", "text": "A simple group is a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) with no [normal subgroups](https://arbital.com/p/4h6).", "date_published": "2016-06-17T16:06:49Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Definition", "Stub"], "alias": "4jc"} {"id": "c2a6aa2a6edda4cead06ce5df0b58b3c", "title": "The alternating group on five elements is simple", "url": "https://arbital.com/p/alternating_group_five_is_simple", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_5$ on five elements is [simple](https://arbital.com/p/4jc).\n\n# Proof\n\nRecall that $A_5$ has [order](https://arbital.com/p/3gg) $60$, so [Lagrange's theorem](https://arbital.com/p/4jn) states that any subgroup of $A_5$ has order dividing $60$.\n\nSuppose $H$ is a normal subgroup of $A_5$, which is not the trivial subgroup $\\{ e \\}$.\nIf $H$ has order divisible by $3$, then by [Cauchy's theorem](https://arbital.com/p/4l6) there is a $3$-[cycle](https://arbital.com/p/49f) in $H$ (because the $3$-cycles are the only elements with order $3$ in $A_5$).\nBecause $H$ [is a union of conjugacy classes](https://arbital.com/p/4jw), and because the $3$-cycles [form a conjugacy class in $A_n$ for $n > 4$](https://arbital.com/p/4kv), $H$ would therefore contain *every* $3$-cycle; but then [it would be the entire alternating group](https://arbital.com/p/4hs).\n\nIf instead $H$ has order divisible by $2$, then there is a double transposition such as $(12)(34)$ in $H$, since these are the only elements of order $2$ in $A_5$.\nBut then $H$ contains the entire conjugacy class so it contains every double transposition; in particular, it contains $(12)(34)$ and $(15)(34)$, so it contains $(15)(34)(12)(34) = (125)$.\nHence as before $H$ contains every $3$-cycle so is the entire alternating group.\n\nSo $H$ must have order exactly $5$, by [Lagrange's theorem](https://arbital.com/p/4jn); so it contains an element of order $5$ since [prime order groups are cyclic](https://arbital.com/p/4jh).\n\nThe only such elements of $A_n$ are $5$-cycles; but the conjugacy class of a $5$-cycle is of size $12$, which is too big to fit in $H$ which has size $5$.", "date_published": "2016-06-28T06:23:28Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4jf"} {"id": "39903915e64ca91d97dfb7ceaac5e34a", "title": "Prime order groups are cyclic", "url": "https://arbital.com/p/prime_order_group_is_cyclic", "source": "arbital", "source_type": "text", "text": "Let $G$ be a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) whose [order](https://arbital.com/p/3gg) is equal to $p$, a [https://arbital.com/p/-4mf](https://arbital.com/p/-4mf).\nThen $G$ is [isomorphic](https://arbital.com/p/49x) to the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_p$ of order $p$.\n\n# Proof\nPick any non-identity element $g$ of the group.\n\nBy [Lagrange's theorem](https://arbital.com/p/4jn), the subgroup generated by $g$ has size $1$ or $p$ (since $p$ was prime).\nBut it can't be $1$ because the only subgroup of size $1$ is the trivial subgroup.\n\nHence the subgroup must be the entire group.", "date_published": 
"2016-06-20T06:47:24Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4jh"} {"id": "d4ebd0c5e0785748f256157c445ae952", "title": "Lagrange theorem on subgroup size", "url": "https://arbital.com/p/lagrange_theorem_on_subgroup_size", "source": "arbital", "source_type": "text", "text": "Lagrange's Theorem states that if $G$ is a finite [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) and $H$ a subgroup, then the [order](https://arbital.com/p/3gg) $|H|$ of $H$ divides the order $|G|$ of $G$.\nIt generalises to infinite groups: the statement then becomes that the [left cosets](https://arbital.com/p/4j4) form a [partition](https://arbital.com/p/set_partition), and for any pair of cosets, there is a [bijection](https://arbital.com/p/499) between them.\n\n# Proof\n\nIn full generality, the cosets [form a partition](https://arbital.com/p/4j5) and [are all in bijection](https://arbital.com/p/4j8).\n\nTo specialise this to the finite case, we have divided the $|G|$ elements of $G$ into buckets of size $|H|$ (namely, the cosets), so $|G|/|H|$ must in particular be an integer.", "date_published": "2016-06-17T19:24:51Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4jn"} {"id": "0b6b67d54cd2b7e3819ba2c870fea6b8", "title": "The alternating group on five elements is simple: Simpler proof", "url": "https://arbital.com/p/simplicity_of_alternating_group_five_simpler_proof", "source": "arbital", "source_type": "text", "text": "If there is a non-trivial [https://arbital.com/p/-4h6](https://arbital.com/p/-4h6) $H$ of the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_5$ on five elements, then [it is a union of conjugacy classes](https://arbital.com/p/4jw).\nAdditionally, by [Lagrange's theorem](https://arbital.com/p/4jn), the [order](https://arbital.com/p/3gg) of a subgroup must divide the order of the group, so the total size of $H$ must divide $60$.\n\nWe can list the [conjugacy classes of $A_5$](https://arbital.com/p/alternating_group_five_conjugacy_classes); they are of size $1, 20, 15, 12, 12$ respectively.\n\nBy a brute-force check, no sum of these containing $1$ can possibly divide $60$ (which is the size of $A_5$) unless it is $1$ or $60$.\n\n# The brute-force check\n\nWe first list the [divisors](https://arbital.com/p/number_theory_divisor) of $60$: they are $$1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60$$\n\nSince the subgroup $H$ must contain the identity, it must contain the conjugacy class of size $1$.\nIf it contains any other conjugacy class (which, as it is a non-trivial subgroup by assumption, it must), then the total size must be at least $13$ (since the smallest other class is of size $12$); so it is allowed to be one of $15$, $20$, $30$, or $60$.\nSince $H$ is also assumed to be a *proper* subgroup, it cannot be $A_5$ itself, so in fact $60$ is also banned.\n\n## The class of size $20$\nIf $H$ then contains the conjugacy class of size $20$, then $H$ can only be of size $30$ because we have already included $21$ elements.\nBut there is no way to add just $9$ elements using conjugacy classes of size bigger than or equal to $12$.\n\nSo $H$ cannot contain the class of size $20$.\n\n## The class of size $15$\n\nIn this case, $H$ is allowed to be of size $20$ or $30$, and we have already found $16$ elements of it. 
So there are either $4$ or $14$ elements left to find; but we are only allowed to add classes of size exactly $12$, so this can't be done either.\n\n## The classes of size $12$\n\nWhat remains is two classes of size $12$, from which we can make $1+12 = 13$ or $1+12+12 = 25$.\nNeither of these divides $60$, so these are not legal options either.\n\nThis exhausts the search, and completes the proof.", "date_published": "2016-06-17T19:38:57Z", "authors": ["Team Arbital", "Patrick Stevens"], "summaries": [], "tags": [], "alias": "4jt"} {"id": "c7ff8f308bb6a4e840e5c5801f5d4d70", "title": "Subgroup is normal if and only if it is a union of conjugacy classes", "url": "https://arbital.com/p/subgroup_normal_iff_union_of_conjugacy_classes", "source": "arbital", "source_type": "text", "text": "Let $H$ be a subgroup of the [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$.\nThen $H$ is [normal](https://arbital.com/p/4h6) in $G$ if and only if it can be expressed as a [union](https://arbital.com/p/set_union) of [conjugacy classes](https://arbital.com/p/4bj).\n\n# Proof\n\n$H$ is normal in $G$ if and only if $gHg^{-1} = H$ for all $g \\in G$; equivalently, if and only if $ghg^{-1} \\in H$ for all $h \\in H$ and $g \\in G$.\n\nBut if we fix $h \\in H$, then the statement that $ghg^{-1} \\in H$ for all $g \\in G$ is equivalent to insisting that the conjugacy class of $h$ is contained in $H$.\nTherefore $H$ is normal in $G$ if and only if, for all $h \\in H$, the conjugacy class of $h$ lies in $H$.\n\nIf $H$ is normal, then it is clearly a union of conjugacy classes (namely $\\cup_{h \\in H} C_h$, where $C_h$ is the conjugacy class of $h$).\n\nConversely, if $H$ is not normal, then there is some $h \\in H$ such that the conjugacy class of $h$ is not wholly in $H$; so $H$ is not a union of conjugacy classes because it contains $h$ but not the entire conjugacy class of $h$.\n(Here we have used that the [https://arbital.com/p/-conjugacy_classes_partition_the_group](https://arbital.com/p/-conjugacy_classes_partition_the_group).)\n\n# Interpretation\n\nA normal subgroup is one which is fixed under conjugation; the most natural (and, indeed, the smallest) objects which are fixed under conjugation are conjugacy classes; so this criterion tells us that to obtain a *subgroup* which is fixed under conjugation, it is necessary and sufficient to assemble these objects (the conjugacy classes), which are themselves the smallest objects which are fixed under conjugation, into a group.", "date_published": "2016-06-17T19:37:34Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4jw"} {"id": "9c027442545253c9d0e985eaf53a04dc", "title": "Conjugacy classes of the alternating group on five elements", "url": "https://arbital.com/p/alternating_group_five_conjugacy_classes", "source": "arbital", "source_type": "text", "text": "This page lists the [conjugacy classes](https://arbital.com/p/4bj) of the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_5$ on five elements.\nSee a [different lens](https://arbital.com/p/4l0) for a derivation of this result using less theory.\n\n$A_5$ has size $5!/2 = 60$, where the exclamation mark denotes the [https://arbital.com/p/-factorial](https://arbital.com/p/-factorial) function.\nWe will assume access to [the conjugacy class table of $S_5$](https://arbital.com/p/4bk) the [https://arbital.com/p/-497](https://arbital.com/p/-497) on five elements; $A_5$ is a [quotient](https://arbital.com/p/quotient_group) of $S_5$ by the [sign 
homomorphism](https://arbital.com/p/4hk).\n\nWe have that a conjugacy class splits if and only if its [cycle type](https://arbital.com/p/49f) is all odd, all distinct. ([Proof.](https://arbital.com/p/4kv))\nThis makes the classification of conjugacy classes very easy.\n\n# The table\n\nWe must remove all the lines of [$S_5$'s table](https://arbital.com/p/4bk) which correspond to odd permutations (that is, those which are the product of odd-many [transpositions](https://arbital.com/p/4cn)). Indeed, those lines are classes which are not even in $A_5$.\n\nWe are left with cycle types $(5)$, $(3, 1, 1)$, $(2, 2, 1)$, $(1,1,1,1,1)$.\nOnly the $(5)$ cycle type can split into two, by the splitting condition.\nIt splits into the class containing $(12345)$ and the class which is $(12345)$ conjugated by odd permutations in $S_5$.\nA representative for that latter class is $(12)(12345)(12)^{-1} = (21345)$.\n\n$$\\begin{array}{|c|c|c|c|}\n\\hline\n\\text{Representative}& \\text{Size of class} & \\text{Cycle type} & \\text{Order of element} \\\\ \\hline\n(12345) & 12 & 5 & 5 \\\\ \\hline\n(21345) & 12 & 5 & 5 \\\\ \\hline\n(123) & 20 & 3,1,1 & 3 \\\\ \\hline\n(12)(34) & 15 & 2,2,1 & 2 \\\\ \\hline\ne & 1 & 1,1,1,1,1 & 1 \\\\ \\hline\n\\end{array}$$", "date_published": "2016-06-18T13:41:33Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4kr"} {"id": "d15fbc97003683db1283bda74765530a", "title": "Splitting conjugacy classes in alternating group", "url": "https://arbital.com/p/splitting_conjugacy_classes_in_alternating_group", "source": "arbital", "source_type": "text", "text": "Recall that in the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_n$, the notion of \"[https://arbital.com/p/-4bj](https://arbital.com/p/-4bj)\" coincides with that of \"has the same [cycle type](https://arbital.com/p/4cg)\" ([proof](https://arbital.com/p/4bh)).\nIt turns out to be the case that when we descend to the [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_n$, the conjugacy classes nearly all remain the same; the exception is those classes in $S_n$ which have cycle type all odd and all distinct.\nThose classes split into exactly two classes in $A_n$.\n\n# Proof\n\n## Conjugate implies same cycle type\n\nThis direction of the proof is identical to [the same direction](https://arbital.com/p/4bh) in the proof of the corresponding result on symmetric groups.\n\n## Converse: splitting condition\n\nRecall before we start that an even-length cycle can only be written as the product of an *odd* number of transpositions, and vice versa.\n\nThe question is: is every $\\tau \\in A_n$ with the same cycle type as $\\sigma \\in A_n$ conjugate (in $A_n)$ to $\\sigma$?\nIf so, then the conjugacy class in $A_n$ of $\\sigma$ is just the same as that of $\\sigma$ in $S_n$; if not, then the conjugacy class in $S_n$ must break into two pieces in $A_n$, namely $\\{ \\rho \\sigma \\rho^{-1} : \\rho \\ \\text{even} \\}$ and $\\{ \\rho \\sigma \\rho^{-1} : \\rho \\ \\text{odd} \\}$.\n(Certainly these are conjugacy classes: they only contain even permutations so they are subsets of $A_n$, while everything in either class is conjugate in $A_n$ because the definition only depends on the [parity](https://arbital.com/p/even_odd_parity) of $\\rho$.)\n\nLet $\\sigma = c_1 \\dots c_k$, and $\\tau = c_1' \\dots c_k'$ of the same cycle type: $c_i = (a_{i1} \\dots a_{i r_i})$, $c_i' = (b_{i1} \\dots b_{i r_i})$.\n\nDefine $\\rho$ to be the permutation which takes $a_{ij}$ to $b_{ij}$, but otherwise does not 
move any other letter.\nThen $\\rho \\sigma \\rho^{-1} = \\tau$, so if $\\rho$ lies in $A_n$ then we are done: $\\sigma$ and $\\tau$ are conjugate in $A_n$.\n(Since $\\rho^{-1}$ has the same parity as $\\rho$, we may freely regard $\\rho$ as conjugating $\\sigma$ to $\\tau$ or $\\tau$ to $\\sigma$, whichever is more convenient.)\n\nThat is, we may assume without loss of generality that $\\rho$ is odd.\n\n### If any of the cycles is even in length\n\nSuppose without loss of generality that $r_1$, the length of the first cycle, is even.\nThen we can rewrite $c_1' = (b_{11} \\dots b_{1r_1})$ as $(b_{12} b_{13} \\dots b_{1 r_1} b_{11})$, which is the same cycle expressed slightly differently.\n\nNow $c_1'$ is even in length, so it is an odd permutation (being a product of an odd number of transpositions, $(b_{1 r_1} b_{11}) (b_{1 (r_1-1)} b_{11}) \\dots (b_{13} b_{11})(b_{12} b_{11})$).\nHence $\\rho c_1'$ is an even permutation.\n\nBut conjugating $\\tau$ by $\\rho c_1'$ yields $\\sigma$:\n$$\\sigma = \\rho \\tau \\rho^{-1} = \\rho c_1' (c_1'^{-1} \\tau c_1') c_1'^{-1} \\rho^{-1}$$\nwhich is the result of conjugating $c_1'^{-1} \\tau c_1' = \\tau$ by $\\rho c_1'$.\n\n(It is the case that $c_1'^{-1} \\tau c_1' = \\tau$, because $c_1'$ commutes with every cycle of $\\tau$: it is disjoint from $c_2', \\dots, c_k'$, and it certainly commutes with itself. So the expression is $c_1'^{-1} c_1' c_1' c_2' \\dots c_k'$, where we have pulled the final $c_1'$ through to the beginning of the expression $\\tau = c_1' c_2' \\dots c_k'$.)\n\n%%hidden(Example):\nSuppose $\\sigma = (12)(3456), \\tau = (23)(1467)$.\n\nWe have that $\\tau$ is taken to $\\sigma$ by conjugating with $\\rho = (145732)$, which is an odd permutation (a $6$-cycle, hence a product of five transpositions) so isn't in $A_n$.\nBut we can rewrite $\\tau = (32)(1467)$, and the new permutation we'll conjugate with is $\\rho c_1' = (145732)(32) = (14573)$, where we have appended $c_1' = (23) = (32)$.\nIt is the case that $\\rho c_1'$ is an even permutation and hence is in $A_n$, because it is the result of multiplying the odd permutation $\\rho$ by the odd permutation $c_1'$.\n\nNow the conjugation is the composition of two conjugations: first by $(32)$, to yield $(32)\\tau(32)^{-1} = (23)(1467)$ (which is $\\tau$ still!), and then by $\\rho$.\nBut we constructed $\\rho$ so as to take $\\tau$ to $\\sigma$ on conjugation, so this works just as we needed.\n%%\n\n## If all the cycles are of odd length, but some length is repeated\nWithout loss of generality, suppose $r_1 = r_2$ (and label them both $r$), so the first two cycles are of the same length: say $\\sigma$'s version is $(a_1 a_2 \\dots a_r)(c_1 c_2 \\dots c_r)$, and $\\tau$'s version is $(b_1 b_2 \\dots b_r)(d_1 d_2 \\dots d_r)$.\n\nThen define $\\rho' = \\rho (b_1 d_1)(b_2 d_2) \\dots (b_r d_r)$.\nSince $r$ is odd and $\\rho$ is an odd permutation, $\\rho'$ is an even permutation.\n\nNow conjugating by $\\rho'$ is the same as first conjugating by $(b_1 d_1)(b_2 d_2) \\dots (b_r d_r)$ and then by $\\rho$.\n\nBut conjugating by $(b_1 d_1)(b_2 d_2) \\dots (b_r d_r)$ takes $\\tau$ to $\\tau$, because it has the effect of replacing all the $b_i$ by $d_i$ and all the $d_i$ by $b_i$, so it simply swaps the two cycles round.\n\nHence the conjugation of $\\tau$ by $\\rho'$ yields $\\sigma$, and $\\rho'$ is in $A_n$. 
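The relabelling rule used throughout this proof (conjugating by $\\rho$ replaces each entry $x$ of a cycle by $\\rho(x)$) is easy to check mechanically. Here is a minimal sketch (Python; not part of the original page, and the helper names are just for illustration):

```python
# Permutations on {1, ..., n} are stored as dicts mapping each point to its image.

def perm_from_cycles(cycles, n):
    """Build the permutation of {1, ..., n} given by a list of cycles."""
    p = {i: i for i in range(1, n + 1)}
    for cycle in cycles:
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            p[a] = b  # each entry maps to the next entry of its cycle
    return p

def conjugate(rho, sigma):
    """Return rho o sigma o rho^(-1)."""
    rho_inv = {image: point for point, image in rho.items()}
    return {x: rho[sigma[rho_inv[x]]] for x in rho}

# Check the relabelling rule on the 7-cycle from the Example section below:
# conjugating (1234567) by (12) gives (2134567).
n = 7
seven_cycle = perm_from_cycles([[1, 2, 3, 4, 5, 6, 7]], n)
swap = perm_from_cycles([[1, 2]], n)
expected = perm_from_cycles([[2, 1, 3, 4, 5, 6, 7]], n)
assert conjugate(swap, seven_cycle) == expected
```

The same two helpers can be used to verify each of the worked examples in this section.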
\n\n%%hidden(Example):\nSuppose $\\sigma = (123)(456), \\tau = (154)(237)$.\n\nThen conjugation of $\\tau$ by $\\rho = (146732)$, an odd permutation (a $6$-cycle), yields $\\sigma$.\n\nNow, if we first perform the conjugation $(12)(53)(47)$, we take $\\tau$ to itself, and then performing $\\rho$ yields $\\sigma$.\nThe combination of $\\rho$ and $(12)(53)(47)$ is an even permutation, so it does lie in $A_n$.\n%%\n\n## If all the cycles are of odd length, and they are all distinct\n\nIn this case, we are required to check that the conjugacy class *does* split.\nRemember, we started out by supposing $\\sigma$ and $\\tau$ have the same cycle type, and they are conjugate in $S_n$ by an odd permutation (so they are not obviously conjugate in $A_n$); we need to show that indeed they are *not* conjugate in $A_n$.\n\nIndeed, the only ways to rewrite $\\tau$ into $\\sigma$ (that is, by conjugation) involve taking each individual cycle and conjugating it into the corresponding cycle in $\\sigma$. There is no choice about which $\\tau$-cycle we take to which $\\sigma$-cycle, because all the cycle lengths are distinct.\nBut the permutation $\\rho$ which takes the $\\tau$-cycles to the $\\sigma$-cycles is odd, so is not in $A_n$.\n\nMoreover, since each cycle is odd, we can't get past the problem by just cycling round the cycle (for instance, by taking the cycle $(123)$ to the cycle $(231)$), because that involves conjugating by the cycle itself: an even permutation, since the cycle length is odd.\nComposing with the even permutation can't take $\\rho$ from being odd to being even.\n\nTherefore $\\tau$ and $\\sigma$ genuinely are not conjugate in $A_n$, so the conjugacy class splits.\n\n%%hidden(Example):\nSuppose $\\sigma = (12345)(678), \\tau = (12345)(687)$ in $A_8$.\n\nThen conjugation of $\\tau$ by $\\rho = (87)$, an odd permutation, yields $\\sigma$.\nCan we do this with an *even* permutation instead?\n\nConjugating $\\tau$ by anything at all must keep the cycle type the same, so the thing we conjugate by must take $(12345)$ to $(12345)$ and $(687)$ to $(678)$.\n\nThe only ways of taking $(12345)$ to $(12345)$ are by conjugating by some power of $(12345)$ itself; that is even.\nThe only ways of taking $(687)$ to $(678)$ are by conjugating by $(87)$, or by $(87)$ and then some power of $(678)$; all of these are odd.\n\nTherefore the only possible ways of taking $\\tau$ to $\\sigma$ involve conjugating by an odd permutation $(678)^m(87)$, possibly alongside some powers of an even permutation $(12345)$; therefore to get from $\\tau$ to $\\sigma$ requires an odd permutation, so they are in fact not conjugate in $A_8$.\n%%\n\n# Example\n\nIn $A_7$, the cycle types are $(7)$, $(5, 1, 1)$, $(4,2,1)$, $(3,2,2)$, $(3,3,1)$, $(3,1,1,1,1)$, $(1,1,1,1,1,1,1)$, and $(2,2,1,1,1)$.\nThe only class which splits is the $7$-cycles, of cycle type $(7)$; it splits into a pair of half-sized classes with representatives $(1234567)$ and $(12)(1234567)(12)^{-1} = (2134567)$.", "date_published": "2016-06-28T07:44:43Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4kv"} {"id": "bf97588cc3afc769f94588b2745eebd9", "title": "Decision problem", "url": "https://arbital.com/p/decision_problem", "source": "arbital", "source_type": "text", "text": "A **decision problem** is a subset $D$ of a set of instances $A$, where generally $A$ is the set of finite [https://arbital.com/p/-bitstrings](https://arbital.com/p/-bitstrings) $\\{0,1\\}^*$.\n\nIn plain English, a decision problem is composed of a number of members 
of a collection, which generally share a common property. \n\nIn plainer English, a decision problem is a question of the form \"does bitstring $w$ have property $p$, yes or no?\" \n\nEvery question that fits the set form (is this string in the set?) can be re-expressed as \"does this bitstring have the property of being in the set?\" Everything that fits the property form can be re-expressed in set form as well, because a property implicitly specifies a set of things that have the property. Thus, the two forms are equivalent.\n\nIt is called a decision problem because we are trying to *decide* whether a given bitstring belongs to the set or not.\n\nA **solution** of the problem would consist in a procedure which given an instance $w$ of $A$ decides correctly in finite time whether $w$ belongs to $D$ or not. That is, a solution consists in a way of identifying the common trait shared by the members of $D$ which distinguish them from the rest of instances in $A$.\n\nWe say that a problem is [https://arbital.com/p/-decidable](https://arbital.com/p/-decidable) if there exists a solution which works for every instance. Trivially, finite decision problems are decidable, since we can have a look-up list with every element which we would consult every time we are given an instance to classify. If the instance corresponds to an element in the list, we accept it, and otherwise reject it. \n\nWe say that a decision problem is [https://arbital.com/p/-semidecidable](https://arbital.com/p/-semidecidable) if there is a procedure which, in finite time, identifies members of $D$, but fails to [https://arbital.com/p/-halt](https://arbital.com/p/-halt) and reject instances which do not belong to $D$.\n\nDecision problems can be classified according to their difficulty of solving in [complexity classes](https://arbital.com/p/), and are a central object of study in [https://arbital.com/p/-49w](https://arbital.com/p/-49w).\n\n#Examples\n##Graph connectedness\nSuppose we encode every possible finite graph, up to isomorphism, in the collection of finite bitstrings. Then we could define:\n$$\nCONNECTED = \\{s\\in\\{0,1\\}^*:\\text{$s$ represents a connected graph}\\}\n$$\nWhich would be a [decidable decision problem](https://arbital.com/p/graph_connectedness).\n\n##Tautology identification\nLet $TAUTOLOGY$ be the collection of formulas of [https://arbital.com/p/-first_order_logic](https://arbital.com/p/-first_order_logic) which are true for every [interpretation](https://arbital.com/p/mathematical_interpretation). 
Then [$TAUTOLOGY$ is a semidecidable decision problem](https://arbital.com/p/), as if a formula is [https://arbital.com/p/-valid](https://arbital.com/p/-valid) we can search for a proof, but otherwise, we do not have an effective procedure to decide that a formula is not valid.\n\n##Primality\nLet:\n$$\nPRIMES = \\{ x\\in \\mathbb{N}:\\text{$x$ is prime}\\}\n$$\nThen $PRIMES$ is a decidable decision problem.\n\nWe can alternately define\n$$\nPRIMES = \\{s\\in\\{0,1\\}^*:\\text{$s$ represent a prime number in base $2$}\\}\n$$\nThis illustrates that we can specify a particular decision problem in different ways.", "date_published": "2016-07-04T16:24:40Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina", "Alex Appel"], "summaries": ["A decision problem is a question of the form \"does [https://arbital.com/p/-bitstring](https://arbital.com/p/-bitstring) $w$ have [https://arbital.com/p/-property](https://arbital.com/p/-property) $p$, yes or no?\""], "tags": ["Start"], "alias": "4kz"} {"id": "3876354db66e9cef9f4118cf9599e2ff", "title": "Conjugacy classes of the alternating group on five elements: Simpler proof", "url": "https://arbital.com/p/conjugacy_classes_alternating_five_simpler", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-4hf](https://arbital.com/p/-4hf) $A_5$ on five elements has [conjugacy classes](https://arbital.com/p/4bj) very similar to those of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $S_5$, but where one of the classes has split in two.\n\nNote that $A_5$ has $60$ elements, since it is precisely half of the elements of $S_5$ which has $5! = 120$ elements (where the exclamation mark is the [https://arbital.com/p/-factorial](https://arbital.com/p/-factorial) function).\n\n# Table\n\n$$\\begin{array}{|c|c|c|c|}\n\\hline\n\\text{Representative}& \\text{Size of class} & \\text{Cycle type} & \\text{Order of element} \\\\ \\hline\n(12345) & 12 & 5 & 5 \\\\ \\hline\n(21345) & 12 & 5 & 5 \\\\ \\hline\n(123) & 20 & 3,1,1 & 3 \\\\ \\hline\n(12)(34) & 15 & 2,2,1 & 2 \\\\ \\hline\ne & 1 & 1,1,1,1,1 & 1 \\\\ \\hline\n\\end{array}$$\n\n# Working\n\nFirstly, the identity is in a class of its own, because $\\tau e \\tau^{-1} = \\tau \\tau^{-1} = e$ for every $\\tau$. 
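The table can also be verified by brute force. The following is a quick computational sketch (Python; not part of the original page) that enumerates $A_5$ and computes its conjugacy classes directly:

```python
from itertools import permutations

def parity(p):
    """Return 0 if the permutation p (a tuple of images of 0..n-1) is even, 1 if odd."""
    seen, transposition_count = set(), 0
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        transposition_count += length - 1  # a cycle of length L is L-1 transpositions
    return transposition_count % 2

def compose(a, b):
    """(a o b)(x) = a(b(x))."""
    return tuple(a[b[x]] for x in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for point, image in enumerate(a):
        inv[image] = point
    return tuple(inv)

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
assert len(A5) == 60

remaining, class_sizes = set(A5), []
while remaining:
    x = next(iter(remaining))
    conjugacy_class = {compose(compose(g, x), inverse(g)) for g in A5}
    class_sizes.append(len(conjugacy_class))
    remaining -= conjugacy_class

print(sorted(class_sizes))  # [1, 12, 12, 15, 20], matching the table above
```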
\n\nNow, by the same reasoning as in [the proof](https://arbital.com/p/4bh) that conjugate elements must have the same cycle type in $S_n$, that result also holds in $A_n$.\n\nHence we just need to see whether any of the cycle types comprise more than one conjugacy class.\n\nRecall that the available cycle types are $(5)$, $(3,1,1)$, $(2,2,1)$, $(1,1,1,1,1)$ (the last of which is the identity and we have already considered it).\n\n## Double-transpositions\n\nAll the double-transpositions are conjugate (so the $(2,2,1)$ cycle type does not split):\n\n- $(ab)(cd)$ is conjugate to $(ab)(ce)$ if we conjugate by $(ab)(de)$; symmetrically this covers all the cases where one of the two transpositions remains the same.\n- $(ab)(cd)$ is conjugate to $(ac)(bd)$ by $(cba)$; this covers the case that $e$ is not introduced.\n- $(ab)(cd)$ is conjugate to $(ac)(be)$ by $(bc)(de)$; this covers the remaining cases that $e$ is introduced and neither of the two transpositions remains the same.\n\n## Three-cycles\n\nAll the three-cycles are conjugate (so the $(3,1,1)$ cycle type does not split): \n\n- $(abc)$ is conjugate to $(acb)$ by $(bc)(de)$, so three-cycles are conjugate to their permutations.\n- $(abc)$ is conjugate to $(abd)$ by $(cde)$; this covers the case of introducing a single new element to the cycle.\n- $(abc)$ is conjugate to $(ade)$ by $(bd)(ce)$; this covers the case of introducing two new elements to the cycle.\n\n## Five-cycles\n\nThis class does split: I claim that $(12345)$ and $(21345)$ are not conjugate.\n(Once we have this, then the class must split into two chunks, since $\\{ \\rho (12345) \\rho^{-1}: \\rho \\ \\text{even} \\}$ is closed under conjugation in $A_5$, and $\\{ \\rho (12345) \\rho^{-1}: \\rho \\ \\text{odd} \\}$ is closed under conjugation in $A_5$.\nThe first is the conjugacy class of $(12345)$ in $A_5$; the second is the conjugacy class of $(21345) = (12)(12345)(12)^{-1}$.\nThe only question here was whether they were separate conjugacy classes or whether their union was the conjugacy class.)\n\nRecall that $\\tau (12345) \\tau^{-1} = (\\tau(1), \\tau(2), \\tau(3), \\tau(4), \\tau(5))$, so we would need a permutation $\\tau$ such that $\\tau$ sends $1$ to $2$, $2$ to $1$, $3$ to $3$, $4$ to $4$, and $5$ to $5$.\nThe only such permutation is $(12)$, the transposition, but that is not actually a member of $A_5$.\n\nHence in fact $(12345)$ and $(21345)$ are not conjugate.", "date_published": "2016-06-18T13:07:31Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "4l0"} {"id": "78f1d53b6fdba907879776b75bdb9e5e", "title": "Poset: Exercises", "url": "https://arbital.com/p/poset_exercises", "source": "arbital", "source_type": "text", "text": "Try these exercises to test your poset knowledge.\n\n# Corporate Ladder\n\nImagine a company with five distinct roles: CEO, marketing manager, marketer, IT manager, and IT worker. At this company, if the CEO gives an order, everyone else must follow it. If an order comes from an IT manager, only IT workers are obligated to follow it. Similarly, if an order comes from a marketing manager, only marketers are obligated to follow it. Nobody is obligated to follow orders from marketers or IT workers.\n\nDo the workers at this company form a poset under the \"obligated to follow orders from\" relation?\n\n%%%hidden(Show solution):\nWhile not technically a poset due to its lack of reflexivity, it is pretty close. 
It is actually a strict ordering, whose underlying partial order could be obtained by making the reasonable assumption that each worker will follow her own orders.\n%%%\n\n# Bag Inclusion\n\nWe can define a notion of [power sets](https://arbital.com/p/power_set) for [bags](https://arbital.com/p/3jk) as follows. Let $X$ be a set, then we use $\\mathcal{M}(X)$ to denote the set of all bags containing elements of $X$. Let $A \\in \\mathcal{M}(X)$. The multiplicity function of $A$, $1_A : X \\rightarrow \\mathbb N$ maps each element of $X$ to the number of times that element occurs in $A$. We can use multiplicity functions to define a inclusion relation $\\subseteq$ for bags. For $A, B \\in \\mathcal M(X)$, we write $A \\subseteq B$ whenever for all $x \\in X$, $1_A(x) \\leq 1_B(x)$.\n\nDoes $\\mathcal{M}(X)$ form a poset under the bag inclusion relation $\\subseteq$? If so, prove it. Otherwise, show that it does not satisfy one of the three poset properties. \n\n# Duality\n\nGive the dual of the following proposition. \n\nFor all posets $P$ and all $p, q \\in P$, $q \\prec p$ implies that $\\{ r \\in P~|~r \\leq p \\}$ is a superset of $\\{ r \\in P~|~r \\leq q\\}$.\n\n%%hidden(Show solution):\nFor all posets $P$ and all $p, q \\in P$, $q \\succ p$ implies that $\\{ r \\in P~|~r \\geq p \\}$ is a superset of $\\{ r \\in P~|~r \\geq q\\}$ (where $q \\succ p$ means $p \\prec q$).\n%%\n\n# Hasse diagrams\n\nLet $X = \\{ x, y, z \\}$. Draw a Hasse diagram for the poset $\\langle \\mathcal P(X), \\subseteq \\rangle$ of the power set of $X$ ordered by inclusion.\n\n%%hidden(Show solution):\n![A Hasse diagram of the power set of X, ordered by inclusion](http://i.imgur.com/WG3OLFc.png)\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n e [= \"{}\"](https://arbital.com/p/label)\n x [= \"{x}\"](https://arbital.com/p/label)\n y [= \"{y}\"](https://arbital.com/p/label)\n z [= \"{z}\"](https://arbital.com/p/label)\n xy [= \"{x,y}\"](https://arbital.com/p/label)\n xz [= \"{x,z}\"](https://arbital.com/p/label)\n yz [= \"{y,z}\"](https://arbital.com/p/label)\n xyz [= \"{x,y,z}\"](https://arbital.com/p/label)\n\n rankdir = BT;\n e -> x\n e -> y\n e -> z\n x -> xy\n x -> xz\n y -> xy\n y -> yz\n z -> xz\n z -> yz\n xy -> xyz\n xz -> xyz\n yz -> xyz\n}\n%%%\n\n%%\n\n#Hasse diagrams (encore)\n\nIs it possible to draw a Hasse diagram for any poset?\n\n%%hidden(Show solution):\nNote that our description of Hasse diagrams made use of the covers relation $\\prec$. The covers relation, however, is not adequate to describe the structure of many posets. Consider the poset $\\langle \\mathbb R, \\leq \\rangle$ of the real numbers ordered by the standard comparison $\\leq$. We have $0 < 1$, but how would we convey that with a Hasse diagram? The problem is that $0$ has no covers, even though it is not a maximal element in $\\mathbb R$. In fact, for any $x \\in \\mathbb R$ such that $x > 0$, we can find a $y \\in \\mathbb R$ such that $0 < y < x$. 
This \"infinite density\" of $\\mathbb R$ makes it impossible to depict using a Hasse diagram.\n%%", "date_published": "2016-08-22T16:24:37Z", "authors": ["Eric Bruylant", "Kevin Clancy", "Mark Chimes", "Chris Pasek"], "summaries": [], "tags": ["Exercise "], "alias": "4l1"} {"id": "e630cfe771380d4d0306e6686d5f1a96", "title": "Cauchy's theorem on subgroup existence", "url": "https://arbital.com/p/cauchy_theorem_on_subgroup_existence", "source": "arbital", "source_type": "text", "text": "Cauchy's theorem states that if $G$ is a finite [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) and $p$ is a [prime](https://arbital.com/p/4mf) dividing $|G|$, the [order](https://arbital.com/p/3gg) of $G$, then $G$ has a subgroup of order $p$. Such a subgroup is necessarily [cyclic](https://arbital.com/p/47y) ([proof](https://arbital.com/p/4jh)).\n\n# Proof\n\nThe proof involves basically a single magic idea: from thin air, we pluck the definition of the following set.\n\nLet $$X = \\{ (x_1, x_2, \\dots, x_p) : x_1 x_2 \\dots x_p = e \\}$$ be the collection of $p$-[https://arbital.com/p/-tuple](https://arbital.com/p/-tuple)s of elements of the group such that the group operation applied to the tuple yields the identity.\nObserve that $X$ is not empty, because it contains the tuple $(e, e, \\dots, e)$.\n\nNow, the cyclic group $C_p$ of order $p$ [acts](https://arbital.com/p/3t9) on $X$ as follows: $$(h, (x_1, \\dots, x_p)) \\mapsto (x_2, x_3, \\dots, x_p, x_1)$$ where $h$ is the generator of $C_p$.\nSo a general element $h^i$ acts on $X$ by sending $(x_1, \\dots, x_p)$ to $(x_{i+1}, x_{i+2} , \\dots, x_p, x_1, \\dots, x_i)$.\n\nThis is indeed a group action (exercise).\n\n%%hidden(Show solution):\n\n- It certainly outputs elements of $X$, because if $x_1 x_2 \\dots x_p = e$, then $$x_{i+1} x_{i+2} \\dots x_p x_1 \\dots x_i = (x_1 \\dots x_i)^{-1} (x_1 \\dots x_p) (x_1 \\dots x_i) = (x_1 \\dots x_i)^{-1} e (x_1 \\dots x_i) = e$$\n- The identity acts trivially on the set: since rotating a tuple round by $0$ places is the same as not permuting it at all.\n- $(h^i h^j)(x_1, x_2, \\dots, x_p) = h^i(h^j(x_1, x_2, \\dots, x_p))$ because the left-hand side has performed $h^{i+j}$ which rotates by $i+j$ places, while the right-hand side has rotated by first $j$ and then $i$ places and hence $i+j$ in total.\n%%\n\nNow, fix $\\bar{x} = (x_1, \\dots, x_p) \\in X$.\n\nBy the [Orbit-Stabiliser theorem](https://arbital.com/p/4l8), the size of the [orbit](https://arbital.com/p/4v8) $\\mathrm{Orb}_{C_p}(\\bar{x})$ of $\\bar{x}$ divides $|C_p| = p$, so (since $p$ is prime) it is either $1$ or $p$ for every $\\bar{x} \\in X$.\n\nNow, what is the size of the set $X$?\n%%hidden(Show solution):\nIt is $|G|^{p-1}$.\n\nIndeed, a single $p$-tuple in $X$ is specified precisely by its first $p-1$ elements; then the final element is constrained to be $x_p = (x_1 \\dots x_{p-1})^{-1}$.\n%%\n\nAlso, the orbits of $C_p$ acting on $X$ partition $X$ ([proof](https://arbital.com/p/4mg)).\nSince $p$ divides $|G|$, we must have $p$ dividing $|G|^{p-1} = |X|$.\nTherefore since $|\\mathrm{Orb}_{C_p}((e, e, \\dots, e))| = 1$, there must be at least $p-1$ other orbits of size $1$, because each orbit has size $p$ or $1$: if we had fewer than $p-1$ other orbits of size $1$, then there would be at least $1$ but strictly fewer than $p$ orbits of size $1$, and all the remaining orbits would have to be of size $p$, contradicting that $p \\mid |X|$.\n\n\nHence there is indeed another orbit of size $1$; say it is the singleton $\\{ \\bar{x} \\}$ where $\\bar{x} = 
(x_1, \\dots, x_p)$.\n\nNow $C_p$ acts by cycling $\\bar{x}$ round, and we know that doing so does not change $\\bar{x}$, so it must be the case that all the $x_i$ are equal; hence $(x, x, \\dots, x) \\in X$ and so $x^p = e$ by definition of $X$.\nSince this singleton orbit is not $\\{ (e, e, \\dots, e) \\}$, we have $x \\neq e$; as $x^p = e$ and $p$ is prime, $x$ has order exactly $p$, so the cyclic subgroup generated by $x$ is the required subgroup of order $p$.", "date_published": "2016-06-30T12:08:56Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Proof"], "alias": "4l6"} {"id": "a7a88c91bfa649be194015684fc5a08b", "title": "Equaliser (category theory)", "url": "https://arbital.com/p/category_theory_equaliser", "source": "arbital", "source_type": "text", "text": "In [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7), an *equaliser* of a pair of arrows $f, g: A \\to B$ is an object $E$ and a universal arrow $e: E \\to A$ such that $ge = fe$.\nExplicitly, $ge = fe$, and for any object $X$ and arrow $x: X \\to A$ such that $fx = gx$, there is a unique factorisation $\\bar{x} : X \\to E$ such that $e \\bar{x} = x$.", "date_published": "2016-06-18T13:45:48Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Formal definition", "Stub"], "alias": "4l7"} {"id": "00465a9e0766763f9891ec8d97e08e35", "title": "Orbit-stabiliser theorem", "url": "https://arbital.com/p/orbit_stabiliser_theorem", "source": "arbital", "source_type": "text", "text": "summary(Technical): Let $G$ be a finite [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd), [acting](https://arbital.com/p/3t9) on a set $X$. Let $x \\in X$.\nWriting $\\mathrm{Stab}_G(x)$ for the [stabiliser](https://arbital.com/p/4mz) of $x$, and $\\mathrm{Orb}_G(x)$ for the [orbit](https://arbital.com/p/4v8) of $x$, we have $$|G| = |\\mathrm{Stab}_G(x)| \\times |\\mathrm{Orb}_G(x)|$$ where $| \\cdot |$ refers to the size of a set.\n\nLet $G$ be a finite [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd), [acting](https://arbital.com/p/3t9) on a set $X$. 
Let $x \\in X$.\nWriting $\\mathrm{Stab}_G(x)$ for the [stabiliser](https://arbital.com/p/4mz) of $x$, and $\\mathrm{Orb}_G(x)$ for the [orbit](https://arbital.com/p/4v8) of $x$, we have $$|G| = |\\mathrm{Stab}_G(x)| \\times |\\mathrm{Orb}_G(x)|$$ where $| \\cdot |$ refers to the size of a set.\n\nThis statement generalises to infinite groups, where the same proof goes through to show that there is a [bijection](https://arbital.com/p/499) between the [left cosets](https://arbital.com/p/4j4) of the group $\\mathrm{Stab}_G(x)$ and the orbit $\\mathrm{Orb}_G(x)$.\n\n# Proof\n\nRecall that the [https://arbital.com/p/-4lt](https://arbital.com/p/-4lt) of the parent group.\n\nFirstly, it is enough to show that there is a bijection between the left cosets of the stabiliser, and the orbit.\nIndeed, then $$|\\mathrm{Orb}_G(x)| |\\mathrm{Stab}_G(x)| = |\\{ \\text{left cosets of} \\ \\mathrm{Stab}_G(x) \\}| |\\mathrm{Stab}_G(x)|$$\nbut the right-hand side is simply $|G|$ because an element of $G$ is specified exactly by specifying an element of the stabiliser and a coset.\n(This follows because the [cosets partition the group](https://arbital.com/p/4j5).)\n\n## Finding the bijection\n\nDefine $\\theta: \\mathrm{Orb}_G(x) \\to \\{ \\text{left cosets of} \\ \\mathrm{Stab}_G(x) \\}$, by $$g(x) \\mapsto g \\mathrm{Stab}_G(x)$$\n\nThis map is well-defined: note that any element of $\\mathrm{Orb}_G(x)$ is given by $g(x)$ for some $g \\in G$, so we need to show that if $g(x) = h(x)$, then $g \\mathrm{Stab}_G(x) = h \\mathrm{Stab}_G(x)$.\nThis follows: $h^{-1}g(x) = x$ so $h^{-1}g \\in \\mathrm{Stab}_G(x)$.\n\nThe map is [injective](https://arbital.com/p/4b7): if $g \\mathrm{Stab}_G(x) = h \\mathrm{Stab}_G(x)$ then we need $g(x)=h(x)$.\nBut this is true: $h^{-1} g \\in \\mathrm{Stab}_G(x)$ and so $h^{-1}g(x) = x$, from which $g(x) = h(x)$.\n\nThe map is [surjective](https://arbital.com/p/4bg): let $g \\mathrm{Stab}_G(x)$ be a left coset.\nThen $g(x) \\in \\mathrm{Orb}_G(x)$ by definition of the orbit, so $g(x)$ gets taken to $g \\mathrm{Stab}_G(x)$ as required.\n\nHence $\\theta$ is a well-defined bijection.", "date_published": "2016-07-01T02:46:57Z", "authors": ["Mark Chimes", "Patrick Stevens", "Alexei Andreev"], "summaries": ["A [group action](https://arbital.com/p/-3t9) of a group $G$ acting on a set $X$ describes how $G$ sends elements of $X$ to other elements of $X$. Given a specific element $x \\in X$, the [stabiliser](https://arbital.com/p/-4mz) is all those elements of the group which send $x$ back to itself, and the [orbit](https://arbital.com/p/-4v8) of $x$ is all the elements to which $x$ can get sent. \n\nThis theorem tells you that $G$ is divided into equal-sized pieces using $x$. Each piece \"looks like\" the stabilizer of $x$ (and is the same size), and the orbit of $x$ tells you how to \"move the piece around\" over $G$ in order to cover it. \n\nPut another way, each element $y$ in the orbit of $x$ is transformed \"in the same way\" by $G$ relative to $y$. \n\nThis theorem is closely related to [Lagrange's Theorem](https://arbital.com/p/-4jn)."], "tags": [], "alias": "4l8"} {"id": "eccc782bfb359cef03409b4f9a717fce", "title": "Needs exercises", "url": "https://arbital.com/p/needs_exercises_meta_tag", "source": "arbital", "source_type": "text", "text": "One way the user can make sure they've understood the material is to do [exercises](https://arbital.com/p/4n2). 
They should usually be on their own [lens](https://arbital.com/p/17b), as additional \"homework problems\" that users can solve to get more practice. They can also be embedded in the explanation pages, so the user can solve them as they are learning, especially on [path](https://arbital.com/p/1rt) pages where the reader has asked for exercises.\n\nMany pages could be improved by adding exercises, but priority should be given to: \n\n* Pages which don't currently have exercises.\n* Pages which have a full explanation of a concept (rather than just [defienitions](https://arbital.com/p/4gs)).\n* Pages which are a requisite that many other concepts rely on.\n* Pages which are mostly self-contained, rather than overviews of a large and multifaceted concept (exercises should go with child pages explaining the subtopics).", "date_published": "2016-06-20T20:55:38Z", "authors": ["Eric Bruylant", "Mark Chimes", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "4lg"} {"id": "2a01921c6d100fc88892772cbc88d98b", "title": "Join and meet: Exercises", "url": "https://arbital.com/p/poset_join_exercises", "source": "arbital", "source_type": "text", "text": "Try these exercises to test your knowledge of joins and meets.\n\n\nTangled up \n--------------------\n\n![a big crazy poset](http://i.imgur.com/cYqkKm7.png)\n\nDetermine whether or not the following joins and meets exist in the poset depicted by the above Hasse diagram. \n\n$c \\vee b$\n\n%%hidden(Show solution):\nThis join does *not* exist, because $i$ and $j$ are incomparable upper bounds of $\\{ c, b \\}$, and no smaller upper bounds of $\\{ c, b \\}$ exist.\n%%\n\n$g \\vee e$\n\n%%hidden(Show solution):\nThis join exists. $g \\vee e = j$.\n%%\n\n$j \\wedge f$ \n\n%%hidden(Show solution):\nThis meet does *not* exist, because $d$ and $b$ are incomparable lower bounds of $\\{ j, f \\}$, and no larger lower bounds of $\\{ j, f \\}$ exist.\n%%\n\n$\\bigvee \\{a,b,c,d,e,f,g,h,i,j,k,l\\}$\n\n%%hidden(Show solution):\nThis join exists. It is $l$.\n%%\n\nJoin fu\n------------\n\nLet $P$ be a poset, $S \\subseteq P$, and $p \\in P$. Prove that if both $\\bigvee S$ and $(\\bigvee S) \\vee p$ exist then $\\bigvee (S \\cup \\{p\\})$ exists as well, and $(\\bigvee S) \\vee p = \\bigvee (S \\cup \\{p\\})$.\n\n%%hidden(Show solution):\nFor any $X \\subset P$, let $X^U$ denote the set of upper bounds of $X$. The above proposition follows from the fact that $\\{\\bigvee S, p\\}^U = (S \\cup p)^U$, which is apparent from the following chain of bi-implications:\n\n$q \\in \\{\\bigvee S, p\\}^U \\iff$ \n\nfor all $s \\in S, q \\geq \\bigvee S \\geq s$, and $q \\geq p \\iff$\n\n$q \\in (S \\cup \\{p\\})^U$.\n\nIf two subsets of a poset have the same set of upper bounds, then either they both lack a least upper bound, or both have the same least upper bound.\n%%\n\nMeet fu\n------------\n\nLet $P$ be a poset, $S \\subseteq P$, and $p \\in P$. Prove that if both $\\bigwedge S$ and $(\\bigwedge S) \\wedge p$ exist then $\\bigwedge (S \\cup \\{p\\})$ exists as well, and $(\\bigwedge S) \\wedge p = \\bigwedge(S \\cup \\{p\\})$.\n\n%%hidden(Show solution):\nNote that the proposition we are trying to prove here is the dual of the one stated in join fu. Thanks to the duality principle, this theorem therefore comes for free with our solution to join fu.\n%%\n\nQuite a big join\n--------------------------\n\nIn the poset $\\langle \\mathbb N, | \\rangle$ discussed in [https://arbital.com/p/43s](https://arbital.com/p/43s), does $\\bigvee \\mathbb N$ exist? 
If so, what is it?", "date_published": "2016-07-29T19:24:27Z", "authors": ["Eric Bruylant", "Kevin Clancy"], "summaries": [], "tags": ["Exercise "], "alias": "4ll"} {"id": "5284cc0e6d93659517c9cee7ccb3a3bd", "title": "Stabiliser is a subgroup", "url": "https://arbital.com/p/stabiliser_is_a_subgroup", "source": "arbital", "source_type": "text", "text": "Let $G$ be a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) which [acts](https://arbital.com/p/3t9) on the [set](https://arbital.com/p/-3jz) $X$.\nThen for every $x \\in X$, the [stabiliser](https://arbital.com/p/4mz) $\\mathrm{Stab}_G(x)$ is a [subgroup](https://arbital.com/p/-subgroup) of $G$.\n\n# Proof\n\nWe must check the group axioms.\n\n- The [identity](https://arbital.com/p/-54p), $e$, is in the stabiliser because $e(x) = x$; this is part of the definition of a group action.\n- [Closure](https://arbital.com/p/-3gy) is satisfied: if $g(x) = x$ and $h(x) = x$, then $(gh)(x) = g(h(x))$ by definition of a group action, but that is $g(x) = x$.\n- [Associativity](https://arbital.com/p/-3h4) is inherited from the parent group.\n- [Inverses](https://arbital.com/p/-inverse_mathematics): if $g(x) = x$ then $g^{-1}(x) = g^{-1} g(x) = e(x) = x$.", "date_published": "2016-07-07T15:26:16Z", "authors": ["Dylan Hendrickson", "Mark Chimes", "Patrick Stevens"], "summaries": ["Given a [group](https://arbital.com/p/-3gd) $G$ [acting](https://arbital.com/p/-3t9) on a [set](https://arbital.com/p/-3jz) $X$, the [stabiliser](https://arbital.com/p/-4mz) of some [element](https://arbital.com/p/-element_mathematics) $x \\in X$ is a [subgroup](https://arbital.com/p/-subgroup) of $G$."], "tags": ["Proof"], "alias": "4lt"} {"id": "d5c7e66e64dc32e12b1eae8d8dcdbe6e", "title": "Prime number", "url": "https://arbital.com/p/prime_number", "source": "arbital", "source_type": "text", "text": "A [https://arbital.com/p/-45h](https://arbital.com/p/-45h) $n > 1$ is *prime* if it has no [divisors](https://arbital.com/p/divisor_number_theory) other than itself and $1$.\nEquivalently, it has the property that if $n \\mid ab$ %%note:That is, $n$ divides the product $ab$%% then $n \\mid a$ or $n \\mid b$.\nConventionally, $1$ is considered to be neither prime nor [composite](https://arbital.com/p/composite_number) (i.e. non-prime).\n\n# Examples\n\n- The number $2$ is prime, because its divisors are $1$ and $2$; therefore it has no divisors other than itself and $1$.\n- The number $3$ is also prime, as are $5, 7, 11, 13, \\dots$.\n- The number $4$ is not prime; neither are $6, 8, 9, 10, 12, \\dots$.\n\n# Properties\n\n- There are infinitely many primes. ([Proof.](https://arbital.com/p/54r))\n- Every natural number may be written as a product of primes; moreover, this can only be done in one way (if we count \"the same product but with the order swapped\" as being the same: for example, $2 \\times 3 = 3 \\times 2$ is just one way of writing $6$). 
([Proof.](https://arbital.com/p/fundamental_theorem_of_arithmetic))\n\n# How to find primes\n\nIf we want to create a list of all the primes below a given number, or the first $n$ primes for some fixed $n$, then an efficient way to do it is the [Sieve of Eratosthenes](https://arbital.com/p/sieve_of_eratosthenes).\n(There are other sieves available, but Eratosthenes is the simplest.)\n\nThere are many [tests](https://arbital.com/p/primality_testing) for primality and for compositeness.\n\n# More general concept\n\nThis definition of \"prime\" is, in a more general [ring-theoretic](https://arbital.com/p/3gq) setting, known instead as the property of [irreducibility](https://arbital.com/p/5m1).\nConfusingly, there is a slightly different notion in this ring-theoretic setting, which goes by the name of \"prime\"; this notion has [a separate page on Arbital](https://arbital.com/p/prime_element_ring_theory).\nIn the ring of integers, the two ideas of \"prime\" and \"irreducible\" actually coincide, but that is because the integers form a ring with several very convenient properties: in particular, being a [Euclidean domain](https://arbital.com/p/euclidean_domain), they are a [https://arbital.com/p/-principal_ideal_domain](https://arbital.com/p/-principal_ideal_domain) (PID), and [PIDs have unique factorisation](https://arbital.com/p/pid_implies_ufd).", "date_published": "2016-07-27T18:03:30Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Formal definition", "Stub"], "alias": "4mf"} {"id": "0741dbb6a65310ab40d0a66ece3b149f", "title": "Group orbits partition", "url": "https://arbital.com/p/group_orbits_partition", "source": "arbital", "source_type": "text", "text": "Let $G$ be a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd), [acting](https://arbital.com/p/3t9) on the set $X$.\nThen the [orbits](https://arbital.com/p/group_orbit) of $X$ under $G$ form a [partition](https://arbital.com/p/set_partition) of $X$.\n\n# Proof\n\nWe need to show that every element of $X$ is in an orbit, and that if $x \\in X$ lies in two orbits then they are the same orbit.\n\nCertainly $x \\in X$ lies in an orbit: it lies in the orbit $\\mathrm{Orb}_G(x)$, since $e(x) = x$ where $e$ is the identity of $G$.\n(This follows by the definition of an action.)\n\nSuppose $x$ lies in both $\\mathrm{Orb}_G(a)$ and $\\mathrm{Orb}_G(b)$, where $a, b \\in X$.\nThen $g(a) = h(b) = x$ for some $g, h \\in G$.\nThis tells us that $h^{-1}g(a) = b$, so in fact $\\mathrm{Orb}_G(a) = \\mathrm{Orb}_G(b)$; it is an exercise to prove this formally.\n\n%%hidden(Show solution):\nIndeed, if $r \\in \\mathrm{Orb}_G(b)$, then $r = k(b)$ for some $k \\in G$.\nThen $r = k(h^{-1}g(a)) = kh^{-1}g(a)$, so $r \\in \\mathrm{Orb}_G(a)$.\n\nConversely, if $r \\in \\mathrm{Orb}_G(a)$, then $r = m(a)$ for some $m \\in G$.\nThen $r = m(g^{-1}h(b)) = m g^{-1} h (b)$, so $r \\in \\mathrm{Orb}_G(b)$.\n%%", "date_published": "2016-06-20T06:55:28Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Proof"], "alias": "4mg"} {"id": "83a5773dba39ca418cf27f76d38d4c96", "title": "Product (Category Theory)", "url": "https://arbital.com/p/product_category_theory", "source": "arbital", "source_type": "text", "text": "This simultaneously captures the concept of a product of [sets](https://arbital.com/p/-3jz), [posets](https://arbital.com/p/-3rb), [groups](https://arbital.com/p/-3gd), [topological spaces](https://arbital.com/p/-topological_space) etc. 
In addition, like any [universal construction](https://arbital.com/p/-universal_property), this characterization does not differentiate between [isomorphic](https://arbital.com/p/-4f4) versions of the product, thus allowing one to abstract away from an arbitrary, [specific construction](https://arbital.com/p/-specific_construction_category_theory).\n\n##Definition##\nGiven a pair of objects $X$ and $Y$ in a category $\\mathbb{C}$, the **product** of $X$ and $Y$ is an object $P$ *along with a pair of morphisms* $f: P \\rightarrow X$ and $g: P \\rightarrow Y$ satisfying the following [universal](https://arbital.com/p/-universal_property) condition:\n\nGiven any other object $W$ and morphisms $u: W \\rightarrow X$ and $v:W \\rightarrow Y$ there is a *unique* morphism $h: W \\rightarrow P$ such that $fh = u$ and $gh = v$.", "date_published": "2016-06-24T01:55:45Z", "authors": ["Mark Chimes", "Nate Soares", "Patrick Stevens", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "4mj"} {"id": "95bf03188287ec8e7ed8f5c5c56384da", "title": "Empirical probabilities are not exactly 0 or 1", "url": "https://arbital.com/p/cromwells_rule", "source": "arbital", "source_type": "text", "text": "[Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule) in statistics argues that no empirical proposition should be assigned a subjective probability of *exactly* $0$ or $1$ - it is always *possible* to be mistaken. (Some argue that this rule should be generalized to logical facts as well.)\n\nA probability of exactly $0$ or $1$ corresponds to infinite [log odds](https://arbital.com/p/1zh), and would require infinitely [strong](https://arbital.com/p/22x) evidence to reach starting from any finite [prior](https://arbital.com/p/1rm). To put it another way, if you don't start out infinitely certain of a fact before making any observations (before you were born), you won't reach infinite certainty after any finite number of observations involving finite probabilities.\n\nAll sensible [universal priors](https://arbital.com/p/4mr) seem so far to have the property that they never assign probability exactly $0$ or $1$ to any predicted future observation, since their hypothesis space is always broad enough to include an imaginable state of affairs in which the future is different from the past.\n\nIf you did assign a probability of exactly $0$ or $1,$ you would be unable to [update](https://arbital.com/p/1ly) no matter how much contrary evidence you observed. [Prior odds](https://arbital.com/p/1rb) of 0 : 1 (or 1 : 0), times any finite [likelihood ratio](https://arbital.com/p/1rq), end up yielding 0 : 1 (or 1 : 0).\n\nAs Rafal Smigrodski put it:\n\n> \"I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. 
I don't like the idea of not being able to change my mind, ever.\"", "date_published": "2016-07-06T16:01:13Z", "authors": ["Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["[Cromwell's Rule](https://en.wikipedia.org/wiki/Cromwell%27s_rule) in statistics forbids us to assign probabilities of exactly $0$ or $1$ to any empirical proposition - that is, it is always possible to be mistaken.\n\n- Probabilities of exactly $0$ or $1$ correspond to infinite [log odds](https://arbital.com/p/1zh), meaning that no finite amount of evidence can ever suffice to reach them, or overturn them. Once you assign probability $0$ or $1,$ you can never change your mind.\n- Sensible [universal priors](https://arbital.com/p/4mr) never assign probability exactly $0$ or $1$ to any predicted future observation - their hypothesis space is always broad enough to include a scenario where the future is different from the past."], "tags": ["C-Class"], "alias": "4mq"} {"id": "e9bdbaf95d38b29b98050bd9696272d2", "title": "Universal prior", "url": "https://arbital.com/p/universal_prior", "source": "arbital", "source_type": "text", "text": "A \"universal\" [prior](https://arbital.com/p/27p) is a [probability distribution](https://arbital.com/p/3tb) that assigns positive probability to *every thinkable hypothesis*, for some reasonable meaning of \"every thinkable hypothesis\". A central example would be [Solomonoff induction](https://arbital.com/p/11w), in which the observations are a sequence of bits, the universal prior is \"every possible computer program that generates bit sequences\", and every such computer program starts with a positive probability.", "date_published": "2016-06-21T18:41:52Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Start"], "alias": "4mr"} {"id": "515e48bfca69c79540aa844eda3923ae", "title": "Stabiliser (of a group action)", "url": "https://arbital.com/p/group_stabiliser", "source": "arbital", "source_type": "text", "text": "Let the [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$ [act](https://arbital.com/p/3t9) on the set $X$.\nThen for each element $x \\in X$, the *stabiliser* of $x$ under $G$ is $\\mathrm{Stab}_G(x) = \\{ g \\in G: g(x) = x \\}$.\nThat is, it is the collection of elements of $G$ which do not move $x$ under the action.\n\nThe stabiliser of $x$ is a subgroup of $G$, for any $x \\in X$. ([Proof.](https://arbital.com/p/4lt))\n\nA closely related notion is that of the [orbit](https://arbital.com/p/4v8) of $x$, and the very important [Orbit-Stabiliser theorem](https://arbital.com/p/4l8) linking the two.", "date_published": "2016-06-28T20:08:10Z", "authors": ["Mark Chimes", "Patrick Stevens", "Alexei Andreev"], "summaries": ["The stabilizer of an element $x$ under the action of a group $G$ is the set ([actually a subgroup](https://arbital.com/p/4lt)) of elements of $G$ which leave $x$ unchanged."], "tags": ["Needs summary", "Formal definition", "Stub"], "alias": "4mz"} {"id": "b01471992bfbaad021e753b204e9cbc3", "title": "Exercise", "url": "https://arbital.com/p/exercise_meta_tag", "source": "arbital", "source_type": "text", "text": "One way readers can reinforce or make sure they've understood a concept is to do exercises. 
These pages give a collection of questions for this purpose.", "date_published": "2016-06-20T20:29:27Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "4n2"} {"id": "516b42b4f7945fa7545e040eef2fcb90", "title": "Why is the decimal expansion of log2(3) infinite?", "url": "https://arbital.com/p/log2_of_3_never_ends", "source": "arbital", "source_type": "text", "text": "$\\log_2(3)$ starts with\n\n1.5849625007211561814537389439478165087598144076924810604557526545410982277943585625222804749180882420909806624750591673437175524410609248221420839506216982994936575922385852344415825363027476853069780516875995544737266834624612364248850047581810676961316404807130823233281262445248670633898014837234235783662478390118977006466312634223363341821270106098049177472541357330110499026268818251703576994712157113638912494135752192998699040767081539505404488360\n\nand goes on indefinitely. Why is it 1.58... in particular? Well, it takes more than one but less than two [binary digits](https://arbital.com/p/binary_digit) to encode a [3-digit](https://arbital.com/p/4sj), so $\\log_2(3)$ must be between 1 and 2. ([Wait, what?](https://arbital.com/p/427)). It takes more than 15 but less than 16 binary digits to encode ten 3-digits, so $10 \\cdot \\log_2(3)$ must be between 15 and 16, which means $1.5 < \\log_2(3) < 1.6.$ It takes more than 158 but less than 159 binary digits to encode a hundred 3-digits, so $1.58 < \\log_2(3) < 1.59.$ And so on. Because no power of 3 is ever equal to any power of 2, $10^n \\cdot \\log_2(3)$ will never quite be a whole number, no matter how large $n$ is.\n\nThus, $\\log_2(3)$ has no finite decimal expansion, because $3$ is not a [rational](https://arbital.com/p/4zq) [https://arbital.com/p/-power](https://arbital.com/p/-power) of $2$. Using this argument, we can see that $\\log_b(x)$ is an integer if (and only if) $x$ is a power of $b$, and that $\\log_b(x)$ only has a finite expansion if some power of $x$ is a power of $b.$", "date_published": "2016-07-04T13:55:58Z", "authors": ["Nate Soares"], "summaries": ["It takes more than one but less than two [binary digits](https://arbital.com/p/binary_digit) to encode a [3-digit](https://arbital.com/p/4sj), so $\\log_2(3)$ must be between 1 and 2. ([Wait, what?](https://arbital.com/p/427)). It takes more than 15 but less than 16 binary digits to encode ten 3-digits, so $10 \\cdot \\log_2(3)$ must be between 15 and 16, which means $1.5 < \\log_2(3) < 1.6.$ It takes more than 158 but less than 159 binary digits to encode a hundred 3-digits, so $1.58 < \\log_2(3) < 1.59.$ And so on. Because no power of 3 is ever equal to any power of 2, $10^n \\cdot \\log_2(3)$ will never quite be a whole number, no matter how large $n$ is."], "tags": ["Start"], "alias": "4n8"} {"id": "ff5ab615ee81918a3e032d3326567cc7", "title": "Guide", "url": "https://arbital.com/p/guide_meta_tag", "source": "arbital", "source_type": "text", "text": "Meta tag for the start page of a [multi-page guide](https://arbital.com/p/327).", "date_published": "2016-06-21T20:10:52Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "4pj"} {"id": "c4c48d41e32c62c0ed3230c32fa18092", "title": "Needs brief summary", "url": "https://arbital.com/p/needs_brief_summary_meta_tag", "source": "arbital", "source_type": "text", "text": "Pages which have a long or in-depth main summary should also have a brief summary. 
These show up as an alternate tab on the [summary popover](https://arbital.com/p/1kl) when a user hovers over a [greenlink](https://arbital.com/p/17f). Add this tag to pages that need to have a brief summary.\n\nPages may also require a [general](https://arbital.com/p/433) or [technical](https://arbital.com/p/4pt) summary.", "date_published": "2016-06-22T15:11:40Z", "authors": ["Eric Bruylant", "Mark Chimes", "Nate Soares"], "summaries": [], "tags": ["Meta tags which request an edit to the page"], "alias": "4q2"} {"id": "c0052e4c6a9bf9e7ba86fc6a44822331", "title": "n-digit", "url": "https://arbital.com/p/n_digit", "source": "arbital", "source_type": "text", "text": "An $n$-digit is a physical object that can be stably placed into any of $n$ distinguishable states. For example, a coin (which can be placed heads or tails) and a single bit of memory on a computer (which either has a high volt level or a low volt level) are both examples of 2-digits. A [https://arbital.com/p/-42d](https://arbital.com/p/-42d) is an example of a 10-digit. One die is an example of a 6-digit; two dice together are an example of a 36-digit (because they can be placed in 36 different ways).\n\nWhat does and doesn't count as an $n$-digit depends on context and convention: For example, if you want to communicate a message to me by placing a penny heads-side up and choosing whether to point Abraham Lincoln's face either north, south, east, or west, then, for the purposes of the two of us, that penny is a 4-digit rather than a 2-digit. The definition of \"stably placed\" is also a bit up-for-grabs: If you're writing a computer program and need to store a [256-message](https://arbital.com/p/3v9) in short-term memory, then a byte of RAM will do, but if you need to store the same 256-message for a long period of time, you may need to use a less temporary 256-digit (such as a hard drive).\n\nNote that it's possible to emulate $m$-digits using $n$-digits, in general. If $m < n$ then an $n$-digit is trivially an $m$-digit (i.e., you can use a digit wheel like a 7-digit in a pinch), and if $m > n$ then, given enough $n$-digits, you can make do. For example, 3 coins can be used to encode an 8-digit. See also [https://arbital.com/p/emulating_digits](https://arbital.com/p/emulating_digits).", "date_published": "2016-06-24T02:58:44Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Non-standard terminology"], "alias": "4sj"} {"id": "aaa26ef8a8b00e033f7db0847e3dbb08", "title": "Emulating digits", "url": "https://arbital.com/p/emulating_digits", "source": "arbital", "source_type": "text", "text": "In general, given enough $n$-digits, you can emulate an $m$-digit, for any $m, n \\in$ [$\\mathbb N$](https://arbital.com/p/45h). If $m < n,$ you can emulate an $m$-digit using just one $n$-digit — in other words, you can use a [https://arbital.com/p/-42d](https://arbital.com/p/-42d) like a $7$-digit if you want to, by just ignoring three of the possible ways to set the digit wheel. If $m > n,$ things are a bit more difficult, but only slightly.\n\nBasically, with 2 $n$-digits, you can emulate a $n^2$-digit, as follows. Using your two $n$-digits, encode a number $(x, y)$ where $0 \\le x < n$ and $0 \\le y < n$. Interpret $(x, y)$ as $xn + y.$ You have now encoded a number between 0 (if $x = y = 0$) and $n^2 - 1$ (if $x = y = n-1$). 
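As a concrete sketch of the pairing just described (Python; not from the original text, and the function names are just for illustration):

```python
def encode(x, y, n):
    """Pack two n-digits (x, y) into a single n^2-digit."""
    assert 0 <= x < n and 0 <= y < n
    return x * n + y              # a value between 0 and n^2 - 1

def decode(value, n):
    """Recover the pair (x, y) from the n^2-digit."""
    return divmod(value, n)

assert encode(3, 7, 10) == 37     # two 10-digits packed into one 100-digit
assert decode(37, 10) == (3, 7)   # and unpacked again
```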
Congratulations, you just used two $n$-digits to make an $n^2$ digit!\n\nYou can use the same strategy to emulate $n^3$-digits (interpret $(x, y, z)$ as $xn^2 + yn + z$), $n^4$-digits (you get the picture), and so on. Now, to emulate an $m$-digit, just pick an exponent $a$ such that $n^a > m,$ collect $a$ copies of an $n$-digit, and you're done.\n\nThis isn't necessarily the most efficient way to use $n$-digits to encode $m$-digits. For example, if $m$ is 1,000,001 and $n$ is 10, then you need seven 10-digits. Seven 10-digits are enough to emulate a 10-million-digit, whereas $m$ is a mere million-and-one-digit — paying for a 10-million-digit when all you needed was an $m$-digit seems a bit excessive. For some different methods you can use to recover your losses when encoding one type of digit using another type of digit, see [https://arbital.com/p/44l](https://arbital.com/p/44l) and [https://arbital.com/p/3ty](https://arbital.com/p/3ty). (These techniques are fairly useful in practice, given that modern computers encode everything using [bits](https://arbital.com/p/3p0), i.e. 2-digits, and so it's useful to know how to efficiently encode $m$-messages using bits when $m$ is pretty far from the nearest power of 2.)", "date_published": "2016-06-25T15:14:02Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": [], "tags": [], "alias": "4sk"} {"id": "3aefc32ba788404cc19d36361079f74d", "title": "Decimal notation", "url": "https://arbital.com/p/decimal_notation", "source": "arbital", "source_type": "text", "text": "Seventeen is the number that represents as many things as there are x marks at the end of this sentence: xxxxxxxxxxxxxxxxx. Writing out numbers by saying \"the number representing how many things there are in this pile:\" gets unwieldy when the pile gets large. Thus, we _represent_ numbers using the [numerals](https://arbital.com/p/numeral) 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Specifically, we write the number representing this many things: xxx as \"3\", and the number representing this many things: xxxxxxxxxxx as 11, and the number seventeen as \"17\". This is called \"decimal notation,\" because there are ten different symbols that we use. Numbers don't have to be written down in decimal notation, it's also possible to write them down in other notations such as [https://arbital.com/p/-binary_notation](https://arbital.com/p/-binary_notation). Some numbers can't even be written out in decimal notation (in full); consider, for example, the number $e$ which, in decimal notation, starts out with the digits 2.71828... and just keeps going.\n\n# How decimal notation works\n\nHow do you know that 17 is the number that represents the number of xs in this sequence: xxxxxxxxxxxxxxxxx? In practice, you know this because the rules of decimal notation were ingrained in you in a young child. But do you know those rules explicitly? 
Could you write out a series of rules for taking in some input symbols like '2', '4', and '6' and using those to figure out how many pebbles to add to a pile?\n\nThe answer, of course, is this many:\n\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\nBut how do we perform that conversion in general?\n\nIn short, the number 246 represents $(2 \\cdot 100) + (4 \\cdot 10) + (6 \\cdot 1),$ so as long as we know how to do addition and multiplication, and as long as we know what the basic numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 mean, and as long as we know how to get to [powers](https://arbital.com/p/power) of 10 (1, 10, 100, 1000, ...), then we can explicitly understand decimal notation.\n\n(What do the basic numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 mean? By convention, they represent as many things as are in the following ten sequences of xs: , x, xx, xxx, xxxx, xxxxx, xxxxxx, xxxxxxx, xxxxxxxx, and xxxxxxxxx, respectively.)\n\nThis explanation assumes that you're already quite familiar with decimal notation. Explaining decimal notation from scratch to someone who doesn't already know it (which was a task people _actually had to do_ back when half the world was using Roman numerals, a much less convenient system for representing numbers) is a fun task; to see what that looks like, refer to [https://arbital.com/p/+representing_numbers_from_scratch](https://arbital.com/p/+representing_numbers_from_scratch).\n\n# Other common notations\n\nThe above text made use of [https://arbital.com/p/-unary_notation](https://arbital.com/p/-unary_notation), which is a method of representing numbers by making a number of marks that correspond to the represented number. For example, in unary notation, 17 is written xxxxxxxxxxxxxxxxx (or ||||||||||||||||| or whatever, the actual marks don't matter). This is perhaps somewhat easier to understand, but writing large numbers like 93846793284756 gets rather ungainly.\n\nHistorical notations include [Roman numerals](https://arbital.com/p/https://en.wikipedia.org/wiki/Roman_numerals), which were a pretty bad way to represent numbers. (It took humanity quite some time to find _good_ tools for representing numbers; the decimal notation that's been ingrained in your head since early childhood is the result of many centuries worth of effort. It's much harder to invent good representations of numbers when you don't even have good tools for writing down and reasoning about numbers. Furthermore, the modern tools for representing numbers aren't necessarily ideal!)\n\nCommon notations in modern times (aside from decimal notation) include [https://arbital.com/p/-binary_notation](https://arbital.com/p/-binary_notation) (often used by computers), [https://arbital.com/p/-hexadecimal_notation](https://arbital.com/p/-hexadecimal_notation) (which is a useful format for humans reading binary notation). 
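For instance, here is a small illustrative sketch (Python; not part of the original text) showing one and the same number written in all three notations:

```python
n = 246                    # decimal notation
print(bin(n))              # 0b11110110  -- binary notation
print(hex(n))              # 0xf6        -- hexadecimal notation
print(int("11110110", 2))  # 246, converting the binary string back again
```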
Binary notation and hexadecimal notation are very similar to decimal notation, with the difference that binary uses only two distinct symbols (instead of ten), and hexadecimal uses sixteen.", "date_published": "2016-07-04T05:32:35Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Nate Soares", "Michael Cohen"], "summaries": [], "tags": ["Needs brief summary"], "alias": "4sl"} {"id": "4ac4ae84cf7114e3f73cdd44016be1cd", "title": "Inverse function", "url": "https://arbital.com/p/inverse_function", "source": "arbital", "source_type": "text", "text": "If a function $g$ is the inverse of a function $f$, then $g$ undoes $f$, and $f$ undoes $g$. In other words, $g(f(x)) = x$ and $f(g(y)) = y$. An inverse function takes as its [domain](https://arbital.com/p/3js) the [range](https://arbital.com/p/3lm) of the original function, and the [range](https://arbital.com/p/3lm) of the inverse function is the [domain](https://arbital.com/p/3js) of the original. To put that another way, if $f$ maps $A$ onto $B$, then $g$ maps $B$ back onto $A$. To indicate the inverse of a function $f$, we write $f^{-1}$.\n\n## Examples ##\n\n$$y=f(x) = x^3\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ x=f^{-1}(y) = y^{1/3}$$\n$$y=f(x) = e^x\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ x=f^{-1}(y) = ln(y)$$\n$$y=f(x) = x+4\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ x=f^{-1}(y) = y-4$$", "date_published": "2016-07-07T15:47:27Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Eric Rogstad", "Michael Cohen"], "summaries": [], "tags": ["Formal definition"], "alias": "4sn"} {"id": "67612619466b0a029fdee35fb770b67b", "title": "Quotient group", "url": "https://arbital.com/p/quotient_group", "source": "arbital", "source_type": "text", "text": "summary(brief): \nA **quotient group** $G/N$ of a [group](https://arbital.com/p/-3gd) $G$ by a [normal subgroup](https://arbital.com/p/-4h6) $N$ is obtained by dividing up the group into pieces ([equivalence classes](https://arbital.com/p/-equivalence_class)), and then treating everything in one class the same way (by treating each class as a single element). The quotient group has a group structure defined on it based on the original structure of $G$, that works 'basically the same as $G$ up to equivalence'.\n \n\nsummary(technical): \nGiven a [group](https://arbital.com/p/-3gd) $(G, \\bullet)$ and a [normal subgroup](https://arbital.com/p/-4h6) $N \\unlhd G$. The **quotient** of $G$ by $N$, written $G/N$, has as [underlying set](https://arbital.com/p/-3gz) the set of (left)-[cosets](https://arbital.com/p/-4j4) of $N$ in $G$ and as operation $\\circ$ which is defined as $aN \\circ bN = (a \\bullet b) N$, where $xN = \\{xn : n \\in N\\}$ for each $x \\in G$. \n\nThe operation $\\circ$ is [well defined](https://arbital.com/p/well_defined) in the sense that if other representatives $a'$ and $b'$ are chosen such that \n$a'N = aN$ and $b'N = bN$ then also $(a' \\bullet b')N = (a \\bullet b)N$. \n\nThere is a [canonical](https://arbital.com/p/-canonical) [homomorphism](https://arbital.com/p/-47t) (sometimes called the [projection](https://arbital.com/p/-quotient_projection)) $\\phi: G \\rightarrow G/N: a \\mapsto aN$.\n\nThis is a special case of a [quotient](https://arbital.com/p/-quotient_universal_algebra) from universal algebra.\n\n\nsummary(examples):\nGiven the [group](https://arbital.com/p/-3gd) of [integers](https://arbital.com/p/-48l) $\\mathbb{Z}$, and the [normal subgroup](https://arbital.com/p/-nomral_subgroup) $2 \\mathbb{Z}$ of all even numbers, we can form the group $\\mathbb{Z}/2\\mathbb{Z}$. 
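(Before the quotient-group discussion continues, here is a quick numerical check of the inverse-function examples listed above, as a small Python sketch. Floating-point arithmetic is used, so the round trips only agree up to rounding error.)

```python
import math

# Each pair (f, f_inverse) is one of the examples above.
examples = [
    (lambda x: x ** 3,      lambda y: y ** (1 / 3)),   # cube and cube root
    (lambda x: math.exp(x), lambda y: math.log(y)),    # e^x and ln(y)
    (lambda x: x + 4,       lambda y: y - 4),          # add 4 and subtract 4
]

for f, f_inv in examples:
    for x in [0.5, 1.0, 2.0, 3.7]:
        assert math.isclose(f_inv(f(x)), x)   # f_inv undoes f
        y = f(x)
        assert math.isclose(f(f_inv(y)), y)   # f undoes f_inv

print("all round trips returned the starting value (up to rounding)")
```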
This group has only two [elements](https://arbital.com/p/-element) and [only cares](https://arbital.com/p/-personification_in_mathematics) if a number is odd or even. It tells us that the sum of an odd and an even number is odd, that the sum of two even numbers is even, and the sum of two odd numbers is also even!\n\n\n\nsummary(motivation):\nLet's say we have a group. Maybe the group is kinda large and unwieldy, and we want to find an easier way to think about it. Or maybe we just want to focus on a certain aspect of the group. Some of the actions will change things in ways we just don't really care about, or don't mind ignoring for now. So let's create a group homomorphism that will map all these actions to the identity action in a new group. The image of this homomorphism will be a group much like the first, except that it will ignore all the effects that come from those actions that we're ignoring - just what we wanted! This new group is called the quotient group.\n\n\n#The basic idea\nLet's say we have a [group](https://arbital.com/p/3gd). Maybe the group is kinda large and unwieldy, and we want to find an easier way to think about it. Or maybe we just want to focus on a certain aspect of the group. Some of the actions will change things in ways we just don't really care about, or don't mind ignoring for now. So let's create a [group homomorphism](https://arbital.com/p/47t) that will map all these actions to the identity action in a new group. The image of this homomorphism will be a group much like the first, except that it will ignore all the effects that come from those actions that we're ignoring - just what we wanted! This new group is called the quotient group. \n\n#Definition\n\nWe start with our group $G$. The actions we want to ignore form a group $N$, which must be a [normal subgroup](https://arbital.com/p/4h6) of $G$. The quotient group is then called $G/N$, and has a canonical homomorphism $\\phi: G \\rightarrow G/N$ which maps $g \\in G$ to the [coset](https://arbital.com/p/4j4) $gN$. \n\n## The divisor group\nIn the definition, we require the divisor $N$, to be a normal subgroup of $G$. Why? Well first, let's see why requiring $N$ to be a group makes sense. Remember that $N$ has the actions whose effects we want to ignore. So it makes sense that it should contain the identity action, which has no effect. It also is reasonable that it would be closed under the group operation - doing two things we don't care about shouldn't change anything we care about. Together, these two properties imply it is a [subgroup](https://arbital.com/p/subgroup): $N \\le G$. \n\nA subgroup is great, but it isn't quite good enough by itself to work here. That's because we want the quotient group to preserve the overall structure of the group, i.e. it should preserve the group multiplication. In other words, there needs to be a [group homomorphism](https://arbital.com/p/47t) $\\phi$ from $G$ to $G/N$. Since $N$ is the subgroup of things we want to ignore, all its actions should get mapped to the identity action under this homomorphism. That means it's the [kernel](https://arbital.com/p/49y) of the homomorphism $\\phi$, which means it's a [normal subgroup](https://arbital.com/p/4h6): $N \\trianglelefteq G$. \n\n## Cosets\nWhat exactly are the elements of the new group? They are [equivalence classes](https://arbital.com/p/equivalence_class) of actions, the sets $gN = \\{gn : n \\in N\\}$ where $g \\in G$, also known as a [coset](https://arbital.com/p/4j4). 
The identity element is the set $N$ itself. Multiplication is defined by $g_1N \\cdot g_2N = (g_1g_2)N$. \n\n# Generalizes the idea of a quotient\nWhat gives a quotient group the right to call itself a quotient? If $G$ and $N$ both have finite order, then $|G/N| = |G|/|N|$, which can be proved by the fact that $G/N$ consists of the cosets of $N$ in $G$, and that these [cosets are the same size](https://arbital.com/p/4j8), and [partition](https://arbital.com/p/4j5) $G$. \n\n\n\n#Example\nSuppose you have a collection of objects, and you need to split them into two equal groups. So you are trying to determine under what circumstances changing the number of objects will affect this property. You notice that changing the size of the collection by certain numbers such as 0, 2, 4, 24, and -6 doesn't affect this property. \n\nThe set of different size changes can be modeled as the additive group of integers $\\mathbb Z$. The changes that don't affect this property also form a group: $2\\mathbb Z = \\{2n : n\\in \\mathbb Z\\}$. Exercise: verify that this is a normal subgroup of $\\mathbb Z$. \n\nThis subgroup gives us two cosets: $0 + 2\\mathbb Z$ and $1 + 2\\mathbb Z$ (remember that $+$ is the group operation in this example), which are the elements of our quotient group. We will give them their conventional names: $\\text{even}$ and $\\text{odd}$, and we can apply the coset multiplication rule to see that $\\text{even}+ \\text{even} = \\text{even}$, $\\text{even} + \\text{odd} = \\text{odd}$, and $\\text{odd} + \\text{odd} = \\text{even}$. \n\nInstead of thinking about specific numbers, and how they will change our ability to split our collection of objects into two equal groups, we now have reduced the problem to its essence. Only the parity matters, and it follows the simple rules of the quotient group we discovered. \n\n#See also \n\n - [Lagrange's theorem](https://arbital.com/p/4jn).\n - [The first isomorphism theorem](https://arbital.com/p/first_isomorphism_theorem).", "date_published": "2016-07-01T01:13:15Z", "authors": ["Eric Rogstad", "Mark Chimes", "Adele Lopez"], "summaries": ["Given a [group](https://arbital.com/p/-3gd) $G$ with operation $\\bullet$ and a special kind of [subgroup](https://arbital.com/p/-subgroup) $N \\leq G$ called the \"[normal subgroup](https://arbital.com/p/-4h6)\", there is a way of \"dividing\" $G$ by $N$, written $G/N$. Usually this is defined so that each [element](https://arbital.com/p/-element_mathematics) of $G/N$ is a [subset](https://arbital.com/p/-subset) of $G$. In fact, each of them is an [equivalence class](https://arbital.com/p/-equivalence_class) of elements in $G$, so called because we think of everything in one equivalence class as being equivalent. One of the equivalence classes is $N$, and each of the other equivalence classes can be seen as $N$ \"shifted around\" inside $G$. Because they are equivalence classes, each element of $G$ only occurs in exactly one of them.\n\nThis collection $G/N$ of equivalence classes has a [group structure](https://arbital.com/p/-group_structure) based on the structure on $G$. Write the [operation](https://arbital.com/p/-group_operation) as $\\circ$. Say $A$ and $B$ are two equivalence classes in $G/N$. Then $A \\circ B$ is defined by taking an element $a \\in A$ and $b \\in B$, multiplying them as $a \\bullet b$ and checking in which equivalence class this product ends up. 
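To make the coset picture concrete, here is a small Python sketch (my own illustrative construction, not code from the original page). It takes the cyclic group $\mathbb{Z}_6$ under addition mod 6 and the normal subgroup $N = \{0, 3\}$, builds the cosets $gN$, and checks that they partition the group into equal-size pieces with $|G/N| = |G|/|N|$, and that coset multiplication is well defined.

```python
from itertools import product

G = list(range(6))          # the cyclic group Z_6, addition mod 6
N = {0, 3}                  # a normal subgroup of order 2

def op(a, b):
    """The group operation."""
    return (a + b) % 6

def coset(g):
    """The coset gN = {g op n : n in N}."""
    return frozenset(op(g, n) for n in N)

cosets = {coset(g) for g in G}
print(sorted(sorted(c) for c in cosets))       # [[0, 3], [1, 4], [2, 5]]

# The cosets partition G into buckets of size |N|, so |G/N| = |G| / |N|.
assert sorted(x for c in cosets for x in c) == G
assert all(len(c) == len(N) for c in cosets)
assert len(cosets) == len(G) // len(N)

# Coset multiplication aN * bN = (ab)N is well defined: the resulting coset
# does not depend on which representatives a and b we pick.
for A, B in product(cosets, repeat=2):
    assert len({coset(op(a, b)) for a in A for b in B}) == 1

print("cosets partition the group and multiply consistently")
```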
\n\nThere are some things to check, namely that the equivalence class in which we end up is the same regardless of which elements in $A$ and $B$ we happen to pick. This is guaranteed by the fact that $N$ is normal in $G$.\n\nThere is also a [group homomorphism](https://arbital.com/p/-47t) $\\phi$ from $G$ to $G/N$ that \"collapses\" the elements by sending each element in $G$ to the equivalence class in which it lies. This is called collapsing because lots of elements in $G$ are sent to the same element in $G/N$, so we can see $G/N$ as a collapsed version of $G$."], "tags": [], "alias": "4tq"} {"id": "2085488f2f3138bdd382189304492b3b", "title": "Exponential", "url": "https://arbital.com/p/exponential", "source": "arbital", "source_type": "text", "text": "An exponential is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) that can be represented by some [https://arbital.com/p/-constant](https://arbital.com/p/-constant) taken to the [power](https://arbital.com/p/power_mathematics) of a [https://arbital.com/p/-variable](https://arbital.com/p/-variable). The name comes from the fact that the variable is the *exponent* of the expression.\n\n\n## Exponential Growth\n\nExponentials are most useful in describing growth patterns where the growth rate is [proportional](https://arbital.com/p/4w3) to the amount of the thing that's growing. They can be represented by the formula: $f(x) = c \\times a^x$, where $c$ is the starting value and $a$ is the growth factor.\n\nThe classic example of exponential growth is [https://arbital.com/p/-compound_interest](https://arbital.com/p/-compound_interest). If you have \\$100 in a bank account that gives you 2% interest every year, then every year your money is multiplied by $1.02$. This means you can represent your account balance as $f(x) = 100 \\times 1.02^x$, where $x$ is the number of years your money has been in the account.\n\nAnother example is a dividing cell. If one cell is placed into an infinite culture and splits once every hour, the number of cells in the culture after $x$ hours is $f(x) = 1 \\times 2^x$ (assuming none of the cells die).\n\n\n### Recursive definition\n\nWe mentioned earlier that in an exponential function, the growth rate is proportional to the amount of the thing that's growing. In the compound interest example, we can write each value in terms of the previous value: $f(x) = f(x-1) \\times 1.02$. Therefore, the amount of growth at every step can be represented as $\\Delta f(x) = f(x+1) - f(x) = 0.02 \\times f(x)$.\n\nThis makes exponential growth a *memoryless growth function*, as the growth rate depends only on current information. Compare this to [https://arbital.com/p/-simple_interest](https://arbital.com/p/-simple_interest), where the interest only grows at a percentage of the initial value. Then we would have to write: $f(x) = f(x-1) + 0.02 \\times f(0)$, and because $f(0)$ is constant while $f(x)$ continues to grow, we cannot express the growth rate in terms of only current information — we have to keep \"in memory\" the initial balance to calculate the growth rate.", "date_published": "2016-07-04T05:34:11Z", "authors": ["Eric Bruylant", "Nate Soares", "Joe Zeng"], "summaries": ["The exponent base $b$ of a number $x$, written $b^x,$ is what you get when you multiply 1 by $b$, $x$ times. For example, the exponent base 10 of 3 is $10^3$ and pronounced \"10 to the power 3\" is 1000, because $10 \\cdot 10 \\cdot 10 = 1000$. 
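To make the compound-interest example above concrete, here is a minimal Python sketch (the function names are illustrative choices of mine). It checks that the closed form $f(x) = 100 \times 1.02^x$ and the recursive, "memoryless" rule $f(x) = f(x-1) \times 1.02$ give the same balances, and that each year's growth is 2% of the *current* balance.

```python
def balance_closed_form(x, start=100.0, growth=1.02):
    """f(x) = start * growth**x."""
    return start * growth ** x

def balance_recursive(x, start=100.0, growth=1.02):
    """f(x) = f(x-1) * growth: growth depends only on the current balance."""
    balance = start
    for _ in range(x):
        balance *= growth
    return balance

for year in range(11):
    closed = balance_closed_form(year)
    assert abs(closed - balance_recursive(year)) < 1e-9
    # The year's growth is 2% of the current balance, not of the initial deposit.
    assert abs((balance_closed_form(year + 1) - closed) - 0.02 * closed) < 1e-9

print(round(balance_closed_form(10), 2))   # about 121.9: $100 after ten years at 2%
```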
Similarly, $2^4=16,$ because $2 \\cdot 2 \\cdot 2 \\cdot 2 = 16.$\n\nThe input $x$ may be fractional. For example, $10^{1/2}$ is the number you get when you multiply 1 by 10 half a time, where \"multiplying by 10 half a time\" means multiplying by a number $n$ such that if you multiplied by $n$ twice, you would have multiplied by 10 once. In this case, $n \\approx 3.16,$ because then $n \\cdot n \\approx 10.$"], "tags": ["Start", "Needs lenses"], "alias": "4ts"} {"id": "226ed4f2608c42dfba086266e12ad6a9", "title": "Most complex things are not very compressible", "url": "https://arbital.com/p/most_complexity_incompressible", "source": "arbital", "source_type": "text", "text": "Although the [halting problem](https://arbital.com/p/46h) means we can't *prove* it doesn't happen, it would nonetheless be *extremely surprising* if some 100-state Turing machine turned out to print the exact text of Shakespeare's *Romeo and Juliet.* Unless something was specifically generated by a simple algorithm, the Vast supermajority of data structures that *look* like they have high [algorithmic complexity](https://arbital.com/p/5v) actually *do* have high algorithmic complexity. Since there are at most $2^{101}$ programs that can be specified with at most 100 bits (in any particular language), we can't fit all the 1000-bit data structures into all the 100-bit programs. While *Romeo and Juliet* is certainly highly compressible, relative to most random bitstrings of the same length, it would be shocking for it to compress *all the way down* to a 100-state Turing machine. There just aren't enough 100-state Turing machines for one of their outputs to be *Romeo and Juliet*. Similarly, if you start with a 10 kilobyte text file, and 7zip compresses it down to 2 kilobytes, no amount of time spent trying to further compress the file using other compression programs will ever get that file down to 1 byte. For any given compressor there's at most 256 starting files that can ever be compressed down to 1 byte, and your 10-kilobyte text file almost certainly isn't one of them.\n\nThis takes on defensive importance with respect to refuting the probability-theoretic fallacy, \"Oh, sure, Occam's Razor seems to say that this proposition is complicated. But how can you be sure that this apparently complex proposition wouldn't turn out to be generated by some very simple mechanism?\" If we consider a [partition](https://arbital.com/p/1rd) of 10,000 possible propositions, collectively having a 0.01% probability on average, then all the arguments in the world for why various propositions might have unexpectedly high probability, must still add up to an average probability of 0.01%. It can't be the case that after considering that proposition 1 might have secretly high probability, and considering that proposition 2 might have secretly high probability, and so on, we end up assigning 5% probability to each proposition, because that would be a total probability of 500. 
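The counting argument above can be made concrete with a short Python sketch (my own illustration). It counts the programs of at most 100 bits, compares that to the number of 1000-bit strings, and, as a practical aside using the standard `zlib` module, shows that typical random data does not compress while highly patterned data does.

```python
import os
import zlib

# Counting argument: at most 2^101 - 1 programs have length <= 100 bits, but
# there are 2^1000 distinct 1000-bit strings, so only a vanishing fraction of
# them (at most about 2^-899) can compress that far.
programs = sum(2 ** k for k in range(101))      # 2^101 - 1
strings = 2 ** 1000
print(strings // programs > 2 ** 890)           # True: strings vastly outnumber programs

# Practical illustration: random bytes are essentially incompressible,
# while repetitive data compresses enormously.
random_data = os.urandom(10_000)
patterned_data = b"abcdefgh" * 1_250            # also 10,000 bytes

print(len(zlib.compress(random_data)))          # roughly 10,000 bytes or slightly more
print(len(zlib.compress(patterned_data)))       # a few dozen bytes
```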
If we assign prior probabilities using an algorithmic-complexity Occam prior as in [Solomonoff induction](https://arbital.com/p/11w), then the observation that \"most apparently complex things can't be further compressed into an amazingly simple Turing machine\", is the same observation as that \"most apparently Occam-penalized propositions can't turn out to be simpler than they look\" or \"most apparently subjectively improbable things can't turn out to have unseen clever arguments that would validly make them more subjectively probable\".", "date_published": "2016-06-27T00:06:10Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Start"], "alias": "4v5"} {"id": "9a184632db24b13bb41959f02aadda5c", "title": "Group orbit", "url": "https://arbital.com/p/group_orbit", "source": "arbital", "source_type": "text", "text": "When we have a [group](https://arbital.com/p/3gd) [acting](https://arbital.com/p/3t9) on a set, we are often interested in how the group acts on a particular [element](https://arbital.com/p/group_element). One natural way to do this is to look at the set of all the elements that different group actions will take the starting element to. This is called the orbit. \n\n#Definition\nLet $X$ be a set, with element $x \in X$, and let $G$ be a group acting on $X$. Then the orbit of $x$ is $Gx = \{gx : g \in G\}$. \n\n#Properties\nThe set $X$ is partitioned by group orbits - each element defines its own orbit, and two orbits containing the same element must be the same, because every action has an inverse action. This gives the fact that [cosets partition a group](https://arbital.com/p/4j5) as the special case where the acting group is a subgroup $H$ of a larger group and $X$ is that larger group, with $H$ acting by multiplication: the orbits are then exactly the cosets of $H$. \n#See Also\n\n - [The orbit-stabiliser theorem](https://arbital.com/p/4l8).", "date_published": "2016-06-28T21:01:45Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Adele Lopez"], "summaries": [], "tags": ["Formal definition", "Needs clickbait"], "alias": "4v8"} {"id": "f0b42b591f7187b0aa1a5c857ae7cff8", "title": "Lagrange theorem on subgroup size: Intuitive version", "url": "https://arbital.com/p/lagrange_theorem_on_subgroup_size_intuitive", "source": "arbital", "source_type": "text", "text": "Given a finite [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $G$, it may have many [subgroups](https://arbital.com/p/576).\nSo far, we know almost nothing about those subgroups; it would be great if we had some way of restricting them.\n\nAn example of such a restriction, which we do already know, is that a subgroup $H$ of $G$ has to have [size](https://arbital.com/p/3gg) less than or equal to the size of $G$ itself.\nThis is because $H$ is contained in $G$, and if the set $X$ is contained in the set $Y$ then the size of $X$ is less than or equal to the size of $Y$.\n(This would have to be true for any reasonable definition of \"size\"; the [usual definition](https://arbital.com/p/4w5) certainly has this property.)\n\nLagrange's Theorem gives us a much more powerful restriction: not only is the size $|H|$ of $H$ less than or equal to $|G|$, but in fact $|H|$ divides $|G|$.\n\n%%hidden(Example: subgroups of the cyclic group on six elements):\n*A priori*, all we know about the subgroups of the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_6$ of order $6$ is that they are of order $1, 2, 3, 4, 5$ or $6$.\n\nLagrange's Theorem tells us that they can only be of order $1, 2, 3$ or $6$: there are no subgroups of order $4$ or $5$.\nLagrange tells us nothing about whether there *are* subgroups of size $1,2,3$ or $6$: 
only that if we are given a subgroup, then it is of one of those sizes.\n\nIn fact, as an aside, there are indeed subgroups of sizes $1,2,3,6$:\n\n- the subgroup containing only the identity is of order $1$\n- the \"improper\" subgroup $C_6$ is of order $6$\n- subgroups of size $2$ and $3$ are guaranteed by [Cauchy's theorem](https://arbital.com/p/4l6).\n\n%%\n\n# Proof\n\nIn order to show that $|H|$ divides $|G|$, we would be done if we could divide the elements of $G$ up into separate buckets of size $|H|$.\n\nThere is a fairly obvious place to start: we already have one bucket of size $|H|$, namely $H$ itself (which consists of some elements of $G$).\nCan we perhaps use this to create more buckets of size $|H|$?\n\nFor motivation: if we think of $H$ as being a collection of symmetries (which we can do, by [Cayley's Theorem](https://arbital.com/p/49b) which states that all groups may be viewed as collections of symmetries), then we can create more symmetries by \"tacking on elements of $G$\".\n\nFormally, let $g$ be an element of $G$, and consider $gH = \\{ g h : h \\in H \\}$.\n\nExercise: every element of $G$ does have one of these buckets $gH$ in which it lies.\n%%hidden(Show solution):\nThe element $g$ of $G$ is contained in the bucket $gH$, because the identity $e$ is contained in $H$ and so $ge$ is in $gH$; but $ge = g$.\n%%\n\nExercise: $gH$ is a set of size $|H|$. %%note:More formally put, [https://arbital.com/p/-4j8](https://arbital.com/p/-4j8).%%\n%%hidden(Show solution):\nIn order to show that $gH$ has size $|H|$, it is enough to match up the elements of $gH$ [bijectively](https://arbital.com/p/499) with the elements of $|H|$.\n\nWe can do this with the [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) $H \\to gH$ taking $h \\in H$ and producing $gh$.\nThis has an [inverse](https://arbital.com/p/4sn): the function $gH \\to H$ which is given by pre-multiplying by $g^{-1}$, so that $gx \\mapsto g^{-1} g x = x$.\n%%\n\nNow, are all these buckets separate? Do any of them overlap?\n\nExercise: if $x \\in rH$ and $x \\in sH$ then $rH = sH$. That is, if any two buckets intersect then they are the same bucket. %%note:More formally put, [https://arbital.com/p/4j5](https://arbital.com/p/4j5).%%\n%%hidden(Show solution):\nSuppose $x \\in rH$ and $x \\in sH$.\n\nThen $x = r h_1$ and $x = s h_2$, some $h_1, h_2 \\in H$.\n\nThat is, $r h_1 = s h_2$, so $s^{-1} r h_1 = h_2$.\nSo $s^{-1} r = h_2 h_1^{-1}$, so $s^{-1} r$ is in $H$ by closure of $H$.\n\nBy taking inverses, $r^{-1} s$ is in $H$.\n\nBut that means $\\{ s h : h \\in H \\}$ and $\\{ r h : h \\in H\\}$ are equal.\nIndeed, we show that each is contained in the other.\n\n- if $a$ is in the right-hand side, then $a = rh$ for some $h$. Then $s^{-1} a = s^{-1} r h$; but $s^{-1} r$ is in $H$, so $s^{-1} r h$ is in $H$, and so $s^{-1} a$ is in $H$.\nTherefore $a \\in s H$, so $a$ is in the left-hand side.\n- if $a$ is in the left-hand side, then $a = sh$ for some $h$. 
Then $r^{-1} a = r^{-1} s h$; but $r^{-1} s$ is in $H$, so $r^{-1} s h$ is in $H$, and so $r^{-1} a$ is in $H$.\nTherefore $a \\in rH$, so $a$ is in the right-hand side.\n%%\n\nWe have shown that the \"[cosets](https://arbital.com/p/4j4)\" $gH$ are all completely disjoint and are all the same size, and that every element lies in a bucket; this completes the proof.", "date_published": "2016-08-27T12:11:30Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["Lagrange's Theorem gives us a restriction on the size of a subgroup of a finite group: namely, subgroups have size dividing the parent group."], "tags": [], "alias": "4vz"} {"id": "ef998743a16cd66299c16790b2831c7e", "title": "Proportion", "url": "https://arbital.com/p/proportion", "source": "arbital", "source_type": "text", "text": "A **proportion** is a representation of one value as a [https://arbital.com/p/-fraction](https://arbital.com/p/-fraction) or [https://arbital.com/p/-multiple](https://arbital.com/p/-multiple) of another value. It is used to provide a sense of relative magnitude.\n\nIf some value $a$ is proportional to another value $b$, it means that $a$ can be expressed as $c \\times b$, where $c$ is some constant, also known as the *constant of proportionality* (or sometimes just the *proportion*) of $a$ to $b$.\n\nFor example, no matter how large a domino is, it is always twice as long as it is wide. Then, the proportion of the length compared to the width can be said to be $2$, and we can write $l = 2w$ where $w$ is the width and $l$ is the length.\n\n\n## Notation\n\nTo write that $a$ is proportional to $b$, the notation $a \\propto b$ is used. For example, the surface area of an object of a specific shape is proportional to the square of its length, so we write $A \\propto L^2$ where $A$ is area and $L$ is length.\n\n\n## Percentages\n\nPercentages are a way of describing proportions relative to the number 100 to make them easier to understand. When people refer to a \"percentage basis\" for calculating something, they are specifically talking about this type of proportion.\n\nTo get a percentage from a proportion, simply multiply it by 100. In our domino example, we multiply $2$ by $100$ to find that the length is equal to $200\\%$ of the width.\n\nPercentages only work with dimensionless proportions — that is, where the constant of proportionality has no [units](https://arbital.com/p/unit). For example, if a car requires $6.7$ litres of gas to travel $100$ kilometres, it makes no sense to say that the fuel efficiency of the car (which is a constant of proportionality for fuel used versus distance travelled) is $6.7\\%$ litres per kilometre. However, when comparing the fuel efficiencies of two cars, if one car has an efficiency of $5.2$ L/100km and another has one of $6.8$ L/100km, it makes sense to say that the first car is $131\\%$ as efficient as the second.\n\n\n## Common examples of proportions\n\nThe [https://arbital.com/p/-circumference](https://arbital.com/p/-circumference) of a [https://arbital.com/p/-circle](https://arbital.com/p/-circle) is proportional to its [https://arbital.com/p/-diameter](https://arbital.com/p/-diameter). We write $C \\propto d$, and the constant of proportionality is $\\pi = 3.14159265\\ldots$.\n\nThe [https://arbital.com/p/-area](https://arbital.com/p/-area) of a circle is proportional to the square of its radius. 
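(Before the list of proportionality examples continues, here is a quick numerical check of the percentage calculations above, as a small Python sketch; the helper names are my own.)

```python
def constant_of_proportionality(a, b):
    """If a is proportional to b, with a = c * b, recover c from one (a, b) pair."""
    return a / b

def as_percentage(proportion):
    """A dimensionless proportion expressed relative to 100."""
    return proportion * 100

# The domino: length is twice the width, so length = 200% of width.
print(as_percentage(constant_of_proportionality(10.0, 5.0)))   # 200.0

# Fuel efficiency: a 5.2 L/100km car versus a 6.8 L/100km car.
# Efficiency is distance per litre, so the ratio flips.
car_a, car_b = 5.2, 6.8                      # litres per 100 km
relative_efficiency = car_b / car_a          # how much more efficient car A is
print(round(as_percentage(relative_efficiency)))   # about 131
```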
We write $A \\propto r^2$, and the constant of proportionality is also $\\pi$.\n\nThe rate of growth of an [https://arbital.com/p/-4ts](https://arbital.com/p/-4ts) function is proportional to the current value of the function. We write $\\frac{df}{dt} \\propto f$, or $\\Delta f \\propto f$ if considering discrete values.\n\nThe gravitational potential energy of an object in a constant gravitational field is proportional to its height relative to some zero point. We write $E \\propto h$, and the constant of proportionality is the weight of the object, its mass multiplied by the gravitational acceleration in the field.\n\nThe amount of current flowing through a specific wire is proportional to the potential difference, or voltage, between the endpoints of the wire. We write $I \\propto V$, and the constant of proportionality is the resistance of that wire.\n\nIn an ideal gas, the pressure of the gas is proportional to its temperature, if the volume and number of particles in the gas is held constant. This is one property of the ideal gas law in physics. We write $P \\propto T$.", "date_published": "2016-07-04T17:39:25Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Needs image", "C-Class"], "alias": "4w3"} {"id": "bb01f09aadb748a086486b8b92fcdda7", "title": "Cardinality", "url": "https://arbital.com/p/cardinality", "source": "arbital", "source_type": "text", "text": "summary: If $A$ is a finite set then the cardinality of $A$, denoted $|A|$, is the number of elements $A$ contains. When $|A| = n$, we say that $A$ is a set of cardinality $n$. There exists a [bijection](https://arbital.com/p/499) from any finite set of cardinality $n$ to the set $\\{0, ..., (n-1)\\}$ containing the first $n$ natural numbers.\n\nTwo infinite sets have the same cardinality if there exists a bijection between them. Any set in bijective correspondence with [$\\mathbb N$](https://arbital.com/p/45h) is called __countably infinite__, while any infinite set that is not in bijective correspondence with $\\mathbb N$ is call **[uncountably infinite](https://arbital.com/p/2w0)**. All countably infinite sets have the same cardinality, whereas there are multiple distinct uncountably infinite cardinalities.\n\nsummary(technical): The cardinality (or size) $|X|$ of a set $X$ is the number of elements in $X.$ For example, letting $X = \\{a, b, c, d\\}, |X|=4.$\n\n\n\nThe **cardinality** of a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) is a formalization of the \"number of elements\" in the set.\n\nSet cardinality is an [https://arbital.com/p/-53y](https://arbital.com/p/-53y). Two sets have the same cardinality if (and only if) there exists a [bijection](https://arbital.com/p/499) between them.\n\n## Definition of equivalence classes\n\n### Finite sets\n\nA set $S$ has a cardinality of a [https://arbital.com/p/-45h](https://arbital.com/p/-45h) $n$ if there exists a bijection between $S$ and the set of natural numbers from $1$ to $n$. 
For example, the set $\\{9, 15, 12, 20\\}$ has a bijection with $\\{1, 2, 3, 4\\}$, which is simply mapping the $m$th element in the first set to $m$; therefore it has a cardinality of $4$.\n\nWe can see that this equivalence class is [well-defined](https://arbital.com/p/) — if there exist two sets $S$ and $T$, and there exist bijective functions $f : S \\to \\{1, 2, 3, \\ldots, n\\}$ and $g : \\{1, 2, 3, \\ldots, n\\} \\to T$, then $g \\circ f$ is a bijection between $S$ and $T$, and so the two sets also have the same cardinality as each other, which is $n$.\n\nThe cardinality of a finite set is always a natural number, never a fraction or decimal.\n\n### Infinite sets\n\nAssuming the [axiom of choice](https://arbital.com/p/69b), the cardinalities of infinite sets are represented by the [https://arbital.com/p/aleph_numbers](https://arbital.com/p/aleph_numbers). A set has a cardinality of $\\aleph_0$ if there exists a bijection between that set and the set of *all* natural numbers. This particular class of sets is also called the class of [https://arbital.com/p/-countably_infinite_sets](https://arbital.com/p/-countably_infinite_sets).\n\nLarger infinities (which are [uncountable](https://arbital.com/p/2w0)) are represented by higher Aleph numbers, which are $\\aleph_1, \\aleph_2, \\aleph_3,$ and so on through the [ordinals](https://arbital.com/p/ordinal).\n\n**In the absence of the Axiom of Choice**\n\nWithout the axiom of choice, not every set may be [well-ordered](https://arbital.com/p/55r), so not every set bijects with an [https://arbital.com/p/-ordinal](https://arbital.com/p/-ordinal), and so not every set bijects with an aleph.\nInstead, we may use the rather cunning [https://arbital.com/p/Scott_trick](https://arbital.com/p/Scott_trick).\n\n%%todo: Examples and exercises (possibly as lenses) %%\n\n%%todo: Split off a more accessible cardinality page that explains the difference between finite, countably infinite, and uncountably infinite cardinalities without mentioning alephs, ordinals, or the axiom of choice.%%", "date_published": "2016-10-05T17:51:17Z", "authors": ["Dylan Hendrickson", "Eric Rogstad", "Patrick Stevens", "Eric Bruylant", "Joe Zeng"], "summaries": ["The **cardinality** of a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) is a formalization of the \"number of [elements](https://arbital.com/p/5xy)\" in the set."], "tags": ["Needs splitting by mastery"], "alias": "4w5"} {"id": "940cef4103c37e7de52bd72add973eee", "title": "Needs clickbait", "url": "https://arbital.com/p/needs_clickbait_meta_tag", "source": "arbital", "source_type": "text", "text": "A meta tag for pages which don't have [clickbait](https://arbital.com/p/597). Clickbait style guidelines:\n\nYes: \"First letter is capital.\" \nYes: \"Pique user's interest in the topic.\" \nYes: \"Most clickbaits are just one sentence telling user what they might find out if they read the page.\" \nYes: \"Can a clickbait ask the reader a leading question?\" \nYes: \"Period or other punctuation at the end.\" \nYes: Omitting clickbait if the title is descriptive enough. \nNo: \"Capitalize Every Word\" \nNo: \"**Markdown**\"", "date_published": "2016-08-19T21:01:16Z", "authors": ["Eric Bruylant", "Jaime Sevilla Molina", "Patrick Stevens"], "summaries": ["This page does not have [clickbait](https://arbital.com/p/597). 
Clickbait style guidelines:\n\nYes: \"First letter is capital.\" \nYes: \"Pique user's interest in the topic.\" \nYes: \"Most clickbaits are just one sentence telling user what they might find out if they read the page.\" \nYes: \"Can a clickbait ask the reader a leading question?\" \nYes: \"Period or other punctuation at the end.\" \nYes: Omitting clickbait if the title is descriptive enough. \nNo: \"Capitalize Every Word\" \nNo: \"**Markdown**\""], "tags": ["Meta tags which request an edit to the page"], "alias": "4w9"} {"id": "49f2161a29c71520b6b4a5105be2af23", "title": "Interpretations of \"probability\"", "url": "https://arbital.com/p/probability_interpretations", "source": "arbital", "source_type": "text", "text": "What does it *mean* to say that a flipped coin has a 50% probability of landing heads?\n\nHistorically, there are two popular types of answers to this question, the \"[frequentist](https://arbital.com/p/frequentist_probability)\" and \"[subjective](https://arbital.com/p/4vr)\" (aka \"[Bayesian](https://arbital.com/p/1r8)\") answers, which give rise to [radically different approaches to experimental statistics](https://arbital.com/p/4xx). There is also a third \"[propensity](https://arbital.com/p/propensity)\" viewpoint which is largely discredited (assuming the coin is deterministic). Roughly, the three approaches answer the above question as follows:\n\n- __The propensity interpretation:__ Some probabilities are just out there in the world. It's a brute fact about coins that they come up heads half the time. When we flip a coin, it has a fundamental *propensity* of 0.5 for the coin to show heads. When we say the coin has a 50% probability of being heads, we're talking directly about this propensity.\n- __The frequentist interpretation:__ When we say the coin has a 50% probability of being heads after this flip, we mean that there's a class of events similar to this coin flip, and across that class, coins come up heads about half the time. That is, the _frequency_ of the coin coming up heads is 50% inside the event class, which might be \"all other times this particular coin has been tossed\" or \"all times that a similar coin has been tossed\", and so on.\n- __The subjective interpretation:__ Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it's already landed either heads or tails. The fact that I don't know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim \"I think this coin is heads with probability 50%\" is an _expression of my own ignorance,_ and 50% probability means that I'd bet at 1 : 1 odds (or better) that the coin came up heads.\n\n\nFor a visualization of the differences between these three viewpoints, see [https://arbital.com/p/4yj](https://arbital.com/p/4yj). For examples of the difference, see [https://arbital.com/p/4yn](https://arbital.com/p/4yn). See also the [Stanford Encyclopedia of Philosophy article on interpretations of probability](https://arbital.com/p/http://plato.stanford.edu/entries/probability-interpret/).\n\nThe propensity view is perhaps the most intuitive view, as for many people, it just feels like the coin is intrinsically random. However, this view is difficult to reconcile with the idea that once we've flipped the coin, it has already landed heads or tails. 
If the event in question is decided [deterministically](https://arbital.com/p/determinism), the propensity view can be seen as an instance of the [https://arbital.com/p/-4yk](https://arbital.com/p/-4yk): When we mentally consider the coin flip, it feels 50% likely to be heads, so we find it very easy to imagine a *world* in which the coin is _fundamentally_ 50%-heads-ish. But that feeling is actually a fact about _us,_ not a fact about the coin; and the coin has no physical 0.5-heads-propensity hidden in there somewhere — it's just a coin.\n\nThe other two interpretations are both self-consistent, and give rise to pragmatically different statistical techniques, and there has been much debate as to which is preferable. The subjective interpretation is more generally applicable, as it allows one to assign probabilities (interpreted as betting odds) to one-off events.\n\n# Frequentism vs subjectivism\n\nAs an example of the difference between frequentism and subjectivism, consider the question: \"What is the probability that Hillary Clinton will win the 2016 US presidential election?\", as analyzed in the summer of 2016.\n\nA stereotypical (straw) frequentist would say, \"The 2016 presidential election only happens once. We can't *observe* a frequency with which Clinton wins presidential elections. So we can't do any statistics or assign any probabilities here.\"\n\nA stereotypical subjectivist would say: \"Well, prediction markets tend to be pretty [well-calibrated](https://arbital.com/p/well_calibrated) about this sort of thing, in the sense that when prediction markets assign 20% probability to an event, it happens around 1 time in 5. And the prediction markets are currently betting on Hillary at about 3 : 1 odds. Thus, I'm comfortable saying she has about a 75% chance of winning. If someone offered me 20 : 1 odds _against_ Clinton — they get \\$1 if she loses, I get \\$20 if she wins — then I'd take the bet. I suppose you could refuse to take that bet on the grounds that you Just Can't Talk About Probabilities of One-off Events, but then you'd be pointlessly passing up a really good bet.\"\n\nA stereotypical (non-straw) frequentist would reply: \"I'd take that bet too, of course. But my taking that bet *is not based on rigorous epistemology,* and we shouldn't allow that sort of thinking in experimental science and other important venues. You can do subjective reasoning about probabilities when making bets, but we should exclude subjective reasoning in our scientific journals, and that's what frequentist statistics is designed for. Your paper should not conclude \"and therefore, having observed thus-and-such data about carbon dioxide levels, I'd personally bet at 9 : 1 odds that anthropogenic global warming is real,\" because you can't build scientific consensus on opinions.\"\n\n...and then it starts getting complicated. 
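The betting language in this exchange can be made concrete with a short Python sketch (my own illustration): converting between odds and probabilities, and computing the expected value of the hypothetical 20 : 1 bet for someone who, like the prediction market, assigns Clinton a 75% chance.

```python
def odds_to_probability(for_, against):
    """3 : 1 odds in favour correspond to probability 3 / (3 + 1) = 0.75."""
    return for_ / (for_ + against)

def expected_profit(p_win, payout_if_win, loss_if_lose):
    """Average dollars gained per bet, given a subjective win probability."""
    return p_win * payout_if_win - (1 - p_win) * loss_if_lose

p_clinton = odds_to_probability(3, 1)       # prediction market trading at 3 : 1
print(p_clinton)                            # 0.75

# The offered bet: receive $20 if Clinton wins, pay $1 if she loses.
print(expected_profit(p_clinton, 20, 1))    # 14.75, clearly worth taking at 75%
```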
The subjectivist responds \"First of all, I agree you shouldn't put posterior odds into papers, and second of all, it's not like your method is truly objective — the choice of \"similar events\" is arbitrary, abusable, and has given rise to [p-hacking](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging) and the [replication crisis](https://arbital.com/p/https://en.wikipedia.org/wiki/Replication_crisis).\" The frequentists say \"well your choice of prior is even more subjective, and I'd like to see you do better in an environment where peer pressure pushes people to abuse statistics and exaggerate their results,\" and then [down the rabbit hole we go](https://arbital.com/p/4xx).\n\nThe subjectivist interpretation of probability is common among artificial intelligence researchers (who often design computer systems that manipulate subjective probability distributions), wall street traders (who need to be able to make bets even in relatively unique situations), and common intuition (where people feel like they can say there's a 30% chance of rain tomorrow without worrying about the fact that tomorrow only happens once). Nevertheless, the frequentist interpretation is commonly taught in introductory statistics classes, and is the gold standard for most scientific journals.\n\nA common frequentist stance is that it is virtuous to have a large toolbox of statistical tools at your disposal. Subjectivist tools have their place in that toolbox, but they don't deserve any particular primacy (and they aren't generally accepted when it comes time to publish in a scientific journal).\n\nAn aggressive subjectivist stance is that frequentists have invented some interesting tools, and many of them are useful, but that refusing to consider subjective probabilities is toxic. Frequentist statistics were invented in a (failed) attempt to keep subjectivity out of science in a time before humanity really understood the laws of probability theory. Now we have [theorems](https://arbital.com/p/1lz) about how to manage subjective probabilities correctly, and how to factor personal beliefs out from the objective evidence provided by the data, and if you ignore these theorems you'll get in trouble. The frequentist interpretation is broken, and that's why science has p-hacking and a replication crisis even as all the wall-street traders and AI scientists use the Bayesian interpretation. 
This \"let's compromise and agree that everyone's viewpoint is valid\" thing is all well and good, but how much worse do things need to get before we say \"oops\" and start acknowledging the subjective probability interpretation across all fields of science?\n\nThe most common stance among scientists and researchers is much more agnostic, along the lines of \"use whatever statistical techniques work best at the time, and use frequentist techniques when publishing in journals because that's what everyone's been doing for decades upon decades upon decades, and that's what everyone's expecting.\"\n\nSee also [https://arbital.com/p/frequentist_probability](https://arbital.com/p/frequentist_probability), [https://arbital.com/p/4vr](https://arbital.com/p/4vr), and [https://arbital.com/p/4xx](https://arbital.com/p/4xx).\n\n# Which interpretation is most useful?\n\nProbably the subjective interpretation, because it subsumes the propensity and frequentist interpretations as special cases, while being more flexible than both.\n\nWhen the frequentist \"similar event\" class is clear, the subjectivist can take those frequencies (often called [base rates](https://arbital.com/p/base_rate) in this context) into account. But unlike the frequentist, she can also [combine those base rates with other evidence that she's seen](https://arbital.com/p/1lz), and assign probabilities to one-off events, and make money in prediction markets and/or stock markets (when she knows something that the market doesn't).\n\nWhen the laws of physics actually do \"contain uncertainty\", such as when they say that there are multiple different observations you might make next with differing likelihoods (as the [Schrodinger equation](https://arbital.com/p/schrodinger_equation) often will), a subjectivist can combine her propensity-style uncertainty with her personal uncertainty in order to generate her aggregate subjective probabilities. But unlike a propensity theorist, she's not forced to think that _all_ uncertainty is physical uncertainty: She can act like a propensity theorist with respect to Schrodinger-equation-induced uncertainty, while still believing that her uncertainty about a coin that has already been flipped and slapped against her wrist is in her head, rather than in the coin.\n\nThis fully general stance is consistent with the belief that frequentist tools are useful for answering frequentist questions: The fact that you can _personally_ assign probabilities to one-off events (and, e.g., evaluate how good a certain trade is on a prediction market or a stock market) does not mean that tools labeled \"Bayesian\" are always better than tools labeled \"frequentist\". Whatever interpretation of \"probability\" you use, you're encouraged to use whatever statistical tool works best for you at any given time, regardless of what \"camp\" the tool comes from. Don't let the fact that you think it's possible to assign probabilities to one-off events prevent you from using useful frequentist tools!", "date_published": "2016-07-01T05:22:14Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": ["What does it *mean* to say that a fair coin has a 50% probability of landing heads? There are three common views:\n\n- __The propensity interpretation:__ Some probabilities are just out there in the world. 
It's a brute fact about coins that they come up heads half the time; we'll call this the coin's physical \"propensity towards heads.\" When we say the coin has a 50% probability of being heads, we're talking directly about this propensity.\n- __The frequentist interpretation:__ When we say the coin has a 50% probability of being heads after this flip, we mean that there's a class of events similar to this coin flip, and across that class, coins come up heads about half the time. That is, the _frequency_ of the coin coming up heads is 50% inside the event class (which might be \"all other times this particular coin has been tossed\" or \"all times that a similar coin has been tossed\" etc).\n- __The subjective interpretation:__ Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it's already landed either heads or tails. The fact that I don't know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim \"I think this coin is heads with probability 50%\" is an _expression of my own ignorance,_ which means that I'd bet at 1 : 1 odds (or better) that the coin came up heads.\n\nWhen the event in question is deterministic, the propensity view is commonly considered an example of the [https://arbital.com/p/-4yk](https://arbital.com/p/-4yk). The frequentist view is self-consistent, but says that one-off events (such as the probability of it raining where you are _tomorrow in particular_) cannot be assigned \"probabilities\", because there are no objective frequencies to draw upon. The subjectivist view extends the frequentist view, and allows you to also consider probabilities (interpreted as betting odds) on one-off events. [https://arbital.com/p/1bv](https://arbital.com/p/1bv) provides tools for reasoning about subjective probabilities in a rigorous, consistent way."], "tags": [], "alias": "4y9"} {"id": "67a6af1791df94475bf209d4ae57ac14", "title": "A-Class", "url": "https://arbital.com/p/a_class_meta_tag", "source": "arbital", "source_type": "text", "text": "A-Class pages are comprehensive, intuitive, engaging, well-organized, and, where it would be useful, include excellent supporting material (e.g. examples, images, and exercises). You would enjoy learning from it, and send it to your friends. To reach A-Class pages must receive detailed feedback from both the target audience to make sure it's a good experience to learn from and [reviewers](https://arbital.com/p/5ft) with a strong understanding of the topic.\n\nFor higher level audiences asking on [Slack](https://arbital.com/p/4ph) is a good way to get feedback, for [https://arbital.com/p/1r3](https://arbital.com/p/1r3) or [https://arbital.com/p/1r5](https://arbital.com/p/1r5) sharing with your friends (in real life of via social media) is a better bet. To get the reviewers attention ask in #reviews on Slack.\n\nA-Class pages are:\n\n1. Comprehensive and authoritative, covering all areas of the topic without notable omissions or inaccuracies.\n2. Written to an engaging, enjoyable, professional standard.\n3. Well-organized, with useful overall and local structure.\n4. Aimed at an audience, with a level of technicality, jargon, and notation appropriate for them.\n5. Intuitively explained, making use of examples, visual aids, requisites, and anything else which aids reader comprehension.\n6. Very well-[summarized](https://arbital.com/p/1kl), in ways appropriate for all likely audiences.\n\nThis tag should only be added by [reviewers](https://arbital.com/p/5ft). 
\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-08-19T21:47:21Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "4yf"} {"id": "105b7db7c64a5807d4ba91657ea41c5e", "title": "Correspondence visualizations for different interpretations of \"probability\"", "url": "https://arbital.com/p/probability_interpretations_correspondence", "source": "arbital", "source_type": "text", "text": "[Recall](https://arbital.com/p/4y9) that there are three common interpretations of what it means to say that a coin has a 50% probability of landing heads:\n\n- __The propensity interpretation:__ Some probabilities are just out there in the world. It's a brute fact about coins that they come up heads half the time; we'll call this the coin's physical \"propensity towards heads.\" When we say the coin has a 50% probability of being heads, we're talking directly about this propensity.\n- __The frequentist interpretation:__ When we say the coin has a 50% probability of being heads after this flip, we mean that there's a class of events similar to this coin flip, and across that class, coins come up heads about half the time. That is, the _frequency_ of the coin coming up heads is 50% inside the event class (which might be \"all other times this particular coin has been tossed\" or \"all times that a similar coin has been tossed\" etc).\n- __The subjective interpretation:__ Uncertainty is in the mind, not the environment. If I flip a coin and slap it against my wrist, it's already landed either heads or tails. The fact that I don't know whether it landed heads or tails is a fact about me, not a fact about the coin. The claim \"I think this coin is heads with probability 50%\" is an _expression of my own ignorance,_ which means that I'd bet at 1 : 1 odds (or better) that the coin came up heads.\n\nOne way to visualize the difference between these approaches is by visualizing what they say about when a model of the world should count as a good model. If a person's model of the world is definite, then it's easy enough to tell whether or not their model is good or bad: We just check what it says against the facts. For example, if a person's model of the world says \"the tree is 3m tall\", then this model is [correct](https://arbital.com/p/correspondence_theory_of_truth) if (and only if) the tree is 3 meters tall.\n\n![ordinary truth](http://i.imgur.com/5YriFTj.jpg)\n\nDefinite claims in the model are called \"true\" when they correspond to reality, and \"false\" when they don't. If you want to navigate using a map, you had better ensure that the lines drawn on the map correspond to the territory.\n\nBut how do you draw a correspondence between a map and a territory when the map is probabilistic? If your model says that a biased coin has a 70% chance of coming up heads, what's the correspondence between your model and reality? If the coin is actually heads, was the model's claim true? 70% true? 
What would that mean?\n\n![probability truth?](http://i.imgur.com/EjAto4b.jpg)\n\nThe advocate of __propensity__ theory says that it's just a brute fact about the world that the world contains ontologically basic uncertainty. A model which says the coin is 70% likely to land heads is true if and only if the actual physical propensity of the coin is 0.7 in favor of heads.\n\n![propensity correspondence](http://i.imgur.com/0vQamhR.jpg)\n\nThis interpretation is useful when the laws of physics _do_ say that there are multiple different observations you may make next (with different likelihoods), as is sometimes the case (e.g., in quantum physics). However, when the event is deterministic — e.g., when it's a coin that has been tossed and slapped down and is already either heads or tails — then this view is largely regarded as foolish, and an example of the [https://arbital.com/p/-4yk](https://arbital.com/p/-4yk): The coin is just a coin, and has no special internal structure (nor special physical status) that makes it _fundamentally_ contain a little 0.7 somewhere inside it. It's already either heads or tails, and while it may _feel_ like the coin is fundamentally uncertain, that's a feature of your brain, not a feature of the coin.\n\nHow, then, should we draw a correspondence between a probabilistic map and a deterministic territory (in which the coin is already definitely either heads or tails)?\n\nA __frequentist__ draws a correspondence between a single probability-statement in the model, and multiple events in reality. If the map says \"that coin over there is 70% likely to be heads\", and the actual territory contains 10 places where 10 maps say something similar, and in 7 of those 10 cases the coin is heads, then a frequentist says that the claim is true.\n\n![frequentist correspondence](https://i.imgur.com/RaePEL7.png)\n\nThus, the frequentist preserves black-and-white correspondence: The model is either right or wrong, the 70% claim is either true or false. When the map says \"That coin is 30% likely to be tails,\" that (according to a frequentist) means \"look at all the cases similar to this case where my map says the coin is 30% likely to be tails; across all those places in the territory, 3/10ths of them have a tails-coin in them.\" That claim is definitive, given the set of \"similar cases.\"\n\nBy contrast, a __subjectivist__ generalizes the idea of \"correctness\" to allow for shades of gray. They say, \"My uncertainty about the coin is a fact about _me,_ not a fact about the coin; I don't need to point to other 'similar cases' in order to express uncertainty about _this_ case. I know that the world right in front of me is either a heads-world or a tails-world, and I have a [https://arbital.com/p/-probability_distribution](https://arbital.com/p/-probability_distribution) that puts 70% probability on heads.\" They then draw a correspondence between their probability distribution and the world in front of them, and declare that the more probability their model assigns to the correct answer, the better their model is.\n\n![bayesian correspondence](http://i.imgur.com/OWczeTe.jpg)\n\nIf the world _is_ a heads-world, and the probabilistic map assigned 70% probability to \"heads,\" then the subjectivist calls that map \"70% accurate.\" If, across all cases where their map says something has 70% probability, the territory is actually that way 7/10ths of the time, then the Bayesian calls the map \"[https://arbital.com/p/-well_calibrated](https://arbital.com/p/-well_calibrated)\". 
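The notions of "accuracy" and "calibration" used here can be sketched in a few lines of Python (an illustrative scoring of my own, not anything from the original page): a probabilistic map is graded by how much probability it assigned to what actually happened, and it counts as well calibrated if, among all the times it said "70%", the event happened about 70% of the time.

```python
import random

random.seed(0)

def accuracy(prob_assigned_to_heads, world_is_heads):
    """Probability the map put on the world that actually obtains."""
    return prob_assigned_to_heads if world_is_heads else 1 - prob_assigned_to_heads

print(accuracy(0.7, True))    # 0.7, i.e. "70% accurate" in a heads-world
print(accuracy(0.7, False))   # 0.3

# Calibration: across many situations where the map says "70%", roughly
# 7/10ths of them should turn out to be heads-worlds.
outcomes = [random.random() < 0.7 for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))   # close to 0.7, so "70%" was well calibrated
```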
They then seek methods to make their maps more accurate, and better calibrated. They don't see a need to interpret probabilistic maps as making definitive claims; they're happy to interpret them as making estimations that can be graded on a sliding scale of accuracy.\n\n## Debate\n\nIn short, the frequentist interpretation tries to find a way to say the model is definitively \"true\" or \"false\" (by identifying a collection of similar events), whereas the subjectivist interpretation extends the notion of \"correctness\" to allow for shades of gray.\n\nFrequentists sometimes object to the subjectivist interpretation, saying that frequentist correspondence is the only type that has any hope of being truly objective. Under Bayesian correspondence, who can say whether the map should say 70% or 75%, given that the probabilistic claim is not objectively true or false either way? They claim that these subjective assessments of \"partial accuracy\" may be intuitively satisfying, but they have no place in science. Scientific reports ought to be restricted to frequentist statements, which are definitively either true or false, in order to increase the objectivity of science.\n\nSubjectivists reply that the frequentist approach is hardly objective, as it depends entirely on the choice of \"similar cases\". In practice, people can (and do!) [abuse frequentist statistics](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging) by choosing the class of similar cases that makes their result look as impressive as possible (a technique known as \"p-hacking\"). Furthermore, the manipulation of subjective probabilities is subject to the [iron laws](https://arbital.com/p/1lz) of probability theory (which are the [only way to avoid inconsistencies and pathologies](https://arbital.com/p/) when managing your uncertainty about the world), so it's not like subjective probabilities are the wild west or something. Also, science has things to say about situations even when there isn't a huge class of objective frequencies we can observe, and science should let us collect and analyze evidence even then.\n\nFor more on this debate, see [https://arbital.com/p/4xx](https://arbital.com/p/4xx).", "date_published": "2016-07-10T10:58:40Z", "authors": ["Eric Rogstad", "Nate Soares"], "summaries": ["Let's say you have a model which says a particular coin is 70% likely to be heads. How should we assess that model?\n\n- According to the [propensity](https://arbital.com/p/propensity) interpretation, the coin has a fundamental intrinsic comes-up-headsness property, and the model is correct if that property is set to 0.7. 
(This theory is widely considered discredited, and is an example of the [https://arbital.com/p/-4yk](https://arbital.com/p/-4yk)).\n- According to the [frequentist](https://arbital.com/p/frequentist_probability) interpretation, the model is saying that there are a whole bunch of different places where some similar model is saying the same thing as this one (i.e., \"the coin is 70% heads\"), and all models in that reference class are true if, in 70% of those different places, the coin is heads.\n- According to the [subjectivist](https://arbital.com/p/4vr) interpretation, the model is saying that the one dang coin is 70 dang percent likely to be heads, and the model is either 70% accurate (if the coin is in fact heads) or 30% accurate (if it's tails).\n\nIn other words, the propensity and frequency interpretations try to find ways to say that the model is definitively \"true\" or \"false\" (one by postulating that uncertainty is an ontologically basic part of the world, the other by identifying a collection of similar events), whereas the subjective interpretation extends the notion of \"correctness\" to allow for shades of gray."], "tags": [], "alias": "4yj"} {"id": "73be33176baaa110babdd55f7ee8ee51", "title": "Probability interpretations: Examples", "url": "https://arbital.com/p/probability_interpretations_examples", "source": "arbital", "source_type": "text", "text": "## Betting on one-time events\n\nConsider evaluating, in June of 2016, the question: \"What is the probability of Hillary Clinton winning the 2016 US presidential election?\"\n\nOn the **propensity** view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance. If we see a prediction market in which prices move after each new poll — so that it says 60% one day, and 80% a week later — then clearly the prediction market isn't giving us very strong information about this objective chance, since it doesn't seem very likely that Clinton's *real* chance of winning is swinging so rapidly.\n\nOn the **frequentist** view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once. We can't *observe* a frequency with which Clinton wins presidential elections. A frequentist might concede that they would cheerfully buy for \\$1 a ticket that pays \\$20 if Clinton wins, considering this a favorable bet in an *informal* sense, while insisting that this sort of reasoning isn't sufficiently rigorous, and therefore isn't suitable for being included in science journals.\n\nOn the **subjective** view, saying that Hillary has an 80% chance of winning the election summarizes our *knowledge about* the election or our *state of uncertainty* given what we currently know. It makes sense for the prediction market prices to change in response to new polls, because our current state of knowledge is changing.\n\n## A coin with an unknown bias\n\nSuppose we have a coin, weighted so that it lands heads somewhere between 0% and 100% of the time, but we don't know the coin's actual bias.\n\nThe coin is then flipped three times where we can see it. It comes up heads twice, and tails once: HHT.\n\nThe coin is then flipped again, where nobody can see it yet. 
An honest and trustworthy experimenter lets you spin a wheel-of-gambling-odds,%note:The reason for spinning the wheel-of-gambling-odds is to reduce the worry that the experimenter might know more about the coin than you, and be offering you a deliberately rigged bet.% and the wheel lands on (2 : 1). The experimenter asks if you'd enter into a gamble where you win \$2 if the unseen coin flip is tails, and pay \$1 if the unseen coin flip is heads.\n\nOn a **propensity** view, the coin has some objective probability between 0 and 1 of being heads, but we just don't know what this probability is. Seeing HHT tells us that the coin isn't all-heads or all-tails, but we're still just guessing — we don't really know the answer, and can't say whether the bet is a fair bet.\n\nOn a **frequentist** view, the coin would (if flipped repeatedly) produce some long-run frequency $f$ of heads that is between 0 and 1. If we kept flipping the coin long enough, the actual proportion $p$ of observed heads is guaranteed to approach $f$ arbitrarily closely, eventually. We can't say that the *next* coin flip is guaranteed to be H or T, but we can make an objectively true statement that $p$ will approach $f$ to within epsilon if we continue to flip the coin long enough.\n\nTo decide whether or not to take the bet, a frequentist might try to apply an [unbiased estimator](https://arbital.com/p/unbiased_estimator) to the data we have so far. An \"unbiased estimator\" is a rule for taking an observation and producing an estimate $e$ of $f$, such that the [expected value](https://arbital.com/p/4b5) of $e$ is $f$. In other words, a frequentist wants a rule such that, if the hidden bias of the coin was in fact to yield 75% heads, and we repeat many times the operation of flipping the coin a few times and then asking a new frequentist to estimate the coin's bias using this rule, the *average* value of the estimated bias will be 0.75. This is a property of the _estimation rule_ which is objective. We can't hope for a rule that will always, in any particular case, yield the true $f$ from just a few coin flips; but we can have a rule whose *average* estimate is provably $f$, if the experiment is repeated many times.\n\nIn this case, a simple unbiased estimator is to guess that the coin's bias $f$ is equal to the observed proportion of heads, or 2/3. In other words, if we repeat this experiment many many times, and whenever we see $k$ heads in 3 tosses we guess that the coin's bias is $\frac{k}{3}$, then this rule definitely is an unbiased estimator. This estimator says that a bet of \$2 vs. \$1 is fair, meaning that it doesn't yield an expected profit, so we have no reason to take the bet.\n\nOn a **subjectivist** view, we start out personally unsure of where the bias $f$ lies within the interval $[0, 1]$. Unless we have any knowledge or suspicion leading us to think otherwise, the coin is just as likely to have a bias between 33% and 34%, as to have a bias between 66% and 67%; there's no reason to think it's more likely to be in one range or the other.\n\nEach coin flip we see is then [evidence](https://arbital.com/p/22x) about the value of $f,$ since a flip H happens with different probabilities depending on the different values of $f,$ and we update our beliefs about $f$ using [Bayes' rule](https://arbital.com/p/1zj). 
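\n\nOne way to see what this update does, concretely, is to run it numerically. The sketch below discretizes the unknown bias $f$ onto a grid; that discretization is just a convenient approximation for the example, not part of the argument:\n\n```python\n# Bayesian update on the coin's bias f after observing HHT, on a grid of candidate values.\nN = 1000\ngrid = [(i + 0.5) / N for i in range(N)]      # candidate values of f in (0, 1)\nprior = [1.0 / N] * N                         # uniform prior over the grid\nlikelihood = [f * f * (1 - f) for f in grid]  # probability of seeing H, H, T given f\nposterior = [p * l for p, l in zip(prior, likelihood)]\ntotal = sum(posterior)\nposterior = [w / total for w in posterior]    # renormalize: this is Bayes' rule\n\n# Probability the next (hidden) flip is heads, averaging over our uncertainty about f.\nprint(sum(f * w for f, w in zip(grid, posterior)))  # ~0.6, i.e. 2 : 3 odds for tails vs. heads\n```\n\n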
For example, H is twice as likely if $f=\\frac{2}{3}$ than if $f=\\frac{1}{3}$ so by [Bayes's Rule](https://arbital.com/p/1zm) we should now think $f$ is twice as likely to lie near $\\frac{2}{3}$ as it is to lie near $\\frac{1}{3}$.\n\nWhen we start with a uniform [prior](https://arbital.com/p/219), observe multiple flips of a coin with an unknown bias, see M heads and N tails, and then try to estimate the odds of the next flip coming up heads, the result is [Laplace's Rule of Succession](https://arbital.com/p/21c) which estimates (M + 1) : (N + 1) for a probability of $\\frac{M + 1}{M + N + 2}.$\n\nIn this case, after observing HHT, we estimate odds of 2 : 3 for tails vs. heads on the next flip. This makes a gamble that wins \\$2 on tails and loses \\$1 on heads a profitable gamble in expectation, so we take the bet.\n\nOur choice of a [uniform prior](https://arbital.com/p/219) over $f$ was a little dubious — it's the obvious way to express total ignorance about the bias of the coin, but obviousness isn't everything. (For example, maybe we actually believe that a fair coin is more likely than a coin biased 50.0000023% towards heads.) However, all the reasoning after the choice of prior was rigorous according to the laws of [probability theory](https://arbital.com/p/1bv), which is the [only method of manipulating quantified uncertainty](https://arbital.com/p/probability_coherence_theorems) that obeys obvious-seeming rules about how subjective uncertainty should behave.\n\n## Probability that the 98,765th decimal digit of $\\pi$ is $0$.\n\nWhat is the probability that the 98,765th digit in the decimal expansion of $\\pi$ is $0$?\n\nThe **propensity** and **frequentist** views regard as nonsense the notion that we could talk about the *probability* of a mathematical fact. Either the 98,765th decimal digit of $\\pi$ is $0$ or it's not. If we're running *repeated* experiments with a random number generator, and looking at different digits of $\\pi,$ then it might make sense to say that the random number generator has a 10% probability of picking numbers whose corresponding decimal digit of $\\pi$ is $0$. But if we're just picking a non-random number like 98,765, there's no sense in which we could say that the 98,765th digit of $\\pi$ has a 10% propensity to be $0$, or that this digit is $0$ with 10% frequency in the long run.\n\nThe **subjectivist** considers probabilities to just refer to their own uncertainty. So if a subjectivist has picked the number 98,765 without yet knowing the corresponding digit of $\\pi,$ and hasn't made any observation that is known to them to be entangled with the 98,765th digit of $\\pi,$ and they're pretty sure their friend hasn't yet looked up the 98,765th digit of $\\pi$ either, and their friend offers a whimsical gamble that costs \\$1 if the digit is non-zero and pays \\$20 if the digit is zero, the Bayesian takes the bet.\n\nNote that this demonstrates a difference between the subjectivist interpretation of \"probability\" and Bayesian probability theory. A perfect Bayesian reasoner that knows the rules of logic and the definition of $\\pi$ must, by the axioms of probability theory, assign probability either 0 or 1 to the claim \"the 98,765th digit of $\\pi$ is a $0$\" (depending on whether or not it is). This is one of the reasons why [perfect Bayesian reasoning is intractable](https://arbital.com/p/bayes_intractable). 
A subjectivist that is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of $\\pi.$ Formalizing the rules of subjective probabilities about mathematical facts (in the way that [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) formalized the rules for manipulating subjective probabilities about empirical facts, such as which way a coin came up) is an open problem; this in known as the problem of [https://arbital.com/p/-logical_uncertainty](https://arbital.com/p/-logical_uncertainty).", "date_published": "2016-06-30T22:13:46Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": ["Consider evaluating, in June of 2016, the question: \"What is the probability of Hillary Clinton winning the 2016 US presidential election?\"\n\n- On the **propensity** view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance.\n- On the **subjective** view, saying that Hillary has an 80% chance of winning the election summarizes our *knowledge about* the election, or, equivalently, our *state of uncertainty* given what we currently know.\n- On the **frequentist** view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once."], "tags": [], "alias": "4yn"} {"id": "4de5865ee6fb1c45542105fee7e63ac6", "title": "Cauchy's theorem on subgroup existence: intuitive version", "url": "https://arbital.com/p/cauchy_theorem_on_subgroup_existence_intuitive", "source": "arbital", "source_type": "text", "text": "Cauchy's Theorem states that if $G$ is a finite [https://arbital.com/p/-group](https://arbital.com/p/-group), and $p$ is a [prime](https://arbital.com/p/4mf) dividing the [order](https://arbital.com/p/3gg) of $G$, then $G$ has an element of order $p$.\nEquivalently, it has a subgroup of order $p$.\n\nThis theorem is very important as a partial converse to [Lagrange's theorem](https://arbital.com/p/4jn): while Lagrange's theorem gives us *restrictions* on what subgroups can exist, Cauchy's theorem tells us that certain subgroups *definitely do* exist.\nIn this direction, Cauchy's theorem is generalised by the [Sylow theorems](https://arbital.com/p/sylow_theorems_on_subgroup_existence).\n\n# Proof\n\nLet $G$ be a group, and write $e$ for its identity.\nWe will try and find an element of order $p$.\n\nWhat does it mean for an element $x \\not = e$ to have order $p$? 
Nothing more nor less than that $x^p = e$.\n(This is true because $p$ is prime; if $p$ were not prime, we would additionally have to stipulate that $x^i$ is not $e$ for any smaller $i < p$.)\n\nNow follows the magic step which casts the problem in the \"correct\" light.\n\nConsider all possible combinations of $p$ elements from the group.\nFor example, if $p=5$ and the group elements are $\{ a, b, c, d, e\}$ with $e$ the identity, then $(e, e, a, b, a)$ and $(e,a,b,a,e)$ are two different such combinations.\nThen $x \not = e$ has order $p$ if and only if the combination $(x, x, \dots, x)$ multiplies out to give $e$.\n\nThe rest of the proof will be simply following what this magic step has told us to do.\n\nSince we want to find some $x$ where $(x, x, \dots, x)$ multiplies to give $e$, it makes sense only to consider the combinations which multiply out to give $e$ in the first place.\nSo, for instance, we will exclude the tuple $(e, e, a, b, a)$ from consideration if it is not the case that $eeaba$ is the identity (equivalently, if $aba \not = e$).\nFor convenience, we will label the set of all the allowed combinations: let us call it $X$.\n\nOf the tuples in $X$, we are looking for one with a very specific property: every entry is equal.\nFor example, the tuple $(a,b,c,b,b)$ is no use to us, because it doesn't tell us an element $x$ such that $x^p = e$; it only tells us that $abcbb = e$.\nSo we will be done if we can find a tuple in $X$ which has every element equal (and which is not the identity).\n\nWe don't really have anything to go on here, but it will surely help to know the size of $X$: it is $|G|^{p-1}$, since every element of $X$ is determined exactly by its first $p-1$ places (after which the last is fixed).\nThere are no restrictions on the first $p-1$ places, though: we can find a $p$th element to complete any collection of $p-1$.\n\n%%hidden(Example):\nIf $p=5$, and we have the first four elements $(a, a, b, e, \cdot)$, then we know the fifth element *must* be $b^{-1} a^{-2}$.\nIndeed, $aabe(b^{-1} a^{-2}) = e$, and [inverse elements of a group are unique](https://arbital.com/p/), so nothing else can fill the last slot.\n%%\n\nSo we have $X$, of size $|G|^{p-1}$; we have that $p$ divides $|G|$ (and so it divides $|X|$); and we also have an element $(e,e,\dots,e)$ of $X$.\nBut notice that if $(a_1, a_2, \dots, a_p)$ is in $X$, then so is $(a_2, a_3, \dots, a_p, a_1)$; and so on through all the rotations of an element.\nWe can actually group up the elements of $X$ into buckets, where two tuples are in the same bucket if (and only if) one can be rotated to make the other.\n\nHow big is a bucket? If a bucket contains something of the form $(a, a, \dots, a)$, then of course that tuple can only be rotated to give one thing, so the bucket is of size $1$.\nBut if the bucket contains any tuple $T$ which is not \"the same element repeated $p$ times\", then there must be exactly $p$ things in the bucket: namely the $p$ things we get by rotating $T$. (Exercise: verify this!)\n\n%%hidden(Show solution):\nThe bucket certainly does contain every rotation of $T$: we defined the bucket such that if a tuple is in the bucket, then so are all its rotations.\nMoreover, everything in the bucket is a rotation of $T$, because two tuples are in the bucket if and only if we can rotate one to get the other; equivalently, a tuple $A$ is in the bucket if and only if we can rotate it to get $T$.\n\nHow many such rotations are there? 
We claimed that there were $p$ of them.\nIndeed, the rotations are precisely $$(a_1, a_2, \\dots, a_p), (a_2, a_3, \\dots, a_p, a_1), \\dots, (a_{p-1}, a_p, a_1, \\dots, a_{p-2}), (a_p, a_1, a_2, \\dots, a_{p-1})$$\nand there are $p$ of those; are they all distinct?\nYes, they are, and this is because $p$ is prime (it fails when $p=8$, for instance, because $(1,1,2,2,1,1,2,2)$ can be rotated four places to give itself).\nIndeed, if we could rotate the tuple $T$ nontrivially (that is, strictly between $1$ and $p$ times) to get itself, then the integer $n$, the \"number of places we rotated\", must divide the integer \"length of the tuple\". \nBut that tells us that $n$ divides the prime $p$, so $n$ is $1$ or $p$ itself, and so either the tuple is actually \"the same element repeated $p$ times\" (in the case $n=1$) %%note: But we're only considering $T$ which is not of this form!%%, or the rotation was the \"stupid\" rotation by $p$ places (in the case $n=p$).\n%%\n\nOK, so our buckets are of size $p$ exactly, or size $1$ exactly.\nWe're dividing up the members of $X$ - that is, $|G|^{p-1}$ things - into buckets of these size; and we already have one bucket of size $1$, namely $(e,e,\\dots,e)$.\nBut if there were no more buckets of size $1$, then we would have $p$ dividing $|G|^{p-1}$ and also dividing the size of all but one of the buckets.\nMixing together all the other buckets to obtain an uber-bucket of size $|G|^{p-1} - 1$, the total must be divisible by $p$, since each individual bucket was %%note:Compare with the fact that the sum of even numbers is always even; that is the case that $p=2$.%%; so $|G|^{p-1} - 1$ is divisible by $p$.\nBut that is a contradiction, since $|G|^{p-1}$ is also divisible by $p$.\n\nSo there is another bucket of size $1$.\nA bucket of size $1$ contains something of the form $(a,a,\\dots,a)$: that is, it has shown us an element of order $p$.", "date_published": "2016-06-30T13:51:06Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Needs clickbait"], "alias": "4ys"} {"id": "4063cdc24631b2a69234f2db69c87e45", "title": "Report likelihoods, not p-values", "url": "https://arbital.com/p/likelihoods_not_pvalues", "source": "arbital", "source_type": "text", "text": "This page advocates for a change in the way that statistics is done in standard scientific journals. The key idea is to report [likelihood functions](https://arbital.com/p/56s) instead of p-values, and this could have many benefits.\n\n_(Note: This page is a personal [opinion page](https://arbital.com/p/60p).)_\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## What's the difference?\n\nThe status quo across scientific journals is to test data for \"[statistical significance](https://arbital.com/p/statistically_significant)\" using functions such as [p-values](https://arbital.com/p/p_value). A p-value is a number calculated from a hypothesis (called the \"null hypothesis\"), an experiment, a result, and a [https://arbital.com/p/-summary_statistic](https://arbital.com/p/-summary_statistic). For example, if the null hypothesis is \"this coin is fair,\" and the experiment is \"flip it 6 times\", and the result is HHHHHT, and the summary statistic is \"the sequence has at least five H values,\" then the p-value is 0.11, which means \"if the coin were fair, and we did this experiment a lot, then only 11% of the sequences generated would have at least five H values.\"%%note:This does _not_ mean that the coin is 89% likely to be biased! 
For example, if the only alternative is that the coin is biased towards tails, then HHHHHT is evidence that it's fair. This is a common source of confusion with p-values.%% If the p-value is lower than an arbitrary threshold (usually $p < 0.05$) then the result is called \"statistically significant\" and the null hypothesis is \"rejected.\"\n\nThis page advocates that scientific articles should report _likelihood functions_ instead of p-values. A likelihood function for a piece of evidence $e$ is a function $\\mathcal L$ which says, for each hypothesis $H$ in some set of hypotheses, the probability that $H$ assigned to $e$, written [$\\mathcal L_e](https://arbital.com/p/51n).%%note: Many authors write $\\mathcal L(H \\mid e)$ instead. We think this is confusing, as then $\\mathcal L(H \\mid e) = \\mathbb P(e \\mid H),$ and it's hard enough for students of statistics to keep \"probability of $H$ given $e$\" and \"probability of $e$ given $H$\" straight as it is if the notation _isn't_ swapped around every so often.%% For example, if $e$ is \"this coin, flipped 6 times, generated HHHHHT\", and the set of hypotheses are $H_{0.25} =$ \"the coin only produces heads 25% of the time\" and $H_{0.5}$ = \"the coin is fair\", then $\\mathcal L_e(H_{0.25})$ $=$ $0.25^5 \\cdot 0.75$ $\\approx 0.07\\%$ and $\\mathcal L_e(H_{0.5})$ $=$ $0.5^6$ $\\approx 1.56\\%,$ for a [likelihood ratio](https://arbital.com/p/1rq) of about $21 : 1$ in favor of the coin being fair (as opposed to biased 75% towards tails).\n\nIn fact, with a single likelihood function, we can report the amount of support $e$ gives to _every_ hypothesis $H_b$ of the form \"the coin has bias $b$ towards heads\":%note:To learn how this graph was generated, see [Bayes' rule: Functional form](https://arbital.com/p/1zj).%\n\n![](http://i.imgur.com/UwwxmCe.png)\n\nNote that this likelihood function is _not_ telling us the probability that the coin is actually biased, it is _only_ telling us how much the evidence supports each hypothesis. For example, this graph says that HHHHHT provides about 3.8 times as much evidence for $H_{0.75}$ over $H_{0.5}$, and about 81 times as much evidence for $H_{0.75}$ over $H_{0.25}.$\n\nNote also that the likelihood function doesn't necessarily contain the _right_ hypothesis; for example, the function above shows the support of $e$ for every possible bias on the coin, but it doesn't consider hypotheses like \"the coin alternates between H and T\". Likelihood functions, like p-values, are essentially a mere summary of the raw data — there is no substitute for the raw data when it comes to allowing people to test hypotheses that the original researchers did not consider. (In other words, even if you report likelihoods instead of p-values, it's still virtuous to share your raw data.)\n\nWhere p-values let you measure (roughly) how well the data supports a _single_ \"null hypothesis\", with an arbitrary 0.05 \"not well enough\" cutoff, the likelihood function shows the support of the evidence for lots and lots of different hypotheses at once, without any need for an arbitrary cutoff.\n\n## Why report likelihoods instead of p-values?\n\n__1. Likelihood functions are less arbitrary than p-values.__ To report a likelihood function, all you have to do is pick which hypothesis class to generate the likelihood function for. That's your only degree of freedom. 
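\n\nFor instance, here is a minimal sketch of what reporting a likelihood function for the coin example above might look like; the one choice being made is the hypothesis class, namely \"the coin has bias $b$\" for each value of $b$:\n\n```python\n# Likelihood function for the observation HHHHHT over the class of biased-coin hypotheses.\nheads, tails = 5, 1\n\ndef likelihood(b):\n    # Probability that a coin with bias b toward heads produces 5 heads and 1 tail in this order.\n    return b ** heads * (1 - b) ** tails\n\nfor b in (0.25, 0.5, 0.75):\n    print(f'L(H_{b}) = {likelihood(b):.4%}')\n\nprint(likelihood(0.5) / likelihood(0.25))   # ~21 : 1 in favor of the fair coin over 25% heads\nprint(likelihood(0.75) / likelihood(0.5))   # ~3.8 : 1 in favor of 75% heads over the fair coin\n```\n\n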
This introduces one source of arbitrariness, and if someone wants to check some other hypothesis they still need access to the raw data, but it is better than the p-value case, where you only report a number for a single \"null\" hypothesis.\n\nFurthermore, in the p-value case, you have to pick not only a null hypothesis but also an experiment and a summary statistic, and these degrees of freedom can have a huge impact on the final report. These extra degrees of freedom are both unnecessary ([to carry out a probabilistic update, all you need are your own personal beliefs and a likelihood function](https://arbital.com/p/1lz)) and exploitable, and empirically, they're actively harming scientific research.\n\n__2. Reporting likelihoods would solve p-hacking.__ If you're using p-values, then you can game the statistics via your choice of experiment and summary statistics. In the example with the coin above, if you say your experiment and summary statistic are \"flip the coin 6 times and count the number of heads\" then the p-value of HHHHHT with respect to $H_{0.5}$ is 0.11, whereas if you say your experiment and summary statistic are \"flip the coin until it comes up tails and count the number of heads\" then the p-value of HHHHHT with respect to $H_{0.5}$ is 0.03, which is \"significant.\" This is called \"[p-hacking](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging)\", and it's a serious problem in modern science.\n\nIn a likelihood function, _the amount of support an evidence gives to a hypothesis does not depend on which experiment the researcher had in mind._ Likelihood functions depend only on the data you actually saw, and the hypotheses you chose to report. The only way to cheat a likelihood function is to lie about the data you collected, or refuse to report likelihoods for a particular hypothesis.\n\nIf your paper fails to report likelihoods for some obvious hypotheses, then (a) that's precisely analogous to you choosing the wrong null hypothesis to consider; (b) it's just as easily noticeable as when your paper considers the wrong null hypothesis; and (c) it can be easily rectified given access to the raw data. By contrast, p-hacking can be subtle and hard to detect after the fact.\n\n__3. Likelihood functions are very difficult to game.__ There is no analog of p-hacking for likelihood functions. This is a theorem of probability theory known as [https://arbital.com/p/-conservation_of_expected_evidence](https://arbital.com/p/-conservation_of_expected_evidence), which says that likelihood functions can't be gamed unless you're falsifying or omitting data (or screwing up the likelihood calculations).%note:Disclaimer: the theorem says likelihood functions can't be gamed, but we still shouldn't underestimate the guile of dishonest researchers struggling to make their results look important. Likelihood functions have not been put through the gauntlet of real scientific practice; p-values have. That said, when p-values were put through that gauntlet, they [failed](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging) in a [spectacular fashion](https://arbital.com/p/https://en.wikipedia.org/wiki/Replication_crisis). When rebuilding, it's probably better to start from foundations that provably cannot be gamed.%\n\n\n__4. 
Likelihood functions would help stop the \"vanishing effect sizes\" phenomenon.__ The [decline effect](https://arbital.com/p/https://en.wikipedia.org/wiki/Decline_effect) occurs when studies which reject a null hypothesis $H_0$ have effect sizes that get smaller and smaller and smaller over time (the more someone tries to replicate the result). This is [usually evidence](https://arbital.com/p/http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/) that there is no actual effect, and that the initial \"large effects\" were a result of publication bias.\n\nLikelihood functions help avoid the decline effect by _treating different effect sizes differently._ The likelihood function for coins of different biases shows that the evidence HHHHHT gives a different amount of support to $H_{0.52},$ $H_{0.61}$, and $H_{0.8}$ (which correspond to small, medium, and large effect sizes, respectively). If three different studies find low support for $H_{0.5},$ and one of them gives all of its support to the large effect, another gives all its support to the medium effect, and the third gives all of its support to the smallest effect, then likelihood functions reveal that something fishy is going on (because they're all peaked in different places).\n\nIf instead we only use p-values, and always decide whether or not to \"keep\" or \"reject\" the null hypothesis (without specifying how much support goes to different alternatives), then it's hard to notice that the studies are actually contradictory (and that something very fishy is going on). Instead, it's very tempting to exclaim \"3 out of 3 studies reject $H_{0.5}$!\" and move on.\n\n__5. Likelihood functions would help stop publication bias.__ When using p-values, if the data yields a p-value of 0.11 using a null hypothesis $H_0$, the study is considered \"insignificant,\" and many journals have a strong bias towards positive results. When reporting likelihood functions, there is no arbitrary \"significance\" threshold. A study that reports a relative likelihoods of $21 : 1$ in favor of $H_a$ vs $H_0,$ that's exactly the same _strength of evidence_ as a study that reports $21 : 1$ odds against $H_a$ vs $H_0.$ It's all just evidence, and it can all be added to the corpus, there's no arbitrary \"significance\" threshold.\n\n__6. Likelihood functions make it trivially easy to combine studies.__ When combining studies that used p-values, researchers have to perform complex meta-analyses with dozens of parameters to tune, and they often find [exactly what they were expecting to find](https://arbital.com/p/https://en.wikipedia.org/wiki/Confirmation_bias). By contrast, the way you combine multiple studies that reported likelihood functions is... (drumroll)\n...you just multiply the likelihood functions together. If study A reports that $H_{0.75}$ was favored over $H_{0.5}$ with a [https://arbital.com/p/-1rq](https://arbital.com/p/-1rq) of $3.8 : 1$, and study B reports that $H_{0.75}$ was favored over $H_{0.5}$ at $5 : 1$, then the combined likelihood functions of both studies favors $H_{0.75}$ over $H_{0.5}$ at $(3.8 \\cdot 5) : 1$ $=$ $19 : 1.$\n\nWant to combine a hundred studies on the same subject? Multiply a hundred functions together. Done. No parameter tuning, no degrees of freedom through which bias can be introduced — just multiply.\n\n__7. 
Likelihood functions make it obvious when something has gone wrong.__ If, when you multiply all the likelihood functions together, _all_ hypotheses have extraordinarily low likelihoods, then something has gone wrong. Either a mistake has been made somewhere, or fraud has been committed, or the true hypothesis wasn't in the hypothesis class you're considering.\n\nThe actual hypothesis that explains all the data will have decently high likelihood across all the data. If none of the hypotheses fit that description, then either you aren't considering the right hypothesis yet, or some of the studies went wrong. (Try looking for one study that has a likelihood function _very very different_ from all the other studies, and investigate that one.)\n\nLikelihood functions won't do your science for you — you still have to generate good hypotheses, and be honest in your data reporting — but they _do_ make it obvious when something went wrong. (Specifically, [each hypothesis can tell you how low its likelihood is expected to be on the data](https://arbital.com/p/227), and if _every_ hypothesis has a likelihood far lower than expected, then something's fishy.)\n\n---\n\nA scientific community using likelihood functions would produce scientific research that's easier to use. If everyone's reporting likelihood functions, then all you personally need to do in order to figure out what to believe is take your own personal (subjective) prior probabilities and multiply them by all the likelihood functions in order to get your own personal (subjective) posterior probabilities.\n\nFor example, let's say you personally think the coin is probably fair, with $10 : 1$ odds of being fair as opposed to 75% biased in favor of heads. Now let's say that study A reports a likelihood function which favors $H_{0.75}$ over $H_{0.5}$ with a likelihood ratio of $3.8 : 1.$, and study B reports a $5 : 1$ likelihood ratio in the same direction. Multiplying all these together, your personal posterior beliefs should be $19 : 10$ in favor of $H_{0.75}$ over $H_{0.5}$. This is simply [Bayes' rule](https://arbital.com/p/1lz). Reporting likelihoods instead of p-values lets science remain objective, while allowing everyone to find their own personal posterior probabilities via a simple application of Bayes' theorem.\n\n## Why should we think this would work?\n\nThis may all sound too good to be true. Can one simple change really solve that many problems in modern science?\n\nFirst of all, you can be assured that reporting likelihoods instead of p-values would not \"solve\" all the problems above, and it would surely not solve all problems with modern experimental science. Open access to raw data, preregistration of studies, a culture that rewards replication, and many other ideas are also crucial ingredients to a scientific community that zeroes in on truth.\n\nHowever, reporting likelihoods would _help_ solve lots of different problems in modern experimental science. This may come as a surprise. Aren't likelihood functions just one more statistical technique, just another tool for the toolbox? Why should we think that one single tool can solve that many problems?\n\nThe reason lies in [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv). According to the axioms of probability theory, there is only one good way to account for evidence when updating your beliefs, and that way is via likelihood functions. 
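\n\nAs a sketch of the bookkeeping this amounts to (reusing the odds from the worked example in the previous section), the whole update is a handful of multiplications:\n\n```python\n# Prior odds times the reported likelihood ratios gives posterior odds (Bayes' rule in odds form).\nprior_odds = 1 / 10      # you start out at 10 : 1 in favor of the fair coin, i.e. 1 : 10 for H_0.75\nstudy_a = 3.8            # likelihood ratio reported by study A, H_0.75 : H_0.5\nstudy_b = 5.0            # likelihood ratio reported by study B, H_0.75 : H_0.5\nprint(prior_odds * study_a * study_b)  # 1.9, i.e. 19 : 10 in favor of H_0.75\n```\n\n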
Any other method is subject to inconsistencies and pathologies, as per the [coherence theorems of probability theory](https://arbital.com/p/probability_coherence_theorems).\n\nIf you're manipulating equations like $2 + 2 = 4,$ and you're using methods that may or may not let you throw in an extra 3 on the right hand side (depending on the arithmetician's state of mind), then it's no surprise that you'll occasionally get yourself into trouble and deduce that $2 + 2 = 7.$ The laws of arithmetic show that there is only one correct set of tools for manipulating equations if you want to avoid inconsistency.\n\nSimilarly, the laws of probability theory show that there is only one correct set of tools for manipulating _uncertainty_ if you want to avoid inconsistency. According to [those rules](https://arbital.com/p/1lz), the right way to represent evidence is through likelihood functions.\n\nThese laws (and a solid understanding of them) are younger than the experimental science community, and the statistical tools of that community predate a modern understanding of probability theory. Thus, it makes a lot of sense that the existing literature uses different tools. However, now that humanity _does_ possess a solid understanding of probability theory, it should come as no surprise that many diverse pathologies in statistics can be cleaned up by switching to a policy of reporting likelihoods instead of p-values.\n\n## What are the drawbacks?\n\nThe main drawback is inertia. Experimental science today reports p-values almost entirely across the board. Modern statistical toolsets have built-in support for p-values (and other related statistical tools) but very little support for reporting likelihood functions. Experimental scientists are trained mainly in [https://arbital.com/p/-frequentist_statistics](https://arbital.com/p/-frequentist_statistics), and thus most are much more familiar with p-value-type tools than likelihood-function-type tools. Making the switch would be painful.\n\nBarring the switching costs, though, making the switch could well be a strict improvement over modern techniques, and would help solve some of the biggest [problems](https://arbital.com/p/http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/) [facing](https://arbital.com/p/http://www.stat.columbia.edu/~gelman/research/published/asa_pvalues.pdf) [science](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging) [today](https://arbital.com/p/https://en.wikipedia.org/wiki/Publication_bias).\n\nSee also the [Likelihoods not p-values FAQ](https://arbital.com/p/505) and [https://arbital.com/p/4xx](https://arbital.com/p/4xx).", "date_published": "2017-04-29T03:00:59Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Nate Soares", "austin stone"], "summaries": ["If scientists reported likelihood functions instead of p-values, this could help science avoid [p-hacking](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging), [publication bias](https://arbital.com/p/https://en.wikipedia.org/wiki/Publication_bias), [the decline effect](https://arbital.com/p/https://en.wikipedia.org/wiki/Decline_effect), and other hazards of standard statistical techniques. Furthermore, it could help make it easier to combine results from multiple studies and perfrom meta-analyses, while making statistics intuitively easier to understand. 
(This is a bold claim, but a claim which is largely supported by [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv).)"], "tags": ["Opinion page"], "alias": "4zd"} {"id": "23f6e190921849c6fbe7a1507ea0f677", "title": "Cyclic Group Intro (Math 0)", "url": "https://arbital.com/p/cyclic_group_intro_math_0", "source": "arbital", "source_type": "text", "text": "##Clocks##\nSay we had a clock. We decide that we can \"add\" two numbers on the clock starting at the first number, then moving by the number of steps given by the second number. For example the sum of $5$ and $6$ is given by starting at $5$, and then moving by $6$ spaces, and ending up at $11$. But the sum of $7$ and $9$ is given by starting at $7$ and moving $9$ spaces to end up at $4$. This number can be calculated by just adding the numbers normally, then if you end up at more than $12$, subtract $12$. For example, $7+9 = 16$, and $16- 12 = 4$.\n\nNotice that if you add $12$ to anything, you just go once around the clock, and so ending up not doing anything. You can calculate it with $4$ as $4 + 12 = 16$, and $16 - 12 = 4$. Hence $12$ is the [https://arbital.com/p/-54p](https://arbital.com/p/-54p) for our clock. Hmm, usually the identity should be something like $0$. More on this in a moment.\n\nAlso, notice that no matter where you are on the clock, there's always an amount by which you can turn to end up at $12$. For example, if you are at $5$, you can add $7$ to get to $12$. This is called the [inverse](https://arbital.com/p/-inverse_mathematics). This number can easily be calculated by subtracting your number from $12$. For example, $12 - 5 = 7$ so the inverse of $5$ is $7$. Now what if we calculate the inverse of $12$? Then we get $0$. In fact, $12$ and $0$ are the same thing on our clock. Actually, we may as well scratch the $12$ off of our clock-face and just write $0$. It makes things a lot easier. Now, instead of considering $12$ the identity, we can consider $0$ the identity instead. If we are at any number, and we step $0$ times (in other words, we don't step at all), then we stay at the number we were.\n\nNotice now that instead of saying when we add $5$ and $7$ on our clock we get $12$, we actually want to say we get $0$. So now we change our rule at the top a little bit. To calculate the sum of two numbers, we add them, and if we get $12$ *or more* we subtract $12$. So now $4$ plus $2$ is still $6$, and $7$ plus $9$ is still obtained by calculating $7+9 = 16$ and $16-12 = 4$. But now, instead of calculating that $7 +5 = 12$ we now further calculate that $12 - 12 = 0$ and so say that $7$ plus $5$ is $0$ on our clock.\n\nOur clock is [closed](https://arbital.com/p/-3gy) under its addition, it has an [https://arbital.com/p/-54p](https://arbital.com/p/-54p) and [inverses](https://arbital.com/p/-inverse_mathematics). Hence, our clock is a [group](https://arbital.com/p/-3gd). From here on out we can write our group [operation](https://arbital.com/p/-operation_mathematics) as $\\bullet$. So $7 \\bullet 9 = 4$.\n\nNow, there is no reason our clock needs to have $12$ numbers. We could do exactly the same with, say, $15$ numbers. Then our operation is defined by adding the numbers, and if they add up to at least $15$, we subtract $15$. Now $5 \\bullet 7 = 12$ but $7 \\bullet 9 = 1$ since $7 + 9 = 16$ and $16 - 15 = 1$. Notice that now the inverse of $5$ is not $7$, but is instead found by calculating $15 - 5 = 10$. So the inverse of $5$ is $10$ (since $5 + 10 = 15$, so $5 \\bullet 10 = 0$). 
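\n\nIf it helps to see this bookkeeping written out, here is a small sketch of a clock with $n$ numbers (the function names are just made up for this example):\n\n```python\n# A clock with n numbers: 0, 1, ..., n-1.  Adding wraps around, and every number\n# has an inverse that brings you back to 0.\ndef clock_add(n, a, b):\n    total = a + b\n    return total - n if total >= n else total\n\ndef clock_inverse(n, a):\n    return 0 if a == 0 else n - a\n\nprint(clock_add(12, 7, 9))   # 4, as on the ordinary twelve-number clock\nprint(clock_add(15, 7, 9))   # 1, on the fifteen-number clock\nprint(clock_inverse(15, 5))  # 10, since 5 + 10 wraps around to 0\n```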
\n\nWe can write our inverses by adding a negative sign to the front of the number. So on our $15$ number clock, the inverse of $5$ is written $-5$ and so $-5 = 10$, but on our normal clock, the inverse of $5$ is $7$ so $-5 = 7$.\n\n##Cyclic Groups##\n\nThis motivates the term **cyclic**; if we go far enough, we get back to where we started. In fact, a group is called cyclic if it has a singe [generator](https://arbital.com/p/-generator_mathematics). This is an element that can be used to generate all the other elements by being added to itself enough times. On any clock, the generator is $1$. We can get any number by going $1 \\bullet 1 \\bullet 1 \\bullet \\cdots \\bullet 1$ the right number of times. \n\nTechnically, for mathematical reasons, we actually want to require that we can get every element even if we combine the generator with itself a *negative* number of times. What does this mean? Well, the generator $1$ has an inverse $-1$. In our normal clock $-1 = 11$. In our $15$ number clock $-1 = 14$. The point is that we could just as well get all the numbers by adding $-1$ to itself a bunch of times.\n\nWe can also get $0$ by \"not adding $1$ to itself at all\" or, put another way \"adding $1$ to itself $0$ times\". This may seem strange, but if you think of taking $1$, and then adding $-1$ once, it's like not taking $1$ at all and you get $0$. Don't worry too much about this. The point is that you can technically get $0$ from $1$.\n\nIn fact, the [finite groups](https://arbital.com/p/-finite_group) that you can get from adding one number to itself a bunch of times are all [isomorphic](https://arbital.com/p/-4f4) to some clock. That is to say, no matter what group you took that has this property, if you just renamed the elements, you would end up with something that looked like a clock in the way described above.\n\nTake for example a coin. It has two sides: heads and tails. Call them $h$ and $t$ for short. We can define an operation on the sides as follows, which we're going to refer to as \"adding\" sides: if you add heads to anything, you flip the coin. If you add tails to anything, you leave the coin alone. So, if you are on the heads side, and you \"add heads\", you flip the coin to tails. If you are on the heads side and you \"add tails\" you leave the coin on the heads side. If you are on the tails side and you \"add heads\", you flip the coin to heads, and if you are on the tails side and you \"add tails\" you just stay where you are. We can write the operation of 'adding\" the sides as \"$\\bullet$\" and then say $h \\bullet h = t$, and $h \\bullet t = h$, and $t \\bullet h = h$, and $t \\bullet t = t$. But hey! This is exactly the same we'd get if we called the head side $1$ and the tails side $0$, and considered a clock with just two numbers. Then $1 \\bullet 1 = 0$, and $1 \\bullet 0 = 1$, and $0 \\bullet 1 = 1$ and $0 \\bullet 0 = 0$. If you just had heads, you could get tails by just adding heads to itself, so the coin group is cyclic.\n\nFor any finite group where you can get every element starting from a single one, you can write it as a clock by relabeling the elements. Try it!\n\n##Infinite Cyclic Groups##\nHowever, things change if we allow our group to be [infinite](https://arbital.com/p/-infinity). Then we still require that every element can be obtained by adding $1$ to itself some amount of times, either positive or negative or zero. But maybe there is no way to wrap around. 
Consider all the [integers](https://arbital.com/p/-48l), that is to say, every number, positive, negative and zero. These actually do form a group. Then every number you can think of, you can get to by adding $1$ a bunch of times. But no matter how many $1$s you add on the positive side, you can never wrap around and get a negative number. However, the integers are still called a cyclic group, even though it's not really cyclic (I know, not a great name) because the official definition is that every element can be obtained by adding a specific something a bunch of times. \n\nAgain, the only infinite cyclic group that you get, up to [relabeling](https://arbital.com/p/-4f4), is the integers. \n\nSo the only cyclic groups are the clocks and the integers!", "date_published": "2016-08-01T08:20:02Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Mark Chimes"], "summaries": ["Every [https://arbital.com/p/-finite](https://arbital.com/p/-finite) **cyclic [group](https://arbital.com/p/-3gd)** can be thought of like a clock (except maybe with more or less numbers than twelve). Every [https://arbital.com/p/-infinite](https://arbital.com/p/-infinite) cyclic group can be thought of as the [integers](https://arbital.com/p/48l)."], "tags": ["Math 0", "B-Class"], "alias": "4zh"} {"id": "6aa42e978213b085c6c9b4cd4995fa67", "title": "Orbit-Stabiliser theorem: External Resources", "url": "https://arbital.com/p/orbit_stabiliser_theorem_external_resources", "source": "arbital", "source_type": "text", "text": "**[https://arbital.com/p/1r5](https://arbital.com/p/1r5)**\n\n* [Group actions II: the orbit-stabilizer theorem](https://gowers.wordpress.com/2011/11/09/group-actions-ii-the-orbit-stabilizer-theorem) (Tim Gowers) - How many rotational symmetries does a cube have?", "date_published": "2016-07-02T21:58:00Z", "authors": ["Eric Bruylant", "Mark Chimes", "Patrick Stevens"], "summaries": ["[External resources](https://arbital.com/p/arbital_resources) about the [https://arbital.com/p/4l8](https://arbital.com/p/4l8)."], "tags": ["List", "External resources", "Stub"], "alias": "4zk"} {"id": "3acda9a4c2c59a8eb59226b74cb7f337", "title": "Uncountability (Math 3)", "url": "https://arbital.com/p/uncountability_math_3", "source": "arbital", "source_type": "text", "text": "A [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) $X$ is *uncountable* if there is no [bijection](https://arbital.com/p/499) between $X$ and [$\\mathbb{N}$](https://arbital.com/p/45h). Equivalently, there is no [injection](https://arbital.com/p/4b7) from $X$ to $\\mathbb{N}$.\n\n## Foundational Considerations ##\n\nIn set theories without the [axiom of choice](https://arbital.com/p/69b), such as [Zermelo Frankel set theory](https://arbital.com/p/ZF) without choice (ZF), it can be [consistent](https://arbital.com/p/5km) that there is a [https://arbital.com/p/-cardinal_number](https://arbital.com/p/-cardinal_number) $\\kappa$ that is incomparable to $\\aleph_0$. That is, there is no injection from $\\kappa$ to $\\aleph_0$ nor from $\\aleph_0$ to $\\kappa$. In this case, cardinality is not a [total order](https://arbital.com/p/540), so it doesn't make sense to think of uncountability as \"larger\" than $\\aleph_0$. In the presence of choice, [cardinality is a total order](https://arbital.com/p/5sh), so an uncountable set can be thought of as \"larger\" than a countable set.\n\nCountability in one [https://arbital.com/p/-model](https://arbital.com/p/-model) is not necessarily countability in another. 
By [Skolem's Paradox](https://arbital.com/p/skolems_paradox), there is a model of set theory $M$ where its [power set](https://arbital.com/p/6gl) of the naturals, denoted $2^\\mathbb N_M \\in M$ is countable when considered outside the model. Of course, it is a [theorem](https://arbital.com/p/6fk) that $2^\\mathbb N _M$ is uncountable, but that is within the model. That is, there is a bijection $f : \\mathbb N \\to 2^\\mathbb N_M$ that is not inside the model $M$ (when $f$ is considered as a set, its graph), and there is no such bijection inside $M$. This means that (un)countability is not [absolute](https://arbital.com/p/absoluteness).\n\n## See also\n\nIf you enjoyed this explanation, consider exploring some of [Arbital's](https://arbital.com/p/3d) other [featured content](https://arbital.com/p/6gg)!\n\nArbital is made by people like you, if you think you can explain a mathematical concept then consider [https://arbital.com/p/-4d6](https://arbital.com/p/-4d6)!", "date_published": "2016-10-26T19:09:44Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Daniel Satanove", "Patrick Stevens"], "summaries": [], "tags": ["C-Class"], "alias": "4zp"} {"id": "76af0983a58f46d430daeeeb1c3e21b8", "title": "Rational number", "url": "https://arbital.com/p/rational_number", "source": "arbital", "source_type": "text", "text": "The rational numbers are either whole numbers or fractions of whole numbers, like $0,$ $1$, $2$, $\\frac{1}{2}$, $\\frac{97}{3}$, $-17$, $\\frac{-85}{1993},$ and so on. The [set](https://arbital.com/p/3jz) of rational numbers is written $\\mathbb Q.$\n\n[Irrational numbers](https://arbital.com/p/54z) like [$\\pi$](https://arbital.com/p/49r) and [$e$](https://arbital.com/p/e) are _not_ included in $\\mathbb Q;$ the rational numbers are only those numbers which can be written as $\\frac{a}{b}$ for integers $a$ and $b$ (where $b \\neq 0$).\n\nFormally, $\\mathbb{Q}$ is the [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) of the [https://arbital.com/p/-field_of_fractions](https://arbital.com/p/-field_of_fractions) of $\\mathbb{Z}$ (the [ring](https://arbital.com/p/3gq) of [integers](https://arbital.com/p/48l)).\nThat is, each $q \\in \\mathbb Q$ is an expression $\\frac{a}{b}$, where $b$ is a nonzero integer and $a$ is an integer, together with certain rules for addition and multiplication.\nThe rational numbers are the last intermediate stage on the way to constructing the [real numbers](https://arbital.com/p/4bc), but they are also very interesting and important in their own right.\n\nOne intuition about the rational numbers is that once we've created the [real numbers](https://arbital.com/p/4bc), then a real number $x$ is a rational number if and only if it may be written as $\\frac{a}{b}$, where $a, b$ are integers and $b$ is not $0$.\n\n\n# Examples\n\n- The integer $1$ is a rational number, because it may be written as $\\frac{1}{1}$ (or, indeed, as $\\frac{2}{2}$ or $\\frac{-1}{-1}$, or $\\frac{a}{a}$ for any nonzero integer $a$).\n- The number [$\\pi$](https://arbital.com/p/49r) is not rational ([proof](https://arbital.com/p/513)).\n- The number $\\sqrt{2}$ (being the unique positive real which, when multiplied by itself, yields $2$) is not rational ([proof](https://arbital.com/p/548)).\n\n# Properties\n\n- There are infinitely many rationals. (Indeed, every integer is rational, because the integer $n$ may be written as $\\frac{n}{1}$, and there are infinitely many integers.)\n- There are countably many rationals ([proof](https://arbital.com/p/511)). 
Therefore, because there are [uncountably](https://arbital.com/p/2w0) many real numbers, [https://arbital.com/p/-almost_all](https://arbital.com/p/-almost_all) real numbers are not rational.\n- The rationals are [dense](https://arbital.com/p/dense_metric_space) in the reals.\n- The rationals form a [field](https://arbital.com/p/481) ([proof](https://arbital.com/p/4zr)). Indeed, they are a subfield of the real numbers.\n\n# Construction\n\nInstead of taking the reals and selecting a certain collection which we label the \"rationals\", [it is possible to construct the rationals](https://arbital.com/p/construction_of_complexes_from_naturals) given access only to the natural numbers; and from the rationals we may construct the reals.\nIn some sense, this approach is cleaner than starting with the reals and producing the rationals, because the natural numbers are very intuitive objects but the real numbers are less so.\nWe can be closer to satisfying some deep existential unease if we can build the reals out of the much-simpler naturals.\n\nAs an analogy, being able to produce a block of wood given access to a wooden table is much less satisfying than the other way round, and we run into blocks of wood \"in the wild\" so we are pretty convinced that there actually are such things as blocks of wood.\nOn the other hand, we almost never see wooden tables in nature, so we can't be quite as sure that they're real until we've built one ourselves.\n\nSimilarly, everyone recognises broadly what a counting number is, and they're out there in the wild, but the rational numbers are somewhat less \"natural\" and their existence is less intuitive.", "date_published": "2016-07-21T15:56:29Z", "authors": ["Eric Bruylant", "Eric Rogstad", "Patrick Stevens", "Nate Soares", "Adom Hartell", "Joe Zeng"], "summaries": ["A rational number is a number which may be written as $\\frac{a}{b}$, where $a$ is an [https://arbital.com/p/-48l](https://arbital.com/p/-48l) and $b$ is a nonzero integer."], "tags": ["Start"], "alias": "4zq"} {"id": "6b52007312faa2b6e9e1315abb2121e0", "title": "The rationals form a field", "url": "https://arbital.com/p/rationals_are_a_field", "source": "arbital", "source_type": "text", "text": "The set $\\mathbb{Q}$ of [rational numbers](https://arbital.com/p/4zq) is a [field](https://arbital.com/p/481).\n\n# Proof\n\n$\\mathbb{Q}$ is a ([commutative](https://arbital.com/p/3jb)) [ring](https://arbital.com/p/3gq) with additive identity $\\frac{0}{1}$ (which we will write as $0$ for short) and multiplicative identity $\\frac{1}{1}$ (which we will write as $1$ for short): we check the axioms individually.\n\n- $+$ is commutative: $\\frac{a}{b} + \\frac{c}{d} = \\frac{ad+bc}{bd}$, which by commutativity of addition and multiplication in $\\mathbb{Z}$ is $\\frac{cb+da}{db} = \\frac{c}{d} + \\frac{a}{b}$\n- $0$ is an identity for $+$: have $\\frac{a}{b}+0 = \\frac{a}{b} + \\frac{0}{1} = \\frac{a \\times 1 + 0 \\times b}{b \\times 1}$, which is $\\frac{a}{b}$ because $1$ is a multiplicative identity in $\\mathbb{Z}$ and $0 \\times n = 0$ for every integer $n$.\n- Every rational has an additive inverse: $\\frac{a}{b}$ has additive inverse $\\frac{-a}{b}$.\n- $+$ is [associative](https://arbital.com/p/3h4): $$\\left(\\frac{a_1}{b_1}+\\frac{a_2}{b_2}\\right)+\\frac{a_3}{b_3} = \\frac{a_1 b_2 + b_1 a_2}{b_1 b_2} + \\frac{a_3}{b_3} = \\frac{a_1 b_2 b_3 + b_1 a_2 b_3 + a_3 b_1 b_2}{b_1 b_2 b_3}$$\nwhich we can easily check is equal to $\\frac{a_1}{b_1}+\\left(\\frac{a_2}{b_2}+\\frac{a_3}{b_3}\\right)$. 
\n- $\times$ is associative, trivially: $$\left(\frac{a_1}{b_1} \frac{a_2}{b_2}\right) \frac{a_3}{b_3} = \frac{a_1 a_2}{b_1 b_2} \frac{a_3}{b_3} = \frac{a_1 a_2 a_3}{b_1 b_2 b_3} = \frac{a_1}{b_1} \left(\frac{a_2 a_3}{b_2 b_3}\right) = \frac{a_1}{b_1} \left(\frac{a_2}{b_2} \frac{a_3}{b_3}\right)$$\n- $\times$ is commutative, again trivially: $$\frac{a}{b} \frac{c}{d} = \frac{ac}{bd} = \frac{ca}{db} = \frac{c}{d} \frac{a}{b}$$\n- $1$ is an identity for $\times$: $$\frac{a}{b} \times 1 = \frac{a}{b} \times \frac{1}{1} = \frac{a \times 1}{b \times 1} = \frac{a}{b}$$ by the fact that $1$ is an identity for $\times$ in $\mathbb{Z}$.\n- $\times$ distributes over $+$: $$\frac{a}{b} \left(\frac{x_1}{y_1}+\frac{x_2}{y_2}\right) = \frac{a}{b} \frac{x_1 y_2 + x_2 y_1}{y_1 y_2} = \frac{a \left(x_1 y_2 + x_2 y_1\right)}{b y_1 y_2}$$\nwhile $$\frac{a}{b} \frac{x_1}{y_1} + \frac{a}{b} \frac{x_2}{y_2} = \frac{a x_1}{b y_1} + \frac{a x_2}{b y_2} = \frac{a x_1 b y_2 + b y_1 a x_2}{b^2 y_1 y_2} = \frac{a x_1 y_2 + a y_1 x_2}{b y_1 y_2}$$\nso we are done by distributivity of $\times$ over $+$ in $\mathbb{Z}$.\n\nSo far we have shown that $\mathbb{Q}$ is a ring; to show that it is a field, we need all nonzero fractions to have inverses under multiplication.\nBut if $\frac{a}{b}$ is not $0$ (equivalently, $a \not = 0$), then $\frac{a}{b}$ has inverse $\frac{b}{a}$, which does indeed exist since $a \not = 0$.\n\nThis completes the proof.", "date_published": "2016-07-06T16:28:22Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Proof"], "alias": "4zr"} {"id": "5700a268e4e08c04e40e5a86813b3acb", "title": "Complex number", "url": "https://arbital.com/p/complex_number", "source": "arbital", "source_type": "text", "text": "A complex number is a number of the form $z = a + b\textrm{i}$, where $\textrm{i}$ is the imaginary unit defined as $\textrm{i}=\sqrt{-1}$.\n\nDoesn't make sense? Let's backtrack a little.\n\n## Motivation - expanding the definition of numbers##\nSay we start with a child's understanding of numbers as counting numbers, or [natural numbers](https://arbital.com/p/45h). Using only natural numbers we can add and multiply whichever numbers we want, but there are some restrictions on subtraction: ask a child to subtract 5 from 3 and they'll say \"you can't do that!\".\n\nIn order to allow the computation of numbers like $5-3$, we need to define a new kind of number: negative numbers. The set of natural numbers, negative numbers and $0$ is called whole numbers, or [integers](https://arbital.com/p/48l). 
No one has figured out how to do that without breaking mathematics, so that's awkward.\n\nNow say our child is a seventh grader, so they know about square roots. They can apply the square root operation to any perfect square, no problem - $\\sqrt{9}=3$. But what if you ask them to find the square root of a non-perfect square, like $\\sqrt{2}$? \"You can't do that!\"\n\nOnce again, we need to expand our set of numbers to include a new kind of number: irrational numbers. Unlike the negative numbers and fractions we discussed above, irrationals do a whole lot more than just define square roots. In fact, irrationals are so cool that according to some sources the Pythagoreans couldn't deal with their existence and ended up killing [the guy who invented them](https://arbital.com/p/https://en.wikipedia.org/wiki/Hippasus).\n\nAdding irrational numbers to our set of rational numbers, we get the [real numbers](https://arbital.com/p/4bc). Mathematically, we say that the reals are a [complete](https://arbital.com/p/real_number_completeness) [ordered](https://arbital.com/p/540) [field](https://arbital.com/p/481), which is actually one of the ways of defining them.\n\nHowever, there's still one problem: the real numbers are not an [https://arbital.com/p/-algebraically_closed_field](https://arbital.com/p/-algebraically_closed_field). In a practical sense, this means we still don't have closure under the square root operation $\\sqrt{}$, because we can't define the square roots of negative numbers.\n\nBy now you've probably noticed the pattern, though: any time someone says \"you can't do that!\", mathematicians invent a new kind of number to prove them wrong.\n\n## Introducing: imaginary numbers ##\nIn order to allow using the square root operation on negative numbers, we once again have to define a new kind of number: imaginary numbers.\n\nActually, we have to define just one number - the imaginary unit, $\\textrm{i}$. $\\textrm{i}$ is defined as a solution to the quadratic equation $x^2+1=0$.
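This definition is concrete enough to play with directly. Most languages ship complex arithmetic out of the box; the short Python check below is an editorial illustration rather than part of the original article (Python spells the imaginary unit `1j`):

```python
i = 1j                       # Python's name for the imaginary unit

print(i * i)                 # (-1+0j): i squared really is -1 ...
print(i ** 2 + 1)            # 0j: ... so i is a solution of x^2 + 1 = 0

# Ordinary algebra keeps working once you know i*i = -1:
print((2 + 3j) * (4 - 1j))   # (11+10j)
```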
In other words, we can define $\\textrm{i}$ as equaling $\\sqrt{-1}$.\n\nAll the other imaginary numbers (square roots of negatives) follow directly from this definition of $\\textrm{i}$: for any negative number $-a$, we can define $\\sqrt{-a}=\\textrm{i}\\sqrt{a}$.\n\n##To be continued...##\nThis article is unfinished, and will later include an explanation of the complex plane as well as the algebraic properties of complex numbers.", "date_published": "2016-07-05T20:05:56Z", "authors": ["Eric Bruylant", "Yelene Seratna", "Patrick Stevens", "Eliana Ruby"], "summaries": [], "tags": ["Start", "Work in progress", "Needs clickbait"], "alias": "4zw"} {"id": "b4e7182924523b7ab47409a4ccb3353a", "title": "Rational numbers: Intro (Math 0)", "url": "https://arbital.com/p/rational_numbers_intro_math_0", "source": "arbital", "source_type": "text", "text": "*In order to get the most out of this page, you probably want a good grasp of the [integers](https://arbital.com/p/53r) first.*\n\n\"Rational number\" is a phrase mathematicians use for the idea of a \"fraction\".\nHere, we'll go through what a fraction is and why we should care about them.\n\n# What is a fraction?\n\nSo far, we've met the [integers](https://arbital.com/p/48l): whole numbers, which can be either bigger than $0$ or less than $0$ (or the very special $0$ itself).\nThe [natural numbers](https://arbital.com/p/45h) can count the number of cows I have in my possession; the integers can also count the number of cows I have after I've given away cows from having nothing, resulting in anti-cows.\n\nIn this article, though, we'll stop talking about cows and start talking about apples instead. The reason will become clear in a moment.\n\nSuppose I have two apples. %%note:I'm terrible at drawing, so my apples look suspiciously like circles.%%\n\n![Two apples](http://i.imgur.com/ZqYDkEX.png)\n\nWhat if I chopped one of the apples into two equally-sized pieces? (And now you know why we stopped talking about cows.)\n\nNow what I have is a whole apple, and… another apple which is in two pieces.\n\n![Two apples, one halved](http://i.imgur.com/pdUyIQe.png)\n\nLet's imagine now that I chop one of the pieces itself into two pieces, and for good measure I chop my remaining whole apple into three pieces.\n\n![Two apples, formed as two quarters, one half, three thirds](http://i.imgur.com/Zv1Y7EP.png)\n\nI still have the same amount of apple as I started with - I haven't eaten any of the pieces or anything - but now it's all in funny-sized chunks.\n\nNow I'll eat one of the smallest chunks. 
How many apples do I have now?\n\n![Two apples, with a quarter eaten from one](http://i.imgur.com/Bh0ekJQ.png)\n\nI certainly don't just have one apple, because three of the chunks I've got in front of me will together make an apple; and I've also got some chunks left over once I've done that.\nBut I can't have two apples either, because I *started* with two and then I ate a bit.\nMathematicians like to be able to compare things, and if I forced you to make a comparison, you could say that I have \"more than one apple\" but \"fewer than two apples\".\n\nIf you're happy with that, then it's a reasonable thing to ask: \"exactly how much apple do I have?\".\nAnd the mathematician will give an answer of \"one apple and three quarters\".\n\"One and three quarters\" is an example of a **rational number** or **fraction**: it expresses a quantity that came from dividing some number of things into some number of equal parts, then possibly removing some of the parts.\n%%note:I've left out the point that just as you moved from the [counting numbers](https://arbital.com/p/45h) to the [integers](https://arbital.com/p/48l), thereby allowing you to owe someone some apples, so we can also have a negative rational number of apples. We'll get to that in time.%%\n\n# The basic building block\n\nFrom a certain point of view, the building block of the *natural* numbers is just the number $1$: all natural numbers can be made by just adding together the number $1$ some number of times. (If I have a heap of apples, I can build it up just from single apples.)\nThe building block of the integers is also the number $1$, because if you gave me some apples %%note:which perhaps I've now eaten%% so that I owe you some apples, you might as well have given them to me one by one.\n\nNow the *rationals* have building blocks too, but this time there are lots and lots of them, because if you give me any kind of \"building block\" - some quantity of apple - I can always just chop it into two pieces and make a smaller \"building block\".\n(This wasn't true when we were confined just to whole apples, as in the natural numbers! If I can't divide up an apple, then I can't make any quantity of apples smaller than one apple. %%note:Except no apples at all.%%)\n\nIt turns out that a good choice of building blocks is \"one piece, when we divide an apple into several equally-sized pieces\".\nIf we took our apple, and divided it into five equal pieces, then the corresponding building-block is \"one fifth of an apple\": five of these building blocks makes one apple.\nTo a mathematician, we have just made the rational number which is written $\\frac{1}{5}$.\n\nSimilarly, if we divided our apple instead into six equal pieces, and take just one of the pieces, then we have made the rational number which is written $\\frac{1}{6}$.\n\nThe (positive) rational numbers are just whatever we could make by taking lots of copies of building blocks.\n\n# Examples\n\n- $1$ is a rational number. It can be made with the building block that is just $1$ itself, which is what we get if we take an apple and divide it into just one piece - that is, making no cuts at all. Or, if you're a bit squeamish about not making any cuts, $1$ can be made out of two halves: two copies of the building block that results when we take an apple and cut it into two equal pieces, taking just one of the pieces. 
(We write $\\frac{1}{2}$ for that half-sized building block.)\n- $2$ is a rational number: it can be made out of two lots of the $1$-building-block, or indeed out of four lots of the $\\frac{1}{2}$-building-block.\n- $\\frac{1}{2}$ is a rational number: it is just the half-sized building block itself.\n- If we took the apple and instead cut it into three pieces, we obtain a building block which we write as $\\frac{1}{3}$; so $\\frac{1}{3}$ is a rational number.\n- Two copies of the $\\frac{1}{3}$-building-block makes the rational number which we write $\\frac{2}{3}$.\n- Five copies of the $\\frac{1}{3}$-building-block makes somewhat more than one apple. Indeed, three of the building blocks can be put together to make one full apple, and then we've got two building blocks left over. We write the rational number represented by five $\\frac{1}{3}$-building-blocks as $\\frac{5}{3}$.\n\n# Notation\n\nNow you've seen the notation $\\frac{\\cdot}{\\cdot}$ used a few times, where there are numbers in the places of the dots.\nYou might be able to guess how this notation works in general now: if we take the blocks resulting when we divide an apple into \"dividey-number\"-many pieces, and then take \"lots\" of those pieces, then we obtain a rational number which we write as $\\frac{\\text{lots}}{\\text{dividey-number}}$.\nMathematicians use the words \"numerator\" and \"denominator\" for what I called \"lots\" and \"dividey-number\"; so it would be $\\frac{\\text{numerator}}{\\text{denominator}}$ to a mathematician.\n\n# Exercises\n\nCan you give some examples of how we can make the number $3$ from smaller building blocks? (There are lots and lots of ways you could correctly answer this question.)\n%%hidden(Show a possible solution):\nYou already know about one way from when we talked about the natural numbers: just take three copies of the $1$-block. (That is, three apples is three single apples put together.)\n\nAnother way would be to take six half-sized blocks: $\\frac{6}{2}$ is another way to write $3$.\n\nYet another way is to take fifteen fifth-sized blocks: $\\frac{15}{5}$ is another way to write $3$.\n\nIf you want to mix things up, you could take four half-sized blocks and three third-sized blocks: $\\frac{4}{2}$ and $\\frac{3}{3}$ together make $3$.\n![Three apples: four halves and three thirds](http://i.imgur.com/JBpqyko.png)\n%%\n\nIf you felt deeply uneasy about the last of my possible solutions above, there is a good and perfectly valid reason why you might have done; we will get to that eventually. If that was you, just forget I mentioned that last one for now.\nIf you were comfortable with it, that's also normal.\n\nHow about making the number $\\frac{1}{2}$ from smaller blocks?\n%%hidden(Show a possible solution):\nOf course, you could start by taking just one $\\frac{1}{2}$ block.\n\nFor a more interesting answer, you could take three copies of the sixth-sized block: $\\frac{3}{6}$ is the same as $\\frac{1}{2}$. 
\n\n![A half, expressed in sixths](http://i.imgur.com/5OBeBRE.png)\n\nAlternatively, five copies of the tenth-sized block: $\\frac{5}{10}$ is the same as $\\frac{1}{2}$.\n\n![A half, expressed in tenths](http://i.imgur.com/IfOm2xH.png)\n%%\n\nThe way I've drawn the pictures might be suggestive: in some sense, when I've given different answers just now, they all look like \"the same answer\" but with different lines drawn on.\nThat's because the rational numbers (\"fractions\", remember) correspond to answers to the question \"how much?\".\nWhile there is always more than one way to build a given rational number out of the building blocks, the way that we build the number doesn't affect the ultimate answer to the question \"how much?\".\n$\\frac{5}{10}$ and $\\frac{1}{2}$ and $\\frac{3}{6}$ are all simply different ways of writing the same underlying quantity: the number which represents the fundamental concept of \"chop something into two equal pieces\".\nThey each express different ways of making the same amount (for instance, out of five $\\frac{1}{10}$-blocks, or one $\\frac{1}{2}$-block), but the amount itself hasn't changed.\n\n# Going more general\n\nRemember, from when we treated the integers using cows, that I can give you a cow (even if I haven't got one) by creating a cow/anti-cow pair and then giving you the cow, leaving me with an anti-cow.\nWe count the number of anti-cows that I have by giving them a *negative* number.\n\nWe can do the same here with chunks of apple.\nIf I wanted to give you half an apple, but I didn't have any apples, I could create a half-apple/half-anti-apple pair, and then give you the half-apple; this would leave me with a half-anti-apple.\n\nWe count anti-apples in the same way as we count anti-cows: they are *negative*.\n\nSee the page on [subtraction](https://arbital.com/p/56x) for a much more comprehensive explanation; this page is more of a whistle-stop tour.\n\n# Limitations\n\nWe've had the idea of building-blocks: as $\\frac{1}{n}$, where $n$ was a natural number.\nWhy should $n$ be just a natural number, though?\nWe've already seen the integers; why can't it be one of those? %%note:That is, why not let it be negative?%%\n\nAs it turns out, we *can* let $n$ be an integer, but we don't actually get anything new if we do.\nWe're going to pretend for the moment that $n$ has to be positive, because it gets a bit weird trying to divide things into three anti-chunks; this approach doesn't restrict us in any way, but if you are of a certain frame of mind, it might just look like a strange and artificial boundary to draw.\n\nHowever, you must note that $n$ cannot be $0$ (whatever your stance on dividing things into anti-chunks).\nWhile there is a way to finesse the idea of an anti-chunk %%note: And if you sit and think really hard for a long time, you might even come up with it yourself!%%, there is simply no way to make it possible to divide an apple into $0$ equal pieces.\nThat is, $\\frac{1}{0}$ is not a rational number (and you should be very wary of calling it anything that suggests it's like a number - like \"infinity\" - and under no account may you do arithmetic on it).\n\n# Summary\n\nSo far, you've met what a rational number is! 
We haven't gone through how to do things with them yet, but hopefully you now understand vaguely what they're there for: they express the idea of \"dividing something up into parts\", or \"sharing things out among people\" (if I have two apples to split fairly among three people, I can be fair by chopping each apple into three $\\frac{1}{3}$-sized building blocks, and then giving each person two of the blocks).\n\n[Next up](https://arbital.com/p/514), we will see how we can combine rational numbers together, eventually making a very convenient shorthand. %%note:The study of this shorthand is known as \"arithmetic\".%%", "date_published": "2016-08-01T08:04:20Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": ["The rational numbers are \"fractions\". While the [natural numbers](https://arbital.com/p/45h) measure the answer to the question \"how many apples?\" when we have some apples, the rationals measure the answer to the question \"how much apple?\" when we can cut the apples up in different ways and remove pieces before counting them."], "tags": [], "alias": "4zx"} {"id": "ee45c28c053ee8746d1ad09517bb17a5", "title": "Report likelihoods not p-values: FAQ", "url": "https://arbital.com/p/likelihood_not_pvalue_faq", "source": "arbital", "source_type": "text", "text": "This page answers frequently asked questions about the [https://arbital.com/p/4zd](https://arbital.com/p/4zd) proposal for experimental science.\n\n_(Note: This page is a personal [opinion page](https://arbital.com/p/60p).)_\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n### What does this proposal entail?\n\nLet's say you have a coin and you don't know whether it's biased or not. You flip it six times and it comes up HHHHHT.\n\nTo report a [p-value](https://arbital.com/p/p_value), you have to first declare which experiment you were doing — were you flipping it six times no matter what you saw and counting the number of heads, or were you flipping it until it came up tails and seeing how long it took? Then you have to declare a \"null hypothesis,\" such as \"the coin is fair.\" Only then can you get a p-value, which, in this case, is either 0.11 (if you were going to toss the coin 6 times regardless) or 0.03 (if you were going to toss until it came up tails). The p-value of 0.11 means \"if the null hypothesis _were_ true, then data that has at least as many H values as the observed data would only occur 11% of the time, if the declared experiment were repeated many many times.\"\n\nTo report a [likelihood](https://arbital.com/p/bayes_likelihood), you don't have to do any of that \"declare your experiment\" stuff, and you don't have to single out one special hypothesis. You just pick a whole bunch of hypotheses that seem plausible, such as the set of hypotheses $H_b$ = \"the coin has a bias of $b$ towards heads\" for $b$ between 0% and 100%. Then you look at the actual data, and report how likely that data is according to each hypothesis. In this example, that yields a graph which looks like this:\n\n![L(e|H)](http://i.imgur.com/UwwxmCe.png)\n\nThis graph says that HHHHHT is about 1.56% likely under the hypothesis $H_{0.5}$ saying that the coin is fair, and about 5.93% likely under the hypothesis $H_{0.75}$ that the coin comes up heads 75% of the time, and only 0.17% likely under the hypothesis $H_{0.3}$ that the coin only comes up heads 30% of the time.\n\nThat's all you have to do. You don't need to make any arbitrary choice about which experiment you were going to run.
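The numbers quoted above are easy to reproduce yourself. The sketch below is an editorial addition (it is not how the graph in the article was generated); it assumes the same HHHHHT data, the two stopping rules just described, and the bias hypotheses $H_b$:

```python
from math import comb

n, heads, tails = 6, 5, 1   # the observed data: HHHHHT

# Two different p-values for the same data, depending on the declared experiment:
p_six_flips = sum(comb(n, k) for k in range(heads, n + 1)) / 2 ** n  # ~0.11
p_until_tail = 0.5 ** heads                                          # ~0.03

# One likelihood function, depending only on the data actually seen:
def likelihood(b):
    return b ** heads * (1 - b) ** tails

print(p_six_flips, p_until_tail)
for b in (0.3, 0.5, 0.75):
    print(f"L(H_{b}) = {likelihood(b):.4f}")      # 0.0017, 0.0156, 0.0593
print(likelihood(0.75) / likelihood(0.5))         # the 3.8 : 1 ratio in the text
```

Note that the two p-values needed an extra declaration (the stopping rule), while the likelihood column needed nothing beyond the data itself.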
You don't need to ask yourself what you \"would have seen\" in other cases. You just look at the actual data, and report how likely each hypothesis in your hypothesis class said that data should be.\n\n(If you want to compare how well the evidence supports one hypothesis or another, you just use the graph to get a [likelihood ratio](https://arbital.com/p/1rq) between any two hypotheses. For example, this graph reports that the data HHHHHT supports the hypothesis $H_{0.75}$ over $H_{0.5}$ at odds of $\\frac{0.0593}{0.0156}$ = 3.8 to 1.)\n\nFor more of an explanation, see [https://arbital.com/p/4zd](https://arbital.com/p/4zd).\n\n### Why would reporting likelihoods be a good idea?\n\nExperimental scientists reporting likelihoods instead of p-values would likely help address many problems facing modern science, including [p-hacking](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging), [the vanishing effect size problem](https://arbital.com/p/http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/), and [publication bias](https://arbital.com/p/https://en.wikipedia.org/wiki/Publication_bias).\n\nIt would also make it easier for scientists to combine the results from multiple studies, and it would make it much much easier to conduct meta-analyses.\n\nIt would also make scientific statistics more intuitive, and easier to understand.\n\n### Likelihood functions are a Bayesian tool. Aren't Bayesian statistics subjective? Shouldn't science be objective?\n\nLikelihood functions are purely objective. In fact, there's only one degree of freedom in a likelihood function, and that's the choice of hypothesis class. This choice is no more arbitrary than the choice of a \"null hypothesis\" in standard statistics, and indeed, it's significantly less arbitrary (you can pick a large class of hypotheses, rather than just one; and none of them needs to be singled out as subjectively \"special\").\n\nThis is in stark contrast with p-values, which require that you pick an \"experimental design\" in advance, or that you talk about what data you \"could have seen\" if the experiment turned out differently. Likelihood functions only depend on the hypothesis class that you're considering, and the data that you actually saw. (This is one of the reasons why likelihood functions would solve p-hacking.)\n\nLikelihood functions are often used by Bayesian statisticians, and Bayesian statisticians do indeed use [subjective probabilities](https://arbital.com/p/4vr), which has led some people to believe that reporting likelihood functions would somehow allow hated subjectivity to seep into the hallowed halls of science.\n\nHowever, it's the [priors](https://arbital.com/p/1rm) that are subjective in Bayesian statistics, not likelihood functions. In fact, according to the [laws of probability theory](https://arbital.com/p/1lz), likelihood functions are precisely that-which-is-left-over when you factor out all subjective beliefs from an observation of evidence. In other words, probability theory tells us that likelihoods are the best summary there is for capturing the objective evidence that a piece of data provides (assuming your goal is to help make people's beliefs more accurate).\n\n### How would reporting likelihoods solve p-hacking?\n\nP-values depend on what experiment the experimenter says they had in mind. 
For example, if the data is HHHHHT and the experimenter says \"I was planning to flip it six times and count the number of Hs\" then the p-value (for the fair coin hypothesis) is 0.11, which is not \"significant.\" If instead the experimenter says \"I was planning to flip it until I got a T\" then the p-value is 0.03, which _is_ \"significant.\" Experimenters can ([and do!](https://arbital.com/p/http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108)) misuse or abuse this degree of freedom to make their results appear more significant than they actually are. This is known as \"p-hacking.\"\n\nIn fact, when running complicated experiments, this can (and does!) happen to honest well-meaning researchers. Some experimenters are dishonest, and many others simply lack the time and patience to understand the subtleties of good experimental design. We don't need to put that burden on experimenters. We don't need to use statistical tools that depend on which experiment the experimenter had in mind. We can instead report the likelihood that each hypothesis assigned to the actual data.\n\nLikelihood functions don't have this \"experiment\" degree of freedom. They don't care what experiment you thought you were doing. They only care about the data you actually saw. To use likelihood functions correctly, all you have to do is look at stuff and then not lie about what you saw. Given the set of hypotheses you want to report likelihoods for, the likelihood function is completely determined by the data.\n\n### But what if the experimenter tries to game the rules by choosing how much data to collect?\n\nThat's a problem if you're reporting p-values, but it's not a problem if you're reporting likelihood functions.\n\nLet's say there's a coin that you think is fair, that I think might be biased 55% towards heads. If you're right, then every toss is going to (in [https://arbital.com/p/-4b5](https://arbital.com/p/-4b5)) provide more evidence for \"fair\" than \"biased.\" But sometimes (rarely), even if the coin is fair, you will flip it and it will generate a sequence that supports the \"bias\" hypothesis more than the \"fair\" hypothesis.\n\nHow often will this happen? It depends on how exactly you ask the question. If you can flip the coin at most 300 times, then there's about a 1.4% chance that at some point the sequence generated will support the hypothesis \"the coin is biased 55% towards heads\" 20x more than it supports the hypothesis \"the coin is fair.\" (You can verify this yourself, and tweak the parameters, using [this code](https://arbital.com/p/https://gist.github.com/Soares/941bdb13233fd0838f1882d148c9ac14).)\n\nThis is an objective fact about coin tosses. If you look at a sequence of Hs and Ts generated by a fair coin, then some tiny fraction of the time, after some number $n$ of flips, it will support the \"biased 55% towards heads\" hypothesis 20x more than it supports the \"fair\" hypothesis. This is true no matter how or why you decided to look at those $n$ coin flips. It's true if you were always planning to look at $n$ coin flips since the day you were born. It's true if each coin flip costs $1 to look at, so you decided to only look until the evidence supported one hypothesis at least 20x better than the other. It's true if you have a heavy personal desire to see the coin come up biased, and were planning to keep flipping until the evidence supports \"bias\" 20x more than it supports \"fair\". It doesn't _matter_ why you looked at the sequence of Hs and Ts.
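If you would rather check that 1.4% figure than take it on faith, a simulation along the following lines will do it. This is my own sketch rather than the article's linked gist, so treat the details (trial count, threshold bookkeeping) as assumptions:

```python
import random

def ever_supports_bias(max_flips=300, threshold=20.0, biased_p=0.55):
    """Flip a fair coin up to max_flips times; return True if the likelihood
    ratio for 'biased 55% towards heads' over 'fair' ever reaches the threshold."""
    ratio = 1.0
    for _ in range(max_flips):
        if random.random() < 0.5:        # heads
            ratio *= biased_p / 0.5
        else:                            # tails
            ratio *= (1 - biased_p) / 0.5
        if ratio >= threshold:
            return True
    return False

trials = 100_000
hits = sum(ever_supports_bias() for _ in range(trials))
print(hits / trials)   # should land in the neighborhood of the 1.4% quoted above
```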
The amount by which it supports \"biased\" vs \"fair\" is objective. If the coin really is fair, then the more you flip it the more the evidence will push towards \"fair.\" It will only support \"bias\" a small unlucky fraction of the time, and that fraction is completely independent of your thoughts and intentions.\n\nLikelihoods are objective. They don't depend on your state of mind.\n\nP-values, on the other hand, run into some difficulties. A p-value is about a single hypothesis (such as \"fair\") in isolation. If the coin is fair, then [all sequences of coin tosses are equally likely](https://arbital.com/p/fair_coin_equally_likely), so you need something more than the data in order to decide whether the data is \"significant evidence\" about fairness one way or the other. Which means you have to choose a \"reference class\" of ways the coin \"could have come up.\" Which means you need to tell us which experiment you \"intended\" to run. And down the rabbit hole we go.\n\nThe p-value you report depends on how many coin tosses you say you were going to look at. If you lie about where you intended to stop, the p-value breaks. If you're out in the field collecting data, and the data just subconsciously begins to feel overwhelming, and so you stop collecting evidence (or if the data just subconsciously feels insufficient and so you collect more) then the p-value breaks. How badly do p-values break? If you can toss the coin at most 300 times, then by choosing when to stop looking, you can get a p < 0.05 significant result _21% of the time,_ and that's assuming you are required to look at at least 30 flips. If you're allowed to use small sample sizes, the number is more like 25%. You can verify this yourself, and tweak the parameters, using [this code](https://arbital.com/p/https://gist.github.com/Soares/4955bb9268129476262b28e32b8ec979).\n\nIt's no wonder that p-values are so often misused! To use p-values correctly, an experimenter has to meticulously report their intentions about the experimental design before collecting data, and then has to hold utterly unfalteringly to that experimental design as the data comes in (even if it becomes clear that their experimental design was naive, and that there were crucial considerations that they failed to take into account). Using p-values correctly requires good intentions, constant vigilance, and inflexibility.\n\nContrast this with likelihood functions. Likelihood functions don't depend on your intentions. If you start collecting data until it looks overwhelming and then stop, that's great. If you start collecting data and it looks underwhelming so you keep collecting more, that's great too. Every new piece of data you do collect will support the true hypothesis more than any other hypothesis, in expectation — that's the whole point of collecting data. Likelihood functions don't depend upon your state of mind.\n\n\n\n### What if the experimenter uses some other technique to bias the result?\n\nThey can't. Or, at least, it's a theorem of [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) that they can't.
This law is known as [conservation of expected evidence](https://arbital.com/p/-conservation_expected_evidence), and it says that for any hypothesis $H$ and any piece of evidence $e$, $\\mathbb P(H) = \\mathbb P(H \\mid e) \\mathbb P(e) + \\mathbb P(H \\mid \\lnot e) \\mathbb P(\\lnot e),$ where $\\mathbb P$ stands for my personal subjective probabilities.\n\nImagine that I'm going to take your likelihood function $\\mathcal L$ and blindly combine it with my personal beliefs using [Bayes' rule](https://arbital.com/p/1lz). The question is, can you use $\\mathcal L$ to manipulate my beliefs? The answer is clearly \"yes\" if you're willing to lie about what data you saw. But what if you're honestly reporting all the data you _actually_ saw? Then can you manipulate my beliefs, perhaps by being strategic about what data you look at and how long you look at it?\n\nClearly, the answer to that question is \"sort of.\" If you have a fair coin, and you want to convince me it's biased, and you toss it 10 times, and it (by sheer luck) comes up HHHHHHHHHH, then that's a lot of evidence in favor of it being biased. But you can't use the \"hope the coin comes up heads 10 times in a row by sheer luck\" strategy to _reliably_ bias my beliefs; and if you try just flipping the coin 10 times and hoping to get lucky, then on average, you're going to produce data that convinces me that the coin is fair. The real question is, can you bias my beliefs _in expectation?_\n\nIf the answer is \"yes,\" then there will be times when I should ignore $\\mathcal L$ even if you honestly reported what you saw. If the answer is \"no,\" then there will be no such times — for every $e$ that would shift my beliefs heavily towards $H$ (such that you could say \"Aha! How naive! If you look at this data and see it is $e$, then you will believe $H$, just as I intended\"), there is an equal and opposite chance of alternative data which would push my beliefs _away_ from $H.$ So, can you set up a data collection mechanism that pushes me towards $H$ in expectation?\n\nAnd the answer to that question is _no,_ and this is a trivial theorem of probability theory. No matter what subjective belief state $\\mathbb P$ I use, if you honestly report the objective likelihood $\\mathcal L$ of the data you actually saw, and I update $\\mathbb P$ by [multiplying it by $\\mathcal L$](https://arbital.com/p/1lz), there is no way (according to $\\mathbb P$) for you to bias my probability of $H$ on average — no matter how strategically you decide which data to look at or how long to look. For more on this theorem and its implications, see [Conservation of Expected Evidence](https://arbital.com/p/conservation_expected_evidence).\n\nThere's a difference between metrics that can't be exploited in theory and metrics that can't be exploited in practice, and if a malicious experimenter really wanted to abuse likelihood functions, they could probably find some clever method. (At the least, they can always lie and make things up.) However, p-values aren't even provably inexploitable — they're so easy to exploit that sometimes well-meaning honest researchers exploit them _by accident_, and these exploits are already commonplace and harmful. When building better metrics, starting with metrics that are provably inexploitable is a good start.\n\n### What if you pick the wrong hypothesis class?\n\nIf you don't report likelihoods for the hypotheses that someone cares about, then that person won't find your likelihood function very helpful. 
The same problem exists when you report p-values (what if you pick the wrong null and alternative hypotheses?). Likelihood functions make the problem a little better, by making it easy to report how well the data supports a wide variety of hypotheses (instead of just ~2), but at the end of the day, there's no substitute for the raw data.\n\nLikelihoods are a summary of the data you saw. They're a useful summary, especially if you report likelihoods for a broad set of plausible hypotheses. They're a much better summary than many other alternatives, such as p-values. But they're still a summary, and there's just no substitute for the raw data.\n\n### How does reporting likelihoods help prevent publication bias?\n\nWhen you're reporting p-values, there's a stark difference between p-values that favor the null hypothesis (which are deemed \"insignificant\") and p-values that reject the null hypothesis (which are deemed \"significant\"). This \"significance\" occurs at arbitrary thresholds (e.g. p < 0.05), and significance is counted only in one direction (to be significant, you must reject the null hypothesis). Both these features contribute to publication bias: Journals only want to accept experiments that claim \"significance\" and reject the null hypothesis.\n\nWhen you're reporting [likelihood functions](https://arbital.com/p/56s), a 20 : 1 [ratio](https://arbital.com/p/1rq) is a 20 : 1 ratio is a 20 : 1 ratio. It doesn't matter if your likelihood function is peaked near \"the coin is fair\" or whether it's peaked near \"the coin is biased 82% towards heads.\" If the ratio between the likelihood of one hypothesis and the likelihood of another hypothesis is 20 : 1 then the data provides the same strength of evidence either way. Likelihood functions don't single out one \"null\" hypothesis and incentivize people to only report data that pushes away from that null hypothesis; they just talk about the relationship between the data and _all_ the interesting hypotheses.\n\nFurthermore, there's no arbitrary significance threshold for likelihood functions. If you didn't have a ton of data, your likelihood function will be pretty spread out, but it won't be useless. If you find $5 : 1$ odds in favor of $H_1$ over $H_2$, and I independently find $6 : 1$ odds in favor of $H_1$ over $H_2$, and our friend independently finds $3 : 1$ odds in favor of $H_1$ over $H_2,$ then our studies as a whole constitute evidence that favors $H_1$ over $H_2$ by a factor of $90 : 1$ — hardly insignificant! With likelihood ratios (and no arbitrary \"significance\" cutoffs) progress can be made in small steps.\n\nOf course, this wouldn't solve the problem of publication bias in full, not by a long shot. There would still be incentives to report cool and interesting results, and the scientific community might still ask for results to pass some sort of \"significance\" threshold before accepting them for publication. However, reporting likelihoods would be a good start.\n\n### How does reporting likelihoods help address vanishing effect sizes?\n\nIn a field where an effect does not actually exist, we will often observe an initial study that finds a very large effect, followed by a number of attempts at replication that find smaller and smaller and smaller effects (until someone postulates that the effect doesn't exist, and does a meta-analysis to look for p-hacking and publication bias). 
This is known as the [decline effect](https://arbital.com/p/https://en.wikipedia.org/wiki/Decline_effect); see also [_The control group is out of control_](https://arbital.com/p/http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/).\n\nThe decline effect is possible in part because p-values look only at whether the evidence says we should \"accept\" or \"reject\" a special null hypothesis, without any consideration for what that evidence says about the alternative hypotheses. Let's say we have three studies, all of which reject the null hypothesis \"the coin is fair.\" The first study rejects the null hypothesis with a 95% confidence interval of [0.7, 0.9] bias in favor of heads, but it was a small study and some of the experimenters were a bit sloppy. The second study is a bit bigger and a bit better organized, and rejects the null hypothesis with a 95% confidence interval of [0.53, 0.62]. The third study is high-powered, long-running, and rejects the null hypothesis with a 95% confidence interval of [0.503, 0.511]. It's easy to say \"look, three separate studies rejected the null hypothesis!\"\n\nBut if you look at the likelihood functions, you'll see that _something very fishy is going on_ — none of the studies actually agree with each other! The effect sizes are incompatible. Likelihood functions make this phenomenon easy to detect, because they tell you how much the data supports _all_ the relevant hypotheses (not just the null hypothesis). If you combine the three likelihood functions, you'll see that _none_ of the confidence intervals fare very well. Likelihood functions make it obvious when different studies contradict each other directly, which makes it much harder to summarize contradictory data down to \"three studies rejected the null hypothesis\".\n\n### What if I want to reject the null hypothesis without needing to have any particular alternative in mind?\n\nMaybe you don't want to report likelihoods for a large hypothesis class, because you are pretty sure you can't generate a hypothesis class that contains the correct hypothesis. \"I don't want to have to make up a bunch of alternatives,\" you protest, \"I just want to show that the null hypothesis is _wrong,_ in isolation.\"\n\nFortunately for you, that's possible using likelihood functions! The tool you're looking for is the notion of [strict confusion](https://arbital.com/p/227). A hypothesis $H$ will tell you how low its likelihood is supposed to get, and if its likelihood goes a lot lower than that value, then you can be pretty confident that you've got the wrong hypothesis.\n\nFor example, let's say that your one and only hypothesis is $H_{0.9}$ = \"the coin is biased 90% towards heads.\" Now let's say you flip the coin twenty times, and you see the sequence THTTHTTTHTTTTHTTTTTH. The [log-likelihood](https://arbital.com/p/log_likelihood) that $H_{0.9}$ _expected_ to get on a sequence of 20 coin tosses was about -9.37 [bits](https://arbital.com/p/evidence_bit),%%note: According to $H_{0.9},$ each coin toss carries $0.9 \\log_2(0.9) + 0.1 \\log_2(0.1) \\approx -0.469$ bits of evidence, so after 20 coin tosses, $H_{0.9}$ expects about $20 \\cdot 0.469 \\approx 9.37$ bits of [surprise](https://arbital.com/p/bayes_surprise).
For more on why log likelihood is a convenient tool for measuring \"evidence\" and \"surprise,\" see [Bayes' rule: log odds form](https://arbital.com/p/1zh).%% for a likelihood score of about $2^{-9.37} \\approx$ $1.5 \\cdot 10^{-3},$ on average. The likelihood that $H_{0.9}$ actually gets on that sequence is -50.59 bits, for a likelihood score of about $5.9 \\cdot 10^{-16},$ which is _thirteen orders of magnitude less likely than expected._ You don't need to be clever enough to come up with an alternative hypothesis that explains the data in order to know that $H_{0.9}$ is not the right hypothesis for you.\n\nIn fact, likelihood functions make it easy to show that _lots_ of different hypotheses are strictly confused — you don't need to have a good hypothesis in your hypothesis class in order for reporting likelihood functions to be a useful service.\n\n### How does reporting likelihoods make it easier to combine multiple studies?\n\nWant to combine two studies that reported likelihood functions? Easy! Just multiply the likelihood functions together. If the first study reported 10 : 1 odds in favor of \"fair coin\" over \"biased 55% towards heads,\" and the second study reported 12 : 1 odds in favor of \"fair coin\" over \"biased 55% towards heads,\" then the combined studies support the \"fair coin\" hypothesis over the \"biased 55% towards heads\" hypothesis at a likelihood ratio of 120 : 1.\n\nIs it really that easy? Yes! That's one of the benefits of using a representation of evidence supported by a large edifice of [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) — they're trivially easy to compose. You have to ensure that the studies are independent first, because otherwise you'll double-count the data. (If the combined likelihood ratios get really extreme, you should be suspicious about whether they were actually independent.) This isn't exactly a new problem in experimental science; we can just add it to the list of reasons why replication studies had better be independent of the original study. Also, you can only multiply the likelihood functions together on places where they're both defined: If one study doesn't report the likelihood for a hypothesis that you care about, you might need access to the raw data in order to extend their likelihood function. But if the studies are independent and both report likelihood functions for the relevant hypotheses, then all you need to do is multiply.\n\n(Don't try this with p-values. A p < 0.05 study and a p < 0.01 study don't combine into anything remotely like a p < 0.0005 study.)\n\n### How does reporting likelihoods make it easier to conduct meta-analyses?\n\nWhen studies report p-values, performing a meta-analysis is a complicated procedure that requires dozens of parameters to be finely tuned, and (lo and behold) bias somehow seeps in, and meta-analyses often find whatever the analyzer set out to find. When studies report likelihood functions, performing a meta-analysis is trivial and doesn't depend on you to tune a dozen parameters. Just multiply all the likelihood functions together.\n\nIf you want to be extra virtuous, you can check for anomalies, such as one likelihood function that's tightly peaked in a place that disagrees with all the other peaks. You can also check for [strict confusion](https://arbital.com/p/227), to get a sense for how likely it is that the correct hypothesis is contained within the hypothesis class that you considered. 
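To make "just multiply" concrete, here is a minimal sketch of a two-study combination. The data sets and the particular $H_{0.55}$-versus-$H_{0.5}$ comparison are invented for illustration; only the multiplication step is the point:

```python
def likelihood(b, heads, tails):
    """How likely a study's coin data is under H_b (bias b towards heads)."""
    return b ** heads * (1 - b) ** tails

# Two independent studies of the same coin (both data sets invented for this sketch):
studies = [dict(heads=8, tails=4), dict(heads=55, tails=45)]

combined = 1.0
for s in studies:
    ratio = likelihood(0.55, **s) / likelihood(0.5, **s)   # this study's evidence
    print(f"this study's H_0.55 : H_0.5 odds = {ratio:.2f} : 1")
    combined *= ratio                                      # the entire meta-analysis step
print(f"combined odds = {combined:.2f} : 1")
```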
But mostly, all you've got to do is multiply the likelihood functions together.\n\n### How does reporting likelihood functions make it easier to detect fishy studies?\n\nWith likelihood functions, it's much easier to find the studies that don't match up with each other — look for the likelihood function that has its peak in a different place than all the other peaks. That study deserves scrutiny: either those experimenters had something special going on in the background of their experiment, or something strange happened in their data collection and reporting process.\n\nFurthermore, likelihoods combined with the notion of [strict confusion](https://arbital.com/p/227) make it easy to notice when something has gone seriously wrong. As per the above answers, you can combine multiple studies by multiplying their likelihood functions together. What happens if the likelihood function is super small everywhere? That means that either (a) some of the data is fishy, or (b) you haven't considered the right hypothesis yet.\n\nWhen you _have_ considered the right hypothesis, it will have decently high likelihood under _all_ the data. There's only one real world underlying all our data, after all — it's not like different experimenters are measuring different underlying universes. If you multiply all the likelihood functions together and _all_ the hypotheses turn out looking wildly unlikely, then you've got some work to do — you haven't yet considered the right hypothesis.\n\nWhen reporting p-values, contradictory studies feel like the norm. Nobody even _tries_ to make all the studies fit together, as if they were all measuring the same world. With likelihood functions, we could actually aspire towards a world where scientific studies on the same topic are _all_ combined. A world where people try to find hypotheses that fit _all_ the data at once, and where a single study's data being out of place (and making all the hypotheses currently under consideration become [https://arbital.com/p/-227](https://arbital.com/p/-227)) is a big glaring \"look over here!\" signal. A world where it feels like studies are _supposed_ to fit together, where if scientists haven't been able to find a hypothesis that explains all the raw data, then they know they have their work cut out for them.\n\nWhatever the right hypothesis is, it will almost surely not be strictly confused under the actual data. Of course, when you come up with a completely new hypothesis (such as \"the coin most of us have been using is fair but study #317 accidentally used a different coin\") you're going to need access to the raw data of some of the previous studies in order to extend their likelihood functions and see how well they do on this new hypothesis. As always, there's just no substitute for raw data.\n\n### Why would this make statistics easier to do and understand?\n\np < 0.05 does not mean \"the null hypothesis is less than 5% likely\" (though that's what young students of statistics often _want_ it to mean). What p < 0.05 actually means is \"given a particular experimental design (e.g., toss the coin 100 times and count the heads) and the data (e.g., the sequence of 100 coin flips), if the null hypothesis _were_ true, then data at least as extreme as mine (as measured by my chosen statistic, e.g., the number of heads) would only occur 5% of the time, if we repeated this experiment over and over and over.\"\n\nWhy the complexity? Statistics is designed to keep subjective beliefs out of the hallowed halls of science.
Your science paper shouldn't be able to conclude \"and, therefore, I personally believe that the coin is very likely to be biased, and I'd bet on that at 20 : 1 odds.\" Still, much of this complexity is unnecessary. Likelihood functions achieve the same goal of objectivity, but without all the complexity.\n\n[$\\mathcal L_e](https://arbital.com/p/51n) $< 0.05$ _also_ doesn't mean \"$H$ is less than 5% likely\", it means \"H assigned less than 0.05 probability to $e$ happening.\" The student still needs to learn to keep \"probability of $e$ given $H$\" and \"probability of $H$ given $e$\" distinctly separate in their heads. However, likelihood functions do have a _simpler_ interpretation: $\\mathcal L_e(H)$ is the probability of the actual data $e$ occurring if $H$ were in fact true. No need to talk about experimental design, no need to choose a summary statistic, no need to talk about what \"would have happened.\" Just look at how much probability each hypothesis assigned to the actual data; that's your likelihood function.\n\nIf you're going to report p-values, you need to be meticulous in considering the complexities and subtleties of experiment design, on pain of creating p-values that are broken in non-obvious ways (thereby contributing to the [replication crisis](https://arbital.com/p/https://en.wikipedia.org/wiki/Replication_crisis)). When reading results, you need to take the experimenter's intentions into account. None of this is necessary with likelihoods.\n\nTo understand $\\mathcal L_e(H),$ all you need to know is how likely $e$ was according to $H.$ Done.\n\n### Isn't this just one additional possible tool in the toolbox? Why switch entirely away from p-values?\n\nThis may all sound too good to be true. Can one simple change really solve that many problems in modern science?\n\nFirst of all, you can be assured that reporting likelihoods instead of p-values would not \"solve\" all the problems above, and it would surely not solve all problems with modern experimental science. Open access to raw data, preregistration of studies, a culture that rewards replication, and many other ideas are also crucial ingredients to a scientific community that zeroes in on truth.\n\nHowever, reporting likelihoods would help solve lots of different problems in modern experimental science. This may come as a surprise. Aren't likelihood functions just one more statistical technique, just another tool for the toolbox? Why should we think that one single tool can solve that many problems?\n\nThe reason lies in [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv). According to the axioms of probability theory, there is only one good way to account for evidence when updating your beliefs, and that way is via likelihood functions. Any other method is subject to inconsistencies and pathologies, as per the [coherence theorems of probability theory](https://arbital.com/p/probability_coherence_theorems).\n\nIf you're manipulating equations like $2 + 2 = 4,$ and you're using methods that may or may not let you throw in an extra 3 on the right hand side (depending on the arithmetician's state of mind), then it's no surprise that you'll occasionally get yourself into trouble and deduce that $2 + 2 = 7.$ The laws of arithmetic show that there is only one correct set of tools for manipulating equations if you want to avoid inconsistency.\n\nSimilarly, the laws of probability theory show that there is only one correct set of tools for manipulating _uncertainty_ if you want to avoid inconsistency. 
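Spelled out, the one update rule those laws allow is a single multiplication followed by renormalization. In the sketch below, the eleven-point hypothesis grid and the uniform prior are invented purely for illustration; only the likelihood values would come from a reported study:

```python
hypotheses = [round(0.1 * k, 1) for k in range(11)]     # H_0.0, H_0.1, ..., H_1.0
prior = {b: 1 / len(hypotheses) for b in hypotheses}    # subjective; it stays with the reader

def likelihood(b, heads=5, tails=1):                    # objective: reported for data HHHHHT
    return b ** heads * (1 - b) ** tails

unnormalized = {b: prior[b] * likelihood(b) for b in hypotheses}
total = sum(unnormalized.values())
posterior = {b: p / total for b, p in unnormalized.items()}

print(max(posterior, key=posterior.get))   # 0.8: the posterior mode under this toy prior
```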
According to [those rules](https://arbital.com/p/1lz), the right way to represent evidence is through likelihood functions.\n\nThese laws (and a solid understanding of them) are younger than the experimental science community, and the statistical tools of that community predate a modern understanding of probability theory. Thus, it makes a lot of sense that the existing literature uses different tools. However, now that humanity _does_ possess a solid understanding of probability theory, it should come as no surprise that many diverse pathologies in statistics can be cleaned up by switching to a policy of reporting likelihoods instead of p-values.\n\n\n### If it's so great why aren't we doing it already?\n\n[Probability theory](https://arbital.com/p/1bv) (and a solid understanding of all that it implies) is younger than the experimental science community, and the statistical tools of that community predate a modern understanding of probability theory. In particular, modern statistical tools were designed in an attempt to keep subjective reasoning out of the hallowed halls of science. You shouldn't be able to publish a scientific paper which concludes \"and therefore, I personally believe that this coin is biased towards heads, and would bet on that at 20 : 1 odds.\" Those aren't the foundations upon which science can be built.\n\nLikelihood functions are strongly associated with Bayesian statistics, and Bayesian statistical tools tend to manipulate subjective probabilities. Thus, it wasn't entirely clear how to use tools such as likelihood functions without letting subjectivity bleed into science.\n\nNowadays, we have a better understanding of how to separate out subjective probabilities from objective claims, and it's known that likelihood functions don't carry any subjective baggage with them. In fact, they carry _less_ subjective baggage than p-values do: A likelihood function depends only on the data that you _actually saw,_ whereas p-values depend on your experimental design and your intentions.\n\nThere are good historical reasons why the existing scientific community is using p-values, but now that humanity _does_ possess a solid theoretical understanding of probability theory (and how to factor subjective probabilities out from objective claims), it's no surprise that a wide array of diverse problems in modern statistics can be cleaned up by reporting likelihoods instead of p-values.\n\n### Has this ever been tried?\n\nNo. Not yet. To our knowledge, most scientists haven't even considered this proposal — and for good reason! There are a lot of big fish to fry when it comes to addressing the [replication crisis](https://arbital.com/p/https://en.wikipedia.org/wiki/Replication_crisis), [p-hacking](https://arbital.com/p/https://en.wikipedia.org/wiki/Data_dredging), [the problem of vanishing effect sizes](https://arbital.com/p/http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/), [publication bias](https://arbital.com/p/https://en.wikipedia.org/wiki/Publication_bias), and other problems facing science today. The scientific community at large is huge, decentralized, and has a lot of inertia. Most activists who are trying to shift it already have their hands full advocating for very important policies such as open access journals and pre-registration of trials. 
So it makes sense that nobody's advocating hard for reporting likelihoods instead of p-values — yet.\n\nNevertheless, there are good reasons to believe that reporting likelihoods instead of p-values would help solve many of the issues in modern experimental science.", "date_published": "2017-09-09T14:57:58Z", "authors": ["[ ]", "Nate Soares", "Diane Ritter", "Eric Bruylant", "Travis Rivera"], "summaries": [], "tags": ["Opinion page"], "alias": "505"} {"id": "0055dc109f9532014f2fb4a9457d3310", "title": "Real number (as Cauchy sequence)", "url": "https://arbital.com/p/real_number_as_cauchy_sequence", "source": "arbital", "source_type": "text", "text": "summary(Technical): The real numbers can be constructed as a field consisting of all Cauchy sequences of rationals, quotiented by the equivalence relation given by \"two sequences are equivalent if and only if they eventually get arbitrarily close to each other\".\n\nConsider the set of all [Cauchy sequences](https://arbital.com/p/53b) of [rational numbers](https://arbital.com/p/4zq): concretely, the set $$X = \\{ (a_n)_{n=1}^{\\infty} : a_n \\in \\mathbb{Q}, (\\forall \\epsilon \\in \\mathbb{Q}^{>0}) (\\exists N \\in \\mathbb{N})(\\forall n, m \\in \\mathbb{N}^{>N})(|a_n - a_m| < \\epsilon) \\}$$\n\nDefine an [https://arbital.com/p/-53y](https://arbital.com/p/-53y) on this set, by $(a_n) \\sim (b_n)$ if and only if, for every rational $\\epsilon > 0$, there is a [https://arbital.com/p/-45h](https://arbital.com/p/-45h) $N$ such that for all $n \\in \\mathbb{N}$ bigger than $N$, we have $|a_n - b_n| < \\epsilon$.\nThis is an equivalence relation (exercise).\n%%hidden(Show solution):\n- It is symmetric, because $|a_n - b_n| = |b_n - a_n|$.\n- It is reflexive, because $|a_n - a_n| = 0$ for every $n$, and this is $< \\epsilon$.\n- It is transitive, because if $|a_n - b_n| < \\frac{\\epsilon}{2}$ for sufficiently large $n$, and $|b_n - c_n| < \\frac{\\epsilon}{2}$ for sufficiently large $n$, then $|a_n - b_n| + |b_n - c_n| < \\frac{\\epsilon}{2} + \\frac{\\epsilon}{2} = \\epsilon$ for sufficiently large $n$; so by the [https://arbital.com/p/-triangle_inequality](https://arbital.com/p/-triangle_inequality), $|a_n - c_n| < \\epsilon$ for sufficiently large $n$.\n%%\n\nWrite $[a_n]$ for the equivalence class of $(a_n)_{n=1}^{\\infty}$.
(This is a slight abuse of notation, omitting the brackets that indicate that $a_n$ is actually a sequence rather than a rational number.)\n\nThe set of *real numbers* is the set of equivalence classes of $X$ under this equivalence relation, endowed with the following [totally ordered](https://arbital.com/p/540) [field](https://arbital.com/p/481) structure:\n\n- $[a_n] + [b_n] := [a_n + b_n]$\n- $[a_n] \\times [b_n] := [a_n \\times b_n]$\n- $[a_n] \\leq [b_n]$ if and only if $[a_n] = [b_n]$ or there is some $N$ such that for all $n > N$, $a_n \\leq b_n$.\n\nThis field structure is well-defined ([proof](https://arbital.com/p/51h)).\n\n# Examples\n\n- Any rational number $r$ may be viewed as a real number, being the class $[r]$ (formally, the equivalence class of the sequence $(r, r, \\dots)$).\n- The real number $\\pi$ is indeed a real number under this definition; it is represented by, for instance, $(3, 3.1, 3.14, 3.141, \\dots)$. It is also represented as $(100, 3, 3.1, 3.14, \\dots)$, along with many other possibilities.", "date_published": "2016-07-05T20:06:38Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": ["If we want to construct the real numbers in terms of simpler objects (such as the rationals), one way to do it is to take our putative real number and consider sequences of rational numbers which in some sense \"get closer and closer\" to the real number."], "tags": [], "alias": "50d"} {"id": "6baddf9f3aa6597d9668ba3c7cd09a61", "title": "Real number (as Dedekind cut)", "url": "https://arbital.com/p/real_number_as_dedekind_cut", "source": "arbital", "source_type": "text", "text": "%%comment: Mnemonics for defined macros: \\Ql = Q left, \\Qr = Q right, \\Qls = Q left strict, \\Qrs = Q right strict.%%\n\nThe rational numbers have a problem that makes them unsuitable for use in calculus — they have \"gaps\" in them. This may not be obvious or even make sense at first, because between any two rational numbers you can always find infinitely many other rational numbers. How could there be *gaps* in a set like that? $\\newcommand{\\rats}{\\mathbb{Q}} \\newcommand{\\Ql}{\\rats^\\le} \\newcommand{\\Qr}{\\rats^\\ge} \\newcommand{\\Qls}{\\rats^<} \\newcommand{\\Qrs}{\\rats^>}$\n$\\newcommand{\\set}[1]{\\left\\{#1\\right\\}} \\newcommand{\\sothat}{\\ |\\ }$ \n\nBut using the construction of [Dedekind cuts](https://arbital.com/p/dedekind_cut), we can suss out these gaps into plain view. A Dedekind cut of a [https://arbital.com/p/-540](https://arbital.com/p/-540) $S$ is a pair of sets $(A, B)$ such that:\n\n1. Every element of $S$ is in exactly one of $A$ or $B$. (That is, $(A, B)$ is a [partition](https://arbital.com/p/set_partition) of $S$.)\n2. Every element of $A$ is less than every element of $B$.\n3. Neither $A$ nor $B$ is [empty](https://arbital.com/p/empty_set).
(We'll see why this restriction matters in a moment.)\n\nOne example of such a cut might be the set where $A$ is the negative rational numbers and $B$ is the nonnegative rational numbers (positive or zero). We see that it satisfies the three properties of a Dedekind cut:\n\n1. Every rational number is either negative or nonnegative, but not both.\n2. Every rational number which is negative is less than a rational number that is nonnegative.\n3. There exists at least one negative rational number (e.g. $-1$) and one nonnegative rational number (e.g. $1$).\n\nIn fact, Dedekind cuts are intended to represent sets of rational numbers that are less than or greater than a specific real number (once we've defined them). To represent this, let's call them $\\Ql$ and $\\Qr$.\n\nKnowing this, why does it matter that neither set in a Dedekind cut is empty?\n\n%%hidden(Show solution): \nIf $\\Ql$ were empty, then we'd have a real number less than _all_ the rational numbers, which is $-\\infty$, which we don't want to define as a real number. Similarly, if $\\Qr$ were empty, then we'd get $+\\infty$.\n%%\n\n## Completion of a space\n\nIf a space is [complete](https://arbital.com/p/) (doesn't have any gaps in it), then in any Dedekind cut $(\\Ql, \\Qr)$, either $\\Ql$ will have a greatest element or $\\Qr$ will have a least element. (We can't have both at the same time — why?)\n\n%%hidden(Show solution):\nSuppose $\\Ql$ had a greatest element $q_u$ and $\\Qr$ had a least element $q_v$. We can't have $q_u = q_v$, because the same number would be in both sets. So then because the rational numbers are a dense space, there must exist a rational number $r$ so that $q_u < r < q_v$. Then $r$ is not in either $\\Ql$ or $\\Qr$, contradicting property 1 of a Dedekind cut.\n%%\n\nBut in the rational numbers, we can find a Dedekind cut where neither $\\Ql$ nor $\\Qr$ have a greatest or least element respectively.\n\nConsider the pair of sets $(\\Ql, \\Qr)$ where $\\Ql = \\set{x \\in \\rats \\mid x^3 \\le 2}$ and $\\Qr = \\set{x \\in \\rats \\mid x^3 \\ge 2}$.\n\n1. Every rational number has a cube either greater than 2 or less than 2, \n2. Because $f(x) = x^3$ is a [monotonically increasing](https://arbital.com/p/) function, we have that $p < q \\iff p^3 < q^3$, which means that every element in $\\Ql$ is less than every element in $\\Qr$.\n\nSo $(\\Ql, \\Qr)$ is a Dedekind cut. However, there is no rational number whose cube is *equal to* $2$, so $\\Ql$ has no greatest element and $\\Qr$ has no least element.\n\nThis represents a gap in the numbers, because we can invent a new number to place in that gap (in this case $\\sqrt[https://arbital.com/p/3](https://arbital.com/p/3){2}$), which is \"between\" any two numbers in $\\Ql$ and $\\Qr$.\n\n## Definition of real numbers\n\nBefore we move on, we will define one more structure that makes the construction more elegant. Define a *one-sided Dedekind cut* as any Dedekind cut $(\\Ql, \\Qr)$ with the additional property that the set $\\Ql$ has no greatest element (in which case we now call it $\\Qls$). The case where $\\Ql$ has a greatest element $q_g$ can be trivially transformed into the equivalent case on the other side by moving $q_g$ to $\\Qr$ where it is automatically the least element due to being less than any other element in $\\Qr$.\n\nThen we define the real numbers as the set of one-sided Dedekind cuts of the rational numbers.\n\n* A rational number $r$ is mapped to itself by the Dedekind cut where $r$ itself is the least element of $\\Qr$. 
(If the cuts weren't one-sided, $r$ would also be mapped to the set where $r$ was the greatest element of $\\Ql$, which would make the mapping non-unique.)\n* An irrational number $q$ is newly defined by the Dedekind cut where all the elements of $\\Qls$ are less than $q$ and all the elements of $\\Qr$ are (strictly) greater than $q$.\n\nNow we can define the [total order](https://arbital.com/p/549) $\\le$ for two real numbers $a = (\\Qls_a, \\Qr_a)$ and $b = (\\Qls_b, \\Qr_b)$ as follows: $a \\le b$ when $\\Qls_a \\subseteq \\Qls_b$. \n\nUsing this, we can show that unlike in the [Cauchy sequence definition](https://arbital.com/p/50d), we don't need to define any equivalence classes — every real number is uniquely defined by a one-sided Dedekind cut.\n\n%%hidden(Proof):\nIf $a = b$, then $a \\le b$ and $b \\le a$. By the definition of the order, we have that $\\Qls_a \\subseteq \\Qls_b$ and $\\Qls_b \\subseteq \\Qls_a$, which means that $\\Qls_a = \\Qls_b$, which means that the Dedekind cuts corresponding to $a$ and $b$ are also equal.\n%%", "date_published": "2016-07-23T17:41:20Z", "authors": ["Kevin Clancy", "Dylan Hendrickson", "Patrick Stevens", "Eric Bruylant", "Joe Zeng"], "summaries": ["The real numbers can be defined using [Dedekind cuts](https://arbital.com/p/dedekind_cut), which are [partitions](https://arbital.com/p/set_partition) of the rational numbers into two sets where every element in one set is less than every element in the other set. Each real number is uniquely defined by a one-sided Dedekind cut."], "tags": ["Stub"], "alias": "50g"} {"id": "1014914303b01d5efab54863af6b6c78", "title": "Currying", "url": "https://arbital.com/p/currying", "source": "arbital", "source_type": "text", "text": "Currying converts a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) of N [inputs](https://arbital.com/p/input) into a function of a single input, which returns a function that takes a second input, which returns a function that takes a third input, and so on, until N functions have been evaluated and N inputs have been taken, at which point the original function is evaluated on the given inputs and the result is returned.\n\nFor example:\n\nConsider a function of four parameters, $F:(X,Y,Z,N)→R$.\n\nThe curried version of the function, $curry(F)$ would have the type signature $X→(Y→(Z→(N→R)))$, and $curry(F)(4)(3)(2)(1)$ would equal $F(4,3,2,1)$.\n\nCurrying is named after the logician Haskell Curry. If you can't remember this derivation and need a mnemonic, you might like to imagine a function being cooked in a curry so thoroughly that it breaks up into small, bite-sized pieces, just as functional currying breaks a function up into smaller functions. (It might also help to note that there is a programming language also named after Haskell Curry (Haskell), that features currying prominently. All functions in Haskell are pre-curried by convention. 
This often makes [partially applying](https://wiki.haskell.org/Partial_application) functions effortless.)", "date_published": "2016-07-17T22:22:13Z", "authors": ["Eric Bruylant", "M Yass"], "summaries": [], "tags": [], "alias": "50p"} {"id": "74a291797985c4b5d7910419fc0790a2", "title": "Proof", "url": "https://arbital.com/p/proof_meta_tag", "source": "arbital", "source_type": "text", "text": "Proof pages provide [formal proofs](https://arbital.com/p/proof) about a statement, but don't motivate or explain them.", "date_published": "2016-07-02T21:33:01Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "50q"} {"id": "1c576c77c375e28e01bc8985dedd2c5c", "title": "External resources", "url": "https://arbital.com/p/external_resources_meta_tag", "source": "arbital", "source_type": "text", "text": "This lens links out to other great resources across the web. External resources lenses should never be the main lens, Arbital should always have at least a [https://arbital.com/p/-72](https://arbital.com/p/-72) with a popover [summary](https://arbital.com/p/1kl).", "date_published": "2016-07-02T21:53:46Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Stub"], "alias": "50r"} {"id": "89ca5bd32183ebdd4da99ad1b3d7c5a3", "title": "The set of rational numbers is countable", "url": "https://arbital.com/p/rationals_are_countable", "source": "arbital", "source_type": "text", "text": "The set $\\mathbb{Q}$ of [rational numbers](https://arbital.com/p/4zq) is countable: that is, there is a [bijection](https://arbital.com/p/499) between $\\mathbb{Q}$ and the set $\\mathbb{N}$ of [natural numbers](https://arbital.com/p/45h).\n\n# Proof\n\nBy the [Schröder-Bernstein theorem](https://arbital.com/p/cantor_schroeder_bernstein_theorem), it is enough to find an [injection](https://arbital.com/p/4b7) $\\mathbb{N} \\to \\mathbb{Q}$ and an injection $\\mathbb{Q} \\to \\mathbb{N}$.\n\nThe former is easy, because $\\mathbb{N}$ is a subset of $\\mathbb{Q}$ so the identity injection $n \\mapsto \\frac{n}{1}$ works.\n\nFor the latter, we may define a function $\\mathbb{Q} \\to \\mathbb{N}$ as follows.\nTake any rational in its lowest terms, as $\\frac{p}{q}$, say. 
%%note:That is, the [GCD](https://arbital.com/p/greatest_common_divisor) of the numerator $p$ and denominator $q$ is $1$.%%\nAt most one of $p$ and $q$ is negative (if both are negative, we may just cancel $-1$ from the top and bottom of the fraction); by multiplying by $\frac{-1}{-1}$ if necessary, assume without loss of generality that $q$ is positive.\nIf $p = 0$ then take $q = 1$.\n\nDefine $s$ to be $1$ if $p$ is positive or zero, and $2$ if $p$ is negative.\n\nThen produce the natural number $2^{|p|} 3^q 5^s$.\n\nThe function $f: \frac{p}{q} \mapsto 2^{|p|} 3^q 5^s$ is injective, because [prime factorisations are unique](https://arbital.com/p/fundamental_theorem_of_arithmetic) so if $f\left(\frac{p}{q}\right) = f \left(\frac{a}{b} \right)$ (with both fractions in their lowest terms, and $q$ positive) then $|p| = |a|, q=b$ and the sign of $p$ is equal to the sign of $a$.\nHence the two fractions were the same after all.", "date_published": "2016-07-03T07:47:06Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Proof"], "alias": "511"} {"id": "43fa9dc6204769091cd1bf8b453b88e2", "title": "Pi is irrational", "url": "https://arbital.com/p/pi_is_irrational", "source": "arbital", "source_type": "text", "text": "The number [$\pi$](https://arbital.com/p/49r) is not [rational](https://arbital.com/p/4zq).\n\n# Proof\n\nFor any fixed real number $q$, and any [natural number](https://arbital.com/p/45h) $n$, let $$A_n = \frac{q^n}{n!} \int_0^{\pi} [x(\pi-x)]^n \sin(x) dx$$\nwhere $n!$ is the [factorial](https://arbital.com/p/5bv) of $n$, $\int$ is the [definite integral](https://arbital.com/p/definite_integral), and $\sin$ is the [sine function](https://arbital.com/p/sin_function).\n\n## Preparatory work\n\nExercise: $A_n = (4n-2) q A_{n-1} - (q \pi)^2 A_{n-2}$.\n%%hidden(Show solution):\nWe use [integration by parts](https://arbital.com/p/integration_by_parts).\n\n\n%%\n\nNow, $$A_0 = \int_0^{\pi} \sin(x) dx = 2$$\nso $A_0$ is an integer.\n\nAlso $$A_1 = q \int_0^{\pi} x (\pi-x) \sin(x) dx$$ which by a simple calculation is $4q$.\n%%hidden(Show calculation):\nExpand the integrand and then integrate by parts repeatedly:\n$$\frac{A_1}{q} = \int_0^{\pi} x (\pi-x) \sin(x) dx = \pi \int_0^{\pi} x \sin(x) dx - \int_0^{\pi} x^2 \sin(x) dx$$\n\nThe first integral term is $$[-x\cos(x)]_0^{\pi} + \int_0^{\pi} \cos(x) dx = \pi$$\n\nThe second integral term is $$[-x^2\cos(x)]_{0}^{\pi} + \int_0^{\pi} 2x \cos(x) dx$$\nwhich is $$\pi^2 + 2 \left( [x\sin(x)]_0^{\pi} - \int_0^{\pi} \sin(x) dx \right)$$\nwhich is $$\pi^2 -4$$\n\nTherefore $$\frac{A_1}{q} = \pi^2 - (\pi^2 - 4) = 4$$\n%%\n\nTherefore, if $q$ and $q \pi$ are integers, then so is $A_n$ [inductively](https://arbital.com/p/5fz), because $(4n-2) q A_{n-1}$ is an integer and $(q \pi)^2 A_{n-2}$ is an integer.\n\nBut also $A_n \to 0$ as $n \to \infty$, because $\int_0^{\pi} [x(\pi-x)]^n \sin(x) dx$ is in modulus at most $$\pi \times \max_{0 \leq x \leq \pi} [x(\pi-x)]^n \sin(x) \leq \pi \times \max_{0 \leq x \leq \pi} [x(\pi-x)]^n = \pi \times \left[\frac{\pi^2}{4}\right]^n$$\nand hence $$|A_n| \leq \frac{\pi}{n!} \left[\frac{\pi^2 q}{4}\right]^n$$\n\nFor $n$ larger than $\frac{\pi^2 q}{4}$, this expression is getting smaller with $n$, and moreover it gets smaller faster and faster as $n$ increases; so its limit is $0$.\n%%hidden(Formal treatment):\nWe claim that $\frac{r^n}{n!} \to 0$ as $n \to \infty$, for any $r > 0$.\n\nIndeed, we have $$\frac{r^{n+1}/(n+1)!}{r^n/n!} = \frac{r}{n+1}$$\nwhich, for $n > 2r-1$, is less than $\frac{1}{2}$.\nTherefore the ratio between successive terms is less than $\frac{1}{2}$ for sufficiently large $n$, and so the sequence must shrink at least geometrically to $0$.\n%%\n\n## Conclusion\n\nSuppose (for [contradiction](https://arbital.com/p/46z)) that $\pi$ is rational; then it is $\frac{p}{q}$ for some integers $p, q$.\n\nNow $q \pi$ is an integer (indeed, it is $p$), and $q$ is certainly an integer, so by what we showed above, $A_n$ is an integer for all $n$.\n\nBut $A_n \to 0$ as $n \to \infty$, so there is some $N$ for which $|A_n| < \frac{1}{2}$ for all $n > N$; hence for all sufficiently large $n$, $A_n$ is $0$.\nWe already know that $A_0 = 2$ and $A_1 = 4q$, neither of which is $0$; so let $N$ be the first integer such that $A_n = 0$ for all $n \geq N$, and we can already note that $N > 1$.\n\nThen $$0 = A_{N+1} = (4N+2) q A_N - (q \pi)^2 A_{N-1} = - (q \pi)^2 A_{N-1}$$\nwhence $q=0$ or $\pi = 0$ or $A_{N-1} = 0$.\n\nCertainly $q \not = 0$ because $q$ is the denominator of a fraction; and $\pi \not = 0$ by whatever definition of $\pi$ we care to use.\nBut also $A_{N-1}$ is not $0$ because then $N-1$ would be an integer $m$ such that $A_n = 0$ for all $n \geq m$, and that contradicts the definition of $N$ as the *least* such integer.\n\nWe have obtained the required contradiction; so it must be the case that $\pi$ is irrational.", "date_published": "2016-07-21T17:36:18Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Proof"], "alias": "513"} {"id": "34470c5622365830ce3ecabe10cb84a7", "title": "Arithmetic of rational numbers (Math 0)", "url": "https://arbital.com/p/arithmetic_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "Now that we've seen [what the rational numbers are](https://arbital.com/p/4zx), we'll move on to working out how to combine them.\n\nThere are four main ways to combine rational numbers, and collectively they are called \"arithmetic\":\n\n- [Addition](https://arbital.com/p/55m)\n- [Subtraction](https://arbital.com/p/56x)\n- [Multiplication](https://arbital.com/p/59s)\n- [Division](https://arbital.com/p/5jd)\n\nAdditionally, we can *[compare](https://arbital.com/p/5pk)* rational numbers with each other; and we can show that all of these operations [play nicely together](https://arbital.com/p/5pm) in a certain sense.\n\nThis page is the start of an arc which will go through all of these ideas.\nIn the process, we'll get you familiar with the notation that we use for rational numbers; it's a very convenient shorthand once you've learnt [the conventions](https://arbital.com/p/5q5).", "date_published": "2016-08-01T06:15:58Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["We can take two [rational numbers](https://arbital.com/p/4zq) and [add](https://arbital.com/p/addition) them together, just as we can take two [*natural* numbers](https://arbital.com/p/45h) and add them together; and all the other operations of the integers can be viewed in the rationals too."], "tags": ["Math 0"], "alias": "514"} 
{"id": "0cb1e8ec1c0fd14e91722a8ed5855c31", "title": "Field structure of rational numbers", "url": "https://arbital.com/p/field_structure_of_rational_numbers", "source": "arbital", "source_type": "text", "text": "The [rational numbers](https://arbital.com/p/4zq), being the [https://arbital.com/p/-field_of_fractions](https://arbital.com/p/-field_of_fractions) of the [integers](https://arbital.com/p/48l), have the following [field](https://arbital.com/p/481) structure:\n\n- Addition is given by $\\frac{a}{b} + \\frac{p}{q} = \\frac{aq+bp}{bq}$\n- Multiplication is given by $\\frac{a}{b} \\frac{c}{d} = \\frac{ac}{bd}$\n- The identity under addition is $\\frac{0}{1}$\n- The identity under multiplication is $\\frac{1}{1}$\n- The additive inverse of $\\frac{a}{b}$ is $\\frac{-a}{b}$\n- The multiplicative inverse of $\\frac{a}{b}$ (where $a \\not = 0$) is $\\frac{b}{a}$.\n\nIt additionally inherits a [total ordering](https://arbital.com/p/540) which respects the field structure: $0 < \\frac{c}{d}$ if and only if $c$ and $d$ are both positive or $c$ and $d$ are both negative.\nAll other information about the ordering can be derived from this fact: $\\frac{a}{b} < \\frac{c}{d}$ if and only if $0 < \\frac{c}{d} - \\frac{a}{b}$.", "date_published": "2016-07-29T18:39:26Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Formal definition"], "alias": "516"} {"id": "74944b5836cb84c030ef46a0c1611635", "title": "Elementary Algebra", "url": "https://arbital.com/p/elementary_algebra", "source": "arbital", "source_type": "text", "text": "How do we describe relations between different things? How can we figure out new true things from true things we already know?\nHow can we find and think about patterns we notice with numbers? Algebra is a unified framework for thinking about these questions, and gives us lots of tools to help us answer them, which work in an extremely wide variety of situations. \n\n#Equations\nAlgebra is based on the arithmetic of numbers, and the relations between them. \nLet's look at a basic equation: $$2 + 2 = 4$$\nThe 'equals' sign (=) tells us that both sides of the equation are actually the same. If we have two, and add two more to it, then we'll have four. \n\nSome other relations are 'less than' (<), for example $2 < 4$. or 'greater than' (>), for example $5 > 1$. \n\n##The right order\nIn equations, parenthesis tell us the right order to do things - things inside of parenthesis have to be done before the rest. This is important because doing things in different orders can give us different answers!\n$$2 + (3 \\times 4) = 14$$\n$$(2 + 3) \\times 4 = 20$$\n\nIt's annoying to have to use parentheses all the time (though it might be helpful if you find yourself getting confused about something). It would be nice if we could just write $2 + 3 \\times 4$ and have everyone know that we meant $2 + (3 \\times 4)$ . There's a standard [order of operations](https://arbital.com/p/54s) that everyone uses so that we don't have to use too many parentheses. \n\n\n\n#Balancing the truth\nIf we know one equation, what are some ways we could get _new_ equations from it, that will still be true? \nWe could make a change to one side, but then the equation would stop being true... unless we did the same change to the other side also! For example, we know $2+2=4$. What if we add three to both sides? If we check \n$(2 + 2) + 3 = 4 + 3$, we can see that it's still true! \n\n##Substitution \nOne way of getting new true things is really important. 
If we know two things are the same, we can always substitute one of them for the other, and this automatically will give us an equality relation between the two things! \n\nThis is really helpful for breaking down calculations into manageable pieces. For example, if we want to calculate $2^3 \\times 2^4$, we can first expand $2^3 = 2 \\times 2 \\times 2$, and then calculate $2 \\times 2 \\times 2 = 8$. Now, we combine these last two equations to see that $2^3 = 8$. Often, people will do both of these in one step, so if you ever are having a hard time figuring out how someone got an equation, you can try breaking it down like this to see if that helps. Next, we can do the same thing to see that $2^4 = 2 \\times 2 \\times 2 \\times 2 = 16$. We can then substitute both of these back into the original expression, to get $2^3 \\times 2^4 = 8 \\times 16$. One final calculation lets us see that $2^3\\times 2^4 = 128$. \n\n#Naming numbers\nWhat are all those letters doing in math, anyway?\n\nWhen you first learn someone's name, do you know everything there is to know about them? Sometimes, we know that there's _some_ number that fits in an equation, but we don't know what particular number it is yet. It's really helpful to be able to talk about the number anyway, in fact, giving it a name is often the first step to figuring out what it really is. \n\nThis kind of name is called a **variable**.\n\n## Doing lots of things at once\nAnother way names are useful is if we want to say something about lots of numbers at once!\nFor example, you might notice that $0 \\times 3 = 0$, $0 \\times -4 = 0$, $0 \\times 1224 = 0$ and so on. In fact, it's true that zero times _any_ number is zero. We could write this as $0 \\times \\text{any number} = 0$. Or if we need to keep referring to that number, we could give it a shorter name, while mentioning it could be any number: $0 \\times x = 0$, where $x$ can be any number. This is really useful because it allows us to express patterns much more easily!\n\n#Important patterns\nThese patterns hold for all [natural numbers](https://arbital.com/p/45h), [integers](https://arbital.com/p/48l), and [rational numbers](https://arbital.com/p/4zq), including for variables that are known to be one of these types of numbers, and much more! For an in-depth exploration of these patterns and their consequences, see the page on [rings](https://arbital.com/p/3gq). 
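As a quick, purely illustrative sketch (not part of the original page; Python and the specific sample values are my additions), the patterns listed below can be spot-checked on concrete numbers of each kind mentioned above:

```python
from fractions import Fraction
from itertools import product

# Sample values drawn from the number types mentioned above:
# natural numbers, integers, and rational numbers.
samples = [0, 1, 3, -4, Fraction(2, 3), Fraction(-7, 5)]

for a, b, c in product(samples, repeat=3):
    assert a + b == b + a                    # commutativity of +
    assert a * b == b * a                    # commutativity of *
    assert 0 + a == a and 1 * a == a         # identities
    assert (a + b) + c == a + (b + c)        # associativity of +
    assert (a * b) * c == a * (b * c)        # associativity of *
    assert a * (b + c) == a * b + a * c      # distributivity
    assert a + (-a) == 0                     # additive inverse

print("All the listed patterns hold on these samples.")
```

Checking finitely many cases is not a proof, of course; the patterns below hold for *all* such numbers.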
\n##Commutativity\n$$ a + b = b + a$$\n$$ a \times b = b\times a$$\n##Identity\n$$ 0 + a = a$$\n$$ 1 \times a = a$$\n##Associativity\n$$ (a + b) + c = a + (b + c)$$\n$$ (a \times b ) \times c = a \times (b\times c)$$\n##Distributivity\n$$ a \times (b + c) = a\times b + a\times c$$\n##Additive inverse\n$$ a + (-a) = a - a = 0 $$\n\n\n#Next steps\n* [Solving equations](https://arbital.com/p/Solving_equations)\n* [Functions](https://arbital.com/p/3jy)", "date_published": "2016-08-06T13:10:55Z", "authors": ["Kevin Clancy", "Alexei Andreev", "Gabriel Sjöstrand", "Adele Lopez", "Eric Bruylant"], "summaries": [], "tags": ["C-Class"], "alias": "517"} {"id": "bac452730468dbff7b9a92750a24e345", "title": "The reals (constructed as classes of Cauchy sequences of rationals) form a field", "url": "https://arbital.com/p/reals_as_classes_of_cauchy_sequences_form_a_field", "source": "arbital", "source_type": "text", "text": "The real numbers, when [constructed as equivalence classes of Cauchy sequences](https://arbital.com/p/50d) of [rationals](https://arbital.com/p/4zq), form a [totally ordered](https://arbital.com/p/540) [field](https://arbital.com/p/481), with the inherited field structure given by \n\n- $[a_n] + [b_n] = [a_n+b_n]$\n- $[a_n] \times [b_n] = [a_n \times b_n]$\n- $[a_n] \leq [b_n]$ if and only if either $[a_n] = [b_n]$ or for sufficiently large $n$, $a_n \leq b_n$.\n\n# Proof\n\nFirstly, we need to show that those operations are even well-defined: that is, if we pick two different representatives $(x_n)_{n=1}^{\infty}$ and $(y_n)_{n=1}^{\infty}$ of the same equivalence class $[x_n] = [y_n]$, we don't somehow get different answers.\n\n## Well-definedness of $+$\n\nWe wish to show that $[x_n]+[a_n] = [y_n] + [b_n]$ whenever $[x_n] = [y_n]$ and $[a_n] = [b_n]$; this is an exercise.\n%%hidden(Show solution):\nSince $[x_n] = [y_n]$, it must be the case that both $(x_n)$ and $(y_n)$ are Cauchy sequences such that $x_n - y_n \to 0$ as $n \to \infty$.\nSimilarly, $a_n - b_n \to 0$ as $n \to \infty$.\n\nWe require $[x_n+a_n] = [y_n+b_n]$; that is, we require $x_n+a_n - y_n-b_n \to 0$ as $n \to \infty$.\n\nBut this is true: if we fix rational $\epsilon > 0$, we can find $N_1$ such that for all $n > N_1$, we have $|x_n - y_n| < \frac{\epsilon}{2}$; and we can find $N_2$ such that for all $n > N_2$, we have $|a_n - b_n| < \frac{\epsilon}{2}$.\nLetting $N$ be the maximum of the two $N_1, N_2$, we have that for all $n > N$, $|x_n + a_n - y_n - b_n| \leq |x_n - y_n| + |a_n - b_n|$ by the [triangle inequality](https://arbital.com/p/triangle_inequality), and hence $\leq \epsilon$.\n%% \n\n## Well-definedness of $\times$\n\nWe wish to show that $[x_n] \times [a_n] = [y_n] \times [b_n]$ whenever $[x_n] = [y_n]$ and $[a_n] = [b_n]$; this is also an exercise.\n%%hidden(Show solution):\nWe require $[x_n a_n] = [y_n b_n]$; that is, $x_n a_n - y_n b_n \to 0$ as $n \to \infty$.\n\nLet $\epsilon > 0$ be rational.\nThen $$|x_n a_n - y_n b_n| = |x_n (a_n - b_n) + b_n (x_n - y_n)|$$ using the very handy trick of adding the expression $x_n b_n - x_n b_n$.\n\nBy the triangle inequality, this is $\leq |x_n| |a_n - b_n| + |b_n| |x_n - y_n|$.\n\nWe now use the fact that [Cauchy sequences are bounded](https://arbital.com/p/cauchy_sequences_are_bounded), to extract some $B$ such that $|x_n| < B$ and $|b_n| < B$ for all $n$;\nthen our expression is less than $B (|a_n - b_n| + |x_n - y_n|)$.\n\nFinally, for $n$ sufficiently large we have $|a_n - b_n| < \frac{\epsilon}{2 B}$, and similarly for $x_n$ and $y_n$, so the result follows that $|x_n a_n - y_n b_n| < \epsilon$.\n%%\n\n## Well-definedness of $\leq$\n\nWe wish to show that if $[a_n] = [c_n]$ and $[b_n] = [d_n]$, then $[a_n] \leq [b_n]$ implies $[c_n] \leq [d_n]$.\n\nSuppose $[a_n] \leq [b_n]$, but suppose for [contradiction](https://arbital.com/p/46z) that $[c_n]$ is not $\leq [d_n]$: that is, $[c_n] \not = [d_n]$ and there are arbitrarily large $n$ such that $c_n > d_n$.\nThen there are two cases.\n\n- If $[a_n] = [b_n]$ then $[d_n] = [b_n] = [a_n] = [c_n]$, so the result follows immediately. %%note:We didn't need the extra assumption that $[c_n] \not \leq [d_n]$ here.%%\n- If for all sufficiently large $n$ we have $a_n \leq b_n$, then note that since $[c_n] \not = [d_n]$, the Cauchy sequence $(c_n - d_n)$ does not tend to $0$, so there is some rational $\delta > 0$ such that $|c_n - d_n| > \delta$ for all sufficiently large $n$; and since there are arbitrarily large $n$ with $c_n > d_n$, in fact $c_n - d_n > \delta$ for all sufficiently large $n$. But $c_n - d_n = (c_n - a_n) + (a_n - b_n) + (b_n - d_n) \leq (c_n - a_n) + (b_n - d_n)$, which tends to $0$; this is a contradiction.\n\n## Additive [commutative](https://arbital.com/p/3jb) [group structure](https://arbital.com/p/3gd) on $\mathbb{R}$\n\nThe additive identity is $[0]$ (formally, the equivalence class of the sequence $(0, 0, \dots)$).\nIndeed, $[a_n] + [0] = [a_n+0] = [a_n]$.\n\nThe additive inverse of the element $[a_n]$ is $[-a_n]$, because $[a_n] + [-a_n] = [a_n-a_n] = [0]$.\n\nThe operation is commutative: $[a_n] + [b_n] = [a_n+b_n] = [b_n+a_n] = [b_n] + [a_n]$.\n\nThe operation is closed, because the sum of two Cauchy sequences is a Cauchy sequence (exercise).\n%%hidden(Show solution):\nIf $(a_n)$ and $(b_n)$ are Cauchy sequences, then let $\epsilon > 0$.\nWe wish to show that there is $N$ such that for all $n, m > N$, we have $|a_n+b_n - a_m - b_m| < \epsilon$.\n\nBut $|a_n+b_n - a_m - b_m| \leq |a_n - a_m| + |b_n - b_m|$ by the triangle inequality; so picking $N$ so that $|a_n - a_m| < \frac{\epsilon}{2}$ and $|b_n - b_m| < \frac{\epsilon}{2}$ for all $n, m > N$, the result follows.\n%%\n\nThe operation is associative: $$[a_n] + ([b_n] + [c_n]) = [a_n] + [b_n+c_n] = [a_n+b_n+c_n] = [a_n+b_n] + [c_n] = ([a_n]+[b_n])+[c_n]$$\n\n## [Ring structure](https://arbital.com/p/3gq)\n\nThe multiplicative identity is $[1]$ (formally, the equivalence class of the sequence $(1,1, \dots)$).\nIndeed, $[a_n] \times [1] = [a_n \times 1] = [a_n]$.\n\n$\times$ is closed, because the product of two Cauchy sequences is a Cauchy sequence (exercise).\n%%hidden(Show solution):\nIf $(a_n)$ and $(b_n)$ are Cauchy sequences, then let $\epsilon > 0$.\nWe wish to show that there is $N$ such that for all $n, m > N$, we have $|a_n b_n - a_m b_m| < \epsilon$.\n\nBut $$|a_n b_n - a_m b_m| = |a_n (b_n - b_m) + b_m (a_n - a_m)| \leq |b_m| |a_n - a_m| + |a_n| |b_n - b_m|$$ by the triangle inequality.\n\nCauchy sequences are bounded, so there is $B$ such that $|a_n|$ and $|b_m|$ are both less than $B$ for all $n$ and $m$.\n\nSo picking $N$ so that $|a_n - a_m| < \frac{\epsilon}{2B}$ and $|b_n - b_m| < \frac{\epsilon}{2B}$ for all $n, m > N$, the result follows.\n%%\n\n$\times$ is clearly commutative: $[a_n] \times [b_n] = [a_n \times b_n] = [b_n \times a_n] = [b_n] \times [a_n]$.\n\n$\times$ is associative: $$[a_n] \times ([b_n] \times [c_n]) = [a_n] \times [b_n \times c_n] = [a_n \times b_n \times c_n] = [a_n \times b_n] \times [c_n] = ([a_n] \times [b_n]) \times [c_n]$$\n\n$\times$ distributes over $+$: we need to show that $[x_n] \times ([a_n]+[b_n]) = [x_n] \times [a_n] + [x_n] \times [b_n]$.\nBut this is true: $$[x_n] \times ([a_n]+[b_n]) = [x_n] \times [a_n+b_n] = [x_n \times (a_n+b_n)] = [x_n \times a_n + x_n \times b_n] = [x_n \times a_n] + [x_n \times b_n] = [x_n] \times [a_n] + [x_n] \times [b_n]$$\n\n## Field structure\n\nTo get from a ring to a field, it is necessary and sufficient to find a multiplicative inverse for any $[a_n]$ not equal to $[0]$.\n\nSince $[a_n] \not = [0]$, there is some $N$ such that for all $n > N$, $a_n \not = 0$.\nThen defining the sequence $b_i = 1$ for $i \leq N$, and $b_i = \frac{1}{a_i}$ for $i > N$, we obtain a sequence which induces an element $[b_n]$ of $\mathbb{R}$; and it is easy to check that $[a_n] [b_n] = [1]$.\n%%hidden(Show solution):\n$[a_n] [b_n] = [a_n b_n]$; but the sequence $(a_n b_n)$ is $1$ for all $n > N$, and so it lies in the same equivalence class as the sequence $(1, 1, \dots)$.\n%%\n\n## Ordering on the field\n\nWe need to show that:\n\n- if $[a_n] \leq [b_n]$, then for every $[c_n]$ we have $[a_n] + [c_n] \leq [b_n] + [c_n]$;\n- if $[0] \leq [a_n]$ and $[0] \leq [b_n]$, then $[0] \leq [a_n] \times [b_n]$.\n\nWe may assume that the inequalities are strict, because if equality holds in the assumption then everything is obvious.\n%%hidden(Show obvious bits):\nIf $[a_n] = [b_n]$, then for every $[c_n]$ we have $[a_n] + [c_n] = [b_n] + [c_n]$ by well-definedness of addition.\nTherefore $[a_n] + [c_n] \leq [b_n] + [c_n]$.\n\nIf $[0] = [a_n]$ and $[0] \leq [b_n]$, then $[0] = [0] \times [b_n] = [a_n] \times [b_n]$, so it is certainly true that $[0] \leq [a_n] \times [b_n]$.\n%%\n\nFor the former: suppose $[a_n] < [b_n]$, and let $[c_n]$ be an arbitrary equivalence class.\nThen $[a_n] + [c_n] = [a_n+c_n]$; $[b_n] + [c_n] = [b_n+c_n]$; but we have $a_n + c_n \leq b_n + c_n$ for all sufficiently large $n$, because $a_n \leq b_n$ for sufficiently large $n$.\nTherefore $[a_n] + [c_n] \leq [b_n] + [c_n]$, as required.\n\nFor the latter: suppose $[0] < [a_n]$ and $[0] < [b_n]$.\nThen for sufficiently large $n$, we have both $a_n$ and $b_n$ are positive; so for sufficiently large $n$, we have $a_n b_n \geq 0$.\nBut that is just saying that $[0] \leq [a_n] \times [b_n]$, as required.", "date_published": "2016-07-05T20:06:56Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Proof"], "alias": "51h"} {"id": "798a8b9c3590d2d52670db453e194679", "title": "Likelihood notation", "url": "https://arbital.com/p/likelihood_notation", "source": "arbital", "source_type": "text", "text": "The likelihood of a piece of evidence $e$ according to a hypothesis $H,$ known as \"the likelihood of $e$ given $H$\", is often written either $\mathcal L_e(H)$ or $\mathcal L(H \mid e).$ The latter notation is confusing, because then $\mathcal L(H \mid e) = \mathbb P(e \mid H).$ Many students of statistics find it hard enough to keep the difference between $\mathbb P(H \mid e)$ and $\mathbb P(e \mid H)$ straight in their heads if we _don't_ occasionally swap the order of the arguments when talking about similar functions, so on Arbital, we much prefer the notation $\mathcal L_e(H) = \mathbb P(e \mid H).$\n\n[Make this a child of 'likelihood' when 'likelihood' exists.](https://arbital.com/p/fixme:)", "date_published": "2016-07-07T03:16:06Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Start", "Non-standard terminology", "Definition"], "alias": "51n"} {"id": "38c55fd6224ae5dbb9029cd067bde90a", "title": "Modal logic", "url": "https://arbital.com/p/modal_logic", "source": "arbital", "source_type": "text", "text": "Modal logic is a [formal system](https://arbital.com/p/formal_system) in which we expand [propositional calculus](https://arbital.com/p/propositional_calculus) with two new operators meant to represent necessity ($\square$) and possibility ($\diamond$).%%note:There are more [general modal logics](https://arbital.com/p/) with different modal predicates out there.%%\n\nThose two operators can be defined in terms of each other. For example, we have that something is possible iff it is not necessary that its opposite is true; i.e., $\diamond A \iff \neg \square \neg A$.\n\nThus in modal logic, you can express sentences as $\square A$ to express that it is necessary that $A$ is true. You can also go more abstract and say $\neg\square \bot$ to express that it is not necessary that false is true. 
If we read the box operator as \"there is no mathematical proof of\" then the previous sentence becomes \"there is no mathematical proof of falsehood\", which encodes the [consistency of arithmetic](https://arbital.com/p/).\n\nThere are many systems of modal logic %note: See for example [T](https://arbital.com/p/t_modal_logic),[B](https://arbital.com/p/b_modal_logic),[S5](https://arbital.com/p/s5_modal_logic),[S4](https://arbital.com/p/s4_modal_logic),[K](https://arbital.com/p/k_modal_logic), [GLS](https://arbital.com/p/gls_modal_logic)%, which differ in the sentences they take as [https://arbital.com/p/-axioms](https://arbital.com/p/-axioms) and which [rules of inference](https://arbital.com/p/) they allow.\n\nOf particular interest are the systems called [normal systems of modal logic](https://arbital.com/p/5j8), and especially the [$Gödel-Löb$](https://arbital.com/p/5l3) system (called $GL$ for short, also known as the logic of provability). The main interest of $GL$ comes from being a [decidable](https://arbital.com/p/) logic which allows us to reason about sentences of [Peano arithmetic](https://arbital.com/p/3ft) talking about [provability](https://arbital.com/p/5gt) via [translations](https://arbital.com/p/), as proved by the [arithmetical adequacy theorems of Solomonov](https://arbital.com/p/).\n\nAnother widely studied system of modal logic is $GLS$, for which Solomonov proved [adequacy for truth](https://arbital.com/p/) in the [https://arbital.com/p/-standard_model_of_arithmetic](https://arbital.com/p/-standard_model_of_arithmetic).\n\nThe [semantics](https://arbital.com/p/semantics_mathematical) of the normal systems of modal logic come in the form of [https://arbital.com/p/Kripke_models](https://arbital.com/p/Kripke_models): digraph structures composed of worlds over which a visibility relation is defined.\n\n##Sentences\nWell formed sentences of modal logic are defined as follows:\n\n* $\\bot$ is a well-formed sentence of modal logic.\n* The sentence letters $p,q,...$ are well-formed sentences of modal logic.\n* If $A$ is a well-formed sentence, then $\\square A$ is a well-formed sentence.\n* If $A$ and $B$ are well formed sentences, then $A\\to B$ is a well-formed sentence.\n\nThe rest of [logical connectors](https://arbital.com/p/) can then be defined as abbreviations for structures made from implication and $\\bot$. For example , $\\neg A = A\\to \\bot$. The possibility operator $\\diamond$ is defined as $\\neg\\square\\neg$ as per the previous discussion.", "date_published": "2016-07-27T19:13:12Z", "authors": ["Eric Bruylant", "Jaime Sevilla Molina", "Patrick LaVictoire"], "summaries": [], "tags": ["Start"], "alias": "534"} {"id": "6b1f182b0095ca076e7b66a3b5452e36", "title": "Integers: Intro (Math 0)", "url": "https://arbital.com/p/integers_intro_math0", "source": "arbital", "source_type": "text", "text": "The integers are an extension of the [natural numbers](https://arbital.com/p/45h) into the *negatives*.\n\n## Negative numbers\n\nNegative numbers are numbers less than zero. They are notated as numbers with a minus sign in front of them, like $-2$ or $-15$ or $-6387$.\n\nWhen many people are first introduced to the concept of negative numbers, one question they have is, \"how do negative numbers exist? How can we have less than nothing of something?\"\n\nUsually this question is answered with something about travelling along the number line and then going in the opposite direction past zero. 
But we can also answer this question from the traditional perspective of quantities, using _antimatter_.\n\n## Arithmetic with anti-cows\n\nMatter and antimatter are substances that annihilate each other — they cancel each other out. In the same way, positive and negative quantities cancel each other out to some extent when added together.\n\n\n\nLet's create a physical example of this antimatter phenomenon, and consider a collection of cows. [diagram of cows](https://arbital.com/p/comment:) If a natural number is how many cows we have, then negative numbers are the presence of *anti-cows*. [For the sake of easy reading, let's colour a cow red, and an anti-cow blue: diagram of red cow = cow, blue cow = anticow](https://arbital.com/p/comment:) For example, if you have $3$ anti-cows, you have $-3$ cows. If you have $128$ anti-cows, you have $-128$ cows, and so on.\n\nWhen a cow and an anti-cow come into contact, they annihilate each other and nothing is left (except technically an enormous burst of energy that could cause a magnitude 9 earthquake, but we won't get into the nitty-gritty physics of antimatter annihilation here). [diagram of cow/anti-cow annihilation](https://arbital.com/p/comment:) Moreover, each cow will annihilate exactly one anti-cow, and vice versa. \n\nThis lets us perform subtraction by adding anti-cows instead of taking away cows. If you want to perform $6 - 4$, you imagine you have a collection of six cows, and you add four anti-cows to the mix, which annihilate four of the cows leaving you with two. This is exactly the same as if you'd taken away four cows (except that you've now released enough energy to power the entire world's energy demands for seven months). [diagram of 6 - 4](https://arbital.com/p/comment:)\n\nThis form of adding negatives is slightly more flexible than regular subtraction. In regular subtraction, you couldn't take away more cows than there already were. But when you add negatives, you can keep adding anti-cows after there are no cows left, at which point you just have more and more anti-cows, giving you a more and more negative number. So if you wanted to subtract $4 - 6$, you couldn't take away six cows from four, but you could add six anti-cows, and four of them would annihilate the four cows, leaving you with two anti-cows or a final answer of $-2$. [diagram of 4 - 6](https://arbital.com/p/comment:)\n\n## Useful properties of negative numbers\n\nExtending the natural numbers into negatives gives us lots of useful properties when doing arithmetic.\n\n### Addition of negative numbers\n\nNegative numbers add up just like positive ones. If you have $-3$ and $-7$, they add up to $-10$, just like three anti-cows and seven anti-cows would come together to make ten anti-cows. [diagram of ](https://arbital.com/p/comment:)\n\nKnowing this, how would you calculate $(-6) + (-8) + (-12) + (-20)$?\n\n%%hidden(Show solution):\nSimply add up all the negative quantities together. 
$$6 + 8 + 12 + 20 = 46 \\to (-6) + (-8) + (-12) + (-20) = -46$$\n%%\n\n### Subtraction as additive inversion\n\nIf we represent a natural number as a number of cows, then the same number of anti-cows is called the *additive inverse* of that number, because a number and its additive inverse add up to zero (equal numbers of cows and anti-cows will all annihilate each other to give nothing), which is the \"additive [identity](https://arbital.com/p/54p)\" (or \"the number that changes nothing when you add it to something\").\n\nUsing additive inverses allows us to express subtraction in the form of addition — for example, $5 - 2$ can be rewritten as $5 + (-2)$ (which just means that instead of taking away two cows, you're adding two anti-cows). When we do this, suddenly we can rearrange subtraction operations along with addition operations even though subtraction isn't [commutative](https://arbital.com/p/3jb).\n\nThen, if you have a giant addition and subtraction problem you want to solve, you can add up all the numbers you want to add (into a big collection of cows), then add up all the numbers you want to subtract (into a big collection of anti-cows), and just do one subtraction (annihilating cow/anti-cow pairs) to get your answer. For example, $6 - 2 + 7 - 5$ is really just $6 + (-2) + 7 + (-5)$ which is equal to $6 + 7 + (-5) + (-2)$. Then, you add up the number of cows (13) and anti-cows (7), and annihilate them to get the final answer of $6$.\n\nHere's another exercise for you: How would you calculate $13 + 8 - 5 + 6 + 4 - 12 - 9 + 1$?\n\n%%hidden(Show solution):\nFirst, rewrite each subtraction operation as addition of a negative number: $$13 + 8 + (-5) + 6 + 4 + (-12) + (-9) + 1$$\n\nThen rearrange the equation to group all the positive and negative numbers (cows and anti-cows) together: $$13 + 8 + 6 + 4 + 1 + (-5) + (-12) + (-9)$$\n\nThen add up all the positive and negative numbers separately: $$(13 + 8 + 6 + 4 + 1) + ((-5) + (-12) + (-9)) = 32 + (-26)$$\n\nAnd finally, subtract it all at once: $$32 + (-26) = 32 - 26 = 6$$\n\nAnd voilà, you have six cows.\n%%\n\nWhat about $8 - 6 + 4 - 13 + 7 - 5 - 9 + 12$?\n\n%%hidden(Show solution):\nSame as before, we rewrite, regroup, add up, and subtract:\n\n$$8 + (-6) + 4 + (- 13) + 7 + (- 5) + (- 9) + 12 \\\\ \\Downarrow$$\n\n$$8 + 4 + 7 + 12 + (-6) + (-13) + (-5) + (-9) \\\\ \\Downarrow$$ \n\n$$(8 + 4 + 7 + 12) + ((-6) + (-13) + (-5) + (-9)) = 31 + (-33)$$\n\n$$31 + (-33) = 31 - 33$$\n\nBut hold on — now you have more anti-cows than cows, so you can't subtract directly! Not to worry — simply flip the subtraction problem around, and subtract the cows from the anti-cows, knowing that the final result will be negative instead.\n\n$$31 - 33 = -(33 - 31) = -2$$\n%%\n\n## The set of integers\n\nBack to defining the integers. The set of integers is simply the set of natural numbers and their additive inverses — i.e. the set of numbers of cows you can have if you can have anti-cows as well. This set branches out to infinity on both sides, and we usually write it as $\\{ \\ldots, -2, -1, 0, 1, 2, \\ldots \\}$.", "date_published": "2016-07-07T15:30:38Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": ["The integers are an extension of the [natural numbers](https://arbital.com/p/45h) into the *negatives*.\n\nIn this article we'll be exploring a novel way to think of negative numbers, using collections of cows and anti-cows. 
*Mooooo!*"], "tags": ["Math 0", "C-Class"], "alias": "53r"} {"id": "093e46b309e9643f101e6f85e5bf4d7f", "title": "The reals (constructed as Dedekind cuts) form a field", "url": "https://arbital.com/p/reals_as_dedekind_cuts_form_a_field", "source": "arbital", "source_type": "text", "text": "The real numbers, when [constructed as Dedekind cuts](https://arbital.com/p/50g) over the [rationals](https://arbital.com/p/4zq), form a [field](https://arbital.com/p/481).\n\nWe shall often write the one-sided [Dedekind cut](https://arbital.com/p/dedekind_cut) $(A, B)$ %%note:Recall: \"one-sided\" means that $A$ has no greatest element.%% as simply $\mathbf{A}$ (using bold face for Dedekind cuts); we can do this because if we already know $A$ then $B$ is completely determined.\nThis will make our notation less messy.\n\nThe field structure, together with the [total ordering](https://arbital.com/p/total_order) on it, is as follows (where we write $\mathbf{0}$ for the Dedekind cut $(\{ r \in \mathbb{Q} \mid r < 0\}, \{ r \in \mathbb{Q} \mid r \geq 0 \})$): \n\n- $(A, B) + (C, D) = (A+C, B+D)$\n- $\mathbf{A} \leq \mathbf{C}$ if and only if everything in $A$ lies in $C$.\n- Multiplication is somewhat complicated.\n - If $\mathbf{0} \leq \mathbf{A}$, then $\mathbf{A} \times \mathbf{C} = \{ a c \mid a \in A, a > 0, c \in C \}$. \n - If $\mathbf{A} < \mathbf{0}$ and $\mathbf{0} \leq \mathbf{C}$, then $\mathbf{A} \times \mathbf{C} = \{ a c \mid a \in A, c \in C, c > 0 \}$.\n - If $\mathbf{A} < \mathbf{0}$ and $\mathbf{C} < \mathbf{0}$, then $\mathbf{A} \times \mathbf{C} = \{\} $ \n\nwhere $(A, B)$ is a one-sided [Dedekind cut](https://arbital.com/p/dedekind_cut) (so that $A$ has no greatest element).\n\n(Here, the \"set sum\" $A+C$ is defined as \"everything that can be made by adding one thing from $A$ to one thing from $C$\": namely, $\{ a+c \mid a \in A, c \in C \}$ in [set-builder notation](https://arbital.com/p/3lj); and $A \times C$ is similarly $\{ a \times c \mid a \in A, c \in C \}$.)\n\n# Proof\n\n## Well-definedness\n\nWe need to show firstly that these operations do in fact produce [Dedekind cuts](https://arbital.com/p/dedekind_cut).\n\n### Addition\nFirstly, we need everything in $A+C$ to be less than everything in $B+D$.\nThis is true: if $a+c \in A+C$, and $b+d \in B+D$, then since $a < b$ and $c < d$, we have $a+c < b+d$.\n\nNext, we need $A+C$ and $B+D$ together to contain all the rationals.\nThis is true: \n\nFinally, we need $(A+C, B+D)$ to be one-sided: that is, $A+C$ needs to have no top element, or equivalently, if $a+c \in A+C$ then we can find a bigger $a' + c'$ in $A+C$.\nThis is also true: if $a+c$ is an element of $A+C$, then we can find an element $a'$ of $A$ which is bigger than $a$, and an element $c'$ of $C$ which is bigger than $c$ (since both $A$ and $C$ have no top elements, because the respective Dedekind cuts are one-sided); then $a' + c'$ is in $A+C$ and is bigger than $a+c$.\n\n### Multiplication\n\n\n### Ordering\n\n\n## Additive [commutative](https://arbital.com/p/3jb) [group structure](https://arbital.com/p/3gd)\n\n\n\n## [Ring structure](https://arbital.com/p/3gq)\n\n\n\n## [Field structure](https://arbital.com/p/481)\n\n\n\n## Ordering on the field", "date_published": "2016-07-08T15:20:19Z", "authors": ["Dylan Hendrickson", "Patrick Stevens"], "summaries": [], "tags": ["Work in progress", "Proof"], "alias": "53v"} {"id": "28037ad90e5c24acd26c3053b3d8ea5f", "title": "Equivalence relation", "url": "https://arbital.com/p/equivalence_relation", "source": "arbital", "source_type": "text", "text": "An **equivalence relation** is a binary [relation](https://arbital.com/p/3nt) $\sim$ on a [set](https://arbital.com/p/3jz) $S$ that can be used to say whether [elements](https://arbital.com/p/element_of_a_set) of $S$ are equivalent.\n\nAn equivalence relation satisfies the following properties:\n\n1. For any $x \in S$, $x \sim x$ (the [reflexive](https://arbital.com/p/reflexive_relation) property).\n2. For any $x,y \in S$, if $x \sim y$ then $y \sim x$ (the [symmetric](https://arbital.com/p/symmetric_relation) property).\n3. For any $x,y,z \in S$, if $x \sim y$ and $y \sim z$ then $x \sim z$ (the [transitive](https://arbital.com/p/573) property).\n\nIntuitively, any element is equivalent to itself, equivalence isn't directional, and if two elements are equivalent to the same third element then they're equivalent to each other.\n\n## Equivalence classes\n\nWhenever we have a set $S$ with an equivalence relation $\sim$, we can divide $S$ into *equivalence classes*, i.e. sets of mutually equivalent elements. The equivalence class of some $x \in S$ is the set of elements of $S$ equivalent to $x$, written $[x]=\{y \in S \mid x \sim y\}$, and we say that $x$ is a \"representative\" of $[x]$. We call the set of equivalence classes $S/\sim = \{[x] \mid x \in S\}$.\n\nFrom the definition of an equivalence relation, it's easy to show that $x \in [x]$ and $[x]=[y]$ if and only if $x \sim y$.\n\nIf you have a set already [partitioned](https://arbital.com/p/set_partition) into subsets ($S$ is the [disjoint union](https://arbital.com/p/disjoint_union) of the elements of a collection $A$), you can go the other way and define a relation with two elements related whenever they're in the same subset ($x \sim y$ when there is some $U \in A$ with $x,y \in U$). Then this is an equivalence relation, and the partition of the set is the set of equivalence classes under this relation (that is, $[x] \in A$ and $A=S/\sim$).\n\n##Defining functions on equivalence classes\n\nSuppose you have a [function](https://arbital.com/p/3jy) $f: S \to U$ and want to define a corresponding function $f^*: S/\sim \to U$, where $U$ can be any set. You define $f^*([x])$ to be $f(x)$. This could be a problem; what if $x \sim y$ but $f(x) \neq f(y)$? Then $f^*([x])=f^*([y])$ wouldn't be [well-defined](https://arbital.com/p/well_defined). Whenever you define a function on equivalence classes in terms of representatives, you have to make sure the definition doesn't depend on which representative you happen to pick. 
In fact, one way you might arrive at an equivalence relation is to say that $x \\sim y$ whenever $f(x)=f(y)$.\n\nIf you have a function $f: S \\to S$ and want to define $f^*: S/\\sim \\to S/\\sim$ by $f^*([x])=[f(x)]$, you have to verify that whenever $x \\sim y$, $[f(x)]=[f(y)]$, equivalently $f(x) \\sim f(y)$. \n\n# Examples\n\nConsider the [integers](https://arbital.com/p/48l) with the relation $x \\sim y$ if $n|x-y$, for some fixed $n \\in \\mathbb N$. This is the integers [mod](https://arbital.com/p/modular_arithmetic) $n$. The [https://arbital.com/p/-addition](https://arbital.com/p/-addition) and [https://arbital.com/p/-multiplication](https://arbital.com/p/-multiplication) operations can be inherited from the integers, so it makes sense to talk about addition and multiplication mod $n$. \n\nThe [real numbers](https://arbital.com/p/4bc) can be defined as equivalence classes of [Cauchy sequences](https://arbital.com/p/50d).\n\n[https://arbital.com/p/4f4](https://arbital.com/p/4f4) is an equivalence relation (unless the objects form a [https://arbital.com/p/-proper_class](https://arbital.com/p/-proper_class)).", "date_published": "2016-07-07T16:52:44Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Joe Zeng"], "summaries": ["An **equivalence relation** is a binary [https://arbital.com/p/-3nt](https://arbital.com/p/-3nt) that is [reflexive](https://arbital.com/p/reflexive_relation), [symmetric](https://arbital.com/p/symmetric_relation), and [transitive](https://arbital.com/p/573). You can think of it as a way to say when two elements of a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) are \"the same\" or \"equivalent,\" despite not being literally the same element. It can be used to [partition](https://arbital.com/p/set_partition) a set into equivalence classes."], "tags": ["Start"], "alias": "53y"} {"id": "5e65505218386278b829414c51d961ec", "title": "Totally ordered set", "url": "https://arbital.com/p/totally_ordered_set", "source": "arbital", "source_type": "text", "text": "A **totally ordered set** is a pair $(S, \\le)$ of a set $S$ and a *total order* $\\le$ on $S$, which is a [https://arbital.com/p/-binary_relation](https://arbital.com/p/-binary_relation) that satisfies the following properties:\n\n1. For all $a, b \\in S$, if $a \\le b$ and $b \\le a$, then $a = b$. (the [antisymmetric](https://arbital.com/p/antisymmetric_relation) property)\n2. For all $a, b, c \\in S$, if $a \\le b$ and $b \\le c$, then $a \\le c$. (the [transitive](https://arbital.com/p/573) property)\n3. For all $a, b \\in S$, either $a \\le b$ or $b \\le a$, or both. (the [totality](https://arbital.com/p/total_relation) property)\n\nA totally ordered set is a special type of [partially ordered set](https://arbital.com/p/3rb) that satisfies the total property — in general, posets are only required to satisfy the [reflexive](https://arbital.com/p/5dy) property (that $a \\le a$ for all $a \\in S$) in place of the stronger totality property.\n\n## Examples of totally ordered sets\n\nThe [real numbers](https://arbital.com/p/4bc) are a totally ordered set.
So are any of the subsets of the real numbers, such as the [rational numbers](https://arbital.com/p/4zq) or the [integers](https://arbital.com/p/48l).\n\n## Examples of not totally ordered sets\n\nThe [complex numbers](https://arbital.com/p/4zw) do not have a canonical total ordering, and especially not a total ordering that preserves all the properties of the ordering of the real numbers, although one can define a total ordering on them quite easily.", "date_published": "2016-07-21T16:07:16Z", "authors": ["Kevin Clancy", "Eric Rogstad", "Dylan Hendrickson", "Eric Bruylant", "Joe Zeng"], "summaries": ["A totally ordered set is a pair $(S, \\le)$ of a set $S$ and a *total order* $\\le$ on $S$, which is a [binary relation](https://arbital.com/p/3nt) that satisfies the following properties:\n\n1. For all $a, b \\in S$, if $a \\le b$ and $b \\le a$, then $a = b$. (the [antisymmetric](https://arbital.com/p/antisymmetric_relation) property)\n2. For all $a, b, c \\in S$, if $a \\le b$ and $b \\le c$, then $a \\le c$. (the [transitive](https://arbital.com/p/573) property)\n3. For all $a, b \\in S$, either $a \\le b$ or $b \\le a$, or both. (the [totality](https://arbital.com/p/total_relation) property)"], "tags": ["Start", "Needs summary"], "alias": "540"} {"id": "715c30cac984cd8a3c8b9189d2a1a9b8", "title": "Intro to Number Sets", "url": "https://arbital.com/p/number_sets_intro", "source": "arbital", "source_type": "text", "text": "There are several common [sets](https://arbital.com/p/3jz) of [numbers](https://arbital.com/p/54y) that mathematicians use in their studies. In order from simple to complex, they are:\n\n1. The [natural numbers](https://arbital.com/p/506) $\\mathbb{N}$\n\n2. The [integers](https://arbital.com/p/53r) $\\mathbb{Z}$\n\n3. The [rational numbers](https://arbital.com/p/rational_number_math0) $\\mathbb{Q}$\n\n4. The [real numbers](https://arbital.com/p/real_number_math0) $\\mathbb{R}$\n\n5. The [complex numbers](https://arbital.com/p/complex_number_math0) $\\mathbb{C}$\n\nEach set is constructed in some way from the previous one, and this path will show you how they are constructed from the most basic numbers. You may have come across these terms in a math class that you attended, and may have had other definitions given to you. In this path, you will obtain a firm, complete understanding of these sets, how they are constructed, and what they mean in mathematics.\n\n\n## Why are number sets important?\n\nBefore we go any further though, it would be nice to know the motivation behind defining the number sets first.\n\nA [set](https://arbital.com/p/3jz) is a fancy name for a collection of objects. Some collections of objects have special properties — such as the set of all blue things, which are special in that they're all blue. In math, if a set of objects all have a certain property, we can make **inferences** about them — that is, there are certain things we can say about them that you can deduce logically. For example:\n\n> In a [field](https://arbital.com/p/481), every nonzero number has a multiplicative inverse.\n\nYou don't need to know what a field is yet (it's a special type of set), but now you can make inferences about them without restricting yourself to a specific example when talking about them. 
For example, you know that if a set _is_ a field, then every number in that set that isn't zero can divide into another number in that set (by multiplying by its \"multiplicative inverse\") and produce yet another number in that set.\n\nConversely, you can also tell when a set is or isn't a field based on whether it satisfies the properties a field has. For example, since you can't divide $3$ by $2$ (because the result is $1.5$ which is not a natural number), you now know that the natural numbers are not a field.\n\n\nNow let's turn to our first set: the natural numbers.", "date_published": "2016-08-20T14:27:36Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Stub"], "alias": "544"} {"id": "17ce2f3fd8dedfb2e0a362656c0b7cba", "title": "The square root of 2 is irrational", "url": "https://arbital.com/p/sqrt_2_is_irrational", "source": "arbital", "source_type": "text", "text": "$\\sqrt 2$, the unique [https://arbital.com/p/-positive](https://arbital.com/p/-positive) [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc) whose square is 2, is not a [https://arbital.com/p/-4zq](https://arbital.com/p/-4zq).\n\n#Proof\n\nSuppose $\\sqrt 2$ is rational. Then $\\sqrt 2=\\frac{a}{b}$ for some integers $a$ and $b$; [https://arbital.com/p/-without_loss_of_generality](https://arbital.com/p/-without_loss_of_generality) let $\\frac{a}{b}$ be in [https://arbital.com/p/-lowest_terms](https://arbital.com/p/-lowest_terms), i.e. $\\gcd(a,b)=1$. We have\n\n$$\\sqrt 2=\\frac{a}{b}$$\n\nFrom the definition of $\\sqrt 2$,\n\n$$2=\\frac{a^2}{b^2}$$\n$$2b^2=a^2$$\n\nSo $a^2$ is a multiple of $2$. Since $2$ is [prime](https://arbital.com/p/4mf), $a$ must be a multiple of 2; let $a=2k$. Then\n\n$$2b^2=(2k)^2=4k^2$$\n$$b^2=2k^2$$\n\nSo $b^2$ is a multiple of $2$, and so is $b$. But then $2|\\gcd(a,b)$, which contradicts the assumption that $\\frac{a}{b}$ is in lowest terms! So there isn't any way to express $\\sqrt 2$ as a fraction in lowest terms, and thus there isn't a way to express $\\sqrt 2$ as a ratio of integers at all. That is, $\\sqrt 2$ is irrational.", "date_published": "2016-07-06T18:40:58Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Proof"], "alias": "548"} {"id": "4827a071f948b306aee595117652ff27", "title": "Order relation", "url": "https://arbital.com/p/order_relation", "source": "arbital", "source_type": "text", "text": "An **order relation** (also called an **order** or **ordering**) is a [binary relation](https://arbital.com/p/3nt) $\\le$ on a [set](https://arbital.com/p/3jz) $S$ that can be used to order the elements in that set.\n\nAn order relation satisfies the following properties:\n\n1. For all $a \\in S$, $a \\le a$. (the [reflexive](https://arbital.com/p/reflexive_relation) property)\n2. For all $a, b \\in S$, if $a \\le b$ and $b \\le a$, then $a = b$. (the [antisymmetric](https://arbital.com/p/antisymmetric_relation) property)\n3. For all $a, b, c \\in S$, if $a \\le b$ and $b \\le c$, then $a \\le c$. (the [transitive](https://arbital.com/p/transitive_relation) property)\n\nA set that has an order relation is called a [partially ordered set](https://arbital.com/p/3rb) (or \"poset\"), and $\\le$ is its *partial order*.\n\n## Totality of an order\n\nThere is also a fourth property that distinguishes between two different types of orders:\n\n4. For all $a, b \\in S$, either $a \\le b$ or $b \\le a$ or both. 
(the [total](https://arbital.com/p/total_relation) property)\n\nThe total property implies the reflexive property, by setting $a = b$.\n\nIf the order relation satisfies the total property, then $S$ is called a [https://arbital.com/p/-540](https://arbital.com/p/-540), and $\\le$ is its *total order*.\n\n## Well-ordering\n\nA fifth property that extends the idea of a \"total order\" is that of the [well-ordering](https://arbital.com/p/55r):\n\n5. For every non-empty subset $X$ of $S$, $X$ has a least element: an element $x$ such that for all $y \\in X$, we have $x \\leq y$.\n\nWell-orderings are very useful: they are the orderings we can perform [induction](https://arbital.com/p/mathematical_induction) over. (For more on this viewpoint, see the page on [https://arbital.com/p/structural_induction](https://arbital.com/p/structural_induction).)\n\n# Derived relations\n\nThe order relation immediately affords several other relations.\n\n## Reverse order\n\nWe can define a *reverse order* $\\ge$ as follows: $a \\ge b$ when $b \\le a$. \n\n## Strict order \n\nFrom any poset $(S, \\le)$, we can derive a *strict order* $<$, which disallows equality. For $a, b \\in S$, $a < b$ when $a \\le b$ and $a \\neq b$. This strict order is still antisymmetric and transitive, but it is no longer reflexive.\n\nWe can then also define a reverse strict order $>$ as follows: $a > b$ when $b \\le a$ and $a \\neq b$.\n\n## Incomparability\n\nIn a poset that is not totally ordered, there exist elements $a$ and $b$ where the order relation is undefined. If neither $a \\leq b$ nor $b \\leq a$ then we say that $a$ and $b$ are *incomparable*, and write $a \\parallel b$. \n\n## Cover relation\n\nFrom any poset $(S, \\leq)$, we can derive an underlying *cover relation* $\\prec$, defined such that for $a, b \\in S$, $a \\prec b$ whenever the following two conditions are satisfied:\n\n1. $a < b$.\n2. For all $s \\in S$, $a \\leq s < b$ implies that $a = s$.\n\nSimply put, $a \\prec b$ means that $b$ is strictly greater than $a$ and no element of $S$ lies strictly between them.\n$a \\prec b$ is pronounced \"$a$ is covered by $b$\", or \"$b$ covers $a$\", and $b$ is said to be a *cover* of $a$.", "date_published": "2016-07-07T14:32:44Z", "authors": ["Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Formal definition"], "alias": "549"} {"id": "6c2fcfeb283d36030e42d827a914b170", "title": "Identity element", "url": "https://arbital.com/p/identity_element", "source": "arbital", "source_type": "text", "text": "An identity [element](https://arbital.com/p/5xy) in a set $S$ with a [binary operation](https://arbital.com/p/-3kb) $*$ is an element $i$ that leaves any element $a \\in S$ unchanged when combined with it in that operation.\n\nFormally, we can define an element $i$ to be an identity element if the following two statements are true:\n\n1. For all $a \\in S$, $i * a = a$. If only this statement is true then $i$ is said to be a *left identity*.\n2. For all $a \\in S$, $a * i = a$.
If only this statement is true then $i$ is said to be a *right identity*.\n\nThe existence of an identity element is a property of many algebraic structures, such as [groups](https://arbital.com/p/3gd), [rings](https://arbital.com/p/3gq), and [fields](https://arbital.com/p/481).", "date_published": "2016-08-20T11:19:57Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": ["An identity [element](https://arbital.com/p/5xy) in a set $S$ with a [binary operation](https://arbital.com/p/-3kb) $*$ is an element $i$ that leaves any element $a \\in S$ unchanged when combined with it in that operation."], "tags": ["Math 2", "Formal definition"], "alias": "54p"} {"id": "44742fd2a0effcba795df70508671669", "title": "Proof that there are infinitely many primes", "url": "https://arbital.com/p/infinitely_many_primes", "source": "arbital", "source_type": "text", "text": "The proof that there are infinitely many [prime numbers](https://arbital.com/p/4mf) is a [https://arbital.com/p/-46z](https://arbital.com/p/-46z).\n\nFirst, we note that there is indeed a prime: namely $2$.\nWe will also state a lemma: that every number greater than $1$ has a prime which divides it.\n(This is the easier half of the [https://arbital.com/p/5rh](https://arbital.com/p/5rh), and the slightly stronger statement that \"every number may be written as a product of primes\" is proved there.)\n\nNow we can proceed to the meat of the proof.\nSuppose that there were finitely many prime numbers $p_1, p_2, \\ldots, p_n$.\nSince we know $2$ is prime, we know $2$ appears in that list.\n\nThen consider the number $P = p_1p_2\\ldots p_n + 1$ — that is, the product of all primes plus 1.\nSince $2$ appeared in the list, we know $P \\geq 2+1 = 3$, and in particular $P > 1$.\n\nThe number $P$ can't be divided by any of the primes in our list, because it's 1 more than a multiple of them.\nBut there is a prime which divides $P$, because $P>1$; we stated this as a lemma.\nThis is immediately a contradiction: $P$ cannot be divided by any prime, even though all integers greater than $1$ can be divided by a prime.\n\n# Example\n\nThere is a common misconception that $p_1 p_2 \\dots p_n+1$ must be prime if $p_1, \\dots, p_n$ are all primes.\nThis isn't actually the case: if we let $p_1, \\dots, p_6$ be the first six primes $2,3,5,7,11,13$, then you can check by hand that $p_1 \\dots p_6 + 1 = 30031$; but $30031 = 59 \\times 509$.\n(These are all somewhat ad-hoc; there is no particular reason I knew that taking the first six primes would give me a composite number at the end.)\nHowever, we *have* discovered a new prime this way (in fact, two new primes!): namely $59$ and $509$.\n\nIn general, this is a terrible way to discover new primes.\nThe proof tells us that there must be some new prime dividing $30031$, without telling us anything about what those primes are, or even if there is more than one of them (that is, it doesn't tell us whether $30031$ is prime or composite).\nWithout knowing in advance that $30031$ is equal to $59 \\times 509$, it is in general very difficult to *discover* those two prime factors.\nIn fact, it's an open problem whether or not prime factorisation is \"easy\" in the specific technical sense of there being a polynomial-time algorithm to do it, though most people believe that prime factorisation is \"hard\" in this sense.", "date_published": "2016-08-15T11:09:43Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Proof"], "alias": "54r"} {"id": "6817747af45130faaa62d1ae0082da51", 
"title": "Order of operations", "url": "https://arbital.com/p/order_of_operations", "source": "arbital", "source_type": "text", "text": "The **order of operations** is a notational convention employed in arithmetical expressions to disambiguate the order in which operations should be performed.\n\nIt is also referred to PEMDAS, BEDMAS, BODMAS, or other such acronyms depending on where you live. The acronym is always 6 letters, and refers to the following things in order:\n\n1. [https://arbital.com/p/parentheses](https://arbital.com/p/parentheses) (or Brackets), in order of inside-most parentheses first\n2. [Exponents](https://arbital.com/p/exponent) (or Orders or Indices), in order of inside-most (or rightmost) exponents first\n3. [https://arbital.com/p/multiplication](https://arbital.com/p/multiplication) and [https://arbital.com/p/division](https://arbital.com/p/division), in order from left to right\n4. [https://arbital.com/p/addition](https://arbital.com/p/addition) and [https://arbital.com/p/subtraction](https://arbital.com/p/subtraction), in order from left to right\n\n\n## Motivation\n\nThe infix notation we use to write expressions of mathematical operators is inherently ambiguous. Only in some very special examples where the expression consists of only a single [associative](https://arbital.com/p/3h4) operation do all possible orders of evaluation evaluate to the same thing. When considering the expression $2 - 4 + 3$, do we do the $2 - 4$ first or the $4 + 3$ first? The result is either $1$ or $-5$ depending on which you choose. When considering the expression $7 + 8 \\times 9 - 6$, which operation do we do first? The result could be anywhere from $31$ to $129$ depending on the order you do them in.\n\nTherefore, to relieve mathematicians of placing brackets around every expression, there are some standard, intuitive conventions put in place.\n\n\n## Example expressions\n\nConsider the expression $3 + 7 \\times 2^{(6 + 8)}$. We do parentheses first: $$3 + 7 \\times 2^{14}$$ Then exponents: $$3 + 7 \\times 16384$$ Then multiplication and division: $$3 + 114688$$ And finally addition and subtraction: $$114691$$\n\n\n## Notable ambiguous cases\n\nThe expression $48 \\div 2 (9 + 3)$ is a controversial ambiguity in the order of operations that results from an ambiguity in multiplication by juxtaposition — which is, the convention that numbers next to each other in brackets should be multiplied together. 
For example, $2(3+5)$ is equal to $2 \\times (3 + 5)$ or $16$.\n\nIf multiplication by juxtaposition is taken as a _parenthetical_ operation due to the parentheses in the operands, then the result is 2:\n\n$$\\begin{align*}\n48 \\div 2 (9 + 3) &= 48 \\div 2 (12) \\\\\n&= 48 \\div 24 \\\\\n&= 2\n\\end{align*}$$\n\nHowever, if multiplication by juxtaposition is taken as simply a regular _multiplication_ operation, the result is 288: \n\n$$\\begin{align*}\n48 \\div 2 (9 + 3) &= 48 \\div 2 \\times 12 \\\\\n&= 24 \\times 12 \\\\\n&= 288\n\\end{align*}$$\n\nThe second step is different because the 48 and 2 are operated on first, as the left most division or multiplication operation in the expression.", "date_published": "2016-07-06T21:01:11Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Start"], "alias": "54s"} {"id": "a8f103c953c6b1e1a3e7208c0b87fbbb", "title": "Whole number", "url": "https://arbital.com/p/whole_number", "source": "arbital", "source_type": "text", "text": "A **whole number** is an intuitive concept of a number that is not [fractional](https://arbital.com/p/). There are three different definitions of the whole numbers, whose usage depends heavily on the author:\n\n1. The [natural numbers](https://arbital.com/p/45h).\n\n2. The natural numbers and [https://arbital.com/p/-zero](https://arbital.com/p/-zero) (in definitions of the [https://arbital.com/p/-number_sets](https://arbital.com/p/-number_sets) where the natural numbers do not contain zero).\n\n3. The [integers](https://arbital.com/p/48l).\n\nThe set of whole numbers, when it is separate from the natural numbers and the integers, is usually notated as either $\\mathbb{N}^0$ or $\\mathbb{W}$.\n\nThis page is a disambiguation page for three different definitions.", "date_published": "2016-07-06T13:00:19Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": ["An intuitive concept of a number that is not fractional. Depending on the author, the set of whole numbers is defined as either the [natural numbers](https://arbital.com/p/45h), the natural numbers and [https://arbital.com/p/-zero](https://arbital.com/p/-zero), or the [integers](https://arbital.com/p/48l)."], "tags": ["C-Class", "Disambiguation"], "alias": "54t"} {"id": "ce188387c48e400e354b183bf510958f", "title": "Blue oysters", "url": "https://arbital.com/p/blue_oysters", "source": "arbital", "source_type": "text", "text": "You're collecting exotic oysters in Nantucket, and there are two different bays you could harvest oysters from. In both bays, 11% of the oysters contain valuable pearls and 89% are empty. In the first bay, 4% of the pearl-containing oysters are blue, and 8% of the non-pearl-containing oysters are blue. In the second bay, 13% of the pearl-containing oysters are blue, and 26% of the non-pearl-containing oysters are blue. You created a special device that helps you find blue oysters. Would you rather harvest blue oysters from the first bay or the second bay?\n\nYou're encouraged to try to solve this problem yourself, and to refrain from looking at the answer. 
The answer can be found in [https://arbital.com/p/1zh](https://arbital.com/p/1zh).", "date_published": "2016-07-15T21:32:59Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Just a requisite", "Example problem"], "alias": "54v"} {"id": "240848a9be081e455737ed97fc0f9348", "title": "Example problem", "url": "https://arbital.com/p/example_problem", "source": "arbital", "source_type": "text", "text": "Tag for pages that provide an example problem referenced by a number of other pages.\n\nThe summary of the page should contain the entire text of the problem (so that the problem can be referenced by greenlink). The body of the page should either contain the answer (e.g., hidden behind an 'Answer' button), or should contain a link to the page that contains the answer. The body of the page should not explain the solution to the problem, but may link to the page that does.", "date_published": "2016-07-05T23:18:28Z", "authors": ["Kevin Clancy", "Nate Soares"], "summaries": [], "tags": [], "alias": "54w"} {"id": "dfc782bbd0ea2b685c150d24973988a2", "title": "Number", "url": "https://arbital.com/p/number", "source": "arbital", "source_type": "text", "text": "A **number** is an object used to represent quantity or value.\n\nExamples of numbers include the [natural numbers](https://arbital.com/p/45h), the [integers](https://arbital.com/p/48l), the [rational numbers](https://arbital.com/p/4zq), and the [real numbers](https://arbital.com/p/4bc).", "date_published": "2016-07-07T17:05:10Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Stub"], "alias": "54y"} {"id": "c9a4a585998a1403afbd3979ccc7059d", "title": "Irrational number", "url": "https://arbital.com/p/irrational_number", "source": "arbital", "source_type": "text", "text": "An irrational number is a [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc) that is not a [https://arbital.com/p/-4zq](https://arbital.com/p/-4zq). This set is generally denoted using either $\\mathbb{I}$ or $\\overline{\\mathbb{Q}}$, the latter of which represents it as the [complement](https://arbital.com/p/set_complement) of the rationals within the reals.\n\nIn the [Cauchy sequence definition](https://arbital.com/p/50d) of real numbers, the irrational numbers are the equivalence classes of Cauchy sequences of rational numbers that do not converge in the rationals. In the [Dedekind cut definition](https://arbital.com/p/50g), the irrational numbers are the one-sided Dedekind cuts where the set $\\mathbb{Q}^\\ge$ does not have a least element.\n\n## Properties of irrational numbers\n\nIrrational numbers have decimal expansions (and indeed, representations in any base $b$) that do not repeat or terminate.\n\nThe set of irrational numbers is [uncountable](https://arbital.com/p/2w0).", "date_published": "2016-07-06T09:44:11Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Stub"], "alias": "54z"} {"id": "5a7e576be92c126782dcde536acbdfa8", "title": "Shift towards the hypothesis of least surprise", "url": "https://arbital.com/p/flee_from_surprise", "source": "arbital", "source_type": "text", "text": "The [log-odds form of Bayes' rule](https://arbital.com/p/1zh) says that strength of belief and strength of evidence can both be measured in [bits](https://arbital.com/p/3y2). 
These evidence-bits can also be used to measure a quantity called \"Bayesian surprise\", which yields yet another intuition for understanding Bayes' rule.\n\nRoughly speaking, we can measure how surprised a hypothesis $H_i$ was by the evidence $e$ by measuring how much probability it put on $e.$ If $H_i$ put 100% of its probability mass on $e$, then $e$ is completely unsurprising (to $H_i$). If $H_i$ put 0% of its probability mass on $e$, then $e$ is _as surprising as possible._ Any measure of $\\mathbb P(e \\mid H_i),$ the probability $H_i$ assigned to $e$, that obeys these properties, is worthy of the label \"surprise.\" Bayesian surprise is $-\\!\\log(\\mathbb P(e \\mid H_i)),$ which is a quantity that obeys these intuitive constraints and has some other interesting features.\n\nConsider again the [https://arbital.com/p/-54v](https://arbital.com/p/-54v) problem. Consider the hypotheses $H$ and $\\lnot H$, which say \"the oyster will contain a pearl\" and \"no it won't\", respectively. To keep the numbers easy, let's say we draw an oyster from a third bay, where $\\frac{1}{8}$ of pearl-carrying oysters are blue and $\\frac{1}{4}$ of empty oysters are blue.\n\nImagine what happens when the oyster is blue. $H$ predicted blueness with $\\frac{1}{8}$ of its probability mass, while $\\lnot H$ predicted blueness with $\\frac{1}{4}$ of its probability mass. Thus, $\\lnot H$ did better than $H,$ and goes up in probability. Previously, we've been combining both $\\mathbb P(e \\mid H)$ and $\\mathbb P(e \\mid \\lnot H)$ into unified likelihood ratios, like $\\left(\\frac{1}{8} : \\frac{1}{4}\\right)$ $=$ $(1 : 2),$ which says that the 'blue' observation carries 1 bit of evidence against $H.$ However, we can also take the logs first, and combine second.\n\nBecause $H$ assigned only an eighth of its probability mass to the 'blue' observation, and because [Bayesian update works by eliminating incorrect probability mass](https://arbital.com/p/1y6), we have to adjust our belief in $H$ by $\\log_2\\left(\\frac{1}{8}\\right) = -3$ bits away from $H.$ (Each negative bit means \"throw away half of $H$'s probability mass,\" and we have to do that 3 times in order to remove the probability that $H$ failed to assign to $e$.)\n\nSimilarly, because $\\lnot H$ assigned only a quarter of its probability mass to the 'blue' observation, we have to adjust our belief in $H$ by $\\log_2\\left(\\frac{1}{4}\\right) = -2$ bits away from $\\lnot H.$\n\nThus, when the 'blue' observation comes in, we move our belief (measured in bits) 3 notches away from $H$ and then two notches back towards $H.$ On net, our belief shifts 1 notch away from $H$.\n\n![hypotheses emitting surprise](https://i.imgur.com/ZXGB8x0.png)\n\n_$H$ assigned 1/8th of its probability mass to blueness, so it emits $-\\!\\log_2\\left(\\frac{1}{8}\\right)=3$ bits of surprise pushing away from $H$. $\\lnot H$ assigned 1/4th of its probability mass to blueness, so it emits $-\\!\\log_2\\left(\\frac{1}{4}\\right)=2$ bits of surprise pushing away from $\\lnot H$ (and towards $H$).
Thus, belief in $H$ moves 1 bit towards $\\lnot H$, on net._\n\nIf instead $H$ predicted blue with probability 4% (penalty $\\log_2(0.04) \\approx -4.64$) and $\\lnot H$ predicted blue with probability 8% (penalty $\\log_2(0.08) \\approx -3.64$), then we would have shifted a bit over 4.6 notches towards $\\lnot H$ and a bit over 3.6 notches back towards $H,$ but we would have shifted the same number of notches _on net._ This is why it's only the _relative_ difference between the number of bits docked from $H$ and the number of bits docked from $\\lnot H$ that matters.\n\nIn general, given an observation $e$ and a hypothesis $H,$ the number of bits we need to dock from our belief in $H$ is $\\log_2(\\mathbb P(e \\mid H)),$ that is, the log of the probability that $H$ assigned to $e.$ This quantity is never positive, because the logarithm of $x$ for $0 \\le x \\le 1$ is in the range $[-\\infty, 0]$. If we negate it, we get a non-negative quantity that relates $H$ to $e$, which is 0 when $H$ was certain that $e$ was going to happen, and which is infinite when $H$ was certain that $e$ wasn't going to happen, and which is measured in the same units as evidence and belief. Thus, this quantity is often called \"surprise,\" and intuitively, it measures how surprised the hypothesis $H$ was by $e$ (in bits).\n\nThere is some correlation between Bayesian surprise and the times when a human would feel surprised (at seeing something that they thought was unlikely), but, of course, the human emotion is quite different. (A human can feel surprised for other reasons than \"my hypotheses failed to predict the data,\" and humans are also great at ignoring evidence instead of feeling surprised.)\n\nGiven this definition of Bayesian surprise, we can view Bayes' rule as saying that _surprise repels belief._ When you make an observation $e,$ each hypothesis emits repulsive \"surprise\" signals, which shift your belief. Referring again to the image above, when $H$ predicts the observation you made with $\\frac{1}{8}$ of its probability mass, and $\\lnot H$ predicts it with $\\frac{1}{4}$ of its probability mass, we can imagine $H$ emitting a surprise signal with a strength of 3 bits away from $H$ and $\\lnot H$ emitting a surprise signal with a strength of 2 bits away from $\\lnot H$.
Both those signals push the belief in $H$ in different directions, and it ends up 1 bit closer to $\\lnot H$ (which emitted the weaker surprise signal).\n\nIn other words, whenever you find yourself feeling surprised by something you saw, think of the _least surprising explanation_ for that evidence — and then award that hypothesis a few bits of belief.", "date_published": "2016-11-03T01:10:16Z", "authors": ["Eric Bruylant", "Nate Soares", "Dony Christie", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "552"} {"id": "fbdc22bf186860f90197623b3c775443", "title": "Bayes' rule: Definition", "url": "https://arbital.com/p/bayes_rule_definition", "source": "arbital", "source_type": "text", "text": "Bayes' rule is the mathematics of [probability theory](https://arbital.com/p/1rf) governing how to update your beliefs in the light of new evidence.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n## [Notation](https://arbital.com/p/1y9)\n\nIn much of what follows, we'll use the following [notation](https://arbital.com/p/1y9):\n\n- Let the hypotheses being considered be $H_1$ and $H_2$.\n- Let the evidence observed be $e_0.$\n- Let $\\mathbb P(H_i)$ denote the [prior probability](https://arbital.com/p/1rm) of $H_i$ before observing the evidence.\n- Let the [conditional probability](https://arbital.com/p/1rj) $\\mathbb P(e_0\\mid H_i)$ denote the [likelihood](https://arbital.com/p/1rq) of observing evidence $e_0$ assuming $H_i$ to be true.\n- Let the [conditional probability](https://arbital.com/p/1rj) $\\mathbb P(H_i\\mid e_0)$ denote the [posterior probability](https://arbital.com/p/1rp) of $H_i$ after observing $e_0.$\n\n## [Odds](https://arbital.com/p/1x5)/[proportional](https://arbital.com/p/1zm) form\n\nBayes' rule in the [odds form](https://arbital.com/p/1x5) or [proportional form](https://arbital.com/p/1zm) states:\n\n$$\\dfrac{\\mathbb P(H_1)}{\\mathbb P(H_2)} \\times \\dfrac{\\mathbb P(e_0\\mid H_1)}{\\mathbb P(e_0\\mid H_2)} = \\dfrac{\\mathbb P(H_1\\mid e_0)}{\\mathbb P(H_2\\mid e_0)}$$\n\nIn other words, the [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) times the [likelihood ratio](https://arbital.com/p/1rq) yield the [posterior](https://arbital.com/p/1rp) odds. [Normalizing](https://arbital.com/p/1rk) these odds will then yield the posterior probabilities.\n\nIn [other other words](https://arbital.com/p/1zm): If you initially think $h_i$ is $\\alpha$ times as probable as $h_k$, and then see evidence that you're $\\beta$ times as likely to see if $h_i$ is true as if $h_k$ is true, you should update to thinking that $h_i$ is $\\alpha \\cdot \\beta$ times as probable as $h_k.$\n\nSuppose that Professor Plum and Miss Scarlet are two suspects in a murder, and that we start out thinking that Professor Plum is twice as likely to have committed the murder as Miss Scarlet ([prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) of 2 : 1). We then discover that the victim was poisoned. We think that Professor Plum is around one-fourth as likely to use poison as Miss Scarlet ([likelihood ratio](https://arbital.com/p/1rq) of 1 : 4). 
Then after observing the victim was poisoned, we should think Plum is around half as likely to have committed the murder as Scarlet: $2 \\times \\dfrac{1}{4} = \\dfrac{1}{2}.$ This reflects [posterior](https://arbital.com/p/1rp) odds of 1 : 2, or a posterior probability of 1/3, that Professor Plum did the deed.\n\n## [Proof](https://arbital.com/p/1xr)\n\nThe [proof of Bayes' rule](https://arbital.com/p/1xr) is by the definition of [conditional probability](https://arbital.com/p/1rj) $\\mathbb P(X\\wedge Y) = \\mathbb P(X\\mid Y) \\cdot \\mathbb P(Y):$\n\n$$\n\\dfrac{\\mathbb P(H_i)}{\\mathbb P(H_j)} \\times \\dfrac{\\mathbb P(e\\mid H_i)}{\\mathbb P(e\\mid H_j)}\n= \\dfrac{\\mathbb P(e \\wedge H_i)}{\\mathbb P(e \\wedge H_j)}\n= \\dfrac{\\mathbb P(e \\wedge H_i) / \\mathbb P(e)}{\\mathbb P(e \\wedge H_j) / \\mathbb P(e)}\n= \\dfrac{\\mathbb P(H_i\\mid e)}{\\mathbb P(H_j\\mid e)}\n$$\n\n## [Log odds form](https://arbital.com/p/1zh)\n\nThe [log odds form of Bayes' rule](https://arbital.com/p/1zh) states:\n\n$$\\log \\left ( \\dfrac\n {\\mathbb P(H_i)}\n {\\mathbb P(H_j)}\n\\right )\n+\n\\log \\left ( \\dfrac\n {\\mathbb P(e\\mid H_i)}\n {\\mathbb P(e\\mid H_j)}\n\\right ) \n =\n\\log \\left ( \\dfrac\n {\\mathbb P(H_i\\mid e)}\n {\\mathbb P(H_j\\mid e)}\n\\right )\n$$\n\nE.g.: \"A study of Chinese blood donors found that roughly 1 in 100,000 of them had HIV (as determined by a very reliable gold-standard test). The non-gold-standard test used for initial screening had a sensitivity of 99.7% and a specificity of 99.8%, meaning that it was 500 times as likely to return positive for infected as non-infected patients.\" Then our prior belief is -5 orders of magnitude against HIV, and if we then observe a positive test result, this is evidence of strength +2.7 orders of magnitude for HIV. Our posterior belief is -2.3 orders of magnitude, or odds of less than 1 to 100, against HIV.\n\nIn log odds form, the same [strength of evidence](https://arbital.com/p/22x) (log [likelihood ratio](https://arbital.com/p/1rq)) always [moves us the same additive distance](https://arbital.com/p/1zh) along a line representing strength of belief (also in log odds). If we measured distance in probabilities, then the same 2 : 1 likelihood ratio might move us a different distance along the probability line depending on whether we started with prior 10% probability or 50% probability.\n\n## Visualizations\n\nGraphical ways of visualizing Bayes' rule include [frequency diagrams, the waterfall visualization](https://arbital.com/p/1wy), the [spotlight visualization](https://arbital.com/p/1zm), the [magnet visualization](https://arbital.com/p/1zh), and the [Venn diagram for the proof](https://arbital.com/p/1xr).\n\n## Examples\n\nExamples of Bayes' rule may be found [here](https://arbital.com/p/1wt).\n\n## [Multiple hypotheses and updates](https://arbital.com/p/1zg)\n\nThe [odds form of Bayes' rule](https://arbital.com/p/1x5) works for odds ratios between more than two hypotheses, and for applying multiple pieces of evidence. Suppose there's a bathtub full of coins. 1/2 of the coins are \"fair\" and have a 50% probability of producing heads on each coinflip; 1/3 of the coins produce 25% heads; and 1/6 produce 75% heads. You pull out a coin at random, flip it 3 times, and get the result HTH.
You may legitimately calculate:\n\n$$\\begin{array}{rll}\n(1/2 : 1/3 : 1/6) \\cong & (3 : 2 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n\\times & (2 : 3 : 1) & \\\\\n\\times & (2 : 1 : 3) & \\\\\n= & (24 : 6 : 9) & \\cong (8 : 2 : 3)\n\\end{array}$$\n\nSince multiple pieces of evidence may not be [conditionally independent](https://arbital.com/p/conditional_independence) from one another, it is important to be aware of the [Naive Bayes assumption](https://arbital.com/p/naive_bayes_assumption) and whether you are making it.\n\n## [Probability form](https://arbital.com/p/554)\n\nAs a formula for a single probability $\\mathbb P(H_i\\mid e),$ Bayes' rule states:\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)}$$\n\n## [Functional form](https://arbital.com/p/1zj)\n\nIn [functional form](https://arbital.com/p/1zj), Bayes' rule states:\n\n$$\\mathbb P(\\mathbf{H}\\mid e) \\propto \\mathbb P(e\\mid \\mathbf{H}) \\cdot \\mathbb P(\\mathbf{H}).$$\n\nThe posterior probability function over hypotheses given the evidence, is *proportional* to the likelihood function from the evidence to those hypotheses, times the prior probability function over those hypotheses.\n\nSince posterior probabilities over [mutually exclusive and exhaustive](https://arbital.com/p/1rd) possibilities must sum to $1,$ [normalizing](https://arbital.com/p/1rk) the product of the likelihood function and prior probability function will yield the exact posterior probability function.", "date_published": "2016-10-04T04:14:07Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["C-Class"], "alias": "553"} {"id": "5010e403929e8370530c1ae78384747d", "title": "Bayes' rule: Probability form", "url": "https://arbital.com/p/bayes_rule_probability", "source": "arbital", "source_type": "text", "text": "The formulation of [https://arbital.com/p/1lz](https://arbital.com/p/1lz) you are most likely to see in textbooks runs as follows:\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)}$$\n\nWhere:\n\n- $H_i$ is the hypothesis we're interested in.\n- $e$ is the piece of evidence we observed.\n- $\\sum_k (\\text {expression containing } k)$ [means](https://arbital.com/p/summation_notation) \"Add up, for every $k$, the sum of all the (expressions containing $k$).\"\n- $\\mathbf H$ is a set of [mutually exclusive and exhaustive](https://arbital.com/p/1rd) hypotheses that include $H_i$ as one of the possibilities, and the expression $H_k$ inside the sum ranges over all the possible hypotheses in $\\mathbf H$.\n\nAs a quick example, let's say there's a bathtub full of potentially biased coins.\n\n- Coin type 1 is fair, 50% heads / 50% tails. 40% of the coins in the bathtub are type 1.\n- Coin type 2 produces 70% heads. 35% of the coins are type 2.\n- Coin type 3 produces 20% heads. 25% of the coins are type 3.\n\nWe want to know the [posterior](https://arbital.com/p/1rp) probability that a randomly drawn coin is of type 2, after flipping the coin once and seeing it produce heads once.\n\nLet $H_1, H_2, H_3$ stand for the hypotheses that the coin is of types 1, 2, and 3 respectively. 
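For readers who would rather let a computer grind through the arithmetic, here is a small Python sketch of the same calculation that the next few equations work out by hand. The dictionary keys and the `posterior` helper are invented for this example; the numbers are the ones given for the bathtub above.

```python
# Probability form of Bayes' rule for the bathtub-of-coins example:
# P(H_i | e) = P(e | H_i) * P(H_i) / sum_k P(e | H_k) * P(H_k)

priors = {"type 1": 0.40, "type 2": 0.35, "type 3": 0.25}      # P(H_k)
heads_rate = {"type 1": 0.50, "type 2": 0.70, "type 3": 0.20}  # P(heads | H_k)

def posterior(hypothesis, likelihoods, priors):
    """P(hypothesis | evidence), where likelihoods[h] = P(evidence | h)."""
    numerator = likelihoods[hypothesis] * priors[hypothesis]
    denominator = sum(likelihoods[h] * priors[h] for h in priors)
    return numerator / denominator

print(posterior("type 2", heads_rate, priors))  # 0.4949..., i.e. 0.49 repeating
```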
Then using [conditional probability notation](https://arbital.com/p/1rj), we want to know the probability $\\mathbb P(H_2 \\mid heads).$\n\nThe probability form of Bayes' theorem says:\n\n$$\\mathbb P(H_2 \\mid heads) = \\frac{\\mathbb P(heads \\mid H_2) \\cdot \\mathbb P(H_2)}{\\sum_k \\mathbb P(heads \\mid H_k) \\cdot \\mathbb P(H_k)}$$\n\nExpanding the sum:\n\n$$\\mathbb P(H_2 \\mid heads) = \\frac{\\mathbb P(heads \\mid H_2) \\cdot \\mathbb P(H_2)}{[\\mathbb P(heads \\mid H_1) \\cdot \\mathbb P(H_1)] + [\\mathbb P(heads \\mid H_2) \\cdot \\mathbb P(H_2)] + [\\mathbb P(heads \\mid H_3) \\cdot \\mathbb P(H_3)]}$$\n\nComputing the actual quantities:\n\n$$\\mathbb P(H_2 \\mid heads) = \\frac{0.70 \\cdot 0.35 }{[0.50 \\cdot 0.40] + [0.70 \\cdot 0.35] + [0.20 \\cdot 0.25]} = \\frac{0.245}{0.20 + 0.245 + 0.05} = 0.\\overline{49}$$\n\nThis calculation was big and messy. Which is fine, because the probability form of Bayes' theorem is okay for directly grinding through the numbers, but not so good for doing things in your head.\n\n# Meaning\n\nWe can think of the advice of Bayes' theorem as saying:\n\n\"Think of how much each hypothesis in $H$ contributed to our expectation of seeing the evidence $e$, including both the [likelihood](https://arbital.com/p/56v) of seeing $e$ if $H_k$ is true, and the [prior](https://arbital.com/p/1rm) probability of $H_k$. The [posterior](https://arbital.com/p/1rp) of $H_i$ after seeing $e,$ is the amount $H_i$ contributed to our expectation of seeing $e,$ within the total expectation of seeing $e$ contributed by every hypothesis in $H.$\"\n\nOr to say it at somewhat greater length:\n\nImagine each hypothesis $H_1,H_2,H_3\\ldots$ as an expert who has to distribute the probability of their predictions among all possible pieces of evidence. We can imagine this more concretely by visualizing \"probability\" as a lump of clay.\n\nThe total amount of clay is one kilogram (probability $1$). Each expert $H_k$ has been allocated a fraction $\\mathbb P(H_k)$ of that kilogram.
For example, if $\\mathbb P(H_4)=\\frac{1}{5}$ then expert 4 has been allocated 200 grams of clay.\n\nWe're playing a game with the experts to determine which one is the best predictor.\n\nEach time we're about to make an observation $E,$ each expert has to divide up all their clay among the possible outcomes $e_1, e_2, \\ldots.$\n\nAfter we observe that $E = e_j,$ we take away all the clay that wasn't put onto $e_j.$ And then our new belief in all the experts is the relative amount of clay that each expert has left.\n\nSo to know how much we now believe in expert $H_4$ after observing $e_3,$ say, we need to know two things: First, the amount of clay that $H_4$ put onto $e_3,$ and second, the total amount of clay that all experts (including $H_4$) put onto $e_3.$\n\nIn turn, to know *that,* we need to know how much clay $H_4$ started with, and what fraction of its clay $H_4$ put onto $e_3.$ And similarly, to compute the total clay on $e_3,$ we need to know how much clay each expert $H_k$ started with, and what fraction of their clay $H_k$ put onto $e_3.$\n\nSo Bayes' theorem here would say:\n\n$$\\mathbb P(H_4 \\mid e_3) = \\frac{\\mathbb P(e_3 \\mid H_4) \\cdot \\mathbb P(H_4)}{\\sum_k \\mathbb P(e_3 \\mid H_k) \\cdot \\mathbb P(H_k)}$$\n\nWhat are the incentives of this game of clay?\n\nOn each round, the experts who gain the most are the experts who put the most clay on the observed $e_j,$ so if you know for certain that $e_3$ is about to be observed, your incentive is to put all your clay on $e_3.$\n\nBut putting *literally all* your clay on $e_3$ is risky; if $e_5$ is observed instead, you lose all your clay and are out of the game. Once an expert's amount of clay goes all the way to zero, there's no way for them to recover over any number of future rounds. That hypothesis is done, dead, and removed from the game. (\"Falsification,\" some people call that.) If you're not certain that $e_5$ is literally impossible, you'd be wiser to put at least a *little* clay on $e_5$ instead. That is to say: if your mind puts some probability on $e_5,$ you'd better put some clay there too!\n\n([As it happens](https://arbital.com/p/bayes_score), if at the end of the game we score each expert by the logarithm of the amount of clay they have left, then each expert is incentivized to place clay exactly proportionally to their honest probability on each successive round.)\n\nIt's an important part of the game that we make the experts put down their clay in advance. If we let the experts put down their clay afterwards, they might be tempted to cheat by putting down all their clay on whichever $e_j$ had actually been observed. But since we make the experts put down their clay in advance, they have to *divide up* their clay among the possible outcomes: to give more clay to $e_3,$ that clay has to be taken away from some other outcome, like $e_5.$ To put a very high probability on $e_3$ and gain a lot of relative credibility if $e_3$ is observed, an expert has to stick their neck out and risk losing a lot of credibility if some other outcome like $e_5$ happens instead. *If* we force the experts to make advance predictions, that is!\n\nWe can also derive from this game that the question \"does evidence $e_3$ support hypothesis $H_4$?\" depends on how well $H_4$ predicted $e_3$ _compared to the competition._ It's not enough for $H_4$ to predict $e_3$ well if every other hypothesis also predicted $e_3$ well--your amazing new theory of physics gets no points for predicting that the sky is blue. 
$H_k$ only goes up in probability when it predicts $e_j$ better than the alternatives. And that means we have to ask what the alternative hypotheses predicted, even if we think those hypotheses are false.\n\nIf you get in a car accident, and don't want to relinquish the hypothesis that you're a great driver, then you can find all sorts of reasons (\"the road was slippery! my car freaked out!\") why $\\mathbb P(e \\mid GoodDriver)$ is not too low. But $\\mathbb P(e \\mid BadDriver)$ is also part of the update equation, and the \"bad driver\" hypothesis *better* predicts the evidence. Thus, your first impulse, when deciding how to update your beliefs in the face of a car accident, should not be \"But my preferred hypothesis allows for this evidence!\" It should instead be \"Points to the 'bad driver' hypothesis for predicting this evidence better than the alternatives!\" (And remember, you're allowed to [increase $\\mathbb P(BadDriver)$](https://arbital.com/p/update_beliefs_incrementally), while still thinking that it's less than 50% probable.)\n\n# Proof\n\nThe proof of Bayes' theorem follows from the definition of [https://arbital.com/p/-1rj](https://arbital.com/p/-1rj):\n\n$$\\mathbb P(X \\mid Y) = \\frac{\\mathbb P(X \\wedge Y)}{\\mathbb P (Y)}$$\n\nAnd from the [law of marginal probability](https://arbital.com/p/law_marginal_probability):\n\n$$\\mathbb P(Y) = \\sum_k \\mathbb P(Y \\wedge X_k)$$\n\nTherefore:\n\n$$\n\\mathbb P(H_i \\mid e) = \\frac{\\mathbb P(H_i \\wedge e)}{\\mathbb P (e)} \\tag{defn. conditional prob.}\n$$\n\n$$\n\\mathbb P(H_i \\mid e) = \\frac{\\mathbb P(e \\wedge H_i)}{\\sum_k \\mathbb P (e \\wedge H_k)} \\tag {law of marginal prob.}\n$$\n\n$$\n\\mathbb P(H_i \\mid e) = \\frac{\\mathbb P(e \\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P (e \\mid H_k) \\cdot \\mathbb P(H_k)} \\tag {defn. conditional prob.}\n$$\n\nQED.", "date_published": "2017-08-13T04:21:13Z", "authors": ["Eric Bruylant", "Alexei Andreev", "Nadeem Mohsin", "Eric Rogstad", "Nate Soares", "Adom Hartell", "Eliezer Yudkowsky"], "summaries": ["The formulation of [https://arbital.com/p/1lz](https://arbital.com/p/1lz) you are most likely to see in textbooks says:\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)}$$\n\nThis follows from the definition of [https://arbital.com/p/-1rj](https://arbital.com/p/-1rj) which states that $\\mathbb P(X \\mid Y) = \\frac{\\mathbb P(X \\wedge Y)}{\\mathbb P (Y)}$, and the [law of marginal probability](https://arbital.com/p/law_marginal_probability) which says that $\\mathbb P(Y) = \\sum_k \\mathbb P(Y \\wedge X_k)$.\n\nWe can think of the corresponding advice as saying, \"Think of how much each hypothesis in $H$ contributed to our expectation of seeing the evidence $e$, including both the [likelihood](https://arbital.com/p/56v) of seeing $e$ if $H_k$ is true, and the [prior](https://arbital.com/p/1rm) probability of $H_k$.
The [posterior](https://arbital.com/p/1rp) of $H_i$ after seeing $e,$ is the amount $H_i$ contributed to our expectation of seeing $e,$ within the total expectation of seeing $e$ contributed by every hypothesis in $H.$"], "tags": ["B-Class"], "alias": "554"} {"id": "f1e15f0d4cab1433e2b06b1cb4e5fc6b", "title": "Odds form to probability form", "url": "https://arbital.com/p/bayes_odds_to_probability", "source": "arbital", "source_type": "text", "text": "The odds form of Bayes' rule works for any two hypotheses $H_i$ and $H_j,$ and looks like this:\n\n$$\\frac{\\mathbb P(H_i \\mid e)}{\\mathbb P(H_j \\mid e)} = \\frac{\\mathbb P(H_i)}{\\mathbb P(H_j)} \\times \\frac{\\mathbb P(e \\mid H_i)}{\\mathbb P(e \\mid H_j)} \\tag{1}.$$\n\nThe probabilistic form of Bayes' rule requires a hypothesis set $H_1,H_2,H_3,\\ldots$ that is [https://arbital.com/p/-1rd](https://arbital.com/p/-1rd), and looks like this:\n\n$$\\mathbb P(H_i\\mid e) = \\frac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)} \\tag{2}.$$\n\nWe will now show that equation (2) follows from equation (1). Given a collection $H_1,H_2,H_3,\\ldots$ of mutually exclusive and exhaustive hypotheses and a hypothesis $H_i$ from that collection, we can form another hypothesis $\\lnot H_i$ consisting of all the hypotheses $H_1,H_2,H_3,\\ldots$ _except_ $H_i.$ Then, using $\\lnot H_i$ as $H_j$ and multiplying the fractions on the right-hand side of equation (1), we see that\n\n$$\\frac{\\mathbb P(H_i \\mid e)}{\\mathbb P(\\lnot H_i \\mid e)} = \\frac{\\mathbb P(H_i) \\cdot \\mathbb P(e \\mid H_i)}{\\mathbb P(\\lnot H_i)\\cdot \\mathbb P(e \\mid \\lnot H_i)}.$$\n\n$\\mathbb P(\\lnot H_i)\\cdot \\mathbb P(e \\mid \\lnot H_i)$ is the prior probability of $\\lnot H_i$ times the degree to which $\\lnot H_i$ predicted $e.$ Because $\\lnot H_i$ is made of a bunch of mutually exclusive hypotheses, this term can be calculated by summing $\\mathbb P(H_k) \\cdot \\mathbb P(e \\mid H_k)$ for every $H_k$ in the collection except $H_i.$ Performing that replacement, and swapping the order of multiplication, we get:\n\n$$\\frac{\\mathbb P(H_i \\mid e)}{\\mathbb P(\\lnot H_i \\mid e)} = \\frac{\\mathbb P(e \\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_{k \\neq i} \\mathbb P(e \\mid H_k) \\cdot \\mathbb P(H_k)}.$$\n\nThese are the posterior odds for $H_i$ versus $\\lnot H_i.$ Because $H_i$ and $\\lnot H_i$ are mutually exclusive and exhaustive, we can convert these odds into a probability for $H_i,$ by calculating numerator / (numerator + denominator), in the same way that $3 : 4$ odds become a 3 / (3 + 4) probability. When we do so, equation (2) drops out:\n\n$$\\mathbb P(H_i\\mid e) = \\frac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)}.$$\n\nThus, we see that the probabilistic formulation of Bayes' rule follows from the odds form, but is less general, in that it only works when the set of hypotheses being considered are mutually exclusive and exhaustive.\n\nWe also see that the probabilistic formulation converts the posterior odds into a posterior probability. 
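As a concrete check on this equivalence, the sketch below runs the three-coin-type example from the Bayes' rule page above (priors 1/2, 1/3, 1/6; heads probabilities 50%, 25%, 75%; observations H, T, H) through both routes: odds-form updates with a single normalization at the end, and the probability form renormalized after every observation. The variable and function names are invented for the illustration; both routes give the same 8 : 2 : 3 posterior.

```python
# Route 1: odds form, normalizing only once at the end.
# Route 2: probability form, renormalizing after every observation.

priors = [1/2, 1/3, 1/6]      # P(H_k): fair coin, 25%-heads coin, 75%-heads coin
p_heads = [0.50, 0.25, 0.75]  # P(heads | H_k)
observations = ["H", "T", "H"]

def likelihood(obs, k):
    """P(obs | H_k) for a single coinflip."""
    return p_heads[k] if obs == "H" else 1 - p_heads[k]

odds = list(priors)
for obs in observations:
    odds = [o * likelihood(obs, k) for k, o in enumerate(odds)]
posterior_via_odds = [o / sum(odds) for o in odds]   # normalize once, at the end

probs = list(priors)
for obs in observations:
    joint = [p * likelihood(obs, k) for k, p in enumerate(probs)]
    probs = [j / sum(joint) for j in joint]          # normalize after every step

print(posterior_via_odds)  # [0.615..., 0.153..., 0.230...], i.e. 8/13 : 2/13 : 3/13
print(probs)               # the same numbers, up to floating-point rounding
```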
When computing [multiple updates in a row](https://arbital.com/p/bayes_rule_vector), you actually only need to perform this \"normalization\" step once at the very end of your calculations — which means that the odds form of Bayes' rule is also more efficient in practice.", "date_published": "2016-07-06T04:34:25Z", "authors": ["Eric Bruylant", "Nate Soares"], "summaries": [], "tags": ["Needs clickbait", "B-Class"], "alias": "555"} {"id": "c6a7acda9075ab7b8e4080e349b9752d", "title": "Logistic function", "url": "https://arbital.com/p/logistic_function", "source": "arbital", "source_type": "text", "text": "The logistic function is a [https://arbital.com/p/-sigmoid](https://arbital.com/p/-sigmoid) function that maps the [real numbers](https://arbital.com/p/4bc) to the unit interval $(0, 1)$ using the formula $\\displaystyle f(x) = \\frac{1}{1 + e^{-x}}$.\n\nMore generally, there exists a [family](https://arbital.com/p/family_of_functions) of logistic functions that can be written as $\\displaystyle f(x) = \\frac{M}{1 + \\alpha^{c(x_0 - x)}}$, where:\n\n* $M$ is the upper bound of the function (in which case the function maps to the interval $(0, M)$ instead). When $M = 1$, the logistic function is usually being used to calculate some [https://arbital.com/p/-1rf](https://arbital.com/p/-1rf) or [https://arbital.com/p/-4w3](https://arbital.com/p/-4w3) of a total.\n\n* $x_0$ is the inflection point of the curve, or the value of $x$ where the function's growth stops speeding up and starts slowing down.\n\n* $\\alpha$ is a variable controlling the steepness of the curve.\n\n* $c$ is a scaling factor for the distance.\n\n## Applications\n\n* The logistic function is used to model growth rates of populations in an ecosystem with a limited carrying capacity.\n\n* The inverse logistic function (with $\\alpha = 2$) is used to convert a probability to log-odds form for use in [https://arbital.com/p/1lz](https://arbital.com/p/1lz).\n\n* The logistic function (with $\\alpha = 10$ and $c = 1/400$) is used to calculate the expected probability of a player winning given a specific difference in rating in the Elo rating system.", "date_published": "2016-07-06T23:33:37Z", "authors": ["Eric Bruylant", "Joe Zeng"], "summaries": ["The logistic function is a [https://arbital.com/p/-sigmoid](https://arbital.com/p/-sigmoid) function that maps the [real numbers](https://arbital.com/p/4bc) to the unit interval $(0, 1)$ using the formula $\\displaystyle f(x) = \\frac{1}{1 + e^{-x}}$ or variants of this formula."], "tags": ["Start", "Needs parent"], "alias": "558"} {"id": "ef3ad0bf3d8a7b0ca5dac3394d94f721", "title": "Sparking widgets", "url": "https://arbital.com/p/sparking_widgets", "source": "arbital", "source_type": "text", "text": "10% of widgets are bad and 90% are good. 4% of good widgets emit sparks, and 12% of bad widgets emit sparks. If a widget is sparking, find an easy way to calculate the chance that it's a bad widget, keeping the operations as simple as possible so that it's easy to do the calculation entirely in your head.\n\nYou're encouraged to solve the problem yourself without looking at the answer. 
an answer can be found in [https://arbital.com/p/bayes_extraordinary_claims](https://arbital.com/p/bayes_extraordinary_claims).", "date_published": "2016-07-06T12:58:00Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Just a requisite", "Example problem"], "alias": "559"} {"id": "0214905a38271739bc39bac06c141973", "title": "Sock-dresser search", "url": "https://arbital.com/p/sockdresser_search", "source": "arbital", "source_type": "text", "text": "You left your socks somewhere in your room. You think there's a 4/5 chance that they're in your dresser, so you start looking through your dresser's 8 drawers. After checking 6 drawers at random, you haven't found your socks yet. What is the probability you will find your socks in the next drawer you check?\n\nYou're encouraged to solve the problem yourself before looking up the answer. The answer can be found in [https://arbital.com/p/1y6](https://arbital.com/p/1y6).", "date_published": "2016-07-06T12:11:50Z", "authors": ["Nate Soares"], "summaries": [], "tags": ["Just a requisite", "Example problem"], "alias": "55b"} {"id": "0cd530209c6ba8a5e57b21db2b51dc42", "title": "Ordinary claims require ordinary evidence", "url": "https://arbital.com/p/bayes_ordinary_claims", "source": "arbital", "source_type": "text", "text": "A corollary to [extraordinary claims require extraordinary evidence](https://arbital.com/p/21v) is \"ordinary claims require only ordinary evidence\"%%note: Attributed to [Gwern Branwen](https://www.gwern.net/), who hasn't popularized this claim nearly as much as Carl Sagan popularized the extraordinary version.%%\n\nThis corollary is probably more important in practice, as, in our day-to-day lives, we encounter more ordinary claims than extraordinary claims, but may be more tempted to reject many of those ordinary claims by demanding extraordinary evidence (a bias known as \"motivated skepticism\").\n\n# Example: A failing new employee\n\nOne contributed example: \n\n> A few years back, a senior person at my workplace told me that a new employee wasn't getting his work done on time, and that she'd had to micromanage him to get any work out of him at all. This was an unpleasant fact for a number of reasons; I'd liked the guy, and I'd advocated for hiring him to our Board of Directors just a few weeks earlier (which is why the senior manager was talking to me). I could have demanded more evidence, I could have demanded that we give him more time to work out, I could have demanded a videotape and signed affidavits… but a new employee not working out, just *isn't that improbable*. Could I have named the exact prior odds of an employee not working out, could I have said [how much more likely](https://arbital.com/p/1zm) I was to hear that exact report of a long-term-bad employee than a long-term-good employee? No, but 'somebody hires the wrong person' happens all the time, and I'd seen it happen before. It wasn't an extraordinary claim, and I wasn't licensed to ask for extraordinary evidence. 
To put numbers on it, I thought the proportion of bad to good employees was on the order of 1 : 4 at the time, and the likelihood ratio for the manager's report seemed more like 10 : 1.\n\nOr to put it another way: The rule is 'extraordinary claims require extraordinary evidence', not 'inconvenient but ordinary claims require extraordinary evidence'.\n\nIn everyday life, we consider many more ordinary claims than extraordinary claims — including many ordinary claims whose inconvenience might tempt us to dismiss their adequate but ordinary evidence. In those cases, it's important to remember that ordinary claims only require ordinary evidence.", "date_published": "2016-07-06T13:33:00Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["C-Class"], "alias": "55c"} {"id": "eee9ac83608eed1e1ed3f6ec0e96ffaf", "title": "Ordered ring", "url": "https://arbital.com/p/ordered_ring", "source": "arbital", "source_type": "text", "text": "An **ordered ring** is a [https://arbital.com/p/-3gq](https://arbital.com/p/-3gq) $R=(X,\\oplus,\\otimes)$ with a [total order](https://arbital.com/p/540) $\\leq$ compatible with the ring structure. Specifically, it must satisfy these axioms for any $a,b,c \\in X$:\n\n- If $a \\leq b$, then $a \\oplus c \\leq b \\oplus c$.\n- If $0 \\leq a$ and $0 \\leq b$, then $0 \\leq a \\otimes b$.\n\nAn element $a$ of the ring is called \"[https://arbital.com/p/-positive](https://arbital.com/p/-positive)\" if $0 < a$, and \"negative\" if $a < 0$. A basic consequence of the axioms is that squares are nonnegative; in particular, $0 \\leq 1$ (so $1$ is never negative.)\n\n%%hidden(Show proof):\nClearly $1 = 1 \\otimes 1$. So $1$ is a square, which means it's nonnegative.\n%%\n\n# Examples\n\nThe [real numbers](https://arbital.com/p/4bc) are an ordered ring (in fact, an [https://arbital.com/p/-ordered_field](https://arbital.com/p/-ordered_field)), as is any [https://arbital.com/p/-subring](https://arbital.com/p/-subring) of $\\mathbb R$, such as [$\\mathbb Q$](https://arbital.com/p/4zq).\n\nThe [complex numbers](https://arbital.com/p/4zw) are not an ordered ring, because there is no way to define the order between $0$ and $i$. Suppose that $0 \\le i$; then we have $0 \\le i \\times i = -1$, which is false. Suppose instead that $i \\le 0$; then $0 = i + (-i) \\le 0 + (-i)$, and so we have $0 \\le (-i) \\times (-i) = -1$, which is again false. Alternatively, $i^2=-1$ is a square, so it must be nonnegative; that is, $0 \\leq -1$, which is a contradiction.", "date_published": "2016-07-07T16:51:00Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Joe Zeng"], "summaries": ["An **ordered ring** is a [https://arbital.com/p/-3gq](https://arbital.com/p/-3gq) that is [totally ordered](https://arbital.com/p/540), where the ordering agrees with the ring operations. 
In particular, adding something to two elements doesn't change which of them is bigger, and the product of two [https://arbital.com/p/-positive](https://arbital.com/p/-positive) elements is positive."], "tags": ["Start"], "alias": "55j"} {"id": "7c561532c06b269183d1a986e8f22985", "title": "Addition of rational numbers (Math 0)", "url": "https://arbital.com/p/addition_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "We've already met the idea that rational numbers are what we can make by putting together the building blocks which we get by dividing a single apple into some number of equally-sized pieces; and remember that the building blocks have a special notation, as $\\frac{1}{\\text{number}}$.\nRecall also that if we take $5$ of a certain building block, for instance, we write $\\frac{5}{\\text{number}}$.\n\nIt's clear that if you take one apple and another apple, and put them together, you'll get two apples.\nSo we should hope that that's the same if we divided up the apples into pieces first (without removing any of the pieces).\n\nWe write $a+b$ for \"take the rational number $a$ and put it next to the rational number $b$, and count up what you've got\".\nBecause I'm bad at drawing, we'll pretend apples are just perfect circles. %%note:This is actually a design decision: eventually we'll want to get away from considering apples, and this more abstract representation will be useful.%%\n\n![Example of a sum of two rational numbers](http://i.imgur.com/GUgmCwb.png)\n\n----------\n\n\nFor example, we should hope that $\\frac{2}{2} + \\frac{3}{3} = 2$.\n\n![Two halves and three thirds](http://i.imgur.com/6vdRdcT.png)\n\nWhat about in cases which aren't of the form $\\frac{n}{n}$ for some integer $n$?\nWell, how about $\\frac{5}{3} + \\frac{8}{3}$.\n\n![Five thirds and eight thirds](http://i.imgur.com/7dPqmCP.png)\n\nIf we had five $\\frac{1}{3}$-blocks, and we put them together with another eight $\\frac{1}{3}$-blocks, we would hope to have $5+8=13$ blocks.\nSo we should hope that $\\frac{5}{3} + \\frac{8}{3} = \\frac{13}{3}$.\n\n![Thirds, separated out](http://i.imgur.com/foVkEiS.png)\n\nAnd in general, if both our quantities are made up from the same size of building-block (in the above case, the blocks are $\\frac{1}{3}$-sized), we should just be able to take the two numerators %%note:Remember, that was the word mathematicians use for the number of blocks we have.%% and add them together.\n\n----------\n\nVery well. But what if the two quantities are not made up from the same size of building-block? That is, the denominators are different?\nFor example, $\\frac{5}{3} + \\frac{5}{4}$?\n\n![Five thirds and five quarters](http://i.imgur.com/xFtKEnU.png)\n\n![Five thirds and five quarters, rearranged](http://i.imgur.com/Av984T0.png)\n\nNow it's not so clear.\nHow might we approach this? 
You should muse on this for thirty seconds before reading on; it will be good for your soul.\n\n%%hidden(Show solution):\nThe way we do it is to find some *smaller* block-size, out of which we can make *both* building-blocks.\n%%\n\n# Example\n\nOur example is $\\frac{5}{3} + \\frac{5}{4}$.\nSo we want to make the $\\frac{1}{3}$ block and the $\\frac{1}{4}$ block out of some smaller block.\n\nNow, I won't tell you yet how I got this, but you can check for yourself that $\\frac{1}{12}$-blocks will make both $\\frac{1}{3}$- and $\\frac{1}{4}$-blocks: because three $\\frac{1}{12}$-blocks together make something the same size as a $\\frac{1}{4}$-block, and four $\\frac{1}{12}$-blocks together make something the same size as a $\\frac{1}{3}$-block.\n\n![Thirds, divided into twelfths](http://i.imgur.com/k2iz8uR.png)\n\n![Quarters, divided into twelfths](http://i.imgur.com/gmHK3oj.png)\n\nTherefore $\\frac{1}{3} = \\frac{4}{12}$:\n\n![One third, divided into twelfths](http://i.imgur.com/fFX4Dmc.png)\n\nAnd $\\frac{1}{4} = \\frac{3}{12}$:\n\n![One quarter, divided into twelfths](http://i.imgur.com/gRqmfAc.png)\n\nNow, if $\\frac{1}{3} = \\frac{4}{12}$ - that is, if one $\\frac{1}{3}$-piece is the same size as four $\\frac{1}{12}$-pieces - then it should be the case that five $\\frac{1}{3}$-pieces come to five lots of four $\\frac{1}{12}$-pieces.\nThat is, to twenty $\\frac{1}{12}$-pieces: $\\frac{5}{3} = \\frac{20}{12}$.\n\n\nSimilarly, $\\frac{5}{4} = \\frac{15}{12}$, because five $\\frac{1}{4}$-pieces is the same as five lots of three sets of $\\frac{1}{12}$-pieces, and $5 \\times 3 = 15$.\n\nTherefore $\\frac{5}{3} + \\frac{5}{4}$ should be just the same as $\\frac{20}{12} + \\frac{15}{12}$, which we know how to calculate!\nIt is $\\frac{35}{12}$.\n\n# General procedure\n\nThe preceding example might suggest a general way for adding two fractions together. (In the process, this should put to rest the existential dread mentioned in [the intro to rational numbers](https://arbital.com/p/4zx), about whether it makes sense to add two rational numbers at all.)\n\nThe general procedure is as follows:\n\n 1. Find a building-block size such that the building-blocks of both fractions can be made out of that building-block\n 2. Express each fraction individually in terms of that building-block\n 3. 
Add the two together (which we can now do, because they are both made of the same size of building-block).\n\nThe interesting part is really the first step.\n\n## Finding a building-block that will neatly make two other differently-sized blocks\n\nLet's be concrete here: just as above, we showed that $\\frac{1}{12}$ works for blocks of size $\\frac{1}{3}$ and $\\frac{1}{4}$, so we'll now consider $\\frac{1}{2}$ and $\\frac{1}{5}$.\n\nWe're trying to divide up $\\frac{1}{2}$ into copies of a smaller building-block, such that with copies of that *same* building block, we can also make $\\frac{1}{5}$.\nHere is the key insight that tells us how to do it: let us pretend that we are dividing up the *same* object, this time a square instead of a circle, into two pieces along one edge and into five pieces along the other edge.\nHow many pieces have we made?\n\n![Square divided into tenths](http://i.imgur.com/qkMkQvZ.png)\n\nThere are $2 \\times 5 = 10$ pieces, each of them the same size (namely the $\\frac{1}{10}$-block size), and note that as if by magic we've just produced a way to subdivide $\\frac{1}{2}$ into pieces made up of the $\\frac{1}{10}$-block!\n\n![Square divided into tenths, one half indicated](http://i.imgur.com/n9dYZBA.png)\n\nAnd we also have the $\\frac{1}{5}$ subdivided in the same way!\n\n![Square divided into tenths, one fifth indicated](http://i.imgur.com/0ny9RnS.png)\n\nThis might suggest a general way to do this; you should certainly ponder for thirty seconds before continuing.\n%%hidden(Show the general way):\n\nLet's say we are trying to make both $\\frac{1}{m}$ and $\\frac{1}{n}$-sized blocks out of smaller blocks.\n(So above, $m$ is standing for $2$ and $n$ is standing for $5$.)\n\nThen we can do this by splitting both up into $\\frac{1}{m \\times n}$-blocks.\n\nIndeed, $\\frac{1}{n} = \\frac{m}{m \\times n}$ (i.e. we can make a $\\frac{1}{n}$-block out of $m$ of the $\\frac{1}{m \\times n}$ tiny-blocks),\nand $\\frac{1}{m} = \\frac{n}{m \\times n}$ (i.e. we can make a $\\frac{1}{m}$-block out of $n$ of the $\\frac{1}{m \\times n}$ tiny-blocks).\n%%\n\n## Aside: arithmetical rules\n\nThe description of the \"general way\" above has become pretty full of $\\frac{1}{\\text{thing}}$, and refers less and less to the building-blocks we originally considered.\nThe reason for this is that $\\frac{1}{\\text{thing}}$ is an extremely convenient shorthand, and the \"general way\" can be written as a very simple rule for manipulating that shorthand.\nOnce we have such a convenient rule, we can compute with fractions without having to go through the bother of working out what each fraction really and truly *means* (in terms of the building-blocks); this saves a lot of time in practice.\nAs an analogy, it's easier for you to drive a car if, every time you change gears, you *don't* have to work out what's happening inside the gearbox.\n\nThe simple rule we have just seen is $$\\frac{1}{m} + \\frac{1}{n} = \\frac{n}{m \\times n} + \\frac{m}{m \\times n}$$\nAnd we've already had the rule that $$\\frac{a}{m} + \\frac{b}{m} = \\frac{a+b}{m}$$\n(Remember, if these are not obvious to you, you should think back to what they really mean, in terms of the building-blocks. 
The first rule is to do with dividing up a big square into little squares, from the previous subsection; the second rule is just taking two collections of building-blocks of the same size and putting them together.)\n\n\nThis is almost every rule we need to add any pair of fractions!\nWe can now add single building blocks of any size together.\n\nTo add any pair of rational numbers together - say of building-block size $\\frac{1}{m}$ and $\\frac{1}{n}$ respectively - we just need to express them each in terms of the smaller building-block $\\frac{1}{m \\times n}$, and add them together as normal.\n\n%%hidden(Example):\nEarlier, we considered $\\frac{5}{4} + \\frac{5}{3}$.\n\nA building-block which can make both the $\\frac{1}{4}$ and $\\frac{1}{3}$-sized blocks is the $\\frac{1}{12}$-block, because $3 \\times 4 = 12$.\n\nTherefore we need to express both our fractions as being made from $\\frac{1}{12}$-sized blocks.\n\n$\\frac{5}{4}$ is $\\frac{15}{12}$, because in each of the five $\\frac{1}{4}$-blocks, there are three $\\frac{1}{12}$-blocks, so we have a total of $5 \\times 3$ blocks of size $\\frac{1}{12}$.\n\n$\\frac{5}{3}$ is $\\frac{20}{12}$, because in each of the five $\\frac{1}{3}$-blocks, there are four $\\frac{1}{12}$-blocks, so we have a total of $5 \\times 4 = 20$ blocks of size $\\frac{1}{12}$.\n\nTherefore our final answer is $\\frac{15}{12} + \\frac{20}{12} = \\frac{35}{12}$.\n%%\n\nTo make this into a general rule: $$\\frac{a}{m} + \\frac{b}{n} = \\frac{a \\times n}{m \\times n} + \\frac{b \\times m}{m \\times n} = \\frac{a \\times n + b \\times m}{m \\times n}$$\n\n(Recall the order of operations in the integers: the notation $a \\times n + b \\times m$ means \"do $a \\times n$; then do $b \\times m$; then finally add them together\". Multiplication comes before addition.)\n\nAnd that's it! That's how we add rational numbers together, and it works even when any or all of $a, b, m, n$ are negative.\n(Remember, though, that $m$ and $n$ can't be zero, because it makes no sense to divide something up into no pieces.)\n\nIt would be wise now to try the [exercises](https://arbital.com/p/55p). 
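(An aside for readers who know a little programming: the general rule above is easy to turn into a few lines of code, which can double as an answer-checker for the exercises. The sketch below is our own illustration, not part of the original lesson, and the function name `add_fractions` is simply a label we chose.)

```python
from fractions import Fraction

def add_fractions(a, m, b, n):
    """Add a/m + b/n using the rule (a*n + b*m) / (m*n) from the text."""
    assert m != 0 and n != 0, "denominators can't be zero"
    return Fraction(a * n + b * m, m * n)  # Fraction also reduces the result for us

print(add_fractions(5, 3, 5, 4))   # the worked example above: 5/3 + 5/4 = 35/12
print(add_fractions(1, 2, -1, 3))  # the rule works with negative numerators too: 1/6
```

(A computer check like this is no substitute for working through the additions by hand, though.)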
One learns mathematics through doing, much more than through simply reading; your understanding will be cemented by going through some concrete examples.", "date_published": "2016-08-13T12:14:24Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["Addition is \"putting two quantities next to each other and working out how much they come to when combined\"."], "tags": ["Math 0", "Needs image", "Image requested"], "alias": "55m"} {"id": "eeb5ed5c31c559159bb63561f24e471c", "title": "Addition of rational numbers exercises", "url": "https://arbital.com/p/addition_of_rational_numbers_exercises", "source": "arbital", "source_type": "text", "text": "This page consists of exercises designed to help you get to grips with the addition of rational numbers.\n\n# Core exercises\n\n## First exercise: $\\frac{1}{10} + \\frac{1}{5}$\n\n%%hidden(Show some possible solutions):\nUsing the instant rule from the text (which is actually a bit unwieldy here): $$\\frac{1}{10} + \\frac{1}{5} = \\frac{1 \\times 5 + 10 \\times 1}{10 \\times 5} = \\frac{5+10}{50} = \\frac{15}{50}$$\nNotice that this can actually be made simpler: it is the same thing as $\\frac{3}{10}$, because when we take $\\frac{3}{10}$ and split each $\\frac{1}{10}$-block into five equal pieces, we obtain $15$ copies of the $\\frac{1}{50}$-block.\n\nAlternatively, one could spot that $\\frac{1}{10}$-blocks actually can already be used to make $\\frac{1}{5}$-blocks: $\\frac{1}{5} = \\frac{2}{10}$.\nTherefore we're actually trying to find $\\frac{1}{10} + \\frac{2}{10}$, which is easy: it's $\\frac{3}{10}$.\n%%\n\n## Second exercise: $\\frac{1}{15} + \\frac{1}{10}$\n\n%%hidden(Show some possible solutions):\nUsing the instant rule from the text: $$\\frac{1}{10} + \\frac{1}{15} = \\frac{1 \\times 15 + 10 \\times 1}{10 \\times 15} = \\frac{25}{150} = \\frac{1}{6}$$\n\nWere you expecting a big denominator, or at least a multiple of 5? 
From this example, we can see that the final answer of a rational addition problem can have a denominator which doesn't even seem related to the others.\n\nAlternatively, one could spot that $\\frac{1}{30}$-blocks will make up $\\frac{1}{10}$- and $\\frac{1}{15}$-blocks.\nThen we are actually trying to find $\\frac{3}{30} + \\frac{2}{30} = \\frac{5}{30}$; it's a bit easier to see that $\\frac{5}{30} = \\frac{1}{6}$ than it is to see that $\\frac{25}{150} = \\frac{1}{6}$.\n%%\n\n## Third exercise: $\\frac{1}{10} + \\frac{1}{15}$\nYou might find this exercise a little familiar…\n\n%%hidden(Show possible solution):\nUsing the instant rule from the text: $$\\frac{1}{15} + \\frac{1}{10} = \\frac{1 \\times 10 + 15 \\times 1}{15 \\times 10} = \\frac{25}{150} = \\frac{1}{6}$$\n\nYou may notice that we've basically done the same calculations as in the second exercise.\nIn fact, addition doesn't care which way round the numbers go: $\\frac{1}{10} + \\frac{1}{15} = \\frac{1}{15} + \\frac{1}{10}$, even if we don't already know that that number is $\\frac{1}{6}$.\n\nThis is intuitive from the fact that addition is the idea of \"place the apples next to each other and count up the total\": just putting the apples down in a different order doesn't change the total amount of apple.\n%%\n\n## Fourth exercise: $\\frac{0}{5} + \\frac{2}{5}$\n\n%%hidden(Show possible solution):\nNotice that both the denominators are the same (namely $5$), so we can just combine the $\\frac{1}{5}$-sized pieces straight away.\nWe have $0$ pieces and $2$ pieces, so the total is $2$ pieces.\n\nThat is, the answer is $\\frac{2}{5}$.\n%%\n\n## Fifth exercise: $\\frac{0}{7} + \\frac{2}{5}$\n\n%%hidden(Show possible solution):\nIf you spot that this is \"no $\\frac{1}{7}$-pieces\" next to \"two $\\frac{1}{5}$-pieces\", then you might just immediately write down that the answer is $\\frac{2}{5}$ because there aren't any $\\frac{1}{7}$-pieces to change the answer; and you'd be correct.\n\nTo use the instant rule from the text: $$\\frac{0}{7} + \\frac{2}{5} = \\frac{0 \\times 5 + 2 \\times 7}{5 \\times 7} = \\frac{0 + 14}{35} = \\frac{14}{35}$$\n\nBut that is the same as $\\frac{2}{5}$ (simply expressed with each $\\frac{1}{5}$-piece subdivided further into sevenths).\n%%\n\n# Extension exercises\n\nThese exercises are meant to be harder and to stretch your conceptual understanding.\nGive them a proper go, but don't worry too much if you don't get the same answers as me.\nMine are, in a technical sense, \"right\", but no matter what you end up with, you will derive a lot of benefit from trying to work out what the answers are yourself without having been told exactly how.\nThe learning of mathematics is much more about thinking and understanding (usually guided by examples) than it is about just repeatedly carrying out calculations.\n\n## First extension exercise: $\\frac{1}{15} + \\frac{-1}{10}$\nYes, it is possible to add a negative number of chunks.\nTry using the instant rule and see what happens.\n\n%%hidden(Show possible solution):\nUsing the instant rule from the text: $$\\frac{1}{15} + \\frac{-1}{10} = \\frac{1 \\times 10 + 15 \\times (-1)}{15 \\times 10} = \\frac{10 - 15}{150} = \\frac{-5}{150} = \\frac{-1}{30}$$\n\nWhat has happened here? 
What have we \"really done\" with our chunks of apple?\nHave a think; we'll see a lot more of this when we get to [subtraction](https://arbital.com/p/56x).\n%%\n\n## Second extension exercise: what rational number must we add to $\\frac{7}{8}$ to obtain $\\frac{13}{8}$?\n\nYou're looking for an answer that looks like $\\frac{a}{b}$ where there are integers in place of $a$ and $b$.\n\n%%hidden(Show some possible solutions):\n\nSomething you can do to make this question easier is to notice that both the numbers have the same chunk-size (namely $\\frac{1}{8}$), so we might try adding some number of $\\frac{1}{8}$-chunks.\nThen we're trying to get from $7$ chunks to $13$ chunks, so we need to add $6$ chunks.\n\nThat is, the final number is $\\frac{6}{8}$ (which is also $\\frac{3}{4}$).\n%%\n\n## Third extension exercise: what rational number must we add to $\\frac{7}{8}$ to obtain $\\frac{13}{7}$?\n\nYou're looking for an answer that looks like $\\frac{a}{b}$ where there are integers in place of $a$ and $b$.\n\n%%hidden(Show some possible solutions):\n\nNow, the numbers are no longer of the same chunk-size, so we should make it so that they are of the same size.\n\nThe chunk-size to use is $\\frac{1}{8 \\times 7} = \\frac{1}{56}$. The reason for this is the same as the reasoning we saw when working out how to add $\\frac{1}{8}$ and $\\frac{1}{7}$.\n\nThen the two numbers are $\\frac{7 \\times 7}{7 \\times 8} = \\frac{49}{56}$ and $\\frac{8 \\times 13}{8 \\times 7} = \\frac{104}{56}$, so the answer is that we need to get from $49$ to $104$; to do that, we need to add $55$ chunks of size $\\frac{1}{56}$, so the answer is $\\frac{55}{56}$.\n%%", "date_published": "2016-08-01T07:41:23Z", "authors": ["Patrick Stevens", "Joe Zeng"], "summaries": ["This page consists of exercises designed to help you get to grips with the addition of rational numbers."], "tags": [], "alias": "55p"} {"id": "1df700f6592ce4699e6a8d3d7bd81551", "title": "Well-ordered set", "url": "https://arbital.com/p/well_ordered_set", "source": "arbital", "source_type": "text", "text": "A **well-ordered set** is a [https://arbital.com/p/-540](https://arbital.com/p/-540) $(S, \\leq)$, such that for any nonempty subset $U \\subset S$ there is some $x \\in U$ such that for every $y \\in U$, $x \\leq y$; that is, every nonempty subset of $S$ has a least element.\n\nAny finite totally ordered set is well-ordered. The simplest [infinite](https://arbital.com/p/infinity) well-ordered set is [$\\mathbb N$](https://arbital.com/p/45h), also called [$\\omega$](https://arbital.com/p/ordinal_omega) in this context.\n\nEvery well-ordered set is [isomorphic](https://arbital.com/p/4f4) to a unique [https://arbital.com/p/-ordinal_number](https://arbital.com/p/-ordinal_number), and thus any two well-ordered sets are comparable.\n\nThe order $\\leq$ is called a \"well-ordering,\" despite the fact that \"well\" is usually an adverb.\n\n#Induction on a well-ordered set\n\n[https://arbital.com/p/mathematical_induction](https://arbital.com/p/mathematical_induction) works on any well-ordered set. On well-ordered sets longer than $\\mathbb N$, this is called [https://arbital.com/p/-transfinite_induction](https://arbital.com/p/-transfinite_induction). \n\nInduction is a method of proving a statement $P(x)$ for all elements $x$ of a well-ordered set $S$. Instead of directly proving $P(x)$, you prove that if $P(y)$ holds for all $y < x$, then $P(x)$ is true. 
This suffices to prove $P(x)$ for all $x \\in S$.\n\n%%hidden(Show proof):\nLet $U = \\{x \\in S \\mid \\neg P(x) \\}$ be the set of elements of $S$ for which $P$ doesn't hold, and suppose $U$ is nonempty. Since $S$ is well-ordered, $U$ has a least element $x$. That means $P(y)$ is true for all $y < x$, which implies $P(x)$. So $x \\not\\in U$, which is a contradiction. Hence $U$ is empty, and $P$ holds on all of $S$.\n%%", "date_published": "2016-07-07T15:09:09Z", "authors": ["Dylan Hendrickson", "Joe Zeng"], "summaries": [], "tags": [], "alias": "55r"} {"id": "1e03ae3db8be9e3f76c712e591ef38e9", "title": "Needs parent", "url": "https://arbital.com/p/needs_parent_meta_tag", "source": "arbital", "source_type": "text", "text": "This page is not attached to an appropriate parent page.", "date_published": "2016-07-14T21:02:15Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Stub"], "alias": "55s"} {"id": "b2cfada7a6e2a7d840d4d8150b1ffc50", "title": "Löb's theorem", "url": "https://arbital.com/p/lobs_theorem", "source": "arbital", "source_type": "text", "text": "We trust Peano Arithmetic to correctly capture certain features of the [standard model of arithmetic](https://arbital.com/p/). Furthermore, we know that Peano Arithmetic is expressive enough to [talk about itself](https://arbital.com/p/31z) in meaningful ways. So it would certainly be great if Peano Arithmetic asserted what now is an intuition: that everything it proves is certainly true.\n\nIn formal notation, let $Prv$ stand for the [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt) of $PA$. Then, $Prv(T)$ is true if and only if there is a proof from the axioms and rules of inference of $PA$ of $T$. Then what we would like $PA$ to say is that $Prv(S)\\implies S$ for every sentence $S$.\n\nBut alas, $PA$ suffers from a problem of self-trust.\n\nLöb's theorem states that if $PA\\vdash Prv(S)\\implies S$ then $PA\\vdash S$. 
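In schematic form (this restatement is ours, using the same notation as the rest of the page): for every sentence $S$,

$$PA \vdash \big(Prv(S) \implies S\big) \;\Longrightarrow\; PA \vdash S, \qquad \text{or, contrapositively,} \qquad PA \nvdash S \;\Longrightarrow\; PA \nvdash \big(Prv(S) \implies S\big).$$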
This immediately implies that if $PA$ is consistent, the sentence $Prv(S)\\implies S$ is not provable when $S$ is false, even though according to our intuitive understanding of the standard model every sentence of this form must be true.\n\nThus, $PA$ is incomplete, and fails to prove a particular set of sentences that would massively increase our confidence in it.\n\nNotice that [Gödel's second incompleteness theorem](https://arbital.com/p/godels_second_incompleteness_theorem) follows immediately from Löb's theorem: if $PA$ is consistent, then by Löb's theorem $PA\\nvdash Prv(0=1)\\implies 0=1$, which by the propositional calculus implies $PA\\nvdash \\neg Prv(0=1)$.\n\nIt is worth remarking that Löb's theorem applies not only to the standard provability predicate, but to every predicate satisfying the [Hilbert-Bernays derivability conditions](https://arbital.com/p/).", "date_published": "2016-07-30T02:03:46Z", "authors": ["Patrick LaVictoire", "Eric Rogstad", "Malcolm McCrimmon", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["If $PA\\vdash Prv_{PA}(A)\\implies A$ then $PA\\vdash A$"], "tags": ["Needs summary", "Needs accessible summary", "Stub"], "alias": "55w"} {"id": "effe055a501e2978d0c6ddf10412da8c", "title": "Frequency diagrams: A first look at Bayes", "url": "https://arbital.com/p/bayes_frequency_diagram_diseasitis", "source": "arbital", "source_type": "text", "text": "Bayesian reasoning is about how to revise our beliefs in the light of evidence.\n\nWe'll start by considering one scenario in which the strength of the evidence has clear numbers attached.\n\n(Don't worry if you don't know how to solve the following problem. We'll see shortly how to solve it.)\n\nSuppose you are a nurse screening a set of students for a sickness called Diseasitis.%note: Literally \"inflammation of the disease\".%\n\n- You know, from past population studies, that around 20% of the students will have Diseasitis at this time of year.\n\nYou are testing for Diseasitis using a color-changing tongue depressor, which usually turns black if the student has Diseasitis.\n\n- Among patients with Diseasitis, 90% turn the tongue depressor black.\n- However, the tongue depressor is not perfect, and also turns black 30% of the time for healthy students.\n\nOne of your students comes into the office, takes the test, and turns the tongue depressor black. What is the probability that they have Diseasitis?\n\n(*If* you think you see how to do it, you can try to solve this problem before continuing. To quickly see if you got your answer right, you can expand the \"Answer\" button below; the derivation will be given shortly.)\n\n%hidden(Answer): The probability a student with a blackened tongue depressor has Diseasitis is 3/7, roughly 43%. %\n\nThis problem can be solved a hard way or a clever easy way. We'll walk through the hard way first.\n\nFirst, we imagine a population of 100 students, of whom 20 have Diseasitis and 80 don't.%%note:Multiple studies show that thinking about concrete numbers such as \"20 out of 100 students\" or \"200 out of 1000 students\" is more likely to produce correct spontaneous reasoning on these problems than thinking about percentages like \"20% of students.\" E.g. \"[Probabilistic reasoning in clinical medicine](https://faculty.washington.edu/jmiyamot/p548/eddydm%20prob%20reas%20i%20clin%20medicine.pdf)\" by David M. 
Eddy (1982).%%\n\n![prior frequency](https://i.imgur.com/gvcoqQN.png)\n\n90% of sick students turn their tongue depressor black, and 30% of healthy students turn the tongue depressor black. So we see black tongue depressors on 90% * 20 = 18 sick students, and 30% * 80 = 24 healthy students.\n\n![posterior frequency](https://i.imgur.com/0XDbrYi.png)\n\nWhat's the probability that a student with a black tongue depressor has Diseasitis? From the diagram, there are 18 sick students with black tongue depressors. 18 + 24 = 42 students in total turned their tongue depressors black. Imagine reaching into a bag of all the students with black tongue depressors, and pulling out one of those students at random; what's the chance a student like that is sick?\n\n![conditional probability](http://i.imgur.com/W6avfHO.png?1)\n\nThe final answer is that a patient with a black tongue depressor has an 18/42 = 3/7 = 43% probability of being sick.\n\nMany medical students have at first found this answer counter-intuitive: The test correctly detects Diseasitis 90% of the time! If the test comes back positive, why is it still less than 50% likely that the patient has Diseasitis? Well, the test also incorrectly \"detects\" Diseasitis 30% of the time in a healthy patient, and we start out with lots more healthy patients than sick patients.\n\nThe test does provide *some* evidence in favor of the patient being sick. The probability of a patient being sick goes from 20% before the test to 43% after we see the tongue depressor turn black. But this isn't conclusive, and we need to perform further tests, maybe more expensive ones.\n\n*If* you feel like you understand this problem setup, consider trying to answer the following question before proceeding: What's the probability that a student who does *not* turn the tongue depressor black - a student with a negative test result - has Diseasitis? Again, we start out with 20% sick and 80% healthy students, 70% of healthy students will get a negative test result, and only 10% of sick students will get a negative test result.\n\n%%hidden(Answer):Imagine 20 sick students and 80 healthy students. 10% * 20 = 2 sick students have negative test results. 70% * 80 = 56 healthy students have negative test results. Among the 2+56=58 total students with negative test results, 2 students are sick students with negative test results. So 2/58 = 1/29 = 3.4% of students with negative test results have Diseasitis.%%\n\n%if-before([https://arbital.com/p/1x1](https://arbital.com/p/1x1)): Now let's turn to a faster, easier way to solve the same problem.%\n%!if-before([https://arbital.com/p/1x1](https://arbital.com/p/1x1)): For a more clever way to perform the same calculation, see [https://arbital.com/p/1x1](https://arbital.com/p/1x1).%", "date_published": "2016-10-25T23:46:18Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Sachin Krishnan", "Anthony Mercuri", "Akanksha rawat", "Nate Soares", "jj jj", "David Salamon", "Eric Bruylant", "John Mahony", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class"], "alias": "55z"} {"id": "afb3f95039402725121b37bf1532b358", "title": "Frequency diagram", "url": "https://arbital.com/p/bayes_frequency_diagram", "source": "arbital", "source_type": "text", "text": "Frequency diagrams, like [waterfall diagrams](https://arbital.com/p/1wy), provide a way of visualizing [Bayes' Rule](https://arbital.com/p/1lz). 
For example, if 20% of the patients in the screening population are sick (red) and 80% are healthy (blue); and 90% of the sick patients get positive test results; and 30% of the healthy patients get positive test results, we could visualize the probabilities as proportions of a large population, as in the following frequency diagram:\n\n![Frequency diagram](https://i.imgur.com/0XDbrYi.png)\n\nSee [https://arbital.com/p/55z](https://arbital.com/p/55z) for a walkthrough of the diagram.", "date_published": "2016-07-06T21:04:25Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": ["Frequency diagrams, like [waterfall diagrams](https://arbital.com/p/1wy), provide a way of visualizing [Bayes' Rule](https://arbital.com/p/1lz). For example, if 20% of the patients in the screening population are sick (red) and 80% are healthy (blue); and 90% of the sick patients get positive test results; and 30% of the healthy patients get positive test results, we could visualize the probabilities as proportions of a large population, as in the following frequency diagram:\n\n![Frequency diagram](https://i.imgur.com/0XDbrYi.png)\n\nThen, when asking about the probability that a patient with a black tongue depressor is sick, we can divide the number of sick patients with black tongue depressors (18 in this case) by the number of total patients with black tongue depressors (18 + 24, in this case)."], "tags": ["C-Class"], "alias": "560"} {"id": "5bfdcdd19f22b1cf127ff68e9a5f2b59", "title": "Odds: Introduction", "url": "https://arbital.com/p/odds_intro", "source": "arbital", "source_type": "text", "text": "Let's say we have a bag containing twice as many blue marbles as red marbles. Then, if you reach in without looking and pick out a marble at random, the odds are 2 : 1 in favor of drawing a blue marble as opposed to a red one.\n\nOdds express *relative* quantities. 2 : 1 odds are the same as 4 : 2 odds, which are the same as 600 : 300 odds. For example, if the bag contains 1 red marble and 2 blue marbles, or 2 red marbles and 4 blue marbles, then your chance of pulling out a red marble is the same in both cases:\n\n![](https://i.imgur.com/IcsOXl0.png?0)\n\nIn other words, given odds of $(x : y)$ we can scale them by any positive number $\\alpha$ to get equivalent odds of $(\\alpha x : \\alpha y).$ \n\n# Converting odds to probabilities\n\nIf there were also green marbles, the *relative odds* for red *versus* blue would still be (1 : 2), but the *probability* of drawing a red marble would be lower.\n\n![](https://i.imgur.com/ooyn9Py.png?0)\n\nIf red, blue, and green are the only kinds of marbles in the bag, then we can turn odds of $(r : b : g)$ into probabilities $(p_r : p_b : p_g)$ that say the probability of drawing each kind of marble. Because red, blue, and green are the only possibilities, $p_r + p_g + p_b$ must equal 1, so $(p_r : p_b : p_g)$ must be odds equivalent to $(r : b : g)$ but \"normalized\" such that they sum to one. For example, $(1 : 2 : 1)$ would normalize to $\\frac{1}{4} : \\frac{2}{4} : \\frac{1}{4},$ which are the probabilities of drawing a red / blue / green marble (respectively) from the bag on the right above.\n\nNote that if red and blue are not the only possibilities, then it doesn't make sense to convert the odds $(r : b)$ of red vs blue into a probability. For example, if there are 100 green marbles, one red marble, and two blue marbles, then the odds of red vs blue are 1 : 2, but the probability of drawing a red marble is much lower than 1/3! 
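(Here is a small Python sketch, our own illustration rather than anything from the original article, that makes the point concrete: normalizing the full odds over every kind of marble gives the actual probabilities, while normalizing just the red-vs-blue odds does not.)

```python
def normalize(odds):
    """Turn a tuple of odds into probabilities by dividing each term by the sum of the terms."""
    total = sum(odds)
    return tuple(term / total for term in odds)

# 1 red : 2 blue : 100 green marbles in the bag.
print(normalize((1, 2, 100)))  # P(red) is about 0.0097, far below 1/3
# Normalizing only the red-vs-blue odds ignores the green marbles entirely:
print(normalize((1, 2)))       # gives 1/3 and 2/3, which are not the true draw probabilities here
```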
Odds can only be converted into probabilities if their terms are [https://arbital.com/p/-1rd](https://arbital.com/p/-1rd).\n\nImagine a forest with some sick trees and some healthy trees, where the odds of a tree being sick (as opposed to healthy) are (2 : 3), and every tree is either sick or healthy (there are no in-between states). Then the probability of randomly picking a sick tree from among *all* trees is 2 / 5, because 2 out of every (2 + 3) trees are sick.\n\n![](https://i.imgur.com/GVZnz2c.png?0)\n\nIn general, the operation we're doing here is taking relative odds like $(a : b : c \\ldots)$ and dividing each term by the sum $(a + b + c \\ldots)$ to produce $$\\left(\\frac{a}{a + b + c \\ldots} : \\frac{b}{a + b + c \\ldots} : \\frac{c}{a + b + c \\ldots}\\ldots\\right)$$ Dividing each term by the sum of all terms gives us an equivalent set of odds (because each element is divided by the same amount) whose terms sum to 1.\n\nThis process of dividing a set of odds by the sum of its terms to get a set of probabilities that sum to 1 is called [normalization](https://arbital.com/p/1rk).\n\n# Converting probabilities to odds\n\nLet's say we have two events R and B, which might be things like \"I draw a red marble\" and \"I draw a blue marble.\" Say $\\mathbb P(R) = \\frac{1}{4}$ and $\\mathbb P(B) = \\frac{1}{2}.$ What are the odds of R vs B? $\\mathbb P(R) : \\mathbb P(B) = \\left(\\frac{1}{4} : \\frac{1}{2}\\right),$ of course.\n\nEquivalently, we can take the odds $\\left(\\frac{\\mathbb P(R)}{\\mathbb P(B)} : 1\\right)$, because $\\frac{\\mathbb P(R)}{\\mathbb P(B)}$ is how many times as likely R is as B. In this example, $\\frac{\\mathbb P(R)}{\\mathbb P(B)} = \\frac{1}{2},$ because R is half as likely as B. Sometimes, the quantity $\\frac{\\mathbb P(R)}{\\mathbb P(B)}$ is called the \"odds ratio of R vs B,\" in which case it is understood that the odds for R vs B are $\\left(\\frac{\\mathbb P(R)}{\\mathbb P(B)} : 1\\right).$\n\n# Odds to ratios\n\nWhen there are only two terms $x$ and $y$ in a set of odds, the odds can be written as a ratio $\\frac{x}{y}.$ The odds *ratio* $\\frac{x}{y}$ refers to the *odds* $(x : y),$ or, equivalently, $\\left(\\frac{x}{y} : 1\\right).$", "date_published": "2016-10-11T17:27:48Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["C-Class", "Low-speed explanation"], "alias": "561"} {"id": "b370b14ae917b705d210f0b8b8ddc75d", "title": "Odds: Refresher", "url": "https://arbital.com/p/odds_refresher", "source": "arbital", "source_type": "text", "text": "Let's say that, in a certain forest, there are 2 sick trees for every 3 healthy trees. We can then say that the odds of a tree being sick (as opposed to healthy) are $(2 : 3).$\n\nOdds express *relative* chances. 
Saying \"There's 2 sick trees for every 3 healthy trees\" is the same as saying \"There's 10 sick trees for every 15 healthy trees.\" If the original odds are $(x : y)$ we can multiply by a positive number $\\alpha$ and get a set of equivalent odds $(\\alpha x : \\alpha y).$ \n\nIf there's 2 sick trees for every 3 healthy trees, and every tree is either sick or healthy, then the *probability* of randomly picking a sick tree from among *all* trees is 2/(2+3):\n\n![Odds v probabilities](https://i.imgur.com/GVZnz2c.png?0)\n\nIf the set of possibilities $A, B, C$ are [mutually exclusive and exhaustive](https://arbital.com/p/1rd), then the probabilities $\\mathbb P(A) + \\mathbb P(B) + \\mathbb P(C)$ should sum to $1.$ If there's no further possibilities $d,$ we can convert the relative odds $(a : b : c)$ into the probabilities $(\\frac{a}{a + b + c} : \\frac{b}{a + b + c} : \\frac{c}{a + b + c}).$ The process of dividing each term by the sum of terms, to turn a set of proportional odds into probabilities that sum to 1, is called [normalization](https://arbital.com/p/1rk).\n\nWhen there are only two terms $x$ and $y$ in the odds, they can be expressed as a single ratio $\\frac{x}{y}.$ An odds ratio of $\\frac{x}{y}$ refers to odds of $(x : y),$ or, equivalently, odds of $\\left(\\frac{x}{y} : 1\\right).$ Odds of $(x : y)$ are sometimes called odds ratios, where it is understood that the actual ratio is $\\frac{x}{y}.$", "date_published": "2016-10-12T22:38:46Z", "authors": ["Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Start", "High-speed explanation"], "alias": "562"} {"id": "45dff4e2cdd9faccf78379dc12897ade", "title": "Conditional probability: Refresher", "url": "https://arbital.com/p/conditional_probability_refresher", "source": "arbital", "source_type": "text", "text": "$\\mathbb P(\\text{left} \\mid \\text{right})$ is the probability that the thing on the left is true _assuming_ the thing on the right is true, and it's defined as $\\frac{\\mathbb P(\\text{left} \\land \\text{right})}{\\mathbb P(\\text{right})}.$\n\nThus, $\\mathbb P(yellow \\mid banana)$ is the probability that a banana is yellow (\"the probability of yellowness given banana\"), while $\\mathbb P(banana \\mid yellow)$ is the probability that a yellow object is a banana (\"the probability of banana, given yellowness\").%%note: In general, $\\mathbb P(v)$ is an abbreviation of $\\mathbb P(V = v)$ for some variable $V$, which is assumed to be known from the context. For example, $\\mathbb P(yellow)$ might stand for $\\mathbb P({ColorOfNextObjectInBag}=yellow)$ where $ColorOfNextObjectInBag$ is a [variable](https://arbital.com/p/random_variable) in our [https://arbital.com/p/-probability_distribution](https://arbital.com/p/-probability_distribution) $\\mathbb P,$ and $yellow$ is one of the [values](https://arbital.com/p/random_variable_value) that that variable can take on.%%\n\n$\\mathbb P(x \\land y)$ is used to denote the probability of both $x$ and $y$ being true simultaneously (according to some [https://arbital.com/p/-probability_distribution](https://arbital.com/p/-probability_distribution) $\\mathbb P$). 
$\\mathbb P(x\\mid y)$, pronounced \"the conditional probability of x, given y\", is defined to be the quantity\n\n$$\\frac{\\mathbb P(x \\wedge y)}{\\mathbb P(y)}.$$\n\nFor example, in the [Diseasitis](https://arbital.com/p/22s) problem, $\\mathbb P({sick}\\mid {positive})$ is the probability that a patient is sick _given_ a positive test result, and it's calculated by taking the 18% of patients who are sick *and* have positive test results, and dividing by all 42% of the patients who got positive test results. That is, $\\mathbb P({sick}\\mid {positive})$ $=$ $\\frac{\\mathbb P({sick} \\land {positive})}{\\mathbb P({positive})}.$\n\nUsing a [frequency diagram](https://arbital.com/p/bayes_frequency_diagrams), we can visualize $\\mathbb P(sick \\mid positive)$ as the probability of drawing a $sick$ result from a bag of only those people in the population who got a $positive$ result.\n\n![diseasitis frequency](https://i.imgur.com/0XDbrYi.png?0)\n\n![bag of 18 and 24 patients](https://i.imgur.com/W6avfHO.png?0)\n\nThe \"given\" operator in $\\mathbb P(x\\mid y)$ tells us to assume that $y$ is true, to restrict our attention to only possible cases where $y$ is true, and then ask about the probability of $x$ *within* those cases.\n\nNote that $\\mathbb P(positive \\mid sick)$ is _not_ the same as $\\mathbb P(sick \\mid positive).$ To find the probability that a patient has a positive result given that they're sick, we can visualize taking the 20 sick patients and putting them in a group, and then asking the probability that a randomly selected one will have a positive result, which will be $\\frac{18}{20} = 0.9$ — so $\\mathbb P(positive \\mid sick) = 90\\%,$ while $\\mathbb P(sick \\mid positive) \\approx 43\\%.$ Mixing up which one is which is an unfortunate source of many practical errors when you're trying to do these calculations using only the formal notation, at least until you get used to it. Just remember that $\\mathbb P(\\text{left} \\mid \\text{right})$ is the probability of the thing on the left _given_ that the thing on the right is true.", "date_published": "2016-10-10T21:53:50Z", "authors": ["Adom Hartell", "Eric Bruylant", "Nate Soares", "Cameron McFee"], "summaries": ["$\\mathbb P(\\text{left} \\mid \\text{right})$ is the probability that the thing on the left is true _assuming_ the thing on the right is true, and it's defined as $\\frac{\\mathbb P(\\text{left} \\land \\text{right})}{\\mathbb P(\\text{right})}.$\n\nThus, $\\mathbb P(yellow \\mid banana)$ is the probability that a banana is yellow (\"the probability of yellowness given banana\"), while $\\mathbb P(banana \\mid yellow)$ is the probability that a yellow object is a banana (\"the probability of banana, given yellowness\")."], "tags": ["C-Class"], "alias": "565"} {"id": "96d9c75b1c84af8a419d231e2432b117", "title": "The n-th root of m is either an integer or irrational", "url": "https://arbital.com/p/566", "source": "arbital", "source_type": "text", "text": "There is an intuitive way to see that for any natural numbers $m$ and $n$, $\\sqrt[n]{m}$ will always either be an integer or an irrational number.\n\nSuppose that there was some $\\sqrt[n]{m}$ that was a rational number $\\frac{a}{b}$ that was not an integer. 
Suppose further that $\\frac ab$ is written as a [https://arbital.com/p/-reduced](https://arbital.com/p/-reduced) fraction, such that the [https://arbital.com/p/-greatest_common_divisor](https://arbital.com/p/-greatest_common_divisor) of $a$ and $b$ is $1$. Then, since $\\frac{a}{b}$ is not an integer, $b > 1$.\n\nSince $\\frac ab = \\sqrt[n]{m}$, we have conversely that $(\\frac ab)^n = m$, which is a natural number by our hypothesis. But let's take a closer look at $(\\frac ab)^n$. It evaluates to $\\frac{a^n}{b^n}$, which is still a reduced fraction (since $a$ and $b$ share no common factor greater than $1$, neither do $a^n$ and $b^n$). \n\nAnd since $b > 1$, we have that $b^n > 1$ as well, meaning that $(\\frac ab)^n$ cannot be an integer, contradicting the fact that it equals $m$, a natural number.", "date_published": "2016-07-06T21:33:03Z", "authors": ["Eric Bruylant", "Sharma Kunapalli", "Joe Zeng"], "summaries": [], "tags": ["Proof", "Needs parent"], "alias": "566"} {"id": "6339052f5b24129e4b1a4d3c83b64dd4", "title": "Probability distribution: Motivated definition", "url": "https://arbital.com/p/probability_distribution_motivated_definition", "source": "arbital", "source_type": "text", "text": "When discussing probabilities, people will often (informally) say things like \"well, the probability $\\mathbb P(sick)$ of the patient being sick is about 20%.\" What does this $\\mathbb P(sick)$ notation mean?\n\nIntuitively, $\\mathbb P(sick)$ is supposed to denote the probability that a particular person is sick (on a scale from 0 to 1). But how is $\\mathbb P(sick)$ defined? Is there an objective probability of sickness? If not, where does the number come from?\n\nAt first you might be tempted to say $\\mathbb P(sick)$ is defined by the surrounding population: If 1% of people are sick at any given time, then maybe $\\mathbb P(sick)$ should be 1%. But what if this person is currently running a high fever and complaining about an upset stomach? Then we should probably assign a probability higher than 1%.\n\nNext you might be tempted to say that the _true_ probability of the person being sick is either 0 or 1 (because they're either sick or they aren't), but this observation doesn't really help us manage our _own_ uncertainty. It's all well and good to say \"either they're sick or they aren't,\" but if you're a doctor who has to choose which medication to prescribe (and different ones have different drawbacks), then you need some way of talking about how sick they _seem_ to be (given what you've seen).\n\nThis leads us to the notion of [subjective probability](https://arbital.com/p/4vr). _Your_ probability that a person is sick is a fact about _you._ They are either sick or healthy, and as you observe more facts about them (such as \"they're running a fever\"), your _personal_ belief in their health vs sickness changes. This is the idea that is used to define notation like $\\mathbb P(sick).$\n\nFormally, $\\mathbb P(sick)$ is defined to be the probability that $\\mathbb P$ assigns to $sick,$ where $\\mathbb P$ is a type of object known as a \"probability distribution\", which is an object designed for keeping track of (and managing) uncertainty. Specifically, probability distributions are objects that distribute a finite amount of \"stuff\" across a large number of \"states,\" and $\\mathbb P(sick)$ measures how much stuff $\\mathbb P$ _in particular_ puts on $sick$-type states. 
For example, the states could be cups with labels on them, and the stuff could be water, in which case $\\mathbb P(sick)$ would be the proportion of all water in the $sick$-labeled cups.\n\nThe \"stuff\" and \"states\" may be arbitrary: you can build a probability distribution out of water in cups, clay in cubbyholes, abstract numbers represented in a computer, or weightings between neurons in your head. The stuff is called \"probability mass,\" the states are called \"possibilities.\"\n\nTo be even more concrete, imagine you build $\\mathbb P$ out of cups and water, and that you give some of the cups suggestive labels like $sick$ and $healthy$. Then you can talk about the proportion of all probability-water that's in the $sick$ cup vs the $healthy$ cup. This is a probability distribution, but it's not a very useful one. In practice, we want to model more than one thing at a time. Let's say that you're a doctor at an immigration center who needs to assess a person's health, age, and country of origin. Now the possibilities that you want to represent aren't just $sick$ and $healthy;$ they're _all combinations_ of health, age, and origin:\n\n$$\n\\begin{align}\nsick, \\text{age }1, \\text{Afghanistan} \\\\\nhealthy, \\text{age }1, \\text{Afghanistan} \\\\\nsick, \\text{age }2, \\text{Afghanistan} \\\\\n\\vdots \\\\\nsick, \\text{age }29, \\text{Albania} \\\\\nhealthy, \\text{age }29, \\text{Albania} \\\\\nsick, \\text{age }30, \\text{Albania} \\\\\n\\vdots\n\\end{align}\n$$\n\nand so on. If you build this probability distribution out of cups, you're going to need a lot of cups. If there are 2 possible health states ($sick$ and $healthy$), 150 possible ages, and 196 possible countries, then the total number of cups you need in order to build this probability distribution is $2 \\cdot 150 \\cdot 196 = 58800,$ which is rather excessive. (There's a reason we do probabilistic reasoning using transistors and/or neurons, as opposed to cups with water in them).\n\nIn order to make this proliferation of possibilities manageable, the possibilities are usually arranged into columns, such as the \"Health\", \"Age\", and \"Country\" columns above. These columns are known as \"[variables](https://arbital.com/p/random_variable)\" of the distribution. Then, $\\mathbb P(sick)$ is an abbreviation for $\\mathbb P(\\text{Health}=sick),$ which counts the proportion of all probability mass (water) allocated to possibilities (cups) that have $sick$ in the Health column of their label.\n\nWhat's the point of doing all this setup? Once we've made a probability distribution, we can hook it up to the outside world such that, when the world interacts with the probability distribution, the probability mass is shifted around inside the cups. For example, if you have a rule which says \"whenever a person shows me a passport from country X, I throw out all water except the water in cups with X in the Country column\", then, whenever you see a passport, the probability distribution will get more accurate.\n\nThe natural question here is, what are the best ways to manipulate the probability mass in $\\mathbb P$ (in response to observations), if the goal is to have $\\mathbb P$ get more and more accurate over time? 
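(To make the cups-and-water picture concrete, here is a tiny Python sketch of the "passport" rule above. It is our own toy illustration with made-up numbers, not something from the original article.)

```python
# A toy "cups of water" distribution over (Health, Country) possibilities.
# The amounts of water are made up purely for illustration.
probability_mass = {
    ("sick", "Afghanistan"): 0.03,
    ("healthy", "Afghanistan"): 0.07,
    ("sick", "Albania"): 0.02,
    ("healthy", "Albania"): 0.88,
}

def observe_passport(mass, country):
    """The passport rule from the text: throw out all water except the water
    in cups whose Country column matches the observed passport."""
    return {cup: water for cup, water in mass.items() if cup[1] == country}

def probability_of_health(mass, health):
    """P(Health = health): the proportion of the remaining water that sits in
    cups whose Health column matches."""
    total = sum(mass.values())
    return sum(water for cup, water in mass.items() if cup[0] == health) / total

print(probability_of_health(probability_mass, "sick"))   # about 0.05 before any observation
after_passport = observe_passport(probability_mass, "Afghanistan")
print(probability_of_health(after_passport, "sick"))     # about 0.30 after seeing an Afghan passport
```

Exactly which rules for shifting the water around will reliably make $\mathbb P$ more accurate is the key question.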
That's exactly the sort of question that [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) can be used to answer (and it has implications both for artificial intelligence, and for understanding human intelligence — after all, _we ourselves_ are a physical system that manages uncertainty, and updates beliefs in response to observations).\n\nAt this point, there are two big objections to answer. First objection:\n\n> Whoa now, the number of cups in $\\mathbb P$ got pretty big pretty quickly, and this was a simple example. In a realistic probability distribution $\\mathbb P$ intended to represent the real world (which has way more than 3 variables worth tracking), the number of necessary possibilities would be _ridiculous._ Why do we define probabilities in terms of these huge impractical \"probability distributions\"?\n\nThis is an important question, which is answered by three points:\n\n1. In practice, there are a number of tricks for exploiting regularities in the structure of the world in order to drastically reduce the number of cups you need to track. We won't be covering those tricks in this guide, but you can check out [Realistic probabilities](https://arbital.com/p/) and [Arbital's guide to Bayes nets](https://arbital.com/p/) if you're interested in the topic.\n2. Even so, full-fledged probabilistic reasoning _is_ computationally infeasible on complex problems. In practice, physical reasoning systems (such as brains or artificial intelligence algorithms) use lots of approximations and shortcuts.\n3. Nevertheless, reasoning according to a full probability distribution is the _theoretical ideal_ for how to do good reasoning. [You can't do better than probabilistic reasoning](https://arbital.com/p/) (unless you're born knowing the right answers to everything), and [insofar as you don't use probabilistic reasoning, you can be exploited](https://arbital.com/p/). Even if complex probability distributions are too big to manage in practice, they give lots of hints about right and wrong ways to reason that we can follow in our day-to-day lives.\n\nSecond objection:\n\n> You basically just said \"given a bunch of cups and some water, we define the probability of a person being sick as the amount of water in some suggestively-labeled cups.\" How does that have anything to do with whether or not the person is actually sick? Just because you put a $sick$ label on there doesn't magically give the water meaning!\n\nThis is an important point. For $\\mathbb P$ to be useful, we want to design a reasoning procedure such that the more we interact with a person, the more probability mass starts to reflect how healthy the person actually is. That is, we want the water to go into $sick$ cups if they're sick, and $healthy$ cups if they're healthy. If our reasoning procedure has that property, and we have $\\mathbb P$ interact with the world for a while, then its probabilities will get pretty accurate — at which point $\\mathbb P$ can be used to answer questions and/or make decisions. (This is the principle that makes brains and artificial intelligence algorithms tick.)\n\nHow do we design reasoning mechanisms that cause $\\mathbb P$ to become more accurate the more it interacts with the world? That's a [big question](https://arbital.com/p/how_to_get_accurate_probabilities), and the answer has many parts. 
One of the most important parts of the answer, though, is a law of probability theory which tells us the correct way to move the probability mass around in response to new observations (assuming the goal is to make $\\mathbb P$ more accurate). For more on that law, see [Bayes' rule](https://arbital.com/p/1lz).", "date_published": "2016-07-06T22:49:26Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["B-Class"], "alias": "569"} {"id": "051895a6547474fabb1a0bad5a06764b", "title": "Math 3 example statements", "url": "https://arbital.com/p/math_3_examples", "source": "arbital", "source_type": "text", "text": "If you're at a Math 3 level, you'll probably be familiar with at least some of these sentences and formulas, or you would be able to understand what they meant on a surface level if you were to look them up. Note that you don't necessarily have to understand the *proofs* of these statements (that's what we're here for, to teach you what they mean), but your eyes shouldn't gloss over them either.\n\n> In a [group](https://arbital.com/p/3gd) $G$, the [https://arbital.com/p/-4bj](https://arbital.com/p/-4bj) of an element $g$ is the set of elements that can be written as $hgh^{-1}$ for some $h \\in G$.\n\n> The [rank-nullity theorem](https://arbital.com/p/) states that for any [https://arbital.com/p/-linear_mapping](https://arbital.com/p/-linear_mapping) $f: V \\to W$, the [https://arbital.com/p/-dimension](https://arbital.com/p/-dimension) of the [https://arbital.com/p/-image](https://arbital.com/p/-image) of $f$ plus the dimension of the [https://arbital.com/p/-kernel](https://arbital.com/p/-kernel) of $f$ is equal to the dimension of $V$.\n\n> A [Baire space](https://arbital.com/p/) is a space that satisfies [Baire's Theorem](https://arbital.com/p/) on [complete metric spaces](https://arbital.com/p/complete_metric_space): For a [https://arbital.com/p/-topological_space](https://arbital.com/p/-topological_space) $X$, if $\\{F_1, F_2, F_3, \\ldots\\}$ is a [countable](https://arbital.com/p/countable_set) collection of open sets that are [dense](https://arbital.com/p/dense_set) in $X$, then $\\bigcap_{n=1}^\\infty F_n$ is also dense in $X$.\n\n> The [https://arbital.com/p/riemann_hypothesis](https://arbital.com/p/riemann_hypothesis) asserts that every non-trivial zero of the [https://arbital.com/p/riemann_zeta_function](https://arbital.com/p/riemann_zeta_function) $\\zeta(s) = \\sum_{n=1}^\\infty \\frac{1}{n^s}$, viewed as a function of a complex number $s$, has a real part equal to $\\frac12$.\n\n> $\\newcommand{\\pd}[2]{\\frac{\\partial #1}{\\partial #2}}$ The [https://arbital.com/p/jacobian_matrix](https://arbital.com/p/jacobian_matrix) of a [vector-valued function](https://arbital.com/p/vector_valued_function) $f: \\mathbb{R}^m \\to \\mathbb{R}^n$ is the matrix of [https://arbital.com/p/-partial_derivatives](https://arbital.com/p/-partial_derivatives) $\\left[\\begin{matrix} \\pd{y_1}{x_1} & \\pd{y_1}{x_2} & \\cdots & \\pd{y_1}{x_m} \\\\ \\pd{y_2}{x_1} & \\pd{y_2}{x_2} & \\cdots & \\pd{y_2}{x_m} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\pd{y_n}{x_1} & \\pd{y_n}{x_2} & \\cdots & \\pd{y_n}{x_m} \\end{matrix} \\right]$ between each component of the argument vector $x = (x_1, x_2, \\ldots, x_m)$ and each component of the result vector $y = f(x) = (y_1, y_2, \\ldots, y_n)$. 
It is notated as $\\displaystyle \\frac{d\\mathbf{y}}{d\\mathbf{x}}$ or $\\displaystyle \\frac{d(y_1, y_2, \\ldots, y_n)}{d(x_1, x_2, \\ldots, x_m)}$.", "date_published": "2016-07-10T10:29:15Z", "authors": ["Dylan Hendrickson", "Joe Zeng"], "summaries": [], "tags": [], "alias": "56b"} {"id": "24727f2a8174592ea1ef59c94429d3bb", "title": "Proof of Bayes' rule: Probability form", "url": "https://arbital.com/p/bayes_rule_probability_proof", "source": "arbital", "source_type": "text", "text": "Let $\\mathbf H$ be a [variable](https://arbital.com/p/random_variable) in $\\mathbb P$ for the true hypothesis, and let $H_k$ be the possible values of $\\mathbf H,$ such that $H_k$ is [https://arbital.com/p/-1rd](https://arbital.com/p/-1rd). Then, Bayes' theorem states:\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e\\mid H_i) \\cdot \\mathbb P(H_i)}{\\sum_k \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)},$$\n\nwith a proof that runs as follows. By the definition of [https://arbital.com/p/-1rj](https://arbital.com/p/-1rj),\n\n$$\\mathbb P(H_i\\mid e) = \\dfrac{\\mathbb P(e \\wedge H_i)}{\\mathbb P(e)} = \\dfrac{\\mathbb P(e \\mid H_i) \\cdot \\mathbb P(H_i)}{\\mathbb P(e)}$$\n\nBy the law of [marginal probability](https://arbital.com/p/law_of_marginal_probability):\n\n$$\\mathbb P(e) = \\sum_{k} \\mathbb P(e \\wedge H_k)$$\n\nBy the definition of conditional probability again:\n\n$$\\mathbb P(e \\wedge H_k) = \\mathbb P(e\\mid H_k) \\cdot \\mathbb P(H_k)$$\n\nDone.\n\nNote that this proof of Bayes' rule is less general than the [proof](https://arbital.com/p/1xr) of the [odds form of Bayes' rule](https://arbital.com/p/1x5).\n\n## Example\n\nUsing the [Diseasitis](https://arbital.com/p/22s) example problem, this proof runs as follows:\n\n$$\\begin{array}{c}\n\\mathbb P({sick}\\mid {positive}) = \\dfrac{\\mathbb P({positive} \\wedge {sick})}{\\mathbb P({positive})} \\\\[https://arbital.com/p/0.3em](https://arbital.com/p/0.3em)\n= \\dfrac{\\mathbb P({positive} \\wedge {sick})}{\\mathbb P({positive} \\wedge {sick}) + \\mathbb P({positive} \\wedge \\neg {sick})} \\\\[https://arbital.com/p/0.3em](https://arbital.com/p/0.3em)\n= \\dfrac{\\mathbb P({positive}\\mid {sick}) \\cdot \\mathbb P({sick})}{(\\mathbb P({positive}\\mid {sick}) \\cdot \\mathbb P({sick})) + (\\mathbb P({positive}\\mid \\neg {sick}) \\cdot \\mathbb P(\\neg {sick}))}\n\\end{array}\n$$\n\nNumerically:\n\n$$3/7 = \\dfrac{0.18}{0.42} = \\dfrac{0.18}{0.18 + 0.24} = \\dfrac{90\\% * 20\\%}{(90\\% * 20\\%) + (30\\% * 80\\%)}$$\n\nUsing red for sick, blue for healthy, and + signs for positive test results, the proof above can be visually depicted as follows:\n\n![bayes theorem probability](https://i.imgur.com/H9im04o.png?0)\n\n%todo: if we replace the other Venn diagram for the proof of Bayes' rule, we should probably update this one too.%", "date_published": "2016-10-08T18:26:54Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Needs clickbait", "B-Class"], "alias": "56j"} {"id": "aec479cae5281e97965f1603e21b104e", "title": "Math 2 example statements", "url": "https://arbital.com/p/56p", "source": "arbital", "source_type": "text", "text": "If you're at a Math 2 level, you'll probably be familiar with most or all of these sentences and formulas, or you would be able to understand what they meant on a surface level if you were to look them up.\n\n> The [https://arbital.com/p/-quadratic_formula](https://arbital.com/p/-quadratic_formula) states that the roots of every quadratic expression 
$ax^2 + bx + c$ are equal to $\\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$. The expression under the [https://arbital.com/p/-square_root](https://arbital.com/p/-square_root), $b^2 - 4ac$, is called the [https://arbital.com/p/-discriminant](https://arbital.com/p/-discriminant), and determines how many roots there are in the equation.\n\n> The imaginary number $i$ is defined as the primary root of the quadratic equation $x^2 + 1 = 0$.\n\n> To solve the system of linear equations $\\begin{matrix}ax + by = c \\\\ dx + ey = f\\end{matrix}$ for $x$ and $y$, the value of $x$ can be computed as $\\displaystyle \\frac{bf - ce}{bd - ae}$.\n\n> The [https://arbital.com/p/-power_rule](https://arbital.com/p/-power_rule) in calculus states that $\\frac{d}{dx} x^n = nx^{n-1}$.\n\n> All the solutions to the equation $m^n = n^m$ where $m < n$ are of the form $m = (1 + \\frac 1x)^x$ and $n = (1 + \\frac 1x)^{x+1}$, where $x$ is any positive real number.", "date_published": "2016-07-07T15:19:21Z", "authors": ["Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": [], "alias": "56p"} {"id": "fe09864c5807f1a97f0e7b7af6ebb750", "title": "Binary notation", "url": "https://arbital.com/p/binary_notation", "source": "arbital", "source_type": "text", "text": "When you were first taught how to write [numbers](https://arbital.com/p/54y) and [add](https://arbital.com/p/addition) them together, you were probably told something about the \"ones place\", the \"tens place\", the \"hundreds place\", and so on. Each [https://arbital.com/p/-digit](https://arbital.com/p/-digit) further to the left represents a larger [multiple](https://arbital.com/p/math_power) of ten, and the multiples are added together to get the number you want--so $8207$ is counted as $(7 \\times 10^0) + (0 \\times 10^1) + (2 \\times 10^2) + (8 \\times 10^3)$, or \"seven ones, zero tens, two hundreds, and eight thousands\". But why use powers of ten? What's so special about that number?\n\nWell, nothing, actually. We probably only use powers of ten because we happen to have evolved with ten fingers--any number bigger than one will work just as well, although it may look strange at first. Since two is the smallest [https://arbital.com/p/-48l](https://arbital.com/p/-48l) larger than one, a *binary* number notation--a notation that uses powers of two--is one of the simplest possible. %note:The word \"binary\" has the same root as the word \"bicycle\"--*bi* meaning \"two\".%\n\nBinary notation uses only two digits, $0$ and $1$, and each \"place\" to the left goes up by a power of two instead of ten (in other words, it doubles). For example, in binary notation, the number $11010$ is counted as $(0 \\times 2^0) + (1 \\times 2^1) + (0 \\times 2^2) + (1 \\times 2^3) + (1 \\times 2^4)$, or \"zero ones, one two, zero fours, one eight, and one sixteen\". Translating back to the familiar [base](https://arbital.com/p/number_bases) ten, we would write it as $26$.\n\nYou may notice that binary notation tends to be a bit longer than [decimal](https://arbital.com/p/4sl) (for example, \"11010\" takes more characters to write than \"26\"). It's also more difficult to read, unless you have a lot of practice with it. So why would anyone use it? Well, for one thing, binary notation is often very convenient for talking about powers of two, for instance when working with base-two [logarithms](https://arbital.com/p/3nd) or log [https://arbital.com/p/-1rb](https://arbital.com/p/-1rb), or when working with some quantity measured in [bits](https://arbital.com/p/3y2). 
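As a quick check of the place-value arithmetic described above, here is a short sketch in Python (not part of the original article):

```python
# Check the worked example from the text: binary 11010 should equal 26.
bits = "11010"

# Add up digit * 2**place, with the rightmost digit in the 2**0 place.
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)            # 26

# Python's built-in base conversions agree.
print(int("11010", 2))  # 26
print(bin(26))          # 0b11010
```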
It's also essential for working with computers, as all modern computers store and manipulate data, on the lowest level, exclusively using binary notation.", "date_published": "2016-07-30T00:40:26Z", "authors": ["Malcolm McCrimmon", "Eric Bruylant", "Eric Rogstad"], "summaries": ["Binary notation is a way to write [numbers](https://arbital.com/p/54y) using [multiples](https://arbital.com/p/math_power) of two instead of the usual multiples of ten. It's especially useful for working with computers, which use binary notation exclusively."], "tags": ["Start", "Needs parent"], "alias": "56q"} {"id": "2576a40b2aa2b0e93b8a81aed72a33a0", "title": "Likelihood function", "url": "https://arbital.com/p/likelihood_function", "source": "arbital", "source_type": "text", "text": "Let's say you have a piece of evidence $e$ and a set of hypotheses $\\mathcal H.$ Each $H_i \\in \\mathcal H$ assigns some [likelihood](https://arbital.com/p/56v) to $e.$ The function $\\mathcal L_{e}(H_i)$ that reports this likelihood for each $H_i \\in \\mathcal H$ is known as a \"likelihood function.\"\n\nFor example, let's say that the evidence is $e_c$ = \"Mr. Boddy was killed with a candlestick,\" and the hypotheses are $H_S$ = \"Miss Scarlett did it,\" $H_M$ = \"Colonel Mustard did it,\" and $H_P$ = \"Mrs. Peacock did it.\" Furthermore, if Miss Scarlett was the murderer, she's 20% likely to have used a candlestick. By contrast, if Colonel Mustard did it, he's 10% likely to have used a candlestick, and if Mrs. Peacock did it, she's only 1% likely to have used a candlestick. In this case, the likelihood function is\n\n$$\\mathcal L_{e_c}(h) = \n\\begin{cases}\n0.2 & \\text{if $h = H_S$} \\\\\n0.1 & \\text{if $h = H_M$} \\\\\n0.01 & \\text{if $h = H_P$} \\\\\n\\end{cases}\n$$\n\nFor an example using a continuous function, consider a possibly-biased coin whose bias $b$ to come up heads on any particular coinflip might be anywhere between $0$ and $1$. Suppose we observe the coin to come up heads, tails, and tails. We will denote this evidence $e_{HTT}.$ The likelihood function over each hypothesis $H_b$ = \"the coin is biased to come up heads $b$ portion of the time\" for $b \\in [0,1]$ is:\n\n$$\\mathcal L_{e_{HTT}}(H_b) = b\\cdot (1-b)\\cdot (1-b).$$\n\nThere's no reason to [normalize](https://arbital.com/p/1rk) likelihood functions so that they sum to 1 — they aren't probability distributions, they're functions expressing each hypothesis' propensity to yield the observed evidence. For example, if the evidence was really obvious ($e_s$ = \"the sun rose this morning,\") it might be the case that almost all hypotheses have a very high likelihood, in which case the sum of the likelihood function will be much more than 1.\n\nLikelihood functions carry _absolute_ likelihood information, and therefore, they contain information that [relative likelihoods](https://arbital.com/p/1rq) do not.
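Here is a small numerical sketch of the two likelihood functions above (Python, not part of the original article); note that neither is normalized:

```python
# Discrete example: likelihood of "killed with a candlestick" under each suspect.
L_candlestick = {"Scarlett": 0.20, "Mustard": 0.10, "Peacock": 0.01}
print(sum(L_candlestick.values()))  # 0.31 -- a likelihood function need not sum to 1

# Continuous example: likelihood of seeing heads, tails, tails as a function
# of the coin's bias b.
def L_HTT(b):
    return b * (1 - b) * (1 - b)

for b in (0.1, 0.25, 0.5, 0.9):
    print(b, L_HTT(b))  # largest of these at b = 0.25, tiny at b = 0.9
```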
Namely, absolute likelihoods can be used to check a hypothesis for [strict confusion](https://arbital.com/p/227).", "date_published": "2016-08-02T15:12:50Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["C-Class"], "alias": "56s"} {"id": "83bee2d3961a59546d003e2693a2b940", "title": "Likelihood ratio", "url": "https://arbital.com/p/likelihood_ratio", "source": "arbital", "source_type": "text", "text": "Given a piece of evidence $e_0$ and two hypotheses $H_i$ and $H_j,$ the likelihood ratio between them is the ratio of the [https://arbital.com/p/-56v](https://arbital.com/p/-56v) each hypothesis assigns to $e_0.$\n\nFor example, imagine the evidence is $e$ = \"Mr. Boddy was knifed\", and the hypotheses are $H_P$ = \"Professor Plum did it\" and $H_W$ = \"Mrs. White did it.\" Let's say that, if Professor Plum were the killer, we're 25% sure he would have used a knife. Let's also say that, if Mrs. White were the killer, there's only a 5% chance she would have used a knife. Then the likelihood ratio of $e$ between $H_P$ and $H_W$ is 25/5 = 5, which says that $H_P$ assigns five times as much likelihood to $e$ as does $H_W,$ which means that the evidence supports the \"Plum did it\" hypothesis five times as much as it supports the \"Mrs. White did it\" hypothesis.\n\nA likelihood ratio of 5 denotes [relative likelihoods](https://arbital.com/p/1rq) of $(5 : 1).$ Relative likelihoods can be multiplied by [odds](https://arbital.com/p/1rb) in order to update those odds, as per [https://arbital.com/p/1lz](https://arbital.com/p/1lz).", "date_published": "2016-07-07T12:39:05Z", "authors": ["Nate Soares", "Alexei Andreev"], "summaries": ["Given a piece of evidence $e$ and two hypotheses $H_i$ and $H_j,$ the likelihood ratio between them is the ratio of the [https://arbital.com/p/-56v](https://arbital.com/p/-56v) each hypothesis assigns to $e.$ For example, let $e$ = \"Mr. Boddy was knifed\", and say that Professor Plum is 25% likely to use a knife while Mrs. White is only 5% likely to use a knife. Then the likelihood ratio of $e$ between the hypotheses \"Plum did it\" and \"Mrs. White did it\" is 25/5 = 5/1. See also [https://arbital.com/p/1rq](https://arbital.com/p/1rq)."], "tags": ["Start"], "alias": "56t"} {"id": "6261e8f5161ab69d9d0a1b1acc9e78f4", "title": "Likelihood", "url": "https://arbital.com/p/bayesian_likelihood", "source": "arbital", "source_type": "text", "text": "Consider a piece of evidence $e,$ such as \"Mr. Boddy was shot.\" We might have a number of different hypotheses that explain this evidence, including $H_S$ = \"Miss Scarlett killed him\", $H_M$ = \"Colonel Mustard killed him\", and so on.\n\nEach of those hypotheses assigns a different probability to the evidence. For example, imagine that _if_ Miss Scarlett _were_ the killer, there's a 20% chance she would use a gun, and an 80% chance she'd use some other weapon.
In this case, the \"Miss Scarlett\" hypothesis assigns a *likelihood* of 20% to $e.$\n\nWhen reasoning about different hypotheses using a [probability distribution](https://arbital.com/p/-probability_distribution) $\\mathbb P$, the likelihood of evidence $e$ given hypothesis $H_i$ is often written using the [conditional probability](https://arbital.com/p/1rj) $\\mathbb P(e \\mid H_i).$ When reporting likelihoods of many different hypotheses at once, it is common to use a [https://arbital.com/p/-likelihood_function,](https://arbital.com/p/-likelihood_function,) sometimes written [$\\mathcal L_e](https://arbital.com/p/51n).\n\n[Relative likelihoods](https://arbital.com/p/1rq) measure the degree of support that a piece of evidence $e$ provides for different hypotheses. For example, let's say that if Colonel Mustard were the killer, there's a 40% chance he would use a gun. Then the absolute likelihoods of $H_S$ and $H_M$ are 20% and 40%, for _relative_ likelihoods of (1 : 2). This says that the evidence $e$ supports $H_M$ twice as much as it supports $H_S,$ and that the amount of support would have been the same if the absolute likelihoods were 2% and 4% instead.\n\nAccording to [Bayes' rule](https://arbital.com/p/1lz), relative likelihoods are the appropriate tool for measuring the [strength of a given piece evidence](https://arbital.com/p/22x). Relative likelihoods are one of two key constituents of belief in [Bayesian reasoning](https://arbital.com/p/bayesian_reasoning), the other being [prior probabilities](https://arbital.com/p/1rm).\n\nWhile absolute likelihoods aren't necessary when updating beliefs by Bayes' rule, they are useful when checking for [confusion](https://arbital.com/p/227). For example, say you have a coin and only two hypotheses about how it works: $H_{0.3}$ = \"the coin is random and comes up heads 30% of the time\", and $H_{0.9}$ = \"the coin is random and comes up heads 90% of the time.\" Now let's say you toss the coin 100 times, and observe the data HTHTHTHTHTHTHTHT... (alternating heads and tails). The _relative_ likelihoods strongly favor $H_{0.3},$ because it was less wrong. However, the _absolute_ likelihood of $H_{0.3}$ will be much lower than expected, and this deficit is a hint that $H_{0.3}$ isn't right. (For more on this idea, see [https://arbital.com/p/227](https://arbital.com/p/227).)", "date_published": "2016-10-07T23:58:35Z", "authors": ["Eric Bruylant", "Nate Soares", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": ["\"Likelihood\", when speaking of Bayesian reasoning, denotes *the probability of an observation, supposing some hypothesis to be correct.*\n\nSuppose our piece of evidence $e$ is that \"Mr. Boddy was shot.\" One of our suspects is Miss Scarlett, and we denote by $H_S$ the hypothesis that Miss Scarlett shot Mr. Boddy. Suppose that if Miss Scarlett *were* the killer, we'd have predicted in advance a 20% probability she would use a gun, and an 80% chance she'd use some other weapon.\n\nThen the *likelihood* from the evidence, to Miss Scarlett being the killer, is 0.20. 
Using [conditional probability notation](https://arbital.com/p/1rj), $\\mathbb P(e \\mid H_S) = 0.20.$\n\nThis doesn't mean Miss Scarlett has a 20% chance of being the killer; it means that if she is the killer, our observation had a probability of 20%.\n\nRelative likelihoods are a key ingredient for [Bayesian reasoning](https://arbital.com/p/1ly) and one of the quantities plugged into [Bayes's Rule](https://arbital.com/p/1lz)."], "tags": ["Start", "Needs summary", "Needs clickbait"], "alias": "56v"} {"id": "623a4cb5bb50c956fa57e596ae3db5d1", "title": "Subtraction of rational numbers (Math 0)", "url": "https://arbital.com/p/subtraction_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "So far, we have met [the idea of a rational number](https://arbital.com/p/4zx), treating them as chunks of apples, and [how to add them together](https://arbital.com/p/55m).\nNow we will discover how the idea of the anti-apple (by analogy with the [integers' anti-cow](https://arbital.com/p/53r)) must work.\n\n# The anti-apple\n\nJust as we had an anti-cow, so we can have an anti-apple.\nIf we combine an apple with an anti-apple, they both annihilate, leaving nothing behind.\nWe write this as $1 + (-1) = 0$.\n\nA very useful thing for you to ponder for thirty seconds (though I will give you the answer soon): given that $\\frac{a}{n}$ means \"divide an apple into $n$ equal pieces, then take $a$ copies of the resulting little-piece\", what would $\\frac{-1}{n}$ mean? And what would $-\\frac{1}{n}$ mean?\n\n%%hidden(Show possible solution):\n$\\frac{-1}n$ would mean \"divide an apple into $n$ equal pieces, then take $-1$ copies of the resulting little-piece\".\nThat is, turn it into an anti-little-piece.\nThis anti-little-piece will annihilate one little-piece of the same size: $\\frac{-1}{n} + \\frac{1}{n} = 0$.\n\n$-\\frac{1}{n}$, on the other hand, would mean \"divide an anti-apple into $n$ equal pieces, then take $1$ copy of the resulting little-anti-piece\".\nBut this is the same as $\\frac{-1}{n}$: it doesn't matter whether we do \"convert to anti, then divide up the apple\" or \"divide up the apple, then convert to anti\".\nThat is, \"little-anti-piece\" is the same as \"anti-little-piece\", which is very convenient.\n%%\n\nWhat about chunks of apple?\nIf we combine half an apple with half an anti-apple, they should also annihilate, leaving nothing behind.\nWe write this as $\\frac{1}{2} + \\left(-\\frac{1}{2}\\right) = 0$.\n\nHow about a bit more abstract?\nIf we combine an apple with half an anti-apple, what should happen?\nWell, the apple can be made out of two half-chunks (that is, $1 = \\frac{1}{2} + \\frac{1}{2}$); and we've just seen that half an apple will annihilate half an anti-apple; so we'll be left with just one of the two halves of the apple.\nMore formally, $1 + \\left(-\\frac{1}{2}\\right) = \\frac{1}{2}$; or, writing out the calculation in full, $$1 + \\left(-\\frac{1}{2}\\right) = \\frac{1}{2} + \\frac{1}{2} + \\left(-\\frac{1}{2}\\right) = \\frac{1}{2}$$\n\nLet's go the other way round: if we combine an anti-apple with half an apple, what happens?\nIt's pretty much the same as the opposite case except flipped around: the anti-apple is made of two anti-half-chunks, and the half apple will annihilate one of those chunks, leaving us with half an anti-apple: that is, $(-1) + \\frac{1}{2} = -\\frac{1}{2}$.\n\nWe call all of these things **subtraction**: \"subtracting\" a quantity is defined to be the same as the addition of an anti-quantity.\n\n# General 
procedure\n\nSince we already know how to add, we might hope that subtraction will be easier (since subtraction is just a slightly different kind of adding).\n\nIn general, we write $-\\frac{1}{n}$ for \"the $\\frac{1}{n}$-sized building block, but made by dividing an *anti*-apple (instead of an apple) into $n$ equal pieces\".\nRemember from the pondering above that this is actually the same as $\\frac{-1}{n}$, where we have divided an apple into $n$ equal pieces but then taken $-1$ of the pieces.\n\nThen in general, we can just use the instant addition rule that we've already seen. %%note:Recall: this was $\\frac{a}{m} + \\frac{b}{n} = \\frac{a\\times n + b \\times m}{m \\times n}$. Remember that the order of operations in the integers is such that in the numerator, we calculate both the products first; then we add them together.%%\nIn fact, $$\\frac{a}{m} - \\frac{b}{n} = \\frac{a}{m} + \\left(\\frac{-b}{n}\\right) = \\frac{a \\times n + (-b) \\times m}{m \\times n} = \\frac{a \\times n - b \\times m}{m \\times n}$$\n\nWhy are we justified in just plugging these numbers into the formula, without justification?\nYou're quite right if you are dubious: you should not be content merely to *learn* this formula%%note:This goes for all of maths! It's not simply a collection of arbitrary rules, but a proper process that we use to model our thoughts. Behind every pithy, unmemorable formula is a great edifice of motivation and reason, if you can only find it.%%.\nIn the rest of this page, we'll go through why it works, and how you might construct it yourself if you forgot it.\nI took the choice here to present the formula first, because it's a good advertisement for why we use the $\\frac{a}{n}$ notation rather than talking about \"$\\frac{1}{n}$-chunks\" explicitly: it's a very compact and neat way of expressing all this talk of anti-apples, in the light of what we've already seen about addition.\n\nVery well: what should $\\frac{a}{m} - \\frac{b}{n}$ be?\nWe should first find a smaller chunk out of which we can build both the $\\frac{1}{m}$ and $\\frac{1}{n}$ chunks.\nWe've seen already that $\\frac{1}{m \\times n}$ will work as a smaller chunk-size.\n\nNow, what is $\\frac{a}{m}$ expressed in $\\frac{1}{m \\times n}$-chunks?\nEach $\\frac{1}{m}$-chunk is $n$ of the $\\frac{1}{m \\times n}$-chunks, so $a$ of the $\\frac{1}{m}$-chunks is $a$ lots of \"$n$ lots of $\\frac{1}{m \\times n}$-chunks\": that is, $a \\times n$ of them.\n\nSimilarly, $\\frac{b}{n}$ is just $b \\times m$ lots of $\\frac{1}{m \\times n}$-chunks.\n\nSo, expressed in $\\frac{1}{m \\times n}$-chunks, we have $a \\times n$ lots of positive chunks, and $b \\times m$ lots of anti-chunks.\nTherefore, when we put them together, we'll get $a \\times n - b \\times m$ chunks (which might be negative or positive or even zero - after all the annihilation has taken place we might end up with either normal or anti-chunks or maybe no chunks at all - but it's still an integer, being a number of chunks).\n\nSo the total amount of apple we have is $\\frac{a \\times n - b \\times m}{m \\times n}$, just like we got out of the instant formula.", "date_published": "2016-08-01T11:51:15Z", "authors": ["Patrick Stevens", "Joe Zeng"], "summaries": ["Subtraction is \"adding anti-apples\", or equivalently removing some apples you already have."], "tags": ["Math 0"], "alias": "56x"} {"id": "aac3f73fb7dc8bb072ad3f8fee5e180e", "title": "Transitive relation", "url": "https://arbital.com/p/transitive_relation", "source": "arbital", "source_type": "text", 
"text": "A binary [https://arbital.com/p/-3nt](https://arbital.com/p/-3nt) $R$ is **transitive** if whenever $aRb$ and $bRc$, $aRc$. \n\nThe most common examples or transitive relations are [partial orders](https://arbital.com/p/3rb) (if $a \\leq b$ and $b \\leq c$, then $a \\leq c$) and [equivalence relations](https://arbital.com/p/53y) (if $a \\sim b$ and $b \\sim c$, then $a \\sim c$).\n\nA transitive relation that is also [reflexive](https://arbital.com/p/5dy) is called a [https://arbital.com/p/-preorder](https://arbital.com/p/-preorder).\n\nA [https://arbital.com/p/-transitive_set](https://arbital.com/p/-transitive_set) $S$ is a set on which the element-of relation $\\in$ is transitive; whenever $a \\in x$ and $x \\in S$, $a \\in S$.", "date_published": "2016-12-19T02:28:20Z", "authors": ["Dylan Hendrickson", "Martin Epstein", "Joe Zeng"], "summaries": [], "tags": ["Formal definition", "Stub"], "alias": "573"} {"id": "98eb53e15d738d6c77eb5000522079c2", "title": "Lattice: Examples", "url": "https://arbital.com/p/poset_lattice_examples", "source": "arbital", "source_type": "text", "text": "Here are some additional examples of lattices. $\\newcommand{\\nsubg}{\\mathcal N \\mbox{-} Sub~G}$\n\nA familiar example\n---------------------------------\n\nConsider the following lattice.\n\n![Suspicious Lattice Hasse Diagram](http://i.imgur.com/7HefDKm.png)\n\nDoes this lattice look at all familiar to you? From some other area of mathematics, perhaps?\n\n%%hidden(Reveal the truth):\n\nIn fact, this lattice corresponds to boolean logic, as can be seen when we replace b with true and a with false in the following \"truth table\".\n\n![lattice truth table](http://i.imgur.com/hpThsTk.png)\n\n%%comment:\n\nLatex source:\n\n\\begin{tabular} {| c | c | c | c |}\n \\hline\n $x$ & $y$ & $x \\vee y$ & $x \\wedge y$ \\\\ \\hline\n $a$ & $a$ & $a$ & $a$ \\\\ \\hline\n $a$ & $b$ & $b$ & $a$ \\\\ \\hline\n $b$ & $a$ & $b$ & $a$ \\\\ \\hline\n $b$ & $b$ & $b$ & $b$ \\\\ \\hline\n\\end{tabular}\n\n%%\n\n%%\n\n\nNormal subgroups\n---------------------------------\n\nLet $G$ be a group, and let $\\nsubg$ be the set of all [normal subgroups](https://arbital.com/p/4h6) of $G$. Then $\\langle \\nsubg, \\subseteq \\rangle$ is a lattice where for $H, K \\in \\nsubg$, $H \\wedge K = H \\cap K$, and $H \\vee K = HK = \\{ hk \\mid h \\in H, k \\in K \\}$.\n\n%%hidden(Proof):\n\nLet $H,K \\in \\nsubg$. Then $H \\wedge K = H \\cap K$. We first note that $H \\cap K$ is a [subgroup](https://arbital.com/p/576) of $G$. For let $a,b \\in H \\cap K$. Since $H$ is a group, $a \\in H$, and $b \\in H$, we have $ab \\in H$. Likewise, $ab \\in K$. Combining these, we have $ab \\in H \\cap K$, and so $H \\cap K$ is satisfies the closure requirement for subgroups. Since $H$ and $K$ are groups, $a \\in H$, and $a \\in K$, we have $a^{-1} \\in H$ and $a^{-1} \\in K$. Hence, $a^{-1} \\in H \\cap K$, and so $H \\cap K$ satisfies the inverses requirement for subgroups. Since $H$ and $K$ are subgroups of $G$, we have $e \\in H$ and $e \\in K$. Hence, we have $e \\in H \\cap K$, and so $H \\cap K$ satisfies the identity requirement for subgroups. \n\nFurthermore, $H \\cap K$ is a normal subgroup, because for all $a \\in G$, $a^{-1}(H \\cap K)a = a^{-1}Ha \\cap a^{-1}Ka = H \\cap K$. It's clear from the definition of intersection that $H$ and $K$ do not share a common subset larger than $H \\cap K$.\n \nFor $H, K \\in \\nsubg$, we have $H \\vee K = HK = \\{ hk \\mid h \\in H, k \\in K \\}$. \n\nFirst we will show that $HK$ is a group. 
For $hk, h'k' \\in HK$, since $kH = Hk$, there is some $h'' \\in H$ such that $kh' = h''k$. Hence, $hkh'k' = hh''kk' \\in HK$, and so $HK$ is closed under $G$'s group action. For $hk \\in HK$, we have $(hk)^{-1} = k^{-1}h^{-1} \\in k^{-1}H = Hk^{-1} \\subseteq HK$, and so $HK$ is closed under inversion. Since $e \\in H$ and $e \\in K$, we have $e = ee \\in HK$. Finally, $HK$ inherits its associativity from $G$.\n\nTo see that $HK$ is a normal subgroup of $G$, let $a \\in G$. Then $a^{-1}HKa = Ha^{-1}Ka = HKa^{-1}a = HK$.\n\nThere is no subgroup $F$ of $G$ smaller than $HK$ which contains both $H$ and $K$. If there were such a subgroup, there would exist some $h \\in H$ and some $k \\in K$ such that $hk \\not\\in F$. But $h \\in F$ and $k \\in F$, and so from $F$'s group closure we conclude $hk \\in F$, a contradiction.\n\n%%", "date_published": "2016-07-16T17:08:44Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": ["Example problem"], "alias": "574"} {"id": "53db56282ad57cba26a2bfba5120c00a", "title": "Subgroup", "url": "https://arbital.com/p/subgroup", "source": "arbital", "source_type": "text", "text": "A **subgroup** of a [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $(G,*)$ is a group of the form $(H,*)$, where $H \\subset G$. We usually say simply that $H$ is a subgroup of $G$. \n\nFor a subset of a group $G$ to be a subgroup, it needs to satisfy all of the group axioms itself: [closure](https://arbital.com/p/3gy), [associativity](https://arbital.com/p/3h4), [identity](https://arbital.com/p/54p), and [inverse](https://arbital.com/p/inverse_element). We get associativity for free because $G$ is a group. So the requirements of a subgroup $H$ are:\n\n1. **[Closure](https://arbital.com/p/3gy):** For any $x, y$ in $H$, $x*y$ is in $H$.\n2. **[Identity](https://arbital.com/p/54p):** The identity $e$ of $G$ is in $H$.\n3. **[Inverses](https://arbital.com/p/inverse_element):** For any $x$ in $H$, $x^{-1}$ is also in $H$.\n\nA subgroup is called [normal](https://arbital.com/p/4h6) if it is closed under [conjugation](https://arbital.com/p/4gk). \n\nThe subgroup [https://arbital.com/p/-3nt](https://arbital.com/p/-3nt) is [transitive](https://arbital.com/p/573): if $H$ is a subgroup of $G$, and $I$ is a subgroup of $H$, then $I$ is a subgroup of $G$.\n\n#Examples\n\nAny group is a subgroup of itself. The [https://arbital.com/p/-trivial_group](https://arbital.com/p/-trivial_group) is a subgroup of every group. \n\nFor any [https://arbital.com/p/-48l](https://arbital.com/p/-48l) $n$, the set of multiples of $n$ is a subgroup of the integers (under [https://arbital.com/p/-addition](https://arbital.com/p/-addition)).", "date_published": "2016-07-07T16:00:36Z", "authors": ["Dylan Hendrickson"], "summaries": [], "tags": ["Stub"], "alias": "576"} {"id": "b3f280c2f6dcd588b3f663b8605debdb", "title": "Boolean", "url": "https://arbital.com/p/boolean", "source": "arbital", "source_type": "text", "text": "In formal [https://arbital.com/p/-258](https://arbital.com/p/-258), a boolean variable is a [https://arbital.com/p/-variable](https://arbital.com/p/-variable) that can take on one of only two possible values: \"[https://arbital.com/p/-true](https://arbital.com/p/-true)\" or \"[https://arbital.com/p/-false](https://arbital.com/p/-false)\". 
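For instance, in a programming language such as Python the two values and a few of the usual operations on them look like this (an illustrative sketch, not part of the original article):

```python
# The two boolean values, combined with the usual logical operators.
x, y = True, False

print(x and y)       # conjunction  (AND): False
print(x or y)        # disjunction  (OR):  True
print(not x)         # negation     (NOT): False
print((not x) or y)  # implication x -> y, rewritten as (not x) or y: False
```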
[https://arbital.com/p/2v8](https://arbital.com/p/2v8) can then be said to [evaluate](https://arbital.com/p/-evaluation) to one of these two values, in the same way that ordinary [algebraic](https://arbital.com/p/algebra) expressions evaluate to a [https://arbital.com/p/-54y](https://arbital.com/p/-54y).\n\nAlso as in algebraic expressions, boolean values can be manipulated using certain operators such as [$\\land$ ](https://arbital.com/p/and_operator), [$\\lor$ ](https://arbital.com/p/or_operator), [$\\neg$ ](https://arbital.com/p/negation_operator), and [$\\rightarrow$ ](https://arbital.com/p/implication_operator). This field is called, surprisingly, [https://arbital.com/p/-boolean_algebra](https://arbital.com/p/-boolean_algebra).\n\nBecause booleans can only express [absolute truth or falsity](https://arbital.com/p/4mq), when working with measures of uncertainty you must use other representations, such as [https://arbital.com/p/-1rf](https://arbital.com/p/-1rf).", "date_published": "2016-07-29T22:11:31Z", "authors": ["Malcolm McCrimmon", "Eric Bruylant"], "summaries": ["A boolean is a mathematical quantity that can only take on one of two values: \"[https://arbital.com/p/-true](https://arbital.com/p/-true)\" or \"[https://arbital.com/p/-false](https://arbital.com/p/-false)\". Booleans are to [https://arbital.com/p/-258](https://arbital.com/p/-258) what [integers](https://arbital.com/p/48l) are to [https://arbital.com/p/-arithmetic](https://arbital.com/p/-arithmetic)."], "tags": ["Needs parent", "Stub"], "alias": "57f"} {"id": "c065f1f428d1236057270e587eb50759", "title": "Disambiguation", "url": "https://arbital.com/p/disambiguation_meta_tag", "source": "arbital", "source_type": "text", "text": "Several distinct concepts with comparable importance use this page's name; this page helps readers find what they're looking for.", "date_published": "2016-07-15T23:07:05Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Stub"], "alias": "57n"} {"id": "a7a8852d1ffcbec50de4765438ec8b76", "title": "Proof of Löb's theorem", "url": "https://arbital.com/p/proof_lobs_thorem", "source": "arbital", "source_type": "text", "text": "The intuition behind the proof of Löb's theorem invokes the formalization of the following argument:\n\nLet $S$ stand for the sentence \"If $S$ is provable, then I am Santa Claus\".\n\nAs is standard, when trying to prove an If...then theorem we can start by supposing the antecedent and checking that the consequent follows. But if $S$ is provable, then we can chain its proof to an application of modus ponens of $S$ to itself in order to get a proof of the consequent \"I am Santa Claus\". If we suppose that showing the existence of a proof of a sentence implies that the sentence is true, then \"I am Santa Claus\" is true.\n\nThus we have shown that if $S$ is provably true then \"I am Santa Claus\" follows, which is a proof of what $S$ states! Therefore, $S$ is provably true, and we can apply the same reasoning again (this time without supposing a counterfactual) to get that it is true that \"I am Santa Claus\".\n\nHoly Gödel! Have we broken logic? If this argument worked, then we could prove any nonsensical thing we wanted. So where does it fail?\n\nOne suspicious point is our implicit assumption that $S$ can be constructed in the first place as a first order sentence.
But we know that there exists a [provability predicate](https://arbital.com/p/5j7) encoding the meaning we need, and then we can apply the [diagonal lemma](https://arbital.com/p/59c) to the formula $Prv(x)\\implies$ \"I am Santa Claus\" to get our desired $S$.\n\n(We will ignore the fact that expressing \"I am Santa Claus\" in the language of first order logic is rather difficult, since we could easily replace it with something simple like $0=1$.)\n\nIt turns out that the culprit of this paradox is the apparently innocent supposition that when \"I am Santa Claus\" is provable, then it is true.\n\nHow can this be false? you may ask. Well, the standard provability predicate $Prv$ is something of the form $\\exists y Proof(x,y)$, where $Proof(x,y)$ is true iff $y$ encodes a valid proof of $x$. But $y$ might be a [nonstandard number](https://arbital.com/p/) encoding a [nonstandard proof](https://arbital.com/p/) of infinitely many steps. If the only proofs of $x$ are nonstandard then certainly you will never be able to prove $x$ from the axioms of the system using standard methods.\n\nWe can restate this result as Löb's theorem: If $Prv(A)\\implies A$ is provable, then $A$ is provable.\n\n-----\nNow we go for the formal proof.\n\nLet $T$ be an [axiomatizable](https://arbital.com/p/) extension of [minimal arithmetic](https://arbital.com/p/), and $P$ a one-place predicate satisfying the [Hilbert-Bernays derivability conditions](https://arbital.com/p/5j7), which are:\n\n1. If $T\\vdash A$, then $T\\vdash P(A)$.\n2. $T\\vdash P(A\\implies B) \\implies (P(A)\\implies P(B))$\n3. $T\\vdash P(A)\\implies P(P(A))$.\n\nFor example, $PA$ and the standard provability predicate [satisfy those conditions](https://arbital.com/p/).\n\nLet us suppose that $A$ is such that $T\\vdash P(A)\\implies A$.\n\nAs $T$ extends [minimal arithmetic](https://arbital.com/p/), the [https://arbital.com/p/-59c](https://arbital.com/p/-59c) is applicable, and thus there exists $S$ such that $T\\vdash S \\iff (P(S)\\implies A)$.\n\nBy condition 1, $T\\vdash P(S \\implies (P(S)\\implies A))$.\n\nBy condition 2, $T\\vdash P(S) \\implies P(P(S) \\implies A)$.\n\nAlso by condition 2, $T\\vdash P(P(S) \\implies A) \\implies (P(P(S)) \\implies P(A))$.\n\nCombining these, $T\\vdash P(S) \\implies (P(P(S)) \\implies P(A))$.\n\nBy condition 3, $T\\vdash P(S)\\implies P(P(S))$ and thus $T\\vdash P(S)\\implies P(A)$.\n\nBy our initial assumption, this means that $T\\vdash P(S)\\implies A$.\n\nBut by the construction of $S$, then $T\\vdash S$!\n\nBy condition 1 again, then $T\\vdash P(S)$, and since we already had shown that $T\\vdash P(S)\\implies A$, we have that $T\\vdash A$, which completes the proof.\n\nWe remark that $P$ can be any predicate satisfying the conditions, such as the verum predicate $x=x$.", "date_published": "2016-10-14T08:10:23Z", "authors": ["M Yass", "Dylan Hendrickson", "Malcolm McCrimmon", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["You can prove [https://arbital.com/p/55w](https://arbital.com/p/55w) by assuming $\\square A\\to A$ and constructing a sentence $S$ such that $PA\\vdash S\\leftrightarrow (\\square S \\to A)$"], "tags": [], "alias": "59b"} {"id": "685aaf66b5041fc43dca1459cee17739", "title": "Diagonal lemma", "url": "https://arbital.com/p/diagonal_lemma", "source": "arbital", "source_type": "text", "text": "The **diagonal lemma** shows how to construct self-referential sentences in the language of arithmetic from formulas $\\phi(x)$.\n\nIn its standard form it says that if $T$ is a theory
extending [minimal arithmetic](https://arbital.com/p/) (so that it can talk about [recursive](https://arbital.com/p/) expressions with full generality) and $\\phi(x)$ is a formula with one free variable $x$ then there exists a sentence $S$ such that $T\\vdash S\\leftrightarrow \\phi(\\ulcorner S\\urcorner)$.\n\nThis can be further [generalized](https://arbital.com/p/) for cases with multiples formulas and variables.\n\nThe diagonal lemma is important because it allows the formalization of all kind of self-referential and apparently paradoxical sentences. \n\nTake for example the liar's sentence affirming that \"This sentence is false\". Since [there is no truth predicate](https://arbital.com/p/), we will have to adapt it to our language to say \"This sentence is not provable\". Since there is a [predicate of arithmetic expressing provability](https://arbital.com/p/5gt) we can construct a formula $\\neg \\square_{PA} (x)$, which is true iff there is no proof in [$PA$](https://arbital.com/p/3ft) of the sentence [encoded](https://arbital.com/p/31z) by $x$.\n\nBy invoking the diagonal's lemma, we can see that there exists a sentence $G$ such that $PA\\vdash G\\leftrightarrow \\neg \\square_{PA} (\\ulcorner G\\urcorner)$, which reflects the spirit of our informal sentence. The [weak form of Gödel's first incompleteness theorem](https://arbital.com/p/) follows swiftly from there.\n\nThe equivalent in computation of the diagonal lemma is called [quining](https://arbital.com/p/322), and refers to computer programs which produce their own source code as part of their computation.\n\nIndeed, the key insight of the diagonal lemma and quining is that you can have subexpressions in your program that when \"expanded\" are identical to the original expression that contains them.", "date_published": "2016-07-22T06:11:27Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["If F(x) is a one place formula of First Order Logic and $T$ is a theory extending [minimal arithmetic](https://arbital.com/p/) then there exists a sentence $S$ such that $T\\vdash S\\iff F(\\ulcorner S \\urcorner)$."], "tags": ["Needs parent", "Stub"], "alias": "59c"} {"id": "62456220e7db44f85d5c2cc931d61923", "title": "Gödel's first incompleteness theorem", "url": "https://arbital.com/p/godels_first_incompleteness_theorem", "source": "arbital", "source_type": "text", "text": "The **first incompleteness theorem** is a result about the existence of sentences of arithmetic that cannot be proved or disproved, no matter what axioms we take as true.\n\nWhat Gödel originally proved was that every [$\\omega$-consistent](https://arbital.com/p/) and [axiomatizable](https://arbital.com/p/) extension of Peano Arithmetic is incomplete, but the result was later refined to weaken the requirement of $\\omega$-consistency to simple [https://arbital.com/p/-5km](https://arbital.com/p/-5km) and the set of theorems that the extension had to prove to that of [minimal arithmetic](https://arbital.com/p/).\n\nThe heart of both proofs is the [https://arbital.com/p/-59c](https://arbital.com/p/-59c), which allows us express self-referential sentences in the language of arithmetic.\n\nThis put an end to the dream of building a [https://arbital.com/p/-complete](https://arbital.com/p/-complete) [https://arbital.com/p/-5hh](https://arbital.com/p/-5hh) that axiomatized all of mathematics, since as soon as one was expressive enough to talk about arithmetic, incompleteness would kick in.\n\n##Interpretation from model theory\nThe first 
incompleteness theorem highlights the impossibility of defining the [natural numbers](https://arbital.com/p/45h) with the usual operations of [addition](https://arbital.com/p/) and [multiplication](https://arbital.com/p/) in [first order logic](https://arbital.com/p/).\n\nWe already knew from the Löwenheim-Skolem theorem that there would be models of $PA$ which are not isomorphic to the usual arithmetic, but the first incompleteness theorem implies that some of those models disagree in the truth value of some theorems of this language (those are the undecidable sentences).", "date_published": "2016-10-16T15:28:44Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["Every [consistent](https://arbital.com/p/) and [axiomatizable](https://arbital.com/p/) extension of [minimal arithmetic](https://arbital.com/p/) is incomplete"], "tags": ["Needs parent", "Stub"], "alias": "59g"} {"id": "df3a95b6afcde481a1751b250c4bf50f", "title": "Proof of Gödel's first incompleteness theorem", "url": "https://arbital.com/p/59h", "source": "arbital", "source_type": "text", "text": "##Weak form\nThe weak Gödel's first incompleteness theorem states that every [$\\omega$-consistent](https://arbital.com/p/) [axiomatizable](https://arbital.com/p/) extension of minimal arithmetic is incomplete.\n\nLet $T$ extend [https://arbital.com/p/-minimal_arithmetic](https://arbital.com/p/-minimal_arithmetic), and let $Prv_{T}$ be the [standard provability predicate](https://arbital.com/p/5gt) of $T$. \n\nThen we apply the [diagonal lemma](https://arbital.com/p/59c) to get $G$ such that $T\\vdash G\\iff \\neg Prv_{T}(G)$.\n\nWe assert that the sentence $G$ is undecidable in $T$. We prove it by contradiction:\n\nSuppose that $T\\vdash G$. Then $Prv_{T}(G)$ is true, and as it is an $\\exists$-rudimentary sentence it is [provable in minimal arithmetic](https://arbital.com/p/every_true_e_rudimentary_sentence_is_provable_in_minimal_arithmetic), and thus in $T$. So we have that $T\\vdash Prv_{T}(G)$ and also by the construction of $G$ that $T\\vdash \\neg Prv_{T}(G)$, contradicting that $T$ is consistent.\n\nNow, suppose that $T\\vdash \\neg G$. Then $T\\vdash Prv_{T}(G)$.
But then as $T$ is consistent there cannot be a standard proof of $G$, so if $Prv_{T}(x)$ is of the form $\\exists y Proof_{T}(x,y)$ then for no natural number $n$ it can be that $T\\vdash Proof_ {T}(\\ulcorner G\\urcorner,n)$, so $T$ is $\\omega$-inconsistent, in contradiction with the hypothesis.\n\n##Strong form\n\n> Every [consistent](https://arbital.com/p/5km) and [https://arbital.com/p/-axiomatizable](https://arbital.com/p/-axiomatizable) extension of [https://arbital.com/p/-minimal_arithmetic](https://arbital.com/p/-minimal_arithmetic) is [incomplete](https://arbital.com/p/complete).\n\nThis theorem follows as a consequence of the [undecidability of arithmetic](https://arbital.com/p/) combined with the lemma stating that [any complete axiomatizable theory is undecidable](https://arbital.com/p/)", "date_published": "2016-10-11T18:24:50Z", "authors": ["Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "59h"} {"id": "7eedbd07fe356669954b72492e94f911", "title": "Multiplication of rational numbers (Math 0)", "url": "https://arbital.com/p/multiplication_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "We've seen how to [add](https://arbital.com/p/55m) and [subtract](https://arbital.com/p/56x) pairs of [rational numbers](https://arbital.com/p/4zq).\nBut the [natural numbers](https://arbital.com/p/45h) have another operation on them: multiplication.\n\nRemember, a given rational number represents what we get when we cut an apple into pieces all of the same size, then take some number %%note:Possibly *more* than we actually made, and possibly negative!%% of the little pieces.\nThe **product** of $\\frac{a}{m}$ and $\\frac{b}{n}$ %%note: Recall that $\\frac{a}{m}$ is \"$a$ copies of the little-piece we get when we cut an apple into $m$ equal pieces.%% is what we call \"$\\frac{a}{m}$ **multiplied by** $\\frac{b}{n}$\", and it answers the question \"What happens if we do the procedure that would make $\\frac{b}{n}$, but instead of starting by cutting one apple into $n$ pieces, we started by cutting $\\frac{a}{m}$ apples into $n$ pieces?\".\n\nWe write the product of $\\frac{a}{m}$ and $\\frac{b}{n}$ as $\\frac{a}{m} \\times \\frac{b}{n}$.\n\n# Example\n\nIt's hopefully easy to see that $1 \\times \\frac{b}{n} = \\frac{b}{n}$.\nIndeed, the definition is \"what do we get if we would make $\\frac{b}{n}$, but instead of starting by cutting one apple, we started by cutting $1$ apple?\"; but that's just the same!\nIt's like saying \"What if, instead of putting bread around my sandwich filling, I tried putting bread?\" - I haven't actually changed anything, and I'll still get the same old sandwich %%note:or $\\frac{b}{n}$%% out at the end.\n\nHow about $2 \\times \\frac{3}{5}$? (Strictly speaking, I should probably be writing $\\frac{2}{1}$ instead of $2$, but this way saves a bit of writing. 
$\\frac{2}{1}$ means \"two copies of the thing I get when I cut an apple into one piece\"; but an apple cut into one piece is just that apple, so $\\frac{2}{1}$ just means two apples.)\nWell, that says \"instead of cutting one apple, we cut two apples\" into $\\frac{3}{5}$-sized pieces.\n\nFrom now on, my pictures of apples will get even worse: rather than being circles, they'll now be squares.\nIt just makes the diagrams easier to understand.\n\n![Two times three-fifths](http://i.imgur.com/tW78Nys.png)\n\nIn the picture, we have two apples (squares) which I've drawn next to each other, separated by a dashed line.\nThen I've taken $\\frac{3}{5}$ of the whole shape (shaded in red): that is, to the group of two apples I have done the procedure that would create $\\frac{3}{5}$ if it were done to one apple alone.\n\nNotice, though, that this divides neatly into $\\frac{3}{5}$ of the left-hand apple, and $\\frac{3}{5}$ of the right-hand apple.\nSo the red-shaded area comes to $\\frac{3}{5} + \\frac{3}{5}$, which you already know how to calculate: it is $\\frac{6}{5}$.\n\n# General integer times fraction\n\nCan you work out, from the case of $2 \\times \\frac{3}{5}$ above, what $m \\times \\frac{a}{n}$ is, where $m$ is an integer?\n\n%hidden(Show solution):\nIt is $\\frac{a \\times m}{n}$.\n\nIndeed, the procedure to get $\\frac{a}{n}$ is: we split $1$ into $n$ equal pieces, and then take $a$ of them.\nSo the procedure to get $m \\times \\frac{a}{n}$ is: we split $m$ into $n$ equal pieces, and then take $a$ of them.\n\nBut each of the pieces we've just made by splitting $m$—that is, those demarcated by the longer solid lines in the $2 \\times \\frac{3}{5}$ diagram above—can be viewed as being $m$ copies of what we get by splitting $1$.\n(In the diagram above, we have $2$ copies of that which we get by splitting $1$: namely the two copies indicated by the dashed line.)\nSo we can view the second procedure as: we split $1$ into $n$ equal pieces %%note:In the diagram above, there are $5$ such equal pieces, and right now we're looking only at one square, not at both squares joined together.%%, and then take $a$ of them %%note:In the diagram above, $a$ is $3$: this has given us the red shaded bit of one of the squares.%%, and then do this $m$ times. 
%%note: In the diagram above, $m$ is $2$: we're finally looking at the two squares joined together into a rectangle.%%\n\nThis produces $a \\times m$ pieces, each of size $\\frac{1}{n}$, and hence the rational number $\\frac{a \\times m}{n}$.\n%\n\nYou should check that you get the right answer for a different example: $-5 \\times \\frac{2}{3}$.\n%%hidden(Show solution):\nThis is \"do the procedure that makes $\\frac{2}{3}$, but instead of starting with $1$, start with $-5$\".\n\nSo we take five anti-apples, and divide them into thirds (obtaining $15$ anti-chunks of size $\\frac{1}{3}$ each, grouped as five groups of three); and then we take two chunks out of each group of three, obtaining $10$ anti-chunks of apple in total.\n\nSo $-5 \\times \\frac{2}{3} = \\frac{-10}{3}$, in accordance with the rule of $n \\times \\frac{a}{n} = \\frac{a \\times m}{n}$.\n%%\n\n# General fraction times fraction\n\n\n\n\n# Order doesn't matter\n\nNotice that while it was fairly obvious that order doesn't matter during addition (that is, $\\frac{a}{m} + \\frac{b}{n} = \\frac{b}{n} + \\frac{a}{m}$), because it's simply \"putting two things next to each other and counting up what you've got\", it's not all that obvious that the product of two fractions should be independent of the order we multiplied in.\nHowever, you should check, from the general expression above, that it actually *is* independent of the order.\n\nWhy is this?\nWhy should it be that \"do the procedure that made $\\frac{b}{n}$, but starting from $\\frac{a}{m}$ instead of $1$\" and \"do the procedure that made $\\frac{a}{m}$, but starting from $\\frac{b}{n}$ instead of $1$\" give the same answer?\n\nWell, remember the diagram we had for $2 \\times \\frac{3}{5}$ (remembering that that is \"do the procedure that would make $\\frac{3}{5}$, but instead of doing it to $1$, we do it to $2$):\n\n![Two times three-fifths](http://i.imgur.com/tW78Nys.png)\n\nWhat would we get if we rotated this diagram by a quarter-turn?\n\n![Two times three-fifths, rotated](https://imgur.com/C61M0f2.png)\n\nBut wait! The shaded bit is just what we get when we do the procedure that makes $2$ (namely \"put two copies of the shape next to each other\"), but instead of doing it on the single (upper-most) square, we do it to the version of the number $\\frac{3}{5}$ that is represented by the shaded bit of the upper-most square! 
And that is exactly what we would do to get $\\frac{3}{5} \\times 2$.\n\nIn general, $\\frac{a}{m} \\times \\frac{b}{n}$ is the same as $\\frac{b}{n} \\times \\frac{a}{m}$, because the two just \"come from the same diagram, rotated by a quarter-turn\".\nThey are measuring the same amount of stuff, because the amount of stuff in a diagram doesn't change simply because we rotated it.\n\n## Another example\n\nWe'll do $\\frac{-5}{7} \\times \\frac{2}{3}$.\n\n\n\n# Meditation: why the notation makes sense\n\nAt this point, a digression is in order.\nWe have already seen the notation $\\frac{a}{n}$ for \"take an apple; divide it into $n$ pieces, each $\\frac{1}{n}$-sized; and then take $a$ of the chunks\".\nIn the language of multiplication that we've now seen, that is \"do what we would do to make $a$, but do it starting from a $\\frac{1}{n}$-chunk instead of $1$\".\nThat is, $\\frac{a}{n}$ is just $\\frac{1}{n} \\times a$.\n\nAnd we can do that in a different way: we can take $a$ apples, divide each into $n$ chunks, and then just draw one of the chunks from each apple.\nIn the language of multiplication, that is just \"do what we would do to make a $\\frac{1}{n}$-chunk, but do it to $a$ instead of $1$\".\nThat is, $\\frac{a}{n} = a \\times \\frac{1}{n}$.\n\nRecalling that $a$ is just $\\frac{a}{1}$, our notation $\\frac{a}{n}$ is simply the same as $\\frac{a}{1} \\times \\frac{1}{n}$, as an instance of the \"instant rule\" $\\frac{a}{1} \\times \\frac{1}{n} = \\frac{a \\times 1}{1 \\times n} = \\frac{a}{n}$.\n\n# Inverses: putting things in reverse\n\nRemember that we had \"anti-apples\" as a way of making nothing ($0$) by adding to some quantity of apples.\nIn a similar vein, we can \"invert\" multiplication.\n\nWhenever $a$ is not $0$, we can find a rational number $\\frac{c}{d}$ such that $\\frac{a}{b} \\times \\frac{c}{d} = 1$.\n(Notice that we've got $1$ as our \"base point\" now, rather than the $0$ that addition had.)\n\nIndeed, using the instant rule, we see that $\\frac{a}{b} \\times \\frac{c}{d} = \\frac{a \\times c}{b \\times d}$, so to make $1$ we want $a \\times c$ to be the same as $b \\times d$.\n\nBut we can do that: if we let $c = b$ and $d = a$, we get the right thing, namely $\\frac{a \\times b}{b \\times a} = \\frac{a \\times b}{a \\times b} = \\frac{1}{1} = 1$.\n\nSo $\\frac{b}{a}$ works as an inverse to $\\frac{a}{b}$.\nAnd this is why we needed $a$ not to be $0$: because $\\frac{b}{a}$ isn't actually a rational number unless $a$ is nonzero.\n\n## Intuition\n\nWe've seen how this definition follows from the instant rule.\nWhere does it *actually* come from, though?", "date_published": "2016-08-01T08:15:07Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["Multiplication is making a number by \"doing to some quantity what would usually be done to the quantity $1$\"."], "tags": ["Stub"], "alias": "59s"} {"id": "0d28e08d225bb76057dafd932d4eb6e4", "title": "Bijective Function: Intro (Math 0)", "url": "https://arbital.com/p/bijective_function_intro_math_0", "source": "arbital", "source_type": "text", "text": "##Comparing Amounts##\nConsider the Count von Count. He cares only about counting things. He doesn't care what they are, just how many there are. He decides that he wants to collect items into plastic crates, and he considers two crates equal if both contain the same number of items. \n\n![Equivalent Crates](http://i.imgur.com/sG7Qfyv.jpg)\n\nNow Elmo comes to visit, and he wants to impress the Count, but Elmo is not great at counting. 
Without counting them explicitly, how can Elmo tell if two crates contain the same number of items?\n\nWell, he can take one item out of each crate and put the pair to one side.\n\n![Pairing one pair of items](https://i.imgur.com/myTqIrRl.jpg)\n\n He continues pairing items up in this way and when one crate runs out he checks if there are any left over in the other crate. If there aren't any left over, then he knows there were the same number of items in both crates. \n\n![Pairing all items in one crate with items in the other](https://i.imgur.com/S87JO2nl.jpg)\n\nSince the Count von Count only cares about counting things, the two crates are basically equivalent, and might as well be the same crate to him. Whenever two objects are the same from a certain perspective, we say that they are **isomorphic**.\n\nIn this example, the way in which the crates were the same is that each item in one crate could be paired with an item in the other.\n\n![Pairing all items in one box with items in the other](https://i.imgur.com/S87JO2nl.jpg)\n\n![Bijection between crates](https://i.imgur.com/53YbraFl.jpg)\n\nThis wouldn't have been possible if the crates had different numbers of items in them.\n\n\n![Different numbers of items](https://i.imgur.com/F5rLwAsl.jpg)\n\n![No way to pair items](https://i.imgur.com/KCI9UOvl.jpg)\n\n\nWhenever you can match each item in one collection with exactly one item in another collection, we say that the collections are **[bijective](https://arbital.com/p/-499)** and the way you paired them is a **[bijection](https://arbital.com/p/-499)**. A bijection is a specific kind of isomorphism.\n\n![Bijection between crates](https://i.imgur.com/53YbraFl.jpg)\n\nNote that there might be many different bijections between two bijective things.\n\n![Another bijection between crates](http://i.imgur.com/Q6Si1ZX.png)\n\nIn fact, all that counting involves is pairing up the things you want to count, either with your fingers, or with the concepts of 'numbers' in your head. If there are as many objects in one crate as there are numbers from one to seven, and there are as many objects in another crate as numbers from one to seven, then both crates contain the same number of objects.", "date_published": "2016-07-12T17:57:03Z", "authors": ["Eric Bruylant", "Mark Chimes"], "summaries": [], "tags": ["Math 0", "Isomorphism: Intro (Math 0)", "Work in progress"], "alias": "5bp"} {"id": "ea616c925578a69f3a5a9a5db539761c", "title": "Convex set", "url": "https://arbital.com/p/convex_set", "source": "arbital", "source_type": "text", "text": "A convex set is a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) of [vectors](https://arbital.com/p/3w0) that contains all line segments between vectors in the set. Consider the following shape:\n\n![a convex set](https://upload.wikimedia.org/wikipedia/commons/thumb/6/6b/Convex_polygon_illustration1.svg/634px-Convex_polygon_illustration1.svg.png)\n\nAs shown, a line segment between two points $x$ and $y$ in this shape lies entirely within the shape. In fact, this is true for any pair of points in the shape. Therefore, this shape is convex. 
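One way to make the "every line segment stays inside" test concrete is to check sample points of the form $\theta x + (1 - \theta) y$, as in the formal definition given below. Here is a sketch for a disc (Python, not part of the original article):

```python
import itertools
import random

def in_disc(p, radius=1.0):
    """Membership test for a disc centred at the origin."""
    return p[0]**2 + p[1]**2 <= radius**2

def segment_stays_inside(x, y, contains, samples=50):
    """Check points theta*x + (1-theta)*y along the segment from y to x."""
    for i in range(samples + 1):
        theta = i / samples
        p = (theta * x[0] + (1 - theta) * y[0],
             theta * x[1] + (1 - theta) * y[1])
        if not contains(p):
            return False
    return True

# Sample pairs of points inside the disc and confirm every chord stays inside.
random.seed(0)
points = []
while len(points) < 20:
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    if in_disc(p):
        points.append(p)

print(all(segment_stays_inside(x, y, in_disc)
          for x, y in itertools.combinations(points, 2)))  # True
```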
For comparison, the following shape is not convex:\n\n![a non-convex set](https://upload.wikimedia.org/wikipedia/commons/thumb/6/6c/Convex_polygon_illustration2.svg/634px-Convex_polygon_illustration2.svg.png)\n\nThe fact that part of the line segment between $x$ and $y$ (both inside the shape) lies outside the shape proves that this shape is not convex.\n\n\nFormally, a set $S$ is convex if\n\n$$\\forall x, y \\in S, \\theta \\in [1](https://arbital.com/p/0,): \\theta x + (1 - \\theta) y \\in S$$\n\n(images are from Wikipedia: [here](https://en.wikipedia.org/wiki/File:Convex_polygon_illustration1.svg) and [here](https://en.wikipedia.org/wiki/File:Convex_polygon_illustration2.svg))", "date_published": "2016-07-13T02:40:03Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["Start", "Needs summary", "Needs splitting by mastery"], "alias": "5bs"} {"id": "809323f0baeef055c7d6afb90c46e2e6", "title": "Factorial", "url": "https://arbital.com/p/factorial", "source": "arbital", "source_type": "text", "text": "Factorial is most simply defined as a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) on positive [integers](https://arbital.com/p/48l). 5 factorial (written as $5!$) means $1*2*3*4*5$. In general then, for a positive integer $n$, $n!=\\prod_{i=1}^{n}i$. For applications to [combinatorics](https://arbital.com/p/), it will also be useful to define $0! = 1$.\n\n## Applications to Combinatorics ##\n\n$n!$ is the number of possible orders for a set of $n$ objects. For example, if we arrange the letters $A$, $B$, and $C$, here are all the options:\n$$ABC$$\n$$ACB$$\n$$BAC$$\n$$BCA$$\n$$CAB$$\n$$CBA$$\nYou can see that there are $6$ possible orders for $3$ objects, and $6 = 3*2*1 = 3!$. Why does this work? We can [prove this by induction](https://arbital.com/p/5fz). First, we'll see pretty easily that it works for $1$ object, and then we can show that if it works for $n$ objects, it will work for $n+1$. Here's the case for $1$ object.\n$$A$$\n$$1 = \\prod_{i=1}^{1}i = 1!$$\nNow we have the objects $\\{A_{1},A_{2},...,A_{n},A_{n+1}\\}$, and $n+1$ slots to put them in. If $A_{n+1}$ is in the first slot, now we're ordering $n$ remaining objects in $n$ remaining slots, and by our [induction hypothesis](https://arbital.com/p/5fz), there are $n!$ ways to do this. Now let's suppose $A_{n+1}$ is in the second slot. Any orderings that result from this will be completely unique from the orderings where $A_{n+1}$ was in the first slot. Again, there are $n$ remaining slots, and $n$ remaining objects to put in them, in an arbitrary order. There are another $n!$ possible orderings. We can put $A_{n+1}$ in each slot, one by one, and generate another $n!$ orderings, all of which are unique, and by the end, we will have every possible ordering. We know we haven't missed any because $A_{n+1}$ has to be somewhere. The total number of orderings we get is $n!*(n+1)$, which equals $(n+1)!$.\n\n\n## Extrapolating to [Real Numbers](https://arbital.com/p/50d) ##\n\nThe factorial function can be defined in a different way so that it is defined for all real numbers (and in fact for complex numbers too).\n\n%%hidden(Definition):\nWe define $x!$ as follows:\n$$x! = \\Gamma (x+1),$$\nwhere $\\Gamma $ is the [gamma function](https://arbital.com/p/):\n$$\\Gamma(x)=\\int_{0}^{\\infty}t^{x-1}e^{-t}\\mathrm{d} t$$\nWhy does this correspond to the factorial function as defined previously? 
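Before the proof, here is a quick numerical spot-check (a throwaway sketch, not part of the argument) that the two definitions agree for small positive integers, using Python's standard-library `math.gamma`; since `math.gamma` works in floating point, we compare with a small tolerance:

    import math

    # Check that Gamma(n + 1) agrees with n! for small positive integers n.
    for n in range(1, 15):
        assert math.isclose(math.gamma(n + 1), math.factorial(n), rel_tol=1e-9)
    print("Gamma(n + 1) matches n! for n = 1, ..., 14")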
We can prove by induction that for all positive integers $x$:\n$$\\prod_{i=1}^{x}i = \\int_{0}^{\\infty}t^{x}e^{-t}\\mathrm{d} t$$\nFirst, we verify for the case where $x=1$. Indeed:\n$$\\prod_{i=1}^{1}i = \\int_{0}^{\\infty}t^{1}e^{-t}\\mathrm{d} t$$\n$$1=1$$\nNow we suppose that the equality holds for a given $x$:\n$$\\prod_{i=1}^{x}i = \\int_{0}^{\\infty}t^{x}e^{-t}\\mathrm{d} t$$\nand try to prove that it holds for $x + 1$:\n$$\\prod_{i=1}^{x+1}i = \\int_{0}^{\\infty}t^{x+1}e^{-t}\\mathrm{d} t$$\nWe'll start with the induction hypothesis, and manipulate until we get the equality for $x+1$.\n$$\\prod_{i=1}^{x}i = \\int_{0}^{\\infty}t^{x}e^{-t}\\mathrm{d} t$$\n$$(x+1)\\prod_{i=1}^{x}i = (x+1)\\int_{0}^{\\infty}t^{x}e^{-t}\\mathrm{d} t$$\n$$\\prod_{i=1}^{x+1}i = (x+1)\\int_{0}^{\\infty}t^{x}e^{-t}\\mathrm{d} t$$\n$$= 0+\\int_{0}^{\\infty}(x+1)t^{x}e^{-t}\\mathrm{d} t$$\n$$= \\left[](https://arbital.com/p/)_{0}^{\\infty}+\\int_{0}^{\\infty}(x+1)t^{x}e^{-t}\\mathrm{d} t$$\n$$= \\left[](https://arbital.com/p/)_{0}^{\\infty}-\\int_{0}^{\\infty}(x+1)t^{x}(-e^{-t})\\mathrm{d} t$$\nBy the product rule of integration:\n$$=\\int_{0}^{\\infty}t^{x+1}e^{-t}\\mathrm{d} t$$\nThis completes the proof by induction, and that's why we can define factorials in terms of the gamma function.\n%%", "date_published": "2023-06-20T17:25:34Z", "authors": ["Michael Cohen", "Patrick Stevens", "Douglas Weathers", "Eric Bruylant", "Henri Lemoine"], "summaries": [], "tags": [], "alias": "5bv"} {"id": "7dfbf50913d38aceb6ea0572a56a1a1e", "title": "Convex function", "url": "https://arbital.com/p/convex_function", "source": "arbital", "source_type": "text", "text": "A convex [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) \"only curves upwards.\" To check whether a function is convex, use the following procedure:\n\n1. Draw the graph of the function.\n\n![graph of a function](http://i.imgur.com/bJAKquW.png)\n\n2. Select any two points in the graph.\n\n![graph with two points](http://i.imgur.com/j3MsPWR.png)\n\n3. Draw a line segment connecting the two points.\n\n![graph with two points connected](http://i.imgur.com/MoZTIK1.png)\n\n4. See whether this line segment is ever lower than the graph.\n\n![graph with part of line segment highlighted](http://i.imgur.com/4piUSZJ.png)\n\nIf the line segment is ever lower than the graph, the function is not convex. The function graphed above is not convex, as can be seen by looking at the red part of the line segment. On the other hand, if the line segment never goes below the graph (no matter which two initial points you selected), then the function is convex. 
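This graphical test can also be mimicked numerically. The sketch below is only illustrative — the functions and endpoints are arbitrary choices, and checking finitely many sampled chords can *refute* convexity but never prove it:

    def chord_dips_below_graph(f, x, y, samples=100):
        """Return True if the chord from (x, f(x)) to (y, f(y)) goes
        below the graph of f at any of the sampled points."""
        for i in range(samples + 1):
            t = i / samples
            point = t * x + (1 - t) * y
            chord_height = t * f(x) + (1 - t) * f(y)
            if chord_height < f(point):
                return True
        return False

    # x^2 is convex: no sampled chord dips below its graph.
    print(chord_dips_below_graph(lambda x: x ** 2, -1.0, 2.0))      # False

    # x^3 - x is not convex: part of this chord lies below the graph.
    print(chord_dips_below_graph(lambda x: x ** 3 - x, -1.5, 1.5))  # True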
\n\nEquivalently, a function is convex if its [https://arbital.com/p/-epigraph](https://arbital.com/p/-epigraph) is a [https://arbital.com/p/-5bs](https://arbital.com/p/-5bs).", "date_published": "2016-07-13T03:23:54Z", "authors": ["Eric Bruylant", "Jessica Taylor"], "summaries": [], "tags": ["Start", "Needs summary"], "alias": "5bw"} {"id": "9133d8191d4b85ff88c809b17ce7e24a", "title": "Needs accessible summary", "url": "https://arbital.com/p/needs_accessible_summary_meta_tag", "source": "arbital", "source_type": "text", "text": "This tag is for pages which have a summary of some kind but don't have a summary which is accessible to a relevant audience.\n\nSome pages may require a [technical](https://arbital.com/p/4pt) or [brief](https://arbital.com/p/4q2) summary.", "date_published": "2016-07-13T20:24:05Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Stub"], "alias": "5cb"} {"id": "10da01c7a5716460f120f2f5f2474304", "title": "Reflexive relation", "url": "https://arbital.com/p/reflexive_relation", "source": "arbital", "source_type": "text", "text": "A binary [relation](https://arbital.com/p/3nt) over some set is **reflexive** when every element of that set is related to itself. (In symbols, a relation $R$ over a set $X$ is reflexive if $\\forall a \\in X, aRa$.) For example, the relation $\\leq$ defined over the real numbers is reflexive, because every number is less than or equal to itself.\n\nA relation is **anti-reflexive** when *no* element of the set over which it is defined is related to itself. $<$ is an anti-reflexive relation over the real numbers. Note that a relation doesn't have to be either reflexive or anti-reflexive; if Alice likes herself but Bob doesn't like himself, then the relation \"_ likes _\" over the set $\\{Alice, Bob\\}$ is neither reflexive nor anti-reflexive.\n\nThe **reflexive closure** of a relation $R$ is the union of $R$ with the [identity relation](https://arbital.com/p/Identity_relation); it is the smallest relation that is reflexive and that contains $R$ as a subset. For example, $\\leq$ is the reflexive closure of $<$.\n\nSome other simple properties that can be possessed by binary relations are [symmetry](https://arbital.com/p/Symmetric_relation) and [transitivity](https://arbital.com/p/573).\n\nA reflexive relation that is also transitive is called a [preorder](https://arbital.com/p/Preorder).", "date_published": "2016-07-15T17:48:12Z", "authors": ["Eric Bruylant", "Ryan Hendrickson", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "5dy"} {"id": "1f553a60d47fb870b7d7d362f275bd80", "title": "Algebraic structure tree", "url": "https://arbital.com/p/algebraic_structure_tree", "source": "arbital", "source_type": "text", "text": "Some classes of [https://arbital.com/p/-3gx](https://arbital.com/p/-3gx) are given special names based on the properties of their sets and operations. These terms grew organically over the history of modern mathematics, so the overall list of names is a bit arbitrary (and in a few cases, some authors will use slightly different assumptions about certain terms, such as whether a semiring needs to have identity elements). This list is intended to clarify the situation to someone who has some familiarity with what an algebraic structure is, but not a lot of experience with using these specific terms.\n\n%%comment:Tree is the wrong word; this should be more of an algebraic structure collection of disjoint directed acyclic graphs? 
But this is what other pages seem to have chosen to link to, so here we are!%%\n\n# One set, one binary operation\n\n* [https://arbital.com/p/Groupoid](https://arbital.com/p/Groupoid), sometimes known as a magma. This is the freebie. Have a set and a binary operation? That's a groupoid.\n * A [https://arbital.com/p/-semigroup](https://arbital.com/p/-semigroup) is a groupoid where *the operation is [associative](https://arbital.com/p/3h4)*.\n * A [https://arbital.com/p/-3h3](https://arbital.com/p/-3h3) is a semigroup where *the operation has an [https://arbital.com/p/-54p](https://arbital.com/p/-54p)*.\n * A [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) is a monoid where *every element has an [https://arbital.com/p/-inverse_element](https://arbital.com/p/-inverse_element) under the binary operation*.\n * An [https://arbital.com/p/3h2](https://arbital.com/p/3h2) is a group where *the binary operation is [commutative](https://arbital.com/p/3jb)*.\n * A [https://arbital.com/p/-semilattice](https://arbital.com/p/-semilattice) is a semigroup where *the operation is [idempotent](https://arbital.com/p/Idempotent_operation) and commutative*.\n * A [https://arbital.com/p/-quasigroup](https://arbital.com/p/-quasigroup) is a groupoid where *every element has a [left and right quotient](https://arbital.com/p/quotient_algebra) under the binary operation* (sometimes called the [https://arbital.com/p/Latin_square_property](https://arbital.com/p/Latin_square_property)).\n * A [loop](https://arbital.com/p/Algebraic_loop) is a quasigroup *with identity*.\n * A **group**, as defined above, can also be defined as a (non-empty) quasigroup where *the operation is associative* ([quotients and associativity give a two-sided identity and two-sided inverses](https://arbital.com/p/), provided there's at least one element to be that identity).\n\n# One set, two binary operations\n\nFor the below, we'll use $*$ and $\\circ$ to denote the two binary operations in question. It might help to think of $*$ as \"like addition\" and $\\circ$ as \"like multiplication\", but be careful—in most of these structures, properties of addition and multiplication like commutativity won't be assumed!\n\n* [https://arbital.com/p/Ringoid](https://arbital.com/p/Ringoid) assumes only that $\\circ$ distributes over $*$—in other words, $a \\circ (b * c) = (a \\circ b) * (a \\circ c)$ and $(a * b) \\circ c = (a \\circ c) * (b \\circ c)$.\n * A [https://arbital.com/p/-semiring](https://arbital.com/p/-semiring) is a ringoid where *both $*$ and $\\circ$ define semigroups*.\n * An [https://arbital.com/p/-additive_semiring](https://arbital.com/p/-additive_semiring) is a semiring where $*$ *is commutative*.\n * A [https://arbital.com/p/-rig](https://arbital.com/p/-rig) is an additive semiring where $*$ *has an identity element*. (It's almost a ring! It's just missing negatives.)\n * A [https://arbital.com/p/-3gq](https://arbital.com/p/-3gq) is a rig where *every element has an inverse element under* $*$. (Some authors also require $\\circ$ to have an identity to call the structure a ring.)\n * A **ring with unity** is a ring where $\\circ$ *has an identity*. 
(Some authors just use the word \"ring\" for this; others use variations like \"unit ring\".)\n * A [https://arbital.com/p/-division_ring](https://arbital.com/p/-division_ring) is a ring with unity where *every element (except for the identity of $*$) has an inverse element under* $\\circ$.\n * A [field](https://arbital.com/p/481) is a division ring where $\\circ$ *is commutative*.\n * A [lattice](https://arbital.com/p/46c) is a ringoid where *both $*$ and $\\circ$ define semilattices, and satisfy the absorption laws ($a \\circ (a * b) = a * (a \\circ b) = a$)*. (While we'll continue to use $*$ and $\\circ$ here, the two operations of a lattice are almost always denoted with [$\\wedge$ and $\\vee$](https://arbital.com/p/3rc).)\n * A [https://arbital.com/p/-bounded_lattice](https://arbital.com/p/-bounded_lattice) is a lattice where *both operations have identities*.", "date_published": "2016-07-16T18:39:11Z", "authors": ["Eric Bruylant", "Ryan Hendrickson", "Kevin Clancy"], "summaries": [], "tags": ["Start"], "alias": "5dz"} {"id": "4a13c4c205b5baa0c845134cc1d34ec0", "title": "Bayes' rule: Beginner's guide", "url": "https://arbital.com/p/5f3", "source": "arbital", "source_type": "text", "text": "This arc teaches basic understanding of [https://arbital.com/p/1lz](https://arbital.com/p/1lz) via [https://arbital.com/p/1x5](https://arbital.com/p/1x5). This arc should work for [https://arbital.com/p/1r5](https://arbital.com/p/1r5)-type readers.", "date_published": "2016-08-02T16:15:37Z", "authors": ["Eric Bruylant", "Nate Soares", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress", "B-Class"], "alias": "5f3"} {"id": "c1042831ee7b9869f326d2e7540b638a", "title": "Lattice: Exercises", "url": "https://arbital.com/p/poset_lattice_exercise", "source": "arbital", "source_type": "text", "text": "Try these exercises to test your knowledge of lattices.\n\n## Distributivity\n\nDoes the lattice meet operator distribute over joins? In other words, for all lattices $L$ and all $p, q, r \\in L$, is it necessarily true that $p \\wedge (q \\vee r) = (p \\wedge q) \\vee (p \\wedge r)$? Prove your answer.\n\n%%hidden(Solution):\nThe following counterexample shows that lattice meets do not necessarily distribute over joins.\n\n![A non-distributive lattice called as M3](http://i.imgur.com/SKJirZx.png)\n\n%%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n edge [= \"none\"](https://arbital.com/p/arrowhead)\n\n rankdir = BT;\n t -> p\n t -> q\n t -> r\n p -> s\n q -> s\n r -> s\n}\n%%%\n\nIn the above lattice, $p \\wedge (q \\vee r) = p \\neq t = (p \\wedge q) \\vee (p \\wedge r)$.\n%%\n\n## Common elements\n\nLet $L$ be a lattice, and let $J$ and $K$ be two finite subsets of $L$ with a non-empty intersection. Prove that $\\bigwedge J \\leq \\bigvee K$.\n\n%%hidden(Solution):\nIf $J$ and $K$ have a non-empty intersection, then there exists some lattice element $p$ such that $p \\in J$ and $p \\in K$. Since $\\bigwedge J$ is a lower bound of $J$, we have $\\bigwedge J \\leq p$. Since $\\bigvee K$ is an upper bound of $K$, we have $p \\leq \\bigvee K$. By transitivity, we have $\\bigwedge J \\leq p \\leq \\bigvee K$.\n%%\n\n## Another inequality\n\nLet $L$ be a lattice, and let $J$ and $K$ be two finite subsets of $L$ such that for all $j \\in J$ and $k \\in K$, $j \\leq k$. 
Prove that $\\bigvee J \\leq \\bigwedge K$.\n\n%%hidden(Solution):\nRephrasing the problem statement, we have that every element of $J$ is a lower bound of $K$ and that every element of $K$ is an upper bound of $J$. It the follows that for $j \\in J$, $j \\leq \\bigwedge K$. Hence, $\\bigwedge K$ is an upper bound of $J$, and therefore it is greater than or equal to the *least* upper bound of $J$: $\\bigvee J \\leq \\bigwedge K$.\n%%\n\n## The minimax theorem\n\nLet $L$ be a lattice and $A$ an $m \\times n$ matrix of elements of $L$. Prove the following inequality: $$\\bigvee_{i=1}^m \\bigwedge_{j=1}^n A_{ij} \\leq \\bigwedge_{j=1}^n \\bigvee_{i=1}^m A_{ij}$$.\n\n%%hidden(Solution):\nTo get an intuitive feel for this theorem, it helps to first consider a small concrete instantiation. Consider the $3 \\times 3$ matrix depicted below, with elements $a,b,c,d,e,f,g,h$, and $i$. The inequality instantiates to $(a \\wedge b \\wedge c) \\vee (d \\wedge e \\wedge f) \\vee (g \\wedge h \\wedge i) \\leq (a \\vee d \\vee g) \\wedge (b \\vee e \\vee h) \\wedge (c \\vee f \\vee i)$. Why would this inequality hold?\n\n$$\\left[\\begin{array}{ccc}\na & b & c \\\\\nd & e & f \\\\\ng & h & i \\\\\n\\end{array} \\right](https://arbital.com/p/)$$\n\nNotice that each parenthesized expression on the left hand side of the inequality shares an element with each parenthesized expression on the right hand side of the inequality.This is true because the parenthesized expressions on the left hand side correspond to rows and the parenthesized expressions on the right hand side correspond to columns; each row of a matrix shares an element with each of its columns. The theorem proven in the *Common elements* exercise above then tells us that each parenthesized expression on the left hand side is less than or equal to each parenthesized expression on the right hand side.\n\nLet $J = \\{ a \\wedge b \\wedge c, d \\wedge e \\wedge f, g \\wedge h \\wedge i \\}$ and $K = \\{ a \\vee d \\vee g, b \\vee e \\vee h, c \\vee f \\vee i \\}$. Then the hypothesis for the theorem proven in the *Another inequality* exercise holds, giving us $\\bigvee J \\leq \\bigwedge K$, which is exactly what we wanted to prove. Extending this approach to the general case is straightforward.\n%%", "date_published": "2017-04-24T22:35:57Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": ["Exercise "], "alias": "5ff"} {"id": "1a0917c5b1549e75112e2160cb49c0bf", "title": "Mathematical induction", "url": "https://arbital.com/p/mathematical_induction", "source": "arbital", "source_type": "text", "text": "The **principle of mathematical induction** is a proof technique in which a statement, $P(n)$, is proven about a set of [natural numbers](https://arbital.com/p/45h) $n$. It may be best understood as treating the statements like dominoes: a statement $P(n)$ is true if the $n$-th domino is knocked down. We must knock down a first domino, or prove that a **base case** $P(m)$ is true. Next we must make sure the dominoes are close enough together to fall, or that the **inductive step** holds; in other words, we prove that if $k \\geq m$ and $P(k)$ is true, $P(k+1)$ is true. Then since $P(m)$ is true, $P(m+1)$ is true; and since $P(m+1)$ is true, $P(m+2)$ is true, and so on.\n\nAn example\n=======\n\nWe'll do an example to build our intuition before giving the proper definition of the principle. We'll provide yet another proof that\n$$ 1 + 2 + \\cdots + n = \\frac{n(n+1)}{2}$$\nfor all integers $n \\ge 1$. 
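Before working through the proof, it is worth sanity-checking the claim numerically for small $n$ — this is evidence, not a proof; the induction argument below is what covers *all* $n \ge 1$. A throwaway check in plain Python:

    # Spot-check 1 + 2 + ... + n == n(n+1)/2 for n = 1, ..., 100.
    for n in range(1, 101):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("Formula holds for n = 1, ..., 100")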
First, let's check the base case, where $n=1$:\n$$ 1 = \\frac{1(1+1)}{2} = \\frac{2}{2} = 1.$$\nThis is (fairly obviously) true, so we can move forward with the inductive step. The inductive step includes an assumption, namely that the statement is true for some integer $k$ that is larger than the base integer. For our example, if $k\\ge1$, we assume that\n$$1 + 2 + \\cdots + k = \\frac{k(k+1)}{2}$$\nand try to prove that\n$$ 1 + 2 + \\cdots + k + (k+1) = \\frac{(k+1)([https://arbital.com/p/k+1](https://arbital.com/p/k+1)+1)}{2}.$$\nTake our assumption and add $k+1$ to both sides:\n$$1+2+\\cdots + k + (k+1) = \\frac{k(k+1)}{2} + k + 1.$$\nNow the left-hand sides of what we know and what we want are the same. Hopefully the right-hand side will shake out to be the same. Get common denominators so that the right-hand side of the above equation is\n$$\\frac{k(k+1)}{2} + \\frac{2(k+1)}{2} = \\frac{(k+2)(k+1)}{2} = \\frac{(k+1)([https://arbital.com/p/k+1](https://arbital.com/p/k+1)+1)}{2}.$$\nTherefore,\n$$ 1 + 2 + \\cdots + k + (k+1) = \\frac{(k+1)([https://arbital.com/p/k+1](https://arbital.com/p/k+1)+1)}{2}$$\nas desired.\n\nLet $P(n)$ be the statement for $n \\ge 1$ that the sum of all integers between 1 and $n$ is $n(n+1)/2$. At the beginning we showed that the base case, $P(1)$, is true. Next we showed the inductive step, that if $k \\ge 1$ and $P(k)$ is true, then $P(k+1)$ is true. This means that since $P(1)$ is true, $P(2)$ is true; and since $P(2)$ is true, $P(3)$ is true; etc., so that $P(n)$ is true for all integers $n \\ge 1$.\n\nDefinition for the natural numbers\n======\n\nWe are ready to properly define mathematical induction.\n\nWeak mathematical induction\n-------\n\nLet $P(n)$ be a statement about the natural numbers. Then $P$ is true for all $n \\ge m$ if\n\n 1. $P(m)$ is true, and\n 2. For all $k \\ge m$, $P(k+1)$ is true if $P(k)$ is.\n\nStrong mathematical induction\n-----\n\nLet $P(n)$ be a statement about the natural numbers. Then $P$ is true for all $n \\ge m$ if\n\n 1. $P(m)$ is true, and\n 2. For all $k \\ge m$, $P(k)$ is true so long as $P(\\ell)$ is true for all $m \\le \\ell < k$.\n\nA note: **strong mathematical induction** is a variant on mathematical induction by requiring a stronger inductive step, namely that the statement is true for *all* smaller indices, not just the immediate predecessor.\n\nInduction on a well-ordered set\n=====\n\nWell done if your immediate response to the above material was, \"Well, am I only restricted to this technique on the natural numbers?\" No. As long as your index set is [well-ordered](https://arbital.com/p/55r), then strong mathematical induction will work.\n\nHowever, if your ordered set is not well-ordered, you can prove properties 1 and 2 above, and still not have it hold for all elements beyond the base case. For instance, consider the set of nonnegative real numbers, and let $P(x)$ be the claim $x\\leq 1$. Then $P(0)$ is true, and for all real numbers $x\\ge 0$, $P(x)$ is true whenever $P(y)$ is true for all $0 \\le y < x$. But of course $P(2)$ is false.", "date_published": "2016-08-09T10:17:38Z", "authors": ["Kevin Clancy", "Patrick LaVictoire", "Dylan Hendrickson", "Douglas Weathers", "Eric Bruylant"], "summaries": [], "tags": ["B-Class"], "alias": "5fz"} {"id": "3c4bd00a91d68568caa9b89661335f7b", "title": "Proposed A-Class", "url": "https://arbital.com/p/proposed_a_class", "source": "arbital", "source_type": "text", "text": "Post pages which you think should be A-Class in a comment here. 
Subscribe to this page to get updates about new proposals.\n\nOnce the number of approvals from [reviewers](https://arbital.com/p/5ft) reaches $3-rejections*2$ the page can be given an [A-Class](https://arbital.com/p/4yf) tag.", "date_published": "2016-07-17T16:28:50Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "5g2"} {"id": "52611c7dbd49a7bdbf1738a56a173889", "title": "Proposed B-Class", "url": "https://arbital.com/p/proposed_b_class", "source": "arbital", "source_type": "text", "text": "Post pages which you think should be B-Class in a comment here. Subscribe to this page to get updates about new proposals.\n\nOnce the number of approvals from [reviewers](https://arbital.com/p/5ft) reaches $1-rejections*2$ the page can be given an [B-Class](https://arbital.com/p/4yd) tag.", "date_published": "2016-08-02T18:22:58Z", "authors": ["Eric Bruylant", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "5g3"} {"id": "c63ba1808e92369972e3436d07b60d5f", "title": "Standard provability predicate", "url": "https://arbital.com/p/standard_provability_predicate", "source": "arbital", "source_type": "text", "text": "We know that every theory as [expressive](https://arbital.com/p/expressiveness_mathematics) as [https://arbital.com/p/-minimal_arithmetic](https://arbital.com/p/-minimal_arithmetic) (i.e., [$PA$](https://arbital.com/p/Peano_Arithmetic)) is capable of talking about [statements](https://arbital.com/p/statement_mathematics) about itself via [encodings](https://arbital.com/p/encoding) of [sentences](https://arbital.com/p/sentence_mathematics) of the language of [https://arbital.com/p/-arithmetic](https://arbital.com/p/-arithmetic) into the [natural numbers](https://arbital.com/p/45h).\n\nOf particular interest is figuring out whether it is possible to write a formula $Prv(x)$ which is true [https://arbital.com/p/-46m](https://arbital.com/p/-46m) there exists a proof of $x$ from the axioms and rules of inference of our theory.\n\nIf we reflect on what a proof is, we will come to the conclusion that a proof is a sequence of symbols which follows certain rules. Concretely, it is a sequence of strings such that either:\n\n1. The string is an [https://arbital.com/p/-axiom](https://arbital.com/p/-axiom) of the system or...\n2. 
The string is a theorem that can be deduced from previous strings of the system by applying a [rule of inference](https://arbital.com/p/).\n\nAnd the last string in the sequence corresponds to the theorem we want to prove.\n\nIf the axioms are a [https://arbital.com/p/-computable_set](https://arbital.com/p/-computable_set), and the rules of inference are also effectively computable, then checking whether a certain string is a proof of a given theorem is also computable.\n\nSince every computable set can be defined by a [$\\Delta_1$-formula](https://arbital.com/p/) in arithmetic %%note:[Proof](https://arbital.com/p/)%% then we can define the $\\Delta_1$-formula $Prv(x,y)$ such that $PA\\vdash Prv(n,m)$ iff $m$ encodes a proof of the sentence of arithmetic encoded by $n$.\n\nThen a sentence $S$ is a theorem of $PA$ if $PA\\vdash \\exists y Prv(\\ulcorner S \\urcorner, y)$.\n\nThis formula is the standard provability predicate, which we will abbreviate as $\\square_{PA}(x)$, and has some nice properties which correspond to what one would expect of a [https://arbital.com/p/-5j7](https://arbital.com/p/-5j7).\n\nHowever, due to [non-standard models](https://arbital.com/p/non_standard_model), there are some intuitions about provability that the standard provability predicate fails to capture. For example, it turns out that $\\square_{PA}A\\rightarrow A$ [is not always a theorem of $PA$](https://arbital.com/p/55w).\n\nThere are non-standard models of $PA$ which contain numbers other than $0,1,2,..$ (called [non-standard models of arithmetic](https://arbital.com/p/non_standard_models_of_arithmetic)). In those models, the standard predicate $\\square_{PA}x$ can be true even if for no natural number $n$ it is the case that $Prv(x,n)$. %%note:This condition is called [$\\omega$-inconsistency](https://arbital.com/p/)%% This means that there is a non-standard number which satisfies the formula, but nonstandard numbers do not encode standard proofs!", "date_published": "2016-07-23T16:04:16Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["In theories extending [https://arbital.com/p/-minimal_arithmetic](https://arbital.com/p/-minimal_arithmetic) there exists a $\\exists_1$-formula $\\square_T$(x) which defines the existence of a proof in an [axiomatizable theory](https://arbital.com/p/) $T$. \n\nThe formula satisfies [some nice propereties](https://arbital.com/p/5j7), but fails to capture some intuitions about provability due to non-standard weirdness."], "tags": [], "alias": "5gt"} {"id": "67cfd57b7e1128fe8cf512c4b3c042a9", "title": "Löb's theorem and computer programs", "url": "https://arbital.com/p/5hr", "source": "arbital", "source_type": "text", "text": "The close relationship between [logic and computability](https://arbital.com/p/) allows us to frame Löb's theorem in terms of a computer program which is systematically looking for proofs of mathematical statements, `ProofSeeker(X)`.\n\nProofSeeker can be something like this:\n\n ProofSeeker(X):\n n=1\n while(True):\n if Prv(X,n): return True\n else n = n+1\n\nWhere `Prv(X,n)` is true if $n$ [encodes](https://arbital.com/p/) a proof of $X$%%note:See [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt) for more info on how to talk about provablity%%.\n\nNow we form a special sentence called a *reflection principle*, of the form $L(X)$= \"*If `ProofSeeker(X)` halts, then X is true*\". 
(This requires a [quine](https://arbital.com/p/322) to construct.)\n\nReflection principles are intuitively true, since ProofSeeker clearly halts iff it finds a proof of $X$, and if there is a proof of $X$, then $X$ must be true if we have chosen an appropriate [https://arbital.com/p/-5hh](https://arbital.com/p/-5hh) to search for proofs. For example, let's say that `ProofSeeker` is looking for proofs within [https://arbital.com/p/3ft](https://arbital.com/p/3ft).\n\nThe question now becomes, what happens when we call `ProofSeeker` on $L(X)$? Is $PA$ capable of proving that the reflection principle for any given $X$ is true, and therefore `ProofSeeker` will eventually halt? Or will it run forever?\n\nSeveral possibilities appear:\n\n1. If $PA\\vdash X$, then certainly $PA\\vdash L(X)$, since if the consequent of $L(X)$ is provable, then the whole sentence is provable.\n2. If $PA\\vdash \\neg X$, then we cannot assert that $PA\\vdash L(X)$, for that would imply asserting that $PA\\vdash$\"There is no proof of X\". This is tantamount to $PA$ asserting the [https://arbital.com/p/-5km](https://arbital.com/p/-5km) of $PA$, which is forbidden by [Gödel's second incompleteness theorem](https://arbital.com/p/).\n3. If $X$ is undecidable in $PA$, then if it were the case that $PA\\vdash L(X)$ it would be inconsistent that $PA\\vdash \\neg X$ for the same reason as when $X$ is disprovable, and thus $PA\\vdash X$, contradicting that it was undecidable.\n\n**Löb's theorem** is the assertion that $PA$ proves the reflection principle for $X$ only if $PA$ proves $X$.\n\nOr conversely, Löb's theorem states that if $PA\\not\\vdash X$ then $PA\\not\\vdash \\square_{PA} X \\rightarrow X$.\n\nIt can be [proved](https://arbital.com/p/) in $PA$ and stronger systems. It has a very strong link with Gödel's second incompleteness theorem, and in fact [they both are equivalent](https://arbital.com/p/).", "date_published": "2016-07-29T21:32:55Z", "authors": ["Malcolm McCrimmon", "Jaime Sevilla Molina", "Eric Rogstad", "Patrick LaVictoire"], "summaries": [], "tags": [], "alias": "5hr"} {"id": "771884b005be1e795a9e42889d758551", "title": "Gödel II and Löb's theorem", "url": "https://arbital.com/p/5hs", "source": "arbital", "source_type": "text", "text": "The abstract form of [Gödel's second incompleteness theorem](https://arbital.com/p/) states that if $P$ is a provability predicate in a [consistent](https://arbital.com/p/5km), [https://arbital.com/p/-axiomatizable](https://arbital.com/p/-axiomatizable) theory $T$ then $T\\not\\vdash \\neg P(\\ulcorner S\\urcorner)$ for a disprovable $S$.\n\nOn the other hand, [Löb's theorem](https://arbital.com/p/55w) says that in the same conditions and for every sentence $X$, if $T\\vdash P(\\ulcorner X\\urcorner)\\rightarrow X$, then $T\\vdash X$.\n\nIt is easy to see how GII follows from Löb's. Just take $X$ to be $\\bot$, and since $T\\vdash \\neg \\bot$ (by definition of $\\bot$), Löb's theorem tells that if $T\\vdash \\neg P(\\ulcorner \\bot\\urcorner)$ then $T\\vdash \\bot$. 
Since we assumed $T$ to be consistent, then the consequent is false, so we conclude that $T\\neg\\vdash \\neg P(\\ulcorner \\bot\\urcorner)$.\n\nThe rest of this article exposes how to deduce Löb's theorem from GII.\n\nSuppose that $T\\vdash P(\\ulcorner X\\urcorner)\\rightarrow X$.\n\nThen $T\\vdash \\neg X \\rightarrow \\neg P(\\ulcorner X\\urcorner)$.\n\nWhich means that $T + \\neg X\\vdash \\neg P(\\ulcorner X\\urcorner)$.\n\nFrom Gödel's second incompleteness theorem, that means that $T+\\neg X$ is inconsistent, since it proves $\\neg P(\\ulcorner X\\urcorner)$ for a disprovable $X$.\n\nSince $T$ was consistent before we introduced $\\neg X$ as an axiom, then that means that $X$ is actually a consequence of $T$. By completeness, that means that we should be able to prove $X$ from $T$'s axioms, so $T\\vdash X$ and the proof is done.", "date_published": "2016-07-25T06:03:35Z", "authors": ["Jaime Sevilla Molina"], "summaries": ["[Gödel's second incompleteness theorem](https://arbital.com/p/) and [Löb's theorem](https://arbital.com/p/) are equivalent to each other."], "tags": [], "alias": "5hs"} {"id": "bd6113da75527668ad73be038100aa77", "title": "Provability predicate", "url": "https://arbital.com/p/provability_predicate", "source": "arbital", "source_type": "text", "text": "A provability predicate is a formula $P$ of a theory satisfying the Hilbert-Bernais derivability conditions. If the diagonal theorem is applicable in the theory as well, then [https://arbital.com/p/55w](https://arbital.com/p/55w) and [Gödel's second incompleteness theorem](https://arbital.com/p/) are provable for $P$.\n\nThe Hilbert-Bernais derivability conditions are as follows:\n\n1. (**Necessitation**) If $T\\vdash S$, then $T\\vdash P(\\ulcorner S \\urcorner)$\n2. (**Provability of modus ponens / distributive axioms**) $T\\vdash P(\\ulcorner A\\rightarrow B \\urcorner)\\rightarrow (P(\\ulcorner A \\urcorner)\\rightarrow P(\\ulcorner B \\urcorner))$\n3. (**Provability of renecessitation**) $T\\vdash P(\\ulcorner S \\urcorner)\\rightarrow P(\\ulcorner P(\\ulcorner S \\urcorner) \\urcorner)$\n\nThe derivability conditions are tighlty related to the axioms and rules of inference of [https://arbital.com/p/-534](https://arbital.com/p/-534). In fact, the normal systems of provability are defined as those that have necessitation as a rule and the distributive axioms. %%note:They also have to be closed under substitution%% On the other hand, D3 is the axiom that defines the system [K4](https://arbital.com/p/), and it is also satisfied by [https://arbital.com/p/GL](https://arbital.com/p/GL).\n\n##Examples\n\nThe **verum** formula $x=x$ trivially satisfies the derivability conditions.\n\nThe [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt) of arithmetic is a provability predicate.", "date_published": "2016-08-19T09:11:55Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["A provability predicate of a theory $T$ is a formula $P(x)$ with one free variable $x$ such that:\n\n1. If $T\\vdash S$, then $T\\vdash P(\\ulcorner S \\urcorner)$\n2. $T\\vdash P(\\ulcorner A\\rightarrow B \\urcorner)\\rightarrow (P(\\ulcorner A \\urcorner)\\rightarrow P(\\ulcorner B \\urcorner))$\n3. 
$T\\vdash P(\\ulcorner S \\urcorner)\\rightarrow P(\\ulcorner P(\\ulcorner S \\urcorner) \\urcorner)$"], "tags": ["Needs parent"], "alias": "5j7"} {"id": "ddaf6eab5b11000de308d8241626eec3", "title": "Normal system of provability logic", "url": "https://arbital.com/p/normal_system_of_provability", "source": "arbital", "source_type": "text", "text": "Between the modal systems of provability, the normal systems distinguish themselves by exhibiting nice properties that make them useful to reason.\n\nA normal system of provability is defined as satisfying the following conditions:\n\n1. Has **necessitation** as a rule of inference. That is, if $L\\vdash A$ then $L\\vdash \\square A$.\n2. Has **modus ponens** as a rule of inference: if $L\\vdash A\\rightarrow B$ and $L\\vdash A$ then $L\\vdash B$.\n3. Proves all **tautologies** of propositional logic.\n4. Proves all the **distributive axioms** of the form $\\square(A\\rightarrow B)\\rightarrow (\\square A \\rightarrow \\square B)$.\n5. It is **closed under substitution**. That is, if $L\\vdash F(p)$ then $L\\vdash F(H)$ for every modal sentence $H$.\n\nThe simplest normal system, which only has as axioms the tautologies of propositional logic and the distributive axioms, it is known as the [K system](https://arbital.com/p/).\n\n##Normality\nThe good properties of normal systems are collectively called **normality**.\n\nSome theorems of normality are:\n\n* $L\\vdash \\square(A_1\\wedge ... \\wedge A_n)\\leftrightarrow (\\square A_1 \\wedge ... \\wedge \\square A_n)$\n* Suppose $L\\vdash A\\rightarrow B$. Then $L\\vdash \\square A \\rightarrow \\square B$ and $L\\vdash \\diamond A \\rightarrow \\diamond B$.\n* $L\\vdash \\diamond A \\wedge \\square B \\rightarrow \\diamond (A\\wedge B)$\n\n##First substitution theorem\nNormal systems also satisfy the first substitution theorem.\n>(**First substitution theorem**) Suppose $L\\vdash A\\leftrightarrow B$, and $F(p)$ is a formula in which the sentence letter $p$ appears. Then $L\\vdash F(A)\\leftrightarrow F(B)$.\n\n##The hierarchy of normal systems\nThe most studied normal systems can be ordered by extensionality:\n\n![Hierarchy of normal systems](http://i.imgur.com/1yrL9FU.png)\n\nThose systems are:\n\n* The system K\n* The system K4\n* The system [GL](https://arbital.com/p/5l3)\n* The system T\n* The system S4\n* The system B\n* The system S5", "date_published": "2017-04-22T14:13:21Z", "authors": ["Eric Bruylant", "Jaime Sevilla Molina"], "summaries": [], "tags": ["Needs summary"], "alias": "5j8"} {"id": "ea5bf4193678d6435c3de2156358d074", "title": "Group presentation", "url": "https://arbital.com/p/group_presentation", "source": "arbital", "source_type": "text", "text": "summary(Technical): A presentation $\\langle X \\mid R \\rangle$ of a [group](https://arbital.com/p/3gd) $G$ is a set $X$ of *generators* and a set $R$ of *relators* which are words on $X \\cup X^{-1}$, such that $G \\cong F(X) / \\llangle R \\rrangle^{F(X)}$ the [https://arbital.com/p/-normal_closure](https://arbital.com/p/-normal_closure) of $\\llangle R \\rrangle$ with respect to the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) $F(X)$. 
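(A computational aside before the informal description below: the free reduction of words that underlies $F(X)$ is easy to mechanize. The sketch below is purely illustrative and uses a made-up convention — a generator is a string such as `"a"` and its formal inverse is written `"a^-1"`.)

    def free_reduce(word):
        """Cancel adjacent pairs like "a", "a^-1" (or "a^-1", "a") until
        none remain. A word is a list of symbols; the result is the
        freely reduced word."""
        def inverse(symbol):
            return symbol[:-3] if symbol.endswith("^-1") else symbol + "^-1"

        reduced = []
        for symbol in word:
            if reduced and reduced[-1] == inverse(symbol):
                reduced.pop()      # the new symbol cancels the previous one
            else:
                reduced.append(symbol)
        return reduced

    # a b b^-1 a a^-1 a  freely reduces to  a a
    print(free_reduce(["a", "b", "b^-1", "a", "a^-1", "a"]))  # ['a', 'a']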
\n\nA presentation $\\langle X \\mid R \\rangle$ of a group $G$ is an object that can be viewed in two ways:\n\n - a way of making $G$ as a [quotient](https://arbital.com/p/4tq) of the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) on some set $X$\n - a set $X$ of *generators* (from which we form the set $X^{-1}$ of formal inverses to $X$%%note:For example, if $X = \\{ a, b \\}$ then $X^{-1}$ is a set of new symbols which we may as well write $\\{ a^{-1}, b^{-1} \\}$. %%), and a set $R$ of *relators* (which must be [freely reduced words](https://arbital.com/p/5jc) on $X \\cup X^{-1}$), such that every element of $G$ may be written as a product of the generators and such that by combining elements of $R$ we can obtain every possible way of expressing the identity element of $G$ as a product of elements of $X$.\n\nEvery group $G$ has a presentation with $G$ as the set of generators, and the set of relators is the set containing every trivial word.\nOf course, this presentation is in general not unique: we may, for instance, add a new generator $t$ and the relator $t$ to any presentation to obtain an [isomorphic](https://arbital.com/p/49x) presentation.\n\nThe above presentation corresponds to taking the quotient of the free group $F(G)$ on $G$ by the homomorphism $\\phi: F(G) \\to G$ which sends a word $(a_1, a_2, \\dots, a_n)$ to the product $a_1 a_2 \\dots a_n$.\nThis is an instance of the more widely-useful fact that every group is a [quotient](https://arbital.com/p/4tq) of a [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) ([proof](https://arbital.com/p/5jb)).\n\n# Examples\n\n- The [https://arbital.com/p/-47y](https://arbital.com/p/-47y) $C_2$ on two elements has a presentation $\\langle x \\mid x^2 \\rangle$. That is, it has just one generator, $x$, and the relator $x^2$ tells us that $x^2$ is the [identity](https://arbital.com/p/54p) $e$.\nNotice that $\\langle x \\mid x^4 \\rangle$ would also satisfy the description that \"there is one generator, and $x^4$ is the identity\". However, the group corresponding to *this* presentation contains four elements, not two, so it is not $C_2$. This demonstrates the fact that if we have a presentation $\\langle X \\mid R \\rangle$, and a group can be written in such a way that all the relators hold in the group, and the group can be generated by the elements of $X$, that still doesn't mean the presentation describes the group; it could be that extra relations hold in the group that aren't listed in $R$. (In this case, for example, $x^2 = e$ is not listed in $\\langle x \\mid x^4 \\rangle$.)\n- The presentation $\\langle x, y \\mid xyx^{-1}y^{-1} \\rangle$ describes a group with two generators, such that the only nontrivial relation is $xyx^{-1}y^{-1} = e$ (and anything that can be built up from that). That relation may be written as $xy=yx$: that is, $x$ and $y$ [commute](https://arbital.com/p/3jb). This tells us that the group is [abelian](https://arbital.com/p/3h2), since every generator commutes with every other generator. In fact, this group's elements are just words $x^n y^m$ for some integers $n, m$; this follows because, for instance, $xyx = xxy = x^2y$, and in general we can pull all the instances of the letter $x$ (and $x^{-1}$) out to the front of the word. Therefore we can write an element of this group as $(m, n)$ where $m, n$ are integers; hence the group is just $\\mathbb{Z}^2$ with pointwise addition as its operation.\n- The presentation $\\langle x, y \\mid x^2, y \\rangle$ is just $C_2$ again. 
Indeed, we have a relator telling us that $y$ is equal to the identity, so we might as well just omit it from the generating set (because it doesn't add anything new to any word in which it appears).\n- The presentation $\\langle a, b \\mid aba^{-1}b^{-2}, bab^{-1}a^{-2} \\rangle$ is a longwinded way to define the trivial group (the group with one element). To prove this, it is enough to show that each generator represents the identity, because then every word on the generators has been made up from the identity element so is itself the identity. We have access to the facts that $ab = b^2 a$ and that $ba = a^2 b$ in this group (because, for example, $aba^{-1} b^{-2} = e$). The rest of the proof is an exercise.\n\n%%hidden(Show solution):\nWe have $ab = b^2 a$ from the first relator; that is $b ba$.\nBut $ba = a^2 b$ is the second relator, so that is $b a^2 b$; hence $ab = b a^2 b$ and so $a = b a^2$ by cancelling the rightmost $b$.\nThen by cancelling the rightmost $a$, we obtain $e = ba$, and hence $a = b^{-1}$.\n\nBut now by the first relator, $ab = b^2 a = b b a$; using that both $ab$ and $ba$ are the identity, this tells us that $e = b$; so $b$ is trivial.\n\nNow $a = b^{-1}$ and so $a$ is trivial too.\n%%", "date_published": "2016-07-27T11:02:40Z", "authors": ["mrkun", "Patrick Stevens"], "summaries": ["A presentation $\\langle X \\mid R \\rangle$ of a [group](https://arbital.com/p/3gd) is, informally, a way of specifying the group by a set $X$ of *generators* together with a set $R$ of *relators*.\nEvery element of the group is some product of generators, and the relators tell us when a product is trivial."], "tags": ["Math 3", "Stub"], "alias": "5j9"} {"id": "5a88dfa5423e36b41c645b5bcd793d02", "title": "Every group is a quotient of a free group", "url": "https://arbital.com/p/every_group_is_quotient_of_free_group", "source": "arbital", "source_type": "text", "text": "Given a [group](https://arbital.com/p/3gd) $G$, there is a [https://arbital.com/p/-free_group](https://arbital.com/p/-free_group) $F(X)$ on some set $X$, such that $G$ is [isomorphic](https://arbital.com/p/49x) to some [quotient](https://arbital.com/p/4tq) of $F(X)$.\n\nThis is an instance of a much more general phenomenon: for a general [monad](https://arbital.com/p/monad_category_theory) $T: \\mathcal{C} \\to \\mathcal{C}$ where $\\mathcal{C}$ is a category, if $(A, \\alpha)$ is an [algebra](https://arbital.com/p/eilenberg_moore_category) over $T$, then $\\alpha: TA \\to A$ is a [coequaliser](https://arbital.com/p/coequaliser_category_theory). 
([Proof.](https://arbital.com/p/algebras_are_coequalisers))\n\n# Proof\nLet $F(G)$ be the free group on the elements of $G$, in a slight abuse of notation where we use $G$ interchangeably with its [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz).\nDefine the [homomorphism](https://arbital.com/p/47t) $\\theta: F(G) \\to G$ by \"multiplying out a word\": taking the word $(a_1, a_2, \\dots, a_n)$ to the product $a_1 a_2 \\dots a_n$.\n\nThis is indeed a group homomorphism, because the group operation in $F(G)$ is concatenation and the group operation in $G$ is multiplication: clearly if $w_1 = (a_1, \\dots, a_m)$, $w_2 = (b_1, \\dots, b_n)$ are words, then $$\\theta(w_1 w_2) = \\theta(a_1, \\dots, a_m, b_1, \\dots, b_m) = a_1 \\dots a_m b_1 \\dots b_m = \\theta(w_1) \\theta(w_2)$$\n\nThis immediately expresses $G$ as a quotient of $F(G)$, since [kernels of homomorphisms are normal subgroups](https://arbital.com/p/4h7).", "date_published": "2016-07-22T09:38:50Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Math 3"], "alias": "5jb"} {"id": "8b1ac22f26733ce48648cef7bfff8ae7", "title": "Freely reduced word", "url": "https://arbital.com/p/freely_reduced_word", "source": "arbital", "source_type": "text", "text": "summary: A \"word\" over a set $X$ is a finite ordered list of elements from $X$ and $X^{-1}$ (where $X^{-1}$ is the set of formal inverses of the elements of $X$), as if we were treating the elements of $X$ and $X^{-1}$ as letters of an alphabet. A \"freely reduced\" word over $X$ is one which doesn't contain any consecutive cancelling letters such as $x x^{-1}$.\n\nGiven a [set](https://arbital.com/p/3jz) $X$, we can make a new set $X^{-1}$ consisting of \"formal inverses\" of elements of $X$.\nThat is, we create a set of new symbols, one for each element of $X$, which we denote $x^{-1}$; so $$X^{-1} = \\{ x^{-1} \\mid x \\in X \\}$$\n\nBy this stage, we have not given any kind of meaning to these new symbols.\nThough we have named them suggestively as $x^{-1}$ and called them \"inverses\", they are at this point just objects.\n\nNow, we apply meaning to them, giving them the flavour of group inverses, by taking the [union](https://arbital.com/p/set_union) $X \\cup X^{-1}$ and making finite \"words\" out of this combined \"alphabet\".\n\nA finite word over $X \\cup X^{-1}$ consists of a list of symbols from $X \\cup X^{-1}$.\nFor example, if $X = \\{ 1, 2 \\}$ %%note:Though in general $X$ need not be a set of numbers.%%, then some words are:\n\n- The empty word, which we commonly denote $\\varepsilon$\n- $(1)$\n- $(2)$\n- $(2^{-1})$\n- $(1, 2^{-1}, 2, 1, 1, 1, 2^{-1}, 1^{-1}, 1^{-1})$\n\nFor brevity, we usually write a word by just concatenating the \"letters\" from which it is made:\n\n- The empty word, which we commonly denote $\\varepsilon$\n- $1$\n- $2$\n- $2^{-1}$\n- $1 2^{-1} 2 1 1 1 2^{-1} 1^{-1} 1^{-1}$\n\nFor even more brevity, we can group together successive instances of the same letter.\nThis means we could also write the last word as $1 2^{-1} 2 1^3 2^{-1} 1^{-2}$.\n\nNow we come to the definition of a **freely reduced** word: it is a word which has no subsequence $r r^{-1}$ or $r^{-1} r$ for any $r \\in X$.\n\n# Example\n\nIf $X = \\{ a, b, c \\}$, then we might write $X^{-1}$ as $\\{ a^{-1}, b^{-1}, c^{-1} \\}$ (or, indeed, as $\\{ x, y, z \\}$, because there's no meaning inherent in the $a^{-1}$ symbol so we might as well write it as $x$).\n\nThen $X \\cup X^{-1} = \\{ a,b,c, a^{-1}, b^{-1}, c^{-1} \\}$, and some examples of words over $X \\cup 
X^{-1}$ are:\n\n- The empty word, which we commonly denote $\\varepsilon$\n- $a$\n- $aaaa$\n- $b$\n- $b^{-1}$\n- $ab$\n- $ab^{-1}cbb^{-1}c^{-1}$\n- $aa^{-1}aa^{-1}$\n\nOf these, all except the last two are freely reduced.\nHowever, $ab^{-1}cbb^{-1}c^{-1}$ contains the substring $bb^{-1}$, so it is not freely reduced; and $aa^{-1}aa^{-1}$ is not freely reduced (there are several ways to see this: it contains $aa^{-1}$ twice and $a^{-1} a$ once).\n\n%%hidden(Alternative, more opaque, treatment which might help with one aspect):\nThis chunk is designed to get you familiar with the idea that the symbols $a^{-1}$, $b^{-1}$ and so on in $X^{-1}$ don't have any inherent meaning.\n\nIf we had (rather perversely) gone with $\\{ x, y, z \\}$ as the corresponding \"inverses\" to $\\{ a, b, c \\}$ (in that order), rather than $\\{ a^{-1}, b^{-1}, c^{-1} \\}$ as our \"inverses\" %%note:Which you should never do. It just makes things harder to read.%%, then the above words would look like:\n\n- The empty word, which we commonly denote $\\varepsilon$\n- $a$\n- $aaaa$, which we might also write as $a^4$\n- $b$\n- $y$\n- $ab$\n- $aycbyz$\n- $axax$\n\nFor the same reasons, all but the last two would be freely reduced.\nHowever, $aycbyz$ contains the substring $by$ so it is not freely reduced; and $axax$ is not freely reduced (there are several ways to see this: it contains $ax$ twice and $xa$ once).\n%%\n\n# Why are we interested in this?\n\nWe can use the freely reduced words to construct the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) on a given set $X$; this group has as its elements the freely reduced words over $X \\cup X^{-1}$, and as its group operation \"concatenation followed by free reduction\" (that is, removal of pairs $r r^{-1}$ and $r^{-1} r$). 
%%note:We make this construction properly rigorous, and check that it is indeed a group, on the [https://arbital.com/p/5kg](https://arbital.com/p/5kg) page.%%\nThe notion of \"freely reduced\" basically tells us that $r r^{-1}$ is the identity for every letter $r \\in X$, as is $r^{-1} r$; this cancellation of inverses is a property we very much want out of a group.\n\nThe free group is (in a certain well-defined sense from [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7)%%note:See [https://arbital.com/p/free_group_functor_is_left_adjoint_to_forgetful](https://arbital.com/p/free_group_functor_is_left_adjoint_to_forgetful) for the rather advanced reason why.%%) the purest way of making a group containing the elements $X$, but to make it, we need to throw in inverses for every element of $X$, and then make sure the inverses play nicely with the original elements (which we do by free reduction).\nThat is why we need \"freely-reducedness\".", "date_published": "2016-07-25T16:17:10Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Patrick Stevens"], "summaries": [], "tags": ["Needs parent"], "alias": "5jc"} {"id": "bab1f2d76a2c8cff751badf8b2855556", "title": "Division of rational numbers (Math 0)", "url": "https://arbital.com/p/division_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "So far in our study of the [arithmetic](https://arbital.com/p/514) of [rational numbers](https://arbital.com/p/4zq), we've had [addition](https://arbital.com/p/55m) (\"putting apples and chunks of apples side by side and counting what you've got\"), [subtraction](https://arbital.com/p/56x) (\"the same, but you're allowed anti-apples too\"), and [multiplication](https://arbital.com/p/59s) (\"make a rational number, but instead of starting from $1$ apple, start from some other number\").\n\n**Division** is what really sets the rational numbers apart from the [integers](https://arbital.com/p/53r), and it is the mathematician's answer to the question \"if I have some apples, how do I share them among my friends?\".\n\n# What's wrong with the integers?\n\nIf you have an integer number of apples (that is, some number of apples and anti-apples - no chunks allowed, just whole apples and anti-apples), and you want to share them with friends, sometimes you'll get lucky.\nIf you have four apples, for instance, then you can share them out between yourself and one friend, giving each person two apples.\n\nBut sometimes (often, in fact) you'll get unlucky.\nIf you want to share four apples between yourself and two others, then you can give each person one apple, but there's this pesky single apple left over which you just can't share.\n\n# What the rationals do for us\n\nThe trick, obvious to anyone who has ever eaten a cake, is to cut the leftover apple into three equally-sized pieces and give each person a piece.\nNow we have shared out all four apples equally.\n\nBut in order to do so, we've left the world of the integers, and in getting out our knife, we started working in the rationals.\nHow much apple has everyone received, when we shared four apples among three people (that is, myself and two friends as recipients of apple)?\n%%hidden(Show solution):\nEveryone got $\\frac{4}{3}$.\n\nIndeed, everyone got one whole apple; and then we chopped the remaining apple into three $\\frac{1}{3}$-chunks and gave everyone one chunk.\nSo everyone ended up with one apple and one $\\frac{1}{3}$-chunk.\n\nBy our instant addition rule %%note:If you've forgotten it, check out the [addition 
page](https://arbital.com/p/55m) again; it came from working out a chunk size out of which we can make both the $\\frac{1}{3}$-chunk and the $1$-chunk.%%, $$1 + \\frac{1}{3} = \\frac{1}{1} + \\frac{1}{3} = \\frac{3 \\times 1 + 1 \\times 1}{3 \\times 1} = \\frac{3+1}{3} = \\frac{4}{3}$$ \n\nThere's another way to see this, if (laudably!) you don't like just applying rules.\nWe could cut *every* apple into three pieces at the beginning, so we're left with four collections of three $\\frac{1}{3}$-sized chunks.\nBut now it's easy to share this among three people: just give everyone one of the $\\frac{1}{3}$-chunks from each apple.\nWe gave everyone four chunks in total, so this is $\\frac{4}{3}$.\n%%\n\nThe rationals provide the natural answer to all \"sharing\" questions about apples.\n\nWe write \"rational number $x$ divided by rational number $y$\" as $\\frac{x}{y}$: that is, $x$ apples divided amongst $y$ people.\n(We'll soon get to what it means to divide by a non-integer number of people; just roll with it for now.)\nIf space is a problem, we can write $a/n$ instead.\nNotice that our familiar notation of \"$\\frac{1}{m}$-sized chunks\" is actually just $1$ apple divided amongst $m$ people: it's the result of dividing $1$ into $m$ equal chunks.\nSo the notation does make sense, and it's just an extension of the notation we've been using already.\n\n# Division by a natural number\n\nIn general, $\\frac{a}{m}$ apples, divided amongst $n$ people, is obtained by the \"other way\" above.\nCut the $\\frac{a}{m}$ into $n$ pieces, and then give everyone an equal number of pieces.\n\nRemember, $\\frac{a}{m}$ is made of $a$ copies of pieces of size $\\frac{1}{m}$; so what we do is cut all of the $\\frac{1}{m}$-chunks individually into $n$ pieces, and then give everyone $a$ of the little pieces we've made.\nBut \"cut a $\\frac{1}{m}$-chunk into $n$ pieces\" is just \"cut an apple into $n$ pieces, but instead of doing it to one apple, do it to a $\\frac{1}{m}$-chunk\": that is, it is $\\frac{1}{m} \\times \\frac{1}{n}$, or $\\frac{1}{m \\times n}$.\n\nSo the answer is just $$\\frac{a}{m} / n = \\frac{a}{m \\times n}$$\n\n## Example\n\n\n\n# Division by a negative integer\n\nWhat would it even mean to divide an apple between four anti-people?\nHow about a simpler question: dividing an apple between one anti-person%%note:Remembering that dividing $x$ apples between one person just gives $x$, since there's not even any cutting of the apples necessary.%%? (The answer to this would be $\\frac{1}{-1}$.) It's not obvious!\n\nWell, the thing to think about here is that if I take an apple, and share it among one person %%note:Myself, probably. I'm very selfish.%%, then I've got just the same apple as before (cut into no chunks): that is, $\\frac{1}{1} = 1$.\nAlso, if I take an apple and share it among one person who is *not* myself (and I don't give any apple to myself), then we've also just got the same unsliced apple as before: $\\frac{1}{1} = 1$ again. \n\nBut if I take an apple, and give it to an *anti*-person%%note:Being very careful not to touch them, because I would annihilate an anti-person!%%, then from their perspective *I*'m the anti-person, and I've just given them an anti-apple.\nThere's a law of symmetry built into reality: the laws of physics are invariant if we reflect \"thing\" and \"anti-thing\" throughout the universe.%%note:Technically this is not *quite* true: the actual symmetry is on reflecting charge, parity and time all together, rather than just parity alone. 
But for the purposes of this discussion, let's pretend that the universe is parity-symmetric.%%\nThe anti-person sees the universe in a way that's the same as my way, but where the \"anti\" status of everything (and everyantithing) is flipped.\n\n\n\nPut another way, \"anti-ness\" is not an absolute notion but a relative one.\nI can only determine whether something is the same anti-ness or the opposite anti-ness to myself.\n\nSear this into your mind: the laws of rational-number \"physics\" are the same no matter who is observing.\nIf I observe a transaction, like \"I, a person, give someone an apple\", then an external person will observe \"The author, a person, gave someone an apple\", while an external anti-person will observe \"The author, an anti-person, gave an anti-person an anti-apple\".\n\nFrom the external person's perspective, they saw \"someone (of the same anti-ness as me) gave someone else (of the same anti-ness as me) an apple (of the same anti-ness as me)\".\nFrom the anti-person's perspective, everything is relatively the same: \"someone (of the opposite anti-ness to me) gave someone else (of the opposite anti-ness to me) an apple (of the opposite anti-ness to me)\".\n\nSo $\\frac{-1}{-1}$, being \"one anti-apple shared among one anti-person\", can be viewed instead from the perspective of an anti-person; they see one *apple* being given to one *person*: that is, $\\frac{-1}{-1}$ is equal to $1$.\n\nArmed with the fact that $\\frac{-1}{-1} = 1$, we can just apply our usual multiplication rule that $\\frac{a}{m} \\times \\frac{b}{n} = \\frac{a \\times b}{m \\times n}$, to deduce that $$\\frac{1}{-m} = \\frac{1}{-m} \\times 1 = \\frac{1}{-m} \\times \\frac{-1}{-1} = \\frac{-1 \\times 1}{-m \\times -1} = \\frac{-1}{m}$$\n\nThe law of symmetry-of-the-universe basically says that $\\frac{a}{-b} = \\frac{-a}{b}$.", "date_published": "2016-08-01T06:09:45Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["Division captures the idea of \"sharing\" apples between people, and it is the core motivating idea of the rational numbers."], "tags": ["Math 0", "C-Class"], "alias": "5jd"} {"id": "361aaa328da50ee56956090d566dc956", "title": "Monotone function", "url": "https://arbital.com/p/poset_monotone_function", "source": "arbital", "source_type": "text", "text": "Let $\\langle P, \\leq_P \\rangle$ and $\\langle Q, \\leq_Q \\rangle$ be [posets](https://arbital.com/p/3rb). 
Then a function $\\phi : P \\rightarrow Q$ is said to be **monotone** (alternatively, **order-preserving**) if for all $s, t \\in P$, $s \\le_P t$ implies $\\phi(s) \\le_Q \\phi(t)$.\n\nPositive example\n------\n\n![A simple monotone map phi](http://i.imgur.com/6w11VT2.png)\n\n%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n rankdir = BT;\n rank = same;\n compound = true;\n fontname=\"MathJax_Main\";\n\n subgraph cluster_P {\n node [https://arbital.com/p/style=filled,color=white](https://arbital.com/p/style=filled,color=white);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n style = filled;\n color = lightgrey;\n fontcolor = black;\n label = \"P\";\n labelloc = b;\n b -> a;\n c -> a;\n \n }\n subgraph cluster_Q {\n node [https://arbital.com/p/style=filled](https://arbital.com/p/style=filled);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n color = black;\n fontcolor = black;\n label= \"Q\";\n labelloc = b;\n u -> t;\n }\n edge [= blue, style = dashed](https://arbital.com/p/color)\n fontcolor = blue;\n label = \"φ\"; \n labelloc = t; \n b -> t [= false](https://arbital.com/p/constraint);\n a -> t [= false](https://arbital.com/p/constraint);\n c -> u [= false](https://arbital.com/p/constraint);\n}\n\n%%\n\nHere is an example of a monotone map $\\phi$ from a poset $P$ to another poset $Q$. Since $\\le_P$ has two comparable pairs of elements, $(c,a)$ and $(b,a)$, there are two constraints that $\\phi$ must satisfy to be considered monotone. Since $c \\leq_P a$, we need $\\phi(c) = u \\leq_Q t = \\phi(a)$. This is, in fact, the case. Also, since $b \\leq_P a$, we need $\\phi(b) = t \\leq_Q t = \\phi(a)$. This is also true.\n\nNegative example\n---------------\n\n![A simple, non-monotone map](http://i.imgur.com/Zp0wSnP.png)\n\n%%comment:\ndot source:\n\ndigraph G {\n node [= 0.1, height = 0.1](https://arbital.com/p/width)\n rankdir = BT;\n rank = same;\n compound = true;\n fontname=\"MathJax_Main\";\n\n subgraph cluster_P {\n node [https://arbital.com/p/style=filled,color=white](https://arbital.com/p/style=filled,color=white);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n style = filled;\n color = lightgrey;\n fontcolor = black;\n label = \"P\";\n labelloc = b;\n a -> b;\n }\n\n subgraph cluster_Q {\n node [https://arbital.com/p/style=filled](https://arbital.com/p/style=filled);\n edge [= \"none\"](https://arbital.com/p/arrowhead);\n color = black;\n fontcolor = black;\n label= \"Q\";\n labelloc = b;\n w -> u;\n w -> v;\n u -> t;\n v -> t;\n }\n edge [= blue, style = dashed](https://arbital.com/p/color)\n fontcolor = blue;\n label = \"φ\"; \n labelloc = t;\n b -> u [= false](https://arbital.com/p/constraint);\n a -> v [= false](https://arbital.com/p/constraint);\n}\n%%\n\nHere is an example of another map $\\phi$ between two other posets $P$ and $Q$. This map is not monotone, because $a \\leq_P b$ while $\\phi(a) = v \\parallel_Q u = \\phi(b)$.\n\nAdditional material\n----------------------------------\n\nFor some examples of montone functions and their applications, see [https://arbital.com/p/5lf](https://arbital.com/p/5lf). 
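If it helps to see the definition as something you can compute with, here is a minimal illustrative sketch that checks the positive and negative examples above (the helper names `order_from_covers` and `is_monotone` are invented for the illustration, and the orders are rebuilt from the covering pairs shown in the diagrams):

```python
from itertools import product

def order_from_covers(elements, covers):
    """Build the order relation <= (as a set of pairs) from covering pairs
    (x, y) meaning x <= y, closing under reflexivity and transitivity."""
    leq = {(x, x) for x in elements} | set(covers)
    changed = True
    while changed:
        changed = False
        for (x, y), (y2, z) in product(list(leq), list(leq)):
            if y == y2 and (x, z) not in leq:
                leq.add((x, z))
                changed = True
    return leq

def is_monotone(phi, leq_P, leq_Q):
    """phi is monotone iff s <=_P t implies phi(s) <=_Q phi(t)."""
    return all((phi[s], phi[t]) in leq_Q for (s, t) in leq_P)

# Positive example: b <= a and c <= a in P; u <= t in Q; phi sends a and b to t, and c to u.
P = order_from_covers({"a", "b", "c"}, {("b", "a"), ("c", "a")})
Q = order_from_covers({"t", "u"}, {("u", "t")})
print(is_monotone({"a": "t", "b": "t", "c": "u"}, P, Q))  # True

# Negative example: a <= b in P'; Q' is the diamond w <= u, v <= t; phi sends a to v and b to u.
P2 = order_from_covers({"a", "b"}, {("a", "b")})
Q2 = order_from_covers({"t", "u", "v", "w"},
                       {("w", "u"), ("w", "v"), ("u", "t"), ("v", "t")})
print(is_monotone({"a": "v", "b": "u"}, P2, Q2))  # False: a <= b, but v and u are incomparable
```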
To test your knowledge of monotone functions, head on over to [https://arbital.com/p/6pv](https://arbital.com/p/6pv).", "date_published": "2016-12-03T01:42:37Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": [], "alias": "5jg"} {"id": "d6630565412577a6414b0381de4e7b8f", "title": "Exponential notation for function spaces", "url": "https://arbital.com/p/5k7", "source": "arbital", "source_type": "text", "text": "If $X$ and $Y$ are sets, the set of functions from $X$ to $Y$ (often written $X \\to Y$) is sometimes also written $Y^X$. This latter notation, which we'll call *exponential notation*, is related to the notation for finite powers of sets (e.g., $Y^3$ for the set of triples of elements of $Y$) as well as the notation of exponentiation for numbers.\n\nWithout further ado, here are some reasons this is good notation.\n\n- A function $f : X \\to Y$ can be thought of as an \"$X$ wide\" tuple of elements of $Y$. That is, a tuple of elements of $Y$ where the positions in the tuple are given by elements of $X$, generalizing the notation $Y^n$ which denotes the set of $n$ wide tuples of elements of $Y$. Note that if $|X| = n$, then $Y^X \\cong Y^n$.\n\n- This notion of exponentiation together with cartesian product as multiplication and disjoint union as addition satisfy the same relations as exponentiation, multiplication, and addition of natural numbers. Namely, \n\n - $Z^{X \\times Y} \\cong (Z^X)^Y$ (this isomorphism is called currying)\n - $Z^{X + Y} \\cong Z^X \\times Z^Y$\n - $Z^1 \\cong Z$ (where $1$ is a one element set, since there is one function into $Z$ for every element of $Z$)\n - $Z^0 \\cong 1$ (where $0$ is the empty set, since there is one function from the empty set to any set)\n\nMore generally, $Y^X$ is good notation for the exponential object representing $\\text{Hom}_{\\mathcal{C}}(X, Y)$ in an arbitrary cartesian closed category $\\mathcal{C}$ for the first set of reasons listed above.", "date_published": "2016-07-25T05:14:47Z", "authors": ["Eric Rogstad", "Izaak Meckler"], "summaries": [], "tags": [], "alias": "5k7"} {"id": "799c05dd02ea3442b002b02a80c7ba5a", "title": "Free group", "url": "https://arbital.com/p/free_group", "source": "arbital", "source_type": "text", "text": "Intuitively, the free [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $F(X)$ on the [set](https://arbital.com/p/3jz) $X$ is the group whose elements are the [freely reduced words](https://arbital.com/p/5jc) over $X$, and whose group operation is \"[https://arbital.com/p/-concatenation](https://arbital.com/p/-concatenation) followed by free reduction\".\n\nThe free group can be constructed rigorously in several equivalent ways, some of which are easy to construct but hard to understand, and some of which are intuitive but rather hard to define properly.\nOur formal construction (detailed on the [Formal Definition lens](https://arbital.com/p/5s1)) will be one of the more opaque definitions; there, we eventually show that the formal construction yields a group which is [isomorphic](https://arbital.com/p/49x) to the intuitive version, and this will prove that the intuitive version does in fact define a group with the right properties for the free group.\n\n# Intuitive definition\n\nGiven a set $X$, the free group $F(X)$ (or $FX$) on $X$ has:\n\n- as its elements, the [freely reduced words](https://arbital.com/p/5jc) over $X$;\n- as its group [operation](https://arbital.com/p/3h7), \"concatenation followed by free reduction\".\n\n## Example\n\nThe free group on the set $X = \\{ a, b 
\\}$ contains the following elements (among others):\n\n- The word $(a,b,a,a,a,b^{-1})$, which is written in shorthand as $abaaab^{-1}$ or (most commonly) as $aba^3b^{-1}$\n- The empty word $()$, which is written as $\\varepsilon$\n- The word $(b,b,b)$, which is written as $b^3$\n- The word $(a^{-1}, b^{-1}, b^{-1})$, or $a^{-1} b^{-2}$\n\n(From now on, we will only use the common shorthand to denote words, except in cases where this interferes with something else we're doing.)\n\nSome things which the same free group does *not* contain are:\n\n- $aa^{-1}$, since this is not freely reduced\n- $c$, for the fatuous reason that $c$ is not a word over the alphabet $\\{a,b\\}$\n- $abb^{-1}a$, since this is not freely reduced.\n\nSome examples of this group's group operation (which we will write as $\\cdot$) in action are:\n\n- $aba \\cdot bab = ababab$\n- $aba^2 \\cdot a^3b = aba^5b$\n- $aba^{-1} \\cdot a = ab$, obtained as the free reduction of $aba^{-1}a$\n- $ab \\cdot b^{-1} a^{-1} = \\varepsilon$, obtained as $abb^{-1}a^{-1} = aa^{-1}$ (reduction of $b$) and then $a a^{-1} = \\varepsilon$.\n\n# Examples\n\n- The free group on the empty set is just the trivial group, the group with just one element. Indeed, we must have an identity (the empty word), but there are no extra generators to throw in, so there are no longer words.\n- The free group on a set with one element is isomorphic to the group of integers under addition. Indeed, say our set is $\\{ a \\}$; then every member of the free group is a word of the form $a^n$ or $a^{-n}$ or $a^0$. Then the isomorphism is that $a^i$ is sent to $i \\in \\mathbb{Z}$.\n- The free group on two elements is [countable](https://arbital.com/p/2w0). Indeed, its elements are all of the form $a^{i_1} b^{j_1} a^{i_2} b^{j_2} \\dots a^{i_n} b^{j_n}$, and so the free group [injects](https://arbital.com/p/4b7) into the [set of natural numbers](https://arbital.com/p/45h) by the (rather convoluted) $$a^{i_1} b^{j_1} a^{i_2} b^{j_2} \\dots a^{i_n} b^{j_n} \\mapsto 2^{\\mathrm{sgn}(i_1)+2} 3^{|i_1|} 5^{\\mathrm{sgn}(j_1)+2} 7^{|j_1|} \\dots$$\nwhere we produce the natural number %%note:Uniquely, by the [https://arbital.com/p/-5rh](https://arbital.com/p/-5rh).%% whose prime factors tell us exactly how to construct the original word. %%note:This trick is extremely useful in determining whether a set is countable, and you should go over it until you are sure you understand it.%% (The $\\mathrm{sgn}$ function outputs $-1$ if its input is negative; $1$ if the input is positive; and $0$ if its input is $0$.)\nAs a related exercise, you should prove that the free group on a countable set is countable.\n\n# Why are the free groups important?\n\nIt is a fact that [https://arbital.com/p/-5jb](https://arbital.com/p/-5jb).\nTherefore the free groups can be considered as a kind of collection of \"base groups\" from which all other groups can be made as quotients.\nThis idea is made more concrete by the idea of the [https://arbital.com/p/-5j9](https://arbital.com/p/-5j9), which is a notation that specifies a group as a quotient of a free group. 
%%note:Although it's not usually presented in this way at first, because the notation has a fairly intuitive meaning on its own.%%\n\n## Free groups \"have no more relations than they are forced to have\"\n\nThe crux of the idea of the group presentation is that the free group is the group we get when we take all the elements of the set $X$ as elements which \"generate\" our putative group, and then throw in every possible combination of those \"generators\" so as to complete it into a bona fide group.\nWe explain why the free group is (informally) a \"pure\" way of doing this, by walking through an example.\n\n%%%hidden(Example):\nIf we want a group which contains the elements of $X = \\{ a, b \\}$, then what we *could* do is make it into the [https://arbital.com/p/-47y](https://arbital.com/p/-47y) on the two elements, $C_2$, by insisting that $a$ be the [identity](https://arbital.com/p/54p) of the group and that $b \\cdot b = a$.\nHowever, this is a very ad-hoc, \"non-pure\" way of making a group out of $X$. %%note:For those who are looking out for that sort of thing, to do this in generality will require heavy use of the [axiom of choice](https://arbital.com/p/69b).%%\nIt adds the \"relation\" $b^2 = a$ which wasn't there before.\n\nInstead, we might make the free group $FX$ by taking the two elements $a, b$, and throwing in everything we are forced to make if no non-trivial combination is the identity.\nFor example:\n\n- we have no reason to suspect that either $a$ or $b$ is already the identity, so we must throw in an identity which we will label $\\varepsilon$; \n- we have no reason to suspect that $a \\cdot b$ should be equal to $a$ or to $b$ or to $\\varepsilon$, so we must throw in an element equal to $a \\cdot b$ which we will label $ab$;\n- we *do* have a reason to suspect that $a^{-1} \\cdot a$ is already in the group (because it \"ought to be\" the identity $\\varepsilon$), so we don't throw in the element $a^{-1} a$;\n- and so on through all the words, such as $a^{-1}ba^2b^{-2}$ and so forth.\n\nThis way adds an awfully large number of elements, but it doesn't require us to impose any arbitrary relations.\n%%%\n\n# Universal property of the free group\n\nThe free group [has a universal property](https://arbital.com/p/free_group_universal_property), letting us view groups from the viewpoint of [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7).\nTogether with the idea of the [quotient](https://arbital.com/p/4tq) (which can be formulated in category theory as the [https://arbital.com/p/-coequaliser](https://arbital.com/p/-coequaliser)) and the subsequent idea of the [https://arbital.com/p/-5j9](https://arbital.com/p/-5j9), this lets us construct any group in a category-theoretic way.\n\nIndeed, every group $G$ has a presentation $\\langle X \\mid R \\rangle$ say, which expresses $G$ as a quotient (i.e. a certain coequaliser) of the free group $F(X)$.\nWe can construct $F(X)$ through the universal property, so we no longer need to say anything at all about the elements or group operation of $G$ to define it.\n\n# Properties\n\n- It is not obvious, but $FX$ is isomorphic to $FY$ if and only if $X$ and $Y$ have the same [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5) as sets. 
([Proof.](https://arbital.com/p/free_group_isomorphic_iff_sets_biject)) This completely classifies the free groups and tells us just when two free groups are \"the same\", which is nice to have.\n- The only two [abelian groups](https://arbital.com/p/3h2) which are free are the trivial group (of one element; i.e. the free group on no generators), and $\\mathbb{Z}$ the free group on one generator. No others are abelian because if there are two or more generators, pick $a, b$ to be two of them; then $ab \\not = ba$ by free-ness. %%note: To be completely precise about this, using the [formal definition](https://arbital.com/p/5s1), $\\rho_a \\rho_b \\not = \\rho_b \\rho_a$ because they have different effects when applied to the empty word $\\varepsilon$. One produces $ab$; the other produces $ba$. %%\n- The free groups are all [torsion](https://arbital.com/p/torsion_group_theory)-free %%note:\"Torsion-free\" means \"free of torsion\", where the word \"free\" is nothing to do with \"free group\".%%: that is, no element has finite [order](https://arbital.com/p/4cq) unless it is the identity $\\varepsilon$. ([Proof.](https://arbital.com/p/5kz)) This property helps us to recognise when a group is *not* free: it is enough to identify an element of finite order. However, the test doesn't go the other way, because there are torsion-free groups which are not free.\n\n%%hidden(Proof that \"the rationals under addition\" is torsion-free but not free):\nSuppose the order of an element $x \\in \\mathbb{Q}$ were finite and equal to $n \\not = 0$, say.\nThen $x+x+\\dots+x$, $n$ times, would yield $0$ (the [identity](https://arbital.com/p/54p) of $(\\mathbb{Q}, +)$).\nBut that would mean $n \\times x = 0$, and so $n=0$ or $x = 0$; since $n \\not = 0$ already, we must have $x = 0$, so $x$ is the identity after all.\n\nThe group is not free: suppose it were free. 
It is abelian, so it must be isomorphic to either the trivial group (clearly not: $\\mathbb{Q}$ is infinite but the trivial group isn't) or $\\mathbb{Z}$.\nIt's not isomorphic to $\\mathbb{Z}$, though, because $\\mathbb{Z}$ is cyclic: there is a \"generating\" element $1$ such that every element of $\\mathbb{Z}$ can be made by adding together (possibly negatively-many) copies of $1$.\n$\\mathbb{Q}$ doesn't have this property, because if $x$ were our attempt at such a generating element, then $\\frac{x}{2}$ could not be made, so $x$ couldn't actually be generating after all.\n%%", "date_published": "2016-10-23T16:22:16Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Eric Rogstad", "Patrick Stevens"], "summaries": ["The free [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) $F(X)$ on the [set](https://arbital.com/p/3jz) $X$ is the group whose elements are the [freely reduced words](https://arbital.com/p/5jc) over $X$, and whose group operation is \"[https://arbital.com/p/-concatenation](https://arbital.com/p/-concatenation) followed by free reduction\"."], "tags": [], "alias": "5kg"} {"id": "e69f99a4485f43ea4cfff90ab63e060b", "title": "Consistency", "url": "https://arbital.com/p/consistency", "source": "arbital", "source_type": "text", "text": "A consistent [https://arbital.com/p/-theory](https://arbital.com/p/-theory) is one in which there are well-formed statements that you cannot prove from its axioms; or equivalently, that there is no $X$ such that $T\\vdash X$ and $T\\vdash \\neg X$.\n\nFrom the point of view of [https://arbital.com/p/-model_theory](https://arbital.com/p/-model_theory), a consistent theory is one whose axioms are [https://arbital.com/p/-satisfiable](https://arbital.com/p/-satisfiable). Thus, to prove that a set of axioms is consistent you can resort to constructing a model using a formal system whose consistency you trust (normally using [https://arbital.com/p/set_theory](https://arbital.com/p/set_theory)) in which all the axioms come true.\n\nArithmetic is [expressive enough](https://arbital.com/p/) to talk about consistency within itself. If $\\square_{PA}$ represents the [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt) in [https://arbital.com/p/3ft](https://arbital.com/p/3ft) then a sentence of the form $\\neg\\square_{PA}(\\ulcorner 0=1\\urcorner)$ represents the consistency of $PA$, since it comes to say that there exists a disprovable sentence for which there is no proof. 
[Gödel's second incompleteness theorem](https://arbital.com/p/) comes to say that such a sentence is not provable from the axioms of $PA$.", "date_published": "2016-07-24T06:48:18Z", "authors": ["Eric Rogstad", "Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "5km"} {"id": "ba91d70eae9b724e469fd5992de2179b", "title": "Free groups are torsion-free", "url": "https://arbital.com/p/free_group_is_torsion_free", "source": "arbital", "source_type": "text", "text": "Given the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) $FX$ on a set $X$, it is the case that $FX$ has no [torsion](https://arbital.com/p/torsion_group_theory) elements.\nThat is, every element has infinite order except for the [https://arbital.com/p/-54p](https://arbital.com/p/-54p).\n\n# Proof\n\nRecall that we can view every element of the free group as being a [https://arbital.com/p/-5jc](https://arbital.com/p/-5jc) whose letters are elements of $X$.\nAlso the group operation is \"concatenation of words, followed by free reduction\".\nIt is certainly intuitively plausible that if we repeatedly concatenate a word with more copies of itself, and then perform free reduction, we will never reach the empty word; this is what we shall prove.\nThe cancellations in the process of free reduction are everything here, because the only way the powers of a word can get shorter (and hence the only way the powers of a word can ever come to the empty word) is by those cancellations.\nSo we are going to have to analyse the behaviour of words at their start and ends, because when we take powers of a word, the start and end are the places where cancellation may happen.\n\nWe will say a word is *cyclically reduced* if it is not only freely reduced, but also it is \"freely reduced if we rotate the word round one place\".\nFormally, a freely reduced word $a_1 a_2 \\dots a_n$ is cyclically reduced if and only if $a_1 \\not = a_n^{-1}$.\nThis captures the idea that \"the word doesn't admit any cancellation when we take powers of it\".\n\n%%hidden(Examples):\n\n%%\n\nNow, every freely reduced word may $w$ be written as $r w^\\prime r^{-1}$ where $r$ is a freely reduced word and $w^\\prime$ is a cyclically reduced word, and there is no cancellation between $r$, $r^{-1}$ and $w^\\prime$.\nThis is easily proved by [https://arbital.com/p/-5fz](https://arbital.com/p/-5fz) on the length of $w$: it is immediate if $w$ is already cyclically reduced (letting $r = \\varepsilon$ the empty word, and $w^\\prime = w$), while if $w$ is not cyclically reduced then it is $a v a^{-1}$ for some letter $a \\in X$ and some freely reduced word $v$.\nBut then by the inductive hypothesis (since $v$ is shorter than $w$), $v$ may be written as $r v^\\prime r^{-1}$ where $v^\\prime$ is cyclically reduced; so $w = a r v^\\prime r^{-1} a^{-1} = (ar) v^\\prime (ar)^{-1}$ as required.\n\nMoreover, this decomposition is *unique*, since if $r w^\\prime r^{-1} = s v^\\prime s^{-1}$ then $s^{-1} r w^\\prime r^{-1} s = v^\\prime$; but $v^\\prime$ is cyclically reduced so there are only two ways to prevent cancellation:\n\n- $s$ is the identity, whereupon $v^\\prime = r w^\\prime r^{-1}$ is cyclically reduced, so (by freely-reducedness of $w = r w^\\prime r^{-1}$) we have $r = e$ as well, and $v^\\prime = w^\\prime = w$.\n- $s$ is not the identity but it entirely cancels with something in $r^{-1}$; hence $s$ is a sub-word of $r$. 
But by symmetry, $r$ is a sub-word of $s$, so they are the same (because without loss of generality $r$ is also not the identity); and therefore $v^\\prime = w^\\prime$.\n\nFinally, to complete the proof that the free group is torsion-free, simply take a putative word $w$ whose order $n$ is finite.\nExpress it uniquely as $r w^\\prime r^{-1}$ as above; then $(rw^\\prime r^{-1})^n = r (w^\\prime)^n r^{-1}$ which is expected to become the empty word after free reduction.\nBut we already know there is no cancellation between $r$ and $w^\\prime$ and $r^{-1}$, so there cannot be any cancellation between $r, (w^\\prime)^n, r^{-1}$ either, and by the cyclically-reduced nature of $w^\\prime$, our power $r (w^\\prime)^n r^{-1}$ is actually freely reduced already.\nHence our power of the word is emphatically not the empty word.", "date_published": "2016-07-25T17:38:13Z", "authors": ["Dylan Hendrickson", "Patrick Stevens"], "summaries": [], "tags": [], "alias": "5kz"} {"id": "ace1b36291599599caa2a4c4ef282a76", "title": "Provability logic", "url": "https://arbital.com/p/provability_logic", "source": "arbital", "source_type": "text", "text": "The Gödel -Löb system of provability logic, or $GL$ for short, is a [https://arbital.com/p/-5j8](https://arbital.com/p/-5j8) which captures important notions about [provability predicates](https://arbital.com/p/5j7). It can be regarded as a formalism which allows to decide whether certain sentences in which a provability predicate appears are in fact theorems of [https://arbital.com/p/3ft](https://arbital.com/p/3ft).\n\n$GL$ has two rules of inference:\n\n* **Modus ponens** allows to infer $B$ from $A\\to B$ and $A$.\n* **Necessitation** says that if $GL\\vdash A$ then $GL\\vdash \\square A$.\n\nThe axioms of $GL$ are as follows:\n\n* $GL\\vdash \\square (A\\to B)\\to(\\square A \\to \\square B)$ (**Distibutive axioms**)\n* $GL\\vdash \\square (\\square A \\to A)\\to \\square A$ (**Gödel-Löb axioms**)\n* Propositional tautologies are axioms of $GL$.\n\n**Exercise**: Show using the rules of $GL$ that $\\square \\bot \\leftrightarrow \\square \\diamond p$. %%note:$\\diamond p$ is short for $\\neg \\square \\neg p$.%%\n\n%%hidden(Show solution):\nThose problems are best solved by working backwards.\n\nWe want to show that $GL\\vdash \\square \\bot \\leftrightarrow \\square \\diamond p$.\n\nWhat can lead us to that? Well, we know that because of [normality](https://arbital.com/p/), this can be deduced from $GL\\vdash \\square (\\bot \\leftrightarrow \\diamond p)$.\n\nSimilarly, that can be derived from necessitation of $GL\\vdash \\bot \\leftrightarrow \\diamond p$.\n\nThe propositional calculus shows that $GL\\vdash \\bot \\to \\diamond p$ is a tautology, so we can prove by following the steps backward that $GL\\vdash \\square \\bot \\to \\square \\diamond p$.\n\nNow we have to prove that $GL\\vdash \\square \\diamond p\\to \\square \\bot$ and we are done.\n\nFor that we are going to go forward from the tautology $GL\\vdash \\bot \\to \\neg p$. \n\nBy normality, $GL\\vdash \\square \\bot \\to \\square \\neg p$.\n\nContraposing, $Gl\\vdash \\neg \\square \\neg p\\to\\neg\\square\\bot$. 
By necessitation and normality, $Gl\\vdash \\square \\neg \\square \\neg p\\to\\square \\neg\\square\\bot$.\n\nBut $\\square\\neg\\square\\bot$ is equivalent to $\\square[\\bot \\to \\bot](https://arbital.com/p/\\square)$, and it is an axiom that $GL\\vdash \\square[\\bot \\to \\bot](https://arbital.com/p/\\square) \\to\\square \\bot$, so by modus ponens, $Gl\\vdash \\square \\neg \\square \\neg p\\to\\square \\bot$, and since $\\diamond p = \\neg \\square \\neg p$ we are done.\n%%\n\nIt is fairly easy to see that GL is consistent. If we interpret $\\square$ as the verum operator which is always true, we realize that every theorem of $GL$ is assigned a value of true according to this interpretation and the usual rules of propositional calculus %%note:[proof](https://arbital.com/p/) %%. However, there are well formed modal sentences such as $\\neg \\square \\bot$ such that the assigned value is false, and thus they cannot be theorems of $GL$.\n\n##Semantics\nHowever simple the deduction procedures of $GL$ are, they are bothersome to use in order to find proofs. Thankfully, an alternative interpretation in terms of [Kripke models](https://arbital.com/p/5ll) has been developed that allows to decide far more conveniently whether a modal sentence is a theorem of $GL$.\n\n$GL$ is adequate for finite, transitive and irreflexive Kripke models. That is, a sentence $A$ is a theorem of $GL$ if and only if $A$ is valid in every finite, transitive and irreflexive model%%note: [proof](https://arbital.com/p/)%%. Check out the page on Kripke models if you do not know how to construct Kripke models or decide if a sentence is valid in it.\n\nAn important notion in this kind of models is that of *rank*. The rank $\\rho$ of a world $w$ from which no world is visible is $\\rho(w)=0$. The rank of any other world is defined as the maximum among the ranks of its successors, plus one. In other words, the rank of a world is the length of the greatest \"chain of worlds\" in the model such that $w$ can view the first slate of the chain. \n\nSince models are irreflexive and finite, the rank is a well-defined notion: no infinite chain of worlds is ever possible.\n\n**Exercise**: Show that the Gödel-Löb axioms are valid in every finite, transitive and irreflexive Kripke model.\n\n%%hidden(Show solution):\nSuppose there is a finite, transitive and irreflexive Kripke model in which an sentence of the form $\\square[A\\to A](https://arbital.com/p/\\square)\\to \\square A$ is not valid.\n\nLet $w$ be the lowest rank world in the model such that $w\\not\\models \\square[A\\to A](https://arbital.com/p/\\square)\\to \\square A$. Then we have that $w\\models \\square[A\\to A](https://arbital.com/p/\\square)$ but $w\\not \\models \\square A$.\n \nTherefore, there exists $x$ such that $w R x$, also $x\\models \\neg A$ and $x\\models \\square A\\to A$. But then $x\\models \\neg\\square A$.\n\nSince $x$ has lower rank than $w$, we also have that $x\\models \\square[A\\to A](https://arbital.com/p/\\square)\\to \\square A$. Combining those two last facts we get that $x\\not\\models \\square[A\\to A](https://arbital.com/p/\\square)$, so there is $y$ such that $xRy$ and $y\\not\\models \\square A\\to A$.\n\nBut by transitivity $wRy$, contradicting that $w\\models \\square[A\\to A](https://arbital.com/p/\\square)$. So the supposition was false, and the proof is done.\n%%\n\nThe Kripke model formulation specially simplifies reasoning in cases in which [no sentence letters appear](https://arbital.com/p/constant_sentences). 
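To make the rank idea concrete for the letterless case, here is a rough illustrative sketch in Python (the class and function names are invented). It relies on facts that follow from the rank discussion above: a rank-$n$ world sees only worlds of lower rank and, for letterless sentences, truth at a world depends only on that world's rank, so $\square A$ holds at rank $n$ exactly when $A$ holds at every rank below $n$; moreover the truth value, as a function of the rank, stabilizes after at most as many ranks as there are $\square$s, so a finite check decides theoremhood.

```python
from dataclasses import dataclass

# Letterless modal sentences built from falsum, implication and box.
# (The other connectives are definable from these as usual.)

@dataclass(frozen=True)
class Falsum: pass

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class Box:
    body: object

def Neg(a): return Implies(a, Falsum())

def boxes(a):
    """Number of boxes occurring in the sentence."""
    if isinstance(a, Falsum): return 0
    if isinstance(a, Implies): return boxes(a.left) + boxes(a.right)
    return 1 + boxes(a.body)

def true_at_rank(a, n):
    """Truth of a letterless sentence at (any) world of rank n:
    box B holds at rank n iff B holds at every rank below n."""
    if isinstance(a, Falsum): return False
    if isinstance(a, Implies): return (not true_at_rank(a.left, n)) or true_at_rank(a.right, n)
    return all(true_at_rank(a.body, k) for k in range(n))

def gl_proves(a):
    """A letterless sentence is a GL theorem iff it holds at every rank;
    the value stabilizes after at most boxes(a) ranks, so a finite check suffices."""
    return all(true_at_rank(a, n) for n in range(boxes(a) + 2))

print(gl_proves(Neg(Box(Falsum()))))                               # False: GL does not prove "consistency"
print(gl_proves(Implies(Box(Neg(Box(Falsum()))), Box(Falsum()))))  # True: an instance of the Gödel-Löb axiom
```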
A polynomial time algorithm can be written for deciding theoremhood this case.\n\nFurthermore, $GL$ is [https://arbital.com/p/-decidable](https://arbital.com/p/-decidable). However, it is [$PSPACE$-complete](https://arbital.com/p/) %%note:[proof](https://arbital.com/p/)%%.\n\n##Arithmetical adequacy\nYou can translate sentences of modal logic to sentences of arithmetic using *realizations*.\n\nA realization $*$ is a function such that \n$A\\to B*=A*\\to B*$, \n$(\\square A)* =\\square_{PA}(A*)$, \nand $p* = S_p$ for every sentence letter $p$, where each $S_p$ is an arbitrary well formed closed sentence of arithmetic.\n\n[Solovay's adequacy theorem for GL](https://arbital.com/p/) states that a modal sentence $A$ is a theorem of $GL$ iff $PA$ proves $A*$ for every realization $*$.\n\nThus we can use $GL$ to reason about arithmetic and viceversa.\n\nNotice that $GL$ is decidable while [arithmetic is not](https://arbital.com/p/). This is explained by the fact that $GL$ only deals with a specific subset of logical sentences, in which quantification plays a restricted role. In fact, quantified provability logic is undecidable.\n\nAnother remark is that while knowing that $GL\\not\\vdash A$ implies that there is a realization such that $PA\\not\\vdash A*$, we do not know whether a specific realization of A is a theorem or not. A related result, the [uniform completeness theorem for $GL$](https://arbital.com/p/), proves that there exists a specific realization $\\#$ such that $PA\\not \\vdash A^{\\#}$ if $GL\\not\\vdash A$, which works for every $A$.\n\n##Fixed points\nOne of the most important results in $GL$ is the existence of fixed points of sentences of the form $p\\leftrightarrow \\phi(p)$. Concretely, the [fixed point theorem](https://arbital.com/p/) says that for every sentence $\\phi(p)$ modalized in $p$ %%note:that is, every $p$ occurs within the scope of a $\\square$%% there exists $H$ in which $p$ does not appear such that $GL\\vdash \\square [\\phi](https://arbital.com/p/p\\leftrightarrow) \\leftrightarrow \\square (p\\leftrightarrow h)$. Furthermore, there are constructive proofs which allow to build such an $H$.\n\nIn arithmetic, there are plenty of interesting [self referential sentences](https://arbital.com/p/59c) such as the [Gödel sentence](https://arbital.com/p/godel_first_incompleteness_theorem) for which the fixed point theorem is applicable and gives us insights about their meaning.\n\nFor example, the modalization of the Gödel sentence is something of the form $p\\leftrightarrow \\neg\\square p$. The procedure for finding fixed points tells us that $GL\\vdash \\square (p\\leftrightarrow \\neg\\square p)\\to \\square(p\\leftrightarrow \\neg\\square\\bot$. 
Thus by arithmetical adequacy, and since [everything $PA$ proves is true](https://arbital.com/p/) we can conclude that the Gödel sentence is equivalent to the [consistency of arithmetic](https://arbital.com/p/5km).", "date_published": "2016-07-26T08:44:04Z", "authors": ["Dylan Hendrickson", "Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "5l3"} {"id": "ae679669d614350595c96b4581fbc674", "title": "Monotone function: examples", "url": "https://arbital.com/p/order_monotone_examples", "source": "arbital", "source_type": "text", "text": "Here are some examples of monotone functions.\n\nA cunning plan\n--------\n\nThere's a two-player game called 10 questions, in which player A begins the game by stating \"I'm thinking of an X\", where X can be replaced with any noun of player A's choosing.\n\nThen player B asks player A a series of questions, which player A must answer with either truthfully with \"yes\" or \"no\". After asking 10 questions, player B is forced to guess what the object player A was thinking of. Player B wins if the guess is correct, and loses otherwise. Player B may guess before 10 turns have expired, in which case the guess counts as one of\ntheir questions.\n\nHere is an example of a game of 10 questions:\n\nA: I'm thinking of a food.\n\nQuestion 1. \nB: Is it healthy? \nA: Yes.\n\nQuestion 2.\nB: Is it crunchy? \nA: No.\n\nQuestion 3.\nB: Must it be prepared before it is eaten?\nA: No.\n\nQuestion 4.\nB: Is yellow? \nA: Yes.\n\nQuestion 5.\nB: Is it a banana? \nA: Yes.\n\n**Player B wins**\n\nNow suppose that you are playing 10 questions as player B. Player A begins by stating\n\"I'm thinking of a letter of the alphabet.\" You immediately think of a strategy\nthat requires no more than 5 guesses: repeatedly ask \"does it come after $\\star$?\"\nwhere $\\star$ is near the middle of the contiguous sequence of letters that you have\nnot yet eliminated. Initially the contiguous sequence of letters between A and Z\nhave not been eliminated.\n\n \nQuestion 1.\n*You:* Does it come after M?\n*Player A:* Yes.\n\nAt this point, the contiguous sequence of 13 letters that have not been eliminated is N-Z,\ninclusive.\n\nQuestion 2.\n*You:* Does it come after S? *Player A:* No.\n\nAt this point, the contiguous sequence of 6 letters that have not been eliminated is N-S,\ninclusive.\n\nQuestion 3. *You:* Does it come after P? *Player A:* Yes.\n\nWe've now narrowed it down to Q,R,and S.\n\nQuestion 4. *You:* Does it come after R? *Player A:* No.\n\nQuestion 5. *You:* Does it come after Q? *Player A:* No.\n\n*You:* Is it Q? *Player A:* Yes.\n\n**You win**\n\nBut what does this have to do with monotone functions? The letters of the alphabet form a poset $Alph = \\langle \\{A,...,Z\\}, \\leq_{Alph} \\rangle$ (in fact, a [totally ordered set](https://arbital.com/p/540)) under the standard alphabetic order. Your strategy for playing 10 questions can be viewed as probing a monotone function from $Alph$ to the 2-element poset **2** of a [boolean values](https://arbital.com/p/57f), where $false <_{\\textbf{2}} true$. Specifically, the probed function $f : Alph \\to \\textbf{2}$ is defined such that $f(\\star) \\doteq Q >_{Alph} \\star$ \n\nThe monotonicity of $f$ is crucial to being able to eliminate possibilities at each step. If we probe $f$ at a letter $\\star_1$ and observe a false result, then any letter $\\star_2$ less than $\\star_1$ must map to false as well: $\\star_2 \\leq_{Alph} \\star_1$ implies $f(\\star_2) \\leq_{\\textbf{2}} f(\\star_1)$. 
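The elimination strategy above is just a binary search over the alphabet. Here is a minimal illustrative sketch of player B's strategy (the names `guess_letter` and `ask_comes_after` are invented; `ask_comes_after` stands in for player A answering "does it come after $\star$?"):

```python
import string

LETTERS = string.ascii_uppercase  # "A".."Z", in alphabetic order

def guess_letter(ask_comes_after):
    """Player B's strategy: repeatedly ask about a letter near the middle of the
    not-yet-eliminated range; each answer eliminates about half of the candidates."""
    lo, hi = 0, len(LETTERS) - 1   # indices of the remaining candidate letters
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if ask_comes_after(LETTERS[mid]):
            lo = mid + 1           # "yes": this letter and everything before it are eliminated
        else:
            hi = mid               # "no": everything after this letter is eliminated
    return LETTERS[lo], questions

secret = "Q"
letter, questions = guess_letter(lambda star: secret > star)
print(letter, questions)           # Q 5 -- five questions, leaving room for the final guess
```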
Yes, we can demonstrate the aformentioned monotone function with a diagram, but since its size is unwieldy, it has been placed in the following hidden text block.\n\n%%hidden(Show solution):\n![Diagram of f](http://i.imgur.com/ILepmis.png)\n%%\n\n\n\n\nDeduction systems\n---------------\n\n[https://arbital.com/p/deduction_systems](https://arbital.com/p/deduction_systems) allow one to infer new judgments from a set of known judgments. They are often specified as a set of rules, in which each rule is represented as a horizontal bar with one or more premises appearing above the bar and a conclusion that can be deduced from those premises appearing below the bar. \n\n![A deduction system](http://i.imgur.com/OFx6REF.png)\n\nThe above rules form a fragment of [propositional logic](https://arbital.com/p/57f). $\\phi$ and $\\psi$ are used to denote logical statements, called *propositions*. $\\wedge$ and $\\to$ are binary logical operators, each of which maps a pair of propositions to a single proposition. $\\wedge$ is the *and* operator: if $\\phi$ and $\\psi$ are logical statements, then $\\phi \\wedge \\psi$ is the simultaneous assertion of both $\\phi$ and $\\psi$. $\\to$ is the *implication* operator: $\\phi \\to \\psi$ asserts that if we know $\\phi$ to be true, then we can conclude $\\psi$ is true as well.\n\nThe leftmost rule has two premises $\\phi$ and $\\psi$, and concludes from these premises the proposition $\\phi \\wedge \\psi$. Invoking a rule to deduce its conclusion from its premises is called *applying* the rule. A tree of rule applications is called a [proof](https://arbital.com/p/proof).\n\nDeduction systems are often viewed as proof languages. However, it can also be fruitful to view a deduction system as a function which, given a set of input propositions, produces the set of all propositions that can be concluded from exactly one rule application, using the input propositions as premises. More concretely, letting $X$ be the set of propositions, the above set of deduction rules corresponds to the following function $F : \\mathcal P(X) \\to \\mathcal P(X)$.\n\n$F(A) = \\\\ ~~ \\{ \\phi \\wedge \\psi\\mid\\phi \\in A, \\psi \\in A \\} \\cup \\\\ ~~ \\{ \\phi \\mid \\phi \\wedge \\psi \\in A \\} \\cup \\\\ ~~ \\{ \\psi \\mid \\phi \\wedge \\psi \\in A \\} \\cup \\\\ ~~ \\{ \\phi \\mid \\psi \\in A, \\psi \\to \\phi \\in A \\}$\n\n$F$ is a montone function from the poset $\\langle \\mathcal P(X), \\subseteq \\rangle$ to itself. The reason that $F$ is monotone is that larger input sets give us more ways to apply the rules of our system, yielding larger outputs.\n\nIn addition to standard logical notions, deduction systems can be used to describe [type systems](https://arbital.com/p/type_system), [program logics](https://arbital.com/p/program_logic), and [operational semantics](https://arbital.com/p/operational_semantics).\n\nAdditional material\n-------------\n\nIf you still haven't had enough monotonicity, might I recommend trying some of the [exercises](https://arbital.com/p/6pv)?\n\n%comment:\nIncreasing functions\n----------------\n\nA monotone function between two [totally ordered sets](https://arbital.com/p/540) is called increasing. For example, the graph of a monotone function from the poset $\\langle \\mathbb R, \\le \\rangle$ to itself has an upward slope. Such functions can be searched efficiently using the [binary search algorithm](https://arbital.com/p/binary_search). 
\n%", "date_published": "2016-12-03T22:54:32Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": [], "alias": "5lf"} {"id": "1ad2b815d2bf631928e882ea7969c571", "title": "Kripke model", "url": "https://arbital.com/p/kripke_model", "source": "arbital", "source_type": "text", "text": "**Kripke models** are [interpretations](https://arbital.com/p/) of [https://arbital.com/p/-534](https://arbital.com/p/-534) which turn out to be adequate under some basic constraints for a wide array of systems of modal logic.\n\n#Definition \nA Kripke model $M$ is formed by three elements:\n\n* A set of **worlds** $W$\n* A **visibility relationship** between worlds $R$. If $wRx$, we will say that $w$ sees $x$ or that $x$ is a successor of $w$.\n* A **valuation function** $V$ that associates pairs of worlds and sentence letters to truth values.\n\nYou can visualize Kripke worlds as a [https://arbital.com/p/-digraph](https://arbital.com/p/-digraph) in which each node is a world, and the edges represent visibility.\n\n\n\nThe valuation function can be extended to assign truth values to whole modal sentences. We will say that a world $w\\in W$ makes a modal sentence $A$ true (*notation*: $M,w\\models A$) if it is assigned a truth value of true according to the following rules:\n\n* If $A$ is a sentence letter $p$, then $M,w\\models p$ if $V(w,p)=\\top$.\n* If $A$ is a truth functional compound of sentences, then its value comes from the propositional calculus and the values of its components. For example, if $A=B\\to C$, then $M,w\\models A$ [https://arbital.com/p/-46m](https://arbital.com/p/-46m) $M,w\\not\\models B$ or $M,w\\models C$.\n* If $A$ is of the form $\\square B$, then $M,w\\models \\square B$ if $M,x\\models B$ for every successor $x$ of $w$.\n\nGiven this way of assigning truth values to sentences, we will say the following:\n\n* $A$ is valid in a model $M$ (*notation*: $M\\models A$) if $M,w\\models A$ for every $w$ in $W$. \n* $A$ is valid in a class of models if it is valid in every model of the class.\n* $A$ is satisfiable in a model $M$ if there exists $w\\in W$ such that $M,w\\models A$.\n* $A$ is satisfiable in a class of models if it is satisfiable in some model of the class.\n\nNotice that a sentence $A$ is valid in a class of models iff its negation is not satisfiable.\n\n#Kripke models as semantics of modal logic\nKripke models are tightly connected to the interpretations of certain modal logics.\n\nFor example, they model well the rules of [normal provability systems](https://arbital.com/p/5j8).\n\n**Exercise**: show that Kripke models satisfy the necessitation rule of modal logic. That is, if $A$ is valid in $M$, then $\\square A$ is valid in $M$.\n\n%%hidden(Show solution):\nLet $w\\inW$ be a world of $M$. Let $x$ be an arbitrary successor of $w$. Then, as $M\\models A$, then $M,x\\models A$. Since $x$ was arbitrary, every successor of $w$ models $A$, so $w\\models \\square A$. As $w$ was arbitrary, then every world in $M$ models $\\square A$, and thus $M\\models \\square A$.\n%%\n\n**Exercise**: Show that the distributive axioms are always valid in Kripke models. Distributive axioms are modal sentences of the form $\\square[B](https://arbital.com/p/A\\to)\\to(\\square A \\to \\square B)$.\n\n%%hidden(Show solution):\nSuppose that $w\\in W$ is such that $w\\models \\square[B](https://arbital.com/p/A\\to)$ and $w\\models \\square A$.\n\nLet $x$ be a successor of $w$. Then $x\\models A\\to B$ and $x\\models A$. 
We conclude by the propositional calculus that $x\\models B$, so as $x$ was an arbitrary successor we have that $w\\models \\square B$, and thus $w\\models\\square[B](https://arbital.com/p/A\\to)\\to(\\square A \\to \\square B)$.\n\nWe conclude that $M\\models\\square[B](https://arbital.com/p/A\\to)\\to(\\square A \\to \\square B)$, no matter what $M$ is.\n%%\n\nThose two last exercises prove that the [K system](https://arbital.com/p/) of modal logic is [sound](https://arbital.com/p/soundness) for the class of all Kripke models. This is because we have proved that its axioms and rules of inference hold in all Kripke models (propositional tautologies and modus ponens trivially hold).\n\n##Some adequacy theorems\nNow we present a relation of modal logics and the class of Kripke models in which they are [adequate](https://arbital.com/p/adequacy). You can check the proofs of adequacy in their respective pages.\n\n* The class of *all* Kripke models is adequate to the [K system](https://arbital.com/p/)\n* The class of [transitive](https://arbital.com/p/573) Kripke models is adequate to [K4](https://arbital.com/p/)\n* The class of [reflexive](https://arbital.com/p/) Kripke models is adequate to [T](https://arbital.com/p/)\n* The class of [reflexive](https://arbital.com/p/) and [transitive](https://arbital.com/p/573) Kripke models is adequate to [S4](https://arbital.com/p/)\n* The class of [reflexive](https://arbital.com/p/) and [symmetric](https://arbital.com/p/) Kripke models is adequate to [B](https://arbital.com/p/)\n* The class of [reflexive](https://arbital.com/p/) and [euclidean](https://arbital.com/p/) Kripke models is adequate to [S5](https://arbital.com/p/)\n* The class of [finite](https://arbital.com/p/), [irreflexive](https://arbital.com/p/) and [transitive](https://arbital.com/p/573) Kripke models is adequate to [GL](https://arbital.com/p/5l3)\n\nNotice that as the constraint on the models becomes stronger, the set of valid sentences increases, and thus the correspondent adequate system is stronger.\n\nA note about notation may be in order. We say that a Kripke model is, e.g, [transitive](https://arbital.com/p/573) if its visibility relation is transitive %%note:Same for reflexive, symmetric and euclidean %%. Similarly, a Kripke model is finite if its set of worlds is finite.", "date_published": "2016-07-26T14:42:16Z", "authors": ["Dylan Hendrickson", "Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "5ll"} {"id": "3ef12cf9ca8e1878b6293bf53f409682", "title": "Antisymmetric relation", "url": "https://arbital.com/p/antisymmetric_relation", "source": "arbital", "source_type": "text", "text": "An antisymmetric relation is a relation where no two distinct elements are related in both directions. In other words. $R$ is antisymmetric iff\n\n$(aRb ∧ bRa) → a = b$\n\nor, equivalently, $a ≠ b → (¬aRb ∨ ¬bRa)$\n\nAntisymmetry isn't quite the [compliment](https://arbital.com/p/set_theory_compliment) of [Symmetry](https://arbital.com/p/symmetric_relation). 
Due to the fact that $aRa$ is allowed in an antisymmetric relation, the equivalence relation, $\\{(0,0), (1,1), (2,2)...\\}$ is both symmetric and antisymmetric.\n\nExamples of antisymmetric relations also include the successor relation, $\\{(0,1), (1,2), (2,3), (3,4)...\\}$, or this relation linking numbers to their prime factors $\\{...(9,3),(10,5),(10,2),(14,7),(14,2)...)\\}$", "date_published": "2016-08-05T22:42:35Z", "authors": ["Eric Rogstad", "Kevin Clancy", "M Yass"], "summaries": [], "tags": [], "alias": "5lt"} {"id": "34ade1910d4200c4e9b2dbe1bec4ccdc", "title": "Fixed point theorem of provability logic", "url": "https://arbital.com/p/fixed_point_theorem_provability_logic", "source": "arbital", "source_type": "text", "text": "The fixed point theorem of provability logic is a key result that gives a explicit procedure to find equivalences for sentences such as the ones produced by the [https://arbital.com/p/-59c](https://arbital.com/p/-59c).\n\nIn its most simple formulation, it states that:\n\n>Let $\\phi(p)$ be a modal sentence [modalized](https://arbital.com/p/5m4) in $p$. Then there exists a letterless $H$ such that $GL\\vdash \\boxdot[\\phi](https://arbital.com/p/p\\leftrightarrow) \\leftrightarrow \\boxdot[H](https://arbital.com/p/p\\leftrightarrow)$ %%note: Notation: $\\boxdot A = A\\wedge \\square A$%%.\n\nThis result can be generalized for cases in which letter sentences other than $p$ appear in the original formula, and the case where multiple formulas are present.\n\n#Fixed points\nThe $H$ that appears in the statement of the theorem is called a **fixed point** of $\\phi(p)$ on $p$.\n\nIn general, a fixed point of a formula $\\psi(p, q_1...,q_n)$ on $p$ will be a modal formula $H(q_1,...,q_n)$ in which $p$ does not appear such that $GL\\vdash \\boxdot[https://arbital.com/p/p\\leftrightarrow\\psi](https://arbital.com/p/p\\leftrightarrow\\psi) \\leftrightarrow \\boxdot[H](https://arbital.com/p/p_i\\leftrightarrow)$.\n\nThe fixed point theorem gives a sufficient condition for the existence of fixed points, namely that $\\psi$ is modalized in $\\psi$. It is an open problem to determine a necessary condition for the existence of fixed points.\n\nFixed points satisfy some important properties:\n\nIf $H$ is a fixed point of $\\phi$ on $p$, then $GL\\vdash H(q_1,...,q_n)\\leftrightarrow \\phi(H(q_1,...,q_n),q_1,...,q_n)$. This coincides with our intuition of what a fixed point is, since this can be seen as an argument that when fed to $\\phi$ it returns something equivalent to itself.\n\n%%hidden(Proof):\nSince $H$ is a fixed point, $GL\\vdash \\boxdot[https://arbital.com/p/p\\leftrightarrow\\psi](https://arbital.com/p/p\\leftrightarrow\\psi) \\leftrightarrow \\boxdot[H](https://arbital.com/p/p_i\\leftrightarrow)$. Since $GL$ is [normal](https://arbital.com/p/), it is closed under substitution. By substituing $p$ for $H$, we find that $GL\\vdash \\boxdot[https://arbital.com/p/H](https://arbital.com/p/H) \\leftrightarrow \\boxdot[https://arbital.com/p/H](https://arbital.com/p/H)$.\n\nBut trivially $GL\\vdash \\boxdot[H(q_1,...,q_n)\\leftrightarrow H(q_1,...,q_n)$, so $GL\\vdash \\boxdot[https://arbital.com/p/H](https://arbital.com/p/H)$.\n%%\n\n$H$ and $I$ are fixed points of $\\phi$ if and only if $GL\\vdash H\\leftrightarrow I$. 
This is knows as the \n**uniqueness of fixed points**.\n\n%%hidden(Proof):\nLet $H$ be a fixed point on $p$ of $\\phi(p)$; that is, $GL\\vdash \\boxdot(p\\leftrightarrow \\phi(p))\\leftrightarrow (p\\leftrightarrow H)$.\n\nSuppose $I$ is such that $GL\\vdash H\\leftrightarrow I$. Then by the first substitution theorem, $GL\\vdash F(I)\\leftrightarrow F(H)$ for every formula $F(q)$. If $F(q)=\\boxdot(p\\leftrightarrow q)$, then $GL\\vdash \\boxdot(p\\leftrightarrow H)\\leftrightarrow \\boxdot(p\\leftrightarrow I)$, from which it follows that $GL\\vdash \\boxdot(p\\leftrightarrow \\phi(p))\\leftrightarrow (p\\leftrightarrow I)$.\n\nConversely, if $H$ and $I$ are fixed points, then $GL\\vdash \\boxdot (p\\leftrightarrow H)\\leftrightarrow \\boxdot (p\\leftrightarrow I)$, so since $GL$ is closed under substitution, $GL\\vdash\\boxdot (H\\leftrightarrow H)\\leftrightarrow \\boxdot (H\\leftrightarrow I)$. Since $GL\\vdash \\boxdot (H\\leftrightarrow H)$, it follows that $GL\\vdash (H\\leftrightarrow I)$.\n%%\n\n#Special case fixed point theorem\n\nThe special case of the fixed point theorem is what we stated at the beginning of the page. Namely:\n\n>Let $\\phi(p)$ be a modal sentence [modalized](https://arbital.com/p/5m4) in p. \n>Then there exists a letterless $H$ such that $GL\\vdash \\boxdot[\\phi](https://arbital.com/p/p\\leftrightarrow) \\leftrightarrow \\boxdot[H](https://arbital.com/p/p\\leftrightarrow)$.\n\nThere is a nice semantical procedure based on [Kripke models](https://arbital.com/p/5ll) that allows to compute $H$ as a truth functional compound of sentences $\\square^n \\bot$ %%note:$\\square^n A = \\underbrace{\\square,\\square,\\ldots,\\square}_{n\\text{-times}} A$ %%. (ie, $H$ is in [normal form](https://arbital.com/p/)).\n\n\n##$A$-traces\nLet $A$ be a modal sentence modalized in $p$ in which no other sentence letter appears (we call such a sentence a $p$-sentence). We want to calculate $A$'s fixed point on $p$. This procedure bears a resemblance to the [trace](https://arbital.com/p/) method for evaluating letterless modal sentences.\n\nWe are going to introduce the notion of the $A$-trace of a $p$-sentence $B$, notated by $[https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B)](https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B))_A$. 
The $A$-trace maps modal sentences to sets of [natural numbers](https://arbital.com/p/45h), and is defined recursively as follows:\n\n* $[https://arbital.com/p/[https://arbital.com/p/\\bot](https://arbital.com/p/\\bot)](https://arbital.com/p/[https://arbital.com/p/\\bot](https://arbital.com/p/\\bot))_A = \\emptyset$\n* $[https://arbital.com/p/[C](https://arbital.com/p/B\\to)](https://arbital.com/p/[C](https://arbital.com/p/B\\to))_A = (\\mathbb{N} \\setminus [https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B)](https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B))_A)\\cup [https://arbital.com/p/[https://arbital.com/p/C](https://arbital.com/p/C)](https://arbital.com/p/[https://arbital.com/p/C](https://arbital.com/p/C))_A$\n* $[https://arbital.com/p/[D](https://arbital.com/p/\\square)](https://arbital.com/p/[D](https://arbital.com/p/\\square))_A=\\{m:\\forall i < m i\\in [https://arbital.com/p/[https://arbital.com/p/D](https://arbital.com/p/D)](https://arbital.com/p/[https://arbital.com/p/D](https://arbital.com/p/D))_A\\}$\n* $[https://arbital.com/p/[https://arbital.com/p/p](https://arbital.com/p/p)](https://arbital.com/p/[https://arbital.com/p/p](https://arbital.com/p/p))_A=[https://arbital.com/p/[https://arbital.com/p/A](https://arbital.com/p/A)](https://arbital.com/p/[https://arbital.com/p/A](https://arbital.com/p/A))_A$\n\n*Lemma*: Let $M$ be a finite, transitive and irreflexive Kripke model in which $(p\\leftrightarrow A) is valid, and $B$ a $p$-sentence. Then $M,w\\models B$ iff $\\rho(w)\\in [https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B)](https://arbital.com/p/[https://arbital.com/p/B](https://arbital.com/p/B))_A$.\n\n%%hidden(Proof):\nComing soon \n%%\n\n*Lemma*: The $A$-trace of a $p$-sentence $B$ is either finite or cofinite, and furthermore either it has less than $n$ elements or lacks less than $n$ elements, where $n$ is the number of $\\square$s in $A$.\n\n%%hidden(Proof):\nComing soon \n%%\n\nThose two lemmas allow us to express the truth value of $A$ in terms of world ranks for models in which $p\\leftrightarrow A$ is valid. Then the fixed point $H$ will be either the union or the negation of the union of a finite number of sentences $\\square^{n+1}\\bot\\wedge \\square^n \\bot$ %%note:Such a sentence is only true in worlds of rank $n$%%\n\nIn the following section we work through an example, and demonstrate how can we easily compute those fixed points using a [Kripke chain](https://arbital.com/p/).\n\n##Applications\n\nFor an example, we will compute the fixed point for the modal [Gödel sentence](https://arbital.com/p/) $p\\leftrightarrow \\neg\\square p$ and analyze its significance.\n\nWe start by examining the truth value of $\\neg\\square p$ in the $0$th rank worlds. Since the only letter is $p$ and it is modalized, this can be done without problem (remember that $\\square B$ is always true in the rank $0$ worlds, no mater what $B$ is). Now we apply to $p$ the constraint of having the same truth value as $\\neg\\square p$.\n\nWe iterate the procedure for the next world ranks.\n\n$$\n\\begin{array}{cccc}\n \\text{world= } & p & \\square (p) & \\neg \\square (p) \\\\\n 0 & \\bot & \\top & \\bot \\\\\n 1 & \\top & \\bot & \\top \\\\\n 2 & \\top & \\bot & \\top \\\\\n\\end{array}\n$$\n\nSince there is only one $\\square$ in the formula, the chain is guaranteed to stabilize in the first world and there is no need for going further. 
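If it helps to see the table-filling procedure mechanically, here is a minimal sketch in Python (the function name `p_at` is invented for the illustration), computed on the chain model in which the rank-$n$ world sees exactly the worlds of lower rank, so that $\neg\square p$ at rank $n$ only depends on $p$ at strictly lower ranks:

```python
from functools import lru_cache

# The chain model: the rank-n world sees exactly the worlds of ranks 0..n-1,
# so "box A at rank n" means "A holds at every rank below n".

@lru_cache(maxsize=None)
def p_at(n):
    """Truth value of the letter p at rank n under the constraint p <-> not box p.
    Since box p at rank n only looks at p's values at strictly lower ranks,
    the recursion is well founded."""
    box_p = all(p_at(k) for k in range(n))
    return not box_p                      # p is constrained to equal ¬□p

for n in range(3):
    box_p = all(p_at(k) for k in range(n))
    print(n, p_at(n), box_p, not box_p)   # columns: rank, p, □p, ¬□p
# 0 False True False
# 1 True False True
# 2 True False True
```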
We have shown the truth values in world $2$ to show that this is indeed the case.\n\nFrom the table we have just constructed it becomes evident that $[https://arbital.com/p/[https://arbital.com/p/p](https://arbital.com/p/p)](https://arbital.com/p/[https://arbital.com/p/p](https://arbital.com/p/p))_{\\neg\\square p} = \\mathbb{N}\\setminus \\{0\\}$. Thus $H = \\square^{0+1}\\bot \\wedge \\square^0\\bot = \\neg\\square\\bot$.\n\nTherefore, $GL\\vdash \\square [\\neg\\square p](https://arbital.com/p/p\\leftrightarrow)\\leftrightarrow \\square[\\neg\\square \\bot](https://arbital.com/p/p\\leftrightarrow)$. Thus, the fixed point for the modal Gödel sentence is the [https://arbital.com/p/-5km](https://arbital.com/p/-5km) of arithmetic!\n\nBy employing the [arithmetical soundness of GL](https://arbital.com/p/), we can translate this result to $PA$ and show that $PA\\vdash \\square_{PA} [\\neg\\square_{PA} G](https://arbital.com/p/G\\leftrightarrow)\\leftrightarrow \\square_{PA}[\\neg\\square_{PA} \\bot](https://arbital.com/p/G\\leftrightarrow)$ for every sentence $G$ of arithmetic.\n\nSince in $PA$ we can construct $G$ by the [https://arbital.com/p/-59c](https://arbital.com/p/-59c) such that $PA\\vdash G\\leftrightarrow \\neg\\square_{PA} G$, by necessitation we have that for such a $G$ then $PA\\vdash \\square_PA[G\\leftrightarrow \\neg\\square_{PA} G](https://arbital.com/p/)$. By the theorem we just proved using the fixed point, then $PA\\vdash \\square_{PA}[\\neg\\square_{PA} \\bot](https://arbital.com/p/G\\leftrightarrow)$. SInce [everything $PA$ proves is true](https://arbital.com/p/) then $PA\\vdash G\\leftrightarrow \\neg\\square_{PA} \\bot$.\n\nSurprisingly enough, the Gödel sentence is equivalent to the consistency of arithmetic! This makes more evident that $G$ is not provable [unless $PA$ is inconsistent](https://arbital.com/p/godel_second_incompleteness_theorm), and that it is not disprovable unless it is [$\\omega$-inconsistent](https://arbital.com/p/omega_inconsistency).\n\n**Exercise:** Find the fixed point for the [Henkin sentence](https://arbital.com/p/) $H\\leftrightarrow\\square H$.\n\n%%hidden(Show solution):\n$$\n\\begin{array}{ccc}\n \\text{world= } & p & \\square (p) \\\\\n 0 & \\top & \\top \\\\\n 1 & \\top & \\top \\\\\n\\end{array}\n$$\nThus the fixed point is simply $\\top$.\n%%\n\n\n#General case\nThe first generalization we make to the theorem is allowing the appearance of sentence letters other than the one we are fixing. The concrete statement is as follows:\n\n>Let $\\phi(p, q_1,...,q_n)$ be a modal sentence [modalized](https://arbital.com/p/5m4) in p. Then $\\phi$ has a fixed point on $p$.%%.\n\nThere are several constructive procedures for finding the fixed point in the general case.\n\nOne particularly simple procedure is based on **k-decomposition**.\n\n##K-decomposition\nLet $\\phi$ be as in the hypothesis of the fixed point theorem. Then we can express $\\phi$ as $B(\\square D_1(p), ..., \\square D_{k}(p))$, since every $p$ occurs within the scope of a $\\square$ (The $q_i$s are omitted for simplicity, but they may appear scattered between $B$ and the $D_i$s). This is called a $k$-decomposition of $\\phi$.\n\nIf $\\phi$ is $0$-decomposable, then it is already a fixed point, since $p$ does not appear. 
\n\nOtherwise, consider $B_i = B(\\square D_1(p), ..., \\square D_{i-1}(p),\\top, \\square D_{i+1}(p),...,\\square D_k(p))$, which is $k-1$-decomposable.\n\nAssuming that the procedure works for $k-1$ decomposable formulas, we can use it to compute a fixed point $H_i$ for each $B_i$. Now, $H=B(\\square D_1(H_1),...,\\square D_k(H_k))$ is the desired fixed point for $\\phi$.\n\n%%hidden(Proof):\n\nThere is a nice semantic proof in *Computability and Logic*, by Boolos *et al*.\n%%\n\nThis procedure constructs fixed points with structural similarity to the original sentence.\n\n###Example\nLet's compute the fixed point of $p\\leftrightarrow \\neg\\square(q\\to p)$.\n\nWe can 1-decompose the formula in $B(d)=\\neg d$, $D_1(p)=q\\to p$. \n\nThen $B_1(p)=\\neg \\top = \\bot$, which is its own fixed point. Thus the desired fixed point is $H=B(\\square D_1(\\bot))=\\neg\\square \\neg q$.\n\n**Exercise:** Compute the fixed point of $p\\leftrightarrow \\square [https://arbital.com/p/\\square](https://arbital.com/p/\\square)$.\n\n%%hidden(Show solution):\nOne possible decomposition of the the formula at hand is $B(a)=a$, $D_1(p)=\\square(p\\wedge q)\\wedge \\square(p\\wedge r)$.\n\nNow we compute the fixed point of $B(\\top)$, which is simply $\\top$.\n\nTherefore the fixed point of the whole expression is $B(\\square D_1(p=\\top))=\\square[https://arbital.com/p/\\square](https://arbital.com/p/\\square)=\\square[https://arbital.com/p/\\square](https://arbital.com/p/\\square)$\n%%\n\n#Generalized fixed point theorem\n\n>Suppose that $A_i(p_1,...,p_n)$ are $n$ modal sentences such that $A_i$ is modalized in $p_n$ (possibly containing sentence letters other than $p_js$).\n\n>Then there exists $H_1, ...,H_n$ in which no $p_j$ appears such that $GL\\vdash \\wedge_{i\\le n} \\{\\boxdot (p_i\\leftrightarrow A_i(p_1,...,p_n)\\}\\leftrightarrow \\wedge_{i\\le n} \\{\\boxdot(p_i\\leftrightarrow H_i)\\}$.\n\n%%hidden(Proof):\nWe will prove it by induction. \n\nFor the base step, we know by the fixed point theorem that there is $H$ such that $GL\\vdash \\boxdot(p_1\\leftrightarrow A_i(p_1,...,p_n)) \\leftrightarrow \\boxdot(p_1\\leftrightarrow H(p_2,...,p_n))$\n\nNow suppose that for $j$ we have $H_1,...,H_j$ such that $GL\\vdash \\wedge_{i\\le j} \\{\\boxdot(p_i\\leftrightarrow A_i(p_1,...,p_n)\\}\\leftrightarrow \\wedge_{i\\le j} \\{\\boxdot(p_i\\leftrightarrow H_i(p_{j+1},...,p_n))\\}$.\n\nBy the [second substitution theorem](https://arbital.com/p/), $GL\\vdash \\boxdot(A\\leftrightarrow B)\\rightarrow [https://arbital.com/p/F](https://arbital.com/p/F)$. 
\n\n#Generalized fixed point theorem\n\n>Suppose that $A_i(p_1,...,p_n)$ are $n$ modal sentences such that each $A_i$ is modalized in every $p_j$ (possibly containing sentence letters other than the $p_j$s).\n\n>Then there exist $H_1, ...,H_n$ in which no $p_j$ appears such that $GL\\vdash \\wedge_{i\\le n} \\{\\boxdot (p_i\\leftrightarrow A_i(p_1,...,p_n))\\}\\leftrightarrow \\wedge_{i\\le n} \\{\\boxdot(p_i\\leftrightarrow H_i)\\}$.\n\n%%hidden(Proof):\nWe will prove it by induction. \n\nFor the base step, we know by the fixed point theorem that there is $H_1$ such that $GL\\vdash \\boxdot(p_1\\leftrightarrow A_1(p_1,...,p_n)) \\leftrightarrow \\boxdot(p_1\\leftrightarrow H_1(p_2,...,p_n))$.\n\nNow suppose that for $j$ we have $H_1,...,H_j$ such that $GL\\vdash \\wedge_{i\\le j} \\{\\boxdot(p_i\\leftrightarrow A_i(p_1,...,p_n))\\}\\leftrightarrow \\wedge_{i\\le j} \\{\\boxdot(p_i\\leftrightarrow H_i(p_{j+1},...,p_n))\\}$.\n\nBy the [second substitution theorem](https://arbital.com/p/), $GL\\vdash \\boxdot(A\\leftrightarrow B)\\rightarrow (F(A)\\leftrightarrow F(B))$. Therefore we have that $GL\\vdash \\boxdot(p_i\\leftrightarrow H_i(p_{j+1},...,p_n))\\rightarrow [\\boxdot(p_{j+1}\\leftrightarrow A_{j+1}(p_1,...,p_n))\\leftrightarrow \\boxdot(p_{j+1}\\leftrightarrow A_{j+1}(p_1,...,H_i,...,p_n))]$; that is, we may replace $p_i$ by $H_i(p_{j+1},...,p_n)$ inside $A_{j+1}$.\n\nIf we iterate the replacements, we finally end up with $GL\\vdash \\wedge_{i\\le j} \\{\\boxdot(p_i\\leftrightarrow A_i(p_1,...,p_n))\\}\\rightarrow \\boxdot(p_{j+1}\\leftrightarrow A_{j+1}(H_1,...,H_j,p_{j+1},...,p_n))$.\n\nAgain by the fixed point theorem, there is $H_{j+1}'$ such that $GL\\vdash \\boxdot(p_{j+1}\\leftrightarrow A_{j+1}(H_1,...,H_j,p_{j+1},...,p_n)) \\leftrightarrow \\boxdot(p_{j+1}\\leftrightarrow H_{j+1}')$.\n\nBut as before, by the second substitution theorem, $GL\\vdash \\boxdot(p_{j+1}\\leftrightarrow H_{j+1}')\\rightarrow [\\boxdot(p_i\\leftrightarrow H_i(p_{j+1},...,p_n)) \\leftrightarrow \\boxdot(p_i\\leftrightarrow H_i(H_{j+1}',...,p_n))]$.\n\nLet $H_{i}'$ stand for $H_i(H_{j+1}',...,p_n)$, and by combining the previous lines we find that $GL\\vdash \\wedge_{i\\le j+1} \\{\\boxdot(p_i\\leftrightarrow A_i(p_1,...,p_n))\\}\\rightarrow \\wedge_{i\\le j+1} \\{\\boxdot(p_i\\leftrightarrow H_i'(p_{j+2},...,p_n))\\}$.\n\nBy [Goldfarb's lemma](https://arbital.com/p/), we do not need to check the other direction, so $GL\\vdash \\wedge_{i\\le j+1} \\{\\boxdot(p_i\\leftrightarrow A_i(p_1,...,p_n))\\}\\leftrightarrow \\wedge_{i\\le j+1} \\{\\boxdot(p_i\\leftrightarrow H_i'(p_{j+2},...,p_n))\\}$ and the proof is finished. $\\square$\n\n----\n\nOne remark: the proof is wholly constructive. You can iterate the construction of fixed points, following the procedure implied by the construction of the $H_i'$, to compute them explicitly.\n\n%%\n\nAn immediate consequence of the theorem is that for those fixed points $H_i$ and every $A_i$, $GL\\vdash H_i\\leftrightarrow A_i(H_1,...,H_n)$.\n\nIndeed, since $GL$ is closed under substitution, we can substitute $H_i$ for $p_i$ in the theorem to get that $GL\\vdash \\wedge_{i\\le n} \\{\\boxdot (H_i\\leftrightarrow A_i(H_1,...,H_n))\\}\\leftrightarrow \\wedge_{i\\le n} \\{\\boxdot(H_i\\leftrightarrow H_i)\\}$.\n\nSince the right-hand side is trivially a theorem of $GL$, we get the desired result.", "date_published": "2017-03-02T15:00:20Z", "authors": ["mrkun", "Jaime Sevilla Molina"], "summaries": [">Let $\\phi(p, q_1,...,q_n)$ be a modal sentence [modalized](https://arbital.com/p/5m4) in $p$. \n>Then there exists a sentence $H(q_1,..,q_n)$ such that $GL\\vdash \\boxdot[p\\leftrightarrow \\phi] \\leftrightarrow \\boxdot[p\\leftrightarrow H]$.\n\nThis result can be used to give us insight about [self-referential](https://arbital.com/p/59c) sentences of arithmetic."], "tags": ["Work in progress"], "alias": "5lx"} {"id": "1e9915fe9e566084e8d8b3d1801393c3", "title": "Irreducible element (ring theory)", "url": "https://arbital.com/p/irreducible_element_ring_theory", "source": "arbital", "source_type": "text", "text": "summary(Technical): Let $(R, +, \\times)$ be a [ring](https://arbital.com/p/3gq) which is an [integral domain](https://arbital.com/p/5md).
We say $x \\in R$ is *irreducible* if, whenever we write $x = a \\times b$, it is the case that (at least) one of $a$ or $b$ is a [unit](https://arbital.com/p/5mg) (that is, has a multiplicative inverse).\n\n[Ring theory](https://arbital.com/p/3gq) can be viewed as the art of taking the integers [$\\mathbb{Z}$](https://arbital.com/p/48l) and extracting or identifying its essential properties, seeing where they lead.\nIn that light, we might ask what the abstracted notion of \"[prime](https://arbital.com/p/4mf)\" should be.\nConfusingly, we call this property *irreducibility* rather than \"primality\"; \"[prime](https://arbital.com/p/5m2)\" in ring theory corresponds to something closely related but not the same.\n\nIn a ring $R$ which is an [https://arbital.com/p/-5md](https://arbital.com/p/-5md), we say that an element $x \\in R$ is *irreducible* if, whenever we write $x = a \\times b$, it is the case that (at least) one of $a$ or $b$ is a [unit](https://arbital.com/p/5mg) (that is, has a multiplicative inverse).\n\n# Why do we require $R$ to be an integral domain?\n\n\n# Examples\n\n\n# Relationship with primality in the ring-theoretic sense\n\nIt is always the case that [primes](https://arbital.com/p/5m2) are irreducible in any integral domain.\n%%hidden(Show proof):\nIf $p$ is prime, then $p \\mid ab$ implies $p \\mid a$ or $p \\mid b$ by definition.\nWe wish to show that if $p=ab$ then one of $a$ or $b$ is invertible.\n\nSuppose $p = ab$.\nThen in particular $p \\mid ab$ so $p \\mid a$ or $p \\mid b$.\nAssume without loss of generality that $p \\mid a$; so there is some $c$ such that $a = cp$.\n\nTherefore $p = ab = cpb$; we are working in a commutative ring, so $p(1-bc) = 0$.\nSince the ring is an integral domain and $p$ is prime (so is nonzero), we must have $1-bc = 0$ and hence $bc = 1$.\nThat is, $b$ is invertible.\n%%\n\nHowever, the converse does not hold (though it may in certain rings).\n\n- In $\\mathbb{Z}$, it is a fact that irreducibles are prime. Indeed, it is a consequence of [Bézout's theorem](https://arbital.com/p/5mp) that if $p$ is \"prime\" in the usual $\\mathbb{Z}$-sense (that is, irreducible in the rings sense) %%note:I'm sorry about the notation. It's just what we're stuck with. It is very confusing.%%, then $p$ is \"prime\" in the rings sense. ([Proof.](https://arbital.com/p/5mh))\n- In the ring $\\mathbb{Z}[\\sqrt{-3}]$ of [complex numbers](https://arbital.com/p/4zw) of the form $a+b \\sqrt{-3}$ where $a, b$ are integers, the number $2$ is irreducible but we may express $4 = 2 \\times 2 = (1+\\sqrt{-3})(1-\\sqrt{-3})$. That is, we have $2 \\mid (1+\\sqrt{-3})(1-\\sqrt{-3})$ but $2$ doesn't divide either of those factors. Hence $2$ is not prime.\n\n%%hidden(Proof that $2$ is irreducible in $\\mathbb{Z}[\\sqrt{-3}]$):\nA slick way to do this goes via the [norm](https://arbital.com/p/norm_complex_number) $N(2)$ of the complex number $2$; namely $4$.\n\nIf $2 = ab$ then $N(2) = N(a)N(b)$ because [the norm is a multiplicative function](https://arbital.com/p/norm_of_complex_number_is_multiplicative), and so $N(a) N(b) = 4$.\nBut $N(x + y \\sqrt{-3}) = x^2 + 3 y^2$ is a nonnegative integer for any element of the ring, and so we have just two distinct options: $N(a) = 1, N(b) = 4$ or $N(a) = 2 = N(b)$.\n(The other cases follow by interchanging $a$ and $b$.)\n\nThe first case: $N(x+y \\sqrt{-3}) = 1$ is only possible if $x= \\pm 1, y = 0$.\nHence the first case arises only from $a=\\pm1, b=\\pm2$; this has not led to any new factorisation of $2$.\n\nThe second case: $N(x+y \\sqrt{-3}) = 2$ is never possible at all, since if $y \\not = 0$ then the norm is too big, while if $y = 0$ then we are reduced to finding $x \\in \\mathbb{Z}$ such that $x^2 = 2$, and there is no such integer.\n\nHence if we write $2$ as a product, then one of the factors must be invertible (indeed, must be $\\pm 1$).\n%%
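To see these claims concretely, here is a small brute-force check in Python. It is not part of the original page: the pair representation of $x + y\sqrt{-3}$, the helper names, and the search bounds are our own choices. It confirms that $2$ divides the product $(1+\sqrt{-3})(1-\sqrt{-3}) = 4$ while dividing neither factor, and that no element of the ring has norm $2$.

```python
# Elements of Z[sqrt(-3)] are represented as pairs (x, y), meaning x + y*sqrt(-3).

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    # (x1 + y1*s)(x2 + y2*s) with s^2 = -3
    return (x1 * x2 - 3 * y1 * y2, x1 * y2 + x2 * y1)

def divides(a, b, bound=20):
    """Does a divide b in Z[sqrt(-3)]?  Brute-force search for a quotient in a finite box."""
    return any(mul(a, (x, y)) == b
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1))

def norm(a):
    x, y = a
    return x * x + 3 * y * y

two = (2, 0)
f1, f2 = (1, 1), (1, -1)                    # 1 + sqrt(-3) and 1 - sqrt(-3)
print(mul(f1, f2))                          # (4, 0): the product is 4
print(divides(two, mul(f1, f2)))            # True: 2 divides the product...
print(divides(two, f1), divides(two, f2))   # False False: ...but neither factor

# The norm argument: norm 2 is impossible.  (A finite box suffices, since
# x^2 + 3*y^2 = 2 already forces y = 0 and x^2 = 2.)
print(any(norm((x, y)) == 2 for x in range(-10, 11) for y in range(-10, 11)))  # False
```

(The divisibility search over a finite box is enough for these particular checks, since any quotient of $4$ or of $1\pm\sqrt{-3}$ by $2$ would have coordinates of absolute value at most $2$.)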
\n\nIn fact, in a [https://arbital.com/p/-principal_ideal_domain](https://arbital.com/p/-principal_ideal_domain), \"prime\" and \"irreducible\" are equivalent. ([Proof.](https://arbital.com/p/5mf))\n\n# Relationship with unique factorisation domains\n\nIt is a fact that an integral domain $R$ is a [UFD](https://arbital.com/p/unique_factorisation_domain) if and only if it has \"all irreducibles are [prime](https://arbital.com/p/5m2) (in the sense of ring theory)\" and \"every $r \\in R$ may be written as a product of irreducibles\". ([Proof.](https://arbital.com/p/alternative_condition_for_ufd))\nThis is a slightly easier condition to check than our original definition of a UFD, which instead of \"all irreducibles are prime\" had \"products are unique up to reordering and multiplying by invertible elements\".\n\nTherefore the relationship between irreducibles and primes is at the heart of the nature of a unique factorisation domain.\nSince $\\mathbb{Z}$ is a UFD, in some sense the [Fundamental Theorem of Arithmetic](https://arbital.com/p/fundamental_theorem_of_arithmetic) holds precisely because of the fact that \"prime\" is the same as \"irreducible\" in $\\mathbb{Z}$.", "date_published": "2016-08-01T20:04:03Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["An irreducible element of a [ring](https://arbital.com/p/3gq) is one which cannot be written as a nontrivial product of two other elements of the ring."], "tags": [], "alias": "5m1"} {"id": "94b669c6f93ce8c82e5150b5c486f812", "title": "Prime element of a ring", "url": "https://arbital.com/p/prime_element_ring_theory", "source": "arbital", "source_type": "text", "text": "summary(Technical): Let $(R, +, \\times)$ be a [ring](https://arbital.com/p/3gq) which is an [integral domain](https://arbital.com/p/5md).
We say $p \\in R$ is *prime* if, whenever $p \\mid ab$, it is the case that either $p \\mid a$ or $p \\mid b$ (or both).\n\nAn element of an [https://arbital.com/p/-5md](https://arbital.com/p/-5md) is *prime* if it has the property that $p \\mid ab$ implies $p \\mid a$ or $p \\mid b$.\nEquivalently, if its generated [ideal](https://arbital.com/p/ideal_ring_theory) is [prime](https://arbital.com/p/prime_ideal) in the sense that $ab \\in \\langle p \\rangle$ implies either $a$ or $b$ is in $\\langle p \\rangle$.\n\nBe aware that \"prime\" in ring theory does not correspond exactly to \"[prime](https://arbital.com/p/4mf)\" in number theory (the correct abstraction of which is [irreducibility](https://arbital.com/p/5m1)). \nIt is the case that they are the same concept in the ring $\\mathbb{Z}$ of [integers](https://arbital.com/p/48l) ([proof](https://arbital.com/p/5mf)), but this is a nontrivial property that turns out to be equivalent to the [https://arbital.com/p/-5rh](https://arbital.com/p/-5rh) ([proof](https://arbital.com/p/alternative_condition_for_ufd)).\n\n# Examples\n\n# Properties\n\n- Primes are always [irreducible](https://arbital.com/p/5m1); a proof of this fact appears on the [page on irreducibility](https://arbital.com/p/5m1), along with counterexamples to the converse.\n-", "date_published": "2016-08-21T05:33:25Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["A prime element of a [ring](https://arbital.com/p/3gq) is one such that, if it divides a product, then it divides (at least) one of the terms of the product."], "tags": ["Stub"], "alias": "5m2"} {"id": "1f3a29e6e2936130a7156eed2009d482", "title": "Modalized modal sentence", "url": "https://arbital.com/p/modalized_modal_sentence", "source": "arbital", "source_type": "text", "text": "A [modal sentence](https://arbital.com/p/) $A$ is said to be **modalized** in $p$ if every occurrence of $p$ happens within the scope of a $\\square$.\n\nAs an example, $\\square p \\wedge q$ is modalized in $p$, but not in $q$.\n\nIf $A$ does not contain $p$, then it is trivially modalized in $p$.\n\nA sentence which is modalized in every sentence letter is said to be **fully modalized**.\n\nBeing modalized in $p$ is a sufficient condition for having a [fixed point](https://arbital.com/p/5lx) on $p$.", "date_published": "2016-07-27T20:30:17Z", "authors": ["Jaime Sevilla Molina"], "summaries": [], "tags": ["Definition"], "alias": "5m4"} {"id": "3b6b64fa3740a2f70f4274ddcef8a569", "title": "Integral domain", "url": "https://arbital.com/p/integral_domain", "source": "arbital", "source_type": "text", "text": "summary(Technical): An integral domain is a [ring](https://arbital.com/p/3gq) in which $ab=0$ implies $a=0$ or $b=0$. (We exclude the ring with one element: that is conventionally not considered an integral domain.)\n\nIn keeping with [ring theory](https://arbital.com/p/3gq) as the attempt to isolate each individual property of [$\\mathbb{Z}$](https://arbital.com/p/48l) and work out how the properties interplay with each other, we define the notion of **integral domain** to capture the fact that if $a \\times b = 0$ then $a=0$ or $b=0$.\nThat is, an integral domain is one which has no \"zero divisors\": $0$ cannot be nontrivially expressed as a product.\n(For uninteresting reasons, we also exclude the ring with one element, in which $0=1$, from being an integral domain.)\n\n# Examples\n\n- $\\mathbb{Z}$ is an integral domain.\n- Any [field](https://arbital.com/p/481) is an integral domain. 
(The proof is an exercise.)\n\n%%hidden(Show solution):\nSuppose $ab = 0$, but $a \\not = 0$. We wish to show that $b=0$.\n\nSince we are working in a field, $a$ has an inverse $a^{-1}$; multiply both sides by $a^{-1}$ to obtain $a^{-1} a b = 0 \\times a^{-1}$.\nSimplifying, we obtain $b = 0$.\n%%\n\n- When $p$ is a [prime](https://arbital.com/p/4mf) integer, the ring $\\mathbb{Z}_p$ of integers [mod](https://arbital.com/p/modular_arithmetic) $p$ is an integral domain.\n- When $n$ is a [composite](https://arbital.com/p/composite_number) integer, the ring $\\mathbb{Z}_n$ is *not* an integral domain. Indeed, if $n = r \\times s$ with $r, s$ positive integers, then $r s = n = 0$ in $\\mathbb{Z}_n$.\n\n# Properties\n\nThe reason we care about integral domains is because they are precisely the rings in which we may cancel products: if $a \\not = 0$ and $ab = ac$ then $b=c$.\n%%hidden(Proof):\nIndeed, if $ab = ac$ then $ab-ac = 0$ so $a(b-c) = 0$, and hence (in an integral domain) $a=0$ or $b=c$.\n\nMoreover, if we are not in an integral domain, say $r s = 0$ but $r, s \\not = 0$.\nThen $rs = r \\times 0$, but $s \\not = 0$, so we can't cancel the $r$ from both sides.\n%%\n\n## Finite integral domains\n\nIf a ring $R$ is both finite and an integral domain, then it is a [field](https://arbital.com/p/481).\nThe proof is an exercise.\n%%hidden(Show solution):\nGiven $r \\in R$, we wish to find a multiplicative inverse.\n\nSince there are only finitely many elements of the ring, consider $S = \\{ ar : a \\in R\\}$.\nThis set is a subset of $R$, because the multiplication of $R$ is [closed](https://arbital.com/p/3gy).\nMoreover, every element is distinct, because if $ar = br$ then we can cancel the $r$ (because we are in an integral domain), so $a = b$.\n\nSince there are $|R|$-many elements of the subset $S$ (where $| \\cdot |$ refers to the [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5)), and since $R$ is finite, $S$ must in fact be $R$ itself.\n\nTherefore in particular $1 \\in S$, so $1 = ar$ for some $a$.\n%%", "date_published": "2016-07-28T11:33:14Z", "authors": ["Patrick Stevens"], "summaries": ["An integral domain is a [ring](https://arbital.com/p/3gq) in which the only way to make $0$ as a product is to multiply $0$ by something. For instance, in an integral domain like [$\\mathbb{Z}$](https://arbital.com/p/48l), $2 \\times 3$ is not equal to $0$ because neither $2$ nor $3$ is."], "tags": [], "alias": "5md"} {"id": "a0d13512178e699e64d1bf2cc8e805fa", "title": "In a principal ideal domain, \"prime\" and \"irreducible\" are the same", "url": "https://arbital.com/p/pid_implies_prime_equals_irreducible", "source": "arbital", "source_type": "text", "text": "Let $R$ be a [ring](https://arbital.com/p/3gq) which is a [PID](https://arbital.com/p/principal_ideal_domain), and let $r \\not = 0$ be an element of $R$.\nThen $r$ is [irreducible](https://arbital.com/p/5m1) if and only if $r$ is [prime](https://arbital.com/p/5m2).\n\nIn fact, it is easier to prove a stronger statement: that the following are equivalent.\n%%note:Every proof known to the author is of this shape either implicitly or explicitly, but when it's explicit, it should be clearer what is going on.%%\n\n1. $r$ is irreducible. \n2. $r$ is prime.\n3. 
The generated [ideal](https://arbital.com/p/ideal_ring_theory) $\\langle r \\rangle$ is [maximal](https://arbital.com/p/maximal_ideal) in $R$.\n\n\n\n# Proof\n\n## $2 \\Rightarrow 1$\nA proof that \"prime implies irreducible\" appears on the page for [irreducibility](https://arbital.com/p/5m1).\n\n## $3 \\Rightarrow 2$\nWe wish to show that if $\\langle r \\rangle$ is maximal, then it is prime.\n(Indeed, $r$ is [prime](https://arbital.com/p/5m2) if and only if its generated ideal is [prime](https://arbital.com/p/prime_ideal).)\n\nAn ideal $I$ is maximal if and only if the [quotient](https://arbital.com/p/quotient_ring) $R/I$ is a [field](https://arbital.com/p/481). ([Proof.](https://arbital.com/p/ideal_maximal_iff_quotient_is_field))\n\nAn ideal $I$ is [prime](https://arbital.com/p/prime_ideal) if and only if the quotient $R/I$ is an [https://arbital.com/p/-5md](https://arbital.com/p/-5md). ([Proof.](https://arbital.com/p/ideal_prime_iff_quotient_is_integral_domain))\n\nAll fields are integral domains. (A proof of this appears on [the page on integral domains](https://arbital.com/p/5md).)\n\nHence maximal ideals are prime.\n\n## $1 \\Rightarrow 3$\nLet $r$ be irreducible; then in particular it is not invertible, so $\\langle r \\rangle$ isn't simply the whole ring.\n\nTo show that $\\langle r \\rangle$ is maximal, we need to show that if it is contained in any larger ideal then that ideal is the whole ring.\n\nSuppose $\\langle r \\rangle$ is contained in the larger ideal $J$, then.\nBecause we are in a principal ideal domain, $J = \\langle a \\rangle$, say, for some $a$, and so $r = a c$ for some $c$.\nIt will be enough to show that $a$ is invertible, because then $\\langle a \\rangle$ would be the entire ring.\n\nBut $r$ is irreducible, so one of $a$ and $c$ is invertible; if $a$ is invertible then we are done, so suppose $c$ is invertible.\n\nThen $a = r c^{-1}$.\nWe have supposed that $J$ is indeed larger than $\\langle r \\rangle$: that there is $j \\in J$ which is not in $\\langle r \\rangle$.\nSince $j \\in J = \\langle a \\rangle$, we can find $d$ (say) such that $j = a d$; so $j = r c^{-1} d$ and hence $j \\in \\langle r \\rangle$, which is a contradiction.", "date_published": "2016-07-28T12:03:01Z", "authors": ["Patrick Stevens"], "summaries": ["While there are two distinct but closely related notions of \"[prime](https://arbital.com/p/5m2)\" and \"[irreducible](https://arbital.com/p/5m1)\", the two ideas are actually the same in a certain class of [ring](https://arbital.com/p/3gq). This result is basically what enables the [https://arbital.com/p/-fundamental_theorem_of_arithmetic](https://arbital.com/p/-fundamental_theorem_of_arithmetic)."], "tags": [], "alias": "5mf"} {"id": "2a59b03dc346f9da747a1ec20b7cc6df", "title": "Unit (ring theory)", "url": "https://arbital.com/p/unit_ring_theory", "source": "arbital", "source_type": "text", "text": "An element $x$ of a non-trivial [ring](https://arbital.com/p/3gq)%%note:That is, a ring in which $0 \\not = 1$; equivalently, a ring with more than one element.%% is known as a **unit** if it has a multiplicative inverse: that is, if there is $y$ such that $xy = 1$.\n(We specified that the ring be non-trivial. 
\nIf the ring is trivial then $0=1$ and so the requirement is the same as $xy = 0$; this means $0$ is actually invertible in this ring, since its inverse is $0$: we have $0 \\times 0 = 0 = 1$.)\n\n$0$ is never a unit, because $0 \\times y = 0$ is never equal to $1$ for any $y$ (since we specified that the ring be non-trivial).\n\nIf every nonzero element of a ring is a unit, then we say the ring is a [field](https://arbital.com/p/481).\n\nNote that if $x$ is a unit, then it has a *unique* inverse; the proof is an exercise.\n%%hidden(Proof):\nIf $xy = xz = 1$, then $zxy = z$ (by multiplying both sides of $xy=1$ by $z$) and so $y = z$ (by using $zx = 1$).\n%%\n\n# Examples\n\n- In [$\\mathbb{Z}$](https://arbital.com/p/48l), $1$ and $-1$ are both units, since $1 \\times 1 = 1$ and $-1 \\times -1 = 1$. However, $2$ is not a unit, since there is no integer $x$ such that $2x=1$. In fact, the *only* units are $\\pm 1$.\n- [$\\mathbb{Q}$](https://arbital.com/p/4zq) is a [field](https://arbital.com/p/481), so every rational except $0$ is a unit.", "date_published": "2016-07-28T13:12:05Z", "authors": ["Patrick Stevens"], "summaries": ["A unit of a [ring](https://arbital.com/p/3gq) is an element with a multiplicative inverse."], "tags": [], "alias": "5mg"} {"id": "96f91af338b03be9074ceecb47a2bf74", "title": "Euclid's Lemma on prime numbers", "url": "https://arbital.com/p/irreducibles_are_prime_in_integers", "source": "arbital", "source_type": "text", "text": "summary(Technical): Let $p$ be a [prime](https://arbital.com/p/4mf) natural number. Then $p \\mid ab$ implies $p \\mid a$ or $p \\mid b$.\n\nEuclid's lemma states that if $p$ is a [prime number](https://arbital.com/p/4mf), which divides a product $ab$, then $p$ divides at least one of $a$ or $b$.\n\n# Proof\n\n## Elementary proof\n\nSuppose $p \\mid ab$ %%note:That is, $p$ divides $ab$.%%, but $p$ does not divide $a$.\nWe will show that $p \\mid b$.\n\nIndeed, $p$ does not divide $a$, so the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw) of $p$ and $a$ is $1$ (exercise: do this without using integer factorisation); so by [Bézout's theorem](https://arbital.com/p/bezouts_theorem) there are integers $x, y$ such that $ax+py = 1$.\n\n%%hidden(Show solution to exercise):\nWe are not allowed to use the fact that we can factorise integers, because we need \"$p \\mid ab$ implies $p \\mid a$ or $p \\mid b$\" as a lemma on the way towards the proof of the [https://arbital.com/p/-5rh](https://arbital.com/p/-5rh) (which is the theorem that tells us we can factorise integers).\n\nRecall that the highest common factor of $a$ and $p$ is defined to be the number $c$ such that:\n\n- $c \\mid a$;\n- $c \\mid p$;\n- for any $d$ which divides $a$ and $p$, we have $d \\mid c$.\n\n[Euclid's algorithm](https://arbital.com/p/euclidean_algorithm) tells us that $a$ and $p$ do have a (unique) highest common factor.\n\nNow, if $c \\mid p$, we have that $c = p$ or $c=1$, because $p$ is [prime](https://arbital.com/p/4mf).\nBut $c$ is not $p$ because we also know that $c \\mid a$, and we already know $p$ does not divide $a$.\n\nHence $c = 1$.\n%%\n\nBut multiplying through by $b$, we see $abx + pby = b$.\n$p$ divides $ab$ and divides $p$, so it divides the left-hand side; hence it must divide the right-hand side too.\nThat is, $p \\mid b$.\n\n## More abstract proof\n\nThis proof uses much more theory but is correspondingly much more general, and it reveals the important feature of $\\mathbb{Z}$ here.\n\n$\\mathbb{Z}$, viewed as a [ring](https://arbital.com/p/3gq), is 
a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5). ([Proof.](https://arbital.com/p/integers_is_pid))\nThe theorem we are trying to prove is that the [irreducibles](https://arbital.com/p/5m1) in $\\mathbb{Z}$ are all [prime](https://arbital.com/p/5m2) in the sense of ring theory.\n\nBut it is generally true that in a PID, \"prime\" and \"irreducible\" coincide ([proof](https://arbital.com/p/5mf)), so the result is immediate.\n\n# Converse is false\n\nAny composite number $pq$ (where $p, q$ are greater than $1$) divides $pq$ without dividing $p$ or $q$, so the converse is very false.\n\n# Why is this important?\n\nThis lemma is a nontrivial step on the way to proving the [https://arbital.com/p/-5rh](https://arbital.com/p/-5rh); and in fact in a certain general sense, if we can prove this lemma then we can prove the FTA.\nIt tells us about the behaviour of the primes with respect to products: we now know that the primes \"cannot be split up between factors\" of a product, and so they behave, in a sense, [\"irreducibly\"](https://arbital.com/p/5m1).\n\nThe lemma is also of considerable use as a tiny step in many different proofs.", "date_published": "2016-08-07T12:53:46Z", "authors": ["Brayden Beathe-Gateley", "Patrick Stevens"], "summaries": ["The [prime numbers](https://arbital.com/p/4mf) have a special property that they \"can't be distributed between terms of a product\": if $p$ is a prime dividing a product $ab$ of [integers](https://arbital.com/p/48l), then $p$ wholly divides one or both of $a$ or $b$. It cannot be the case that \"some but not all of $p$ divides into $a$, and the rest of $p$ divides into $b$\"."], "tags": [], "alias": "5mh"} {"id": "a844f25ee15fe2235c85db811ddedaf8", "title": "Bézout's theorem", "url": "https://arbital.com/p/bezout_theorem", "source": "arbital", "source_type": "text", "text": "Bézout's theorem is an important basic theorem of number theory.\nIt states that if $a$ and $b$ are integers, and $c$ is an integer, then the equation $ax+by = c$ has integer solutions in $x$ and $y$ if and only if the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw) of $a$ and $b$ divides $c$.\n\n# Proof\n\nWe have two directions of the equivalence to prove.\n\n## If $ax+by=c$ has solutions\n\nSuppose $ax+by=c$ has solutions in $x$ and $y$.\nThen the highest common factor of $a$ and $b$ divides $a$ and $b$, so it divides $ax$ and $by$; hence it divides their sum, and hence $c$.\n\n## If the highest common factor divides $c$\n\nSuppose $\\mathrm{hcf}(a,b) \\mid c$; equivalently, there is some $d$ such that $d \\times \\mathrm{hcf}(a,b) = c$.\n\nWe have the following fact: that the highest common factor is a linear combination of $a, b$. ([Proof](https://arbital.com/p/hcf_is_linear_combination); this [can also be seen](https://arbital.com/p/extended_euclidean_algorithm) by working through [Euclid's algorithm](https://arbital.com/p/euclidean_algorithm).)\n\nTherefore there are $x$ and $y$ such that $ax + by = \\mathrm{hcf}(a,b)$.\n\nFinally, $a (xd) + b (yd) = d \\mathrm{hcf}(a, b) = c$, as required.\n\n# Actually finding the solutions\n\nSuppose $d \\times \\mathrm{hcf}(a,b) = c$, as above.\n\nThe [https://arbital.com/p/-extended_euclidean_algorithm](https://arbital.com/p/-extended_euclidean_algorithm) can be used (efficiently!) 
to obtain a linear combination $ax+by$ of $a$ and $b$ which equals $\\mathrm{hcf}(a,b)$.\nOnce we have found such a linear combination, the solutions to the integer equation $ax+by=c$ follow quickly by just multiplying through by $d$.\n\n# Importance\n\nBézout's theorem is important as a step towards the proof of [Euclid's lemma](https://arbital.com/p/5mh), which itself is the key behind the [https://arbital.com/p/5rh](https://arbital.com/p/5rh).\nIt also holds in general [principal ideal domains](https://arbital.com/p/5r5).", "date_published": "2016-09-22T04:26:22Z", "authors": ["Eric Bruylant", "Soyoko U.", "Patrick Stevens"], "summaries": ["Bézout's theorem states that if $a$ and $b$ are integers, and $c$ is an integer, then the equation $ax+by = c$ has integer solutions in $x$ and $y$ if and only if the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw) of $a$ and $b$ divides $c$."], "tags": ["Math 2", "C-Class", "Proof"], "alias": "5mp"} {"id": "de6c3213a8b900462291e5e115b9bdb8", "title": "Rice's Theorem", "url": "https://arbital.com/p/rice_theorem", "source": "arbital", "source_type": "text", "text": "Rice's Theorem is a rather surprising and very strong restriction on what we can determine about the [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) computed by an arbitrary [Turing machine](https://arbital.com/p/5pd).\nIt tells us that for *every* nontrivial property of computable functions %%note:By \"nontrivial\", we mean there is at least one function with that property and at least one without that property.%%, there is no general procedure which takes as its input a Turing machine, and computes whether or not the function computed by that machine has that property.\n\nTherefore, if we want to discover anything about the output of a general computer program, *in general* the best we can do is simply run the program.\nAs a corollary, there can be no *fully general* procedure that checks whether a piece of computer code is free of bugs or not.\n\n# Formal statement\n\nWe will use the notation $[https://arbital.com/p/n](https://arbital.com/p/n)$ for the $n$th [Turing machine](https://arbital.com/p/5pd) under some fixed [numbering system](https://arbital.com/p/description_number).\nEach such machine induces a [https://arbital.com/p/-5p2](https://arbital.com/p/-5p2), which we will also write as $[https://arbital.com/p/n](https://arbital.com/p/n)$ where this is unambiguous due to context; then it makes sense to write $[n](m)$ for the value that machine $[https://arbital.com/p/n](https://arbital.com/p/n)$ outputs when it is run on input $m$.\n\nLet $A$ be a non-empty, proper %%note:That is, it is not the entire set.%% subset of $\\{ \\mathrm{Graph}(n) : n \\in \\mathbb{N} \\}$, where $\\mathrm{Graph}(n)$ is the [graph](https://arbital.com/p/graph_of_a_function) of the [https://arbital.com/p/-5p2](https://arbital.com/p/-5p2) computed by $[https://arbital.com/p/n](https://arbital.com/p/n)$, the $n$th Turing machine.\nThen there is no Turing machine $[https://arbital.com/p/r](https://arbital.com/p/r)$ such that:\n\n- $[r](i)$ is $1$ if $\\mathrm{Graph}(i) \\in A$\n- $[r](i)$ is $0$ if $\\mathrm{Graph}(i) \\not \\in A$.\n\n# Caveats\n\n- While this result tells us, for example, that \"no procedure will ever be able to determine whether an arbitrary program is bug-free\", in practice it may be possible to determine whether *a large class* of programs is bug-free, while accepting the fact that our procedure might not be able to solve the fully general case.\n\n- Additionally, 
this result only tells us about the *graphs* of the functions in question.\nWe can determine certain properties which are specific to the Turing machine: for example, we can tell whether the program will halt in five steps, by simply running it for five steps.\nThis does not contradict Rice, because Rice tells us only about the ultimate answer the machines spit out, and nothing about the procedures they use to get to the answer; \"the machine halts in five steps\" is not a property of the graph of the function, but is a property of the Turing machine itself.\n\n- Rice's theorem is only a restriction on whether we can *decide* the status of a function: that is, whether we can decide *whether or not* the function computed by some machine has a certain property. Rice tells us nothing if we're only looking for a procedure that \"must find out in finite time whether a function *does* have a property, but is allowed to never give an answer if the function *doesn't* have the property\".\nFor example, we can determine whether a partial function is defined anywhere (that is, it is not the empty function: the one which never outputs anything, whatever its input) by just attempting to evaluate the function in parallel at $0$, at $1$, at $2$, and so on.\nIf the partial function is defined anywhere, then eventually one of the parallel threads will discover this fact; but if it is defined nowhere, then the procedure might just spin on and on forever without giving any output.\nHowever, Rice's theorem does guarantee that there is no procedure which will tell us in finite time *whether or not* its input is a function which is defined somewhere; even though we have just specified a procedure which will tell us in finite time *if* its input is defined somewhere.\n\n# Proof outline\n\nSeveral proofs exist: for example, [one by reduction](https://arbital.com/p/5n6) to the [halting problem](https://arbital.com/p/halting_problem), and one [standalone proof](https://arbital.com/p/5t9).\nHere, we sketch the standalone proof in broad strokes, because it goes via a neat lemma.\n\nThe intermediate lemma we prove is:\n\n> Let $h: \\mathbb{N} \\to \\mathbb{N}$ be [total](https://arbital.com/p/total_function) computable: that is, it halts on every input.\nThen there is $n \\in \\mathbb{N}$ such that $\\mathrm{Graph}(n) = \\mathrm{Graph}(h(n))$. 
%%note:And, moreover, we can actually *find* such an $n$.%%\n\nThat is, the \"underlying function\" of $n$ - the partial function computed by $[https://arbital.com/p/n](https://arbital.com/p/n)$ - has the same output, at every point, as the function computed by $[https://arbital.com/p/h](https://arbital.com/p/h)$.\nIf we view $h$ as a way of manipulating a program (as specified by its [https://arbital.com/p/-description_number](https://arbital.com/p/-description_number)), then this fixed-point theorem states that we can find a program whose underlying function is not changed at all by $h$.\n\nThis lemma might be somewhat surprising: it \"ought\" to be possible to find a change one could make to arbitrary computer code, with the guarantee that the altered code must do something different to the original.\nThe fixed-point theorem tells us that this is not the case.\n\nThe proof of the lemma is very difficult to understand fully, but rather easy to state, because there are several useful shorthands which hide much of the complexity of what is really going on; full details, along with a worked example, can be found in [the accompanying lens](https://arbital.com/p/5t9).\n\nOnce we have the intermediate lemma, Rice's theorem itself follows quickly.\nIndeed, if the operation of \"determine whether a machine computes a function whose graph is in $A$ or not\" is computable, then we can do the following procedure:\n\n- Take some computer code as input.\n- Determine whether the code specifies a function whose graph is in $A$ or not.\n- If it is in $A$, output code for a specific (probably unrelated) function whose graph is *not* in $A$.\n- Otherwise, output code for a specific (probably unrelated) function whose graph is in $A$.\n\nThe fixed-point theorem tells us that some program isn't changed by the above procedure; but the procedure is guaranteed to interchange programs-from-$A$ with programs-not-from-$A$, so the procedure can't have any fixed points after all.", "date_published": "2016-08-14T08:28:35Z", "authors": ["Eric Rogstad", "Dylan Hendrickson", "Patrick Stevens", "Eric Bruylant", "Jaime Sevilla Molina"], "summaries": ["Rice's Theorem tells us that for *every* nontrivial property of computable functions, there is no general procedure which takes as its input a Turing machine, and computes whether or not the function computed by that machine has that property. 
That is, there is no general way to determine anything nontrivial about the output of an arbitrary Turing machine."], "tags": ["Math 2", "B-Class"], "alias": "5mv"} {"id": "5bcbe50c20fdf85cfd299e587cc70ccb", "title": "Greatest common divisor", "url": "https://arbital.com/p/greatest_common_divisor", "source": "arbital", "source_type": "text", "text": "There are two ways to define the **greatest common divisor** (also known as **greatest common factor**, or **highest common factor**), both equivalent.\n\nThe first definition is as the name suggests: the GCD of $a$ and $b$ is the largest number which divides both $a$ and $b$.\n\nThe second definition is the more \"mathematical\", because it generalises to arbitrary [rings](https://arbital.com/p/3gq) rather than just [ordered rings](https://arbital.com/p/55j).\nThe GCD of $a$ and $b$ is the number $c$ such that $c \\mid a$, $c \\mid b$, and whenever $d \\mid a$ and $d \\mid b$, we have $d \\mid c$.\n(That is, it is the maximal element of the [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) that consists of the divisors of $a$ and $b$, ordered by division.)\n\n# Examples\n\n\n\n# Equivalence of the definitions\n\n\n\n# Relation to prime factorisations\n\n\n\n# Calculating the GCD efficiently", "date_published": "2016-08-04T18:04:44Z", "authors": ["Eric Bruylant", "Kendrea Beers", "Patrick Stevens"], "summaries": ["The greatest common divisor of two [natural numbers](https://arbital.com/p/45h) is the largest number which divides both of them."], "tags": ["Needs parent", "Stub"], "alias": "5mw"} {"id": "870488e56917b9808bc9ccaf1d8a362f", "title": "Metric", "url": "https://arbital.com/p/metric", "source": "arbital", "source_type": "text", "text": "A **metric**, sometimes referred to as a distance function, is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) that defines a [real](https://arbital.com/p/-4bc) nonnegative distance between every two elements of a [set](https://arbital.com/p/3jz). It is commonly denoted by the variable $d$. In [https://arbital.com/p/-3vl](https://arbital.com/p/-3vl), a metric $d$ that defines distances between elements of the set $S$ is written:\n$$d: S \\times S \\to [0, \\infty)$$\n\nIn this case we say $d$ *is a metric on* $S$.\n\nThat is, a metric $d$ on a set $S$ takes as input any two elements $a$ and $b$ from $S$ and outputs a number that is taken to define their distance in $S$ under $d$. Apart from being nonnegative real numbers, the distances a metric outputs must follow three other rules in order for the function to meet the definition of a metric. A function that matches the above colon-to notation is called a metric if and only if it satisfies these requirements. The following must hold for any choice of $a$, $b$, and $c$ in $S$:\n\n 1. $d(a, b) = 0 \\iff a = b$\n\n 2. $d(a, b) = d(b, a)$\n\n 3. $d(a, b) + d(b, c) \\geq d(a, c)$\n\n(1) effectively states both that the distance from an element to itself is 0, and that the distance between non-identical elements must be greater than 0. (2) asserts that a metric must be [commutative](https://arbital.com/p/3jb); informally the distance from $a$ to $b$ must be the same as the distance from $b$ to $a$. Finally, (3) is known as the [https://arbital.com/p/-triangle_inequality](https://arbital.com/p/-triangle_inequality) and asserts that the distance from $a$ to $c$ is at most as large as the sum of the distances from $a$ to $b$ and from $b$ to $c$. 
It is named as such because in [https://arbital.com/p/euclidean_space](https://arbital.com/p/euclidean_space), the points $a$, $b$, and $c$ form a triangle, and the inequality requires that the length of one side of the triangle is not longer than the sum of the lengths of the other two sides; violating this would mean that the shortest path between two points is no longer the straight line between them.\n\nIt is possible (and relatively common!) to deal with multiple different metrics on the same set. This means we are using the same set elements as labels, but treating the distances between elements differently; in this case the different [metric spaces](https://arbital.com/p/metric_space) we are defining may have very different properties. If multiple metrics are being considered, we must be careful when speaking of distances between elements of the set to specify which metric we are using. For example, if $d$ and $e$ are both metrics on $S$, we cannot just say \"the distance between $a$ and $b$ in $S$\" because it is ambiguous whether we are referring to $d(a, b)$ or to $e(a, b)$. We could instead say something like \"the distance between $a$ and $b$ under $e$\" to remove the ambiguity.\n\nThe most commonly-used metric on Cartesian space is the Euclidean metric, defined in two dimensions as $d(a, b) = \\sqrt{(a_1-b_1)^2 + (a_2-b_2)^2}$, and more generally in $n$ dimensions as $d(a, b) = \\sqrt{\\sum_{i=1}^n (a_i-b_i)^2}$.\n\nA less-common metric on Cartesian space is the Manhattan metric, defined generally as $d(a, b) = \\sum_{i=1}^n |a_i-b_i|$; the distance is analogous to the distance taken between two points on a rectangular grid when motion is constrained to be purely vertical or horizontal, but not diagonal.", "date_published": "2017-03-23T22:15:16Z", "authors": ["Eric Rogstad", "Adam Buchbinder", "Bryce Woodworth", "Kevin Clancy"], "summaries": ["Let $S$ be a set. A **metric** on $S$ is a function $d : S \\times S \\to \\mathbb R_{\\ge 0}$ such that for all $a,b,c \\in S$,\n\n1. $d(a,b) = 0 \\Leftrightarrow a = b$\n2. $d(a,b) = d(b,a)$\n3. $d(a,b) + d(b,c) \\geq d(a,c)$ \n\nSuch a function defines a notion of distance between pairs of elements of $S$."], "tags": [], "alias": "5n0"} {"id": "0c090e4bf8b3c2321f483ca02f7c7cb2", "title": "Solovay's theorems of arithmetical adequacy for GL", "url": "https://arbital.com/p/arithmetical_adequacy_GL", "source": "arbital", "source_type": "text", "text": "One of the things that makes [https://arbital.com/p/-5l3](https://arbital.com/p/-5l3) such an interesting formal system is the direct relation between its theorems and a restricted albeit rich class of theorems regarding [provability predicates](https://arbital.com/p/5j7) in [https://arbital.com/p/3ft](https://arbital.com/p/3ft).\n\nAs usual, the adequacy result comes in the form of a pair of theorems, proving respectively [https://arbital.com/p/-soundness](https://arbital.com/p/-soundness) and [https://arbital.com/p/-completeness](https://arbital.com/p/-completeness) for this class. Before stating the results, we describe the way to [translate](https://arbital.com/p/translation) modal sentences to sentences of arithmetic, thus describing the class of sentences of arithmetic the result alludes to.\n\n##Realizations\nA realization $*$ is a function from the set of well-formed sentences of modal logic to the set of sentences of arithmetic. 
Intuitively, we are trying to preserve the structure of the sentence while mapping the expressions proper of modal logic to related predicates in the language of $PA$.\n\nConcretely,\n\n* $p^* = S_p$: sentence letters are mapped to arbitrary closed sentences of arithmetic.\n* $(\\square A)^*=P(A^*)$: the box operator is mapped to a [https://arbital.com/p/-5j7](https://arbital.com/p/-5j7) $P$, usually the [https://arbital.com/p/-5gt](https://arbital.com/p/-5gt).\n* $(A\\to B)^* = A^* \\to B^*$: truth functional compounds are mapped as expected.\n* $\\bot ^* = \\neg X$, where $X$ is any theorem of $PA$, for example, $0\\ne 1$.\n\nThe class of sentences of $PA$ such that there exists a modal sentence of which they are a realization is the set for which we will prove the soundness and completeness.\n\n##Arithmetical soundness\n> If $GL\\vdash A$, then $PA\\vdash A^*$ for every realization $*$.\n\nThe applications to this result are endless. For example, this theorem allows us to take advantage of the procedures to calculate [fixed points](https://arbital.com/p/5lx) in $GL$ to get results about $PA$.\n\nTo better get an intuition of how this correspondence works, try figuring out how the properties of the [https://arbital.com/p/-5j7](https://arbital.com/p/-5j7) relate to the axioms and rules of inference of $GL$.\n\n[Proof](https://arbital.com/p/)\n\n##Arithmetical completeness\n> If $GL\\not\\vdash A$, then there exists a realization $*$ such that $PA\\not\\vdash A^*$.\n\nThe proof of arithmetical completeness is a beautiful and intricate construction that exploits the semantical relationship between $GL$ and the finite, transitive and irreflexive [Kripke models](https://arbital.com/p/5ll). Check [its page](https://arbital.com/p/) for the details.\n\n##Uniform arithmetical completeness\n> There exists a realization $*$ such that for every modal sentence $A$ we have that $GL\\not\\vdash A$ only if $PA\\not\\vdash A$.\n\nThis result generalizes the arithmetical completeness theorem to a new level.\n\n[Proof](https://arbital.com/p/)", "date_published": "2016-07-29T10:53:04Z", "authors": ["Jaime Sevilla Molina"], "summaries": ["$GL\\vdash A$ iff $PA\\vdash A^*$ for every realization $*$.\n\nRealizations are translations from modal sentences to sentences of arithmetic."], "tags": ["Work in progress"], "alias": "5n5"} {"id": "3a5449b24220318cc93892d4cb7ccfc6", "title": "Rice's theorem and the Halting problem", "url": "https://arbital.com/p/rice_and_halt", "source": "arbital", "source_type": "text", "text": "We will show that [Rice's theorem](https://arbital.com/p/5mv) and the [the halting problem](https://arbital.com/p/46h) are equivalent.\n\n#The Halting theorem implies Rice's theorem\nLet $S$ be a non trivial set of computable partial functions, and suppose that there exists a Turing machine encoded by $[https://arbital.com/p/n](https://arbital.com/p/n)$ such that:\n$$\n[n](m) = \n\\left\\{\n \\begin{array}{ll}\n 1 & [https://arbital.com/p/m](https://arbital.com/p/m) \\text{ computes a function in $S$} \\\\\n 0 & \\text{otherwise} \\\\\n \\end{array}\n \\right.\n$$\n\nWe can assume [w.l.o.g.](https://arbital.com/p/without_loss_of_generality) that the empty function undefined on every input is not in $S$ \n%%note:Suppose that the empty function is in $S$. Then it is satisfied that the empty function is **not** in $S^c$, and if $S$ is decidable then it follows immediately that [$S^c$ is decidable as well](https://arbital.com/p/the_complement_of_a_decidable_set_is_decidable). 
So we can use $S^c$ as our $S$ and the argument goes through exactly the same way.%%. \nThus there exists a computable function in $S$ computed by some machine $[https://arbital.com/p/s](https://arbital.com/p/s)$ such that $[s](x)$ is defined for some input $x$.\n\nSuppose we want to decide whether the machine $[https://arbital.com/p/m](https://arbital.com/p/m)$ halts on input $[https://arbital.com/p/x](https://arbital.com/p/x)$.\n\nFor that purpose we can build a machine $Proxy_s$ which does the following:\n\n Proxy_s(z):\n call [m](x)\n return [s](z)\n\nClearly, if $[m](x)$ halts then $Proxy_s$ computes the same function as $[https://arbital.com/p/s](https://arbital.com/p/s)$, and thus $[n](Proxy_s)=1$.\n\nIf on the other hand $[m](x)$ does not halt, then $Proxy_s$ computes the empty function, which we assumed to not be in $S$, and therefore $[n](Proxy_s)=0$.\n\nThus we can use a Turing machine computing membership in $S$ to decide the halting problem, which we know is undecidable. We conclude that such a machine cannot possibly exist, and thus Rice's theorem holds.\n\n\n#Rice's Theorem implies the Halting theorem\nSuppose that there were a Turing machine $HALT$ deciding the Halting Problem.\n\nLet $S$ be the set of computable functions defined on a fixed input $x$, which is clearly non-trivial, as it does not contain the empty function but is not empty either. Let $[https://arbital.com/p/n](https://arbital.com/p/n)$ be a Turing machine, and we want to decide whether $[https://arbital.com/p/n](https://arbital.com/p/n)\\in S$ or not. If this were possible for an arbitrary $[https://arbital.com/p/n](https://arbital.com/p/n)$, then we would have reached a contradiction, as Rice's theorem forbids this outcome.\n\nBut $[https://arbital.com/p/n](https://arbital.com/p/n)$ belongs to $S$ iff $[https://arbital.com/p/n](https://arbital.com/p/n)$ halts on input $x$, so we can use $HALT$ to decide whether $[https://arbital.com/p/n](https://arbital.com/p/n)$ belongs to $S$, in contradiction with Rice's theorem. So our supposition of the existence of $HALT$ was erroneous, and thus the Halting theorem is true.", "date_published": "2016-08-14T17:22:24Z", "authors": ["Jaime Sevilla Molina"], "summaries": [], "tags": ["Work in progress"], "alias": "5n6"} {"id": "4265a5eae7d1686a8a61031d9e8fefa0", "title": "Modular arithmetic", "url": "https://arbital.com/p/modular_arithmetic", "source": "arbital", "source_type": "text", "text": "In ordinary [https://arbital.com/p/-arithmetic](https://arbital.com/p/-arithmetic), you can think of [https://arbital.com/p/-addition](https://arbital.com/p/-addition) and [https://arbital.com/p/-subtraction](https://arbital.com/p/-subtraction) as traveling in different directions along an [infinitely](https://arbital.com/p/infinity) long road. A calculation like $9 + 6$ can be thought of as starting at kilometer marker 9, then driving for another 6 kilometers, which would bring you to kilometer marker 15 ([https://arbital.com/p/-negative_numbers](https://arbital.com/p/-negative_numbers) are analogous to driving along the road backwards). If the road is perfectly straight, you can never go back to a marker you've already visited by driving forward. But what if the road were a circle?\n\nModular arithmetic is a type of addition that's more like driving around in a circle than along an infinite straight line.
In modular arithmetic, you can start with a number, add a positive number to it, and come out with the same number you started with--just as you can drive forward on a circular road to get right back where you started. If the length of the road were 12, for example, then if you drove 12 kilometers you would wind up right back where you started. In this case, we would call it *modulus 12* arithmetic, or *mod 12* for short.\n\nModular arithmetic may seem strange, but in fact, you probably use it every day! The hours on the face of a clock \"wrap around\" from 12 to 1 in exactly the same way that a circular road wraps around on itself. Thus, while in ordinary arithmetic $9 + 6 = 15$, when figuring out what time it will be 6 hours after 9 o'clock, we use modular arithmetic to arrive at the correct answer of 3 o'clock, rather than 15 o'clock.", "date_published": "2016-08-02T15:29:56Z", "authors": ["Eric Rogstad", "Malcolm McCrimmon", "Patrick Stevens", "Eric Bruylant", "Mark Chimes"], "summaries": ["Modular arithmetic is the type of [https://arbital.com/p/-addition](https://arbital.com/p/-addition) we use when calculating dates and times. In ordinary [https://arbital.com/p/-arithmetic](https://arbital.com/p/-arithmetic), $9 + 6 = 15$, but when working with the hours of the day, 6 hours after 9 o'clock is 3 o'clock, not 15 o'clock. This type of \"wrap-around\" addition generalizes to many other domains."], "tags": ["Start", "Cyclic Group Intro (Math 0)", "Needs parent"], "alias": "5ns"} {"id": "2683b750a61b019d502e78e18668e825", "title": "Partial function", "url": "https://arbital.com/p/partial_function", "source": "arbital", "source_type": "text", "text": "A **partial function** is like a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) $f: A \\to B$, but where we relax the requirement that $f(a)$ must be defined for all $a \\in A$.\nThat is, it must still be the case that \"$a = b$ and $f(a)$ is defined\" implies \"$f(b)$ is defined and $f(a) = f(b)$\", but now we no longer need $f(a)$ to be defined everywhere.\nWe can write $f: A \\rightharpoonup B$ %%note:In LaTeX, this symbol is given by `\\rightharpoonup`.%%to denote that $f$ is a partial function with **domain** $A$ and **codomain** $B$: that is, whenever $f(x)$ is defined then we have $x \\in A$ and $f(x) \\in B$.\n\nThis idea is essentially the \"flip side\" to the distinction between the [https://arbital.com/p/-3lv](https://arbital.com/p/-3lv) dichotomy.\n\n# Implementation in set theory\n\nJust as a function can be implemented as a set $f$ of ordered pairs $(a, b)$ such that: \n\n- every $x \\in A$ appears as the first element of some ordered pair in $f$\n- if $(a, b)$ is an ordered pair in $f$ then $a \\in A$ and $b \\in B$\n- if $(a, b)$ and $(a, c)$ are ordered pairs in $f$, then $b=c$\n\nso we can define a *partial* function as a set $f$ of ordered pairs $(a,b)$ such that:\n\n- if $(a, b)$ is an ordered pair in $f$ then $a \\in A$ and $b \\in B$\n- if $(a, b)$ and $(a, c)$ are ordered pairs in $f$, then $b=c$\n\n(That is, we omit the first listed requirement from the definition of a *bona fide* function.)\n\n# Relationship to Turing machines\n\n\nMorally speaking, every [Turing machine](https://arbital.com/p/5pd) $\\mathcal{T}$ may be viewed as computing some function $f: \\mathbb{N} \\to \\mathbb{N}$, by defining $f(n)$ to be the state of the tape after $\\mathcal{T}$ has been allowed to execute on the tape which has been initialised with the value $n$.\n\nHowever, if $\\mathcal{T}$ does not terminate on input $n$ (for 
example, it may be the machine \"if $n = 3$ then return $1$; otherwise loop indefinitely\"), then this \"morally correct\" state of affairs is not accurate: how should we define $f(4)$?\nThe answer is that we should instead view $f$ as a *partial* function which is just undefined if $\\mathcal{T}$ fails to halt on the input in question.\nSo with the example $\\mathcal{T}$ above, $f$ is the partial function which is only defined at $3$, and $f(3) = 1$.", "date_published": "2016-08-06T10:41:39Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": [], "alias": "5p2"} {"id": "ad43d8ad3c43d87dc57209eb44a2d5c2", "title": "Turing machine", "url": "https://arbital.com/p/turing_machine", "source": "arbital", "source_type": "text", "text": "A Turing Machine is a simple mathematical model of [https://arbital.com/p/-computation](https://arbital.com/p/-computation) that is powerful enough to describe any computation a computer can do.\n\nImagine a robot, in front of a little whiteboard, with infinitely many whiteboards to both sides, finitely many of which have a symbol written on them. The robot can erase the contents of a whiteboard and replace it with some other symbol, and it can move over to the next whiteboard on the left or right, or shut down. This is all the robot can do. The robot's actions are determined by only two things: the symbol on the whiteboard it just saw, and its internal state. The output of this process is defined to be \"whatever is written on the string of whiteboards when the robot has shut down\".\n\nThis is equivalent to a Turing Machine (with the robot replaced by a machine head and the infinite line of whiteboards replaced by an infinite tape subdivided into cells). The *halting problem* (which [is unsolvable](https://arbital.com/p/halting_problem_is_uncomputable) in general) asks whether the robot will eventually shut down at some point.\n\nSo, a Turing Machine can be specified with the following information: \n\n- A finite set of symbols the robot can write (one of which is the null symbol, an empty board).\n- A finite set of states the robot can be in (at least one of which causes the robot to shut down).\n- A starting state for the robot.\n- Starting symbols on finitely many of the boards (whiteboard location and symbol type data).\n- A transition function for the robot, which takes a symbol/state pair as input, and has a $(\\text{symbol},\\text{state},\\text{move left or right})$ triple as output. For example, ine such transition might be represented as `if symbol is 7 and state is FQUF, then (erase and write 4, set state to ZEXA, move left)`.\n\nSurprisingly enough, other proposed models of computation have all been shown to be weaker than, or equivalent to, Turing Machines! With infinite memory space, and sufficiently intricate sets of symbols and states, the robot and whiteboard (or machine head and memory tape) system can compute anything at all that is computable in principle!\nThis fact is known as the Church-Turing thesis; it's very widely believed to be true, and certainly no-one has ever found any hint of a counterexample, but it's not \"proved\" in any meaningful sense.\n\n# Variants of Turing machines\n\n*Multi-tape Turing Machines* would be equivalent to having several robots in infinite whiteboard hallways, except that the robots are networked together to all share the same state. An example state transition is as follows:\n\n`If symbol A is * and symbol B is 6 and symbol C is absent and state is VREJ, set state to IXXI, robot A writes ! 
and moves left, robot B writes 9 and moves left, robot C writes = and doesn't move.`\n\nThese Multi-tape Machines can speed up some computations polynomially (so, for example, a problem which would normally take 1 million steps to solve may be solvable in a thousand steps, because of the square root speedup). Because these machines can only muster a polynomial speedup, and moving to a one-tape Turing Machine only incurs a polynomial slowdown, the computational [complexity class P](https://arbital.com/p/5pf) is unchanged across Turing Machines with different numbers of tapes.\n\n*Write-only Turing Machines* are Multi-tape Turing Machines where one of the tapes/hallways of whiteboards has its input ignored when determining the next state, written symbols and movements.\nWe can think of this situation as one where one particular robot is blind.\n\n*Read-only Turing Machines* are Multi-tape Turing Machines, and one of the tapes/hallways of whiteboards cannot be rewritten. The robot in there can only move around and observe, but it has not been given a pen or rubber so it can't write on or erase the boards.\n\n*Oracle Machines* (which are more powerful than Turing Machines, and don't exist in reality, though they are a very useful tool in computational complexity theory), are like a mult-tape machine with exactly two tapes: one tape is designated as the \"oracle tape\", and one tape as the \"machine tape\".\nThis time, one of the robot states is \"INVOKING MAGIC ORACLE\".\nWhen that happens, the contents of the whiteboards in the machine hall (that is, the contents of the machine tape) are interpreted as the description of a problem, and then a correct solution to the problem magically appears on the string of whiteboards in the oracle hall (that is, on the oracle tape), completely erasing whatever was on the oracle hall whiteboards originally; and finally the oracle robot is moved to the first whiteboard of the answer.\n\nTherefore the functionality of the oracle machine depends very strongly on what the oracle does! A given oracle machine might do one thing when the oracle does \"compute the [https://arbital.com/p/-5bv](https://arbital.com/p/-5bv) of the number I was called with\" than when it does \"compute whether or not the number I was given is the [https://arbital.com/p/-description_number](https://arbital.com/p/-description_number) of a halting Turing machine\".\n\nOracle machines are like ordinary Turing machines, except we also give them the ability (in principle) to obtain instant correct answers to any particular problem. 
The problem we may instantly solve is fixed in advance, before we ever start running the machine.\nWith the right oracle, oracle machines can solve *any* problem, where Turing machines cannot (recalling that the halting problem can't be solved by Turing machines).\nHowever, the price is that oracle machines don't exist: they require a magic oracle, and we don't have any of those in nature.", "date_published": "2016-10-03T16:45:38Z", "authors": ["Eric Leese", "Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Alex Appel", "Eric Bruylant"], "summaries": [], "tags": ["C-Class"], "alias": "5pd"} {"id": "7416703d3d6b7d474ff522e2ca3a926e", "title": "P (Polynomial Time Complexity Class)", "url": "https://arbital.com/p/polynomial_time_complexity_class", "source": "arbital", "source_type": "text", "text": "P is the [class of problems](https://arbital.com/p/problem_class) which can be solved by [algorithms](https://arbital.com/p/algorithm) whose run time is bounded by a [https://arbital.com/p/-polynomial](https://arbital.com/p/-polynomial).", "date_published": "2016-08-02T15:33:46Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eric Leese"], "summaries": [], "tags": ["Stub"], "alias": "5pf"} {"id": "1beaf1e1667d4014fc7d3ab3cedef5bd", "title": "Ordering of rational numbers (Math 0)", "url": "https://arbital.com/p/ordering_of_rational_numbers_math_0", "source": "arbital", "source_type": "text", "text": "So far while learning about [how to combine rational numbers](https://arbital.com/p/514), we have seen [addition](https://arbital.com/p/55m), [subtraction](https://arbital.com/p/56x), [multiplication](https://arbital.com/p/59s) and [division](https://arbital.com/p/5jd).\nThere is one final major thing we can do to a pair of rational numbers: to *compare* them.\nOnce you know what you're looking for, it is very easy to compare certain pairs of rational numbers; in this page, we'll look at how to extend that.\n\nIntuitively, if I gave you an apple in one hand, and four apples in the other hand %%note:My hands are rather large.%%, you would be able to tell me that the four-apples hand was holding more apples.\nThis is the kind of comparison we are trying to generalise, and we will do it from one simple observation.\n\nThe observation we will make is that it is very easy to determine whether a rational number is negative or not. %%note:Recall that \"negative\" meant \"it is expressed in anti-apples rather than apples\".%% Indeed, we just need to see if we're holding an anti-thing or not.\nThis might be hard if we have to do some calculations first—for example, it's not immediately obvious whether $\\frac{16}{107} - \\frac{3}{20}$ is anti- or not—but we'll assume that we've already done all the calculations to reduce an expression down to just a single rational number.\n(In this example, we can use the [subtraction techniques](https://arbital.com/p/56x) to work out that $\\frac{16}{107} - \\frac{3}{20} = -\\frac{1}{2140}$. 
That's obviously negative, because it's got a negative sign out the front.)\n\nTo summarise, then, what I have just asserted is that it is easy to see whether a rational number is negative or not; we say that a number which is *positive* %%note:That is, expressed in apples rather than anti-apples.%% is \"*greater than $0$*\".\nRecall that $0$ was the name we gave to the rational number which is \"no apples at all\"; then what this is saying is that if I have a positive number of apples in one hand, and no apples at all in the other, then according to the intuition earlier, I have more apples in the first hand than in the no-apples hand.\n(Hopefully you see that this is true; if not, let us know, because this is one of those strange areas where it's very hard for a mathematician to remember *not* understanding it immediately and intuitively, since we've each been doing this for decades. We're doing our best to remember what parts of the maths are genuinely difficult and weird, but we might get it wrong.)\n\nSimilarly, we say that $0$ is *less than* any positive number, and write $0 < \\frac{5}{16}$, for instance.\nThe littler quantity always goes on the littler end of the arrow.\n(The number $0$ itself is neither negative nor positive. It's just $0$.\nTherefore we can't write $0 < 0$ or $0 > 0$; it's actually the case that $0=0$, and this excludes the other two options of $<$ or $>$.)\n\n# One weird trick to compare any two rationals %%note:Mathematicians hate it!%%\n\nNow that we can compare any rational with $0$, we will work out how to compare any rational with any other rational.\n\nThe key insight is that adding the same number of apples %%note:Or anti-apples.%% to each hand should not change the relative fact of whether there are more apples in one hand or the other.\nFor a real-world example, on a [balance scale](https://commons.wikimedia.org/wiki/File:Balanced_scale_of_Justice.svg) it doesn't matter whether you add five grams or even fifty kilograms onto each of the two pans; the result of the weight comparison will be the same.\n(Now you should probably forget the scales metaphor again, because the weight of antimatter behaves in a way that doesn't lend itself nicely to what we're trying to do.)\n\n## Example\n\nSo let's say we want to compare $\\frac{5}{6}$ and $\\frac{3}{4}$.\nWhich of the two is bigger?\n\nWell, what we can do is add $\\frac{3}{4}$ of an anti-apple to both hands.\nBy the principle that \"adding the same amount to each hand doesn't change their quantity relative to each other\", the result of the comparison between $\\frac{5}{6}$ and $\\frac{3}{4}$ is just the same as the result of the comparison between $\\frac{5}{6} - \\frac{3}{4}$ and $\\frac{3}{4} - \\frac{3}{4}$: that is, between $\\frac{1}{12}$ and $0$.\nThat's easy, though, since we already know how to compare $0$ with anything!\n\nSo $\\frac{5}{6}$ is bigger than $\\frac{3}{4}$, since $\\frac{1}{12}$ is bigger than $0$: we write $\\frac{5}{6} > \\frac{3}{4}$.\n\n## Comparisons with anti-apples\n\nIn this section, we will just close our eyes, swallow grimly, and hope for the best.\n\nIf we want to compare $-\\frac{59}{12}$ and $\\frac{4}{7}$, what should happen?\nIf you already have the right intuition built in, then this will be obvious, but before you know how to do it, it's really not clear at all.\nAfter all, $-\\frac{59}{12}$ is \"a large amount of anti-apple\" (it's nearly five whole anti-apples!) 
but $\\frac{4}{7}$ is \"a small amount of apple\" (not even one whole apple).\n\nHere's where the \"close our eyes\" happens.\nLet's just go by the principle that adding the same amount of apple to both hands shouldn't change anything, and we'll add $\\frac{59}{12}$ apples to both sides.\n\nThen the $-\\frac{59}{12}$ becomes $0$, and the $\\frac{4}{7}$ becomes the rather gruesome $\\frac{461}{84}$. %%note:This is all good practice for you to get fluent with adding and subtracting.%%\nBut we already know how to compare $0$ with things, so we can see that $\\frac{461}{84}$ is bigger than $0$.\n\nTherefore we must have $\\frac{4}{7}$ being bigger than $-\\frac{59}{12}$.\n\nBy the same token, *any* amount of anti-apple is always less than *any* amount of apple, and indeed any amount of anti-apple is always less than $0$.\n\n## Another example\n\nHow about comparing $\\frac{-3}{5}$ and $\\frac{9}{-11}$?\nThe first thing to do is to remember that we can take the minus signs outside the fractions, because an anti-chunk of apple is the same as a chunk of anti-apple.\n\nThat is, we are trying to compare $-\\frac{3}{5}$ and $-\\frac{9}{11}$.\n\nAdd $\\frac{3}{5}$ to both, to compare $0$ and $-\\frac{9}{11} + \\frac{3}{5} = -\\frac{12}{55}$.\n\nAdd $\\frac{12}{55}$ to both again, to compare $\\frac{12}{55}$ and $0$.\n\nClearly the $\\frac{12}{55}$ is bigger, so it must be that $\\frac{-3}{5}$ is bigger than $\\frac{9}{-11}$.\n\n## Instant rule\n\nJust as we had an [instant rule for addition](https://arbital.com/p/55m), so we can make an instant rule for comparison.\n\nIf we want to see which of $\\frac{a}{b}$ and $\\frac{c}{d}$ is bigger, it is enough to see which of $0$ and $\\frac{c}{d} - \\frac{a}{b}$ is bigger.\n\nBut $$\\frac{c}{d} - \\frac{a}{b} = \\frac{c \\times b - a \\times d}{b \\times d}$$\n\nSadly from this point there are actually two cases to consider, because we might have produced something that looks like any of the following:\n\n- $\\frac{5}{6}$\n- $\\frac{-4}{7}$\n- $\\frac{3}{-8}$\n- $\\frac{-2}{-9}$\n\n(That is, there could be minus-signs scattered all over the place.)\n\nHowever, there is a way to get around this, and it hinges on the fact from the [Division page](https://arbital.com/p/5jd) that $\\frac{-1}{-1} = 1$.\n\nIf $b$ is negative, then we can just write $\\frac{a}{b} = \\frac{-a}{-b}$, and now $-b$ is positive!\nFor example, $\\frac{5}{-6}$ has $b=-6$; then that is the same as $\\frac{-5}{6}$.\nSimilarly, $\\frac{-7}{-8}$ is the same as $\\frac{7}{8}$.\n\nLikewise we can always write $\\frac{c}{d}$ so that the numerator is positive: as $\\frac{-c}{-d}$ if necessary.\n\nSo, we have four cases:\n\n- If $b, d$ are both positive, then $\\frac{c \\times b - a \\times d}{b \\times d}$ is positive precisely when $c \\times b - a \\times d$ is positive as an integer; i.e. when $cb > ad$.\n- If $b$ is positive and $d$ is negative, then $$\\frac{c}{d} - \\frac{a}{b} = \\frac{-c}{-d} - \\frac{a}{b} = \\frac{(-c) \\times b - a \\times (-d)}{b \\times (-d)}$$ where the denominator %%note:Remember, that's the thing on the bottom of the fraction: in this case, $b \\times (-d)$.%% is positive. 
\n\n## Why did I say \"Mathematicians hate it\"?%%note:Aside from parodying Internet banner ads, that is.%% A diversion on pedagogy\n\nThe way I've done this is all completely correct, but it's slightly backwards from the way a mathematician would usually present it.\n(Not sufficiently backwards that mathematicians should hate it, but I couldn't resist.)\n\nUsually, when *finding* a way of comparing objects, mathematicians would probably do what we've done above: find an easy way of comparing some of the objects, and then try to extend it to cover all the objects.\n\nBut if a mathematician *already knows* a way of comparing objects and is just writing it down for the benefit of other mathematicians, they would usually write down the complete \"cover all the objects\" method right at the start, and would then go on to show that it does indeed cover all the objects and has all the right properties.\n\nThis has the benefit of producing very terse descriptions with the minimum necessary amount of writing; but it's very *bad* at helping other people understand where it came from.\nIf you know where something came from, you stand a better chance of being able to recreate it yourself if you forget the bottom line, and you might well remember the bottom line better, too.\nThis is why we've done it slightly backwards here.", "date_published": "2016-08-14T01:53:01Z", "authors": ["Patrick Stevens", "Joe Zeng"], "summaries": ["\"Ordering\" is the idea that some quantities of apple are \"bigger\" than others."], "tags": [], "alias": "5pk"} {"id": "6dffef90a53466ad87700ddfee606b70", "title": "Rational arithmetic all works together", "url": "https://arbital.com/p/rationals_form_a_field_math_0", "source": "arbital", "source_type": "text", "text": "We have seen [what the rational numbers are](https://arbital.com/p/4zx), and five things we can do with them:\n\n- [addition](https://arbital.com/p/55m)\n- [subtraction](https://arbital.com/p/56x)\n- [multiplication](https://arbital.com/p/59s)\n- [division](https://arbital.com/p/5jd)\n- [comparison](https://arbital.com/p/5pk)\n\nThese might seem like five standalone operations, but in fact they all play nicely together in a particular way.\nYou don't need to know the fancy name, but here it is anyway: mathematicians say that the rationals **form an [ordered field](https://arbital.com/p/ordered_field)**, meaning that the five operations above:\n\n- work in the rationals\n- slot together, behaving in certain specific ways that make it easy to calculate\n\n*You shouldn't bother learning this page particularly deeply, because none of the properties alone is very interesting; instead, try and absorb it as a whole.*\n\nWe've already seen the \"instant rules\" for manipulating the rationals, and where they come from.\nHere, we'll first quickly define the rational numbers themselves, and all the operations above, in \"instant rule\" format, just to have them all here in one place.\nThen we'll go through all the properties that are required for mathematicians to be able to say that the operations on rationals \"play nicely together\" in the above sense; and we'll be relying on the instant rules as our definitions, because they're totally unambiguous.\nThat way we can be much more sure that we're not making some small error.\n(Relying on an intuition about how apples work could in theory lead us astray; but the rules leave no wiggle-room or scope for interpretation.)\n\nThe letters $a, b, c, d$ should be read as standing for integers (possibly positive, negative or $0$), 
and $b$ and $d$ should be assumed not to be $0$ (recalling that [it makes no sense](https://arbital.com/p/5jd) to divide by $0$).\n\n- A **rational number** is a pair of [integers](https://arbital.com/p/53r), written as $\\frac{a}{b}$, where $b$ is not $0$, such that $\\frac{a}{b}$ is viewed as being the same as $\\frac{c}{d}$ precisely when $a \\times d = b \\times c$. %%note:This is just the instant rule for subtraction, below, together with the assumption that $\\frac{0}{x} = \\frac{0}{y}$ for any $x, y$ nonzero integers; in particular, the assumption that $\\frac{0}{b \\times d} = \\frac{0}{1}$ when $b, d$ are not $0$, which we will need later.%%\n- Addition: $$\\frac{a}{b} + \\frac{c}{d} = \\frac{a \\times d + b \\times c} {b \\times d}$$\n- Subtraction: $$\\frac{a}{b} - \\frac{c}{d} = \\frac{a}{b} + \\frac{-c}{d} = \\frac{a \\times d - b \\times c}{b \\times d}$$\n- Multiplication: $$\\frac{a}{b} \\times \\frac{c}{d} = \\frac{a \\times c}{b \\times d}$$\n- Division (where also $c$ is not $0$): $$\\frac{a}{b} \\big/ \\frac{c}{d} = \\frac{a}{b} \\times \\frac{d}{c} = \\frac{a \\times d}{b \\times c}$$\n- Comparison (where we write $\\frac{a}{b}$ and $\\frac{c}{d}$ such that both $b$ and $d$ are positive %%note:Remember, we can do that: if $b$ is negative, for instance, we may instead write $\\frac{a}{b}$ as $\\frac{-a}{-b}$, and the extra minus-sign we introduce has now flipped $b$ from being negative to being positive.%%): $\\frac{a}{b} < \\frac{c}{d}$ precisely when $\\frac{c}{d}-\\frac{a}{b}$ is positive: that is, when $$\\frac{b \\times c - a \\times d}{b \\times d} > 0$$\nwhich is in turn when $$b \\times c - a \\times d > 0$$\n\nTo be super-pedantic, it is necessary to show that the rules above are \"well-defined\": for instance, you don't get different answers if you apply the rules using $\\frac{2}{4}$ instead of $\\frac{1}{2}$.\nConcretely, for example, $$\\frac{2}{4} + \\frac{1}{3} = \\frac{1}{2} + \\frac{1}{3}$$ according to the instant rules.\nIt's actually true that they *are* well-defined in this sense, but we won't show it here; consider them to be exercises.\n\nAdditionally, each integer can also be viewed as a rational number: namely, the integer $n$ can be viewed as $\\frac{n}{1}$, being \"$n$ copies of the $\\frac{1}{1}$-chunk\".\n\nThe properties required for an \"ordered field\"—that is, a nicely-behaved system of arithmetic in the rationals—fall naturally into groups, which will correspond to main-level headings below.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# How addition behaves\n\n## Addition always spits out a rational number %%note:Confucius say that man who run in front of bus get tired; man who run behind bus get exhausted. On the other hand, mathematicians say that the rationals are *closed* under addition.%%\n\nYou might remember a certain existential dread mentioned in the [intro to rational numbers](https://arbital.com/p/4zx): whether it made sense to add rational numbers at all.\nThen we saw in [https://arbital.com/p/55m](https://arbital.com/p/55m) that in fact it does make sense.\n\nHowever, in this page we are working from the instant rules above, so if we want to prove that \"addition spits out a rational number\", we actually have to prove the fact that $\\frac{a \\times d + b \\times c} {b \\times d}$ is a rational number; no mention of apples at all.\n\nRemember, a rational number here (as defined by instant rule!) 
is a pair of integers, the bottom one being non-zero.\nSo we just need to check that $a \\times d + b \\times c$ is an integer and that $b \\times d$ is a non-zero integer, where $a, b, c, d$ are integers and $b, d$ are also not zero.\n\nBut that's easy: the product of integers is integer, and the product of integers which aren't zero is not zero.\n\n## There must be a rational $0$ such that adding $0$ to anything doesn't change the thing %%note:Mathematicians say that there is an [https://arbital.com/p/-54p](https://arbital.com/p/-54p) for addition.%%\n\nThis tells us that if we want to stay where we are, but we absolutely must add something, then we can always add $0$; this won't change our answer.\nMore concretely, $\\frac{5}{6} + 0 = \\frac{5}{6}$.\n\nStrictly speaking, I'm pulling a bit of a fast one here; if you noticed how, then *very* well done.\nA rational number is a *pair* of integers, such that the bottom one is not $0$; and I've just tried to say that $0$ is our rational such that adding $0$ doesn't change rationals.\nBut $0$ is not a pair of integers!\n\nThe trick is to use $\\frac{0}{1}$ instead.\nThen we can verify that $$\\frac{0}{1} + \\frac{a}{b} = \\frac{0 \\times b + a \\times 1}{1 \\times b} = \\frac{0 + a}{b} = \\frac{a}{b}$$ as we require.\n\n## Every rational must have an anti-rational under addition %%note:Mathematicians say that addition is *invertible*.%%\n\nConcretely, every rational $\\frac{a}{b}$ must have some rational $\\frac{c}{d}$ such that $\\frac{a}{b} + \\frac{c}{d} = \\frac{0}{1}$.\nWe've [already seen](https://arbital.com/p/56x) that we can define $-\\frac{a}{b}$ to be such an \"anti-rational\", but remember that in order to fit in with our instant rules above, $-\\frac{a}{b}$ is also not a pair of integers; we should instead use $\\frac{-a}{b}$.\n\nThen $$\\frac{a}{b} + \\frac{-a}{b} = \\frac{a \\times b + (-a) \\times b}{b \\times b} = \\frac{0}{b \\times b}$$\n\nFinally, we want $\\frac{0}{b \\times b} = \\frac{0}{1}$; but this is immediate by the instant-rule definition of \"rational number\", since $0 \\times 1 = 0 \\times (b \\times b)$ (both being equal to $0$).\n\n## Addition doesn't care which way round we do it %%note:Mathematicians say that addition is [commutative](https://arbital.com/p/3jb).%%\n\nConcretely, this is the fact that $$\\frac{a}{b} + \\frac{c}{d} = \\frac{c}{d} + \\frac{a}{b}$$\nThis is intuitively plausible because \"addition\" is just \"placing apples next to each other\", and if I put five apples down and then seven apples, I get the same number of apples as if I put down seven and then five.\n\nBut we have to use the instant rules now, so that we can be sure our definition is completely watertight.\n\nSo here we go: $$\\frac{a}{b} + \\frac{c}{d} = \\frac{a \\times d + b \\times c}{b \\times d} = \\frac{c \\times b + d \\times a}{d \\times b} = \\frac{c}{d} + \\frac{a}{b}$$\nwhere we have used the fact that multiplication of *integers* doesn't care about the order in which we do it, and similarly addition of *integers*.\n\n## Addition doesn't care about the grouping of terms %%note:Mathematicians say that addition is [associative](https://arbital.com/p/3h4).%%\n\nHere, we mean that $$\\left(\\frac{a}{b} + \\frac{c}{d}\\right) + \\frac{e}{f} = \\frac{a}{b} + \\left( \\frac{c}{d} + \\frac{e}{f} \\right)$$\nThis is an intuitively plausible fact, because \"addition\" is just \"placing apples next to each other\", and if I put down three apples, then two apples to the right, then one apple to the left, I get the same number 
($6$) of apples as if I had put down one apple on the left, then three apples in the middle, then two on the right.\n\nBut we have to use the instant rules now, so that we can be sure our definition is completely watertight.\nSo here goes, starting from the left-hand side:\n$$\\left(\\frac{a}{b} + \\frac{c}{d}\\right) + \\frac{e}{f} = \\frac{a \\times d + b \\times c}{b \\times d} + \\frac{e}{f} = \\frac{(a \\times d + b \\times c) \\times f + (b \\times d) \\times e}{(b \\times d) \\times f}$$\n\nSince addition and multiplication of *integers* don't care about grouping of terms, this is just $$\\frac{a \\times d \\times f + b \\times c \\times f + b \\times d \\times e}{b \\times d \\times f}$$\n\nNow going from the right-hand side:\n$$\\frac{a}{b} + \\left( \\frac{c}{d} + \\frac{e}{f} \\right) = \\frac{a}{b} + \\frac{c \\times f + d \\times e}{d \\times f} = \\frac{a \\times (d \\times f) + b \\times (c \\times f + d \\times e))}{b \\times (d \\times f)}$$\nand similarly we can write this $$\\frac{a \\times d \\times f + b \\times c \\times f + b \\times d \\times e}{b \\times d \\times f}$$\nwhich is the same as we got by starting on the left-hand side.\n\nSince we showed that both the left-hand and the right-hand side are equal to the same thing, they are in fact equal to each other.\n\n# How multiplication behaves\n\n## Multiplication always spits out a rational number %%note:Mathematicians say that the rationals are *closed* under multiplication.%%\nThis one is very easy to show: $\\frac{a}{b} \\times \\frac{c}{d} = \\frac{a \\times c}{b \\times d}$, but $b \\times d$ is not zero when $b$ and $d$ are not zero, so this is a valid rational number.\n\n## There must be a rational $1$ such that multiplying a thing by $1$ doesn't change the thing %%note:Mathematicians say that multiplication has an [https://arbital.com/p/-54p](https://arbital.com/p/-54p).%%\n\nFrom our motivation of what multiplication was (\"do unto $\\frac{a}{b}$ what you would otherwise have done unto $1$\"), it should be clear that the number $1$ ought to work here.\nHowever, $1$ is not strictly speaking a rational number according to the letter of the \"instant rules\" above; so instead we use $\\frac{1}{1}$.\n\nThen $$\\frac{1}{1} \\times \\frac{a}{b} = \\frac{1 \\times a}{1 \\times b} = \\frac{a}{b}$$ since $1 \\times n = n$ for any integer $n$.\n\n## Every nonzero rational must have a corresponding rational such that when we multiply them, we get $1$ %%note:Mathematicians say that every nonzero rational has an *inverse*. 
(Multiplication is not quite *invertible*, because $0$ has no inverse.)%%\n\n\nUsing the instant rule, though, we can take our nonzero rational $\\frac{a}{b}$ (where neither $a$ nor $b$ is zero; this is forced by the requirement that $\\frac{a}{b}$ be nonzero), and use $\\frac{b}{a}$ as our corresponding rational.\nThen $$\\frac{a}{b} \\times \\frac{b}{a} = \\frac{a\\times b}{b \\times a} = \\frac{a \\times b}{a \\times b} = \\frac{1}{1}$$\n\n## Multiplication doesn't care which way round we do it %%note:Mathematicians say that multiplication is [commutative](https://arbital.com/p/3jb).%%\nThis had its own section on the [multiplication page](https://arbital.com/p/59s), and its justification there was by rotating a certain picture.\nHere, we'll use the instant rule:\n$$\\frac{a}{b} \\times \\frac{c}{d} = \\frac{a \\times c}{b \\times d} = \\frac{c \\times a}{d \\times b} = \\frac{c}{d} \\times \\frac{a}{b}$$\nwhere we have used that multiplication of *integers* doesn't care about order.\n\n## Multiplication doesn't care about the grouping of terms %%note:Mathematicians say that multiplication is [associative](https://arbital.com/p/3h4).%%\n\nThis is much harder to see from our intuition, because while addition is a very natural operation (\"put something next to something else\"), multiplication is much less natural (it boils down to \"make something bigger\").\nIn a sense, we're trying to show that \"make something bigger and then bigger again\" is the same as \"make something bigger by a bigger amount\".\nHowever, it turns out to be the case that multiplication *also* doesn't care about the grouping of terms.\n\nUsing the instant rule for multiplication, it's quite easy to show that $$\\frac{a}{b} \\times \\left(\\frac{c}{d} \\times \\frac{e}{f} \\right) = \\left(\\frac{a}{b} \\times \\frac{c}{d} \\right) \\times \\frac{e}{f}$$\n\nIndeed, working from the left-hand side:\n$$\\frac{a}{b} \\times \\left(\\frac{c}{d} \\times \\frac{e}{f} \\right) = \\frac{a}{b} \\times \\frac{c \\times e}{d \\times f} = \\frac{a \\times (c \\times e)}{b \\times (d \\times f)}$$\nwhich is $\\frac{a \\times c \\times e}{b \\times d \\times f}$ because multiplication of *integers* doesn't care about the grouping of terms.\n\nOn the other hand, $$\\left(\\frac{a}{b} \\times \\frac{c}{d} \\right) \\times \\frac{e}{f} = \\frac{a \\times c}{b \\times d} \\times \\frac{e}{f} = \\frac{(a \\times c) \\times e}{(b \\times d) \\times f} = \\frac{a \\times c \\times e}{b \\times d \\times f}$$\n\nThese two are equal, so we have shown that the left-hand and right-hand sides are both equal to the same thing, and hence they are the equal to each other.\n\n# How addition and multiplication interact\n\n## Multiplication \"filters through\" addition %%note:Mathematicians say that multiplication *distributes over* addition.%%\n\nWhat we will show is that $$\\left(\\frac{c}{d} + \\frac{e}{f}\\right) \\times \\frac{a}{b} = \\left(\\frac{a}{b} \\times \\frac{c}{d}\\right) + \\left(\\frac{a}{b} \\times \\frac{e}{f}\\right)$$\n\nThis is intuitively true: the left-hand side is \"make $\\frac{a}{b}$, but instead of starting out with $1$, start with $\\left(\\frac{c}{d} + \\frac{e}{f}\\right)$\"; while the right-hand side is \"make $\\frac{a}{b}$, but instead of starting out with $1$, start with $\\frac{c}{d}$; then do the same but start with $\\frac{e}{f}$; and then put the two together\".\nIf we draw out diagrams for the right-hand side, and put them next to each other, we get the diagram for the left-hand side.\n(As an exercise, 
you should do this for some specific values of $a,b,c,d,e,f$.)\n\nWe'll prove it now using the instant rules.\n\nThe left-hand side is $$\\left(\\frac{c}{d} + \\frac{e}{f}\\right) \\times \\frac{a}{b} = \\frac{c \\times f + d \\times e}{d \\times f} \\times \\frac{a}{b} = \\frac{(c \\times f + d \\times e) \\times a}{(d \\times f) \\times b}$$\nwhich is $\\frac{c \\times f \\times a + d \\times e \\times a}{d \\times f \\times b}$.\n\nThe right-hand side is $$\\left(\\frac{a}{b} \\times \\frac{c}{d}\\right) + \\left(\\frac{a}{b} \\times \\frac{e}{f}\\right) = \\frac{a \\times c}{b \\times d} + \\frac{a \\times e}{b \\times f} = \\frac{(a \\times c) \\times (b \\times f) + (b \\times d) \\times (a \\times e)}{(b \\times d) \\times (b \\times f)} = \\frac{(a \\times c \\times b \\times f) + (b \\times d \\times a \\times e)}{(b \\times d \\times b \\times f)}$$\nNotice that everything on the top has a $b$ in it somewhere, so we can use this same \"filtering-through\" property that we already know the *integers* have:\n$$\\frac{b \\times ((a \\times c \\times f) + (d \\times a \\times e))}{b \\times (d \\times b \\times f)}$$\nAnd finally we know that multiplying a fraction's numerator and denominator both by $b$ doesn't change the fraction:\n$$\\frac{(a \\times c \\times f) + (d \\times a \\times e)}{d \\times b \\times f}$$\n\nThis is just the same as the left-hand side, after we rearrange the product terms.\n\n# How comparison interacts with the operations\n\n## How comparison interacts with addition\nThe founding principle for how we came up with the definition of \"comparison\" was that adding the same thing to two sides of a balance scale didn't change the balance.\n\nIn terms of the instant rules, that becomes: if $\\frac{a}{b} < \\frac{c}{d}$, then $\\frac{a}{b} + \\frac{e}{f} < \\frac{c}{d} + \\frac{e}{f}$.\n\nBut by our instant rule, this is true precisely when $$0 < \\frac{c}{d} + \\frac{e}{f} - (\\frac{a}{b} + \\frac{e}{f})$$ and since multiplication by $-1$ filters through addition, that is precisely when $$0 < \\frac{c}{d} + \\frac{e}{f} - \\frac{a}{b} - \\frac{e}{f}$$\nwhich (since addition doesn't care about which way round we do the additions, and subtraction is just a kind of addition) is precisely when $$0 < \\frac{c}{d} - \\frac{a}{b} + \\frac{e}{f} - \\frac{e}{f}$$\ni.e. when $0 < \\frac{c}{d} - \\frac{a}{b}$; i.e. 
when $\\frac{a}{b} < \\frac{c}{d}$.\n\n## How comparison interacts with multiplication\n\nThe aim here is basically to show that we can't multiply two things and get an anti-thing.\nWritten out in the notation, we wish to show that if $0 < \\frac{a}{b}$ and if $0 < \\frac{c}{d}$ then $0 < \\frac{a}{b} \\times \\frac{c}{d}$, since the test for anti-ness is \"am I less than $0$?\".\n\nSince $0 < \\frac{a}{b}$, there are two options: either both $a$ and $b$ are positive, or they are both negative.\n(If one is negative and one is positive, then the fraction will be negative.)\n\nLikewise either both $c$ and $d$ are positive, or both are negative.\n\nSo we have four options in total:\n\n- $a, b, c, d$ are positive\n- $a, b, c, d$ are negative\n- $a, b$ are positive; $c, d$ are negative\n- $a, b$ are negative; $c, d$ are positive.\n\nIn the first case, we have $\\frac{a \\times c}{b \\times d}$ positive, because all of $a, b, c, d$ are.\n\nIn the second case, we have $a \\times c$ positive and $b \\times d$ also positive (because both are two negative integers multiplied together); so again the fraction $\\frac{a \\times c}{b \\times d}$ is positive.\n\nIn the third case, we have $a \\times c$ negative and $b \\times d$ also negative (because both are a positive number times a negative number); so the fraction $\\frac{a \\times c}{b \\times d}$ is a negative divided by a negative.\nTherefore it is positive, because we can multiply the numerator and the denominator by $-1$ to turn it into a positive divided by a positive.\n\n%%hidden(Example):\nWe'll consider $\\frac{1}{3}$ and $\\frac{-2}{-5}$.\nThen the product is $\\frac{1}{3} \\times \\frac{-2}{-5} = \\frac{-2}{-15}$; but that is the same as $\\frac{2}{15}$.\n%%\n\nIn the fourth case, we can do the same as the above, or we can be a bit sneaky: using the fact from earlier that multiplication doesn't care about order, we can note that $\\frac{a}{b} \\times \\frac{c}{d}$ is the same as $\\frac{c}{d} \\times \\frac{a}{b}$.\nBut now $c$ and $d$ are positive, and $a$ and $b$ are negative; we've already shown (in the previous case) that if the first fraction is \"positive divided by positive\", and the second fraction is \"negative divided by negative\".\nBy swapping $a$ for $c$, and $b$ for $d$, we can use a result we've already proved to obtain this final case.\n\nHence we've shown all four possible cases, and so the final result follows.", "date_published": "2016-08-14T06:39:06Z", "authors": ["Patrick Stevens", "Alexei Andreev"], "summaries": ["We have seen various operations of arithmetic on the rational numbers; in fact the operations are not as stand-alone as they may have appeared at first, but they are part of a closely-knit structure."], "tags": ["C-Class", "Proposed B-Class"], "alias": "5pm"} {"id": "3032022b2279f66a303b900cb3264581", "title": "Order of rational operations (Math 0)", "url": "https://arbital.com/p/order_of_rational_operations_math_0", "source": "arbital", "source_type": "text", "text": "", "date_published": "2016-08-01T06:13:49Z", "authors": ["Patrick Stevens"], "summaries": ["When an expression from rational arithmetic is written out, evaluate anything in brackets first; then perform multiplication and division left-to-right; then perform addition and subtraction left-to-right."], "tags": ["Stub"], "alias": "5q5"} {"id": "ca121a846c8038522010a7809ee067f9", "title": "Principal ideal domain", "url": "https://arbital.com/p/principal_ideal_domain", "source": "arbital", "source_type": "text", "text": "In [ring 
theory](https://arbital.com/p/3gq), an [https://arbital.com/p/-5md](https://arbital.com/p/-5md) is a **principal ideal domain** (or **PID**) if every [ideal](https://arbital.com/p/ideal_ring_theory) can be generated by a single element.\nThat is, for every ideal $I$ there is an element $i \\in I$ such that $\\langle i \\rangle = I$; equivalently, every element of $I$ is a multiple of $i$.\n\nSince ideals are [kernels](https://arbital.com/p/5r6) of [ring homomorphisms](https://arbital.com/p/ring_homomorphism) ([proof](https://arbital.com/p/5r9)), this is saying that a PID $R$ has the special property that *every* ring homomorphism from $R$ acts \"nearly non-trivially\", in that the collection of things it sends to the identity is just \"one particular element, and everything that is forced by that, but nothing else\".\n\n# Examples\n\n- Every [Euclidean domain](https://arbital.com/p/euclidean_domain) is a PID. ([Proof.](https://arbital.com/p/euclidean_domain_is_pid))\n- Therefore $\\mathbb{Z}$ is a PID, because it is a [Euclidean domain](https://arbital.com/p/euclidean_domain). (Its Euclidean function is \"take the modulus\".)\n- Every [field](https://arbital.com/p/481) is a PID because every ideal is either the singleton $\\{ 0 \\}$ (i.e. generated by $0$) or else is the entire ring (i.e. generated by $1$).\n- The [polynomial ring](https://arbital.com/p/polynomial_ring) $F[X]$ over a field $F$ is a PID, because it is a Euclidean domain. (Its Euclidean function is \"take the [degree](https://arbital.com/p/polynomial_degree) of the polynomial\".)\n- The ring of [Gaussian integers](https://arbital.com/p/gaussian_integer), $\\mathbb{Z}[i]$, is a PID because it is a Euclidean domain. ([Proof](https://arbital.com/p/gaussian_integers_is_pid); its Euclidean function is \"take the [norm](https://arbital.com/p/norm_complex_number)\".)\n- The ring $\\mathbb{Z}[X]$ (of integer-coefficient polynomials) is *not* a PID, because the ideal $\\langle 2, X \\rangle$ is not principal. This is an example of a [https://arbital.com/p/-unique_factorisation_domain](https://arbital.com/p/-unique_factorisation_domain) which is not a PID. \n- The ring $\\mathbb{Z}_6$ is *not* a PID, because it is not an integral domain. (Indeed, $3 \\times 2 = 0$ in this ring.)\n\nThere are examples of PIDs which are not Euclidean domains, but they are mostly uninteresting.\nOne such ring is $\\mathbb{Z}\\left[\\frac{1+\\sqrt{-19}}{2}\\right]$. ([Proof.](http://www.maths.qmul.ac.uk/~raw/MTH5100/PIDnotED.pdf))\n\n# Properties\n\n- Every PID is a [https://arbital.com/p/-unique_factorisation_domain](https://arbital.com/p/-unique_factorisation_domain). ([Proof](https://arbital.com/p/principal_ideal_domain_has_unique_factorisation); this fact is not trivial.) The converse is false; see the case $\\mathbb{Z}[X]$ above.\n- In a PID, \"prime\" and \"irreducible\" coincide. 
([Proof.](https://arbital.com/p/5mf)) This fact also characterises the [maximal ideals](https://arbital.com/p/maximal_ideal) of PIDs.\n- Every PID is trivially [Noetherian](https://arbital.com/p/noetherian_ring): every ideal is not just *finitely* generated, but generated by a single element.", "date_published": "2016-08-04T14:10:18Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["A principal ideal domain is an [https://arbital.com/p/-5md](https://arbital.com/p/-5md) in which every [ideal](https://arbital.com/p/ideal_ring_theory) has a single generator."], "tags": [], "alias": "5r5"} {"id": "c585af0093feeff806c8d3ec87bc55e6", "title": "Kernel of ring homomorphism", "url": "https://arbital.com/p/kernel_of_ring_homomorphism", "source": "arbital", "source_type": "text", "text": "Given a [https://arbital.com/p/-ring_homomorphism](https://arbital.com/p/-ring_homomorphism) $f: R \\to S$ between [rings](https://arbital.com/p/3gq) $R$ and $S$, we say the **kernel** of $f$ is the collection of elements of $R$ which $f$ sends to the zero element of $S$.\n\nFormally, it is $$\\{ r \\in R \\mid f(r) = 0_S \\}$$\nwhere $0_S$ is the zero element of $S$.\n\n# Examples\n\n- Given the \"identity\" (or \"do nothing\") ring homomorphism $\\mathrm{id}: \\mathbb{Z} \\to \\mathbb{Z}$, which sends $n$ to $n$, the kernel is just $\\{ 0 \\}$.\n- Given the ring homomorphism $\\mathbb{Z} \\to \\mathbb{Z}$ taking $n \\mapsto n \\pmod{2}$ (using the usual shorthand for [https://arbital.com/p/-5ns](https://arbital.com/p/-5ns)), the kernel is the set of even numbers.\n\n# Properties\n\nKernels of ring homomorphisms are very important because they are precisely [ideals](https://arbital.com/p/ideal_ring_theory). ([Proof.](https://arbital.com/p/5r9))\nIn a way, \"ideal\" is to \"ring\" as \"[https://arbital.com/p/-576](https://arbital.com/p/-576)\" is to \"[group](https://arbital.com/p/3gd)\", and certainly [subrings](https://arbital.com/p/subring_ring_theory) are much less interesting than ideals; a lot of ring theory is about the study of ideals.\n\nThe kernel of a ring homomorphism always contains $0$, because a ring homomorphism always sends $0$ to $0$.\nThis is because it may be viewed as a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) acting on the underlying additive group of the ring in question, and [the image of the identity is the identity](https://arbital.com/p/49z) in a group.\n\nIf the kernel of a ring homomorphism contains $1$, then the ring homomorphism sends everything to $0$.\nIndeed, if $f(1) = 0$, then $f(r) = f(r \\times 1) = f(r) \\times f(1) = f(r) \\times 0 = 0$.", "date_published": "2016-08-04T17:38:29Z", "authors": ["Patrick Stevens"], "summaries": ["The kernel of a [https://arbital.com/p/-ring_homomorphism](https://arbital.com/p/-ring_homomorphism) is the collection of elements which the homomorphism sends to $0$."], "tags": [], "alias": "5r6"} {"id": "4c9265219c165cd3679a3efab9d8bc3e", "title": "0.999...=1", "url": "https://arbital.com/p/5r7", "source": "arbital", "source_type": "text", "text": "Although some people find it counterintuitive, the [decimal expansions](https://arbital.com/p/4sl) $0.999\\dotsc$ and $1$ represent the same [https://arbital.com/p/-4bc](https://arbital.com/p/-4bc).\n\n# Informal proofs\n\nThese \"proofs\" can help give insight, but be careful; a similar technique can \"prove\" that $1+2+4+8+\\dotsc=-1$. 
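As a purely numerical aside (an illustration of mine, not a proof and not part of the article), exact rational arithmetic shows the truncations 0.9, 0.99, 0.999, ... closing in on 1, with the gap after n nines being exactly 10^(-n), which is the fact the formal proof below turns into a limit argument:

```python
from fractions import Fraction

# Exact arithmetic: the truncation with n nines is 9/10 + 9/100 + ... + 9/10^n,
# and its gap below 1 is exactly 10^(-n) (the fact the formal proof uses).
for n in (1, 2, 5, 10):
    a_n = sum(Fraction(9, 10**k) for k in range(1, n + 1))
    print(n, a_n, 1 - a_n)
# 1 9/10 1/10
# 2 99/100 1/100
# 5 99999/100000 1/100000
# 10 9999999999/10000000000 1/10000000000
```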
They work in this case because the [https://arbital.com/p/-series](https://arbital.com/p/-series) corresponding to $0.999\\dotsc$ is [https://arbital.com/p/-absolutely_convergent](https://arbital.com/p/-absolutely_convergent).\n\n* \\begin{align}\nx &= 0.999\\dotsc \\newline\n10x &= 9.999\\dotsc \\newline\n10x-x &= 9.999\\dotsc-0.999\\dotsc \\newline\n9x &= 9 \\newline\nx &= 1 \\newline\n\\end{align}\n\n* \\begin{align}\n\\frac 1 9 &= 0.111\\dotsc \\newline\n1 &= \\frac 9 9 \\newline\n&= 9 \\times \\frac 1 9 \\newline\n&= 9 \\times 0.111\\dotsc \\newline\n&= 0.999\\dotsc\n\\end{align}\n\n* The real numbers are [https://arbital.com/p/-dense](https://arbital.com/p/-dense), which means that if $0.999\\dots\\neq1$, there must be some number in between. But there's no decimal expansion that could represent a number in between $0.999\\dots$ and $1$.\n\n# Formal proof\n\nThis is a more formal version of the first informal proof, using the definition of [https://arbital.com/p/-4sl](https://arbital.com/p/-4sl).\n\n%%hidden(Show proof):\n$0.999\\dots$ is the decimal expansion where every digit after the decimal point is a $9$. By definition, it is the value of the series $\\sum_{k=1}^\\infty 9 \\cdot 10^{-k}$. This value is in turn defined as the [https://arbital.com/p/-limit](https://arbital.com/p/-limit) of the sequence $(\\sum_{k=1}^n 9 \\cdot 10^{-k})_{n\\in\\mathbb N}$. Let $a_n$ denote the $n$th term of this sequence. I claim the limit is $1$. To prove this, we have to show that for any $\\varepsilon>0$, there is some $N\\in\\mathbb N$ such that for every $n>N$, $|1-a_n|<\\varepsilon$. \n\nLet's prove by [induction](https://arbital.com/p/5fz) that $1-a_n=10^{-n}$. Since $a_0$ is the sum of {$0$ terms, $a_0=0$, so $1-a_0=1=10^0$. If $1-a_i=10^{-i}$, then \n\n\\begin{align}\n1 - a_{i+1} &= 1 - (a_i + 9 \\cdot 10^{-(i+1)}) \\newline\n&= 1-a_i - 9 \\cdot 10^{-(i+1)} \\newline\n&= 10^{-i} - 9 \\cdot 10^{-(i+1)} \\newline\n&= 10 \\cdot 10^{-(i+1)} - 9 \\cdot 10^{-(i+1)} \\newline\n&= 10^{-(i+1)}\n\\end{align}\n\nSo $1-a_n=10^{-n}$ for all $n$. What remains to be shown is that $10^{-n}$ eventually gets (and stays) arbitrarily small; this is true by the [https://arbital.com/p/archimedean_property](https://arbital.com/p/archimedean_property) and because $10^{-n}$ is monotonically decreasing.\n%%\n\n\n# Arguments against $0.999\\dotsc=1$\n\nThese arguments are used to try to refute the claim that $0.999\\dotsc=1$. They're flawed, since they claim to prove a false conclusion.\n\n* $0.999\\dotsc$ and $1$ have different digits, so they can't be the same. In particular, $0.999\\dotsc$ starts \"$0.$,\" so it must be less than 1.\n\n%%hidden(Why is this wrong?):\nDecimal expansions and real numbers are different objects. Decimal expansions are a nice way to represent real numbers, but there's no reason different decimal expansions have to represent different real numbers.\n%%\n\n* If two numbers are the same, their difference must be $0$. But $1-0.999\\dotsc=0.000\\dotsc001\\neq0$.\n\n%%hidden(Why is this wrong?):\nDecimal expansions go on infinitely, but no farther. $0.000\\dotsc001$ doesn't represent a real number because the $1$ is supposed to be after infinitely many $0$s, but each digit has to be a finite distance from the decimal point. If you have to pick a real number to for $0.000\\dotsc001$ to represent, it would be $0$.\n%%\n\n* $0.999\\dotsc$ is the limit of the sequence $0.9, 0.99, 0.999, \\dotsc$. Since each term in this sequence is less than $1$, the limit must also be less than $1$. 
(Or \"the sequence can never reach $1$.\")\n\n%%hidden(Why is this wrong?):\nThe sequence gets arbitrarily close to $1$, so its limit is $1$. It doesn't matter that all of the terms are less than $1$.\n%%\n\n* In the first proof, when you subtract $0.999\\dotsc$ from $9.999\\dotsc$, you don't get $9$. There's an extra digit left over; just as $9.99-0.999=8.991$, $9.999\\dotsc-0.999\\dotsc=8.999\\dotsc991$.\n\n%%hidden(Why is this wrong?):\nThere are infinitely many $9$s in $0.999\\dotsc$, so when you shift it over a digit there are still the same amount. And the \"decimal expansion\" $8.999\\dotsc991$ doesn't make sense, because it has infinitely many digits and then a $1$.\n%%", "date_published": "2016-08-04T12:02:40Z", "authors": ["Dylan Hendrickson"], "summaries": [], "tags": ["Start"], "alias": "5r7"} {"id": "f7a9b9ae0369053175471635516e2d2f", "title": "Ideals are the same thing as kernels of ring homomorphisms", "url": "https://arbital.com/p/ideal_equals_kernel_of_ring_homomorphism", "source": "arbital", "source_type": "text", "text": "In [ring theory](https://arbital.com/p/3gq), the notion of \"[ideal](https://arbital.com/p/ideal_ring_theory)\" corresponds precisely with the notion of \"[kernel](https://arbital.com/p/5r6) of [https://arbital.com/p/-ring_homomorphism](https://arbital.com/p/-ring_homomorphism)\".\n\nThis result is analogous to the fact from [group theory](https://arbital.com/p/3gd) that [normal subgroups](https://arbital.com/p/4h6) are the same thing as [kernels of group homomorphisms](https://arbital.com/p/49y) ([proof](https://arbital.com/p/4h7)).\n\n# Proof\n\n## Kernels are ideals\n\nLet $f: R \\to S$ be a ring homomorphism between rings $R$ and $S$.\nWe claim that the kernel $K$ of $f$ is an ideal.\n\nIndeed, it is clearly a [https://arbital.com/p/-576](https://arbital.com/p/-576) of the ring $R$ when viewed as just an additive group %%note:That is, after removing the multiplicative structure from the ring.%% because $f$ is a *group* homomorphism between the underlying additive groups, and kernels of *group* homomorphisms are subgroups (indeed, *normal* subgroups). ([Proof.](https://arbital.com/p/4h7))\n\nWe just need to show, then, that $K$ is closed under multiplication by elements of the ring $R$.\nBut this is easy: if $k \\in K$ and $r \\in R$, then $f(kr) = f(k)f(r) = 0 \\times r = 0$, so $kr$ is in $K$ if $k$ is.\n\n## Ideals are kernels", "date_published": "2016-08-03T16:30:24Z", "authors": ["Patrick Stevens"], "summaries": ["In [ring theory](https://arbital.com/p/3gq), the notion of \"[ideal](https://arbital.com/p/ideal_ring_theory)\" corresponds precisely with the notion of \"[kernel](https://arbital.com/p/5r6) of [https://arbital.com/p/-ring_homomorphism](https://arbital.com/p/-ring_homomorphism)\"."], "tags": [], "alias": "5r9"} {"id": "39c1c38fbc2c61e415388d7746ef6f5c", "title": "Fundamental Theorem of Arithmetic", "url": "https://arbital.com/p/fundamental_theorem_of_arithmetic", "source": "arbital", "source_type": "text", "text": "summary: The Fundamental Theorem of Arithmetic is a statement about the [natural numbers](https://arbital.com/p/45h); it says that every natural number may be decomposed as a product of [primes](https://arbital.com/p/4mf), and this expression is unique up to reordering the factors. 
It is an extremely important theorem, and it is the basis of the field of [https://arbital.com/p/-number_theory](https://arbital.com/p/-number_theory).\n\nThe Fundamental Theorem of Arithmetic states that every [https://arbital.com/p/-45h](https://arbital.com/p/-45h) (greater than or equal to $2$) may be expressed as a product of [prime numbers](https://arbital.com/p/4mf), and the product is unique up to reordering.\n\nThis theorem is one of the main reasons $1$ is not considered to be prime: indeed, if it were prime then $3 \\times 5$ could be factorised into primes as $3 \\times 5 \\times 1$, but these would be two *different* factorisations of the number $15$.\nThe FTA's statement is much cleaner if $1$ is not thought of as prime.\n\nIn a more general context, the FTA says precisely that the [ring](https://arbital.com/p/3gq) $\\mathbb{Z}$ is a [https://arbital.com/p/-unique_factorisation_domain](https://arbital.com/p/-unique_factorisation_domain); there is therefore a much more abstract proof than the elementary one we will present further on in this article:\n\n- $\\mathbb{Z}$ is a [Euclidean domain](https://arbital.com/p/euclidean_domain) (with Euclidean function given by \"take the modulus\");\n- Therefore $\\mathbb{Z}$ is a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5) ([proof](https://arbital.com/p/euclidean_domain_is_pid));\n- And principal ideal domains are unique factorisation domains ([proof](https://arbital.com/p/principal_ideal_domain_has_unique_factorisation)).\n\n# Examples\n\n- The FTA does not talk about $0$ or $1$; this is because these numbers are conventionally considered neither prime nor composite.\n- Even if we haven't bothered to calculate $17 \\times 23 \\times 23$, we can immediately say that it is odd. Indeed, by the FTA, $2$ cannot divide $17 \\times 23^2$, because the complete list of prime factors of this number is $\\{ 17, 23, 23\\}$, and $2$ is prime.\n\n# Proof\n\nTimothy Gowers has an [excellent article](https://gowers.wordpress.com/2011/11/18/proving-the-fundamental-theorem-of-arithmetic/) about the proof of the FTA.\n\nThe FTA consists of two parts: we must show that every number can be decomposed as primes, and also that every number can be decomposed *uniquely*.\n\n## Every number can be written as a product of primes\n\nThis part is the easier, and it uses [https://arbital.com/p/-strong_induction](https://arbital.com/p/-strong_induction) (a version of [proof by induction](https://arbital.com/p/5fz)).\n\nClearly $2$ can be written as a product of primes, because it *is* prime; so it can be written as just itself.\n\nNow, for $n$ bigger than $2$, if $n$ is prime then we are immediately done (just write it as itself).\nOtherwise, $n$ is not prime, so it can be written as $a \\times b$, say, with $a$ and $b$ both less than $n$.\n\nBut by the inductive hypothesis, we can express $a$ and $b$ each as products of primes, so we can express $n$ as the combined product of the two sets of factors of $a$ and $b$.\n\n%%hidden(Example):\nConsider $n = 1274$.\nWe have two options: $n$ is prime or $n$ is composite.\n\nIt turns out that $n$ is actually equal to $49 \\times 26$, so it's not prime.\n\nBy the inductive hypothesis, we can factor $49$ as a product of primes (indeed, it's $7^2$); and we can factor $26$ as a product of primes (indeed, it's $2 \\times 13$); so we can factor $1274$ as $2 \\times 7^2 \\times 13$.\n\n(If you like, you can view this as just \"start again at $49$ instead of at $1274$, and spit out what you get; then start again at $26$ 
instead of $1274$, and spit out what you get; and finally combine the spittings-out\"; no mention of a spooky \"inductive hypothesis\" at all.)\n\nNote that at this point, we haven't any guarantee at all that this is the *only* prime factorisation; all we assert so far is that it is *a* prime factorisation.\n%%\n\n## Every number can be decomposed *uniquely* as a product of primes\n\nFor this, we will need a basic (but non-obvious and important) fact about the behaviour of prime numbers: [Euclid's lemma](https://arbital.com/p/5mh), which states that if a prime $p$ divides a product $ab$, then $p$ divides at least one of $a$ and $b$.\n\nWe will work by induction on $n$ again.\nIf $n = 2$ then the result is immediate: a number can only be divided by numbers which are not larger than it, but $1$ and $2$ are the only such numbers.\n\nSuppose $n$ can be written as both $p_1 p_2 \\dots p_r$ and $q_1 q_2 \\dots q_s$, where each $p_i$ and $q_j$ is prime (but there might be repeats: maybe $p_1 = p_2 = q_3 = q_7$, for instance).\nWe need to show that $r=s$ and that (possibly after reordering the lists) $p_i = q_i$ for each $i$.\n\nCertainly $p_1$ divides $n$, because it divides $p_1 p_2 \\dots p_r$.\nTherefore it divides $q_1 q_2 \\dots q_s$, and hence it divides one of $q_1$ or $q_2 \\dots q_s$, by Euclid's lemma.\nTherefore either it divides $q_1$, or it divides one of $q_2$ or $q_3 \\dots q_s$; by induction, $p_1$ divides some $q_i$.\nBecause we don't care about the ordering of the list, let us reorder the list if necessary so that in fact $i=1$: put the factor $q_i$ at the start of the list.\n\nNow, $q_1$ is prime, and $p_1$ is not equal to $1$ but it divides $q_1$; hence $p_1 = q_1$.\n\nDividing through by $p_1$, then, we obtain $p_2 \\dots p_r = q_2 \\dots q_s$, a strictly smaller number; so by the inductive hypothesis, $r-1 = s-1$ (so $r=s$) and the unordered list of $p_i$ is the same as the unordered list of $q_i$ for $i \\geq 2$.\n\nThis proves the theorem.\n\n# Why is this not obvious?\n\nTimothy Gowers has a [good piece](https://gowers.wordpress.com/2011/11/13/why-isnt-the-fundamental-theorem-of-arithmetic-obvious/) on why this result is not just obvious.\nOf course, what is \"obvious\" and what is not \"obvious\" varies heavily depending on who you're talking to.\nFor this author personally, the true reason it's not obvious is Gowers's reason number 4: because there are very similar structures which do *not* have the property of unique factorisation.\n(Gowers uses $\\mathbb{Z}[https://arbital.com/p/\\sqrt{-5}](https://arbital.com/p/\\sqrt{-5})$; on the [page on irreducibles](https://arbital.com/p/5m1), we show that $\\mathbb{Z}[https://arbital.com/p/\\sqrt{-3}](https://arbital.com/p/\\sqrt{-3})$ could be used just as well.)", "date_published": "2016-08-07T08:18:06Z", "authors": ["Patrick Stevens"], "summaries": [], "tags": ["Start"], "alias": "5rh"} {"id": "ef648e2e0b9040d7471af5f1bf2a5d09", "title": "Formal definition of the free group", "url": "https://arbital.com/p/free_group_formal_definition", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) may be constructed formally using van der Waerden's trick, which is not intuitive at all but leads to a definition that is very easy to work with. 
This page will detail van der Waerden's construction, and will prove that the trick yields a group which has all the right properties to be the free group.\n\n# The construction\n\nWrite $X^r$ for the set that contains all the [freely reduced words](https://arbital.com/p/5jc) over $X \\cup X^{-1}$ (so, for instance, excluding the word $aa^{-1}$). %%note:We use the superscript $r$ to denote \"reduced\".%%\n\nWe define the *free group* $F(X)$, or $FX$, on the set $X$ to be a certain [https://arbital.com/p/-576](https://arbital.com/p/-576) of the [https://arbital.com/p/-497](https://arbital.com/p/-497) $\\mathrm{Sym}(X^r)$: namely that which is generated by the following elements, one for each $x \\in X \\cup X^{-1}$:\n\n- $\\rho_x : X^r \\to X^r$, sending $a_1 a_2 \\dots a_n \\mapsto a_1 a_2 \\dots a_n x$ if $a_n \\not = x^{-1}$, and $a_1 a_2 \\dots a_{n-1} x^{-1} \\mapsto a_1 a_2 \\dots a_{n-1}$.\n- $\\rho_{x^{-1}} : X^r \\to X^r$, sending $a_1 a_2 \\dots a_n \\mapsto a_1 a_2 \\dots a_n x^{-1}$ if $a_n \\not = x$, and $a_1 a_2 \\dots a_{n-1} x \\mapsto a_1 a_2 \\dots a_{n-1}$.\n\nRecall that each $\\rho_x$ lies in $\\mathrm{Sym}(X^r)$, so each is a [https://arbital.com/p/-499](https://arbital.com/p/-499) from $X^r$ to $X^r$.\nWe specify it by stating what it does to every element of $X^r$ (that is, to every freely reduced word over $X$).\n\nWe first specify what it does to those words which don't end in $x^{-1}$: $\\rho_x$ simply appends an $x$ to such words.\nWe then specify what it does to the remaining words, those which do end in $x^{-1}$: then $\\rho_x$ just removes the $x^{-1}$.\n\nIt's easy to check that if $\\rho_x$ is given a freely-reduced word as input, then it produces a freely-reduced word as output, because the only change to the word is at the end and we make sure to provide a separate definition if $x$ is to be cancelled.\nTherefore each $\\rho_x$ is a function $X^r \\to X^r$.\n\nThen we do it all again for all the inverses $x^{-1}$, creating the functions $\\rho_{x^{-1}}$; and finally, we add in the [https://arbital.com/p/-54p](https://arbital.com/p/-54p), denoted $\\rho_{\\varepsilon}$, which simply returns its input unchanged.\n\nNotice that the $\\rho_x$ and $\\rho_{x^{-1}}$ are all indeed bijective (and therefore members of $\\mathrm{Sym}(X^r)$), because in fact $\\rho_x$ and $\\rho_{x^{-1}}$ are [inverse](https://arbital.com/p/4sn) to each other (each cancelling off what the other did), and a function with an inverse is bijective.\n\nSo, we've defined the free group as a certain subgroup of the symmetric group.\nRemember that the subgroup has as its group operation \"function composition\"; so $\\rho_x \\cdot \\rho_y = \\rho_x \\circ \\rho_y$, for instance.\nWe will write $\\rho_x \\rho_y$ for this, omitting the group operation.\n\nSomething key to notice is that if we apply $\\rho_{a_n} \\rho_{a_{n-1}} \\dots \\rho_{a_1}$ to the empty word $\\varepsilon$, we get $$\\rho_{a_n} \\rho_{a_{n-1}} \\dots \\rho_{a_1}(\\varepsilon) = \\rho_{a_n} \\rho_{a_{n-1}} \\dots \\rho_{a_3}(\\rho_{a_2}(a_1)) = \\rho_{a_n} \\rho_{a_{n-1}} \\dots \\rho_{a_3}(a_1 a_2) = \\dots = a_1 a_2 \\dots a_n$$\nif $a_1 a_2 \\dots a_n$ is a freely reduced word.\n(Indeed, if the word is freely reduced then none of the successive $\\rho_{a_i}, \\rho_{a_{i+1}}$ can have cancelled each other's effect out, so every application of a $\\rho_{a_i}$ must be appending a letter.)\nHence we might hope to have captured the freely reduced words in our subgroup.\n\n# The formal 
definition is the same as the intuitive definition\n\nWe'll show that there is a bijection between the free group and the set of reduced words, by \"converting\" each reduced word into a corresponding member of the free group.\n\nTake a reduced word, $w = a_1 a_2 \\dots a_n$, and produce the member of the free group (that is, the function) $\\rho_{a_1} \\rho_{a_2} \\dots \\rho_{a_n}$. %%note:Recall that the group operation here is composition of functions, so this is actually the function $\\rho_{a_1} \\circ \\rho_{a_2} \\circ \\dots \\circ \\rho_{a_n}$.%%\nThis really does produce a member of the free group (i.e. of the subgroup of the symmetry group), because each $a_i$ is an element of $X \\cup X^{-1}$ and we have already specified how to make $\\rho_{a_i}$ from such an element.\n\nNow, we claim that in fact this map is injective: that is, we can't take two words $a_1 a_2 \\dots a_n$ and $b_1 b_2 \\dots b_m$ and produce the same member of the free group.\n(That is, we show that $\\rho_{a_1} \\rho_{a_2} \\dots \\rho_{a_n} = \\rho_{b_1} \\rho_{b_2} \\dots \\rho_{b_m}$ implies $a_1 \\dots a_n = b_1 \\dots b_m$.)\nIndeed, if the two functions (\"elements of the free group\") are equal, then they must in particular do the same thing when they are applied to the empty word $\\varepsilon$.\nBut by the \"key notice\" above, when we evaluate $\\rho_{a_1} \\rho_{a_2} \\dots \\rho_{a_n}$ at the empty word, we get $a_n a_{n-1} \\dots a_2 a_1$; and when we evaluate $\\rho_{b_1} \\rho_{b_2} \\dots \\rho_{b_m}$ at the empty word, we get $b_m b_{m-1} \\dots b_2 b_1$; so the two words must be equal after all. \n\nFinally, the map is surjective: we can make any member of the free group by \"converting the appropriate reduced word into a function\".\nIndeed, the free group is generated by the $\\rho_x$ and $\\rho_{x^{-1}}$ for $x \\in X$, so every element is some $\\rho_{x_1} \\dots \\rho_{x_n}$ for some selection of $x_1, \\dots, x_n \\in X \\cup X^{-1}$.\nNote that $x_1 \\dots x_n$ need not necessarily be freely reduced as a word at the moment; but if it is indeed not freely reduced, so some $x_i, x_{i+1}$ cancel each other out, then removing that pair completely doesn't change the function $\\rho_{x_1} \\dots \\rho_{x_n}$.\nFor example, $\\rho_{x_1} \\rho_{x_1^{-1}} \\rho_{x_2} = \\rho_{x_2}$.\nHence the process of \"performing one step of a free reduction\" (i.e. removing a cancelling pair) doesn't change the member of the free group as a function; and since each such removal makes the word shorter, it must eventually terminate.\nIt remains to show that it doesn't matter in what order we remove the cancelling pairs; but that is immediate because we've already shown that our \"conversion\" process is injective: we started with a member of the free group, so if it corresponds to a freely reduced word then it corresponds to a *unique* freely reduced word.\nSince we've just shown that it does indeed correspond to a freely reduced word (by repeatedly removing cancelling pairs), we are done.\n\nThe above shows that the free group can be considered just to be the set of reduced words.", "date_published": "2016-08-05T06:44:17Z", "authors": ["Patrick Stevens"], "summaries": ["The [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) may be constructed formally using van der Waerden's trick, which is not intuitive at all but leads to a definition that is very easy to work with. 
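The maps rho_x defined in the construction above are easy to mirror in code. Here is a minimal sketch (my own illustration; representing words as tuples of (generator, exponent) pairs is an invented convention) of the append-or-cancel behaviour, showing that the map for a letter and the map for its inverse undo each other:

```python
# A small sketch (mine, not from the article) of the maps rho_x acting on
# freely reduced words. A word is a tuple of letters; a letter is a pair
# (generator, exponent) with exponent +1 or -1, so ('a', -1) plays the
# role of the inverse of a.

def inverse(letter):
    sym, exp = letter
    return (sym, -exp)

def rho(letter):
    """The function from reduced words to reduced words that appends
    `letter`, unless the word already ends in the inverse of `letter`,
    in which case it cancels -- exactly the two cases in the definition."""
    def act(word):
        if word and word[-1] == inverse(letter):
            return word[:-1]      # word ended in the inverse letter: cancel it
        return word + (letter,)   # otherwise just append the letter
    return act

a, A = ('a', 1), ('a', -1)        # the generator a and its inverse
b = ('b', 1)

empty = ()
print(rho(a)(rho(b)(empty)))      # (('b', 1), ('a', 1)) -- the reduced word "ba"
w = (b,)                          # the one-letter word "b"
print(rho(A)(rho(a)(w)))          # (('b', 1),) -- the map for the inverse of a undoes the map for a
```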
This page will detail van der Waerden's construction, and will prove that the trick yields a group which has all the right properties to be the free group."], "tags": ["Math 3"], "alias": "5s1"} {"id": "ceb3ee44a0eebde1968f882aa194f37b", "title": "Operations in Set theory", "url": "https://arbital.com/p/set_theory_operation", "source": "arbital", "source_type": "text", "text": "An operation in [set](https://arbital.com/p/3jz) theory is a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) of two sets, that returns a set.\n\nCommon set operations include [Union ](https://arbital.com/p/5s8), [Intersection ](https://arbital.com/p/5sb), [Relative complement ](https://arbital.com/p/5sc), and [Cartesian product ](https://arbital.com/p/3xb).\n\n![illustrations of the output of four basic set operations](https://imgh.us/basic_set_operations_1.svg)", "date_published": "2016-10-12T04:31:20Z", "authors": ["Eric Bruylant", "M Yass"], "summaries": [], "tags": ["C-Class"], "alias": "5s5"} {"id": "0391d2bf92ea5b20f1c3abcf32ec3681", "title": "Absolute Complement", "url": "https://arbital.com/p/set_absolute_complement", "source": "arbital", "source_type": "text", "text": "The complement $A^\\complement$ of a set $A$ is the set of all things that are not in $A$. Put simply, the complement is its opposite.\n\nWhere $U$ denotes the universe, $A^\\complement = U \\setminus A$. That is, $A^\\complement$ is the [Relative complement](https://arbital.com/p/set_relative_complement) of $U$ and $A$.", "date_published": "2016-08-05T23:32:07Z", "authors": ["M Yass"], "summaries": [], "tags": [], "alias": "5s7"} {"id": "30451e79278b56588dc832aee1b94090", "title": "Union", "url": "https://arbital.com/p/set_union", "source": "arbital", "source_type": "text", "text": "The union of two sets $A$ and $B$, denoted $A \\cup B$, is the set of things which are either in $A$ or in $B$ or both.\n\n![illustration of the output of the union operation](https://imgh.us/set_union.svg)\n\nFormally stated, where $C = A \\cup B$\n\n$$x \\in C \\leftrightarrow (x \\in A \\lor x \\in B)$$\n\nThat is, [https://arbital.com/p/46m](https://arbital.com/p/46m) $x$ is in the union $C$, then either $x$ is in $A$ or $B$ or possibly both.\n\n%%todo: more lengthy explanation for [https://arbital.com/p/1r6](https://arbital.com/p/1r6) level%%\n\n# Examples\n\n - $\\{1,2\\} \\cup \\{2,3\\} = \\{1,2,3\\}$\n - $\\{1,2\\} \\cup \\{8,9\\} = \\{1,2,8,9\\}$\n - $\\{0,2,4,6\\} \\cup \\{3,4,5,6\\} = \\{0,2,3,4,5,6\\}$\n - $\\mathbb{R^-} \\cup \\mathbb{R^+} \\cup \\{0\\} = \\mathbb{R}$ (In other words, the union of the negative reals, the positive reals and zero make up all of the [real numbers](https://arbital.com/p/4bc).)", "date_published": "2016-10-04T21:44:20Z", "authors": ["M Yass", "Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Eric Bruylant", "Joe Zeng"], "summaries": [], "tags": ["Start", "Formal definition"], "alias": "5s8"} {"id": "358fb62b335ee81146c6a4536e89c6c2", "title": "Intersection", "url": "https://arbital.com/p/set_intersection", "source": "arbital", "source_type": "text", "text": "The intersection of two sets $A$ and $B$, denoted $A \\cap B$, is the set of elements which are in both $A$ and $B$.\n\n![illustration of the output of an intersection](https://imgh.us/set_intersection_1.svg)\n\nFormally stated, where $C = A \\cap B$\n\n$$x \\in C \\leftrightarrow (x \\in A \\land x \\in B)$$\n\nThat is, [https://arbital.com/p/46m](https://arbital.com/p/46m) $x$ is in the intersection $C$, then $x$ is in $A$ and $x$ is in $B$.\n\nFor 
example,\n\n - $\\{1,2\\} \\cap \\{2,3\\} = \\{2\\}$\n - $\\{1,2\\} \\cap \\{8,9\\} = \\{\\}$\n - $\\{0,2,4,6\\} \\cap \\{3,4,5,6\\} = \\{4,6\\}$", "date_published": "2016-10-12T04:26:59Z", "authors": ["Eric Bruylant", "M Yass"], "summaries": [], "tags": [], "alias": "5sb"} {"id": "d624336f657fd6d65080740b00ae5333", "title": "Relative complement", "url": "https://arbital.com/p/set_relative_complement", "source": "arbital", "source_type": "text", "text": "The relative complement of two sets $A$ and $B$, denoted $A \\setminus B$, is the set of elements that are in $A$ while not in $B$.\n\n![illustration of the output of a relative complement](https://imgh.us/set_relative_complement.svg)\n\nFormally stated, where $C = A \\setminus B$\n\n$$x \\in C \\leftrightarrow (x \\in A \\land x \\notin B)$$\n\nThat is, [https://arbital.com/p/46m](https://arbital.com/p/46m) $x$ is in the relative complement $C$, then $x$ is in $A$ and x is not in $B$.\n\nFor example,\n\n - $\\{1,2,3\\} \\setminus \\{2\\} = \\{1,3\\}$\n - $\\{1,2,3\\} \\setminus \\{9\\} = \\{1,2,3\\}$\n - $\\{1,2\\} \\setminus \\{1,2,3,4\\} = \\{\\}$\n\nIf we name the set $U$ as the set of all things, then we can define the [Absolute complement](https://arbital.com/p/5s7) of the set $A$, $A^\\complement$, as $U \\setminus A$", "date_published": "2016-10-04T21:48:10Z", "authors": ["Eric Bruylant", "M Yass"], "summaries": [], "tags": [], "alias": "5sc"} {"id": "2a7868a1f15c2b92c061331c5eed45d3", "title": "Cantor-Schröder-Bernstein theorem", "url": "https://arbital.com/p/cantor_schroeder_bernstein_theorem", "source": "arbital", "source_type": "text", "text": "summary(Technical): The Cantor-Schröder-Bernstein theorem states that the [class](https://arbital.com/p/class_set_theory) of [cardinals](https://arbital.com/p/4w5) forms a [total order](https://arbital.com/p/540). 
(Though not a totally ordered *set*, because [there is no set of all cardinals](https://arbital.com/p/cardinals_form_a_proper_class)).\n\nRecall that we say a [set](https://arbital.com/p/3jz) $A$ is *smaller than or equal to* a set $B$ if there is an [injection](https://arbital.com/p/4b7) from $A$ to $B$.\nThe set $A$ *has the same [size](https://arbital.com/p/4w5)* as $B$ if there is a [bijection](https://arbital.com/p/499) between $A$ and $B$.\n\nThe Cantor-Schröder-Bernstein theorem states that if $A$ is smaller than or equal to $B$, and $B$ is smaller than or equal to $A$, then $A$ and $B$ have the same size.\nThis tells us that the arithmetic of [cardinals](https://arbital.com/p/4w5) is well-behaved in that it behaves like a [total order](https://arbital.com/p/540).\nIt is similar to the arithmetic of the [natural numbers](https://arbital.com/p/45h) in that it can never be the case that simultaneously $a < b$ and $b < a$.\n\n# Proofs\n\nThere are several proofs, some concrete and some less so.\n\n## Concrete proof\n\nA clear explanation of the intuition of this proof [has been written](https://www.math.brown.edu/~res/infinity.pdf) by Richard Evan Schwartz; see page 61 of the linked PDF, or search on the word \"dog\" (which appears in the first page of the explanation).\n[The intuition is Math0 suitable.](https://arbital.com/p/comment:)\n\nLet $f: A \\to B$ and $g: B \\to A$ be injections; we will define a bijective function $h: A \\to B$.\n\nBecause $f$ is injective, if $f$ ever hits $b$ (that is, if there is $a \\in A$ such that $f(a) = b$) then it is possible to define $f^{-1}(b)$ to be the *unique* $a \\in A$ such that $f(a) = b$; similarly with $g$.\nThe slogan is \"if $f^{-1}(a)$ exists, then it is well-defined: there is no leeway about which element we might choose to be $f^{-1}(a)$\".\n\nFix some $a \\in A$, and consider the sequence $$\\dots, f^{-1}(g^{-1}(a)), g^{-1}(a), a, f(a), g(f(a)), \\dots$$\nNow, this sequence might not extend infinitely to the left; it may not even get past $a$ to the left (if $g^{-1}(a)$ doesn't exist, for instance). %%note:On the other hand, perhaps the sequence does extend infinitely to the left.%%\nAlso, the sequence might duplicate elements: it might be the case that $gfgf(a) = a$, for instance. %%note:And maybe there is a repeat somewhere to the left, too.%%\n\nSimilarly, we can fix some $b \\in B$, and consider $$\\dots g^{-1} f^{-1}(b), f^{-1}(b), b, g(b), f(g(b)), \\dots$$\n\nEvery element of $A$, and every element of $B$, appears in one of these chains.\nMoreover, if $a \\in A$ appears in two different chains, then in fact the two chains are the same %%note:Though maybe we're looking at a different place on the same chain. If we compare the $g^{-1} f^{-1}(b)$-chain with the $b$-chain, we see they're the same chain viewed two different ways: one viewing is two places offset from the other viewing.%%, because each element of the chain specifies and is specified by the previous element in the chain (if it exists) and each element of the chain specifies and is specified by the next element of the chain.\n\nSo every element of $A$ and every element of $B$ is in exactly one chain.\nNow, it turns out that there are exactly four distinct \"types\" of chain.\n\n- It could extend infinitely in both directions without repeats. In this case, we define $h(a) = f(a)$ for each element of $A$ in the chain. 
(Basically assigning to each element of $A$ the element of $B$ which is next in the chain.)\n- It could extend infinitely off to the right, but it has a hard barrier at the left, and has no repeats: the chain stops at an element of $A$. In this case, again we define $h(a) = f(a)$ for each element of $A$ in the chain. (Again, basically assigning to each element of $A$ the element of $B$ which is next in the chain.)\n- It could extend infinitely off to the right, but it has a hard barrier at the left, and has no repeats: the chain stops at an element of $B$. In this case, we define $h(a) = g^{-1}(a)$ for each element of $A$ in the chain. (Basically assigning to each element of $A$ the element of $B$ which is *previous* in the chain.)\n- It could have repeats. Then it must actually be a cycle of even length (unrolled into an infinite line), because each element of the chain only depends on the one before it, and because we can't have two successive elements of $A$ (since the elements are alternating between $A$ and $B$). In this case, we define $h(a) = f(a)$ for each element of $A$ in the chain. (Basically assigning to each element of $A$ the element of $B$ which is next in the chain.)\n\nHave we actually made a bijection?\nCertainly our function is well-defined: every element of $A$ appears in exactly one chain, and we've specified where every element of $A$ in any chain goes, so we've specified where every element of $A$ goes.\n\nOur function is [surjective](https://arbital.com/p/4bg), because every element of $B$ is in a chain; if $b \\in B$ has an element $a$ of $A$ before it in its chain, then we specified that $h$ takes $a$ to $b$, while if $b \\in B$ is at the leftmost end of its chain, we specified that $h$ takes $g(b)$ (that is, the following element in the chain) to $b$.\n\nOur function is injective.\nSince the chains don't overlap, and the first three cases of \"what a chain might look like\" have no repeats at all, the only possible way an element of $B$ can be hit twice by $h$ is if that element lies in one of the cyclical chains.\nBut then to elements of $A$ in that chain, $h$ assigns the following element of $B$; so $b \\in B$ is hit only by the preceding element of $A$, which is the same in all cases because the chain is a cycle.\n%%note:The picture in the linked intuitive document makes this much clearer.%%\n\n## Proof from the [Knaster-Tarski theorem](https://arbital.com/p/knaster_tarski_theorem)\nThis proof is very quick, but almost completely opaque.\nIt relies on the [Knaster-Tarski fixed point theorem](https://arbital.com/p/knaster_tarski_theorem), which states that if $X$ is a [complete poset](https://arbital.com/p/complete_poset) and $f: X \\to X$ is [order-preserving](https://arbital.com/p/order_preserving_map), then $f$ has a [https://arbital.com/p/-fixed_point](https://arbital.com/p/-fixed_point) (i.e. 
$x$ such that $f(x) = x$).\n\nLet $f: A \\to B$ and $g: B \\to A$ be injective.\n\nWe are looking for a [partition](https://arbital.com/p/set_partition) $P \\cup Q$ of $A$, and a partition $R \\cup S$ of $B$, such that $f$ is injective from $P$ to $R$, and $g$ is injective from $S$ to $Q$.\n(Then we can just define our bijection $A \\to B$ by \"do $f$ on $P$, and do $g^{-1}$ on $Q$\".)\n\nNow, the function $P \\mapsto A \\setminus g(B \\setminus f(P))$ is order-preserving from the [https://arbital.com/p/-power_set](https://arbital.com/p/-power_set) $\\mathcal{P}(A)$ to $\\mathcal{P}(A)$ (ordered by inclusion), because there is an even number of complements.\n\nBut $\\mathcal{P}(A)$ is complete as a poset ([proof](https://arbital.com/p/power_set_poset_is_complete)), so by Knaster-Tarski there is a set $P$ such that $P = A \\setminus g(B \\setminus f(P))$.\n\nThis yields our partition as required.", "date_published": "2016-08-27T12:12:19Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["The Cantor-Schröder-Bernstein theorem tells us that if one [set](https://arbital.com/p/3jz) is [smaller-than-or-equal-to](https://arbital.com/p/4w5) another, and the other is smaller-than-or-equal-to the one, then they are the same size. That is, \"comparison of set sizes\" behaves itself, analogously to the fact in the [natural numbers](https://arbital.com/p/45h) that it can't be the case that both $1 < 2$ and $2<1$."], "tags": [], "alias": "5sh"} {"id": "dc6f1446eb5517f798b27e083f4ff5ee", "title": "Well-defined", "url": "https://arbital.com/p/well_defined", "source": "arbital", "source_type": "text", "text": "\"Well-defined\" is a slightly fuzzy word in mathematics.\nBroadly, an object is said to be \"well-defined\" if it has been given a definition that is completely unambiguous, can be executed without regard to any arbitrary choices the mathematician might make, or generally is crisply defined.\n\n(The [Wikipedia page](https://en.wikipedia.org/wiki/Well-defined) on well-definedness contains many examples for those who are more comfortable with mathematical notation.)\n\n# Specific instances\n\n## Functions\nOne of the most common uses of the phrase \"well-defined\" is when talking about [functions](https://arbital.com/p/3jy).\nA function is **well-defined** if it really is a bona fide function.\nThis usually manifests itself as the following:\n\n> Whenever $x=y$, we have $f(x) = f(y)$: that is, the output of the function doesn't depend on how we specify the input to the function, only on the input itself.\n\nThis property is often pretty easy to check.\nFor instance, the function from [$\\mathbb{N}$](https://arbital.com/p/45h) to itself given by $n \\mapsto n+1$ is \"obviously\" well-defined: it's trivially obvious that if $n=m$ then $f(n) = f(m)$.\n\nHowever, sometimes it is not so easy.\nThe function $\\mathbb{N} \\to \\mathbb{N}$ given by \"take the number of prime factors\" is not obviously well-defined, because it could in principle be the case that some number $n$ is equal to both $p_1 p_2 p_3$ and $q_1 q_2$ for some primes $p_1, p_2, p_3, q_1, q_2$; then our putative function might plausibly attempt to output either $3$ or $2$ on the same natural number input $n$, so the function would not be well-defined.\n(It turns out that there is a non-trivial theorem, the [Fundamental Theorem of Arithmetic](https://arbital.com/p/5rh), guaranteeing that this function *is* in fact well-defined.)\n\nWell-definedness in this context comes up very often when we are attempting to take a 
[quotient](https://arbital.com/p/quotient_by_equivalence_relation).\nThe fact that we can take the quotient of a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) $X$ by an [https://arbital.com/p/-53y](https://arbital.com/p/-53y) $\\sim$ is tantamount to saying:\n\n> The function $X \\to \\frac{X}{\\sim}$, given by $x \\mapsto [x]$, the equivalence class of $x$, is well-defined.\n\n---\n\nAnother, different, way a function could fail to be well-defined is if we tried to take the function $\\mathbb{N} \\to \\mathbb{N}$ given by $n \\mapsto n-5$.\nThis function is unambiguous, but it's not well-defined, because on the input $2$ it tries to output $-3$, which is not in the specified [codomain](https://arbital.com/p/3lg).", "date_published": "2016-08-07T13:09:35Z", "authors": ["Patrick Stevens"], "summaries": ["\"Well-defined\" is a slightly fuzzy word in mathematics. An object is said to be \"well-defined\" if it has been given a definition that is completely unambiguous, can be executed without regard to any arbitrary choices the mathematician might make, or generally is crisply defined."], "tags": [], "alias": "5ss"} {"id": "6fcb32dfb18ac14337e06d837f42d9de", "title": "Needs splitting by mastery", "url": "https://arbital.com/p/split_by_mastery_meta_tag", "source": "arbital", "source_type": "text", "text": "Readers who come to Arbital pages with different levels of mathematical background require different pages to explain the same concept. Sometimes a page is written in a way that seems to cater for multiple audiences (e.g. having a careful wordy explanation designed for people with little background, then throwing out formulas and proofs that could only be followed by someone with much more background). 
In these cases, it's often best to split the page into multiple [lenses](https://arbital.com/p/17b), designed for different [math level](https://arbital.com/p/52x) audiences.", "date_published": "2016-08-08T16:17:52Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "5sv"} {"id": "0708332b2618b6a997a2f9630642163c", "title": "Proof of Rice's theorem", "url": "https://arbital.com/p/proof_of_rice_theorem", "source": "arbital", "source_type": "text", "text": "Recall the formal statement of [Rice's theorem](https://arbital.com/p/5mv):\n\n> We will use the notation $[https://arbital.com/p/n](https://arbital.com/p/n)$ for the $n$th [Turing machine](https://arbital.com/p/5pd) under some fixed [numbering system](https://arbital.com/p/description_number).\nEach such machine induces a [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy), which we will also write as $[https://arbital.com/p/n](https://arbital.com/p/n)$ where this is unambiguous due to context; then it makes sense to write $[n](m)$ for the value that machine $[https://arbital.com/p/n](https://arbital.com/p/n)$ outputs when it is run on input $m$.\n\n> Let $A$ be a non-empty, proper %%note:That is, it is not the entire set.%% subset of $\\{ \\mathrm{Graph}(n) : n \\in \\mathbb{N} \\}$, where $\\mathrm{Graph}(n)$ is the [graph](https://arbital.com/p/graph_of_a_function) of the [https://arbital.com/p/-5p2](https://arbital.com/p/-5p2) computed by $[https://arbital.com/p/n](https://arbital.com/p/n)$, the $n$th Turing machine.\nThen there is no Turing machine $[https://arbital.com/p/r](https://arbital.com/p/r)$ such that:\n\n> - $[r](i)$ is $1$ if $\\mathrm{Graph}(i) \\in A$\n> - $[r](i)$ is $0$ if $\\mathrm{Graph}(i) \\not \\in A$.\n\nWe give a proof that is (very nearly) constructive: one which (if we could be bothered to work it all through) gives us an explicit example %%note:Well, very nearly; see the next note.%% of a [Turing machine](https://arbital.com/p/5pd) whose \"am-I-in-$A$\" nature cannot be determined by a Turing machine.\n%%note:It's only \"very nearly\" constructive. It would be *actually* constructive if we knew in advance a specific example of a program whose function is in $A$, and a program whose function is in $B$. The proof here assumes the existence of a program of each type, but ironically the theorem itself guarantees that there is no fully-general way to *find* such programs.%%\n\nWe will present an intermediate lemma which does all the heavy lifting; this makes the actual reasoning rather unclear but very succinct, so we will also include an extensive worked example of what this lemma does for us.\n\n# Fixed point theorem\n\nThe intermediate lemma is a certain fixed-point theorem.\n\n> Let $h: \\mathbb{N} \\to \\mathbb{N}$ be [total](https://arbital.com/p/total_function) computable: that is, it halts on every input.\nThen there is $n \\in \\mathbb{N}$ such that $\\mathrm{Graph}(n) = \\mathrm{Graph}(h(n))$. 
%%note:And, moreover, we can actually *find* such an $n$.%%\n\nThat is, the \"underlying function\" of $n$ - the partial function computed by $[https://arbital.com/p/n](https://arbital.com/p/n)$ - has the same output, at every point, as the function computed by $[https://arbital.com/p/h](https://arbital.com/p/h)$.\nIf we view $h$ as a way of manipulating a program (as specified by its [https://arbital.com/p/-description_number](https://arbital.com/p/-description_number)), then this fixed-point theorem states that we can find a program whose underlying function is not changed at all by $h$.\n\nThe proof of this lemma is quite simple once the magic steps have been discovered, but it is devilishly difficult to intuit, because it involves two rather strange and confusing recursions and some self-reference.\n\nRecall the [$s_{mn}$ theorem](https://arbital.com/p/translation_lemma), which states that there is a total computable function $S$ of two variables $m, n$ such that for every $e \\in \\mathbb{N}$, we have $[e](m, n) = [S(e,m)](n)$: that is, there is a total computable way $S$ of [https://arbital.com/p/-50p](https://arbital.com/p/-50p) computable functions.\n(Strictly speaking, our Turing machines only take one argument. Therefore we should use a computable pairing scheme such as [Cantor's pairing function](https://arbital.com/p/cantor_pairing_function), so that actually $[e](m,n)$ should be interpreted as $[e](\\mathrm{pair}) (m, n))$.)\n\nThen the function which takes the pair $(e, x)$ and outputs the value of $[ h(S(e,e)) ](x)$ is computable, so it has a description number $a$, say.\n%%note:This is the first strange part: we are treating $e$ both as a description number, and as an input to $[https://arbital.com/p/e](https://arbital.com/p/e)$, when we consider $S(e,e)$.%%\n\nNow we claim that $S(a, a)$ is the $n$ we seek. 
%%note:This is the second strange part, for the same reason as $S(e,e)$ was the first; but this one is even worse, because the definition of $a$ already involves a weird recursion and we've just added another one on top.%%\nIndeed, for any $x$, $[n](x) = [S(a,a)](x)$ by definition of $n$; this is $[a](a, x)$ by the $s_{mn}$ theorem; this is $[h(S(a,a))](x)$ by definition of $[https://arbital.com/p/a](https://arbital.com/p/a)$; and that is $[h(n)](x)$ by definition of $n$.\n\nTherefore $[n](x) = [h(n)](x)$, so we have found our fixed point.\n\n%%hidden(Worked example):\n\n\nSuppose our description numbering scheme is just \"expand $n$ as a number in base $128$, and interpret the result as an [ASCII](https://en.wikipedia.org/wiki/ASCII) %%note:This is a standard, agreed-upon method of turning a number between $0$ and $128$ into a character.%% string; then interpret that string as [Python](https://en.wikipedia.org/wiki/Python_) (programming_language)) code\".\n\nThen our function $h$, whatever it may be, can be viewed just as transforming Python code.\n\nSuppose $h$ does nothing more than insert the following line of code as the second line of its input:\n\n\tx = 0\n\nSo, for instance, it takes the string \n\n\tx = 1\n\tprint(x)\n\nand returns\n\n\tx = 1\n\tx = 0\n\tprint(x)\n\nthereby changing the function computed from \"return the constant $1$\" to \"return the constant $0$\", in this case.\nNote that many other functions will not change at all: for example, those which don't contain a variable $x$ in the first place will be unchanged, because all the modification does is add in an initialisation of a variable which will never subsequently be used.\n\nThe fixed-point theorem guarantees that there is indeed a Python program which will not change at all under this modification (though in this case it's very obvious).\nIn fact the theorem *constructs* such a program; can we work out what it is?\n\nFirst of all, $S(m, n)$ can be implemented as follows.\nWe will take our Python code to be written so that their input is given in the variable `r1`, so $[e](5)$ is simply the Python code represented by $e$ but where the code-variable `r1` is initialised to $5$ first; that is, it can be found by prepending the line `r1 = 5` to the code represented by $e$.\nThen we will assume that Python comes with a function `eval` (corresponding to $S$) which takes as its input a string %%note:The string is standing in place of $m$, but we have just skipped the intermediate step of \"unpack the integer into a string\" and gone straight to assuming it is a string.%% and another argument with which `eval` initialises the variable `x` before running the string as a Python program in a separate instance of Python:\n\n\teval(\"print(r1)\", 5) # does nothing other than print the number 5\n\teval(\"print(y)\", 5) # throws an error because `y` is not defined when it comes to printing it\n\teval(\"print(6)\", 5) # prints 6, ignoring the fact that the variable `r1` is equal to `5` in the sub-instance\n\nRemember, our proof of the fixed point theorem says that the program we want has code $S(a, a)$, where $a$ takes a pair $(e, x)$ as input, and outputs $[h(S(e,e))](x)$.\nWhat is $a$ specifically here?\nWell, on the one hand we're viewing it as a string of code (because it comes as the first argument to $S$), and on the other we're viewing it as an integer (because it also comes as the second argument to $S$).\n\nAs code, `a` is the following string, where `h` is to be replaced by whatever we've already decided $h$ 
is:\n\n\teval(\"r1 = e; h(eval(r1, str_as_int(r1)))\", x)\n\nWe are assuming the existence of a function `str_as_int` which takes an ASCII string and returns the integer whose places in base 128 are the ASCII for each character of the string in turn.\n\nFor example, we have $h$ inserting the line `x = 0` as the second line, so our `a` is:\n\n\teval(\"r1 = e; x = 0; eval(r1, str_as_int(r1))\", x)\n\nAs a number, `a` is just the ASCII for this, interpreted in base 128 (i.e. a certain number which in this case happens to have 106 digits, which is why we don't give it here).\n\nThe claim of the fixed-point theorem, then, is that the following program is unchanged by $h$:\n\n\teval(\"eval(\\\"r1 = e; x = 0; eval(r1, str_as_int(r1))\\\", x)\", str_to_int(\"eval(\\\"r1 = e; x = 0; eval(r1, str_as_int(r1))\\\", x)\"))\n\nYou may recognise this as a [quining](https://arbital.com/p/322) construction.\n%%\n\n# Deducing Rice's theorem from the fixed point theorem\n\nFinally, Rice's theorem follows quickly: suppose we could decide in general whether $\\mathrm{Graph}(n) \\in A$ or not, and label by $\\iota$ the computable function which decides this (that is, whose value is $1$ if $\\mathrm{Graph}(n) \\in A$, and $0$ otherwise).\n\nSince $A$ is nonempty and proper, there are natural numbers $a$ and $b$ such that $\\mathrm{Graph}(a) \\in A$ but $\\mathrm{Graph}(b) \\not \\in A$.\nDefine the computable function $g$ which takes $n$ and outputs $a$ if $\\iota(n) = 0$, and $b$ otherwise.\n(That is, it flips its input: if its input had the property of $A$, the function $g$ outputs $b$ whose graph is not in $A$, and vice versa.\nInformally, it is the program-transformer that reads in a program, determines whether the program computes a function in $A$ or not, and transforms the program into a specific canonical example of something which has the *opposite* $A$-ness status.)\n\nBy the fixed-point theorem, we can find $n$ such that $\\mathrm{Graph}(n) = \\mathrm{Graph}(g(n))$.\n\nBut now we can ask whether $\\mathrm{Graph}(n)$ is in $A$ (and therefore whether $\\mathrm{Graph}(g(n))$ is in $A$).\n\n- If it is in $A$, then $g(n) = b$ and so $\\mathrm{Graph}(g(n)) = \\mathrm{Graph}(b)$ which is not in $A$.\n- If it is not in $A$, then $g(n) = a$ and so $\\mathrm{Graph}(g(n)) = \\mathrm{Graph}(a)$ is in $A$.\n\nWe have obtained [contradictions](https://arbital.com/p/46z) in both cases (namely that $\\mathrm{Graph}(g(n))$ is both in $A$ and not in $A$), so it must be the case that $\\iota$ does not exist after all.", "date_published": "2016-08-08T12:35:54Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["This lens proves [Rice's theorem](https://arbital.com/p/5mv) via a fixed-point theorem on computable functions."], "tags": ["B-Class"], "alias": "5t9"} {"id": "369005f24c7466120f8613c5ef28871c", "title": "Needs links", "url": "https://arbital.com/p/needs_links_meta_tag", "source": "arbital", "source_type": "text", "text": "Some pages don't have many [greenlinks](https://arbital.com/p/17f), making them less useful than they could be.", "date_published": "2016-08-08T13:15:48Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "5tb"} {"id": "86b25b7db2179bc6f4c2959e61d67b72", "title": "Algorithmic complexity", "url": "https://arbital.com/p/Kolmogorov_complexity", "source": "arbital", "source_type": "text", "text": "Algorithmic complexity is a formal measure of the minimum amount of information required to specify some message, binary string, computer program, classification rule, 
etcetera. The algorithmic complexity or Kolmogorov complexity of a message is the number of 1s and 0s required to specify a Turing machine that reproduces the message.\n\n(The relevant Wikipedia article is filed under [Kolmogorov complexity](http://en.wikipedia.org/wiki/Kolmogorov_complexity). Not to be confused with \"[computational complexity](https://arbital.com/p/49w)\", which isn't the same thing at all.)\n\nA string of a million 1s has around as much K-complexity as the number 1,000,000, since getting a Turing machine to print out exactly a million 1s and then stop, is mostly a matter of encoding the number 1,000,000. The number 1,000,000 can itself potentially be compressed further, since it's an unusually simple number of that magnitude: it's 10 to the 6th power, so if we already have a concept of 'exponentiation' or can encode it simply, we just need to encode the numbers 10 and 6, which are quite small.\n\nWhen we say that a message has high Kolmogorov complexity, we mean that it *can't be compressed beyond a certain point* (unless the 'compression algorithm' is itself large and contains much of the key information baked in). Things have high Kolmogorov complexity when they're made up of many independent facts that can't be predicted from knowing the previous facts.\n\nShakespeare's _Romeo and Juliet_ will compress by a lot using simple algorithms, because there are many words used more than once, and some words are much more frequent than others. But we are exceedingly unlikely to find, anywhere in the first trillion Turing machines, any Turing machine that prints out the exact text of this play. 2^40 is greater than a trillion, so if we consider the set of all 40-bit binary strings, it's clear we can't print out *all* of them exactly using the first trillion Turing machines. Finding a 40-bit Turing machine that printed out the exact text of _Romeo and Juliet_ would be vastly improbable.\n\nOn the other hand, it wouldn't be particularly surprising to find a small Turing machine that printed out $3\\uparrow\\uparrow\\uparrow3$ 1s and then stopped, because the algorithm for this enormous number seems simple, and easy to encode as a computer program or Turing machine.\n\nThe algorithmic complexity of a system shouldn't be confused with the total number of visible, complicated-looking details. The [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set) looks very complicated visually - you can keep zooming in using more and more detail - but there's a very simple rule that generates it, so we say the algorithmic complexity is very low. \n\nAs a corollary, a piece of a big-looking object can easily have more Kolmogorov complexity than the whole. If you zoom far down into the Mandelbrot set and isolate a particular piece of the fractal, the information of that image now includes both the Mandelbrot-generating rule and also the exact location of that particular piece.\n\nSimilarly, the Earth is much more algorithmically complex than the laws of physics, and if there's a multiverse that developed deterministically out of the laws of physics, the Earth would be much more complex than that multiverse. 
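\n\nAs a rough illustration of why "a short rule that generates everything" can be much simpler than "a program that reproduces one particular thing", here is a toy Python sketch (the function names and the example string are invented for illustration; it gestures at the idea rather than actually measuring algorithmic complexity):\n\n\tdef print_every_binary_string():\n\t    # A short program with low algorithmic complexity: given unlimited time,\n\t    # it prints every binary string, in order of length.\n\t    length = 1\n\t    while True:\n\t        for i in range(2 ** length):\n\t            print(format(i, "0" + str(length) + "b"))\n\t        length += 1\n\t\n\tdef print_one_particular_string():\n\t    # To print one *specific* long random string, the program essentially has\n\t    # to contain that string (or a generating rule plus a long index saying\n\t    # where to look), so it cannot be much shorter than the string itself.\n\t    print("10110100011101010010110111010001...")\n\nThe first program stays tiny no matter how much output it eventually produces; the second must grow roughly in step with the particular string it is required to reproduce.\n\n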
To print out the whole multiverse, you'd only need to start from the laws of physics and work forward; to print out *Earth in particular* you'd need a huge number of additional bits to locate Earth *inside* the multiverse.\n\nOr more simply, we can observe that a program that prints out all possible books in order is much simpler than a program that prints out only _Romeo and Juliet_. To put it another way, Borges' "Library of Babel" containing every possible book has far lower algorithmic complexity than an Earth library containing only some books. The moral is that the amount of visible stuff and its seeming surface complexity should not be confused with the notion of algorithmic complexity or theoretical compressibility.", "date_published": "2016-06-14T20:52:34Z", "authors": ["Jaime Sevilla Molina", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Work in progress", "Definition"], "alias": "5v"} {"id": "af34b3f8fce85319d3f5045436d8f6de", "title": "Image requested", "url": "https://arbital.com/p/image_requested_meta_tag", "source": "arbital", "source_type": "text", "text": "An editor has requested an image for this page, described in a \\.", "date_published": "2016-08-12T19:32:59Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Stub"], "alias": "5v6"} {"id": "7fc2ad2f79b4389d9249566d7d2561fc", "title": "Euclidean domains are principal ideal domains", "url": "https://arbital.com/p/euclidean_domain_is_pid", "source": "arbital", "source_type": "text", "text": "summary(Technical): Let $R$ be a [https://arbital.com/p/euclidean_domain](https://arbital.com/p/euclidean_domain). Then $R$ is a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5).\n\nA common theme in [ring theory](https://arbital.com/p/3gq) is the idea that we identify a property of the [integers](https://arbital.com/p/48l), and work out what that property means in a more general setting.\nThe idea of the [https://arbital.com/p/euclidean_domain](https://arbital.com/p/euclidean_domain) captures the fact that in $\\mathbb{Z}$, we may perform the [https://arbital.com/p/-division_algorithm](https://arbital.com/p/-division_algorithm) (which can then be used to work out [greatest common divisors](https://arbital.com/p/5mw) and other such nice things from $\\mathbb{Z}$).\nHere, we will prove that this simple property actually imposes a lot of structure on a ring: it forces the ring to be a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5), so that every [ideal](https://arbital.com/p/ideal_ring_theory) has just one generator.\n\nIn turn, this forces the ring to have [unique factorisation](https://arbital.com/p/5vk) ([proof](https://arbital.com/p/pid_implies_ufd)), so in some sense the [Fundamental Theorem of Arithmetic](https://arbital.com/p/5rh) (i.e. 
the statement that $\\mathbb{Z}$ is a unique factorisation domain) is true entirely because the division algorithm works in $\\mathbb{Z}$.\n\nThis result is essentially why we care about Euclidean domains: because if we know a Euclidean function for an integral domain, we have a very easy way of recognising that the ring is a principal ideal domain.\n\n# Formal statement\n\nLet $R$ be a [https://arbital.com/p/euclidean_domain](https://arbital.com/p/euclidean_domain).\nThen $R$ is a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5).\n\n# Proof\n\nThis proof essentially mirrors the first proof one might find in the concrete case of the integers, if one sat down to discover an integer-specific proof; but we cast it into slightly different language using an equivalent definition of "ideal", because it is a bit cleaner that way.\nIt is a very useful exercise to work through the proof, using $\\mathbb{Z}$ instead of the general ring $R$ and using "size" %%note:That is, if $n > 0$ then the size is $n$; if $n < 0$ then the size is $-n$. We just throw away the sign.%% as the Euclidean function.\n\nLet $R$ be a Euclidean domain, and say $\\phi: R \\setminus \\{ 0 \\} \\to \\mathbb{N}^{\\geq 0}$ is a Euclidean function.\nThat is,\n\n- if $a$ divides $b$ then $\\phi(a) \\leq \\phi(b)$;\n- for every $a$, and every $b$ not dividing $a$, we can find $q$ and $r$ such that $a = qb+r$ and $\\phi(r) < \\phi(b)$.\n\nWe need to show that every [ideal](https://arbital.com/p/ideal_ring_theory) is principal, so take an ideal $I \\subseteq R$.\nWe'll view $I$ as the [kernel](https://arbital.com/p/5r6) of a [homomorphism](https://arbital.com/p/ring_homomorphism) $\\alpha: R \\to S$; recall that this is the proper way to think of ideals. ([Proof of the equivalence.](https://arbital.com/p/5r9))\nThen we need to show that there is some $r \\in R$ such that $\\alpha(x) = 0$ if and only if $x$ is a multiple of $r$.\n\nIf $\\alpha$ only sends $0$ to $0$ (that is, everything else doesn't get sent to $0$), then we're immediately done: just let $r = 0$.\n\nOtherwise, $\\alpha$ sends something nonzero to $0$; among all such elements, choose $r$ to have minimal $\\phi$.\nWe claim that this $r$ works.\n\nIndeed, let $x$ be a multiple of $r$, so we can write it as $ar$, say.\nThen $\\alpha(ar) = \\alpha(a) \\alpha(r) = \\alpha(a) \\times 0 = 0$.\nTherefore multiples of $r$ are sent by $\\alpha$ to $0$.\n\nConversely, if $x$ is *not* a multiple of $r$, then we can write $x = ar+b$ where $\\phi(b) < \\phi(r)$ and $b$ is nonzero. %%note: The fact that we can do this is part of the definition of the Euclidean function $\\phi$.%%\nThen $\\alpha(x) = \\alpha(ar)+\\alpha(b)$; we already have $\\alpha(r) = 0$, so $\\alpha(x) = \\alpha(b)$.\nBut $b$ has a smaller $\\phi$-value than $r$ does, and we picked $r$ to have the *smallest* $\\phi$-value among everything that $\\alpha$ sent to $0$; so $\\alpha(b)$ cannot be $0$, and hence nor can $\\alpha(x)$.\n\nSo we have shown that $\\alpha(x) = 0$ if and only if $x$ is a multiple of $r$, as required.\n\n# The converse is false\n\nThere do exist principal ideal domains which are not Euclidean domains: $\\mathbb{Z}\\left[\\frac{1+\\sqrt{-19}}{2}\\right]$ is an example. 
([Proof.](http://www.maths.qmul.ac.uk/~raw/MTH5100/PIDnotED.pdf))", "date_published": "2016-08-14T15:43:26Z", "authors": ["Patrick Stevens"], "summaries": ["The [https://arbital.com/p/-division_algorithm](https://arbital.com/p/-division_algorithm) is a fundamental property of the [integers](https://arbital.com/p/48l), and it turns out that almost all of the nicest properties of the integers stem from the division algorithm. All [rings](https://arbital.com/p/3gq) which have the division algorithm (that is, [Euclidean domains](https://arbital.com/p/euclidean_domain)) are [principal ideal domains](https://arbital.com/p/5r5): they have the property that every [ideal](https://arbital.com/p/ideal_ring_theory) has just one generator."], "tags": [], "alias": "5vj"} {"id": "0c665782e1bcc694a453e95890fdf430", "title": "Unique factorisation domain", "url": "https://arbital.com/p/unique_factorisation_domain", "source": "arbital", "source_type": "text", "text": "summary: A [ring](https://arbital.com/p/3gq) $R$ is a *unique factorisation domain* if an analog of the [https://arbital.com/p/5rh](https://arbital.com/p/5rh) holds in it. The condition is as follows: every nonzero element, which does not have a multiplicative inverse, must be expressible as a product of [irreducibles](https://arbital.com/p/5m1), and the expression must be unique if we do not care about ordering or about multiplying by elements which have multiplicative inverses.\n\n[Ring theory](https://arbital.com/p/3gq) is the art of extracting properties from the [integers](https://arbital.com/p/48l) and working out how they interact with each other.\nFrom this point of view, a *unique factorisation domain* is a ring in which the integers' [https://arbital.com/p/5rh](https://arbital.com/p/5rh) holds.\n\nThere have been various incorrect \"proofs\" of Fermat's Last Theorem throughout the ages; it turns out that if we assume the false \"fact\" that all subrings of [$\\mathbb{C}$](https://arbital.com/p/4zw) are unique factorisation domains, then FLT is not particularly difficult to prove.\nThis is an example of where abstraction really is helpful: having a name for the concept of a UFD, and a few examples, makes it clear that it is not a trivial property and that it does need to be checked whenever we try and use it.\n\n# Formal statement\n\nLet $R$ be an [https://arbital.com/p/-5md](https://arbital.com/p/-5md).\nThen $R$ is a *unique factorisation domain* if every nonzero non-[unit](https://arbital.com/p/5mg) element of $R$ may be expressed as a product of [irreducibles](https://arbital.com/p/5m1), and moreover this expression is unique up to reordering and multiplying by units.\n\n# Why ignore units?\n\nWe must set things up so that we don't care about units in the factorisations we discover.\nIndeed, if $u$ is a unit %%note:That is, it has a multiplicative inverse $u^{-1}$.%%, then $p \\times q$ is always equal to $(p \\times u) \\times (q \\times u^{-1})$, and this \"looks like\" a different factorisation into irreducibles.\n($p \\times u$ is irreducible if $p$ is irreducible and $u$ is a unit.)\nThe best we could possibly hope for is that the factorisation would be unique if we ignored multiplying by invertible elements, because those we may always forget about.\n\n## Example\n\nIn $\\mathbb{Z}$, the units are precisely $1$ and $-1$.\nWe have that $-10 = -1 \\times 5 \\times 2$ or $-5 \\times 2$ or $5 \\times -2$; we need these to be \"the same\" somehow.\n\nThe way we make them be \"the same\" is to insist that the $5$ and $-5$ are \"the 
same\" and the $2$ and $-2$ are \"the same\" (because they only differ by multiplication of the unit $-1$), and to note that $-1$ is not irreducible (because irreducibles are specifically defined to be non-unit) so $-1 \\times 5 \\times 2$ is not actually a factorisation into irreducibles.\n\nThat way, $-1 \\times 5 \\times 2$ is not a valid decomposition anyway, and $-5 \\times 2$ is just the same as $5 \\times -2$ because each of the irreducibles is the same up to multiplication by units.\n\n# Examples\n\n- Every [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5) is a unique factorisation domain. ([Proof.](https://arbital.com/p/pid_implies_ufd)) This fact is not trivial! Therefore $\\mathbb{Z}$ is a UFD, though we can also prove this directly; this is the [https://arbital.com/p/5rh](https://arbital.com/p/5rh).\n- $\\mathbb{Z}[https://arbital.com/p/-\\sqrt{3}](https://arbital.com/p/-\\sqrt{3})$ is *not* a UFD. Indeed, $4 = 2 \\times 2$ but also $(1+\\sqrt{-3})(1-\\sqrt{-3})$; these are both decompositions into irreducible elements. (See the page on [irreducibles](https://arbital.com/p/5m1) for a proof that $2$ is irreducible; the same proof can be adapted to show that $1 \\pm \\sqrt{-3}$ are both irreducible.)\n\n# Properties\n\n- If it is hard to test for uniqueness up to reordering and multiplying by units, there is an easier but equivalent condition to check: an integral domain is a unique factorisation domain if and only if every element can be written (not necessarily uniquely) as a product of irreducibles, and all irreducibles are [prime](https://arbital.com/p/5m2). ([Proof.](https://arbital.com/p/alternative_condition_for_ufd))", "date_published": "2016-08-14T12:08:00Z", "authors": ["Patrick Stevens"], "summaries": ["An [https://arbital.com/p/-5md](https://arbital.com/p/-5md) $R$ is said to be a *unique factorisation domain* if every nonzero non-[unit](https://arbital.com/p/5mg) element of $R$ may be written as a product of [irreducibles](https://arbital.com/p/5m1), and moreover this product is unique up to reordering and multiplying by units."], "tags": [], "alias": "5vk"} {"id": "1e8ee1286475b4e07e2986fb222ab04d", "title": "Factorial", "url": "https://arbital.com/p/5wv", "source": "arbital", "source_type": "text", "text": "# Three objects in a line\n\nHow many ways are there to arrange three objects in a line?\n(I'll use numbers $1,2,3$ to represent the objects; pretend I painted the numbers onto the respective objects.)\nFor example, $1,2,3$ is one way to arrange the three objects; $1,3,2$ is another; and so on.\n\nTo be completely concrete, let's say I have three same-sized cubes and I have three same-sized boxes to place them in; the boxes are arranged into one row and can't be moved (they're too heavy), but the cubes are made of balsa wood and can be moved freely.\nThe number $1$ is painted on one cube; $2$ on another; and $3$ on the third.\nHow many ways are there to arrange the cubes into the fixed boxes?\n\nHave a think about this, then reveal the answer and a possible way of getting the answer.\n\n%%hidden(Show solution):\nThe total number of ways is $6$.\nThe complete list of possible options is:\n\n- $1,2,3$\n- $1,3,2$\n- $2,1,3$\n- $2,3,1$\n- $3,1,2$\n- $3,2,1$\n\nI've listed them in an order that hopefully makes it fairly easy to see that there are no more possibilities.\nFirst I listed every possible way $1$ could come at the beginning; then every possible way $2$ could; then every possible way $3$ could.\n\nIf you got the answer $6$ through some *other* 
method, that's (probably) fine: there are many ways to think about this problem.\n%%\n\nHow about arranging four objects in a line? (That is, four cubes into four fixed boxes.)\n\nThe total number of ways is $24$.\nThe complete list of possible options is:\n\n- $1,2,3,4$\n- $1,2,4,3$\n- $1,3,2,4$\n- $1,3,4,2$\n- $1,4,2,3$\n- $1,4,3,2$\n- $2,1,3,4$\n\n… I got bored.\n\nHow could we do this *without* listing all the possibilities?\nI promise the answer really is $24$, but you should think about this for a bit before continuing.\n\n# How to arrange four objects\n\nThere's an insight that makes everything much easier.\n\n> Once we've placed a cube into the leftmost box, all we have left to do is fit the remaining three cubes into the remaining three boxes.\n\nWe've already seen above that there are $6$ ways to arrange three cubes among three boxes!\n\nSo the total number of ways of doing four cubes among four boxes is:\n\n- $6$ ways where the leftmost box contains cube $1$ (and I actually listed all of those above before I got bored);\n- $6$ ways where the leftmost box contains cube $2$;\n- $6$ ways where the leftmost box contains cube $3$;\n- $6$ ways where the leftmost box contains cube $4$.\n\nThat comes to $24$ in total.\n\n# Interlude: Exercise\n\nCan you work out how many ways there are to arrange *five* cubes into *five* fixed boxes?\nTake a hint from how we did four boxes above.\n\n%%hidden(Show solution):\nThere are $120$ ways to do this.\nRemember, there are $24$ ways to arrange four cubes among four boxes.\n\nThen to arrange five cubes among five boxes:\n\n- $24$ ways where the leftmost box contains cube $1$\n- $24$ ways where the leftmost box contains cube $2$\n- $24$ ways where the leftmost box contains cube $3$\n- $24$ ways where the leftmost box contains cube $4$\n- $24$ ways where the leftmost box contains cube $5$\n\nThat comes to $120$ in total.\n%%\n\n# In general\n\nOK, that was all well and good.\nBut if we didn't already know how to arrange four objects into four boxes, how could we jump straight to arranging five objects into five boxes?\n\nWell, you might have noticed a pattern already.\n\n- To arrange five boxes, we added the four-boxes number to itself five times; that is, we multiplied the four-boxes number by $5$.\n- To arrange four boxes, we added the three-boxes number to itself four times; that is, we multiplied the three-boxes number by $4$.\n\nPerhaps you can see that this will *always* work: to arrange $n$ boxes, we add the $n-1$-boxes number to itself $n$ times. That is, we multiply the $n-1$-boxes number by $n$.\nIndeed, there are $n$ possible ways to fill the leftmost box %%note:We can do it with the number $1$, or the number $2$, or… or the number $n$; that's $n$ ways.%% and once we've done that, there are \"the $n-1$-boxes number\" ways to fill the remaining $n-1$ boxes.\n\nBut this still doesn't help us jump straight to how to arrange five objects into five boxes.\nHere comes the clever bit.\n\nLet's write $5!$ (with an exclamation mark) for the number that is \"how many ways to arrange five objects into five boxes\". %%note:We already know this number is actually $120$.%%\nSimilarly, $4!$ is \"how many ways to arrange four objects into four boxes\", and in general $n!$ is \"how many ways to arrange $n$ objects into $n$ boxes\".\n\nThen the patterns we noted earlier become:\n\n- $5! = 5 \\times 4!$\n- $4! 
= 4 \\times 3!$\n\n(Notice how much cleaner that is than \"To arrange five boxes, we added the four-boxes number to itself five times; that is, we multiplied the four-boxes number by $5$\". This is why mathematicians use notation: to make everything easier to say.)\n\nAnd the general rule is: %%note: Being careful to put $n-1$ in brackets, because otherwise it looks like $n \\times n - 1!$, which means $(n \\times n)-1!$ according to the [https://arbital.com/p/-54s](https://arbital.com/p/-54s).%% $$n! = n \\times (n-1)!$$\n\nOK, we have\n\n- $n! = n \\times (n-1)!$, and \n- $(n-1)! = (n-1) \\times (n-2)!$, and \n- $(n-2)! = (n-2) \\times (n-3)!$, and so on.\n\nSo $$n! = n \\times (n-1)! = n \\times (n-1) \\times (n-2)! = n \\times (n-1) \\times (n-2) \\times (n-3)!$$\nand so on.\n\nIf we just keep going, we'll eventually reach $$n \\times (n-1) \\times (n-2) \\times \\dots \\times 5 \\times 4 \\times 3!$$\nand we already know that $3! = 6$, which I'll write as $3 \\times 2 \\times 1$ for reasons which are about to become obvious.\n\nSo we have the following formula, which is how we define the **factorial**:\n\n$$n! = n \\times (n-1) \\times \\dots \\times 4 \\times 3 \\times 2 \\times 1$$\n\n\"$n!$\" is read out loud as \"$n$ factorial\", and it means \"the number of ways to arrange $n$ objects into any order\".\n\n# Edge cases\n\nWe've seen $3!$, but never $2!$ or $1!$.\n\n- It's easy to see that there are two ways to arrange two objects into an order: $1,2$ or $2,1$.\nSo $2! = 2$.\n- It's also easy (if a bit weird) to see that there is just one way of arranging one object into an order: $1$ is the only possible way. So $1! = 1$.\n- How about arranging *no* objects into an order? This is even weirder, but the answer is $1$. There *is* a way to arrange no objects into an order: just don't put down any objects. This is something which you should just accept without thinking about it too hard, and it almost never crops up. Anyway, $0! = 1$.", "date_published": "2016-08-18T05:05:29Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["The *factorial* of a number $n$ is how we describe \"how many different ways we can arrange $n$ objects in a row\"."], "tags": ["C-Class"], "alias": "5wv"} {"id": "a8fd452331b7005e772ec6cade0d35d4", "title": "Transcendental number", "url": "https://arbital.com/p/transcendental_number", "source": "arbital", "source_type": "text", "text": "summary: A *transcendental* number $z$ is one such that there is no (nonzero) [https://arbital.com/p/-polynomial](https://arbital.com/p/-polynomial) function which outputs $0$ when given $z$ as input. $\\frac{1}{2}$, $\\sqrt{6}$, $i$ and $e^{i \\pi/2}$ are not transcendental; $\\pi$ and $e$ are both transcendental.\n\nA [real](https://arbital.com/p/4bc) or [complex](https://arbital.com/p/4zw) number is said to be *transcendental* if it is not the root of any (nonzero) [https://arbital.com/p/-48l](https://arbital.com/p/-48l)-coefficient [https://arbital.com/p/-polynomial](https://arbital.com/p/-polynomial).\n(\"Transcendental\" means \"not [algebraic](https://arbital.com/p/algebraic_number)\".)\n\n# Examples and non-examples\n\nMany of the most interesting numbers are *not* transcendental.\n\n- Every integer is *not* transcendental (i.e. 
is algebraic): the integer $n$ is the root of the integer-coefficient polynomial $x-n$.\n- Every [rational](https://arbital.com/p/4zq) is algebraic: the rational $\\frac{p}{q}$ is the root of the integer-coefficient polynomial $qx - p$.\n- $\\sqrt{2}$ is algebraic: it is a root of $x^2-2$.\n- $i$ is algebraic: it is a root of $x^2+1$.\n- $e^{i \\pi/4}$ (or $\\frac{\\sqrt{2}}{2} + \\frac{\\sqrt{2}}{2}i$) is algebraic: it is a root of $x^4+1$.\n\nHowever, $\\pi$ and $e$ are both transcendental. (Both of these are difficult to prove.)\n\n# Proof that there is a transcendental number\n\nThere is a very sneaky proof that there is some transcendental real number, though this proof doesn't give us an example.\nIn fact, the proof will tell us that "[almost all](https://arbital.com/p/almost_every)" real numbers are transcendental.\n(The same proof can be used to demonstrate the existence of [irrational numbers](https://arbital.com/p/54z).)\n\nIt is a fairly easy fact that the *non*-transcendental numbers (that is, the algebraic numbers) form a [https://arbital.com/p/-countable](https://arbital.com/p/-countable) subset of the real numbers.\nIndeed, the [Fundamental Theorem of Algebra](https://arbital.com/p/fundamental_theorem_of_algebra) states that every polynomial of degree $n$ has exactly $n$ complex roots (if we count them with [multiplicity](https://arbital.com/p/multiplicity), so that $x^2+2x+1$ has the "two" roots $x=-1$ and $x=-1$).\nThere are only countably many integer-coefficient polynomials, and each has only finitely many complex roots (and therefore only finitely many—possibly $0$—*real* roots), so there can only be countably many numbers which are roots of *any* integer-coefficient polynomial.\n\nBut there are uncountably many reals ([proof](https://arbital.com/p/reals_are_uncountable)), so there must be some real (indeed, uncountably many!) which is not algebraic.\nThat is, there are uncountably many transcendental numbers.\n\n# Explicit construction of a transcendental number", "date_published": "2016-08-20T19:59:36Z", "authors": ["Chris Barnett", "Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": ["A [real](https://arbital.com/p/4bc) or [complex](https://arbital.com/p/4zw) number $z$ is said to be *transcendental* if there is no (nonzero) [https://arbital.com/p/-48l](https://arbital.com/p/-48l)-coefficient [https://arbital.com/p/-polynomial](https://arbital.com/p/-polynomial) which has $z$ as a [root](https://arbital.com/p/root_of_polynomial)."], "tags": [], "alias": "5wx"} {"id": "2e51db9c3a6314448375e66324e717d8", "title": "Ordered field", "url": "https://arbital.com/p/ordered_field", "source": "arbital", "source_type": "text", "text": "An **ordered field** is an [https://arbital.com/p/-55j](https://arbital.com/p/-55j) with the additional property that [https://arbital.com/p/-division](https://arbital.com/p/-division) by nonzero elements is possible, making it a [field](https://arbital.com/p/481).", "date_published": "2016-08-18T01:54:21Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Joe Zeng"], "summaries": [], "tags": ["Formal definition", "Needs parent", "Stub"], "alias": "5x3"} {"id": "4f1891d80ffeb1992a6bd6686a2b74d6", "title": "Placeholder", "url": "https://arbital.com/p/placeholder_meta_tag", "source": "arbital", "source_type": "text", "text": "This is an empty page created for a [parent](https://arbital.com/p/3n), [requisite](https://arbital.com/p/1ln), or [teaches](https://arbital.com/p/3jg) relationship. 
If you know about this topic and want to help others understand it, you're welcome to [fill in this gap](https://arbital.com/edit/) in [Arbital's knowledge network](https://arbital.com/p/1sm)!\n\nContrast: \n\n* [https://arbital.com/p/72](https://arbital.com/p/72), for tiny pages containing only a paragraph or a couple of sentences of text.\n* [https://arbital.com/p/5xq](https://arbital.com/p/5xq), for outlines of pages.\n\n**[Quality scale](https://arbital.com/p/4yg)**\n\n* [https://arbital.com/p/4ym](https://arbital.com/p/4ym)\n* [https://arbital.com/p/4gs](https://arbital.com/p/4gs)\n* [https://arbital.com/p/5xs](https://arbital.com/p/5xs)\n* [https://arbital.com/p/5xq](https://arbital.com/p/5xq)\n* [https://arbital.com/p/72](https://arbital.com/p/72)\n* [https://arbital.com/p/3rk](https://arbital.com/p/3rk)\n* [https://arbital.com/p/4y7](https://arbital.com/p/4y7)\n* [https://arbital.com/p/4yd](https://arbital.com/p/4yd)\n* [https://arbital.com/p/4yf](https://arbital.com/p/4yf)\n* [https://arbital.com/p/4yl](https://arbital.com/p/4yl)", "date_published": "2016-10-11T13:38:38Z", "authors": ["Eric Bruylant", "Jaime Sevilla Molina"], "summaries": [], "tags": ["Stub"], "alias": "5xs"} {"id": "8c079aede547fdf3489081a31b9db714", "title": "LaTeX", "url": "https://arbital.com/p/LaTeX", "source": "arbital", "source_type": "text", "text": "**[Placeholder](https://arbital.com/p/5xs)**", "date_published": "2016-08-20T10:35:55Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Placeholder"], "alias": "5xw"} {"id": "769b9bb2bd7f31080f711bc4ea902d14", "title": "Mathematical object", "url": "https://arbital.com/p/mathematical_object", "source": "arbital", "source_type": "text", "text": "**[https://arbital.com/p/5xs](https://arbital.com/p/5xs)**", "date_published": "2016-08-20T10:47:45Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Placeholder"], "alias": "5xx"} {"id": "6b6657eb28aba58d3e7779818afd6b1b", "title": "Element", "url": "https://arbital.com/p/element_mathematics", "source": "arbital", "source_type": "text", "text": "**[https://arbital.com/p/5xs](https://arbital.com/p/5xs)**", "date_published": "2016-08-20T11:16:43Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": [], "alias": "5xy"} {"id": "600707bf2a4295d7b7c04a47d599fcd8", "title": "Proof technique", "url": "https://arbital.com/p/proof_technique", "source": "arbital", "source_type": "text", "text": "**[https://arbital.com/p/5xs](https://arbital.com/p/5xs)**", "date_published": "2016-08-20T11:25:46Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["Placeholder"], "alias": "5xz"} {"id": "5d2ee5094ca4a0ed37c0d20d17f8040b", "title": "Turing machine: External resources", "url": "https://arbital.com/p/turing_machine_external_resources", "source": "arbital", "source_type": "text", "text": "* [Wikipedia](https://en.wikipedia.org/wiki/Turing_machine)\n* [Wolfram MathWorld](http://mathworld.wolfram.com/TuringMachine.html)\n* [Stanford Encyclopedia of Philosophy](http://plato.stanford.edu/entries/turing-machine/)", "date_published": "2016-08-20T21:22:35Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["External resources"], "alias": "5y6"} {"id": "a1800c367f0967f4f09fd134cc7ee9f5", "title": "Disjoint union of sets", "url": "https://arbital.com/p/set_disjoint_union", "source": "arbital", "source_type": "text", "text": "summary(Technical): The phrase \"disjoint union\" is used to mean one of two slightly different operations on [sets](https://arbital.com/p/3jz). 
It indicates either \"just take the [union](https://arbital.com/p/5s8), but also notice that we've been careful to ensure that the sets have empty [intersection](https://arbital.com/p/5sb)\", or \"perform a specific operation on each set to ensure that the resulting sets have empty intersection, and then take the union\".\n\nThe *disjoint union* of two [sets](https://arbital.com/p/3jz) is just the [union](https://arbital.com/p/5s8), but with the additional information that the two sets also don't have any elements in common. That is, we can use the phrase \"disjoint union\" to indicate that we've taken the union of two sets which have empty [intersection](https://arbital.com/p/5sb). The phrase can also be used to indicate a very slightly different operation: \"do something to the elements of each set to make sure they don't overlap, and then take the union\".\n\n# Definition of the disjoint union\n\n\"Disjoint union\" can mean one of two things:\n\n- The simple [union](https://arbital.com/p/5s8), together with the assertion that the two sets don't overlap;\n- The operation \"do something to the elements of each set to make sure they don't overlap, and then take the union\".\n\n(Mathematicians usually let context decide which of these meanings is intended.)\n\nThe disjoint union has the symbol $\\sqcup$: so the disjoint union of sets $A$ and $B$ is $A \\sqcup B$.\n\n## The first definition\n\nLet's look at $A = \\{6,7\\}$ and $B = \\{8, 9\\}$.\nThese two sets don't overlap: no element of $A$ is in $B$, and no element of $B$ is in $A$.\nSo we can announce that the union of $A$ and $B$ (that is, the set $\\{6,7,8,9\\}$) is in fact a *disjoint* union.\n\nIn this instance, writing $A \\sqcup B = \\{6,7,8,9\\}$ is just giving the reader an extra little hint that $A$ and $B$ are disjoint; I could just have written $A \\cup B$, and the formal meaning would be the same.\nFor the purposes of the first definition, think of $\\sqcup$ as $\\cup$ but with a footnote reading \"And, moreover, the union is disjoint\".\n\nAs a non-example, we could *not* legitimately write $\\{1,2\\} \\sqcup \\{1,3\\} = \\{1,2,3\\}$, even though $\\{1,2\\} \\cup \\{1,3\\} = \\{1,2,3\\}$; this is because $1$ is in both of the sets we are unioning.\n\n## The second definition\n\nThis is the more interesting definition, and it requires some fleshing out.\n\nLet's think about $A = \\{6,7\\}$ and $B = \\{6,8\\}$ (so the two sets overlap).\nWe want to massage these two sets so that they become disjoint, but are somehow \"still recognisably $A$ and $B$\".\n\nThere's a clever little trick we can do.\nWe tag every member of $A$ with a little note saying \"I'm in $A$\", and every member of $B$ with a note saying \"I'm in $B$\".\nTo turn this into something that fits into set theory, we tag an element $a$ of $A$ by putting it in an ordered pair with the number $1$: $(a, 1)$ is \"$a$ with its tag\".\nThen our massaged version of $A$ is the set $A'$ consisting of all the elements of $A$, but where we tag them first:\n$$A' = \\{ (a, 1) : a \\in A \\}$$\n\nNow, to tag the elements of $B$ in the same way, we should avoid using the tag $1$ because that means \"I'm in $A$\"; so we will use the number $2$ instead.\nOur massaged version of $B$ is the set $B'$ consisting of all the elements of $B$, but we tag them first as well:\n$$B' = \\{ (b,2) : b \\in B \\}$$\n\nNotice that $A$ [bijects](https://arbital.com/p/499) with $A'$ %%note: Indeed, a bijection from $A$ to $A'$ is the map $a \\mapsto (a,1)$.%%, and $B$ bijects with $B'$, so we've got 
two sets which are \"recognisably $A$ and $B$\".\n\nBut magically $A'$ and $B'$ are disjoint, because everything in $A'$ is a tuple with second element equal to $1$, while everything in $B'$ is a tuple with second element equal to $2$.\n\nWe define the *disjoint union of $A$ and $B$* to be $A' \\sqcup B'$ (where $\\sqcup$ now means the first definition: the ordinary union but where we have the extra information that the two sets are disjoint).\nThat is, \"make the sets $A$ and $B$ disjoint, and then take their union\".\n\n# Examples\n\n## $A = \\{6,7\\}$, $B=\\{6,8\\}$\nTake a specific example where $A = \\{6,7\\}$ and $B=\\{6,8\\}$.\nIn this case, it only makes sense to use $\\sqcup$ in the second sense, because $A$ and $B$ overlap (they both contain the element $6$).\n\nThen $A' = \\{ (6, 1), (7, 1) \\}$ and $B' = \\{ (6, 2), (8, 2) \\}$, and the disjoint union is $$A \\sqcup B = \\{ (6,1), (7,1), (6,2), (8,2) \\}$$\n\nNotice that $A \\cup B = \\{ 6, 7, 8 \\}$ has only three elements, because $6$ is in both $A$ and $B$ and that information has been lost on taking the union.\nOn the other hand, the disjoint union $A \\sqcup B$ has the required four elements because we've retained the information that the two $6$'s are \"different\": they appear as $(6,1)$ and $(6,2)$ respectively.\n\n## $A = \\{1,2\\}$, $B = \\{3,4\\}$\n\nIn this example, the notation $A \\sqcup B$ is slightly ambiguous, since $A$ and $B$ are disjoint already.\nDepending on context, it could either mean $A \\cup B = \\{1,2,3,4\\}$, or it could mean $A' \\cup B' = \\{(1,1), (2,1), (3,2), (4,2) \\}$ (where $A' = \\{(1,1), (2,1)\\}$ and $B' = \\{(3,2), (4,2) \\}$).\nIt will usually be clear which of the two senses is meant; the former is more common in everyday maths, while the latter is usually intended in set theory.\n\n## Exercise\n\nWhat happens if $A = B = \\{6,7\\}$?\n\n%%hidden(Show): Only the second definition makes sense.\n\nThen $A' = \\{(6,1), (7,1)\\}$ and $B' = \\{(6,2), (7,2)\\}$, so $$A \\sqcup B = \\{(6,1),(7,1),(6,2),(7,2)\\}$$\nwhich has four elements.%%\n\n## $A = \\mathbb{N}$, $B = \\{ 1, 2, x \\}$\n\nLet $A$ be the set $\\mathbb{N}$ of [natural numbers](https://arbital.com/p/45h) including $0$, and let $B$ be the set $\\{1,2,x\\}$ containing two natural numbers and one symbol $x$ which is not a natural number.\n\nThen $A \\sqcup B$ only makes sense under the second definition; it is the union of $A' = \\{ (0,1), (1,1), (2,1), (3,1), \\dots\\}$ and $B' = \\{(1,2), (2,2), (x,2)\\}$, or $$\\{(0,1), (1,1),(2,1),(3,1), \\dots, (1,2),(2,2),(x,2)\\}$$\n\n## $A = \\mathbb{N}$, $B = \\{x, y\\}$\n\nIn this case, again the notation $A \\sqcup B$ is ambiguous; it could mean $\\{ 0,1,2,\\dots, x, y \\}$, or it could mean $\\{(0,1), (1,1), (2,1), \\dots, (x,2), (y,2)\\}$.\n\n# Multiple operands\n\nWe can generalise the disjoint union so that we can write $A \\sqcup B \\sqcup C$ instead of just $A \\sqcup B$.\n\nTo use the first definition, the generalisation is easy to formulate: it's just $A \\cup B \\cup C$, but with the extra information that $A$, $B$ and $C$ are pairwise disjoint (so there is nothing in any of their intersections: $A$ and $B$ are disjoint, $B$ and $C$ are disjoint, and $A$ and $C$ are disjoin).\n\nTo use the second definition, we just tag each set again: let $A' = \\{(a, 1) : a \\in A \\}$, $B' = \\{ (b, 2) : b \\in B \\}$, and $C' = \\{ (c, 3) : c \\in C \\}$.\nThen $A \\sqcup B \\sqcup C$ is defined to be $A' \\cup B' \\cup C'$.\n\n## Infinite unions\nIn fact, both definitions generalise even 
further, to unions over arbitrary sets.\nIndeed, in the first sense we can define $$\\bigsqcup_{i \\in I} A_i = \\bigcup_{i \\in I} A_i$$ together with the information that no pair of $A_i$ intersect.\n\nIn the second sense, we can define $$\\bigsqcup_{i \\in I} A_i = \\bigcup_{i \\in I} A'_i$$\nwhere $A'_i = \\{ (a, i) : a \\in A_i \\}$.\n\nFor example, $$\\bigsqcup_{n \\in \\mathbb{N}} \\{0, 1,2,\\dots,n\\} = \\{(0,0)\\} \\cup \\{(0,1), (1,1) \\} \\cup \\{ (0,2), (1,2), (2,2)\\} \\cup \\dots = \\{ (n, m) : n \\leq m \\}$$\n\n# Why are there two definitions?\n\nThe first definition is basically just a notational convenience: it saves a few words when saying \"… and moreover the sets are pairwise disjoint\".\n\nThe real meat of the idea is the second definition, which provides a way of forcing the sets to be disjoint.\nIt's not necessarily the *only* way we could coherently define a disjoint union (since there's more than one way we could have tagged the sets; if nothing else, $A \\sqcup B$ could be defined the other way round, as $A' \\cup B'$ where $A' = \\{ (a, 2) : a \\in A \\}$ and $B' = \\{ (b,1) : b \\in B \\}$, swapping the tags).\nBut it's the one we use by convention.\nUsually when we're using the second definition we don't much care exactly how we force the sets to be disjoint; we only care that there *is* such a way.\n(For comparison, there is [more than one way](https://arbital.com/p/ordered_pair_formal_definitions) to define the ordered pair in the [https://arbital.com/p/ZF](https://arbital.com/p/ZF) set theory, but we almost never care really which exact definition we use; only that there is a definition that has the properties we want from it.)", "date_published": "2016-10-23T16:13:25Z", "authors": ["Eric Rogstad", "David Roldán Palacio", "Patrick Stevens"], "summaries": ["The *disjoint union* of two [sets](https://arbital.com/p/3jz) is just the [union](https://arbital.com/p/5s8), but with the additional information that the two sets also don't have any elements in common. That is, we can use the phrase \"disjoint union\" to indicate that we've taken the union of two sets which have empty [intersection](https://arbital.com/p/5sb). The phrase can also be used to indicate a very slightly different operation: \"do something to the elements of each set to make sure they don't overlap, and then take the union\"."], "tags": ["C-Class"], "alias": "5z9"} {"id": "227b54624fa3de7d13d03aa8dee06ac7", "title": "Empty set", "url": "https://arbital.com/p/empty_set", "source": "arbital", "source_type": "text", "text": "The empty set is the set having no members. It is usually denoted as $\\emptyset$. Whatever object is considered, it can't be a member of $\\emptyset$. It might be useful in the beginning to think about the empty set as an empty box. It has nothing inside it, but it still does exist.\n\nFormally, the existence of the empty set is asserted by the __Empty Set Axiom__:\n\n$$\\exists B \\forall x : x∉B$$\n\nThe empty set axiom itself does not postulate the uniqueness of $\\emptyset$. However, this fact is easy to prove using the [axiom of extensionality](https://arbital.com/p/618).\nConsider sets $A$ and $B$ such that both $\\forall x : x∉A$ and $\\forall x: x∉B$. %%note: That is, suppose we had two empty sets.%% Remember that the extensionality axiom tells us that if we can show $\\forall x : (x ∈ A \\Leftrightarrow x ∈ B)$, then we may deduce that $A=B$. 
\nIn this case, for every $x$, both parts of the statement $(x ∈ A \\Leftrightarrow x ∈ B)$ are false: we have $x \\not \\in A$ and $x \\not \\in B$.\nTherefore the [https://arbital.com/p/-46m](https://arbital.com/p/-46m) relation is true.\n\nThe existence of the empty set can be derived from the existence of any other set using the axiom schema of bounded comprehension, which states that for any formula $\\phi$ in the language of set theory, $\\forall a \\exists b \\forall x : x \\in b \\Leftrightarrow (x \\in a \\wedge \\phi(x))$. In particular, taking $\\phi$ to be $\\bot$, the always-false formula, we have that $\\forall a \\exists b \\forall x : x \\in b \\Leftrightarrow (x \\in a \\wedge \\bot)$. Since $x \\in b \\Leftrightarrow (x \\in a \\wedge \\bot)$ is logically equivalent to $x \\in b \\Leftrightarrow \\bot$ and hence to $x \\notin b$, the quantified statement is logically equivalent to $\\forall a \\exists b \\forall x : x \\notin b$, and as soon as we have the existence of at least one set to use as $a$, we obtain the Empty Set Axiom above.\n\nIt is worth noting that the empty set is itself a single object. One can construct a set *containing* the empty set: $\\{\\emptyset\\}$. $\\{\\emptyset\\} \\not= \\emptyset$, because $\\emptyset ∈ \\{\\emptyset\\}$ but $\\emptyset ∉ \\emptyset$; so the two sets have different elements and therefore cannot be equal by extensionality.\n%%note: In terms of the box metaphor above, $\\{\\emptyset\\}$ is a box, containing an empty box, whilst $\\emptyset$ is just an empty box%%\n\nAnother way to think about this is using [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5).\nIndeed, $|\\{\\emptyset\\}| = 1$ (as this set contains a single element - $\\emptyset$) and $|\\emptyset| = 0$ (as it contains no elements at all). Consequently, the two sets have different amounts of members and can not be equal.\n\n\n\n[Punctuation can be weird in this edit, as the author is not a native English speaker. Might need to be improved](https://arbital.com/p/comment:)", "date_published": "2016-09-26T17:29:23Z", "authors": ["Ilia Zaichuk", "Alexei Andreev", "Dylan Hendrickson", "Eric Rogstad", "Patrick Stevens", "Luke Sciarappa"], "summaries": [], "tags": ["Start"], "alias": "5zc"} {"id": "587a41e9ce6212c9e2fc69c87bd8dfb6", "title": "Universal property of the empty set", "url": "https://arbital.com/p/empty_set_universal_property", "source": "arbital", "source_type": "text", "text": "summary(Technical): The [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) $\\emptyset$ satisfies the following [https://arbital.com/p/-600](https://arbital.com/p/-600): for every [set](https://arbital.com/p/3jz) $X$, there is a unique [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) $f: \\emptyset \\to X$. 
That is, the empty set is an [https://arbital.com/p/-initial_object](https://arbital.com/p/-initial_object) in the [category](https://arbital.com/p/4cx) of sets.\n\nTo start us off, recall that the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) $\\emptyset$ is usually defined as the [set](https://arbital.com/p/3jz) which contains no elements.\nThat property picks it out uniquely, because we have the \"[Axiom of Extensionality](https://arbital.com/p/axiom_of_extensionality)\":\n\n> Two sets are the same if and only if their elements are all the same.\n\nIf we had two sets which both contained no elements, then all their elements would ([vacuously](https://arbital.com/p/vacuous_truth)) be the same, so by extensionality the sets are the same.\n\nIn this article, we will introduce another way to characterise the empty set.\nRather than working with the *elements of the set*, though, we will consider the *[functions](https://arbital.com/p/3jy) which have the set as their [domain](https://arbital.com/p/3js)*.\nYou might think at first that this is a very strange way to look at a set; and you might be right. But it turns out that if we do it this new way, characterising the empty set by examining the maps *between* sets %%note: \"Map\" is just a synonym for \"function\", in this context.%%, we end up with a recipe that is much, much more widely applicable.\n\n# The empty function\n\nWhen we use set theory to capture the idea of a \"function\", we would usually implement $f: A \\to B$ as a set of ordered pairs $(a, f(a))$, one for each element $a$ of the domain $A$, with the requirement that the elements $f(a)$ all must lie in $B$.\nThis set of ordered pairs holds all of the information about the function, except that it has omitted the (almost always unimportant) fact that $B$ is the [codomain](https://arbital.com/p/3lg). %%note:We usually only care about the [image](https://arbital.com/p/3lh) of the function.%%\n\nThis implementation of the function $f$ is just a set; it might look like $\\{ (0,1), (1,2), (2,3), (3,4), \\dots \\}$, which would represent the function $f: \\mathbb{N} \\to \\mathbb{N}$ given by $n \\mapsto n+1$.\n\nAnd we might ask, can we go the other way round? Given a set $X$, can we interpret it as a function?\nWell, of course we need $X$ to contain only ordered pairs (what could it mean for $(0,1,2)$ to lie in the implementation of the function $f$?), but also we must make sure that the function is [https://arbital.com/p/-5ss](https://arbital.com/p/-5ss) by ensuring that the first coordinates of each pair are all distinct.\nIf we had the set $\\{ (0,1), (0,2) \\}$, that would indicate a function that was trying to take $0$ to both $1$ and $2$, and that's ambiguous as a function definition.\nBut those are the *only* requirements: for any set that obeys those two conditions, we can find a function which is implemented as that set.\n\nVery well. Now look at the empty set.\nThat only contains ordered pairs: indeed, it doesn't contain anything at all, so everything in it is an ordered pair. %%note:If you're squeamish about this, see [https://arbital.com/p/vacuous_truth](https://arbital.com/p/vacuous_truth).%%\nAnd the first coordinates of each pair are distinct: indeed, there aren't any pairs at all, so certainly no two pairs have the same first coordinate.\n\nSo the empty set itself is implementing a function.\nWe call this the \"empty function\"; it is the [https://arbital.com/p/-identity_function](https://arbital.com/p/-identity_function) on the empty set. 
%%note:Don't get worried about the empty set representing a function that is the identity function on itself. To worry about this is to make a [type error](https://arbital.com/p/type_error) in your thinking. The same object (the empty set) is here standing for two *different* things, under two different \"encoding schemes\". Under one encoding scheme, where we look at sets as being functions, it's the empty function. Under another encoding scheme, where we look at sets just as being sets, it's the empty set.%%\n\n## Meditation: domain, codomain and image\n\nIf the empty set implements a function, then that function should have a domain.\nWhat is the [domain](https://arbital.com/p/-3js) here?\n%%hidden(Show solution):\nThe domain is the empty set.\n\nIndeed, there are no elements in the domain, because an *element of the domain* is just a *thing in the first coordinate of one of the ordered pairs*, but there aren't any ordered pairs so there can't be any elements of the domain. \n%%\n\nWhat is the [image](https://arbital.com/p/3lh)?\n%%hidden(Show solution):\nThe image is also the empty set.\n\nIndeed, there are no elements in the image, because an *element of the image* is just a *thing in the second coordinate of one of the ordered pairs*, but there aren't any ordered pairs so there can't be any elements of the image. \n%%\n\nWhat is the [codomain](https://arbital.com/p/-3lg)?\n%%hidden(Show solution):\nAha! Trick question. We said earlier that in looking at functions being implemented as sets, we threw away information about the codomain.\nWe can't actually tell what the codomain of the function implemented by $\\emptyset$ is.\n%%\n\n## Important fact\n\nFrom the domain/image/codomain meditation above, we can now note that for *every* set $A$, there is a function from $\\emptyset$ to $A$: namely, the empty function.\nPut another way, we can *always* interpret the empty set as being a function from $\\emptyset$ to $A$, whatever $A$ is.\n\n%%hidden(Example):\nLet $A = \\{ 1 \\}$, so we're showing that there is a function from $\\emptyset$ to $\\{1\\}$: namely, the empty function, which is implemented as a set by $\\emptyset$.\nThis is a valid implementation of a function from $\\emptyset$ to $\\{1\\}$, because $\\emptyset$ is a set which satisfies the three properties that are required for us to be able to interpret it as a function from the domain $\\emptyset$ to the codomain $\\{1\\}$:\n\n- Everything in $\\emptyset$ is an ordered pair ([vacuously](https://arbital.com/p/vacuous_truth)).\n- Every ordered pair in $\\emptyset$ consists of one element from the domain $\\emptyset$, followed by one element from the codomain $\\{1\\}$ (again vacuously).\n- Every element of the domain $\\emptyset$ appears exactly once in the first slot of an ordered pair in the function-implementation $\\emptyset$. 
\n\n%%\n\nMoreover, for every set $A$, there is *only one* function from $\\emptyset$ to $A$.\nIndeed, if we had a different function $f: \\emptyset \\to A$, then there would have to be some element of the domain $\\emptyset$ on which $f$ differed from the empty function.\nBut there aren't *any* elements of the domain at all, so there can't be one on which $f$ and the empty function differ.\nHence $f$ is actually the same as the empty function.\n\n# The universal property of the empty set\n\nThe universal property of the empty set is as follows:\n\n> The empty set is the unique set $X$ such that for every set $A$, there is a unique function from $X$ to $A$.\nTo bring this property in line with our usual definition, we denote that unique set $X$ by the symbol $\\emptyset$.\n\nWe've just proved that our standard definition of $\\emptyset$ does satisfy that universal property; that was the Important Fact just above.\n\n## Aside: why \"universal\"?\n\nThe property is a \"universal property\" because it's not \"local\" but \"universal\".\nRather than considering the individual things we can say about the object $\\emptyset$, the property talks about it in terms of *every other set*.\nThat is, the property defines $\\emptyset$ by reference to the \"universe\" of sets.\n\nIn general, a \"universal property\" is one which defines an object by specifying some way that the object interacts with everything else, rather than by looking into the object for some special identifying feature.\n\n# Does the universal property uniquely pick out $\\emptyset$?\n\nI sneakily slipped the words \"is the unique set\" into the property, without proving that they were justified.\nWhat use would it be if our universal property didn't actually characterise the good old $\\emptyset$ we know and love?\n%%note:Well, it might still be of some use. But it would mean the universal property might not work as a *definition* of $\\emptyset$. 
%%\nLet's see now that at least the property isn't totally stupid: there is a set which *doesn't* have the universal property.\n\n## $\\{1\\}$ doesn't satisfy the universal property\n\nWe need to show that the following is *not* true:\n\n> For every set $X$, there is a unique function from $\\{1\\}$ to $X$.\n\nHave a think about this yourself.\n\n%%hidden(Show possible solution):\nLet $X$ be any set at all with more than one element.\nFor concreteness, let $X$ be $\\{ a, b \\}$.\n\nNow, there are two functions from $\\{1\\}$ to $\\{a,b\\}$: namely, $f: 1 \\mapsto a$, and $g: 1 \\mapsto b$.\n\nSo $\\{1\\}$ fails to satisfy the universal property of $\\emptyset$, and indeed it fails massively: for every set $X$ which has more than one element, there is more than one function $\\{1\\} \\to X$.\n(Though all we needed was for this to hold for *some* $X$.)\n%%\n\n## Only the empty set satisfies the universal property\n\nIt's actually the case that the empty set is the only set which satisfies the universal property.\n[There are three proofs](https://arbital.com/p/603), none of them very complicated and all of them pedagogically useful in different ways.\nHere, we will duplicate one of the most \"category-theory-like\" proofs, because it's really rather a new way of thinking to a student who has not met category theory before, and the style turns up all over category theory.\nTo distinguish it from the other two proofs (which, remember, are detailed [here](https://arbital.com/p/603)), we'll call it the \"maps\" proof.\n\n### The \"maps\" way\n\nWe'll approach this in a slightly sneaky way: we will show that if two sets have the universal property, then there is a [bijection](https://arbital.com/p/499) between them. %%note: The most useful way to think of \"bijection\" in this context is \"function with an inverse\".%%\nOnce we have this fact, we're instantly done: the only set which bijects with $\\emptyset$ is $\\emptyset$ itself.\n\nSuppose we have two sets, $\\emptyset$ and $X$, both of which have the universal property of the empty set.\nThen, in particular (using the UP of $\\emptyset$) there is a unique map $f: \\emptyset \\to X$, and (using the UP of $X$) there is a unique map $g: X \\to \\emptyset$.\nAlso there is a unique map $\\mathrm{id}: \\emptyset \\to \\emptyset$. %%note: We use \"id\" for \"identity\", because as well as being the empty function, it happens to be the identity on $\\emptyset$.%%\n\nThe maps $f$ and $g$ are inverse to each other. 
Indeed, if we do $f$ and then $g$, we obtain a map from $\\emptyset$ (being the domain of $f$) to $\\emptyset$ (being the image of $g$); but we know there's a *unique* map $\\emptyset \\to \\emptyset$, so we must have the composition $g \\circ f$ being equal to $\\mathrm{id}$.\n\nWe've checked half of \"$f$ and $g$ are inverse\"; we still need to check that $f \\circ g$ is equal to the identity on $X$.\nThis follows by identical reasoning: there is a *unique* map $\\mathrm{id}_X : X \\to X$ by the fact that $X$ satisfies the universal property %%note: And we know that this map is the identity, because there's always an identity function from any set $Y$ to itself.%%, but $f \\circ g$ is a map from $X$ to $X$, so it must be $\\mathrm{id}_X$.\n\nSo $f$ and $g$ are bijections from $\\emptyset \\to X$ and $X \\to \\emptyset$ respectively.\n\n# Recap\n\nTo summarise the discussion above, we have shown that the universal property of the empty set uniquely characterises the empty set.\nWe could actually use it as a *definition* of the empty set: \n\n> We define the *empty set* to be the unique set $X$ such that for every set $A$, there is a unique function from $X$ to $A$.\n\nThis is a bit of a strange definition when we're talking about sets, but it opens the door to many different and useful constructions.\nYou might like to think about the following similar universal properties, and what they define.\n\n- The set $X$ such that for every set $A$, there is a unique function from $A$ to $X$. (That is, the \"reverse\" of the empty set's UP.)\n- The [group](https://arbital.com/p/3gd) $G$ such that for every group $H$, there is a unique [homomorphism](https://arbital.com/p/47t) from $H$ to $G$.\n- The [ring](https://arbital.com/p/3gq) $R$ such that for every ring $S$, there is a unique [homomorphism](https://arbital.com/p/ring_homomorphism) from $R$ to $S$. (This one is more interesting!)\n\n## Aside: definition \"up to isomorphism\"\n\nWith universal properties, it is usually the case that we obtain a characterisation of objects only \"up to [isomorphism](https://arbital.com/p/4f4)\": the universal property doesn't care about bijections.\nIf the object $X$ satisfies a universal property, and the object $Y$ is isomorphic to $X$, then $Y$ will probably satisfy the universal property as well.\n(Normally this doesn't really matter, since objects which are isomorphic are \"basically the same\" in most of the ways we care about.)\n\nIn this case, though, we do end up with a uniquely-defined object, because it so happens that the only object isomorphic %%note: In this case, when talking about sets, \"isomorphic to\" means \"bijecting with\".%% to the empty set is the empty set itself.\n\nAn example where we *don't* end up with a uniquely-defined object is the \"reverse\" of the empty set's universal property: \"the set $X$ such that for every set $A$, there is a unique function from $A$ to $X$\". 
(You may have thought about this property in the previous section.)\nIn this case, the sets which satisfy that \"reverse\" universal property are exactly the sets with one element.\n\nYou should try to go through the \"maps\" way of showing that the universal property of the empty set picks out a unique set, but instead of using the UP of the empty set, try using the \"reverse\" property.\nThe same structure of proof will show that if two sets satisfy the \"reverse\" universal property, then there is a bijection between them.\nSo we do still retain that if two sets satisfy the \"reverse\" UP, then they are isomorphic; and indeed if we have two sets with one element, there really is a bijection between them (given by matching up the single element of each set).\n\nThis is a general phenomenon: most universal properties don't define an object uniquely, but they *do* give us the fact that if any two objects satisfy the universal property, then the two objects are isomorphic.", "date_published": "2016-08-27T16:29:32Z", "authors": ["Eric Bruylant", "Patrick Stevens", "Alexei Andreev"], "summaries": ["The [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) can be characterised by how it interacts with *other* [sets](https://arbital.com/p/3jz), rather than by the usual definition as \"the set that contains no elements\". It therefore makes a very concrete example of defining an object through its [https://arbital.com/p/-600](https://arbital.com/p/-600). The property is: \n\n> The empty set is the unique set $X$ such that for every set $A$, there is a unique function from $X$ to $A$.\n(To bring this property in line with our usual definition, we denote that unique set $X$ by the symbol $\\emptyset$.)"], "tags": ["B-Class", "Empty set"], "alias": "5zr"} {"id": "9b6f082857fe12fc6b90ef9a4452cc9a", "title": "Set product", "url": "https://arbital.com/p/set_product", "source": "arbital", "source_type": "text", "text": "summary(Technical): The product of [sets](https://arbital.com/p/3jz) $Y_x$ indexed by the set $X$ is denoted $\\prod_{x \\in X} Y_x$, and it consists of all $X$-length ordered tuples of elements. For example, if $X = \\{1,2\\}$, and $Y_1 = \\{a,b\\}, Y_2 = \\{b,c\\}$, then $$\\prod_{x \\in X} Y_x = Y_1 \\times Y_2 = \\{(a,b), (a,c), (b,b), (b,c)\\}$$\nIf $X = \\mathbb{Z}$ and $Y_n = \\{ n \\}$, then $$\\prod_{x \\in X} Y_x = \\{(\\dots, -2, -1, 1, 0, 1, 2, \\dots)\\}", "date_published": "2016-08-26T10:46:41Z", "authors": ["mrkun", "Patrick Stevens"], "summaries": ["The product of two [sets](https://arbital.com/p/3jz) $A$ and $B$ is just the collection of ordered pairs $(a,b)$ where $a$ is in $A$ and $b$ is in $B$. The reason we call it the \"product\" can be seen if you consider the set-product of $\\{1,2,\\dots,n \\}$ and $\\{1,2,\\dots, m \\}$: it consists of ordered pairs $(a, b)$ where $1 \\leq a \\leq n$ and $1 \\leq b \\leq m$, but if we interpret these as integer coordinates in the plane, we obtain just an $n \\times m$ rectangle."], "tags": ["Needs summary", "Stub"], "alias": "5zs"} {"id": "97c1ab825425446966d95153d582d65c", "title": "Object identity via interactions", "url": "https://arbital.com/p/object_identity_via_interactions", "source": "arbital", "source_type": "text", "text": "Central to the mindset of category theory is the idea that we don't really care what an object is; we only care about the interactions between them. But if we don't care what an object is, how can we tell even the most basic properties of the objects? 
For instance, how can we even tell two objects apart? But we *can* do it, and we do it by looking at the interactions with *other* objects.\n\n# How to tell sets apart\n\nIf we're allowed to look at maps between sets, but not at the structure of the sets themselves, how can we tell whether two sets are the same?\nWe aren't allowed to look at the elements of the set directly, so it would be very optimistic to hope to do better than telling the sets apart \"up to isomorphism\"; indeed, it turns out not to be possible to tell whether two sets are literally identical without looking at their elements.\n\nBut how about telling whether there is a [bijection](https://arbital.com/p/499) between two sets or not? That is the next best thing, because a bijection between the sets would tell us that they \"behave the same in all reasonable ways\".\n\nIt turns out we can do this!\nLet $A$ and $B$ be two sets; we want to tell if they're different or not.\n\nLet's make a clever choice: we'll pick a kind of \"anchor\" set, such that the maps involving $A$, $B$ and the \"anchor\" set are enough to determine whether or not $A$ and $B$ are the same.\nIt will turn out that $\\{1\\}$ works.\n\nConsider all possible maps $\\{1\\} \\to A$ and $\\{1\\} \\to B$.\nA map from $\\{1\\}$ to $A$ is just a function $f$ which takes the input value $1$ (there's no choice about that) and returns a value in $A$.\nSo we can think of it as being \"a mappy way of specifying an element of $A$\": for every element of $A$, there is one of these functions, while every one of these functions denotes an element of $A$.\n\nSo $A$ is isomorphic to this collection of maps, and the key thing is that we don't need to know about $A$'s elements to say this!\nIf we only have access to the maps and we know nothing about the internal structure of $A$, we can still say $A$ is isomorphic to the set of all the maps from $\\{1\\}$ to $A$.\n\n%%hidden(Example):\nLet $A = \\{ 5, 6 \\}$.\n\nThen the functions $\\{1\\} \\to A$ are precisely the two following:\n\n- $f: 1 \\mapsto 5$;\n- $g: 1 \\mapsto 6$.\n\nSo we have the \"mappy\" way of viewing $5 \\in A$: namely, $f$.\nSimilarly, we have the \"mappy\" way of viewing $6 \\in A$: namely, $g$.\n\nAnd $A$ is isomorphic to the set $\\{ f, g \\}$.\nSo we've specified $A$ up to isomorphism without having to look at its elements!\n%%\n\nGiven access to the maps, then, we have worked out what $A$ is (up to isomorphism, which is the best we can do if we're not allowed to look into the internal structure of $A$); and we can similarly work out what $B$ is by looking at the maps $\\{1\\} \\to B$.\n\n\n\n## Less structure\n\nThat previous construction was rather contrived, and it doesn't seem at all practical.", "date_published": "2016-09-12T17:00:55Z", "authors": ["Eric Rogstad", "Patrick Stevens"], "summaries": ["Central to the mindset of category theory is the idea that we don't really care what an object is; we only care about the interactions between them. But if we don't care what an object is, how can we tell even the most basic properties of the objects? For instance, how can we even tell two objects apart? 
But we *can* do it, and we do it by looking at the interactions with *other* objects."], "tags": ["Start"], "alias": "5zt"} {"id": "991f82ffff27817567ec2366d3c62386", "title": "Universal property of the product", "url": "https://arbital.com/p/product_universal_property", "source": "arbital", "source_type": "text", "text": "summary(Technical): The *product* of objects $A$ and $B$ is a certain tuple $(P, \\pi_A, \\pi_B)$, where $\\pi_A$ is an arrow $P \\to A$ and $\\pi_B$ is an arrow $P \\to B$. We require the product to satisfy the following [https://arbital.com/p/-600](https://arbital.com/p/-600): for every object $X$ and every pair of arrows $f: X \\to A, g: X \\to B$, there must be a unique factorisation $\\gamma: X \\to P$ satisfying $\\pi_A \\gamma = f$ and $\\pi_B \\gamma = g$.\n\nJust as we can define the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) by means of a [https://arbital.com/p/-600](https://arbital.com/p/-600) ([like this](https://arbital.com/p/5zr), in fact), so we can define the general notion of a \"product\" by referring not to what is in the product, but instead to how the product interacts with other objects.\n\n# Motivation\n\nThe motivating example here is the [product of two sets](https://arbital.com/p/5zs).\n\nRemember, the product of two sets $A$ and $B$ consists of all ordered pairs, the first element from $A$ and the second element from $B$.\nCategory theory likes to think about *maps involving an object* rather than *elements of an object*, though, so we'd like a way of thinking about the product using only maps.\n\nWhat maps can we find? Well, there's an easy way to produce an element of $A$ given an element of $A \\times B$: namely, \"take the first coordinate\".\nConcretely, if we are given $(a,b) \\in A \\times B$, we output $a$.\nAnd similarly, we can take the second coordinate to get an element of $B$.\n\nSo we have two maps, one which we will denote $\\pi_A: A \\times B \\to A$, and one denoted $\\pi_B : A \\times B \\to B$.\n%%note: We use the Greek letter $\\pi$ to stand for the word \"[projection](https://arbital.com/p/projection_map)\", because the operation \"take a coordinate\" is often called \"projection\".%%\n\nNow, this is certainly not enough to characterise the product.\nFor one thing, there is a map from the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) to every other set ([proof](https://arbital.com/p/5zr)), so if all we knew about the product was that it had a map to $A$ and a map to $B$, then we might deduce that the empty set was a perfectly good product of $A$ and $B$.\nThat's definitely not something we want our definition of \"product\" to mean: that the empty set is the product of every pair of sets!\nSo we're going to have to find something else to add to our putative definition.\n\nWhat else do we know about the product?\nTo try for a universal property, we're going to want to talk about *all* the maps involving the product.\nIt turns out that the correct fact to note is that a map $f$ from any set $X$ to $A \\times B$ may be defined exactly by the first coordinate and the second coordinate of the result.\nConcretely, if we know $\\pi_A(f(x))$ and $\\pi_B(f(x))$, then we know $f(x)$: it's just $\\langle \\pi_A(f(x)), \\pi_B(f(x)) \\rangle$. 
%%note: I've used $\\langle \\text{angled brackets}\\rangle$ to indicate the ordered pair here, just because otherwise there are confusingly many round brackets.%%\n\nAnother way of stating the universal property of the product is that maps $h : A \\to B \\times C$ are naturally in bijection with pairs of maps $f : A \\to B$, $g : A \\to C$. That is, a [generalized element](https://arbital.com/p/61q) (of shape $A$) of $B \\times C$ is a pair consisting of a generalized element (of the same shape) of $B$ and a generalized element of $C$; this is the same as the usual description of the product in sets, but now since we use generalized elements instead of elements it makes sense in any category. Under this correspondence, $h$ being the identity on $B \\times C$ corresponds to $f$ being the projection $B \\times C \\to B$ and $g$ being the projection $B \\times C \\to C$. On the other hand, if $h$ corresponds to $f$ and $g$ via this bijection, we have $h = \\langle f, g\\rangle$. Naturality of the isomorphism turns out to imply that $\\pi_{B} \\langle f, g \\rangle = f$ and $\\pi_C \\langle f, g \\rangle = g$, as well as $\\langle \\pi_{B}, \\pi_{C} \\rangle = \\text{id}$.\n\n\n\n# The universal property\n\nFrom the above, we can extract the following universal property, which really *does* define the product (up to [https://arbital.com/p/-4f4](https://arbital.com/p/-4f4)):\n\n> Given objects $A$ and $B$, we define the *product* to be the following collection of three objects, if it exists: $$A \\times B \\\\ \\pi_A: A \\times B \\to A \\\\ \\pi_B : A \\times B \\to B$$ with the requirement that for every object $X$ and every pair of maps $f_A: X \\to A, f_B: X \\to B$, there is a *unique* map $f: X \\to A \\times B$ such that $\\pi_A \\circ f = f_A$ and $\\pi_B \\circ f = f_B$.\n\nIf you're anything like me, you're probably a bit lost at this point.\nTrust me that once you've set up the right structures in your mind, the property is quite simple.\nAt the moment it probably feels more like I've given you a sentence in Swahili, and I've just told you it doesn't mean anything difficult.\n\nCategory theory contains lots of things like this, where it takes a lot of words to write them out and some time to get your head around them properly. %%note: In this particular instance, one unavoidable annoyance is that the object $X$ isn't really playing any interesting role other than \"[domain](https://arbital.com/p/3js) of an arbitrary function\", but we have to include the words \"for every object $X$\" because otherwise the definitions of $f_A$ and $f_B$ don't make sense. 
%%\nOne learns category theory by gradually getting used to examples and pictures; in particular, the idea of the **commutative diagram** is an extremely powerful tool for improving understanding.\n\nThe following \"commutative diagram\" (really just a picture showing the objects involved and how they interact) defines the product.\n\n![Product universal property](http://i.imgur.com/AcdaqkI.png)\n\nThe dashed arrow indicates \"the presence of all the other arrows forces this arrow to exist uniquely\"; and the diagram is said to **commute**, which means that if we follow any chain of arrows round the diagram, we'll get the same answer.\nHere, \"the diagram commutes\" %%note: That is, \"the diagram is a *commutative* diagram\".%% is exactly expressing $\\pi_A \\circ f = f_A$ (following one of the triangles round) and $\\pi_B \\circ f = f_B$ (following the other triangle).\nAnd the dashed-ness of the line is exactly expressing the words \"there is a *unique* map $f: X \\to A \\times B$\".\n\nWe'll see several examples now, some related to each other and some not.\nHopefully in working through these, you will pick up what the property means; and maybe you will start to get a feel for some underlying intuitions.\n\n# Example: finite sets\n\n(This discussion works just as well in the category of *all* sets, not just the finite ones, but finite sets are much easier to understand than general sets.)\n\nThe motivation above took some properties of the set-product, and turned them into a new general definition.\nSo we should hope that when we take the new general definition and bring it back into the world of sets, we recover the original set-product we started with.\nThat is indeed the case.\n\nLet $A$ and $B$ be finite sets; we want to see what $A \\otimes B$ %%note: Because $\\times$ already has a meaning in set theory, we'll use $\\otimes$ to distinguish this new concept. Of course, the aim of this section is to show that $\\otimes$ is actually the same as our usual $\\times$. %% is (using the universal-property definition), if it exists.\n\nWell, it would be the finite set $A \\otimes B$ (whatever that might be), together with two maps $\\pi_A: A \\otimes B \\to A$ and $\\pi_B: A \\otimes B \\to B$, such that:\n\n> For every finite set $X$ and every pair of maps $f_A: X \\to A, f_B: X \\to B$, there is a *unique* map $f: X \\to A \\otimes B$ such that $\\pi_A \\circ f = f_A$ and $\\pi_B \\circ f = f_B$.\n\nLet's try and identify what set this universal property describes %%note: If such a set exists. 
(Spoiler: it does.)%%.\nThe property is very general, talking about the entire \"universe\" of finite sets, so to get a grasp of it, it should help to specialise to some simple sets $X$.\n\nYou might like to work out for yourself why letting $X = \\emptyset$ doesn't tell us anything about $A \\otimes B$.\n\nBut a much more enlightening choice of $X$ arises if we take $X = \\{ 1 \\}$.\nIn that case, a map from $X$ to $A$ is just identifying a specific element of $A$, so for example we may refer to $f_A: X \\to A$ as a \"choice of element of $A$\".\nIndeed, if $g: \\{1\\} \\to A$, then we have an element $g(1)$ in $A$, because there's no choice about which element of the [domain](https://arbital.com/p/3js) we use: there's only the element $1$, so the only possible output of $g$ is the element $g(1)$ of $A$.\n\nSpecialising to this instance of $X$, then, the universal property requires that:\n\n> For every pair of choices of element, $f_A: \\{ 1 \\} \\to A$ and $f_B : \\{ 1 \\} \\to B$, there is a *unique* choice of element $f: \\{1\\} \\to A \\otimes B$ such that $\\pi_A \\circ f = f_A$ and $\\pi_B \\circ f = f_B$.\n\n(What does it mean to compose a function with a choice of element? Well, $\\pi_A \\circ f$ is a function from $\\{1\\}$ to $A$, so it's also just a choice of element of $A$; it's basically just $\\pi_A$ applied to the choice-of-element embodied by $f$.)\n\nMore familiarly stated:\n\n> For every pair of elements $a \\in A$ and $b \\in B$, there is a *unique* element $x \\in A \\otimes B$ such that $\\pi_A(x) = a$ and $\\pi_B(x) = b$.\n\nThat's just saying \"$A \\otimes B$ consists of objects which $\\pi_A$ and $\\pi_B$ allow us to interpret as ordered pairs\"!\nSpecifically, it's saying that for every ordered pair of objects in $A \\times B$, we can find a unique element of $A \\otimes B$ which gets interpreted by $\\pi_A$ and $\\pi_B$ as having the same components as that ordered pair.\n\n# Example: natural numbers\n\nTo give you an idea of where we're aiming for, in this section, we're going to use the fact that a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) may be viewed as a [category](https://arbital.com/p/4cx), by letting there be a single arrow $a \\to b$ if and only if $a \\leq b$ in the poset.\nThat is, instead of maps $f: A \\to B$, we have \"facts that $A \\leq B$ in the poset\" which we interpret as \"arrows from $A$ to $B$\"; there is only ever at most one fact that $A \\leq B$, so there is only ever at most one arrow from $A$ to $B$.\n\nWe'll examine what the category-theoretic product means in this category, and hopefully this should help indicate how versatile the \"universal property\" idea is.\n\n## Specific poset: \"less than\"\n\nLet us work in a specific poset: $\\mathbb{N}$ the [natural numbers](https://arbital.com/p/45h), ordered by the usual \"less than\".\nFor example, $1$ is less than or equal to every number except $0$, so there is an arrow from $1$ to every number except $0$.\nThere is an arrow from $2$ to every number bigger than or equal to $2$, and so on.\nThere is an arrow from $0$ to every number.\n\nThe universal property of the category-theoretic product %%note: I'll emphasise \"category-theoretic\" in this section, because it will turn out that in this context, \"product\" *doesn't* mean \"usual product of natural numbers\".%% here is:\n\n> Given natural numbers $m$ and $n$, we define the *category-theoretic product* $\\otimes$ %%note: Because $\\times$ already has a meaning in the naturals, we'll use $\\otimes$ to distinguish this 
new concept.%% to be the following collection of three objects, if it exists: \n\n> - $m \\otimes n$\n> - The fact that $m \\otimes n$ is less than or equal to $m$ %%note: In the sets case, this was the map $\\pi_A$; but here, we're in a poset category, so instead of \"a function from $A$ to $B$\", we have \"the fact that $A \\leq B$\".%% \n> - The fact that $m \\otimes n$ is less than or equal to $n$ %%note: This is corresponding to what was the map $\\pi_B$.%%.\n\n> We also have the requirement that for every natural number $x$ and every instance of the fact that $x \\leq m, x \\leq n$ %%note: Such an instance corresponds to the pair of maps $f_A: X \\to A$ and $f_B: X \\to B$.%%, there is a *unique* instance of the fact that $x \\leq m \\otimes n$. %%note: This corresponds to the map $f: X \\to A \\times B$. In fact we can omit \"unique\", because in a poset, there is never more than one arrow between any two given objects; in this context, there is only one way to express $x \\leq m \\otimes n$.%%\n\nYou might want to spend a little time convincing yourself that the requirement in the posets case is the same as the requirement in the general category-theoretic case.\nIndeed, we can omit the \"compositions\" part entirely.\nFor example, half of the compositions part reads \"$\\pi_A \\circ f = f_A$\"; when translated into poset language, that is \"the instance of the fact that $m \\otimes n \\leq m$, together with the instance of the fact that $x \\leq m \\otimes n$, yields the instance of the fact that $x \\leq m$\".\nBut this is immediate anyway because posets already let us deduce that $A \\leq C$ from $A \\leq B$ and $B \\leq C$.\n\nTo summarise, then, we define $m \\otimes n$, if it exists, to be the natural number which is less than or equal to $m$ and to $n$, such that for every natural $x$ with $x \\leq m, x \\leq n$, we have $x \\leq m \\otimes n$.\n\nIf you think about this for a few minutes, you might work out that this is just referring to the minimum of $m$ and $n$.\nSo we have defined the minimum of two natural numbers in a category-theoretic \"universal properties\" way: it's the category-theoretic product in this particular poset. \n\nNotice, by the way, that in this case, the category-theoretic product (i.e. 
the minimum) does always exist.\nRemember, the universal property comes with a caveat: it defines the product to be a certain triple of $(A \\times B, \\pi_A, \\pi_B)$, *if those three things exist*.\nIt's by no means a given that the triple always *does* exist; but in this particular instance, with the natural numbers under the \"less than\" relation, it so happens that it does always exist.\nFor every pair $m, n$ of naturals, there *is* a triple $(m \\otimes n, \\pi_m, \\pi_n)$ satisfying the universal property: namely, letting $m \\otimes n$ be the minimum of $m$ and $n$, and letting $\\pi_m$ and $\\pi_n$ be \"instances of the facts that $m \\otimes n \\leq m$ and $m \\otimes n \\leq n$\" respectively.\n\n## Specific example: divisibility\n\nLet's now use the poset which is given by the *nonzero* naturals $\\mathbb{N}^{\\geq 1}$ but using the order relation of divisibility: $a \\mid b$ if and only if $a$ divides $b$.\nFor example, $1$ divides every nonzero natural so there is an arrow $1 \\to n$ for every $n$;\nthere are arrows $2 \\to n$ for every *even* positive natural $n$, and so on.\n\nWe'll give the product definition again:\n\n> Given natural numbers $m$ and $n$, we define the *category-theoretic product* $\\otimes$ to be the following collection of three objects, if it exists: \n\n> - $m \\otimes n$\n> - The fact that $m \\otimes n$ divides $m$ %%note: In the sets case, this was the map $\\pi_A$; but here, we're in a poset category, so instead of \"a function from $A$ to $B$\", we have \"the fact that $A \\mid B$\".%% \n> - The fact that $m \\otimes n$ divides $n$.\n\n> We also have the requirement that for every natural number $x$ such that $x \\mid m, x \\mid n$, it is the case that $x \\mid m \\otimes n$.\n\nAgain, a few minutes of pondering might lead you to discover that this is referring to the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw), and again it so happens that it always exists.\n\n## Moral things to notice about the above examples\n\nWorking in a poset, the nature of the category-theoretic product as some kind of \"optimiser\" is a bit clearer.\nIt is the \"optimal\" object such that certain conditions hold.\n\n- In the first example, it is optimal in the sense that it is the *biggest* natural $a$ such that $a \\leq m$ and $a\\leq n$.\n- In the second example, it is optimal in the sense that it is the natural $a$ with the *most factors* such that $a \\mid m$ and $a \\mid n$.\n\nIn fact this is true in general posets: the category-theoretic product of $X$ and $Y$ in a poset is the [https://arbital.com/p/-greatest_lower_bound](https://arbital.com/p/-greatest_lower_bound) of $X$ and $Y$ in the poset.\n\n# The property characterises the product up to isomorphism\n\nThis is what makes our new definition really make sense: this theorem tells us that our construction really does pick out a specific object.\nThe proof appears [on a different page](https://arbital.com/p/60v); it follows a similar pattern to the \"maps\" way of proving that [the empty set is uniquely characterised by its universal property](https://arbital.com/p/603).\n\nIf we can show that some easy-to-specify property (such as the universal property of the product) characterises things uniquely or up to isomorphism, we can be pretty hopeful that the property is worth thinking more about.\nWhile the product's universal property is wordy, it has a simple diagram which defines it; the property itself is not very complicated, and the main difficulty is in understanding the background.\n\n# Why is 
the product interesting?\n\nWhy should we care about having this object defined by the universal property?\nWell, if we stop defining things by their internal properties (for instance, if we stop defining the product of sets $A$ and $B$ as the collection of ordered pairs) and use a universal property instead, we get a definition which doesn't care about internal properties.\nIt's immediately applicable to other situations. \nIf we make sure that, in proving things about the product, we only use the universal property (or things that can be derived from the universal property), we'll instantly get theorems that are true not just for set-products but also for more general situations.\nCategory theory, and its beginnings in the idea of the universal property, is in some ways the art of finding where in mathematics we're just doing the same things in a slightly different way, and seeing if we can show why they're *actually* the same rather than merely looking similar.\n\nUltimately, this viewpoint shows us that the following ideas are \"basically the same\" (and it doesn't matter if you don't know what some of these mean):\n\n- Product of natural numbers %%note: The easiest way to obtain this is by interpreting a [natural number as a category itself](https://arbital.com/p/natural_number_as_category). The product of natural numbers $m$ and $n$, in the colloquial sense of \"product\", is then given by the [https://arbital.com/p/-product_category](https://arbital.com/p/-product_category) of the two categories representing $m$ and $n$.%%\n- [Product of sets](https://arbital.com/p/5zs) %%note: We showed in this very page that the set product is captured by the category-theoretic product.%%\n- Product of [topological spaces](https://arbital.com/p/topological_space)\n- Direct product of [groups](https://arbital.com/p/3gd)\n- Direct sum of [abelian groups](https://arbital.com/p/3h2)\n- Greatest lower bounds in [posets](https://arbital.com/p/3rb) %%note: We showed two examples of this above, looking at the minimum and greatest common divisor of two naturals.%%\n- [Segre embedding](https://arbital.com/p/segre_embedding) of two [projective varieties](https://arbital.com/p/projective_variety), in [https://arbital.com/p/algebraic_geometry](https://arbital.com/p/algebraic_geometry). This is an example where the existence of the product is decidedly non-trivial.\n\nI personally took algebraic geometry and didn't understand a word of it, including about the Segre embedding. The lecturer didn't tell me at the time that the Segre embedding is a product, so I just went on thinking it was incomprehensible.\nBut now I know that the Segre embedding is \"just\" a product. 
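(Going back to the finite-set example for a moment: the universal property there is small enough to check by brute force, which can make it feel much less abstract. Here is a minimal Python sketch, in which a function between finite sets is encoded as a dict of input/output pairs and the helpers `all_functions` and `pairing_map` are just illustrative names: for some small sets it enumerates every pair of maps $f_A : X \to A$ and $f_B : X \to B$, and confirms that exactly one map $X \to A \times B$ commutes with the two projections, namely $x \mapsto \langle f_A(x), f_B(x) \rangle$.

```python
from itertools import product as cartesian

# A function between finite sets is encoded as a dict {input: output}.
# The helper names below are purely illustrative, not standard library calls.

def all_functions(X, Y):
    """Enumerate every function from the finite set X to the finite set Y."""
    X = list(X)
    return [dict(zip(X, outputs)) for outputs in cartesian(list(Y), repeat=len(X))]

def pairing_map(f_A, f_B):
    """The map x |-> (f_A(x), f_B(x)), i.e. the factorisation through A x B."""
    return {x: (f_A[x], f_B[x]) for x in f_A}

A, B, X = {1, 2}, {"a", "b", "c"}, {"p", "q"}
A_x_B = set(cartesian(A, B))               # candidate product object: ordered pairs
pi_A = {pair: pair[0] for pair in A_x_B}   # projection onto the first coordinate
pi_B = {pair: pair[1] for pair in A_x_B}   # projection onto the second coordinate

for f_A in all_functions(X, A):
    for f_B in all_functions(X, B):
        # All maps f : X -> A x B with pi_A . f = f_A and pi_B . f = f_B ...
        factorisations = [
            f for f in all_functions(X, A_x_B)
            if all(pi_A[f[x]] == f_A[x] and pi_B[f[x]] == f_B[x] for x in X)
        ]
        # ... and there should be exactly one of them: the pairing map.
        assert factorisations == [pairing_map(f_A, f_B)]
```

The assertion at the end is precisely the universal property for these particular finite sets: existence and uniqueness of the factorisation through the product.)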
\nThat immediately tells me that in some deep moral sense it's \"not too difficult\": even though algebraic geometry is fiendishly hard, the Segre embedding is not at its heart one of the really difficult bits, and if I chose to learn it properly, I wouldn't need to do too much mental gymnastics to understand it.", "date_published": "2016-10-07T00:49:07Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Patrick Stevens", "Luke Sciarappa", "Eric Bruylant"], "summaries": ["We can define the general notion of \"product\" to coincide with the product of natural numbers and with the [https://arbital.com/p/-5zs](https://arbital.com/p/-5zs), by means of a [https://arbital.com/p/-600](https://arbital.com/p/-600) that the product has.\nSpecifically, the product of objects $A$ and $B$ is an object denoted $A \\times B$ together with maps $\\pi_A : A \\times B \\to A$ and $\\pi_B: A \\times B \\to B$, such that for every object $X$ and every pair of maps $f_A: X \\to A$ and $f_B: X \\to B$, there is a unique map $\\gamma: X \\to A \\times B$ such that $\\pi_A \\circ \\gamma = f_A$ and $\\pi_B \\circ \\gamma = f_B$."], "tags": ["B-Class"], "alias": "5zv"} {"id": "8f4e8f2b94afbd0dafd21d7bf26963a9", "title": "Finite set", "url": "https://arbital.com/p/finite_set", "source": "arbital", "source_type": "text", "text": "summary(Technical): A finite [set](https://arbital.com/p/3jz) $X$ is a set which is not infinite: that is, there is some $n \\in \\mathbb{N}$ such that the [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5) of $X$ is equal to $n$. Examples: $\\{ 1,2 \\}$ and $\\{ \\mathbb{N} \\}$. Non-examples: $\\mathbb{N}$, $\\mathbb{R}$.\n\nA __finite set__ is like a package for a group of things. The package can be monstrously big to fit the amount of things it needs to hold but ultimately there is a limit to the amount of items in the package. This means that if you were to go by each item in this package one by one and count them you would eventually reach the last item. (Incidentally this number that you counted would be the cardinality of the set)\n\n\n\n\n\nMore formally a Finite set is a set that also has a bijection with one of the natural numbers.", "date_published": "2016-09-11T06:15:19Z", "authors": ["Patrick Stevens", "Travis Rivera"], "summaries": ["A finite [set](https://arbital.com/p/3jz) is a set which is not infinite. 
That is, we could go through its members one by one, seeing a new member every minute, writing a mark on a piece of paper each time, and eventually we'd be done and stop writing."], "tags": ["Stub"], "alias": "5zy"} {"id": "8abc8a7c69e9bc51376acdcdc5f07de3", "title": "Universal property", "url": "https://arbital.com/p/universal_property", "source": "arbital", "source_type": "text", "text": "In [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7), we attempt to avoid thinking about what an object *is*, and look only at how it interacts with other objects.\nIt turns out that even if we're not allowed to talk about the \"internal\" structure of an object, we can still pin down some objects just by talking about their interactions.\nFor example, if we are not allowed to define the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) as \"the set with no elements\", we [can still define it](https://arbital.com/p/5zr) by means of a \"universal property\", talking instead about the *functions* from the empty set rather than about the elements of the empty set.\n\nThis page is not designed to teach you any particular universal properties, but rather to convey a sense of what the idea of \"universal property\" is about.\nYou are supposed to let it wash over you gently, without worrying particularly if you don't understand words or even entire sentences.\n\n# Examples\n\n- The [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) can be [defined by a universal property](https://arbital.com/p/5zr). Specifically, it is an instance of the idea of an [https://arbital.com/p/-initial_object](https://arbital.com/p/-initial_object), in the category of sets. The same idea captures the trivial [group](https://arbital.com/p/3gd), the [ring](https://arbital.com/p/3gq) $\\mathbb{Z}$ of [integers](https://arbital.com/p/48l), and the natural number $0$.\n- The [product](https://arbital.com/p/5zv) has a universal property, generalising the [https://arbital.com/p/-5zs](https://arbital.com/p/-5zs), the product of integers, the [https://arbital.com/p/-greatest_lower_bound](https://arbital.com/p/-greatest_lower_bound) in a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb), the products of many different [algebraic structures](https://arbital.com/p/3gx), and many other things besides.\n- The [free group](https://arbital.com/p/-6gd) has a universal property (we refer to this property by the unwieldy phrase \"the free-group [functor](https://arbital.com/p/functor_category_theory) is *left-[adjoint](https://arbital.com/p/adjoint_category_theory)* to the [https://arbital.com/p/-forgetful_functor](https://arbital.com/p/-forgetful_functor)\"). The same property can be used to create free rings, the [https://arbital.com/p/-discrete_topology](https://arbital.com/p/-discrete_topology) on a set, and the free [semigroup](https://arbital.com/p/algebraic_semigroup) on a set. This idea of the *left adjoint* can also be used to define initial objects (which is the generalised version of the universal property of the empty set). 
%%note: Indeed, an initial object of category $\\mathcal{C}$ is exactly a left adjoint to the unique functor from $\\mathcal{C}$ to $\\mathbf{1}$ the one-arrow category.%%\n\nThe above examples show that the ideas of category theory are very general.\nFor instance, the third example captures the idea of a \"free\" object, which turns up all over [https://arbital.com/p/-3h0](https://arbital.com/p/-3h0).\n\n# Definition \"up to isomorphism\"\n\n\n\n# Universal properties might not define objects\n\nUniversal properties are often good ways to define things, but just like with any definition, we always need to check in each individual case that we've actually defined something coherent.\nThere is no silver bullet for this: universal properties don't just magically work all the time.\n\nFor example, consider a very similar universal property to that of the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) (detailed [here](https://arbital.com/p/5zr)), but instead of working with sets, we'll work with [fields](https://arbital.com/p/481), and instead of [functions](https://arbital.com/p/3jy) between sets, we'll work with field homomorphisms.\n\nThe corresponding universal property will turn out *not* to be coherent:\n\n> The *initial field* %%note: Analogously with the empty set, but fields can't be empty so we'll call it \"initial\" for reasons which aren't important right now.%% is the unique field $F$ such that for every field $A$, there is a unique field homomorphism from $F$ to $A$.\n\n%%%hidden(Proof that there is no initial field):\n(The slick way to communicate this proof to a practised mathematician is \"there are no field homomorphisms between fields of different [characteristic](https://arbital.com/p/field_characteristic)\".)\n\nIt will turn out that all we need is that there are two fields [$\\mathbb{Q}$](https://arbital.com/p/4zq) and $F_2$ the field on two elements. %%note: $F_2$ has elements $0$ and $1$, and the relation $1 + 1 = 0$.%%\n\nSuppose we had an initial field $F$ with multiplicative identity element $1_F$; then there would have to be a field homomorphism $f$ from $F$ to $F_2$.\nRemember, $f$ can be viewed as (among other things) a [https://arbital.com/p/-47t](https://arbital.com/p/-47t) from the *multiplicative* group $F^*$ %%note: That is, the group whose [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) is $F$ without $0$, with the group operation being \"multiplication in $F$\".%% to $F_2^*$.\n\nNow $f(1_F) = 1_{F_2}$ because [the image of the identity is the identity](https://arbital.com/p/49z), and so $f(1_F + 1_F) = 1_{F_2} + 1_{F_2} = 0_{F_2}$.\n\nBut field homomorphisms are either [injective](https://arbital.com/p/4b7) or map everything to $0$ ([proof](https://arbital.com/p/76h)); and we've already seen that $f(1_F)$ is not $0_{F_2}$.\nSo $f$ must be injective; and hence $1_F + 1_F$ must be $0_F$ because $f(1_F + 1_F) = 0_{F_2} = f(0_F)$.\n\nNow examine $\\mathbb{Q}$.\nThere is a field homomorphism $g$ from $F$ to $\\mathbb{Q}$.\nWe have $g(1_F + 1_F) = g(1_F) + g(1_F) = 1 + 1 = 2$; but also $g(1_F + 1_F) = g(0_F) = 0$.\nThis is a contradiction.\n%%%", "date_published": "2016-12-31T13:33:03Z", "authors": ["Eric Rogstad", "Patrick Stevens"], "summaries": ["In [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7), we attempt to avoid thinking about what an object *is*, and look only at how it interacts with other objects. 
A universal property is a way of defining an object not in terms of its \"internal\" properties, but instead by its interactions with the \"universe\" of other objects in existence."], "tags": ["Start"], "alias": "600"} {"id": "4c5badca85a657c462f3291eb4cf002d", "title": "The empty set is the only set which satisfies the universal property of the empty set", "url": "https://arbital.com/p/only_empty_set_satisfies_up_of_emptyset", "source": "arbital", "source_type": "text", "text": "Here, we will prove that the only [set](https://arbital.com/p/3jz) which satisfies the [https://arbital.com/p/-5zr](https://arbital.com/p/-5zr) is the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) itself.\nThis will tell us that defining the empty set by this [https://arbital.com/p/-600](https://arbital.com/p/-600) is actually a coherent thing to do, because it's not ambiguous as a definition.\n\nThere are three ways to prove this fact: one way looks at the objects themselves, one way takes a more maps-oriented approach, and one way is sort of a mixture of the two.\nAll of the proofs are enlightening in different ways.\n\nRecall first that the universal property of the empty set is as follows:\n\n> The empty set is the unique set $X$ such that for every set $A$, there is a unique function from $X$ to $A$.\n(To bring this property in line with our usual definition, we denote that unique set $X$ by the symbol $\\emptyset$.)\n\n# The \"objects\" way\n\nSuppose we have a set $X$ which is not empty.\nThen it has an element, $x$ say.\nNow, consider maps from $X$ to $\\{ 1, 2 \\}$.\n\nWe will show that there cannot be a unique [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) from $X$ to $\\{ 1, 2 \\}$.\nIndeed, suppose $f: X \\to \\{ 1, 2 \\}$.\nThen $f(x) = 1$ or $f(x) = 2$.\nBut we can now define a new function $g: X \\to \\{1,2\\}$ which is given by setting $g(x)$ to be the *other* one of $1$ or $2$ to $f(x)$, and by letting $g(y) = f(y)$ for all $y \\not = x$.\n\nThis shows that the universal property of the empty set fails for $X$: we have shown that there is no unique function from $X$ to the specific set $\\{1,2\\}$.\n\n# The \"maps\" ways\n\nWe'll approach this in a slightly sneaky way: we will show that if two sets have the universal property, then there is a [bijection](https://arbital.com/p/499) between them. %%note: The most useful way to think of \"bijection\" in this context is \"function with an inverse\".%%\nOnce we have this fact, we're instantly done: the only set which bijects with $\\emptyset$ is $\\emptyset$ itself.\n\nSuppose we have two sets, $\\emptyset$ and $X$, both of which have the universal property of the empty set.\nThen, in particular (using the UP of $\\emptyset$) there is a unique map $f: \\emptyset \\to X$, and (using the UP of $X$) there is a unique map $g: X \\to \\emptyset$.\nAlso there is a unique map $\\mathrm{id}: \\emptyset \\to \\emptyset$. %%note: We use \"id\" for \"identity\", because as well as being the empty function, it happens to be the identity on $\\emptyset$.%%\n\nThe maps $f$ and $g$ are inverse to each other. 
Indeed, if we do $f$ and then $g$, we obtain a map from $\\emptyset$ (being the domain of $f$) to $\\emptyset$ (being the image of $g$); but we know there's a *unique* map $\\emptyset \\to \\emptyset$, so we must have the composition $g \\circ f$ being equal to $\\mathrm{id}$.\n\nWe've checked half of \"$f$ and $g$ are inverse\"; we still need to check that $f \\circ g$ is equal to the identity on $X$.\nThis follows by identical reasoning: there is a *unique* map $\\mathrm{id}_X : X \\to X$ by the fact that $X$ satisfies the universal property %%note: And we know that this map is the identity, because there's always an identity function from any set $Y$ to itself.%%, but $f \\circ g$ is a map from $X$ to $X$, so it must be $\\mathrm{id}_X$.\n\nSo $f$ and $g$ are bijections from $\\emptyset \\to X$ and $X \\to \\emptyset$ respectively.\n\n# The mixture\n\nThis time, let us suppose $X$ is a set which satisfies the universal property of the empty set.\nThen, in particular, there is a (unique) map $f: X \\to \\emptyset$.\n\nIf we pick any element $x \\in X$, what is $f(x)$?\nIt has to be a member of the empty set $\\emptyset$, because that's the codomain of $f$.\nBut there aren't any members of the empty set!\n\nSo there is no such $f$ after all, and so $X$ can't actually satisfy the universal property after all: we have found a set $Y = \\emptyset$ for which there is no map (and hence certainly no *unique* map) from $X$ to $Y$.\n\nThis method was a bit of a mixture of the two ways: it shows that a certain map can't exist if we specify a certain object.", "date_published": "2016-08-26T15:16:09Z", "authors": ["Patrick Stevens"], "summaries": ["Here we give three proofs that the only [set](https://arbital.com/p/3jz) which satisfies the [https://arbital.com/p/-5zr](https://arbital.com/p/-5zr) is the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) itself."], "tags": [], "alias": "603"} {"id": "16319af34f433a7196278e2409fbf34f", "title": "Product is unique up to isomorphism", "url": "https://arbital.com/p/product_is_unique_up_to_isomorphism", "source": "arbital", "source_type": "text", "text": "Recall the [https://arbital.com/p/-600](https://arbital.com/p/-600) of the [product](https://arbital.com/p/5zv):\n\n> Given objects $A$ and $B$, we define the *product* to be the following collection of three objects, if it exists: $$A \\times B \\\\ \\pi_A: A \\times B \\to A \\\\ \\pi_B : A \\times B \\to B$$ with the requirement that for every object $X$ and every pair of maps $f_A: X \\to A, f_B: X \\to B$, there is a *unique* map $f: X \\to A \\times B$ such that $\\pi_A \\circ f = f_A$ and $\\pi_B \\circ f = f_B$.\n\nWe wish to show that if the collections $(R, \\pi_A, \\pi_B)$ and $(S, \\phi_A, \\phi_B)$ satisfy the above condition, then there is an [https://arbital.com/p/-4f4](https://arbital.com/p/-4f4) between $R$ and $S$. %%note: I'd write $A \\times_1 B$ and $A \\times_2 B$ instead of $R$ and $S$, except that would be really unwieldy. 
Just remember that $R$ and $S$ are both standing for products of $A$ and $B$.%%\n\n# Proof\n\nThe proof follows a pattern which is standard for these things.\n\nSince $R$ is a product of $A$ and $B$, we can let $X = S$ in the universal property to obtain:\n\n> For every pair of maps $f_A: S \\to A, f_B: S \\to B$ there is a unique map $f: S \\to R$ such that $\\pi_A \\circ f = f_A$ and $\\pi_B \\circ f = f_B$.\n\nNow let $f_A = \\phi_A, f_B = \\phi_B$:\n\n> There is a unique map $\\phi: S \\to R$ such that $\\pi_A \\circ \\phi = \\phi_A$ and $\\pi_B \\circ \\phi = \\phi_B$.\n\nDoing the same again but swapping $R$ for $S$ and $\\phi$ for $\\pi$ (basically starting over with the line \"Since $S$ is a product of $A$ and $B$, we can let $X = R$…\"), we obtain:\n\n> There is a unique map $\\pi: R \\to S$ such that $\\phi_A \\circ \\pi = \\pi_A$ and $\\phi_B \\circ \\pi = \\pi_B$.\n\nNow, $\\pi \\circ \\phi: S \\to S$ is a map which we wish to be the identity on $S$; that would get us halfway to the answer, because it would tell us that $\\pi$ is left-inverse to $\\phi$.\n\nBut we can use the universal property of $S$ once more, this time looking at maps into $S$:\n\n> For every pair of maps $f_A: S \\to A, f_B: S \\to B$ there is a unique map $f: S \\to S$ such that $\\phi_A \\circ f = f_A$ and $\\phi_B \\circ f = f_B$.\n\nLetting $f_A = \\phi_A$ and $f_B = \\phi_B$, we obtain:\n\n> There is a unique map $f: S \\to S$ such that $\\phi_A \\circ f = \\phi_A$ and $\\phi_B \\circ f = \\phi_B$.\n\nBut I claim that both the identity $1_S$ and also $\\pi \\circ \\phi$ satisfy the same property as $f$, and hence they're equal by the uniqueness of $f$.\nIndeed, \n\n- $1_S$ certainly satisfies the property, since that would just say that $\\phi_A = \\phi_A$ and $\\phi_B = \\phi_B$;\n- $\\pi \\circ \\phi$ satisfies the property, since we already found that $\\phi_A \\circ \\pi = \\pi_A$ and that $\\phi_B \\circ \\pi = \\pi_B$.\n\nTherefore $\\pi$ is left-inverse to $\\phi$.\n\nNow to complete the proof, we just need to repeat *exactly* the same steps but with $(R, \\pi_A, \\pi_B)$ and $(S, \\phi_A, \\phi_B)$ interchanged throughout.\nThe outcome is that $\\phi$ is left-inverse to $\\pi$.\n\nHence $\\pi$ and $\\phi$ are genuinely inverse to each other, so they are both isomorphisms $R \\to S$ and $S \\to R$ respectively.\n\n# The characterisation is not unique\n\nTo show that we can't do better than \"characterised up to isomorphism\", we show that the product is not characterised *uniquely*.\nIndeed, if $(A \\times B, \\pi_A, \\pi_B)$ is a product of $A$ and $B$, then so is $(B \\times A, \\pi'_A, \\pi'_B)$, where $\\pi'_A(b, a) = a$ and $\\pi'_B(b, a) = b$.\n(You can check that this does satisfy the universal property for a product of $A$ and $B$.)\n\nNotice, though, that $A \\times B$ and $B \\times A$ are isomorphic as guaranteed by the theorem.\nThe isomorphism is the map $A \\times B \\to B \\times A$ given by $(a,b) \\mapsto (b,a)$.", "date_published": "2016-08-28T12:53:27Z", "authors": ["Patrick Stevens"], "summaries": ["Here, we prove that the [https://arbital.com/p/-5zv](https://arbital.com/p/-5zv) characterises objects uniquely up to [https://arbital.com/p/4f4](https://arbital.com/p/4f4). 
That is, if two objects both satisfy the universal property of the product of $A$ and $B$, then they are isomorphic objects."], "tags": ["Proof"], "alias": "60v"} {"id": "193aa1b24eccf1f682e72a94d4447d91", "title": "Category of finite sets", "url": "https://arbital.com/p/category_of_finite_sets", "source": "arbital", "source_type": "text", "text": "[This page is more of a definition page; it's not really intended to explain anything, because all the necessary explanations should already have been done in finite_set.](https://arbital.com/p/comment:)\n\nThe category of finite sets is a nice easy [category](https://arbital.com/p/4c7) to work in. Its objects are the finite sets, and its arrows are the [functions](https://arbital.com/p/3jy) between the [finite sets](https://arbital.com/p/5zy).This makes it a very concrete and understandable category to present some of the basic ideas of category theory.", "date_published": "2016-08-29T07:16:48Z", "authors": ["Patrick Stevens"], "summaries": ["The category of finite sets is a nice easy [category](https://arbital.com/p/4c7) to work in. Its objects are the finite sets, and its arrows are the [functions](https://arbital.com/p/3jy) between the [finite sets](https://arbital.com/p/5zy).This makes it a very concrete and understandable category to present some of the basic ideas of category theory."], "tags": ["Stub"], "alias": "614"} {"id": "0b267a10ef51d169d8b2d26d3b8e6cd5", "title": "Extensionality Axiom", "url": "https://arbital.com/p/extensionality_axiom", "source": "arbital", "source_type": "text", "text": "The axiom of extensionality is one of the fundamental axioms of set theory. Basically, it postulates the condition, by which two sets can be equal. This condition can be described as follows: *if any two sets have exactly the same members, then these sets are equal*. A formal notation of the extensionality axiom can be written as:\n\n$$ \\forall A \\forall B : ( \\forall x : (x \\in A \\iff x \\in B) \\Rightarrow A=B)$$\n\n##Examples\n\n - $\\{1,2\\} = \\{2,1\\}$, because whatever object we choose, it either belongs to both of these sets ($1$ or $2$), or to neither of them (e.g. $5$, $73$)\n\n%%comment:\n \n- If $A = \\{x \\mid x = 2n \\text{ for some integer } n \\}$ and $B = \\{x \\mid x \\text{ is even } \\}$, then $A=B$. The proof goes as follows: $\\forall x : (x \\in A \\Leftrightarrow (x = 2n \\text{ for some integer } n ) \\Leftrightarrow (x/2 = n \\text{ for some integer } n) \\Leftrightarrow (x/2 \\text{ is an integer}) \\Leftrightarrow (x \\text{ is even}) \\Leftrightarrow x \\in B)$ \n\nthat, if simplified, gives $\\forall x : (x \\in A \\iff x \\in B)$, which, by extensionality, implies $A=B$\n\n%%\n\n[Fix the formatting in the currently commeneted example. Every new statement needs to be in a new line, lined up.](https://arbital.com/p/fixme:)\n\n\n\n##Axiom's converse\n\nNote, that the axiom itself only works in one way - it implies that two sets are equal **if** they have the same elements, but does not provide the converse, i.e. any two equal sets have the same elements. Proving the converse requires giving a precise definition of equality, which in different cases can be done differently. %note: Sometimes the extensionality axiom itself can be used to define equality, in which case the converse is simply stated by the axiom.% However, generally, the converse fact can always be considered true, as the equality of two sets means that they are the same one thing, obviously consisting of a fixed selection of objects. 
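(Not part of the original page: as a quick illustration of extensionality in a programming setting, note that Python's built-in sets also compare by membership alone, so two sets built by different rules are equal as soon as they have the same elements. The snippet below is a minimal sketch; the variable names are made up.)

    # Two different constructions of "the even integers between -6 and 6".
    evens_a = {2 * n for n in range(-3, 4)}            # {2n : n an integer, -3 <= n <= 3}
    evens_b = {x for x in range(-6, 7) if x % 2 == 0}  # {x : x is even, -6 <= x <= 6}

    # They were built differently, but they have exactly the same members,
    # so they are equal -- the programming analogue of the extensionality axiom.
    assert evens_a == evens_b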
[The substitution property of equality?](https://arbital.com/p/comment:)", "date_published": "2016-08-29T11:34:30Z", "authors": ["Eric Rogstad", "Ilia Zaichuk"], "summaries": [], "tags": ["Set", "Start"], "alias": "618"} {"id": "d26ef63267bfe75ba199ebcb4cfc7f62", "title": "Comprehensive guide to Bayes' Rule", "url": "https://arbital.com/p/61b", "source": "arbital", "source_type": "text", "text": "Click below to start reading!", "date_published": "2016-09-21T19:55:05Z", "authors": ["Eric Rogstad", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "61b"} {"id": "3a0f767469df69b967df88bd80fd3a25", "title": "Generalized element", "url": "https://arbital.com/p/gen_elt", "source": "arbital", "source_type": "text", "text": "In [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7), a **generalized element** of an object $X$ of a category is any morphism $x : A \\to X$ with [codomain](https://arbital.com/p/3lg) $X$. In this situation, $A$ is called the **shape**, or **domain of definition**, of the element $x$. We'll unpack this.\n\n## Generalized elements generalize elements ##\n We'll need a set with a single element: for concreteness, let us denote it $I$, and say that its single element is $*$. That is, let $I = \\{*\\}$.\n For a given set $X$, there is a natural correspondence between the following notions: an element of $X$, and a function from the set $I$ to the set $X$. On the one hand, if you have an element $x$ of $X$, you can define a function from $I$ to $X$ by setting $f(i) = x$ for any $i \\in I$; that is, by taking $f$ to be the constant function with value $x$. On the other hand, if you have a function $f : I \\to X$, then since $*$ is an element of $I$, $f(*)$ is an element of $X$. So in the category of sets, generalized elements of a set $X$ that have shape $I$, which are by definition maps $I \\to X$, are the same thing (at least up to isomorphism, which as usual is all we care about).\n\n## Generalized elements in sets ##\n In the category of sets, if a set $A$ has $n$ elements, a generalized element of shape $A$ of a set $X$ is an $n$-tuple of elements of $X$. \n\n\n## Sometimes there is no `best shape' ##\nBased on the case of sets, you might initially think that it suffices to consider generalized elements whose shape is the terminal object $1$. However, in the category of groups, since the terminal object is also initial , each object has a unique generalized element of shape $1$. However, in this case, there is a single shape that suffices, namely the integers $\\mathbb{Z}$. A generalized element of shape $\\mathbb{Z}$ of an abelian group $A$ is just an ordinary element of $A$. \n\nHowever, sometimes there is no single object whose generalized elements can distinguish everything up to isomorphism. For example, consider $\\text{Set} \\times \\text{Set}$ . If we use generalized elements of shape $(X,Y)$, then they won't be able to distinguish between the objects $(2^A, 2^{X + B})$ and $(2^{Y + A}, 2^{B})$, up to isomorphism, since maps from $(X,Y)$ into the first are the same as elements of $(2^A)^X\\times(2^{X+B})^Y \\cong 2^{X\\times A + Y \\times (X + B)} \\cong 2^{X \\times A + Y \\times B + X \\times Y}$, and maps from $(X,Y)$ into the second are the same as elements of $(2^{Y+A})^X \\times (2^B)^Y \\cong 2^{X\\times(Y+A) + Y \\times B} \\cong 2^{X \\times A + Y \\times B + X \\times Y}$. 
These objects will themselves be non-isomorphic as long as at least one of $X$ and $Y$ is not the empty set; if both are, then clearly the functor still fails to distinguish objects up to isomorphism. (More technically, it does not reflect isomorphisms. )\nIntuitively, because objects of this category contain the data of two sets, the information cannot be captured by a single homset. This intuition is consistent with the fact that it can be captured with two: the generalized elements of shapes $(0,1)$ and $(1,0)$ together determine every object up to isomorphism.\n\n## Morphisms are functions on generalized elements ##\n If $x$ is an $A$-shaped element of $X$, and $f$ is a morphism from $X$ to $Y$, then $f(x) := f\\circ x$ is an $A$-shaped element of $Y$. The Yoneda lemma states that every function on generalized elements which commutes with reparameterization, i.e. $f(xu) = f(x) u$, is actually given by a morphism in the category.", "date_published": "2016-10-07T15:59:07Z", "authors": ["Dylan Hendrickson", "Eric Bruylant", "Luke Sciarappa"], "summaries": [], "tags": [], "alias": "61q"} {"id": "c6178f456f023b4fe7070e59eb22bf2a", "title": "Introduction to Bayes' Rule odds form", "url": "https://arbital.com/p/62c", "source": "arbital", "source_type": "text", "text": "Click below to start reading!", "date_published": "2016-09-21T19:55:07Z", "authors": ["Eric Rogstad", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "62c"} {"id": "e78a9b6b305f481097a518636e1209d4", "title": "Bayes' Rule and its different forms", "url": "https://arbital.com/p/62d", "source": "arbital", "source_type": "text", "text": "Click below to start reading!", "date_published": "2016-09-21T19:55:08Z", "authors": ["Eric Rogstad", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "62d"} {"id": "103249bd673f12b47d8bf9b30342688e", "title": "Bayes' Rule and its implications", "url": "https://arbital.com/p/62f", "source": "arbital", "source_type": "text", "text": "Click below to start reading!", "date_published": "2016-09-21T19:55:03Z", "authors": ["Eric Rogstad", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "62f"} {"id": "2c35faad1de5b7163c75c312111785b4", "title": "Examination through isomorphism", "url": "https://arbital.com/p/examination_through_isomorphism", "source": "arbital", "source_type": "text", "text": "[Isomorphism](https://arbital.com/p/4f4) is the correct notion of equality between objects in a [category](https://arbital.com/p/4cx). From the category-theoretic point of view, if you want to distinguish between two objects which are isomorphic but not equal, it means that the morphisms in your category don't preserve whatever aspect of the objects allows you to make this distinction, and hence the category doesn't really capture what you want to be working with. If you want to talk about it categorically, you should consider a category with morphisms that preserve all of the structure you care about, including whatever allowed the distinction to be made.\n\nFor example (this example is due to [Qiaochu Yuan](https://www.quora.com/In-what-sense-are-topological-spaces-a-generalization-of-metric-spaces/answer/Qiaochu-Yuan-1)), consider the category with objects [metric spaces](https://arbital.com/p/metric_space), and morphisms [continuous maps](https://arbital.com/p/continuous_function). 
The diameter of a metric space $(X,d)$, which is the supremum of $d(x,y)$ over $x,y \\in X$, is a feature of metric spaces which is not invariant under isomorphism in this category; for example, the subsets $[0,1]$ and $[0,2]$ of $\\mathbb{R}$, equipped with the usual metrics inherited from $\\mathbb{R}$, are isomorphic in this category. There is a continuous map $f : [0,1] \\to [0,2]$ and a continuous map $g : [0,2] \\to [0,1]$ such that $fg$ and $gf$ are identities. For example, one could take $f$ to be multiplication by $2$, and $g$ to be division by $2$. However, the diameter of $[0,1]$ is $1$, and the diameter of $[0,2]$ is $2$. Therefore, insofar as \"diameter\" is a property of metric spaces, the objects of this category are not metric spaces. The correct name for them is \"metrizable spaces\", since this category is [equivalent](https://arbital.com/p/equivalence_of_categories) to the category whose objects are topological spaces whose topology is induced by some metric and whose morphisms are continuous maps.\n\nFor a less realistic (but more obvious) example, consider the category of [groups](https://arbital.com/p/3gd) and arbitrary functions between their [underlying sets](https://arbital.com/p/3gz). The objects of this category are, supposedly, groups, but properties of groups, such as \"simple\", do not respect isomorphism in this category.\n\nAnother example of this is the [product](https://arbital.com/p/4mj) of, say, sets. It determines a functor $\\text{Set}\\times\\text{Set}\\to\\text{Set}$. We would like to say that this is [associative](https://arbital.com/p/3h4), but this is false; a typical element of $A \\times (B \\times C)$ looks like $(a,(b,c))$, while a typical element of $(A \\times B) \\times C$ looks like $((a,b),c)$. Since these sets have different elements, they are not [equal](https://arbital.com/p/618). However, they are [isomorphic](https://arbital.com/p/4f4). In fact, the two functors $\\text{Set}\\times\\text{Set}\\times\\text{Set}\\to\\text{Set}$ given by $(A,B,C) \\mapsto A \\times (B \\times C)$ and $(A,B,C) \\mapsto (A \\times B) \\times C$ are isomorphic in the category of functors $\\text{Set}\\times\\text{Set}\\times\\text{Set}\\to\\text{Set}$. That is, they are naturally isomorphic.", "date_published": "2016-11-30T23:22:14Z", "authors": ["Eric Rogstad", "Kevin Clancy", "Luke Sciarappa"], "summaries": [], "tags": ["Start"], "alias": "64t"} {"id": "511af06a9c9d1e8c2d73c245660656f6", "title": "Modal combat", "url": "https://arbital.com/p/modal_combat", "source": "arbital", "source_type": "text", "text": "Modal combat is a way of studying the special case of the [one-shot prisoner's dilemma](https://arbital.com/p/5py) in which the players are programs which have read access to each other's source code.
There is a class of such agents, defined by expressions in modal logic, which can do general provability reasoning about themselves and each other, and furthermore there is a quick way to evaluate how two such agents will behave against each other.\n\nWhile they are not allowed to run each other's source code %%note: Otherwise we risk infinite regression as $A$ simulates $B$ that simulates $A$...%%, we are going to generously give each of them access to a [halting oracle](https://arbital.com/p/) so that they can make powerful reasoning about each other's behavior.\n\nWait, why is this interesting? Are not your assumptions a bit overpowered? Well, they are. But thinking about this machines which are obviously way more powerful than our current capabilities is a useful, albeit unrealistic, proxy for superintelligence. We should furthermore remark that while we are going to stick with halting oracles through this article there exist bounded versions of some of the results.\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n##Optimality in modal combat\n\nWe are especially interested in figuring out how would one go about making ideal competitors in this setup, because that may shed light on [decision theory](https://arbital.com/p/) issues.\n\nFor that, we need a good optimality criterion to allow us to compare different players. Let's think about some conditions we would like our player to satisfy:\n\n1. **Unexploitability**: We would like our player to never end in the sucker's payoff (we cooperate and the opponent defects).\n2. **Robust cooperation**: Whenever it is possible to achieve mutual cooperation, we would like to do so. A particular way of highly desirable robust cooperation is achieving cooperation with one self. We call this property of self-trust **Löbian cooperation**.\n3. **Robust exploitation**: Whenever it is possible to exploit the opponent (tricking him into cooperating when we defect) we wish to do so.\n\nThe question now becomes: it is possible to design a program which performs optimally according to all those metrics? How close can we get?\n\n##DefectBot and CliqueBot\n\nThe simplest program that achieves unexploitability is the program which defects against every opponent, commonly known as $DefectBot$. This is a rather boring result, which does not manage to even cooperate with itself.\n\nTaking advantage of the access to source codes, we could build a program $CliqueBot$ which cooperates if and only if the opponent's source code is equal to his own code (wait, how do you do that? With the magic of [quining](https://arbital.com/p/322) we can always treat our own code as data!). Unexploitable? Check. Achieves Löbian cooperation? Check. But can't we do better? Innocuous changes to $CliqueBot$'s source code (such as adding comments) result in agents which fail to cooperate, which is rather lame.\n\n##Modal agents\nBefore we start making more complicated program designs we need to develop a good reasoning framework to figure out how the matches will end. Simple simulation will not do, because we have given them access to a halting oracle to make proofs about their opponent's program.\n\nFortunately, there is one formal system which nicely handles reasoning about provability: cue to [https://arbital.com/p/5l3](https://arbital.com/p/5l3).\n\nin provability logic we have a modal operator $\\square$ which [Solovay's theorem](https://arbital.com/p/5n5) tells us can be read as \"it is provable in $PA$ that\". 
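(For reference, and not spelled out in the article: $GL$ can be axiomatized as classical propositional logic together with the two axiom schemas below, closed under *modus ponens* and necessitation, i.e. from $A$ infer $\square A$.)

$$\square(A \to B) \to (\square A \to \square B) \qquad \text{(distribution)}$$
$$\square(\square A \to A) \to \square A \qquad \text{(Löb's axiom)}$$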
Furthermore, this system of logic is fully decidable, though that comes at the expense of some expressive power (we, for example, do not have quantifiers in modal logic).\n\nThe next step is to express agents in modal combat as expressions of modal logic. \n\nFor that we need to extend provability logic with second-order formulas. A second-order formula $\\phi(x)$ is a fixed point expression in the language of modal logic in which $x$ appears as a formula and $\\phi$ can be used as a variable.\n\nFor example, we could have the formula $\\phi(x) \\leftrightarrow \\square x(\\phi)$ and the formula $\\psi(x)\\leftrightarrow \\neg\\square x(\\psi)$. To evaluate a formula with its free variable instantiated, we repeatedly substitute each formula variable by its defining fixed point expression, as many times as required, until the formula applied to its argument is expressed in terms of itself, and then we compute its [fixed point in GL](https://arbital.com/p/5lx).\n\nFor example, $\\psi(\\phi)\\leftrightarrow \\neg\\square\\phi(\\psi)\\leftrightarrow \\neg\\square\\square\\psi (\\phi)$, and the fixed point can be calculated to be $GL\\vdash \\psi(\\phi)\\leftrightarrow \\neg [\\square \\bot \\vee (\\square\\square\\bot \\wedge \\neg\\square \\bot)]$, which simplifies to $\\neg\\square\\square\\bot$. This is arithmetically true, so we say that $\\psi(\\phi)$ evaluates to true; a similar computation shows that $\\phi(\\psi)$ evaluates to false.\n\nIt is possible to reference other formulas in the fixed point expression of a formula, though one precaution must be taken lest we fall into circular dependencies. We are going to say that a formula is of rank $0$ if it is defined in terms of itself and no other formula. Then we say that a formula can only be defined in terms of itself and lower rank formulas, and define the rank of a formula as the biggest rank in its definition plus one. If the rank is well defined, then the formula is also well defined. To guarantee the existence of a fixed point it will also be convenient to restrict ourselves to [fully modalized](https://arbital.com/p/5m4) formulas.\n\n>A **modal agent** of rank $k$ is a one-place [fully modalized](https://arbital.com/p/5m4) modal formula of the kind we have just defined of rank $k$, whose free variable represents the opponent's source code. To simulate a match, we substitute the free variable by the opponent's formula, and evaluate it according to the procedure described in the previous paragraph. If the fixed point is arithmetically true, then the agent cooperates, and if it is false it defects. (The maximum number of nested boxes occurring in the sentence is the agent's modal depth, which is a different quantity from its rank.)\n\nThus we can express $DefectBot$ as a modal agent with the modal formula $DB(x)\\leftrightarrow \\bot$.\n\n\n##FairBot and PrudentBot\nTo try to beat CliqueBot's performance, we will define a bot which takes advantage of the halting oracle to achieve robust cooperation with a greater class of agents.\n\n FairBot(x): if Prv(x(FairBot))=C then output C else output D\n\n$FairBot$ tries to find a proof in $PA$ that its opponent is going to cooperate with it, and if it succeeds then it cooperates as well.\n\nWe can express $FairBot$ as a modal agent represented by the modal formula $FB(x) \\leftrightarrow \\square x(FB)$.\n\nSo, does $FairBot$ cooperate with itself? Let's find out!\n\nThe result of the match against itself is the same as the fixed point of $FB(FB)\\leftrightarrow \\square FB(FB)$.
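(An aside that is not part of the original article: matches between the agents of this article — FairBot and DefectBot above, and PrudentBot and CooperateBot introduced below — can also be checked mechanically. The minimal Python sketch below reads $\square$ as "true at every earlier stage" of a finite chain, which should agree with the GL fixed-point computation for these fully modalized agents; all function names are invented for illustration.)

    # A minimal sketch (not from the article): evaluate modal-combat matches by
    # reading the box as "true at every earlier stage" of a finite chain.

    def box(statement, stage):
        """Box: `statement` holds at every stage strictly below `stage`."""
        return all(statement(s) for s in range(stage))

    def box_pa1(statement, stage):
        """Box for PA+1, i.e. box(consistency -> statement): stage 0 is skipped."""
        return all(statement(s) for s in range(1, stage))

    # Each bot maps its opponent to a function "stage -> does this bot cooperate?".
    def DefectBot(opponent):
        return lambda stage: False                          # DB(x) <-> bottom

    def CooperateBot(opponent):
        return lambda stage: True                           # CB(x) <-> top

    def FairBot(opponent):
        # FB(x) <-> box x(FB); terminates because box only looks at earlier stages.
        return lambda stage: box(opponent(FairBot), stage)

    def PrudentBot(opponent):
        # PB(x) <-> box x(PB)  and  box_{PA+1} not x(DB)
        return lambda stage: (box(opponent(PrudentBot), stage)
                              and box_pa1(lambda s: not opponent(DefectBot)(s), stage))

    def match(bot_a, bot_b, depth=10):
        """Evaluate both agents at a stage safely beyond their modal depth."""
        return bot_a(bot_b)(depth), bot_b(bot_a)(depth)

    print(match(FairBot, FairBot))          # (True, True): Löbian cooperation
    print(match(FairBot, DefectBot))        # (False, False)
    print(match(PrudentBot, FairBot))       # (True, True)
    print(match(PrudentBot, CooperateBot))  # (False, True): PrudentBot exploits CooperateBot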
According to the methods for computing fixed points, the fixed point is thus $\\top$, which is of course arithmetically true%%note: This can also be figured out using [https://arbital.com/p/55w](https://arbital.com/p/55w)%%. Thus $FairBot$ satisfies the Löbian cooperation condition!\n\nFurthermore, this holds no matter how $FairBot$ is implemented, so it is way more robust than $CliqueBot$!\n\nWhat happens if $FairBot$ faces $DefectBot$? Then we have to find the fixed point of $FB(DB)\\leftrightarrow \\square \\bot$, which is obviously $\\square \\bot$, and since this expression is arithmetically false %%note: PA+1 disproves it!%%, $FairBot$ does not cooperate with $DefectBot$.\n\n$FairBot$ is an impressive formalism, and it achieves very robust cooperation. However, it fails to exploit other agents. For example, it fails to exploit the simple program $CooperateBot$ %%note: $CB(x)\\leftrightarrow \\top$%%, which always cooperates.\n\nAn improvement can be made by considering the program represented by the rank $1$ modal formula $PB(x)\\leftrightarrow\\square [x(PB)] \\wedge \\square[\\neg\\square\\bot \\rightarrow \\neg x(DB)]$, which we call $PrudentBot$. By its definition, $PrudentBot$ cooperates with another modal agent if and only if it can prove that the opponent will cooperate with it, and furthermore $PA+1$ proves that its opponent will not cooperate with $DefectBot$.\n\nThus $PrudentBot$ cooperates with itself and with $FairBot$, but defects against $DefectBot$ and $CooperateBot$. In fact, $PrudentBot$ dominates $FairBot$.\n\n##Can we do better?\nOne question that comes to mind: is it possible to design a modal agent which, without being exploitable, achieves cooperation with every agent that cooperates against at least some opponent? Sadly, no. Consider the bot $TrollBot$ defined by $TB(x)\\leftrightarrow \\square x(DB)$%%note: Exercise for the reader: does $TrollBot$ cooperate with itself?%%. That is, $TrollBot$ cooperates with you if and only if you cooperate with $DefectBot$.\n\nThen $CooperateBot$ achieves cooperation with $TrollBot$, but no bot cooperates with $TrollBot$ while defecting against $DefectBot$.\n\nIn general, no good optimality criterion has been developed. An open question is to formalize a good notion of optimality in the class of modal agents and design a bot which achieves it.", "date_published": "2016-09-26T22:01:57Z", "authors": ["Jaime Sevilla Molina", "Eric Bruylant", "Patrick LaVictoire"], "summaries": [], "tags": ["C-Class"], "alias": "655"} {"id": "2deb3b07fd98fa95cfed298d40dcc631", "title": "Least common multiple", "url": "https://arbital.com/p/least_common_multiple", "source": "arbital", "source_type": "text", "text": "Given two positive natural numbers $a$ and $b$, their **least common multiple** $\\text{LCM}(a,b)$ is the smallest natural number divisible by both $a$ and $b$.
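(Not part of the original page: a tiny sketch that computes the least common multiple directly from this definition, by scanning multiples of the larger number; a faster method via the greatest common divisor is described below.)

    def lcm(a, b):
        """Smallest positive natural number divisible by both a and b."""
        candidate = max(a, b)
        while candidate % a != 0 or candidate % b != 0:
            candidate += max(a, b)
        return candidate

    assert lcm(12, 10) == 60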
As an example take $a=12, b=10$; then the smallest number divisible by both of them is $60$.\n\nThere is an equivalent definition of the LCM, which is strange at first glance but turns out to be mathematically much more suited to generalisation: the LCM $l$ of $a$ and $b$ is the natural number which is itself divisible by both $a$ and $b$, and such that for every number $c$ divisible by both $a$ and $b$, we have $l$ divides $c$.\nThis describes the LCM as a [poset least upper bound](https://arbital.com/p/3rc) (namely the [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb) $\\mathbb{N}$ under the relation of divisibility).\n\nNote that for given $a$, $b$, their product $ab$ is a natural number divisible by both of them. The least common multiple $\\text{LCM}(a,b)$ divides the product $ab$, and for $\\text{GCD}(a,b)$ the [https://arbital.com/p/-5mw](https://arbital.com/p/-5mw) of $a, b$ we have the formula\n$$a\\cdot b = \\text{GCD}(a,b) \\cdot \\text{LCM}(a,b). $$\nThis formula offers a fast way to compute the least common multiple: one can compute $\\text{GCD}(a,b)$ using the [https://arbital.com/p/euclidean_algorithm](https://arbital.com/p/euclidean_algorithm) and then divide the product $ab$ by this number.\n\nIn practice, for small numbers $a,b$ it is often easier to use their factorization into [prime numbers](https://arbital.com/p/4mf). In the example above we have $12=2 \\cdot 2 \\cdot 3$ and $10=2 \\cdot 5$, so if we want to build the smallest number $c$ divisible by both of them, we can take $60=2 \\cdot 2 \\cdot 3 \\cdot 5$. Indeed, to compute $c$ look at each prime number $p$ dividing one of $a,b$ (in the example $p=2,3,5$). Then writing $c$ as a product, we take the factor $p$ the maximal number of times it appears in either $a$ or $b$. The factor $p=2$ appears twice in $12$ and once in $10$, so we take it two times. The factor $3$ appears once in $12$ and zero times in $10$, so we only take it once, and so on.", "date_published": "2016-09-25T19:50:36Z", "authors": ["Kevin Clancy", "Johannes Schmitt", "Patrick Stevens"], "summaries": ["The **least common multiple (LCM)** of two positive [natural numbers](https://arbital.com/p/45h) a, b is the smallest natural number that both a and b divide, so for instance LCM(12,10) = 60."], "tags": [], "alias": "65x"} {"id": "9688dff7aae0a11dc1cad2e04f4bf9aa", "title": "Up to isomorphism", "url": "https://arbital.com/p/up_to_isomorphism", "source": "arbital", "source_type": "text", "text": "summary: \"The property $P$ holds up to isomorphism\" is a phrase which means \"we might say an object $X$ has property $P$, but that's an abuse of notation. When we say that, we really mean that there is an object [isomorphic](https://arbital.com/p/4f4) to $X$ which has property $P$\".
Essentially, it means \"the property might not hold as stated, but if we replace the idea of *equality* by the idea of *isomorphism*, then the property holds\".\n\nRelatedly, \"The object $X$ is [well-defined](https://arbital.com/p/5ss) up to isomorphism\" means \"if we replace $X$ by an object isomorphic to $X$, we still obtain something which satisfies the definition of $X$.\"\n\n# Examples\n\n## Groups of order $2$\n\nThere is only one [https://arbital.com/p/-3gd](https://arbital.com/p/-3gd) of [order](https://arbital.com/p/3gg) $2$ *up to isomorphism*.\nWe can define the object \"group of order $2$\" as \"the group with two elements\"; this object is well-defined up to isomorphism, in that while there are several different groups of order $2$ %%note: Two such groups are $\\{0,1\\}$ with the operation \"addition [modulo](https://arbital.com/p/5ns) $2$\", and $\\{e, x \\}$ with [https://arbital.com/p/-54p](https://arbital.com/p/-54p) $e$ and the operation $x^2 = e$.%%, any two such groups are isomorphic.\nIf we don't think of isomorphic objects as being \"different\", then there is only one distinct group of order $2$.", "date_published": "2016-09-24T17:29:22Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["A phrase mathematicians use when saying \"we only care about the [structure](https://arbital.com/p/structure_mathematics) of an [object](https://arbital.com/p/object_mathematics), not about specific implementation details of the object\"."], "tags": ["Start"], "alias": "65y"} {"id": "35b669919b1e43a96d75f029a3ab077b", "title": "Greatest lower bound in a poset", "url": "https://arbital.com/p/poset_greatest_lower_bound", "source": "arbital", "source_type": "text", "text": "In a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb), the greatest lower bound of two elements $x$ and $y$ is the \"largest\" element which is \"less than\" both $x$ and $y$, in whatever ordering the poset has.\nIn a rare moment of clarity in mathematical naming, the name \"greatest lower bound\" is a perfect description of the concept: a \"lower bound\" of two elements $x$ and $y$ is an object which is smaller than both $x$ and $y$ (it \"bounds them from below\"), and the \"greatest lower bound\" is the greatest of all the lower bounds.\n\nFormally, if $P$ is a [set](https://arbital.com/p/3jz) with partial order $\\leq$, and given elements $x$ and $y$ of $P$, we say an element $z \\in P$ is a **lower bound** of $x$ and $y$ if $z \\leq x$ and $z \\leq y$.\nWe say an element $z \\in P$ is the **greatest lower bound** of $x$ and $y$ if:\n\n- $z$ is a lower bound of $x$ and $y$, and\n- for every lower bound $w$ of $x$ and $y$, we have $w \\leq z$.", "date_published": "2016-09-24T07:09:26Z", "authors": ["Patrick Stevens"], "summaries": ["In a [https://arbital.com/p/-3rb](https://arbital.com/p/-3rb), the greatest lower bound of two elements $x$ and $y$ is the \"largest\" element which is \"less than\" both $x$ and $y$, in whatever ordering the poset has."], "tags": ["Stub"], "alias": "65z"} {"id": "56abec7cb2afb552597f673e8d29703f", "title": "An introductory guide to modern logic", "url": "https://arbital.com/p/intro_modern_logic", "source": "arbital", "source_type": "text", "text": "Welcome! 
We are about to start a journey through modern logic, in which we will visit two central results of this branch of mathematics, which are often misunderstood and sometimes even misquoted: [Löb's theorem](https://arbital.com/p/) and [Gödel's second incompleteness theorem](https://arbital.com/p/).\n\nModern logic can be said to have been born in the ideas of Gödel regarding the amazing property of self-reference that plagues arithmetic. Roughly explained, this means that we can force an interpretation over natural numbers which relates how deductions are made in arithmetic to statements about arithmetical concepts such as divisibility.\n\nThis guide targets people who are already comfortable with mathematical thinking and mathematical notation. However, it requires no specific background knowledge about logic or math to be read. The guide starts lightweight and gets progressively more technical.\n\n#Formal proofs\nWhat is logic anyway? A short explanation refers to the fact that we humans happen to be able to draw conclusions about things we have not experienced by using facts about how the world is. For example, if you see somebody enter the room dripping water, then you infer that it is raining outside, even though you have not seen the rain yourself.\n\nThis may seem trivial, but what if we are trying to program that into a computer? How can we capture this intuition of what reasoning is and write down the rules for proper reasoning in a way precise enough to be turned into a program?\n\nTo accomplish that is to do logic. The ultimate goal is to have an efficient procedure which, if followed, allows us to **deduce** every true consequence of a set of premises, and which is so simple that it could be taught to a dumb machine that only follows mechanical operations.\n\nSo let's try to formalize reasoning!\n\nWe are going to draw inspiration from mathematicians and see what they are doing when proving a result.\n\nThough the method is often convoluted, non-linear and hard to follow, in essence what they are doing is writing down some hypotheses and facts that they assume to be true. Then they apply some manipulation rules which they trust to derive true facts when applied to true premises. After iterating this process, they arrive at the desired result.\n\n## Getting formal\n\nWell, let's wrap together this intuitive process in a formal definition. \n\n> A [proof of a sentence $\\phi$](https://arbital.com/p/) is a sequence of [well formed sentences](https://arbital.com/p/) such that every sentence in the sequence is either an [https://arbital.com/p/-6cd](https://arbital.com/p/-6cd) or can be derived from the previous sentences using a [derivation rule](https://arbital.com/p/), and the final sentence in the sequence is $\\phi$. A sentence which has a proof is called a [theorem](https://arbital.com/p/).\n\nDon't be scared by all the new terms! Let's go briefly over them.\n\nA **well formed sentence** is just a combination of symbols which satisfies certain criteria that make it easy to interpret. The sentences we will be dealing with are logical formulas, which use a particular set of symbols such as $=, \\wedge, \\implies$. Our goal is to assign truth values to sentences. That is, to say if particular sentences are true or false.\n\nAn **axiom** is something that we assume to be true without requiring a proof.
For example, a common axiom in arithmetic is assuming that $0$ is different than $n+1$ for all natural numbers $n$. Or expressed as a well formed sentence, $\\forall n. 0 \\not = n+1$. %%note:The symbol $\\forall$ is an upside-down A and means \"[for all](https://arbital.com/p/)\".%%\n\nA **derivation rule** is a valid basic step in a proof. Perhaps the simplest example is [modus ponens](https://arbital.com/p/) . Modus ponens tells you that if you have deduced somehow that $A\\implies B$ and also $A$, then you can deduce from both facts $B$.\n\nThose three components constitute the basis of a **logical system** for reasoning: a criteria for constructing [well formed sentences](https://arbital.com/p/), a set of [axioms](https://arbital.com/p/6cd) which we know to be true %%note: The set of axioms does not have to be finite! In particular, we can specify a criteria to identify infinitely many axioms. This is called an [axiom schema](https://arbital.com/p/)%% and some [derivation rules](https://arbital.com/p/) which allow us to deduce new conclusions.\n\n%%hidden(Show example):\nFor a rather silly example, let's consider the system $A$ with the following components:\n* Well formed sentences: the set of words in the Webster dictionary.\n* Axioms: The word 'break'.\n* Deduction rules: If a word $w$ is a theorem of $A$, then so it is a word which only differs in one letter of $w$, either by adding, substracting or replacing it anywhere in $w$.\n\nThus the word 'trend' is a theorem of $A$, as shown by the 4 steps long proof 'break - bread - tread - trend'. The first sentence in the proof is an axiom. Every sentence after is deduced from our deduction rule using the previous sentence as a premise.\n%%\n\n## Interpretations\n\nGiven how we defined a logical system, there is a certain degree of freedom we have, as we can choose the underlying sets of well formed sentences, the axioms and the deduction rules. However, as shown in the example above, if we want to make meaningful deductions we should be careful in what components do we choose for our logical system.\n\nThe criteria we use to choose these components is intertwined with the [https://arbital.com/p/-model](https://arbital.com/p/-model) we are trying to reason about. Models can be anything that captures an aspect of the world, or just a mathematical concept. To put it simply, they are the thing we are trying to reason *about*.\n\nDepending on what kind of entities the model has, we will lean towards a particular choice of sentences. For example, if our model consists in just facts which are subject to some known logical relations we could use the language of [https://arbital.com/p/-propositional_logic](https://arbital.com/p/-propositional_logic). But if our model contains different objects which are related in different ways, we will prefer to use the language of [https://arbital.com/p/first_order_logic](https://arbital.com/p/first_order_logic).\n\nEach model has an interpretation associated, which relates the terms in the sentences to aspects of the model. For all practical purposes, a model and its interpretation are the same once we have fixed the language, and thus we will use the terms as synonyms.\n\nThen, depending on how the model evolves and what things can we infer from what facts we should choose some deduction rules. Together, the sentences and the deduction rules specify a class of possible models. 
If the deduction rules correctly capture the properties of the class of models we had in mind, in the sense that from true premises we really derive true consequences, then we will say that they are [https://arbital.com/p/-sound](https://arbital.com/p/-sound). \n\nIntuitively, we say that the choice of language and deduction rules decides the \"shape\" that our models will have. Every model with such a shape will be part of the universe of models specified by those components of a logical system.\n\nWith this imagery of a class of models, we can think of axioms as an attempt to reduce the class of models and pin down the concrete model we are interested in, by choosing as axioms sentences whose interpretation is true in only the model we are interested in and in no other model %%note: sadly, there are occasions in which this is just not possible%%.\n\nA property tightly related to soundness is [https://arbital.com/p/-completeness](https://arbital.com/p/-completeness). A system is complete if every sentence which is true under all interpretations which satisfy our axioms has a proof.\n\nIt is important to realize that logical systems can be talking about many models at once. In particular, if two interpretations satisfy the axioms but disagree on the truth of a particular sentence, then that sentence will be [undecidable](https://arbital.com/p/) in our model.\n\nThis suggest a technique for proving independence of logical statements. To show that a certain sentence is independent from some axioms, construct two models which satisfy those axioms but contradict each other on the value of the statement. A nice example of this is how Lobachevsky showed that [Euclides Fifth postulate is independent from the other four postulates](https://en.wikipedia.org/wiki/Parallel_postulate#History).\n\n\n\n## Introducing PA\n\nFor our purposes, we will stick to a particular logical system called [Peano arithmetic](https://arbital.com/p/3ft). This particular choice of axioms and deduction rules is interesting because it reflects a lot of our intuitions about how numbers work, which in turn can be used to talk about many phenomena in the real world. In particular, *they can talk about themselves*.\n\nBefore we move on to this \"talking about themselves\" business I am going to introduce more notation. We will refer to Peano Arithmetic as $PA$. If a sentence $\\phi$ follows from the axioms of $PA$ and its deduction rules %%note: ie, there is a proof of $\\phi$ using only axioms and deduction rules from $PA$%%, we will say that $PA\\vdash \\phi$, read as \"$PA$ proves $\\phi$\".\n\n#Self reference and the provability predicate\nLet's try to get an intuition of what I mean when I say that numbers can talk about themselves.\n\nThe first key intuition is that we can refer to arbitrary sequences of characters using numbers. For example, I can declare that from now onwards the number $1$ is going to refer to the sequence of letters \"I love Peano Arithmetic\". Furthermore, using a clever rule to relate numbers and sentences I can make sure that every possible finite sentence gets assigned a number %%note: This is called [https://arbital.com/p/-encoding](https://arbital.com/p/-encoding)%%.\n\nA simple encoding goes by the name of **Gödel encoding**, and consists of assigning to every symbol we are going to allow to be used in sentences a number. For example, $=$ could be $1$, and $a$ could be $0$, and so on and so forth. 
Then we could encode a sentence consisting of $n$ symbols as the number $2^{a_1}3^{a_2}5^{a_3}\\cdots p(n)^{a_n}$. That is, we take the number which is the product of the $n$th first primes with exponents equal to the assigned numbers.%%note:This is possible because every number can be [decomposed as a unique product of primes](https://arbital.com/p/-5rh).%%\n\nThe process can be repeated to encode sequences of sentences as single numbers. Thus, we can encode whole proofs as single numbers. We could also encode sequences of sequences of sentences, but that is going too far for our purposes.\n\n\n\nSo the point is, we can exchange sentences by numbers and vice versa. Can we also talk about deduction using numbers?\n\nIt turns out we can! With the encoding we have chosen it is cumbersome to show, but we can write a **predicate** $Axiom(x)$ %%note: A predicate is a well formed sentence in which we have left one or more \"holes\" in the form of variables which we can substitute for literal numbers or quantify over. For example, we can have the predicate $IsEqualTo42(x)$ of the form $x = 42$. Then $PA\\vdash IsEqualTo42(42)$ and $PA\\vdash \\exists x IsEqualTo42(x)$, but $PA\\not\\vdash IsEqualTo42(7)$%% in the language of Peano Arithmetic such that $PA\\vdash Axiom(\\textbf{n})$ if and only if $n$ is a number which encodes an axiom of $PA$. \n\nFurthermore, deduction rules which require $n$ premises can be represented by $n+1$ predicates $Rule(p_1, p_2,..., p_n, r)$ which is provable in $PA$ if and only if $p_1, ...., p_n$ are numbers encoding valid premises for the rule and $r$ is the corresponding deduced fact.\n\nA bit more of work and we can put together a predicate $Proof(x,y)$, which is provable in $PA$ if and only if $x$ encodes the valid proof of the sentence $y$. Isn't that neat!? \n\nSince we are not too interested in the proof itself, and more in the fact that there is a proof at all, we are going to construct the **provability predicate** $\\exists x. Proof(x,y)$, which we will call $\\square_{PA}(y)$.%%note: $\\exists$ is a backwards E and means \"there exists\"%%. So then, $\\square_{PA}(x)$ literally means \"There is a proof in $PA$ of $x$\". \n\nThus we can say that if the number $\\ulcorner 1+1=2 \\urcorner$ corresponds to the sentence $1+1=2$%%note: This notation remains in effect for the rest of the article.%%, then $PA\\vdash \\square_{PA}(\\ulcorner 1+1=2 \\urcorner)$. That is, $PA$ proves that there is a proof in $PA$ of $1+1=2$.\n\nNow, the predicates we have seen so far are quite intuitive and nicely behaved, in the sense that their deducibility from $PA$ matches quite well what we would expect from our intuition. However, adding the existential quantifier in front of $Proof(x,y)$ we get some nasty side effects.\n\nWarning! Really bad analogy incoming!\n\nThe thing is that $PA$ \"hallucinates\" numbers which it is not sure whether they exist or not. Those are no mere natural numbers, but infinite numbers further in the horizon of math. While $PA$ cannot prove their existence, it neither cannot prove its non existence. So $PA$ becomes wary of asserting that proofs do not exist for a certain false sentence. After all, one of those **non standard** numbers may encode the proof of the false sentence! Who is to prove otherwise?\n\nCan we patch this somehow? Maybe by adding more axioms or deduction rules to $PA$ so that it can prove that those numbers do not exists? The answer is yes but not really. 
While it is doable in principle, the resulting theory becomes too difficult to manage, and we can no longer use it for effective deduction %%note: Technically, $PA$ loses its [semidecidability](https://arbital.com/p/) that way.%%.\n\nWe will now proceed to prove two technical results which better formalize this idea: Löb's theorem and Gödel's Second Incompleteness Theorem.\n\n#Löb's theorem\n\nLöb's result follows from the intuitive properties of deduction that the provability predicate actually manages to capture. This points to the fact that it is not our definition that is wrong, but rather that there is a fundamental impossibility in logic.\n\nThe intuitive, good properties of $\\square_{PA}$ are known as the Hilbert-Bernays derivability conditions, and are as follows:\n\n1. If $PA\\vdash A$, then $PA\\vdash \\square_{PA}(\\ulcorner A\\urcorner)$.\n2. $PA\\vdash \\square_{PA}(\\ulcorner A\\rightarrow B\\urcorner) \\rightarrow (\\square_{PA}(\\ulcorner A\\urcorner) \\rightarrow \\square_{PA}(\\ulcorner B\\urcorner))$\n3. $PA\\vdash \\square_{PA}(\\ulcorner A\\urcorner) \\rightarrow \\square_{PA} \\square_{PA} (\\ulcorner A\\urcorner)$.\n\nLet's go over each of them in turn. \n\n1) says reasonably that if $PA$ proves a sentence $A$, then it also proves that there is a proof of $A$.\n\n2) affirms that if you can prove that $A$ implies $B$ then the existence of a proof of $A$ implies the existence of a proof of $B$. This is quite intuitive, as we can concatenate a proof of $A$ with a proof of $A\\rightarrow B$ and deduce $B$ from an application of *modus ponens*.\n\n3) is a technical result which states that the formalization of 1) is provable when we are dealing with sentences of the form $\\square_{PA}(\\ulcorner A \\urcorner)$.\n\nOne more ingredient is needed to derive Löb's theorem: the [https://arbital.com/p/-59c](https://arbital.com/p/-59c), which states that for all predicates $\\phi(x)$ there is a formula $\\psi$ such that $PA\\vdash \\psi \\leftrightarrow \\phi(\\ulcorner \\psi \\urcorner)$.\n\nThe details of the proof can be found in [https://arbital.com/p/59b](https://arbital.com/p/59b). \n\nThe details are not essential to the main idea, but it can be illustrative to work through the formal proof. Plus, the intuition behind it relates to the non-standard numbers we talked about before.\n\nWhat is really interesting is that now we are in a position to state and understand Löb's theorem!\n\n> **Löb's theorem**\n\n> If $PA\\vdash \\square_{PA}(\\ulcorner A\\urcorner) \\rightarrow A$, then $PA\\vdash A$.\n\n> (or equivalently, if $PA\\not\\vdash A$ then $PA\\not\\vdash \\square_{PA}(\\ulcorner A\\urcorner) \\rightarrow A$).\n\nTalk about an unintuitive result! Let's take a moment to ponder its meaning.\n\nIntuitively, we should be able to derive from the existence of a proof of $A$ that $A$ is true. After all, proofs are guarantees that something really follows from the axioms, so if we got those right and our derivation rules are correct then $A$ should be true. However, $PA$ does not trust that being able to prove the existence of a proof is enough to make $A$ true!\n\nIndeed, he needs to be able to see the proof for himself. In other words, there must be an $n$ such that $PA\\vdash Proof(\\textbf n, \\ulcorner A\\urcorner)$. Then he will trust that it is indeed the case that $A$.\n\nI will repeat that again because it looks like a tongue twister. Suppose somebody came and assured $PA$ that from the axioms of $PA$ it follows that there exists a number $n$ satisfying $Proof(\\textbf n,\\ulcorner A\\urcorner)$.
Then $PA$ would say, show me the proof or I am going to assume that that $n$ is actually a non standard number and you are a friggin' liar.\n\nIf somebody just comes saying he has a proof, but produces none, $PA$ becomes suspicious. After all, the proof in question could be a non-standard proof encoded by one of the dreaded non standard numbers! Who is going to trust that?!\n\n\n#Gödel II\nAnd finally we arrive at Gödel's Second Incompleteness Theorem, perhaps the most widely misunderstood theorem in all of mathematics.\n\nWe first need to introduce the notion of [consistency](https://arbital.com/p/). Simply enough, a logical system is consistent if it does not prove a contradiction, where a contradiction is something impossible. For example, it cannot ever be the case that $P\\wedge \\neg P$, so $P\\wedge \\neg P$ is a contradiction no matter what $P$ is. We use the symbol $\\bot$ to represent a contradiction.\n\nThe statement of GII is as follows:\n\n> **Gödel's Second Incompleteness Theorem** (concrete form)\n> If $PA$ is consistent, then $PA\\not \\vdash \\neg \\square_{PA}(\\bot)$\n\nNotice that GII follows quite directly from Löb's theorem: apply Löb's theorem to the sentence $\\bot$. Since $\\neg \\square_{PA}(\\bot)$ is equivalent to $\\square_{PA}(\\bot) \\rightarrow \\bot$, if $PA$ proved it then Löb's theorem would give $PA\\vdash \\bot$, contradicting consistency. Actually, it is the case that Löb's theorem also follows from GII, so [both results are equivalent](https://arbital.com/p/5hs).\n\nThis result can be interpreted as follows: you cannot make a system as complex as $PA$ in which you can talk about deduction with complete certainty, and thus about consistency. In particular, such a system cannot prove that it itself is consistent.\n\nThe result is startling, but in the light of our previous exposition it is clear what is going on! There is always the shadow of non standard numbers menacing our deductions.\n\n#Summary\nAnd that concludes our introduction to formal logic!\n\nTo recall some important things we have learned:\n\n* Logical systems capture the intuition behind deductive reasoning. They are composed of axioms and deductive rules that are chained to compose proofs.\n* Simple logical systems that are used to talk about numbers, such as $PA$, can be interpreted as talking about many things through encodings, and in particular they can talk about themselves.\n* There are expressions in logic that capture the concepts involved in deductions.
However, the most important of them, the provability predicate $\\square_{PA}$, fails to satisfy some intuitive properties, due to the inability of $PA$ to prove that non standard numbers do not exist.\n* Löb's theorem says that if $PA$ cannot prove $A$, then it can neither prove that $A$ follows from $\\square_{PA}(\\ulcorner A\\urcorner)$.\n* $PA$ cannot prove its own consistency, in the sense that it cannot prove that the provability predicate is never satisfied by a contradiction.\n\nIf you want to get deeper into the rabbit hole, read about [model theory](https://arbital.com/p/) and [semantics](https://arbital.com/p/-semantics) or [https://arbital.com/p/-534](https://arbital.com/p/-534).", "date_published": "2016-10-21T15:27:08Z", "authors": ["Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina"], "summaries": [], "tags": [], "alias": "661"} {"id": "bd0c1b877156c339ea161a95ce810db3", "title": "Wants to get straight to Bayes", "url": "https://arbital.com/p/692", "source": "arbital", "source_type": "text", "text": "A simple requisite page to mark whether the user has selected wanting to get straight into Bayes on the [Bayes' Rule Guide start page](https://arbital.com/p/1zq).", "date_published": "2016-09-28T21:23:00Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Just a requisite"], "alias": "692"} {"id": "2937d7dc5c0e6d94a13f6752090cdac2", "title": "High-speed intro to Bayes's rule", "url": "https://arbital.com/p/bayes_rule_fast_intro", "source": "arbital", "source_type": "text", "text": "(This is a high-speed introduction to [Bayes' rule](https://arbital.com/p/1lz) for people who want to get straight to it and are good at math. If you'd like a gentler or more thorough introduction, try starting at the [Bayes' Rule Guide](https://arbital.com/p/1zq) page instead.)\n\n# Percentages, frequencies, and waterfalls\n\nSuppose you're screening a set of patients for a disease, which we'll call Diseasitis.%note:Lit. \"inflammation of the disease\".% Your initial test is a tongue depressor containing a chemical strip, which usually turns black if the patient has Diseasitis.\n\n- Based on prior epidemiology, you expect that around 20% of patients in the screening population have Diseasitis.\n- Among patients with Diseasitis, 90% turn the tongue depressor black.\n- 30% of the patients without Diseasitis will also turn the tongue depressor black.\n\nWhat fraction of patients with black tongue depressors have Diseasitis?\n\n%%hidden(Answer): 3/7 or 43%, quickly obtainable as follows: In the screened population, there's 1 sick patient for 4 healthy patients. Sick patients are 3 times more likely to turn the tongue depressor black than healthy patients. $(1 : 4) \\cdot (3 : 1) = (3 : 4)$ or 3 sick patients to 4 healthy patients among those that turn the tongue depressor black, corresponding to a probability of $3/7 = 43\\%$ that the patient is sick.%%\n\n(Take your own stab at answering this question, then please click \"Answer\" above to read the answer before continuing.)\n\nBayes' rule is a theorem which describes the general form of the operation we carried out to find the answer above.
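(If you want to see that arithmetic spelled out, here is a small check in Python. It only uses the numbers given in the problem statement above; the variable names are made up.)

```python
# Numbers from the Diseasitis problem statement.
p_sick = 0.20
p_black_given_sick = 0.90
p_black_given_healthy = 0.30

sick_and_black = p_sick * p_black_given_sick              # 0.18
healthy_and_black = (1 - p_sick) * p_black_given_healthy  # 0.24

posterior = sick_and_black / (sick_and_black + healthy_and_black)
print(posterior)   # 0.4285..., i.e. 3/7, matching the answer above
```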
In the form we used above, we:\n\n- Started from the *prior odds* of (1 : 4) for sick versus healthy patients;\n- Multiplied by the *likelihood ratio* of (3 : 1) for sick versus healthy patients blackening the tongue depressor;\n- Arrived at *posterior odds* of (3 : 4) for a patient with a positive test result being sick versus healthy.\n\nBayes' rule in this form thus states that **the prior odds times the likelihood ratio equals the posterior odds.**\n\nWe could also potentially see the positive test result as **revising** a *prior belief* or *prior probability* of 20% that the patient was sick, to a *posterior belief* or *posterior probability* of 43%.\n\nTo make it clearer that we did the correct calculation above, and further pump intuitions for Bayes' rule, we'll walk through some additional visualizations.\n\n## Frequency representation\n\nThe *frequency representation* of Bayes' rule would describe the problem as follows: \"Among 100 patients, there will be 20 sick patients and 80 healthy patients.\"\n\n![prior frequency](https://i.imgur.com/gvcoqQN.png?0)\n\n\"18 out of 20 sick patients will turn the tongue depressor black. 24 out of 80 healthy patients will blacken the tongue depressor.\"\n\n![posterior frequency](https://i.imgur.com/0XDbrYi.png?0)\n\n\"Therefore, there are (18+24)=42 patients who turn the tongue depressor black, among whom 18 are actually sick. (18/42)=(3/7)=43%.\"\n\n(Some experiments show %%note: E.g. \"[Probabilistic reasoning in clinical medicine](https://faculty.washington.edu/jmiyamot/p548/eddydm%20prob%20reas%20i%20clin%20medicine.pdf)\" by David M. Eddy (1982).%% that this way of explaining the problem is the easiest for e.g. medical students to understand, so you may want to remember this format for future use. Assuming you can't just send them to Arbital!)\n\n## Waterfall representation\n\nThe *waterfall representation* may make clearer why we're also allowed to transform the problem into prior odds and a likelihood ratio, and multiply (1 : 4) by (3 : 1) to get posterior odds of (3 : 4) and a probability of 3/7.\n\nThe following problem is isomorphic to the Diseasitis one:\n\n\"A waterfall has two streams of water at the top, a red stream and a blue stream. These streams flow down the waterfall, with some of each stream being diverted off to the side, and the remainder pools at the bottom of the waterfall.\"\n\n![unlabeled waterfall](https://i.imgur.com/D8EhY65.png?0)\n\n\"At the top of the waterfall, there's around 20 gallons/second flowing from the red stream, and 80 gallons/second flowing from the blue stream. 90% of the red water makes it to the bottom of the waterfall, and 30% of the blue water makes it to the bottom of the waterfall. Of the purplish water that mixes at the bottom, what fraction is from the red stream versus the blue stream?\"\n\n![labeled waterfall](https://i.imgur.com/QIrtuVU.png?0)\n\nWe can see from staring at the diagram that the *prior odds* and *likelihood ratio* are the only numbers we need to arrive at the answer:\n\n- The problem would have the same answer if there were 40 gallons/sec of red water and 160 gallons/sec of blue water (instead of 20 gallons/sec and 80 gallons/sec). This would just multiply the total amount of water by a factor of 2, without changing the ratio of red to blue at the bottom.\n- The problem would also have the same answer if 45% of the red stream and 15% of the blue stream made it to the bottom (instead of 90% and 30%). 
This would just cut down the total amount of water by a factor of 2, without changing the *relative* proportions of red and blue water.\n\n![wide vs narrow waterfall](https://i.imgur.com/6FOndjc.png?0)\n\nSo only the *ratio* of red to blue water at the top (prior odds of the proposition), and only the *ratio* between the percentages of red and blue water that make it to the bottom (likelihood ratio of the evidence), together determine the *posterior* ratio at the bottom: 3 parts red to 4 parts blue.\n\n## Test problem\n\nHere's another Bayesian problem to attempt. If you successfully solved the earlier problem on your first try, you might try doing this one in your head.\n\n10% of widgets are bad and 90% are good. 4% of good widgets emit sparks, and 12% of bad widgets emit sparks. What percentage of sparking widgets are bad?\n\n%%hidden(Answer):\n- There's $1 : 9$ bad vs. good widgets. (9 times as many good widgets as bad widgets; widgets are 1/9 as likely to be bad as good.)\n- Bad vs. good widgets have a $12 : 4$ relative likelihood to spark, which simplifies to $3 : 1.$ (Bad widgets are 3 times as likely to emit sparks as good widgets.)\n- $(1 : 9) \\cdot (3 : 1) = (3 : 9) \\cong (1 : 3).$ (1 bad sparking widget for every 3 good sparking widgets.)\n- Odds of $1 : 3$ convert to a probability of $\\frac{1}{1+3} = \\frac{1}{4} = 25\\%.$ (25% of sparking widgets are bad.) %%\n\n(If you're having trouble using odds ratios to represent uncertainty, see [this intro](https://arbital.com/p/561) or [this page](https://arbital.com/p/1rb).)\n\n# General equation and proof\n\nTo say exactly what we're doing and prove its validity, we need to introduce some notation from [probability theory](https://arbital.com/p/1rf).\n\nIf $X$ is a proposition, $\\mathbb P(X)$ will denote $X$'s probability, our quantitative degree of belief in $X.$\n\n$\\neg X$ will denote the negation of $X$ or the proposition \"$X$ is false\".\n\nIf $X$ and $Y$ are propositions, then $X \\wedge Y$ denotes the proposition that both X and Y are true. Thus $\\mathbb P(X \\wedge Y)$ denotes \"The probability that $X$ and $Y$ are both true.\"\n\nWe now define [conditional probability](https://arbital.com/p/1rj):\n\n$$\\mathbb P(X|Y) := \\dfrac{\\mathbb P(X \\wedge Y)}{\\mathbb P(Y)} \\tag*{(definition of conditional probability)}$$\n\nWe pronounce $\\mathbb P(X|Y)$ as \"the conditional probability of X, given Y\". Intuitively, this is supposed to mean \"The probability that $X$ is true, *assuming* that proposition $Y$ is true\".\n\nDefining conditional probability in this way means that to get \"the probability that a patient is sick, given that they turned the tongue depressor black\" we should put all the sick *plus* healthy patients with positive test results into a bag, and ask about the probability of drawing a patient who is sick *and* got a positive test result from that bag. 
In other words, we perform the calculation $\\frac{18}{18+24} = \\frac{3}{7}.$\n\n![diseasitis frequency](https://i.imgur.com/0XDbrYi.png?0)\n\nRearranging [the definition of conditional probability](https://arbital.com/p/1rj), $\\mathbb P(X \\wedge Y) = \\mathbb P(Y) \\cdot \\mathbb P(X|Y).$ So to find \"the fraction of all patients that are sick *and* get a positive result\", we multiply \"the fraction of patients that are sick\" times \"the probability that a sick patient blackens the tongue depressor\".\n\nWe're now ready to prove Bayes's rule in the form, \"the prior odds times the likelihood ratio equals the posterior odds\".\n\nThe \"prior odds\" is the ratio of sick to healthy patients:\n\n$$\\frac{\\mathbb P(sick)}{\\mathbb P(healthy)} \\tag*{(prior odds)}$$\n\nThe \"likelihood ratio\" is how much more *relatively* likely a sick patient is to get a positive test result (turn the tongue depressor black), compared to a healthy patient:\n\n$$\\frac{\\mathbb P(positive | sick)}{\\mathbb P(positive | healthy)} \\tag*{(likelihood ratio)}$$\n\nThe \"posterior odds\" is the odds that a patient is sick versus healthy, *given* that they got a positive test result:\n\n$$\\frac{\\mathbb P(sick | positive)}{\\mathbb P(healthy | positive)} \\tag*{(posterior odds)}$$\n\nBayes's theorem asserts that *prior odds times likelihood ratio equals posterior odds:*\n\n$$\\frac{\\mathbb P(sick)}{\\mathbb P(healthy)} \\cdot \\frac{\\mathbb P(positive | sick)}{\\mathbb P(positive | healthy)} = \\frac{\\mathbb P(sick | positive)}{\\mathbb P(healthy | positive)}$$\n\nWe will show this by proving the general form of Bayes's Rule. For any two hypotheses $H_j$ and $H_k$ and any piece of new evidence $e_0$:\n\n$$\n\\frac{\\mathbb P(H_j)}{\\mathbb P(H_k)}\n\\cdot\n\\frac{\\mathbb P(e_0 | H_j)}{\\mathbb P(e_0 | H_k)}\n=\n\\frac{\\mathbb P(e_0 \\wedge H_j)}{\\mathbb P(e_0 \\wedge H_k)}\n= \n\\frac{\\mathbb P(e_0 \\wedge H_j)/\\mathbb P(e_0)}{\\mathbb P(e_0 \\wedge H_k)/\\mathbb P(e_0)}\n= \n\\frac{\\mathbb P(H_j | e_0)}{\\mathbb P(H_k | e_0)}\n$$\n\nIn the Diseasitis example, this corresponds to performing the operations:\n\n$$\n\\frac{0.20}{0.80}\n\\cdot\n\\frac{0.90}{0.30}\n=\n\\frac{0.18}{0.24}\n= \n\\frac{0.18/0.42}{0.24/0.42}\n= \n\\frac{0.43}{0.57}\n$$\n\nUsing red for sick, blue for healthy, grey for a mix of sick and healthy patients, and + signs for positive test results, the proof above can be visualized as follows:\n\n![bayes venn](https://i.imgur.com/YBc2nYo.png?0)\n\n%todo: less red in first circle (top left). in general, don't have prior proportions equal posterior proportions graphically!%\n\n## Bayes' theorem\n\nAn alternative form, sometimes called \"Bayes' theorem\" to distinguish it from \"Bayes' rule\" (although not everyone follows this convention), uses absolute probabilities instead of ratios. 
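(Before writing that form down, here is a small numeric check of the chain of equalities above, using the Diseasitis numbers. Nothing in it goes beyond quantities already defined; the variable names are made up.)

```python
# Diseasitis quantities again.
p_sick, p_healthy = 0.20, 0.80
p_pos_given_sick, p_pos_given_healthy = 0.90, 0.30

p_pos_and_sick = p_sick * p_pos_given_sick            # P(positive and sick)    = 0.18
p_pos_and_healthy = p_healthy * p_pos_given_healthy   # P(positive and healthy) = 0.24
p_pos = p_pos_and_sick + p_pos_and_healthy            # P(positive)             = 0.42

# Conditional probability by its definition P(X|Y) = P(X and Y) / P(Y).
p_sick_given_pos = p_pos_and_sick / p_pos              # about 0.43
p_healthy_given_pos = p_pos_and_healthy / p_pos        # about 0.57

# Prior odds times likelihood ratio equals posterior odds.
prior_times_lr = (p_sick / p_healthy) * (p_pos_given_sick / p_pos_given_healthy)
posterior_odds = p_sick_given_pos / p_healthy_given_pos
assert abs(prior_times_lr - posterior_odds) < 1e-12
print(prior_times_lr, posterior_odds)   # both 0.75, i.e. odds of 3 : 4
```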
The [law of marginal probability](https://arbital.com/p/marginal_probability) states that for any set of [mutually exclusive and exhaustive](https://arbital.com/p/1rd) possibilities $\\{X_1, X_2, \\ldots\\}$ and any proposition $Y$:\n\n$$\\mathbb P(Y) = \\sum_i \\mathbb P(Y \\wedge X_i) \\tag*{(law of marginal probability)}$$\n\nThen, taking the hypotheses $H_i$ to be mutually exclusive and exhaustive, we can derive an expression for the absolute (non-relative) probability of a proposition $H_k$ after observing evidence $e_0$ as follows:\n\n$$\n\\mathbb P(H_k | e_0)\n= \\frac{\\mathbb P(H_k \\wedge e_0)}{\\mathbb P(e_0)}\n= \\frac{\\mathbb P(e_0 \\wedge H_k)}{\\sum_i \\mathbb P(e_0 \\wedge H_i)}\n= \\frac{\\mathbb P(e_0 | H_k) \\cdot \\mathbb P(H_k)}{\\sum_i \\mathbb P(e_0 | H_i) \\cdot \\mathbb P(H_i)}\n$$\n\nThe equality of the first and last terms above is what you will usually see described as Bayes' theorem.\n\nTo see why this decomposition might be useful, note that $\\mathbb P(sick | positive)$ is an *inferential* step, a conclusion that we make after observing a new piece of evidence. $\\mathbb P(positive | sick)$ is a piece of *causal* information we are likely to have on hand, for example by testing groups of sick patients to see how many of them turn the tongue depressor black. $\\mathbb P(sick)$ describes our state of belief before making any new observations. So Bayes' theorem can be seen as taking what we already believe about the world (including our prior belief about how different imaginable states of affairs would generate different observations), plus an actual observation, and outputting a new state of belief about the world.\n\n## Vector and functional generalizations\n\nSince the proof of Bayes' rule holds for *any pair* of hypotheses, it also holds for relative belief in any number of hypotheses. Furthermore, we can repeatedly multiply by likelihood ratios to chain together any number of pieces of evidence.\n\nSuppose there's a bathtub full of coins:\n\n- Half the coins are \"fair\" and have a 50% probability of coming up Heads each time they are thrown.\n- A third of the coins are biased to produce Heads 25% of the time (Tails 75%).\n- The remaining sixth of the coins are biased to produce Heads 75% of the time.\n\nYou randomly draw a coin, flip it three times, and get the result **HTH**. What's the chance this is a fair coin?\n\nWe can validly calculate the answer as follows:\n\n$$\n\\begin{array}{rll}\n & (3 : 2 : 1) & \\cong (\\frac{1}{2} : \\frac{1}{3} : \\frac{1}{6}) \\\\\n\\times & (2 : 1 : 3) & \\cong ( \\frac{1}{2} : \\frac{1}{4} : \\frac{3}{4} ) \\\\\n\\times & (2 : 3 : 1) & \\cong ( \\frac{1}{2} : \\frac{3}{4} : \\frac{1}{4} ) \\\\\n\\times & (2 : 1 : 3) & \\\\\n= & (24 : 6 : 9) & \\cong (8 : 2 : 3) \\cong (\\frac{8}{13} : \\frac{2}{13} : \\frac{3}{13})\n\\end{array}\n$$\n\nSo the posterior probability the coin is fair is 8/13 or ~62%.\n\nThis is one reason it's good to know the [odds form](https://arbital.com/p/1x5) of Bayes' rule, not just the [probability form](https://arbital.com/p/554) in which Bayes' theorem is often given.%%note: Imagine trying to do the above calculation by repeatedly applying the form of the theorem that says: $$\\mathbb P(H_k | e_0) = \\frac{\\mathbb P(e_0 | H_k) \\cdot \\mathbb P(H_k)}{\\sum_i \\mathbb P(e_0 | H_i) \\cdot \\mathbb P(H_i)}$$ %%\n\nWe can generalize further by writing Bayes' rule in a functional form.
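(First, though, here is the bathtub-of-coins calculation done mechanically, as a sketch of how the odds form scales to several hypotheses and several flips. The code is illustrative only; the variable names are made up.)

```python
# Three hypotheses about the drawn coin: fair, heads 25% of the time, heads 75% of the time.
prior_odds = [1/2, 1/3, 1/6]
p_heads = [0.50, 0.25, 0.75]

def update(odds, flip):
    """Multiply the relative odds by each hypothesis's likelihood for one flip."""
    likelihoods = [p if flip == 'H' else 1 - p for p in p_heads]
    return [o * l for o, l in zip(odds, likelihoods)]

odds = prior_odds
for flip in 'HTH':
    odds = update(odds, flip)

total = sum(odds)
posterior = [o / total for o in odds]
print(posterior)   # about [0.615, 0.154, 0.231], i.e. (8 : 2 : 3) normalized; 8/13 = ~62% fair
```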
If $\\mathbb O(H_i)$ is a relative belief vector or relative belief function on the variable $H,$ and $\\mathcal L(e_0 | H_i)$ is the likelihood function giving the relative chance of observing evidence $e_0$ given each possible state of affairs $H_i,$ then relative posterior belief $\\mathbb O(H_i | e_0)$ is given by:\n\n$$\\mathbb O(H_i | e_0) = \\mathcal L(e_0 | H_i) \\cdot \\mathbb O(H_i)$$\n\nIf we [normalize](https://arbital.com/p/1rk) the relative odds $\\mathbb O$ into absolute probabilities $\\mathbb P$ - that is, divide through $\\mathbb O$ by its sum or integral so that the new function sums or integrates to $1$ - then we obtain Bayes' rule for probability functions:\n\n$$\\mathbb P(H_i | e_0) \\propto \\mathcal L(e_0 | H_i) \\cdot \\mathbb P(H_i) \\tag*{(functional form of Bayes' rule)}$$\n\n# Applications of Bayesian reasoning\n\nThis general Bayesian framework - prior belief, evidence, posterior belief - is a lens through which we can view a *lot* of formal and informal reasoning plus a large amount of entirely nonverbal cognitive-ish phenomena.%%note:This broad statement is widely agreed. Exactly *which* phenomena are good to view through a Bayesian lens is sometimes disputed.%%\n\nExamples of people who might want to study Bayesian reasoning include:\n\n- Professionals who use statistics, such as scientists or medical doctors.\n- Computer programmers working in the field of machine learning.\n- Human beings trying to think.\n\nThe third application is probably of the widest general interest.\n\n## Example human applications of Bayesian reasoning\n\nPhilip Tetlock found when studying \"superforecasters\", people who were especially good at predicting future events:\n\n\"The superforecasters are a numerate bunch: many know about Bayes' theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes' theorem is Bayes' core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence.\" — Philip Tetlock and Dan Gardner, [_Superforecasting_](https://arbital.com/p/https://en.wikipedia.org/wiki/Superforecasting)\n\nThis is some evidence that *knowing about* Bayes' rule and understanding its *qualitative* implications is a factor in delivering better-than-average intuitive human reasoning. This pattern is illustrated in the next couple of examples.\n\n### The OKCupid date.\n\nOne realistic example of Bayesian reasoning was deployed by one of the early test volunteers for a much earlier version of a guide to Bayes' rule. She had scheduled a date with a 96% OKCupid match, who had then cancelled that date without other explanation. After spending some mental time bouncing back and forth between \"that doesn't seem like a good sign\" versus \"maybe there was a good reason he canceled\", she decided to try looking at the problem using that Bayes thing she'd just learned about. She estimated:\n\n- A 96% OKCupid match like this one, had prior odds of 2 : 5 for being a desirable versus undesirable date. 
(Based on her prior experience with 96% OKCupid matches, and the details of his profile.)\n- Men she doesn't want to go out with are 3 times as likely as men she might want to go out with to cancel a first date without other explanation.\n\nThis implied posterior odds of 2 : 15 for this being a desirable versus an undesirable date, which was unfavorable enough not to pursue him further.%%note: She sent him what might very well have been the first explicitly Bayesian rejection notice in dating history, reasoning that if he wrote back with a Bayesian counterargument, this would promote him to being interesting again. He didn't write back.%%\n\nThe point of looking at the problem this way is not that she knew *exact* probabilities and could calculate that the man had an exactly 88% chance of being undesirable. Rather, by breaking up the problem in that way, she was able to summarize what she thought she knew in compact form, see what those beliefs already implied, and stop bouncing back and forth between imagined reasons why a good date might cancel versus reasons to protect herself from potential bad dates. An answer *roughly* in the range of 15/17 made the decision clear.\n\n### Internment of Japanese-Americans during World War II\n\nFrom Robyn Dawes's [Rational Choice in an Uncertain World](https://amazon.com/Rational-Choice-Uncertain-World-Robyn/dp/0155752154):\n\n> Post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, \"I take the view that this lack [of] subversive activity is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed... I believe we are just being lulled into a false sense of security.\"\n\nYou might want to take your own shot at guessing what Dawes had to say about a Bayesian view of this situation, before reading further.\n\n%%hidden(Answer):Suppose we put ourselves into the shoes of this congressional hearing, and imagine ourselves trying to set up this problem. We would need to estimate:\n\n- The [prior](https://arbital.com/p/1rm) [odds](https://arbital.com/p/1rb) that there would be a conspiracy of Japanese-American saboteurs.\n- The [likelihood](https://arbital.com/p/56t) of the observation \"no visible sabotage or any other type of espionage\", *given* that a Fifth Column actually existed.\n- The likelihood of the observation \"no visible sabotage from Japanese-Americans\", in the possible world where there is *no* such conspiracy.\n\nAs soon as we set up this problem, we realize that, whatever the probability of \"no sabotage\" being observed if there is a conspiracy, the likelihood of observing \"no sabotage\" if there *isn't* a conspiracy must be even higher.
This means that the likelihood ratio:\n\n$$\\frac{\\mathbb P(\\neg \\text{sabotage} | \\text {conspiracy})}{\\mathbb P(\\neg \\text {sabotage} | \\neg \\text {conspiracy})}$$\n\n...must be *less than 1,* and accordingly:\n\n$$\n\\frac{\\mathbb P(\\text {conspiracy} | \\neg \\text{sabotage})}{\\mathbb P(\\neg \\text {conspiracy} | \\neg \\text{sabotage})}\n=\n\\frac{\\mathbb P(\\text {conspiracy})}{\\mathbb P(\\neg \\text {conspiracy})}\n\\cdot\n\\frac{\\mathbb P(\\neg \\text{sabotage} | \\text {conspiracy})}{\\mathbb P(\\neg \\text {sabotage} | \\neg \\text {conspiracy})}\n<\n\\frac{\\mathbb P(\\text {conspiracy})}{\\mathbb P(\\neg \\text {conspiracy})}\n$$\n\nObserving the total absence of any sabotage can only decrease our estimate that there's a Japanese-American Fifth Column, not increase it. (It definitely shouldn't be \"the most ominous\" sign that convinces us \"more than any other factor\" that the Fifth Column exists.)\n\nAgain, what matters is not the *exact* likelihood of observing no sabotage given that a Fifth Column actually exists. As soon as we set up the Bayesian problem, we can see there's something *qualitatively* wrong with Earl Warren's reasoning.%%\n\n# Further reading\n\nThis has been a very brief and high-speed presentation of Bayes and Bayesianism. It should go without saying that a vast literature, nay, a universe of literature, exists on Bayesian statistical methods and Bayesian epistemology and Bayesian algorithms in machine learning. Staying inside Arbital, you might be interested in moving on to read:\n\n## More on the technical side of Bayes' rule\n\n- A sadly short list of [example Bayesian word problems](https://arbital.com/p/22w). Want to add more? (Hint hint.)\n- [__Bayes' rule: Proportional form.__](https://arbital.com/p/1zm) The fastest way to present a step in Bayesian reasoning in a way that will sound sort of understandable to somebody who's never heard of Bayes.\n- [__Bayes' rule: Log-odds form.__](https://arbital.com/p/1zh) A simple transformation of Bayes' rule reveals tools for measuring degree of belief, and strength of evidence.\n- [__The \"Naive Bayes\" algorithm.__](https://arbital.com/p/1zg) (Scroll down to the middle.) The original simple Bayesian spam filter.\n- [__Non-naive multiple updates.__](https://arbital.com/p/1zg) (Scroll down past Naive Bayes.) How to avoid double-counting the evidence, or worse, when considering multiple items of *correlated* evidence.\n- [__Laplace's Rule of Succession.__](https://arbital.com/p/21c) The classic example of an [inductive prior](https://arbital.com/p/21b).\n\n## More on intuitive implications of Bayes' rule\n\n- [__A Bayesian view of scientific virtues.__](https://arbital.com/p/220) Why is it that science relies on bold, precise, and falsifiable predictions? Because of Bayes' rule, of course.\n- [__Update by inches.__](https://arbital.com/p/update_by_inches) It's virtuous to change your mind in response to overwhelming evidence. It's even more virtuous to shift your beliefs a little bit at a time, in response to *all* evidence (no matter how small).\n- [__Belief revision as probability elimination.__](https://arbital.com/p/1y6) Update your beliefs by throwing away large chunks of probability mass.\n- [__Shift towards the hypothesis of least surprise.__](https://arbital.com/p/552) When you see new evidence, ask: which hypothesis is *least surprised?*\n- [__Extraordinary claims require extraordinary evidence.__](https://arbital.com/p/21v) The people who adamantly claim they were abducted by aliens do provide *some* evidence for aliens.
They just don't provide quantitatively *enough* evidence.\n- [__Ideal reasoning via Bayes' rule.__](https://arbital.com/p/) Bayes' rule is to reasoning as the [Carnot cycle](https://arbital.com/p/https://en.wikipedia.org/wiki/Carnot_cycle) is to engines: Nobody can be a perfect Bayesian, but Bayesian reasoning is still the theoretical ideal.\n- [__Likelihoods, p-values, and the replication crisis.__](https://arbital.com/p/4xx) Arguably, a large part of the replication crisis can ultimately be traced back to the way journals treat p-values, and a large number of those problems can be summed up as \"P-values are not Bayesian.\"", "date_published": "2017-12-24T23:39:11Z", "authors": ["Joe Huang", "Brie Hoffman", "Eric Rogstad", "Gijs van Dam", "Rihn H", "Mike Totman", "Connor Flexman", "Eliezer Yudkowsky"], "summaries": [], "tags": ["B-Class", "High-speed explanation"], "alias": "693"} {"id": "9a78e97fc4ac3c046b7ee816e9f4b549", "title": "Axiom of Choice", "url": "https://arbital.com/p/axiom_of_choice", "source": "arbital", "source_type": "text", "text": "\"The Axiom of Choice is necessary to select a set from an infinite number of pairs of socks, but not an infinite number of pairs of shoes.\" — *Bertrand Russell, Introduction to mathematical philosophy*\n\n\"Tarski told me the following story. He tried to publish his theorem \\[equivalence between the Axiom of Choice and the statement 'every infinite set A has the same cardinality as AxA'\\](https://arbital.com/p/the) in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. And Tarski said that after this misadventure he never again tried to publish in the Comptes Rendus.\"\n- *Jan Mycielski, A System of Axioms of Set Theory for the Rationalists*\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n#Obligatory Introduction#\nThe Axiom of Choice, the most controversial axiom of the 20th Century. \n\nThe axiom states that a certain kind of function, called a `choice' function, always exists. It is called a choice function, because, given a collection of non-empty sets, the function 'chooses' a single element from each of the sets. It is a powerful and useful axiom, asserting the existence of useful mathematical structures (such as bases for [vector spaces](https://arbital.com/p/-3w0) of arbitrary [dimension](https://arbital.com/p/-dimension_mathematics), and [ultraproducts](https://arbital.com/p/-ultraproduct)). It is a generally accepted axiom, and is in wide use by mathematicians. In fact, according to Elliott Mendelson in Introduction to Mathematical Logic (1964) \"The status of the Axiom of Choice has become less controversial in recent years. To most mathematicians it seems quite plausible and it has so many important applications in practically all branches of mathematics that not to accept it would seem to be a wilful hobbling of the practicing mathematician. \"\n\nNeverless, being an [axiom](https://arbital.com/p/-axiom_mathematics), it cannot be proven and must instead be assumed. \nIn particular, it is an axiom of [set theory](https://arbital.com/p/-set_theory) and it is not provable from the other axioms (the Zermelo-Fraenkel axioms of Set Theory). 
In fact many mathematicians, in particular [constructive](https://arbital.com/p/-constructive_mathematics) mathematicians, reject the axiom, stating that it does not capture a 'real' or 'physical' property, but is instead just a mathematical oddity, an artefact of the mathematics used to approximate reality, rather than reality itself. In the words of the LessWrong community: the constructive mathematicians would claim it is a statement about [the map rather than the territory](https://arbital.com/p/-https://wiki.lesswrong.com/wiki/Map_and_Territory_). \n\nHistorically, the axiom has experienced much controversy. Before it was shown to be independent of the other axioms, it was believed either to follow from them (i.e., be 'True') or lead to a contradiction (i.e., be 'False'). Its independence from the other axioms was, in fact, a very surprising result at the time. \n\n#Getting the Heavy Maths out of the Way: Definitions#\nIntuitively, the [axiom](https://arbital.com/p/-axiom_mathematics) of choice states that, given a collection of *[non-empty](https://arbital.com/p/-5zc)* [sets](https://arbital.com/p/-3jz), there is a [function](https://arbital.com/p/-3jy) which selects a single element from each of the sets. \n\nMore formally, given a set $X$ whose [elements](https://arbital.com/p/-5xy) are only non-empty sets, there is a function \n$$\nf: X \\rightarrow \\bigcup_{Y \\in X} Y \n$$\nfrom $X$ to the [union](https://arbital.com/p/-5s8) of all the elements of $X$ such that, for each $Y \\in X$, the [image](https://arbital.com/p/-3lh) of $Y$ under $f$ is an element of $Y$, i.e., $f(Y) \\in Y$. \n\nIn [logical notation](https://arbital.com/p/-logical_notation),\n$$\n\\forall_X \n\\left( \n\\left[ \\forall_{Y \\in X}\\ Y \\not= \\emptyset \\right] \n\\Rightarrow \n\\left[ \\exists_{f : X \\rightarrow \\bigcup_{Y \\in X} Y}\\ \\forall_{Y \\in X}\\ f(Y) \\in Y \\right]\n\\right)\n$$\n\n#Axiom Unnecessary for Finite Collections of Sets#\nFor a [finite set](https://arbital.com/p/-5zy) $X$ containing only [finite](https://arbital.com/p/-5zy) non-empty sets, the axiom is actually provable (from the [Zermelo-Fraenkel axioms](https://arbital.com/p/-zermelo_fraenkel_axioms) of set theory ZF), and hence does not need to be given as an [axiom](https://arbital.com/p/-axiom_mathematics). In fact, even for a finite collection of possibly infinite non-empty sets, the axiom of choice is provable (from ZF), using [induction](https://arbital.com/p/-axiom_of_induction). In this case, the function can be explicitly described. For example, if the set $X$ contains only three, potentially infinite, non-empty sets $Y_1, Y_2, Y_3$, then the fact that they are non-empty means they each contain at least one element, say $y_1 \\in Y_1, y_2 \\in Y_2, y_3 \\in Y_3$. Then define $f$ by $f(Y_1) = y_1$, $f(Y_2) = y_2$ and $f(Y_3) = y_3$. This construction is permitted by the axioms of ZF.\n\nThe problem comes in if $X$ contains an infinite number of non-empty sets. Let's assume $X$ contains a [countable](https://arbital.com/p/-2w0) number of sets $Y_1, Y_2, Y_3, \\ldots$. Then, again intuitively speaking, we can explicitly describe how $f$ might act on finitely many of the $Y$s (say the first $n$ for any natural number $n$), but we cannot describe it on all of them at once. \n\nTo understand this properly, one must understand what it means to be able to 'describe' or 'construct' a function $f$. This is described in more detail in the sections which follow.
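(For a finite collection of sets, the construction really is this concrete. Here is a minimal sketch in Python with made-up example sets; it says nothing about the infinite case, which is exactly where the axiom is needed.)

```python
# A finite collection of non-empty sets (frozensets, so they can be dictionary keys).
X = [frozenset({1, 2, 3}), frozenset({'a', 'b'}), frozenset({(0, 0)})]

# An explicit choice function: pick one element from each set.  For finitely many
# sets this is unproblematic; the axiom of choice only becomes necessary when no
# such rule can be written down for all of the sets at once.
choice = {Y: next(iter(Y)) for Y in X}

for Y, y in choice.items():
    assert y in Y          # f(Y) is an element of Y, as the definition requires
print(choice)
```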
But first, a bit of background on why the axiom of choice is interesting to mathematicians.\n\n#Controversy: Mathematicians Divided! Counter-Intuitive Results, and The History of the Axiom of Choice#\nMathematicians have been using an intuitive concept of a set for probably as long as mathematics has been practiced. \nAt first, mathematicians assumed that the axiom of choice was simply true (as indeed it is for finite collections of sets). \n\n[Georg Cantor](https://arbital.com/p/-https://en.wikipedia.org/wiki/Georg_Cantor) introduced the concept of [transfinite numbers](https://arbital.com/p/-transfinite_number) \nand different [cardinalities of infinity](https://arbital.com/p/-4w5) in a 1874 \n[paper](https://arbital.com/p/https://en.wikipedia.org/wiki/Georg_Cantor%27s_first_set_theory_article) (which contains his infamous\n[Diagonalization Argument](https://arbital.com/p/-https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument)) \n and along with this sparked the introduction of [set theory](https://arbital.com/p/-set_theory).\n In 1883, Cantor introduced a principle called the 'Well-Ordering Princple'\n(discussed further in a section below) which he called a 'law of thought' (i.e., intuitively true). \nHe attempted to prove this principle from his other principles, but found that he was unable to do so.\n\n[Ernst Zermelo](https://arbital.com/p/-https://en.wikipedia.org/wiki/Ernst_Zermelo) attempted to \ndevelop an [axiomatic](https://arbital.com/p/-axiom_system) treatment of set theory. He \n managed to prove the Well-Ordering Principle in 1904 by introducing a new principle: The Principle of Choice.\nThis sparked much discussion amongst mathematicians. In 1908 published a paper containing responses to this debate,\nas well as a new formulation of the Axiom of Choice. In this year, he also published his first version of \nthe set theoretic axioms, known as the [Zermelo Axioms of Set Theory](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo_set_theory).\nMathematicians, [Abraham Fraenkel](https://arbital.com/p/-https://en.wikipedia.org/wiki/Abraham_Fraenkel) and \n[Thoralf Skolem](https://arbital.com/p/-https://en.wikipedia.org/wiki/Thoralf_Skolem) improved this system (independently of each other)\ninto its modern version, the [Zermelo Fraenkel Axioms of Set Theory](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory\n).\n\nIn 1914, [Felix Hausdorff](https://arbital.com/p/https://en.wikipedia.org/wiki/Felix_Hausdorff) proved \n[Hausdorff's paradox](https://arbital.com/p/https://en.wikipedia.org/wiki/Hausdorff_paradox). 
The ideas\nbehind this proof were used in 1924 by [Banach](https://arbital.com/p/-https://en.wikipedia.org/wiki/Stefan_Banach\nStefan) and [Alfred Tarski](https://arbital.com/p/-https://en.wikipedia.org/wiki/Alfred_Tarski)\nto prove the more famous Banach-Tarski paradox (discussed in more detail below).\nThis latter theorem is often quoted as evidence of the falsehood of the axiom \nof choice.\n\nBetween 1935 and 1938, [Kurt Gödel](https://arbital.com/p/-https://en.wikipedia.org/wiki/Kurt_G%C3%B6del) proved that\nthe Axiom of Choice is consistent with the rest of the ZF axioms.\n\nFinally, in 1963, [Paul Cohen](https://arbital.com/p/-https://en.wikipedia.org/wiki/Paul_Cohen) developed a revolutionary\nmathematical technique called [forcing](https://arbital.com/p/-forcing_mathematics), with which he proved that the \naxiom of choice could not be proven from the ZF axioms (in particular, that the negation of AC\nis consistent with ZF). For this, and his proof of the consistency of the negation of the \n[Generalized Continuum Hypothesis](https://arbital.com/p/-continuum_hypothesis) from ZF, he was awarded a fields medal\nin 1966.\n\nThis axiom came to be accepted in the general mathematical community, but was rejected by the\n[constructive](https://arbital.com/p/-constructive_mathematics) mathematicians as being fundamentally non-constructive. \nHowever, it should be noted that in many forms of constructive mathematics, \nthere are *provable* versions of the axiom of choice.\nThe difference is that in general in constructive mathematics, exhibiting a set of non-empty sets\n(technically, in constructive set-theory, these should be 'inhabited' sets) also amounts to \nexhibiting a proof that they are all non-empty, which amounts to exhibiting an element for all\nof them, which amounts to exhibiting a function choosing an element in each. So in constructive \nmathematics, to even state that you have a set of inhabited sets requires stating that you have a choice\nfunction to these sets proving they are all inhabited.\n\nSome explanation of the history of the axiom of choice (as well as some of its issues)\ncan be found in the \npaper \"100 years of Zermelo's axiom of choice: what was the problem with it?\"\nby the constructive mathematician \n[Per Martin-Löf](https://arbital.com/p/-https://en.wikipedia.org/wiki/Per_Martin-L%C3%B6f)\nat [this webpage](https://arbital.com/p/-http://comjnl.oxfordjournals.org/content/49/3/345.full). \n\n(Martin-Löf studied under [Andrey Kolmogorov](https://arbital.com/p/-https://en.wikipedia.org/wiki/Andrey_Kolmogorov) of\n [Kolmogorov complexity](https://arbital.com/p/-5v) and has made contributions to [information theory](https://arbital.com/p/-3qq), \n[mathematical_statistics](https://arbital.com/p/-statistics), and [mathematical_logic](https://arbital.com/p/-mathematical_logic), including developing a form of \nintuitionistic [https://arbital.com/p/-3sz](https://arbital.com/p/-3sz)).\n\nA nice timeline is also summarised on [Stanford Encyclopaedia of Logic](https://arbital.com/p/-http://plato.stanford.edu/entries/axiom-choice/index.html#note-6\nThe).\n\n#So, What is this Choice Thing Good for Anyways?#\nThe Axiom of Choice is in common use by mathematicians in practice. 
Amongst its many applications are the following:\n\n###Non-Empty Products###\nThis is the statment that taking the mathematical [product](https://arbital.com/p/-product_of_sets) of non-empty sets will always yield a non-empty set.\nConsider infinitely many sets $X_1, X_2, X_3, \\ldots$ indexed by the natural numbers.\n Then an element of the product $\\prod_{i \\in \\mathbb{N}} X_i$ is a family (essentially an infinite tuple) \nof the form $(x_1, x_2, x_3, \\ldots )$ where $x_1 \\in X_1$, $x_2 \\in X_2$ and so on. However, to select such a tuple in\nthe product amounts to selecting a single element from each of the sets. Hence, without the axiom of choice, it is\nnot provable that there are any elements in this product. \n\nNote, that without the axiom, it is not necessarily the case that the product is empty. It just isn't provable that\nthere are any elements. However, it is consistent that there exists some product that is empty. Also, bear in mind\nthat even without the axiom of choice, there are products of this form which are non-empty. It just can't be shown\nthat they are non-empty in general.\n\nHowever, it does seem somewhat counterintuitive that such a product be non-empty. In general, given two non-empty sets, \n$X_1$ and $X_2$, the product is at least as large as either of the sets. Adding another non-empty set $X_3$ usually makes\nthe product larger still. It may seem strange, then, that taking the limit of this procedure could result in an empty\nstructure. \n\n###Existence of a basis for any (esp. infinite dimensional) vector space###\nThis example is discussed in more detail in the next section.\n\nA [https://arbital.com/p/-3w0](https://arbital.com/p/-3w0) can be intuitively thought of as a collection of [vectors](https://arbital.com/p/-vector) which themselves can be thought of as arrows (or the information of a 'direction' and a 'magnitude'). \n\nA vector space has a number of 'directions' in which the vectors can point, referred to as its [https://arbital.com/p/-vector_space_dimension](https://arbital.com/p/-vector_space_dimension). For a finite-dimensional vector space, it is possible to find a [basis](https://arbital.com/p/-vector_space_basis) consisting of vectors. The property of a basis is that: \n\n - any vector in the space can be built up as a combination of the\n vectors in the basis \n - none of the vectors in the basis can be built up as a combination of the rest of the vectors in the basis.\n\nFor finite-dimensional vector spaces, such a basis is finite and can be found. However, for infinite-dimensional vector spaces, a way of finding a basis does not always exist. In fact, if the axiom of choice is false, then there is an infinite-dimensional vector space for which it is impossible to find a basis.\n\n###Brouwer's Fixed-Point Theorem: the existence of a fixed point for a function from a \"nice\" shape to itself###\nAny [continuous functions](https://arbital.com/p/-continuous_function) $f: C \\rightarrow C$ from a [https://arbital.com/p/-closed_disk](https://arbital.com/p/-closed_disk) $C$ onto itself has a [https://arbital.com/p/-fixed_point](https://arbital.com/p/-fixed_point) $x_0$. (In full generality, $C$ may be any [https://arbital.com/p/-5xr](https://arbital.com/p/-5xr) [compact](https://arbital.com/p/-compact_mathematics) [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz)).\n\nIn other words, there is at least one point $x_0 \\in C$ such that $f(x_0) = x_0$.\n\nThis works also for rectangles, and even in multiple dimensions. 
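(In one dimension the fixed point can even be located numerically: for a continuous $f$ mapping $[0,1]$ into itself, the function $g(x) = f(x) - x$ is non-negative at $0$ and non-positive at $1$, so bisection homes in on a point with $f(x) = x$. Below is a small sketch; the example function is chosen arbitrarily.)

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection on g(x) = f(x) - x, which is >= 0 at lo and <= 0 at hi
    whenever the continuous function f maps [lo, hi] into itself."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# An arbitrary continuous map of [0, 1] into itself.
f = lambda x: 0.5 * (math.cos(x) + x ** 2)
x0 = fixed_point(f)
print(x0, f(x0))   # the two values agree to within the tolerance
```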
Hence, if true for real world objects, this theorem has the following somewhat surprising consequences:\n\n - Take two identical sheets of graph with marked coordinates. Crumple up sheet B and place the crumpled ball on top of sheet A. Then, there is some coordinate $(x , y)$ (where $x$ and $y$ are [real numbers](https://arbital.com/p/-4bc), not necessarily [integers](https://arbital.com/p/-48l)), such that this coordinate on sheet B is directly above that coordinate on sheet A. \n - Take a cup of tea. Stir it and let it settle. There is some point of the tea which ends up in the same place it started.\n - Take a map of France. Place it on the ground in France. Take a pin. There is a point on the map through which you could stick the pin and the pin will also stick into the ground at the point represented on the map.\n\n###Existence of ultrafilters and hence ultraproducts###\nThe following example is somewhat technical. An attempt is made to describe it very roughly.\n\nGiven an indexing set $I$, and a collection of mathematical structures \n$(A_i)_{i \\in I}$\n(of a certain type) indexed by $I$. (For example, let $I$ be the [natural numbers](https://arbital.com/p/-45h) $\\mathbb{N}$ and let each of the mathematical structures $A_n$ be numbered). \n\nAn [https://arbital.com/p/-ultrafilter](https://arbital.com/p/-ultrafilter) $\\mathcal{U}$ on $I$ is a collection of subsets of $I$ of a special type. Intuitively it should be thought of as a collection of `big subsets' of $I$. It is possible to form the set of all [https://arbital.com/p/-cofinite](https://arbital.com/p/-cofinite) subsets\nof $I$ without the axiom of choice, and $\\mathcal{U}$ should contain at least these. However, for mathematical reasons, $\\mathcal{U}$ should also contain 'as many sets as possible'. However, in order to do so, there are some 'arbitrary choices' that have to be made. This is where the axiom of choice comes in.\n\nOne of the applications of ultrafilters is [ultraproducts](https://arbital.com/p/-ultraproduct). For each subset $X \\subseteq I$ such that $X \\in \\mathcal{U}$ there is a subcollection\n$(A_i)_{i \\in X}$ of $(A_i)_{i \\in I}$. Call such a subcollection a \"large collection\". The ultraproduct $A$ is a structure that captures the properties of large collections of the $A_i$s, in the sense that a statement of [https://arbital.com/p/-first_order_logic](https://arbital.com/p/-first_order_logic) is true of the ultraproduct $A$ if and only if it is true of some large collection of the $A_i$s.\n\nNow, any statement that is either true for cofinitely many $A_i$s or false for cofinitely many $A_i$s will be true or false respectively for $A$. But what about the other statements? This is where the arbitrary choices come in. Each statement needs to be either true or false of $A$, and we use the axiom of choice to form an ultrafilter that does that for us.\n\nOne basic example of an application of ultrafilters is forming the [https://arbital.com/p/-nonstandard_real_numbers](https://arbital.com/p/-nonstandard_real_numbers).\n\nFurther examples of applications of the axiom of choice may be found on\nthe Wikipedia page [here](https://arbital.com/p/-https://en.wikipedia.org/wiki/Axiom_of_choice#Equivalents) \nand [here](https://arbital.com/p/-https://en.wikipedia.org/wiki/Axiom_of_choice#Results_requiring_AC_.28or_weaker_forms.29_but_weaker_than_it). \n\n#Physicists Hate Them! Find out How Banach and Tarski Make Infinity Dollars with this One Simple Trick! 
#\n\nOne of the most counter-intuitive consequences of the Axiom of Choice is the Banach-Tarski Paradox. It is a theorem provable using the Zermelo-Fraenkel axioms along with the axiom of choice.\n\nThis theorem was proven in a 1924 paper by [Stefan Banach](https://arbital.com/p/-https://en.wikipedia.org/wiki/Stefan_Banach) and\n[Alfred Tarski](https://arbital.com/p/-https://en.wikipedia.org/wiki/Alfred_Tarski).\n\nIntuitively, what the theorem says is that it is possible to take a ball, cut it into five pieces, rotate and shift these pieces and end up with two balls. \n\nNow, there are some complications, including the fact the pieces themselves are infinitely complex, and the have to pass through each other when they are being shifted. There is no way a practical implementation of this theorem could be developed. Nevertheless, that the volume of a ball could be changed just by cutting, rotating and shifting seems highly counter-intuitive. \n\nA suprisingly good video explanation in laymans terms by Vsauce can be found [Here](https://arbital.com/p/-https://www.youtube.com/watch?v=s86-Z-CbaHA).\n\nThe video [\nInfinity shapeshifter vs. Banach-Tarski paradox](https://arbital.com/p/-https://www.youtube.com/watch?v=ZUHWNfqzPJ8) by Mathologor advertises itself as a\n prequel to the above video, which puts you 'in the mindset' of a mathematician, so-to-speak, and makes the\nresult a bit less surprising. \n\nThis theorem is the main counterexample used as evidence of the falsehood of the Axiom of Choice. If not taken as evidence of its falsehood, thsi is at least used as evidence of the counter-intuitiveness of AC. \n\nQ: What's an Anagram of Banach-Tarski? \nA: Banach-Tarski Banach Tarski\n\n#How Something Can Exist Without Actually Existing: The Zermelo Fraenkel Axioms and the Existence of a Choice Function#\nThe [Zermelo-Fraenkel axioms of Set Theory](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory)\n(ZF)\nintroduce a fundamental concept, called a [set](https://arbital.com/p/-3jz), and a [relation](https://arbital.com/p/-3nt) between \nsets, called\nthe [element of](https://arbital.com/p/-5xy) relation (usally written $\\in$) where $x \\in X$ should be interpreted as\n'$x$ is contained inside $X$ in the way that an item is contained inside a box'. There are then a number\nof [axioms](https://arbital.com/p/-axiom_mathematics) imposed on these fundamental objects and this relation.\n\nWhat one must remember is that theorems derived from these axioms are merely statements of the form that \n'if one has a system which satisfies these laws (even if, for example $\\in$ is interpreted as something entirely \ndifferent from being contained inside something), then it must also satisfy the statements of the theorems'.\nHowever, its general use is to imagine sets as being something like boxes which contain mathematical objects. \nMoreover, almost any statement of mathematics can be stated in terms of sets, where the mathematical objects\nin question become sets of a certain kind. In this way, since the mathematical objects in question are set up\nto satisfy the axioms, then anything which can be derived from these axioms will also hold for the mathematical\nobjects.\n\nIn particular, a [function](https://arbital.com/p/-function_mathematics) can be interpreted as a specific kind of set. 
In particular, \nit is a set of [ordered pairs](https://arbital.com/p/-ordered_pair) (more generally, [ordered n-tuples](https://arbital.com/p/-tuple), each of which can \nitself be interpreted as a specific kind of set) satisfying a specific property. \n\nThere are different ways of stating the same axioms (by separating or combining axioms, giving different\nformulations of the same axioms, or giving different axioms that are equivalent given the other axioms) hence\nwhat follows is only a specific formulation, namely, the \n[Zermelo-Fraenkel one from Wikipedia](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory).\n\nThe axioms begin by stating that two sets are the same if they have the same elements. \nThen the axiom regularity states sets are well-behaved in a certain way that's not so\nimportant to us right now. \n\nNow comes the part that is important for our purposes: \nThe axiom schema of specification (actually a schema specifying \ninfinitely many axioms, but we can pretend it is\njust one axiom for now). This is an axiom asserting the *existence* of certain sets. \nIn a sense, it allows one to 'create' a new set out of an existing one. Namely,\ngiven a set $X$ and a statement $\\phi$ of [first order logic](https://arbital.com/p/-first_order_logic)\n(a statement about sets of a specific, very formal form, \nand which uses only the $\\in$ symbol and the reserved symbols of logic), it is \npossible to create a set $\\{x \\in X : \\phi(x) \\}$ of all of the elements of\n$X$ for which the formula $\\phi$ is true. \n\nFor example, if we know (or assume) the set of all numbers $\\mathbb{N}$ exists, and \nwe have some way of formalising the statement '$x$ is an even number' as a first-order\nstatement $\\phi(x)$, then the set of all even numbers exists.\n\nAdditionally the axioms of pairing and union, axiom schema of replacement, \nand axiom of power set are all of the form \"Given that some sets $A, B, C, \\ldots$ \nexist, then some other set $X$ also exists. The axiom of infinity simply states\nthat an infinite set with certain properties exists. \n\nNotice that all of the above are axioms. It is not expected that any of them be proven.\nThey are simply assumptions that you make about whichever system you\nwant to reason. Any theorems that you can prove from these axioms will then be true\nabout your system. However, mathematicians generally agree that these axioms capture\nour intuitive notion about how \"sets\" of objects (or even concepts)\n should behave, and about which sets we are allowed to reason (which sets 'exist').\nMost of these (except maybe the axiom of infinity, and even that one possibly) \nseem to apply to our world and seem to work fine.\n\nNow, the last axiom, the axiom of choice (or the well-ordering principle) asserts\nthat a certain kind of function exists. It\n cannot\nbe proven from the above. In other words, given a system that satisfies all of\nthe above, it cannot be assumed that the system also satisfies this axiom\n(nor in fact that it does *not* satisfy this axiom). That's all\nthere is to it, really. \n\nYet, mathematicians do disagree about this axiom, and whether it applies to our \nworld as well. Some mathematicians take a [Platonic](https://arbital.com/p/-https://en.wikipedia.org/wiki/Platonism)\nview of mathematics, in which mathematical objects such as sets actually exist in some \nabstract realm, and for which the axiom of choice is either true or false, but we do\nnot know which. 
Others take a highly constructive view (in many cases motivated\nby realism and the ability of the mathematics to model the world) in which \neither the axiom of choice is false, or infinite sets do not exist in which \ncase the axiom of choice is provable and hence superfluous. Others take the view\nthat the axiom is not true or false, but merely useless, and that anything provable\nfrom it is meaningless. Many seem not to care: the axiom is convenient for the mathematics\nthey wish to do (whose application they are not much concerned about in any case) and\nhence they assume it without qualm. \n\n#How Something can be Neither True nor False:\nHow Could we Possibly Know that AC is Independent of ZF?#\nIt has been stated above multiple times that the Axiom of Choice is independent from the other\nZermelo-Fraenkel axioms. In other words, it can neither be proven nor disproven from those axioms.\nBut how can we possibly know this fact? \n\nThe answer lies in [models](https://arbital.com/p/-model_theory). However, these are not physical or even computational models,\nbut models of a more abstract kind. \n\nFor example, a model of [group theory](https://arbital.com/p/-3g8) is a specific [group](https://arbital.com/p/-3gd),\nwhich itself can be characterized as a specific set. Now, notice that the axioms of group theory\nsay nothing about whether a given group is abelian (commutative) or not. It does not follow\nfrom the axioms that for groups it is always true that $xy = yx$, nor does it follow that for\ngroups there are always some $x$ and $y$ for which $xy \\not= yx$. In other words, the \n\"abelian axiom\" is independent of the axioms of group theory. How do we know this fact? We \nneed simply exhibit two models, two groups, one of which is abelian and the other not. \nFor these groups, I pick, oh say the [cyclic group on 3 elements](https://arbital.com/p/-47y)\nand the [symmetric group](https://arbital.com/p/-497) $S_3$. The first is abelian, the second, not.\n\nIn order to reason about such models of set theory, one assumes the existence \nof \"meta-sets\" in some meta-theory. The entire \"universe\" of set theory is then a certain \"meta-set\" \nbehaving in a certain way. In case this feels too much like cheating,\n[this \nStackExchange answer](https://arbital.com/p/-http://math.stackexchange.com/questions/531516/meta-theory-when-studying-set-theory) should help clear things up. In particular, the following quote from\nVanLiere's thesis:\n\n\"\nSince these questions all have to do with first-order provability, we could take as our metatheory some very weak theory (such as Peano arithmetic) which is sufficient for formalizing first-order logic. However, as is customary in treatises about set theory, we take as our metatheory ZF plus the Axiom of choice in order to have at our disposal the infinitary tools of model theory. We will also use locutions such as ... which are only really justifiable in some even stronger metatheory with the understanding that they could be eliminated through the use of Boolean-valued models or some other device.\n\"\n\nIn other words, it is possible to form these models in some weaker theory on which we have \"more of a grip\". The entirity\nof set theory are then special objects satisfying the axioms of this weaker theory. Is it possible to repeat this process\nad infinitum? No. 
But if we want, we could even deal with just a finite fragment of set theory: We assume that any mathematics\nwe want to do only needs a finite number of the (infinitely many) axioms of set theory. We then prove what we want about this \nfinite fragment. But we may as well have proved it about the whole theory. \n\nNow, the idea is to pick (or construct) two specific objects of the meta-theory such\n that in one of them the axiom of choice is true, and\nin the other the axiom of choice is false.\n\nTo obtain these two models requires vastly different approaches which will not be \ndescribed in detail here. \nMore detail can be found online\n in [Kunen's text](https://arbital.com/p/-http://vanilla47.com/PDFs/Cryptography/Mathematics/Set%20Theory%20PDFs/SET%20THEORY.pdf). The consistency of choice is the easier direction, and involves constructing\nsomething called [Gödel's constructible universe of sets](https://arbital.com/p/-constructible_universe) (or just\nGödel's universe or the constructible universe). The consistency of the negation of choice\nis more difficult, and involves a technique developed by \n[Paul Cohen](https://arbital.com/p/-https://en.wikipedia.org/wiki/Paul_Cohen_)\ncalled [forcing](https://arbital.com/p/-forcing_mathematics).\n\n#A Rose by Any Other Name: Alternative Characterizations of AC#\nThere are a few similar ways to state the axiom of choice. For example:\n\nGiven any set $C$ containing non-empty sets which are pairwise [disjoint](https://arbital.com/p/-disjoint_set)\n(any two sets in $C$ do not [intersect](https://arbital.com/p/-5sb)), \nthere is some set $X$ that intersects each of the sets in $C$ in exactly one element.\n\nThere are also many alternative theorems which at first glance appear to be very different from the \naxiom of choice, but which are actually equivalent to it, in the sense that each of the theorems\ncan both be proven from the axiom of choice and be used to prove it (in conjunction\nwith the other ZF axioms).\n\nA few examples:\n - Zorn's Lemma (described in more detail below).\n - Well-Ordering Theorem (also described in more detail below).\n - The [product](https://arbital.com/p/-5zs) of non-empty sets is non-empty.\n - Tarski's Theorem for Binary [Products](https://arbital.com/p/-product_mathematics) \n (from the quote at the start of this article) that $A$ is [bijective](https://arbital.com/p/-bijection) to $A \\times A$. \n - Every [surjective function](https://arbital.com/p/-4bg) has a [right inverse](https://arbital.com/p/-inverse_of_function).\n - Every [vector space](https://arbital.com/p/-3w0) has a [basis](https://arbital.com/p/-vector_space_basis).\n - Every [connected graph](https://arbital.com/p/-connected_graph) has a [spanning tree](https://arbital.com/p/-spanning_tree).\n - Any pair of sets have comparable [cardinality](https://arbital.com/p/-4w5): for any pair of sets\n$X$ and $Y$, either they are the same size, or one is smaller than the other.\n\nMore examples can be found on the [Wikipedia page](https://arbital.com/p/-https://en.wikipedia.org/wiki/Axiom_of_choice#Equivalents).\n\nBecause of the intuitive nature of some of these statements (especially that products are non-empty,\nthat vector spaces have bases and that cardinalities are comparable), they are often used as evidence\nfor the truth, or motivation for the use, of the Axiom of Choice.\n\n#Zorn's Lemma? 
I hardly Know her!#\nThe following is a specific example of a very common way in which the axiom of choice is used, called Zorn's Lemma. \nIt is a statement equivalent to the axiom of choice, but easier to use in many mathematical applications.\n\nThe statement is as follows:\n\nEvery [partially ordered set](https://arbital.com/p/-3rb) (poset) for which every [chain](https://arbital.com/p/-chain_order_theory) has an [upper bound](https://arbital.com/p/-upper_bound_mathematics)\nhas a [maximal element](https://arbital.com/p/-maximal_mathematics). \n\nIn other words, if you have an ordered set $X$ and you consider any linearly ordered subset $C$, and there is some \nelement $u \\in X$ which is at least as large as any element in $C$, i.e., $u \\geq c$ for any $c \\in C$, then there\nis an element $m \\in X$ which is maximal, in the sense that it is at least as big as any comparable \nelement of $X$, i.e., for any $x \\in X$, it holds that $m \\not< x$. \n(It is maximal, but not necessarily a global maximum. There is nothing in $X$\nlying above $m$, but there may be incomparable elements).\n\nAgain, this is provable for a finite poset X, and for some infinite posets X, but not provable in general. \n\nNow, why is this rather arcane statement useful? \n\nWell, often for some type of mathematical structure we are interested in, \nall the structures of that type form a poset under inclusion, and if a maximal\nsuch structure existed, it would have a particularly nice property.\nFurthermore, for many structures, a union of all the structures in a \nupwards-increasing chain of such structures (under inclusion) is itself\na structure of the right type, as well as an upper-bound for the chain.\nThen Zorn's lemma gives us the maximal structure we are looking for.\n\nAs an example, consider a (esp. infinite-dimensional) vector space $V$, and trying to find a basis\nfor $V$.\n\nChoose any element $v_1 \\in V$. Now consider all possible [linearly independent](https://arbital.com/p/-linear_independence) sets\ncontaining $v$. These form a poset (which contains at least the set $\\{v_1\\}$ since that is linearly independent).\nNow consider any chain, possibly infinite, of such sets. It looks like\n $\\{v_1\\} \\subseteq \\{v, v_2\\} \\subseteq \\{v_1, v_2, v_3 \\} \\subseteq \\cdots$. Then take the \n[union](https://arbital.com/p/-union) of all the sets in the chain\n$ \\{v_1\\} \\cup \\{v_1, v_2\\} \\cup \\{v_1, v_2, v_3 \\} \\cdots = \\{v_1, v_2, v_3, \\ldots \\}$. Call it $B$.\nThen $B$ \ncontains all the elements in any of the sets in the chain. \n\nIt can be shown to be linearly independent, since if some element $v_i$ could be formed\nas a linear combination of finitely many other elements in $B$, then this could be done\nalready in one of the sets in the chain.\n\nThen every chain of such linearly indpendent sets has an upper bound, so\nthe hypothesis of Zorn's Lemma holds. Then by Zorn's Lemma, there is a maximal element\n$M$. By definition, this maximal element has no superset that is itself linearly independent.\nThis set of vectors also spans $V$ (every element of $V$ can be written as a linear\ncombination of vectors in $M$), since if it did not, then there would be a vector\n$v \\in V$ which is linearly independent of $M$ (cannot be written as a linear \ncombination of vectors in $M$) and then the set $M \\cup \\{v\\}$, which is $M$ adjoined\nwith $v$, strictly contains $M$, contradicting the maximality of $M$. 
\n\nSince a basis is precisely a maximal linearly independent set spanning $V$,\n the proof is done.\nQED.\n\n\nOne might wonder why Zorn's lemma is even necessary. Why could we not just have picked the \nunion of the chain as our basis? In a sense, we could have, provided we use the correct chain.\nFor example, the chain of two elements $\\{v_1\\}$ and $\\{v_1, v_2\\}$ is not sufficient. We need\nan infinite chain (and, in fact, a [large enough infinite](https://arbital.com/p/-transfinite) chain at that). \nBut there is a difference between being able to \nprove that any chain has an upper bound, and being able to actually choose a specific chain that works.\nIn some sense, without Zorn's lemma, we can reason in a very general vague way that, yes, the chains all\nhave upper bounds, and there *might* be a long enough chain, and if there is then its upper bound will\nbe the maximal element we need. Zorn's lemma formalizes this intuition, and without it\nwe can't always pin down a specific chain which works.\n\nNote how Zorn's Lemma allows us to make infinitely many arbitrary choices as far as selecting\nelements of the infinite basis for the vector space is concerned. In general, this is where \nZorn's lemma comes in useful. \n\nThe requirement that every chain have an upper bound is necessary. For example, in the [natural numbers](https://arbital.com/p/-45h) $\\mathbb{N}$, the \nentirety of the natural numbers forms a chain. This chain has no upper bound in $\\mathbb{N}$. Also,\nthe natural numbers do not have a maximal element.\n\nNote that it is also possible for a maximal element to exist even though not every chain has an upper bound.\nConsider for example the natural numbers with an 'extra' element which is not comparable to any of the\nnumbers. Then this is a perfectly acceptable poset. Since this extra element is incomparable, there is in \nparticular no element lying above it, so it is a maximal element; yet the chain consisting of all the\nnatural numbers still has no upper bound in this poset.\n\n#Getting Your Ducks in a Row, or, Rather, Getting Your Real Numbers in a Row: The Well-Ordering Principle#\nA [linearly ordered set](https://arbital.com/p/540) is called well-ordered if any of its non-empty subsets has a least element.\n\nFor example, the [natural numbers](https://arbital.com/p/-45h) $\\mathbb{N}$ are well-ordered. Consider any non-empty subset of the natural \nnumbers (e.g., $\\{42, 48, 64, \\ldots\\}$). It has a least element (e.g., $42$). \n\nThe positive [real numbers](https://arbital.com/p/-4bc) (and in fact the positive [rational numbers](https://arbital.com/p/-4zq)) \nare not well-ordered. \nThere is no least element, since for any number bigger than zero (e.g. 1/3) it is possible to find a smaller number\n(e.g. 1/4) which is also bigger than zero.\n\nThe well-ordering principle states that any set is [bijective](https://arbital.com/p/-bijection) to some well-ordered one. \nThis basically states that you can have a well-ordered set of any size. \n\nTo see why this is surprising, try imagining a different linear order on the reals such that any subset\nyou may choose - *any* subset - has a least element. \n\nAgain, the Axiom of Choice allows us to do this.\n\nIn fact, if we are always able to well-order sets, then\nwe are able to use this to make choice functions: imagine you needed to choose an element from each set\nin a set of sets; then you can just choose the least element from each set. \n\n#AC On a Budget: Weaker Versions of the Axiom#\nThere are also theorems which do not follow from ZF, and which do follow from AC, but are not strong\nenough to use to prove AC. 
Equivalently, there are models of set theory\nin which these theorems are true, but in which the axiom of choice does not hold in full generality.\n\nA few examples of such theorems:\n\n - The Hausdorff paradox and Banach-Tarski paradox (mentioned above).\n - A [union](https://arbital.com/p/-union_mathematics) of [countably many](https://arbital.com/p/-countble_infinity) countable sets is countable.\n - The [axiom of dependent choice](https://arbital.com/p/-axiom_of_dependent_choice) (given a non-empty set $X$ and an\n([entire](https://arbital.com/p/-entire_relation)) [binary relation](https://arbital.com/p/-binary_relation) $R$ on $X$, there exists a sequence \n$(x_n)_{n \\in \\mathbb{N}}$ such that $x_n$ is $R$-related to $x_{n+1}$ for every $n$).\n - The [axiom of countable choice](https://arbital.com/p/-axiom_of_countable_choice) (every countable set of sets \nhas a choice function).\n - Every [field](https://arbital.com/p/-field_mathematics) has an [algebraic closure](https://arbital.com/p/-algebraic_closure).\n - Existence of non-principal [ultrafilters](https://arbital.com/p/-ultrafilter).\n - Gödel's [completeness theorem](https://arbital.com/p/-completeness_theorem) for first-order logic.\n - [Boolean Prime Ideal Theorem](https://arbital.com/p/-booelan_prime_ideal_theorem) (useful for proving existence \nof non-principal ultrafilters and Gödel's completeness theorem).\n - The [Law of excluded middle](https://arbital.com/p/-excluded_middle) for logic (in constructive set theories, where it is not otherwise available).\n\nMore examples may be found on the \n[Wikipedia page](https://arbital.com/p/-https://en.wikipedia.org/wiki/Axiom_of_choice#Results_requiring_AC_.28or_weaker_forms.29_but_weaker_than_it).\n\n#And In Related News: The Continuum Hypothesis#\nIntuitively, the [Continuum Hypothesis](https://arbital.com/p/-continuum_hypothesis) (CH) states that there is no\nset strictly bigger than the set of all [natural numbers](https://arbital.com/p/-45h), \nbut strictly smaller than the set of all [real numbers](https://arbital.com/p/-4bc). (These are two\n[infinite](https://arbital.com/p/-infnity) sets, but they are [different infinities](https://arbital.com/p/-2w0)). \n\nThe formal statement concerns [cardinality](https://arbital.com/p/-4w5) of sets. In particular, \nit states that there is no set which has cardinality strictly larger than the\nset of natural numbers, but strictly smaller than the set of real numbers.\n\nIt is called the 'Continuum Hypothesis' because it concerns the size of the\ncontinuum (the set of real numbers) and was hypothesized to be true by Georg\nCantor in 1878. \n\nIt is again independent of the Zermelo-Fraenkel axioms, and this was proven in the same manner and\nat the same time as the proof of the independence of AC from ZF (described in more detail above).\n\nIn fact, the continuum hypothesis was shown to be independent even from ZFC (ZF with the Axiom of Choice).\nThe *Generalized* Continuum Hypothesis (introduced in the next paragraph), however, does *imply* the Axiom of Choice under ZF. In other words, given the \nZF axioms, if you know that the Axiom of Choice is true then you do not yet know anything about the truth\nof the Generalized Continuum Hypothesis; but if you know that the Generalized Continuum Hypothesis is true, then you know the \nAxiom of Choice must also be true.\n\nThe [Generalized Continuum Hypothesis](https://arbital.com/p/-generalized_continuum_hypthesis) (GCH) is, well, a generalized\nversion of the Continuum Hypothesis, stating that not only are there no sets of size lying strictly\nbetween the natural numbers $\\mathbb{N}$ and the reals $\\mathbb{R}$, \nbut that for any set $X$, there is no set of size\nlying strictly between the sizes of $X$ and the [power set](https://arbital.com/p/-6gl) $P(X)$. In particular, note\nthat the set of reals $\\mathbb{R}$ is of the same size as the power set $P(\\mathbb{N})$ of the naturals,\nso that GCH implies CH. It is also strictly stronger than CH (it is not implied by CH).\n\n#Axiom of Choice Considered Harmful: Constructive Mathematics and the Potential Pitfalls of AC#\nWhy does the axiom of choice have such a bad reputation with [constructive](https://arbital.com/p/-constructive_mathematics) \nmathematicians? \n\nIt is important to realise that some of the reasons mathematicians had for doubting AC are no longer \nrelevant. Because of some of its counter-intuitive results, some mathematicians once suspected that the axiom\nmight even contradict the other axioms of set theory; Gödel's consistency proof (mentioned above) laid that\nparticular worry to rest.\n\nTo understand this view, it is necessary to understand more about the constructive view in general. %TODO\n\n(Posting a list of links here until I have a better idea of what to write, in approx. reverse order of relevance / usefulness)\nSee also \n\n\n - [this section of the Wikipedia page on AC](https://arbital.com/p/-https://en.wikipedia.org/wiki/Axiom_of_choice#Criticism_and_acceptance)\n - [this MathOverflow question](https://arbital.com/p/-http://mathoverflow.net/questions/22927/why-worry-about-the-axiom-of-choice),\n - [this paper by Per Martin-Löf on the history and problems with the Axiom of Choice](https://arbital.com/p/-http://comjnl.oxfordjournals.org/content/49/3/345.full)\n - [this post by Greg Muller on why the Axiom of Choice is wrong](https://arbital.com/p/-https://cornellmath.wordpress.com/2007/09/13/the-axiom-of-choice-is-wrong/)\n - [this post on constructive mathematics on the Internet Encyclopaedia of Philosophy](https://arbital.com/p/-http://www.iep.utm.edu/con-math/)\n - [this post \non Good Math Bad Math by Mark C. Chu-Carroll](https://arbital.com/p/-http://scienceblogs.com/goodmath/2007/05/27/the-axiom-of-choice/),\n - [this\npost by Terence Tao on the usefulness of the axiom in \"concrete\" mathematics](https://arbital.com/p/-https://terrytao.wordpress.com/2007/06/25/ultrafilters-nonstandard-analysis-and-epsilon-management/), \n - [this post regarding constructive mathematics on the University of Canterbury website](https://arbital.com/p/-http://www.math.canterbury.ac.nz/php/groups/cm/faq/)\n - [this \"interview with a constructivist\"](https://arbital.com/p/-https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=8&ved=0ahUKEwiPnovngczPAhVM1hoKHflFDtMQFghYMAc&url=http%3A%2F%2Fmath.fau.edu%2Frichman%2FDocs%2FIntrview.tex&usg=AFQjCNFJZO1QGPzG1Cqe_8XcnlzMCrlvsA&sig2=R2goExeTv4EPJ6TF9ks5Yw)\n - [this page on the Stanford Encyclopaedia on constructive mathematics](https://arbital.com/p/-http://plato.stanford.edu/entries/mathematics-constructive/)\n - [this page on the Stanford Encyclopaedia on intuitionistic mathematics](https://arbital.com/p/-http://plato.stanford.edu/entries/intuitionism/)\n\n#Choosing Not to Choose: Set-Theoretic Axioms Which Contradict Choice#\nIt is also possible to assume axioms which contradict the axiom of choice. 
For example, there is the \n[Axiom of Determinancy](https://arbital.com/p/-axiom_of_determinancy). This axiom states that for any two-person [game](https://arbital.com/p/-game_mathematics) \nof a certain type, one player has a winning strategy. \n%TODO\n\n#I Want to Play a Game: Counterintuitive Strategies Using AC#\nThere are a few examples of very counter-intuitive solutions to (seemingly) impossible challanges\nwhich work only in the presence of the Axiom of Choice.\n\nExamples may be found in the following places:\n\n - [Countably many prisoners have to guess hats with no information transfer, yet all but a finite number may go free.](https://arbital.com/p/-https://cornellmath.wordpress.com/2007/09/13/the-axiom-of-choice-is-wrong/)\n - [Countably many people wear hats with \nreal numbers. They can only see everyone else's hats. Shouting at the same time, all but finitely many are guaranteed to guess correctly.](https://arbital.com/p/-http://mathoverflow.net/questions/20882/most-unintuitive-application-of-the-axiom-of-choice)\n - [Countably many mathematicians go into countably many identical rooms each containing countably many boxes containing real numbers. Each mathematician opens all but one box, and guesses the number in the unopened box. All but one mathematician are correct.](https://arbital.com/p/-http://math.stackexchange.com/questions/613506/real-guessing-puzzle)", "date_published": "2016-12-02T20:22:51Z", "authors": ["Daniel Satanove", "Dylan Hendrickson", "Yoni Lavi", "Eric Bruylant", "Mark Chimes", "Jaime Sevilla Molina"], "summaries": ["The axiom of choice states that given an infinite collection of non-empty sets, there is a function that picks out one element from each set."], "tags": ["Needs work"], "alias": "69b"} {"id": "66c7d511d1ade45c6c840f5b26c493b8", "title": "Low-speed explanation", "url": "https://arbital.com/p/low_speed_meta_tag", "source": "arbital", "source_type": "text", "text": "Use this tag to indicate that a page offers a relatively slower, more gentle, or more wordy explanation.\n\nNote that the speed of an explanation is not the same as its technical level. An explanation can assume a high technical level and still take its time going into details and so deserve the low-speed tag.", "date_published": "2016-10-06T16:16:40Z", "authors": ["Eric Rogstad"], "summaries": [], "tags": [], "alias": "6b4"} {"id": "ca79dfbac046ce1a83689a18cb657dc1", "title": "High-speed explanation", "url": "https://arbital.com/p/high_speed_meta_tag", "source": "arbital", "source_type": "text", "text": "Use this tag to indicate that a page offers a relatively faster and more terse explanation.\n\nNote that the speed of an explanation is not the same as its technical level. An explanation can assume a low technical level and still be brief and so deserve the high-speed tag.", "date_published": "2016-10-06T16:18:32Z", "authors": ["Eric Rogstad"], "summaries": [], "tags": [], "alias": "6b5"} {"id": "9e82830c69644e48cd5950b0606e7a33", "title": "Rice's Theorem: Intro (Math 1)", "url": "https://arbital.com/p/6bf", "source": "arbital", "source_type": "text", "text": "Rice's Theorem says that you can't in general figure out anything about what a computer program outputs just by looking at the program. Specifically, we think of programs as taking inputs, doing some kind of computation on the input, and possibly eventually stopping and giving an output. Only \"possibly\" because sometimes programs get in an infinite loop and go forever. 
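\n\nFor instance, here is a small illustrative pair of Python programs (an added sketch, not part of the original example): the first halts and gives an output on every input, while the second loops forever on every input and so never gives an output at all.\n\n    def double(n):\n        return 2 * n    # halts on every input n\n\n    def loop(n):\n        while True:     # never halts, whatever the input n is\n            pass\n\n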
The assignment that takes an input to the output of a program on that input (if it exists) is called the [https://arbital.com/p/-5p2](https://arbital.com/p/-5p2) computed by that program %%note:\"Partial\" functions because the output doesn't exist if the program runs forever.%%. \n\nRice's Theorem says that for any property of partial functions, you can't write a program that looks at some other program's source code, and tells you whether that program computes a function with the property %%note:This is true except for \"trivial\" properties like \"all partial functions\" or \"no partial functions,\" which are easy to write programs to test.%%.\n\nFor example, say we want to know whether a program computes the [https://arbital.com/p/Fibonacci_sequence](https://arbital.com/p/Fibonacci_sequence), i.e. returns the $n$th Fibonacci number on input $n$. If we try to write a Fibonacci-checker program that tells us whether programs compute the Fibonacci sequence, we're going to have a hard time, because by Rice's Theorem it's impossible!\n\nThere might be some programs for which we can say \"yes, this program definitely computes the Fibonacci sequence,\" but we can't write a general procedure that always tells whether some program computes the Fibonacci sequence; there are some very convoluted programs that compute it, but not obviously, and if you modify the Fibonacci-checker to handle some of them, there will always be even more convoluted programs that it doesn't work on.\n\nWe make a distinction between *programs* and *(partial) functions computed by programs*. There are usually many programs that compute the same function. Rice's Theorem only cares about the partial functions; it's possible to test statements about programs, like \"has fewer than 100 lines.\" But this kind of statement won't always tell you something about the function the program computes.\n\n# Proof\n\nTo prove Rice's Theorem, we'll use the [halting theorem](https://arbital.com/p/46h), which says that you can't write a program that can always tell whether some program will halt on some input. Here's how the proof will go: suppose you come up with a program that checks whether programs compute partial functions with some property, e.g. you successfully write a Fibonacci-checker. We'll then use the Fibonacci-checker to design a Halt-checker, which can tell when a program halts. But this is impossible by the Halting theorem, so this couldn't be the case: your Fibonacci-checker must not work!\n\nWe need to make a couple of assumptions. First, that the empty function, meaning the function \"computed\" by a program that always enters an infinite loop regardless of its input, does not satisfy the property. If it does, we can replace the property by its opposite (say, replace \"computes the Fibonacci sequence\" with \"doesn't compute the Fibonacci sequence\"). If we can tell whether a program computes a function that *doesn't* have the property, then we can tell when it does, simply be reversing the output.\n\nSecond, we assume there is *some* program that does compute a function with the property, i.e. a program that the checker says \"yes\" to. If there isn't, we could have the checker always say \"no.\" Say the program $s$ computes a function with the property.\n\n Now suppose you have a checker $C$ that, when given a program as input, returns whether the program computes a function with some property of interest. We want to test whether some program $M$ halts, when given input $x$. 
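\n\nBefore the details, here is the overall shape of the construction as a hedged Python sketch; the helper names make_proxy and make_halt_checker are invented for this illustration, and the page's own worked example appears further down.\n\n    def make_proxy(M, x, s):\n        # Behaves exactly like s whenever M halts on x, and loops forever otherwise.\n        def proxy(z):\n            M(x)            # if M never halts on x, we never get past this line\n            return s(z)     # otherwise proxy computes the same partial function as s\n        return proxy\n\n    def make_halt_checker(C, s):\n        # C checks the property; s is some program that C says yes to.\n        return lambda M, x: C(make_proxy(M, x, s))\n\nIf a checker $C$ really existed, make_halt_checker would decide the halting problem, which is the contradiction we are about to derive.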
\n\nWe start by building a new program, called $Proxy_s$, which does the following on some input $z$:\n\n- Run $M$ on input $x$.\n- Run $s$ on input $z$, and return the result.\n\nIf $M$ halts on $x$, $Proxy_s$ moves on to the second step, and returns the same value as $s$. Since they return the same value (or no value, because of an infinite loop) on every possible input, $s$ and $Proxy_s$ compute the same function. So $Proxy_s$ computes a function with the property.\n\nIf $M$ doesn't halt on $x$, $Proxy_s$ never finishes the first step, and runs forever. So $Proxy_s$ always ends up in an infinite loop, which means it computes the empty function. So $Proxy_s$ computes a function that doesn't have the property.\n\nWhat happens if we plug $Proxy_s$ as input into $C$? If $C$ says \"yes,\" $Proxy_s$ computes a function with the property, which means $M$ halts on $x$. If $C$ says \"no,\" $Proxy_s$ doesn't compute a function with the property, so $M$ doesn't halt on $x$.\n\nSo we can tell whether $M$ halts on $x$! All we have to do is write this proxy function, and ask $C$ whether its function has the property. But the Halting theorem says we can't do this! That means $C$ can't exist, and we can't write a checker to see whether programs compute functions with the property.\n\nAs an example, suppose we have a Fibonacci-checker $fib\\_checker$. The following Python program computes the Fibonacci numbers, and so $fib\\_checker$ says \"yes\" to it:\n\n def fib(n):\n a, b = 0, 1\n for i in range(n):\n a, b = b, a+b\n return a\n\nThen, for some program $M$ and input $x$, $Proxy_{fib}$ is the program:\n\n def Proxy_fib(z):\n M(x)\n return fib(z)\n\nIf $fib\\_checker$ works, this program says whether $M$ halts on $x$:\n\n def halts(M,x)\n return fib_checker(Proxy_fib)\n\nExpanding this, we have the program:\n\n def halts(M,x)\n return fib_checker('''\n M(x)\n a, b = 0, 1\n for i in range(n):\n a, b = b, a+b\n return a\n ''')\n\nIf $M$ halts on $x$, the program inside the triple quotes computes the Fibonacci sequence, so $fib\\_checker$ returns $true$, and so does $halts$. If $M$ doesn't halt on $x$, the program in the triple quotes also doesn't halt, and returns the empty function, which isn't the Fibonacci sequence. 
So $fib\\_checker$ returns $false$, and so does $halts$.", "date_published": "2016-11-18T19:38:43Z", "authors": ["Dylan Hendrickson", "Eric Rogstad"], "summaries": [], "tags": ["Math 1", "Start"], "alias": "6bf"} {"id": "669699ca14d33456d51c3d33cadace6a", "title": "Axiom of Choice: Guide", "url": "https://arbital.com/p/axiom_of_choice_guide", "source": "arbital", "source_type": "text", "text": "## Learning the Axiom of Choice ##\n\n[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)\n\n\n[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)\n\n\n[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)\n\n[https://arbital.com/p/multiple-choice](https://arbital.com/p/multiple-choice)\n\n%%%box:\nYou will get the following pages:\n%%wants-requisite([https://arbital.com/p/6c7](https://arbital.com/p/6c7)):\nBasic intro %%\n%%wants-requisite([https://arbital.com/p/6c9](https://arbital.com/p/6c9)):\nHistory and controversy %%\n%%wants-requisite([https://arbital.com/p/6c8](https://arbital.com/p/6c8)):\nDefinition (Formal) %%\n%%wants-requisite([https://arbital.com/p/6c7](https://arbital.com/p/6c7)):\nDefinition (Intuitive) %%\n%%wants-requisite([https://arbital.com/p/6cb](https://arbital.com/p/6cb)):\n\n%start-path([https://arbital.com/p/6c9](https://arbital.com/p/6c9))%\n%%\n%%%\n\n\n\n\nPlan for this guide:\n\nAxiom of Choice: Guide\n\n\n\nConditional paragraphs for concepts being described on later pages.\n\nQuestions on what the main page should look like\n\n1 Introduction\n2 Getting the Heavy Maths out the Way: Definitions\n3 Axiom Unnecessary for Finite Collections of Sets\n4 Controversy: Mathematicians Divided! Counter-Intuitive Results, and The History of the Axiom of Choice\n5 So, What is this Choice Thing Good for Anyways?\n6 Physicists Hate Them! Find out How Banach and Tarski Make Infinity Dollars with this One Simple Trick!\n7 How Something Can Exist Without Actually Existing: The Zermelo Fraenkel Axioms and the Existence of a Choice Function\n8 How Something can be Neither True nor False:\n9 A Rose by Any Other Name: Alternative Characterizations of AC\n10 Zorn's Lemma? I hardly Know her!\n11 Getting Your Ducks in a Row, or, Rather, Getting Your Real Numbers in a Row: The Well-Ordering Principle\n12 AC On a Budget: Weaker Versions of the Axiom\n13 And In Related News: The Continuum Hypothesis\n14 Axiom of Choice Considered Harmful: Constructive Mathematics and the Potential Pitfalls of AC\n15 Choosing Not to Choose: Set-Theoretic Axioms Which Contradict Choice\n16 I Want to Play a Game: Counterintuitive Strategies Using AC \n\n-Guide Questions-\n\nI'd also like the corresponding pages to show or hide some information based on what is chosen here. \n\nChoose one of the pregenerated paths, or customize your own.\na. Comprehensive path: Learn more than you wanted to know!\n\tAdd all pages\nb. Substantial path: All the most important stuff\n\tAdd 1,2,3,4,5,6,7,8,10,14\nc. Compact path: Only the very important stuff\n\tAdd 2, 3, 5, 7, 10\nd. First-time path: Learn the important basics without getting too bogged down.\n\tAdd 1. 2 (intuitive), 2 (definition), 3, 4, 5, 6, 7, 8, 10\ne. Overview path: Just get a taste of the axiom\n\tAdd 1, 2 (intuitive), 2 (definition), 3, 4, 5, 7\nf. Custom Path\n\n\n\nIf the user chooses to customize a path:\n1. What do you know about set theory, mathematical logic and axioms?\na. Almost nothing\n\tAdd Definitions (Intuitive)\n\tAdd 3. Finite Sets\n\tAdd 6. Banach-Tarski\n\tAdd 7. 
ZF Axioms and Existence of a Choice Function\n\tAdd 8. How can something be neither True nor False\nb. Well, I've used them for other maths, but haven't studied them directly.\n\tAdd 2. Definitions\t\n\tAdd 3. Finite Sets\n\tAdd 7. ZF Axioms and Existence of a Choice Function\n\tAdd 8. How can something be neither True nor False\nc. I have a good grasp of it, but would like some explanation anyway. \n\tAdd 2. Definitions\n\tAdd 3. Finite Sets\n\tAdd 7. ZF Axioms and Existence of a Choice Function\n\tAdd 8. How can something be neither True nor False\nd. I have a good grasp of it and don't need to hear more. \n\tAdd 2. Definitions\n\n\n2. This axiom has a rich and interesting history. How much do you want to learn about?\na. Give me all of the juicy history side-facts!\n\tAdds 1. Introduction\n\tAdds 4. Controversy (History)\nb. Eh, give me a short intro.\n\tAdds 1. Intro\nc. Just stick to the mathematics, please.\n\n\n-Questions 3-5 are revealed if, and only if, the answer to Question 1 is b,c, or d.-\n\n32. How much detail would you like to read about the mathematics related to the axiom?\na. I'd like to know a lot of the detail.\n\tAdds 5. What is this Choice Thing Good For\n\tAdds 9. Alternative Characterizations\n\tAdds 10. Zorn's Lemma\n\tAdds 12. Weaker Versions of the Axiom\n\tAdds 13. Continuum Hypthesis\nb. I only want the most important extra details.\n\tAdds 5. What is this Choice thing good for\n\tAdds 10. Zorn's Lemma\nc. I only want the absolute essentials.\n\n4. How much would you like to know about constructive mathematics and its relation to the axiom of choice?\na. I don't care to read about it right now.\nb. I don't know what that is, please tell me about it.\n\tAdds 14. Constructive Mathematics and Pitfalls of AC\n\tPotentially adds a link to an intro on constructivsm.\nc. I know what it is, but I would like to hear more about how it relates to Axiom of Choice.\n\tAdds 14. Constructive Mathematics and Pitfalls of AC\n5. How much do you care about the paradoxes the axiom implies? \na. I don't care about them, I just want to know about the axiom itself.\nb. Give me a very basic overview.\n\tAdds 6. Banach-Tarski Paradox\nd. Just tell me some interesting ones.\n\tAdds 6. Banach-Tarksi Paradox\n\tAdds 16. Counterintuitive strategies using AC\nc. Tell me the whole story! \n\tAdds 6. Banach-Tarski Paradox\n\tAdds 11. Well-Ordering Principle\n\tAdds 15. Axioms which contradict choice\n\tAdds 16. Counterintuitive strategies using AC\n\n\n\n\nO", "date_published": "2016-12-02T16:59:36Z", "authors": ["Mark Chimes", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "6c6"} {"id": "3dfb76ce701e267157780098c14ad387", "title": "Axiom of Choice: Introduction", "url": "https://arbital.com/p/axiom_of_choice_introduction", "source": "arbital", "source_type": "text", "text": "\"The Axiom of Choice is necessary to select a set from an infinite number of pairs of socks, but not an infinite number of pairs of shoes.\" — *Bertrand Russell, Introduction to mathematical philosophy*\n\n\"Tarski told me the following story. He tried to publish his theorem \\[equivalence between the Axiom of Choice and the statement 'every infinite set A has the same cardinality as AxA'\\](https://arbital.com/p/the) in the Comptes Rendus Acad. Sci. Paris but Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. 
And Tarski said that after this misadventure he never again tried to publish in the Comptes Rendus.\"\n- *Jan Mycielski, A System of Axioms of Set Theory for the Rationalists*\n\n#Obligatory Introduction#\nThe Axiom of Choice, the most controversial axiom of the 20th Century. \n\nThe axiom states that a certain kind of function, called a `choice' function, always exists. It is called a choice function, because, given a collection of non-empty sets, the function 'chooses' a single element from each of the sets. It is a powerful and useful axiom, asserting the existence of useful mathematical structures (such as bases for [vector spaces](https://arbital.com/p/-3w0) of arbitrary [dimension](https://arbital.com/p/-dimension_mathematics), and [ultraproducts](https://arbital.com/p/-ultraproduct)). It is a generally accepted axiom, and is in wide use by mathematicians. In fact, according to Elliott Mendelson in Introduction to Mathematical Logic (1964) \"The status of the Axiom of Choice has become less controversial in recent years. To most mathematicians it seems quite plausible and it has so many important applications in practically all branches of mathematics that not to accept it would seem to be a wilful hobbling of the practicing mathematician. \"\n\nNeverless, being an [axiom](https://arbital.com/p/-axiom_mathematics), it cannot be proven and must instead be assumed. \nIn particular, it is an axiom of [set theory](https://arbital.com/p/-set_theory) and it is not provable from the other axioms (the Zermelo-Fraenkel axioms of Set Theory). In fact many mathematicians, in particular [constructive](https://arbital.com/p/-constructive_mathematics) mathematicians, reject the axiom, stating that it does not capture a 'real' or 'physical' property, but is instead just a mathematical oddity, an artefact of the mathematics used to approximate reality, rather than reality itself. In the words of the LessWrong community: the constructive mathematicians would claim it is a statement about [https://arbital.com/p/-https://wiki.lesswrong.com/wiki/Map_and_Territory_](https://arbital.com/p/-https://wiki.lesswrong.com/wiki/Map_and_Territory_). \n\nHistorically, the axiom has experienced much controversy. Before it was shown to be independent of the other axioms, it was believed either to follow from them (i.e., be 'True') or lead to a contradiction (i.e., be 'False'). Its independence from the other axioms was, in fact, a very surprising result at the time.", "date_published": "2016-10-10T18:29:09Z", "authors": ["Mark Chimes"], "summaries": [], "tags": [], "alias": "6c7"} {"id": "956a4e87e685190636e8a31508343b9d", "title": "Axiom of Choice: Definition (Formal)", "url": "https://arbital.com/p/axiom_of_choice_definition_mathematical", "source": "arbital", "source_type": "text", "text": "#Getting the Heavy Maths out the Way: Definitions#\nIntuitively, the [axiom](https://arbital.com/p/-axiom_mathematics) of choice states that, given a collection of *[non-empty](https://arbital.com/p/-5zc)* [sets](https://arbital.com/p/-3jz), there is a [function](https://arbital.com/p/-3jy) which selects a single element from each of the sets. \n\nMore formally, given a set $X$ whose [elements](https://arbital.com/p/-5xy) are only non-empty sets, there is a function \n$$\nf: X \\rightarrow \\bigcup_{Y \\in X} Y \n$$\nfrom $X$ to the [union](https://arbital.com/p/-5s8) of all the elements of $X$ such that, for each $Y \\in X$, the [image](https://arbital.com/p/-3lh) of $Y$ under $f$ is an element of $Y$, i.e., $f(Y) \\in Y$. 
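\n\nFor example, if $X = \\{\\{1,2\\}, \\{3\\}, \\{4,5,6\\}\\}$ (a small collection invented purely for illustration), then one choice function is given by $f(\\{1,2\\}) = 1$, $f(\\{3\\}) = 3$ and $f(\\{4,5,6\\}) = 4$; here $f$ simply picks the least element of each set.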
\n\nIn [logical notation](https://arbital.com/p/-logical_notation),\n$$\n\\forall_X \n\\left( \n\\left[ \\forall_{Y \\in X} \\ Y \\not= \\emptyset \\right] \n\\Rightarrow \n\\left[ \\exists_{f : X \\rightarrow \\bigcup_{Y \\in X} Y} \\ \\forall_{Y \\in X} \\ f(Y) \\in Y \\right]\n\\right)\n$$\n\n#Axiom Unnecessary for Finite Collections of Sets#\nFor a [finite set](https://arbital.com/p/-5zy) $X$ containing only [finite](https://arbital.com/p/-5zy) non-empty sets, the axiom is actually provable (from the [Zermelo-Fraenkel axioms](https://arbital.com/p/-zermelo_fraenkel_axioms) of set theory ZF), and hence does not need to be given as an [axiom](https://arbital.com/p/-axiom_mathematics). In fact, even for a finite collection of possibly infinite non-empty sets, the axiom of choice is provable (from ZF), using the [axiom of induction](https://arbital.com/p/-axiom_of_induction). In this case, the function can be explicitly described. For example, if the set $X$ contains only three, potentially infinite, non-empty sets $Y_1, Y_2, Y_3$, then the fact that they are non-empty means they each contain at least one element, say $y_1 \\in Y_1, y_2 \\in Y_2, y_3 \\in Y_3$. Then define $f$ by $f(Y_1) = y_1$, $f(Y_2) = y_2$ and $f(Y_3) = y_3$. This construction is permitted by the ZF axioms.\n\nThe problem comes in if $X$ contains an infinite number of non-empty sets. Let's assume $X$ contains a [countable](https://arbital.com/p/-2w0) number of sets $Y_1, Y_2, Y_3, \\ldots$. Then, again intuitively speaking, we can explicitly describe how $f$ might act on finitely many of the $Y$s (say the first $n$ for any natural number $n$), but we cannot describe it on all of them at once. \n\nTo understand this properly, one must understand what it means to be able to 'describe' or 'construct' a function $f$. This is described in more detail in the sections which follow. But first, a bit of background on why the axiom of choice is interesting to mathematicians.", "date_published": "2016-10-10T19:04:45Z", "authors": ["Mark Chimes"], "summaries": [], "tags": [], "alias": "6c8"} {"id": "000499bb7ebb027a1ed49e3fd674ac69", "title": "Axiom of Choice: History and Controversy", "url": "https://arbital.com/p/axiom_of_choice_history_and_controversy", "source": "arbital", "source_type": "text", "text": "#Controversy: Mathematicians Divided! Counter-Intuitive Results, and The History of the Axiom of Choice#\nMathematicians have been using an intuitive concept of a set for probably as long as mathematics has been practiced. \nAt first, mathematicians assumed that the axiom of choice was simply true (as indeed it is for finite collections of sets). \n\n[Georg Cantor](https://arbital.com/p/-https://en.wikipedia.org/wiki/Georg_Cantor) introduced the concept of [transfinite numbers](https://arbital.com/p/-transfinite_number) \nand different [cardinalities of infinity](https://arbital.com/p/-4w5) in an 1874 \n[paper](https://arbital.com/p/https://en.wikipedia.org/wiki/Georg_Cantor%27s_first_set_theory_article) (which contains his infamous\n[Diagonalization Argument](https://arbital.com/p/-https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument)) \n and along with this sparked the introduction of [set theory](https://arbital.com/p/-set_theory).\n In 1883, Cantor introduced a principle called the 'Well-Ordering Principle'\n(discussed further in a section below) which he called a 'law of thought' (i.e., intuitively true). 
\nHe attempted to prove this principle from his other principles, but found that he was unable to do so.\n\n[Ernst Zermelo](https://arbital.com/p/-https://en.wikipedia.org/wiki/Ernst_Zermelo) attempted to \ndevelop an [axiomatic](https://arbital.com/p/-axiom_system) treatment of set theory. He \n managed to prove the Well-Ordering Principle in 1904 by introducing a new principle: The Principle of Choice.\nThis sparked much discussion amongst mathematicians. In 1908 published a paper containing responses to this debate,\nas well as a new formulation of the Axiom of Choice. In this year, he also published his first version of \nthe set theoretic axioms, known as the [Zermelo Axioms of Set Theory](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo_set_theory).\nMathematicians, [Abraham Fraenkel](https://arbital.com/p/-https://en.wikipedia.org/wiki/Abraham_Fraenkel) and \n[Thoralf Skolem](https://arbital.com/p/-https://en.wikipedia.org/wiki/Thoralf_Skolem) improved this system (independently of each other)\ninto its modern version, the [Zermelo Fraenkel Axioms of Set Theory](https://arbital.com/p/-https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory\n).\n\nIn 1914, [Felix Hausdorff](https://arbital.com/p/https://en.wikipedia.org/wiki/Felix_Hausdorff) proved \n[Hausdorff's paradox](https://arbital.com/p/https://en.wikipedia.org/wiki/Hausdorff_paradox). The ideas\nbehind this proof were used in 1924 by [Banach](https://arbital.com/p/-https://en.wikipedia.org/wiki/Stefan_Banach\nStefan) and [Alfred Tarski](https://arbital.com/p/-https://en.wikipedia.org/wiki/Alfred_Tarski)\nto prove the more famous Banach-Tarski paradox (discussed in more detail below).\nThis latter theorem is often quoted as evidence of the falsehood of the axiom \nof choice.\n\nBetween 1935 and 1938, [Kurt Gödel](https://arbital.com/p/-https://en.wikipedia.org/wiki/Kurt_G%C3%B6del) proved that\nthe Axiom of Choice is consistent with the rest of the ZF axioms.\n\nFinally, in 1963, [Paul Cohen](https://arbital.com/p/-https://en.wikipedia.org/wiki/Paul_Cohen) developed a revolutionary\nmathematical technique called [forcing](https://arbital.com/p/-forcing_mathematics), with which he proved that the \naxiom of choice could not be proven from the ZF axioms (in particular, that the negation of AC\nis consistent with ZF). For this, and his proof of the consistency of the negation of the \n[Generalized Continuum Hypothesis](https://arbital.com/p/-continuum_hypothesis) from ZF, he was awarded a fields medal\nin 1966.\n\nThis axiom came to be accepted in the general mathematical community, but was rejected by the\n[constructive](https://arbital.com/p/-constructive_mathematics) mathematicians as being fundamentally non-constructive. \nHowever, it should be noted that in many forms of constructive mathematics, \nthere are *provable* versions of the axiom of choice.\nThe difference is that in general in constructive mathematics, exhibiting a set of non-empty sets\n(technically, in constructive set-theory, these should be 'inhabited' sets) also amounts to \nexhibiting a proof that they are all non-empty, which amounts to exhibiting an element for all\nof them, which amounts to exhibiting a function choosing an element in each. 
So in constructive \nmathematics, to even state that you have a set of inhabited sets requires stating that you have a choice\nfunction to these sets proving they are all inhabited.\n\nSome explanation of the history of the axiom of choice (as well as some of its issues)\ncan be found in the \npaper \"100 years of Zermelo's axiom of choice: what was the problem with it?\"\nby the constructive mathematician \n[Per Martin-Löf](https://arbital.com/p/-https://en.wikipedia.org/wiki/Per_Martin-L%C3%B6f)\nat [this webpage](https://arbital.com/p/-http://comjnl.oxfordjournals.org/content/49/3/345.full). \n\n(Martin-Löf studied under [Andrey Kolmogorov](https://arbital.com/p/-https://en.wikipedia.org/wiki/Andrey_Kolmogorov) of\n [Kolmogorov complexity](https://arbital.com/p/-5v) and has made contributions to [information theory](https://arbital.com/p/-3qq), \n[mathematical_statistics](https://arbital.com/p/-statistics), and [mathematical_logic](https://arbital.com/p/-mathematical_logic), including developing a form of \nintuitionistic [https://arbital.com/p/-3sz](https://arbital.com/p/-3sz)).\n\nA nice timeline is also summarised on [Stanford Encyclopaedia of Philosophy](https://arbital.com/p/-http://plato.stanford.edu/entries/axiom-choice/index.html#note-6\nThe).", "date_published": "2016-10-12T12:15:33Z", "authors": ["Tarn Somervell Fletcher", "Mark Chimes"], "summaries": [], "tags": [], "alias": "6c9"} {"id": "27622ad024b1c35b79ac2aa92f090d83", "title": "Axiom of Choice Definition (Intuitive)", "url": "https://arbital.com/p/axiom_of_choice_definition_intuitive", "source": "arbital", "source_type": "text", "text": "#Getting the Heavy Maths out the Way: Definitions#\nIntuitively, the [axiom](https://arbital.com/p/-axiom_mathematics) of choice states that, given a collection of *[non-empty](https://arbital.com/p/-5zc)* [sets](https://arbital.com/p/-3jz), there is a [function](https://arbital.com/p/-3jy) which selects a single element from each of the sets. \n\nMore formally, given a set $X$ whose [elements](https://arbital.com/p/-5xy) are only non-empty sets, there is a function \n$$\nf: X \\rightarrow \\bigcup_{Y \\in X} Y \n$$\nfrom $X$ to the [union](https://arbital.com/p/-5s8) of all the elements of $X$ such that, for each $Y \\in X$, the [image](https://arbital.com/p/-3lh) of $Y$ under $f$ is an element of $Y$, i.e., $f(Y) \\in Y$. \n\nIn [logical notation](https://arbital.com/p/-logical_notation),\n$$\n\\forall_X \n\\left( \n\\left[\\in X} Y \\not= \\emptyset \\right](https://arbital.com/p/\\forall_{Y) \n\\Rightarrow \n\\left[\n\\left](https://arbital.com/p/\\exists)\n\\right)\n$$\n\n#Axiom Unnecessary for Finite Collections of Sets#\nFor a [finite set](https://arbital.com/p/-5zy) $X$ containing only [finite](https://arbital.com/p/-5zy) non-empty sets, the axiom is actually provable (from the [Zermelo-Fraenkel axioms](https://arbital.com/p/-zermelo_fraenkel_axioms) of set theory ZF), and hence does not need to be given as an [axiom](https://arbital.com/p/-axiom_mathematics). In fact, even for a finite collection of possibly infinite non-empty sets, the axiom of choice is provable (from ZF), using the [axiom of induction](https://arbital.com/p/-axiom_of_induction). In this case, the function can be explicitly described. For example, if the set $X$ contains only three, potentially infinite, non-empty sets $Y_1, Y_2, Y_3$, then the fact that they are non-empty means they each contain at least one element, say $y_1 \\in Y_1, y_2 \\in Y_2, y_3 \\in Y_3$. 
Then define $f$ by $f(Y_1) = y_1$, $f(Y_2) = y_2$ and $f(Y_3) = y_3$. This construction is permitted by the axioms ZF.\n\nThe problem comes in if $X$ contains an infinite number of non-empty sets. Let's assume $X$ contains a [countable](https://arbital.com/p/-2w0) number of sets $Y_1, Y_2, Y_3, \\ldots$. Then, again intuitively speaking, we can explicitly describe how $f$ might act on finitely many of the $Y$s (say the first $n$ for any natural number $n$), but we cannot describe it on all of them at once. \n\nTo understand this properly, one must understand what it means to be able to 'describe' or 'construct' a function $f$. This is described in more detail in the sections which follow. But first, a bit of background on why the axiom of choice is interesting to mathematicians.", "date_published": "2016-10-10T19:11:14Z", "authors": ["Mark Chimes"], "summaries": [], "tags": [" Axiom of Choice: Definition (Formal)"], "alias": "6cb"} {"id": "dac1e9b88fd932a4113542697ac19641", "title": "Concept", "url": "https://arbital.com/p/concept_meta_tag", "source": "arbital", "source_type": "text", "text": "Add this tag to pages which represent a concept. A concept page doesn't have an explanation, it merely describes what the concept is. Take a look at the text that comes before the table of contents on any Wikipedia page to see what a good example of a concept page might contain. Concepts are also expected to be used as requisites.", "date_published": "2016-10-10T22:07:05Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "6cc"} {"id": "ec93be37b70f96f0eb11050a46436ca7", "title": "Axiom", "url": "https://arbital.com/p/axiom", "source": "arbital", "source_type": "text", "text": "An **axiom** of a [theory](https://arbital.com/p/theory_mathematics) $T$ is a [well-formed](https://arbital.com/p/well_formed) [sentence](https://arbital.com/p/sentence_mathematics) in the [language](https://arbital.com/p/language_mathematics) of the theory that we assume to be true without a formal justification.\n\n#Models\nModels of a certain theory are going to be those mathematical objects in which the axioms hold, so they can be used to pin down the mathematical structures we want to talk about.\n\nNormally, when we want to reason about a particular aspect of the world we have to try to figure out a sufficiently descriptive set of axioms which are satisfied by the thing we want to reason about. Then we can use [deduction rules](https://arbital.com/p/) to deduce consequences of those axioms, which will also be satisfied by the thing in question.\n\nFor example, we may want to model how viral videos spread across the internet. Then we can make some assumptions about this situation. For example, we may consider that the internet is a [graph](https://arbital.com/p/) in which each node is a person, and its edges are friendships. We may further assume that the edges have a weight between 0 and 1 representing the probability that in a time step a person will tell its friend about the video. Then we can use this model to figure out how kitten videos end up on your twitter feed.\n\nThis is a particularly complex model with many assumptions behind. Formalizing all those assumptions and turning them into axioms would be a pain in the ass, but they are still there, albeit hidden.\n\nFor example, there might be an axiom in the language of [first order logic](https://arbital.com/p/) stating that $\\forall w. 
weight(w)\\rightarrow 0 0$ such that $ \\alpha x_i = y_i$ for all $i$ from 1 to $n.$\n\nWhen we write a set of odds using colons, like $(x_1 : x_2 : \\ldots : x_n),$ it is understood that the '=' sign denotes this equivalence. Thus, $(3 : 6) = (9 : 18).$\n\nA set of odds with only two terms can also be written as a fraction $\\frac{x}{y},$ where it is understood that $\\frac{x}{y}$ denotes the odds $(x : y).$ These fractions are often called \"odds ratios.\"\n\n# Example\n\nSuppose that in some forest, 40% of the trees are rotten and 60% of the trees are healthy. There are then 2 rotten trees for every 3 healthy trees, so we say that the relative *odds* of rotten trees to healthy trees is 2 : 3. If we selected a tree at random from this forest, the *probability* of getting a rotten tree would be 2/5, but the *odds* would be 2 : 3 for rotten vs. healthy trees.\n\n![2 sick trees, 3 healthy trees](https://i.imgur.com/GVZnz2c.png?0)\n\n# Conversion between odds and probabilities\n\nConsider three propositions, $X,$ $Y,$ and $Z,$ with odds of $(3 : 2 : 6).$ These odds assert that $X$ is half as probable as $Z.$\n\nWhen the set of propositions are [mutually exclusive and exhaustive](https://arbital.com/p/1rd), we can convert a set of odds into a set of [probabilities](https://arbital.com/p/1rf) by [normalizing](https://arbital.com/p/1rk) the terms so that they sum to 1. This can be done by summing all the components of the ratio, then dividing each component by the sum:\n\n$$(x_1 : x_2 : \\dots : x_n) = \\left(\\frac{x_1}{\\sum_{i=1}^n x_i} : \\frac{x_2}{\\sum_{i=1}^n x_i} : \\dots : \\frac{x_n}{\\sum_{i=1}^n x_i}\\right)$$\n\nFor example, to obtain probabilities from the odds ratio 1/3, w write:\n\n$$(1 : 3) = \\left(\\frac{1}{1+3}:\\frac{3}{1+3}\\right) = ( 0.25 : 0.75 )$$\n\nwhich corresponds to the probabilities of 25% and 75%.\n\nTo go the other direction, recall that $\\mathbb P(X) + \\mathbb P(\\neg X) = 1,$ where $\\neg X$ is the negation of $X.$ So the odds for $X$ vs $\\neg X$ are $\\mathbb P(X) : \\mathbb P(\\neg X)$ $=$ $\\mathbb P(X) : 1 - \\mathbb P(X).$ If Alexander Hamilton has a 20% probability of winning the election, his odds for winning vs losing are $(0.2 : 1 - 0.2)$ $=$ $(0.2 : 0.8)$ $=$ $(1 : 4).$\n\n# Bayes' rule\n\nOdds are exceptionally convenient when reasoning using [Bayes' rule](https://arbital.com/p/1lz), since the [prior](https://arbital.com/p/1rm) odds can be term-by-term multiplied by a set of [relative likelihoods](https://arbital.com/p/relative_likelihoods) to yield the [posterior](https://arbital.com/p/1rp) odds. (The posterior odds in turn can be normalized to yield posterior probabilities, but if performing repeated updates, it's [more convenient](https://arbital.com/p/1zg) to multiply by all the likelihood ratios under consideration before normalizing at the end.)\n\n$$\\dfrac{\\mathbb{P}(H_i\\mid e_0)}{\\mathbb{P}(H_j\\mid e_0)} = \\dfrac{\\mathbb{P}(e_0\\mid H_i)}{\\mathbb{P}(e_0\\mid H_j)} \\cdot \\dfrac{\\mathbb{P}(H_i)}{\\mathbb{P}(H_j)}$$\n\nAs a more striking illustration, suppose we receive emails on three subjects: Business (60%), personal (30%), and spam (10%). Suppose that business, personal, and spam emails are 60%, 10%, and 90% likely respectively to contain the word \"money\"; and that they are respectively 20%, 80%, and 10% likely to contain the word \"probability\". 
Assume for the sake of discussion that a business email containing the word \"money\" [is thereby no more or less likely](https://arbital.com/p/naive_bayes) to contain the word \"probability\", and similarly with personal and spam emails. Then if we see an email containing both the words \"money\" and \"probability\":\n\n$$(6 : 3 : 1) \\times (6 : 1 : 9) \\times (2 : 8 : 1) = (72 : 24 : 9) = (24 : 8 : 3)$$\n\n...so the posterior odds are 24 : 8 : 3 favoring the email being a business email, or roughly 69% probability after [normalizing](https://arbital.com/p/1rk).\n\n# Log odds\n\nThe odds $\\mathbb{P}(X) : \\mathbb{P}(\\neg X)$ can be viewed as a dimensionless scalar quantity $\\frac{\\mathbb{P}(X)}{\\mathbb{P}(\\neg X)}$ in the range $[0, +\\infty]$. If the odds of Alexander Hamilton becoming President are 0.75 to 0.25 in favor, we can also say that he is 3 times as likely to become President as not. Or if the odds were 0.4 to 0.6, we could say that Alexander Hamilton was 2/3rds as likely to become President as not. 
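\n\nAs a small illustrative sketch (in Python, with made-up helper names), here is the back-and-forth between a probability and this single-number form of the odds:\n\n    def odds_ratio(p):\n        # probability of X  ->  P(X) / P(not X)\n        return p / (1 - p)\n\n    def probability(r):\n        # odds ratio  ->  probability of X\n        return r / (1 + r)\n\n    odds_ratio(0.75)    # 3.0 (3 times as likely as not)\n    odds_ratio(0.4)     # 0.666... (2/3rds as likely as not)\n    probability(3.0)    # 0.75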
It also demonstrates how log odds are a good fit for measuring *strength of belief and evidence,* even if classical probabilities are a better representation of *degrees of caring* and betting odds.\n\n%%%comment: We are checking to see if users will click this button, even though we don't have the content for it yet.%%%\n%%hidden(Check my understanding):\nComing soon!\n%%", "date_published": "2016-10-12T22:40:11Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["C-Class"], "alias": "6cj"} {"id": "637455779e16e9f41f08ad616cd65a9b", "title": "Countability", "url": "https://arbital.com/p/countability", "source": "arbital", "source_type": "text", "text": "The [set](https://arbital.com/p/-3jz) of *counting numbers*, or of *positive integers*, is the set $\\mathbb{Z}^+ = \\{1, 2, 3, 4, \\ldots\\}$.\n\nA set $S$ is called *countable* or *enumerable* if there exists a [surjection](https://arbital.com/p/4bg) from the counting numbers onto $S$.\n\n### Example: The rational numbers ###\n\nThe set of *rational numbers*, $\\mathbb Q$, is the set of integer fractions $\\frac{p}{q}$ in reduced form; the greatest common divisor of $p$ and $q$ is one, with $q > 0$.\n\n**Theorem** The rational numbers are countable.\n\nThe proof is, essentially, that $\\mathbb Z^+ \\times \\mathbb Z^+$ is isomorphic to $\\mathbb Z$; we count in a roughly spiral pattern centered at zero.\n\n**Proof** Define the *height* of $\\frac{a}{b}$ to be $|a| + |b|$. We may count the rational numbers in order of height, and ordering by $a$, and then $b$, when the heights are the same. The beginning of this counting is $0 / 1$, $-1 / 1$, $1 / 1$, $-2 / 1$, $-1 / 2$, $1 / 2$, $2 / 1$, $\\ldots$ Since there are at most $(2d+1)^2$ rational numbers of height less than or equal to $d$, a rational number with height $d$ is mapped on to by one of the counting numbers up to $(2d+1)^2$; every rational number is mapped onto by this counting. Thus, the rational numbers are countable. $\\square$\n\n*Note*: It is not hard to extend this proof to show that $(\\mathbb Z^+)^n$ is countable for any finite $n$.\n\n**Theorem** If there exists a surjection $f$ from a countable set $A$ to a set $B$, then $B$ is countable.\n**Proof** By definition of countable, there exists an enumeration $E$ of $A$. Now, $E\\circ f$ is an enumeration of $B$, so $B$ is countable.\n\n##Exercises\n\n>Show that the set $\\Sigma^*$ of [finite words](https://arbital.com/p/) of an enumerable [alphabet](https://arbital.com/p/) is countable.\n\n%%hidden(Show solution):\nFirst, we note that since $\\mathbb N^n$ is countable, the set of words of length $n$ for each $n\\in \\mathbb N$ is countable. \n\nLet $E_n: \\mathbb N \\to \\mathbb N^n$ stand for an enumeration of $\\mathbb N ^n$, and $(J_1,J_2)(n)$ for an enumeration of $\\mathbb N^2$.\n\nConsider the function $E: \\mathbb N \\to \\Sigma^* , n\\hookrightarrow E_{J_1(n)}(J_2(n))$ which maps every number to a word in $\\Sigma^*$. Then a little thought shows that $E$ is an enumeration of $\\Sigma^*$.\n\n$\\square$\n%%\n\n\n\n>Show that the set $P_\\omega(A)$ of finite subsets of an enumerable set $A$ is countable.\n\n%%hidden(Show solution):\nLet $E$ be an enumeration of $A$.\n\nConsider the function $E': \\mathbb N^* \\to P_\\omega(A)$ which relates a word $n_0 n_1 n_2 ... n_r$ made from natural numbers to the set $\\{a\\in A:\\exists m\\le k E(n_m)=a\\}\\subseteq A$. 
Clearly $E'$ is an enumeration of $P_\\omega(A)$.\n%%\n\n>Show that the set of [cofinite subsets](https://arbital.com/p/) of an enumerable set is countable.\n\n%%hidden(Show solution):\nSimply consider the function which relates each cofinite set with its complementary.\n%%", "date_published": "2016-10-20T20:56:47Z", "authors": ["Eric Bruylant", "Alexei Andreev"], "summaries": [], "tags": ["Concept"], "alias": "6f8"} {"id": "873834280a1d8cc7810d7dfb2ca05261", "title": "Real numbers are uncountable", "url": "https://arbital.com/p/real_numbers_uncountable", "source": "arbital", "source_type": "text", "text": "We present a variant of Cantor's diagonalization argument to prove the [real numbers](https://arbital.com/p/4bc) are [uncountable](https://arbital.com/p/2w0). This [constructively proves](https://arbital.com/p/constructive_proof) that there exist [uncountable sets](https://arbital.com/p/uncountable_set) %%note: Since the real numbers are an example of one.%%.\n\nWe use the decimal representation of the real numbers. An overline ( $\\bar{\\phantom{9}}$ ) is used to mean that the digit(s) under it are repeated forever. Note that $a.bcd\\cdots z\\overline{9} = a.bcd\\cdots (z+1)\\overline{0}$ (if $z < 9$; otherwise, we need to continue carrying the one); $\\sum_{i=k}^\\infty 10^{-k} \\cdot 9 = 1 \\cdot 10^{-k + 1} + \\sum_{i=k}^\\infty 10^{-k} \\cdot 0$. Furthermore, these are the only equivalences between decimal representations; there are no other real numbers with multiple representations, and these real numbers have only these two decimal representations.\n\n**Theorem** The real numbers are uncountable.\n\n**Proof** Suppose, for [contradiction](https://arbital.com/p/46z), that the real numbers are [countable](https://arbital.com/p/-6f8); suppose that $f: \\mathbb Z^+ \\twoheadrightarrow \\mathbb R$ is a surjection. Let $r_n$ denote the $n^\\text{th}$ decimal digit of $r$, so that the fractional part of $r$ is $r_1r_2r_3r_4r_5\\ldots$ Then define a real number $r'$ with $0 \\le r' < 1$ so that $r'_n$ is 5 if $(f(n))_n \\ne 5$, and 6 if $(f(n))_n = 5$. Then there can be no $n$ such that $r' = f(n)$ since $r'_n \\ne (f(n))_n$. Thus $f$ is not surjective, contradicting our assumption, and $\\mathbb R$ is uncountable. $\\square$\n\n\nNote that choosing 5 and 6 as our allowable digits for $r'$ side-steps the issue that $0.\\overline{9} = 1.\\overline{0}$. 
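To make the diagonal construction concrete, here is a minimal Python sketch (ours, not part of the original page; the sample digit strings are invented for illustration) that reads $r'$ off the diagonal of an attempted enumeration:

    # Pretend these strings are the opening decimal digits of f(1), f(2), f(3), f(4).
    attempted_enumeration = [
        '5000000000',
        '3141592653',
        '9999999999',
        '2718281828',
    ]

    def diagonal_digit(nth_digit_of_fn):
        # r' uses the digit 5 everywhere, except 6 wherever the diagonal digit is already 5.
        return '6' if nth_digit_of_fn == '5' else '5'

    r_prime = '0.' + ''.join(
        diagonal_digit(digits[n]) for n, digits in enumerate(attempted_enumeration)
    )
    print(r_prime)  # 0.6555, which disagrees with f(n) in the n-th decimal place for every n

Because $r'$ disagrees with $f(n)$ in the $n$-th decimal place, it cannot equal any $f(n)$, no matter how the enumeration was chosen.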
%%", "date_published": "2016-10-21T09:16:58Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["C-Class", "Proof"], "alias": "6fk"} {"id": "5707643fb85b62d0b411839754f1be30", "title": "Empty set", "url": "https://arbital.com/p/6gb", "source": "arbital", "source_type": "text", "text": "The empty set, $\\emptyset$, is the set with no elements.\nWhy, *a priori*, should this set exist at all?\nWell, if we think of sets as \"containers of elements\", the idea of an empty container is intuitive: just imagine a box with nothing in it.\n\n# Definitions\n\n## Definition in [https://arbital.com/p/ZF](https://arbital.com/p/ZF)\n\nIn the set theory [https://arbital.com/p/ZF](https://arbital.com/p/ZF), there are exactly two axioms which assert the existence of a set *ex nihilo*; all of the rest of set theory builds sets from those given sets, or else postulates the existence of more sets.\n\nThere is the [https://arbital.com/p/-axiom_of_infinity](https://arbital.com/p/-axiom_of_infinity), which asserts the existence of an infinite set, and there is the [https://arbital.com/p/-empty_set_axiom](https://arbital.com/p/-empty_set_axiom), which asserts that an empty set exists.\n\nIn fact, we can deduce the existence of an empty set even without using the empty set axiom, as long as we are allowed to use the [https://arbital.com/p/-axiom_of_comprehension](https://arbital.com/p/-axiom_of_comprehension) to select a certain specially-chosen subset of an infinite set.\nMore formally: let $X$ be an infinite set (as guaranteed by the axiom of infinity).\nThen select the subset of $X$ consisting of all those members $x$ of $X$ which have the property that $x$ contains $X$ and also does not contain $X$.\n\nThere are no such members, so we must have just constructed an empty set.\n\n## Definition by a universal property\n\nThe empty set [has a definition](https://arbital.com/p/5zr) in terms of a [https://arbital.com/p/-600](https://arbital.com/p/-600): the empty set is the unique set $X$ such that for every set $A$, there is exactly one map from $X$ to $A$.\nMore succinctly, it is the [https://arbital.com/p/-initial_object](https://arbital.com/p/-initial_object) in the [https://arbital.com/p/-category_of_sets](https://arbital.com/p/-category_of_sets) (or in the [https://arbital.com/p/-614](https://arbital.com/p/-614)).\n\n# Uniqueness of the empty set\n\nThe [https://arbital.com/p/-axiom_of_extensionality](https://arbital.com/p/-axiom_of_extensionality) states that two sets are the same if and only if they have exactly the same elements.\nIf we had two empty sets $A$ and $B$, then certainly anything in $A$ is also in $B$ ([vacuously](https://arbital.com/p/vacuous_truth)), and anything in $B$ is also in $A$, so they have the same elements.\nTherefore $A = B$, and we have shown the uniqueness of the empty set.\n\n# A common misconception: $\\emptyset$ vs $\\{ \\emptyset \\}$\n\nIt is very common for people to start out by getting confused between $\\emptyset$ and $\\{\\emptyset\\}$.\nThe first contains no elements; the second is a set containing exactly one element (namely $\\emptyset$).\nThe sets don't [biject](https://arbital.com/p/499), because they are finite and have different numbers of elements.\n\n# [Vacuous truth](https://arbital.com/p/vacuous_truth): a whistlestop tour\n\nThe idea of vacuous truth can be stated as follows:\n\n> For *any* property $P$, everything in $\\emptyset$ has the property $P$.\n\nIt's a bit unintuitive at first sight: it's true that everything in $\\emptyset$ is the Pope, for 
instance.\nWhy should this be the case?\n\nIn order for there to be a counterexample to the statement that \"everything in $\\emptyset$ is the Pope\", we would need to find an element of $\\emptyset$ which was not the Pope. %%note: Strictly, we'd only need to show that such an element must exist, without necessarily finding it.%%\nBut there aren't *any* elements of $\\emptyset$, let alone elements which fail to be the Pope.\nSo there can't be any counterexample to the statement that \"everything in $\\emptyset$ is the Pope\", so the statement is true.", "date_published": "2016-10-25T04:12:29Z", "authors": ["Eric Bruylant", "Patrick Stevens"], "summaries": ["The empty set, $\\emptyset$, is the set with no elements. For every object $x$, $x$ is not in $\\emptyset$."], "tags": [], "alias": "6gb"} {"id": "36232966658e4576f6b6d86ab637c3fa", "title": "Free group universal property", "url": "https://arbital.com/p/free_group_universal_property", "source": "arbital", "source_type": "text", "text": "The [https://arbital.com/p/-600](https://arbital.com/p/-600) of the [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) basically tells us that \"the definition of the free group doesn't depend (up to isomorphism) on the exact details of the set $X$ we picked; only on its [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5)\", which is morally a very useful thing to know.\nYou may skip down to the next subheading if you might be scared of category theory, but the property itself doesn't need category theory and is helpful.\n\nThe universal property is the technical [category-theoretic](https://arbital.com/p/4c7) fact that [the free-group functor is left adjoint to the forgetful functor](https://arbital.com/p/free_group_functor_left_adjoint_to_forgetful), and it is not so immediately useful as the other more concrete properties on this page, but it is exceedingly important in category theory as a very natural example of a [pair of adjoint functors](https://arbital.com/p/adjoint_functor) and as an example for the [https://arbital.com/p/-general_adjoint_functor_theorem](https://arbital.com/p/-general_adjoint_functor_theorem).\n\n# Statement and explanation\nThe universal property which characterises the free group is:\n\n> The free group $FX$ on the set $X$ is the group, unique up to isomorphism, such that for any group $G$ and any [function of sets](https://arbital.com/p/3jy) $f: X \\to G$ %%note:Here we're slightly abusing notation: we've written $G$ for the [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) of the group $G$ here.%%, there is a unique [https://arbital.com/p/-47t](https://arbital.com/p/-47t) $\\overline{f}: FX \\to G$ such that $\\overline{f}(\\rho_{a_1} \\rho_{a_2} \\dots \\rho_{a_n}) = f(a_1) \\cdot f(a_2) \\cdot \\dots \\cdot f(a_n)$.\n\nThis looks very opaque at first sight, but what it says is that $FX$ is the unique group such that:\n\n> Given any target group $G$, we can extend any map $f: X \\to G$ to a unique homomorphism $FX \\to G$, in the sense that whenever we're given the image of each generator (that is, member of $X$) by $f$, the laws of a group homomorphism force exactly where every other element of $FX$ must go.\nThat is, we can specify homomorphisms from $FX$ by specifying where the generators go, and moreover, *every* possible such specification does indeed correspond to a homomorphism.\n\n# Why is this a non-trivial property?\n\nConsider the cyclic group $C_3$ with three elements; say $\\{ e, a, b\\}$ with $e$ the identity and $a + a = b$, $a+b = e = 
b+a$, and $b+b = a$.\nThen this group is generated by the element $a$, because $a=a$, $a+a = b$, and $a+a+a = e$.\nLet us pick $G = (\\mathbb{Z}, +)$.\nWe'll try and define a map $f: C_3 \\to \\mathbb{Z}$ by $a \\mapsto 1$.\n\nIf $C_3$ had the universal property of the free group on $\\{ e, a, b\\}$, then we would be able to find a homomorphism $\\overline{f}: C_3 \\to \\mathbb{Z}$, such that $\\overline{f}(a) = 1$ (that is, mimicking the action of the set-function $f$).\nBut in fact, no such homomorphism can exist, because if $\\overline{f}$ were such a homomorphism, then $\\overline{f}(e) = \\overline{f}(a+a+a) = 1+1+1 = 3$ so $\\overline{f}(e) = 3$, which contradicts that [the image of the identity under a group homomorphism is the identity](https://arbital.com/p/49z).\n\nIn essence, $C_3$ \"has extra relations\" (namely that $a+a+a = e$) which the free group doesn't have, and which can thwart the attempt to define $\\overline{f}$; this is reflected in the fact that $C_3$ fails to have the universal property.\n\nA proof of the universal property may be found [elsewhere](https://arbital.com/p/free_group_satisfies_universal_property).", "date_published": "2016-10-23T16:25:51Z", "authors": ["Patrick Stevens"], "summaries": ["The [https://arbital.com/p/-5kg](https://arbital.com/p/-5kg) may be defined by a [https://arbital.com/p/-600](https://arbital.com/p/-600), allowing [https://arbital.com/p/-4c7](https://arbital.com/p/-4c7) to talk about free groups. The universal property is a helpful way to think about free groups even without any category theory: it opens the way to considering [group presentations](https://arbital.com/p/5j9), which are very interesting objects in their own right (for example, because they're easy to compute with)."], "tags": [], "alias": "6gd"} {"id": "397b5f0e8ed8e5f13a23968c4909ed45", "title": "Featured math content", "url": "https://arbital.com/p/featured_math", "source": "arbital", "source_type": "text", "text": "Here are a few of the best pages on Arbital, selected by our editors.\n\n### [https://arbital.com/p/1zq](https://arbital.com/p/1zq)\n\nA customizable multi-page exploration of one of the core concepts behind good reasoning.\n\n### [https://arbital.com/p/3nd](https://arbital.com/p/3nd)\n\nAn extended exploration of logarithms and their various uses.\n\n### [https://arbital.com/p/47d](https://arbital.com/p/47d)\n\nAn engaging explanation of derivatives, suitable for almost everyone.\n\n### [https://arbital.com/p/2w7](https://arbital.com/p/2w7)\n\nA graphical explanation uncountability, with alternate [lenses](https://arbital.com/p/17b) for people with different mathematical backgrounds.\n\n### [https://arbital.com/p/3rb](https://arbital.com/p/3rb)\n\nPosets and tightly related concepts pitched for a technical audience, with exercises and examples.\n\n### [https://arbital.com/p/5mv](https://arbital.com/p/5mv)\n\nAn explanation of Rice's Theorem and its implications, with a lens that explores the connection between the theorem and the [https://arbital.com/p/-halting_problem](https://arbital.com/p/-halting_problem).\n\n### [Category theory](https://arbital.com/explore/category_theory/)\n\nVarious pages on [Category Theory](https://arbital.com/p/4c7), exploring how concepts from different domains of mathematics share the same underlying structure.\n\n### [https://arbital.com/p/3y2](https://arbital.com/p/3y2)\n\nA handy [https://arbital.com/p/-57n](https://arbital.com/p/-57n) page between the four different things people sometimes mean when they say \"bit\".", 
"date_published": "2016-10-24T16:34:13Z", "authors": ["Eric Bruylant"], "summaries": [], "tags": ["C-Class"], "alias": "6gg"} {"id": "9f4a3eacafd0b477276d3ddce521f240", "title": "Primer on Infinite Series", "url": "https://arbital.com/p/6hk", "source": "arbital", "source_type": "text", "text": "What does it mean to add things together forever? While many get introduced to infinite series in Calculus class, it seems that everyone skips ahead to memorizing terminology without first motivating the subject, or considering that it's sorta weird. In reality, the long history of infinite series would suggest that the subject is subtle and strange. But by steaming ahead, memorizing the vocabulary, examples, and rules for convergence, learners miss a chance to really think about what these infinite sums really are or could be. They end up being able to more or less accurately parrot those rules, but if you speak with them a bit or overhear a conversation with others about the topic, you realize their understanding is a hollow shell. This article is a look at infinite series, adding things together forever, from more or less first principles.\n\nExample One: Zeno Crosses the Room\n----------------------------------\n\nOne of the origin stories for infinite series is attributed to the Greek Philosopher Zeno. It's a good place to start motivating why you end up dealing with infinite series and also a place from which you can begin to appreciate the need for a way of thinking about infinity that doesn't lead to pure nonsense. Although he had many paradoxes, they all boil down to certain weirdnesses that come up when you imagine space or time being infinitely divisible. Things that are obvious and simple from the point of view of everyday experience are contradicted by their analysis from the point of view of infinite divisibility. Consider the following scenario.\n\nYou want to cross the room. \n\nThat is you wish to cover one room's worth of distance. Easy enough right? You don't even need to stretch. But...(Zeno's voice) to cross the room, first you need to get halfway there. Before you cross the 2nd half of the room, you need to cross half of the 2nd half of the room. And so forth. At each stage you subdivide the remaining distance in half and cover that distance.\n\nZeno's problem was that this sequence of instants is infinite. If each turn took say 1 second, it would take infinity seconds to cross the room. From this perspective any motion—not just room crossing—appears impossible. Zeno's conclusion was that bringing infinity into the discussion, in this case dividing a given distance into infinitely many pieces, leads to nonsense and should be avoided.\n\n%note:For this and other similar reasons, the official use of infinity in mathematical arguments was postponed until the innovation of Calculus. Essentially, due to the simultaneous creation of mechanics and all the practical uses to which it could be put, the problems with reasoning with infinite sums were swept under the rug for a while. Calculus uses infinity all the time and doesn't apologize for it. Newton, Leibniz, etc. didn't figure out how to solve Zeno's paradox, they just ignored it. 
It wasn't until a couple centuries later that mathematicians really confronted the problem.% \n\nZeno's room crossing can be written using numbers and symbols instead of sentences, where it might look a bit more familiar as an \"infinite sum\".\n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32}+\\ldots$$\n\nAnd we can think of its sum either from the statement of the problem (where it is clear that the sum should be 1, a whole room) or by a direct calculation of the distance remaining to cover and noting where its limit is.\n\n\n\n\n\n\n\n\n\n\n\t\n\t\n\t\n\n\n\n\n\n\t\n\t\n\t\n\n\n\t\n\t\n\t\n\n\n\t\n\t\n\t\n\n\n\t\n\t\n\t\n\n\n\t\n\t\n\t\n\n\n\t\n\t\n\t\n\n\n
| Step | Distance Covered | Distance Remaining |
|------|------------------|--------------------|
| 0 | 0 | 1 |
| 1 | \\(\\frac{1}{2}\\) | \\(\\frac{1}{2}\\) |
| 2 | \\(\\frac{1}{2}+\\frac{1}{4} = \\frac{3}{4}\\) | \\(\\frac{1}{4}\\) |
| 3 | \\(\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8} = \\frac{7}{8}\\) | \\(\\frac{1}{8}\\) |
| 4 | \\(\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16} = \\frac{15}{16}\\) | \\(\\frac{1}{16}\\) |
| 5 | \\(\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32} = \\frac{31}{32}\\) | \\(\\frac{1}{32}\\) |
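Since each row of the table just halves the distance remaining, the pattern is easy to check numerically. Here is a minimal Python sketch (ours, not the author's; the variable names are invented) that reproduces the rows above and shows the remaining distance shrinking toward zero:

    covered = 0.0      # total distance crossed so far
    remaining = 1.0    # distance left to the far wall
    for step in range(1, 51):
        covered += remaining / 2    # cross half of whatever is left...
        remaining /= 2              # ...so half of that amount still remains
        if step <= 5:
            print(step, covered, remaining)   # matches the first five rows of the table
    print(remaining)   # about 8.9e-16 after 50 steps: the partial sums crowd up against 1

No finite number of steps reaches 1 exactly; the gap between these partial sums and the full infinite sum is the distinction taken up below.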
\n\nSo, at this point, we have a good reason to want to be able to write \n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32}+\\ldots = 1$$\n\nMetaphysical or existential doubts notwithstanding, it just looks like we should end up with one if this process of making up half the remaining distance were to continue. In this and many other situations, it's clear that adding together infinitely many things should work out to give something finite. The way we would read these symbols in words is \n\n>If you keep adding halves of halves ($\\frac{1}{2} + \\frac{1}{4}$ and so on), you get closer and closer to 1. If somehow you could add forever, you'd have exactly one.\n\nExact and Approximate\n---------------------\n\nAt this point we should clear up one thing about *exact* versus *approximate*. You might be thinking that, regardless of the \"infinite\" part of the sum, this is a clear case of \"good enough for all practical purposes\". At some point you get close enough to touch the wall at the end of the room. So yes, we can think of the sum as approximating one. And these infinite sums were originally very useful because they gave ways of approximating difficult to calculate things with very easy to calculate things. But there is some subtlety here in the words we use. When we say, \n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32}+\\ldots = 1$$\n\nWe do not mean\n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32}+\\ldots \\approx \\text{ (is approximately equal to) }1$$\n\nIt *would* be correct to say\n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8} \\approx 1$$\n\nor also\n\n$$\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32} \\approx 1$$\n\nbut once we have infinitely many terms, once we are asking what would happen if the process were allowed to continue indefinitely, we are no longer talking about approximately, we are talking about exactly equal. \n\nAnd this is where people, not just young students but the many mathematical thinkers through history, have had trouble. We are used to the things related by equal signs both being simple numbers. Finite things. Not infinite processes. So what does it mean for an infinite process to be *exactly equal* to a finite number? So far in this discussion we're not yet sure. But we have a clue. We have exactly one case where we have an infinite process that has an obvious limit. We'd like to equate them. \n\nNext, let's look at a case where the infinite process, instead of telling us something we already know, helps us see something new.\n\nExample 2: Archie and the Area of a Circle\n------------------------------------------\n\nYou may have been asked to remember formulas for the circumference and area of a circle in school. \n$$ \\text{Circumference }(C) = \\pi \\cdot \\text{Diameter }(D) = 2\\pi \\cdot \\text{Radius }(r)$$\n\n$$ \\text{Area of a Circle}(A) = \\pi \\cdot r^2$$\n\nBut did you ever wonder why these formulas are true or why they both happen to have $\\pi$ in them? Did you ever wonder how we figured out the area of a round shape like the circle in the first place?\n\nWell, Archimedes did. And the way he figured it out was by creating an infinite sum. \n\nThe formula for circumference is easy. It's just a definition. Ancient Greeks and many others noticed that $\\frac{C}{D}$ was the same ratio for any circle (*WHY?*). They didn't call this $\\pi$ or even consider it a number, but we do. By definition, $\\frac{C}{D} = \\pi$, so of course $C=\\pi D$. 
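That claim, that the ratio $\\frac{C}{D}$ comes out the same for every circle, can be sanity-checked numerically. The sketch below is ours, not the author's: it approximates each circle by an inscribed regular polygon, starting from a hexagon in a unit circle and using the standard side-doubling rule $s_{2n} = \\sqrt{2 - \\sqrt{4 - s_n^2}}$, then scales the result to circles of different sizes:

    import math  # only used for sqrt and to compare against math.pi at the end

    def perimeter_over_diameter(radius, doublings=12):
        # Inscribed regular hexagon in a unit circle: 6 sides, each of length 1.
        n, side = 6, 1.0
        for _ in range(doublings):
            side = math.sqrt(2 - math.sqrt(4 - side * side))  # side length after doubling
            n *= 2                                            # ...and twice as many sides
        perimeter = n * side * radius   # scaling the circle scales every side by `radius`
        return perimeter / (2 * radius)

    for r in (0.5, 1.0, 7.3):
        print(r, perimeter_over_diameter(r))   # the same number every time, roughly 3.1415926
    print(math.pi)

The radius cancels out of perimeter divided by diameter, which is the informal answer to the *WHY?* above: scaling a circle scales the inscribed polygon's perimeter and the diameter by the same factor.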
\n\nThe real mystery is with the area formula. And this is what Archimedes did. He imagined a regular polygon inscribed within a circle. Here's a picture.\n\n![](some pic)\n\nNow, in my picture, there are 8 sides. But there's nothing special about 8. It could be 6, or 12, or a million. But...the more sides there are, the more the polygon looks like a circle, the better approximation its area is for the area of the circle. And, as it turns out, the area of the polygon is easy to compute. Moreover, computing the area of the pylon, we can see how it changes with the number of sides and how it stays the same, *and* we can see what happens to this area if we somehow had infinitely many sides. Let's look:\n\nFirst, we notice that a polygon can be split into a bunch of triangles, each of which is the same. How many? The same number as we have sides of the polygon. Just like slices of pie. Add up the area of all the slices to get the area of the original polygon.\n\n$$\\text{Area of polygon with }n \\text{ sides} = \\text{sum of } n \\text{ triangles}$$\n\nOf course, each triangle has the same area, and any triangle has area $= \\frac{1}{2} \\cdot \\text{base}\\cdot \\text{height}$. Here, the base is the side of a polygon (let's call it $s$) and the height we can call $h$. Our formula for the area of a polygon becomes:\n\n$$\\text{Area of polygon with }n \\text{ sides} = \\underbrace{\\frac{1}{2}sh+\\frac{1}{2}sh+\\ldots+\\frac{1}{2}sh}_{n \\text{ times}}$$\n\nNow, factor the $\\frac{1}{2}h$ out of the above expression to get something even nicer.\n\n$$\\text{Area of polygon with }n \\text{ sides} = \\frac{1}{2}h(\\underbrace{s+s+\\ldots+s}_{n \\text{ times}})$$\n\nAnd look, $n$ sides added together is nothing other than the perimeter ($P$) of the polygon.\n\n$$\\text{Area of polygon} = \\frac{1}{2}hP$$\n\nThis formula is true regardless of how many sides we have. All we have to do now to recover the formula for the area of the circle is to imagine what happens to all of the parts when the number of sides, the extent of the approximation, the number of terms in the underlying sum, gets larger.\n\n$$ \\text{Area of polygon} \\rightarrow \\text{Area of a Circle}(A)$$\n\n$$h \\rightarrow r$$\n\n$$P \\rightarrow C$$\n\nSo our formula becomes \n\n$$ A = \\frac{1}{2}rC = \\frac{1}{2}r(\\pi D) = \\frac{1}{2}r(\\pi(2r))=\\pi r^2$$\n\nAmazing, no?\n\nThere are *tons* of other situations where you can answer a question by putting together infinitely many pieces, situations where we would like to be able to formally write down something of the sort:\n\n$$\\text{infinite sum} = \\text{finite number}$$\n\nbut there are also obvious and not so obvious situations where doing so is nonsense. We would like to go so far as to extend our rules for algebra to be able to operate on these infinite sums (things like multiplying both sides of an equation by the same number or grouping terms), but the truth is we don't have enough information to know when that does and doesn't work out. Next we will take a look at this more general question.", "date_published": "2016-11-02T21:04:22Z", "authors": ["Eric Bruylant", "Chris Holden"], "summaries": [], "tags": [], "alias": "6hk"} {"id": "58a530e7a5c27b7a3e1cfc30cef6df43", "title": "Lambda calculus", "url": "https://arbital.com/p/lambda_calculus", "source": "arbital", "source_type": "text", "text": "What's the least we can put in a programming language and still have it be able to do anything other [languages can](https://arbital.com/p/computability) %%note:i.e. 
[https://arbital.com/p/turing_complete](https://arbital.com/p/turing_complete)%%? It turns out we can get away with including only a single built in feature: [https://arbital.com/p/-3jy](https://arbital.com/p/-3jy) definition, written using $\\lambda$. The language that has only this one feature is called \"lambda calculus\" or \"$\\lambda$ calculus.\" We also need a way to apply our functions to inputs; we write $f\\ x$ to indicate the value of the function $f$ on $x$, perhaps more familiarly written $f(x)$.\n\nA $\\lambda$ expression is something of the form $\\lambda x.f(x)$, which means \"take in an input $x$, and return $f(x)$. For example, $\\lambda x.x+1$ takes in a number and returns one more than it. We only directly have functions with one input, but we can build functions with multiple inputs out of this. For example, applying $\\lambda x.\\lambda y.x+y$ to $3$, and applying the result to $4$ gives $3+4=7$ . We can think of this as a two-input function, and abbreviate it as $\\lambda xy.x+y$; \"$\\lambda xy$\" is simply a shorthand for \"$\\lambda x.\\lambda y$.\" We don't lose any power by restricting ourselves to unary functions, because we can think of functions with multiple inputs as taking only one input at a time, and returning another function at each intermediate step.\n\nWe said the only feature we have is function definition, so what's this about \"$+$\" and \"$1$\"? These symbols don't exist in $\\lambda$ calculus, so we'll have to define [numbers](https://arbital.com/p/-54y) and operations on them using $\\lambda$ %%note:The standard way of doing this is called the [https://arbital.com/p/6qb](https://arbital.com/p/6qb).%%. But $\\lambda$ expressions are useful even when we have other features as well; programming languages such as Python allow function definition using $\\lambda$. It's also useful when trying to understand what $\\lambda$ means to see examples with functions you're more familiar with.\n\nWe're not going to put constraints on what inputs $\\lambda$ expressions can take. If we allowed expressions like $\\lambda x.x+1$, it would get confused when we try to give it something other than a number; this is why we can't have $+$ and $1$. So what sorts of objects will we give our functions? The only objects have we: other functions, of course!\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n#Formal definition\n\nLet's formally define $\\lambda$ calculus. First, the language of $\\lambda$ calculus contains\n\n- Parentheses ( and )\n- Symbols $\\lambda$ and .\n- Variables $v_1,v_2,\\dots$ (in practice we'll use other letters too).\n\nWhat counts as a $\\lambda$ expression? We define them recursively:\n\n- For any variable $x$, $x$ is a $\\lambda$ expression.\n- For any variable $x$ and $\\lambda$ expression $M$, $(\\lambda x.M)$ is a $\\lambda$ expression.\n- If $M$ and $N$ are $\\lambda$ expressions, so is $(M\\ N)$.\n\nWe also want to define free and bound variables, and also do this recursively:\n\n- The variable $x$ is free in the expression $x$.\n- Every instance of $x$ is bound in the expression $(\\lambda x.M)$. If an instance of $x$ is free in $M$, we say the initial $\\lambda x$ in $(\\lambda x.M)$ *binds* it. 
We say that $M$ is the *scope* of that $\\lambda x$.\n- An instance of $x$ in $(M\\ N)$ is free if it is free in $M$ or $N$, and bound otherwise.\n\nWe can use the rules to build expressions like $((\\lambda x.(\\lambda y.(x\\ y))))\\ x)$, in which the first instance of $x$ and the only instance of $y$ are bound, and the second instance of $x$ is free. We have an oppressive amount of parentheses; we'll establish informal conventions to reduce them in the next section.\n\n#Parenthesis conventions\n\nIf we want to be extremely careful, we should write the add-two-numbers function as $(\\lambda x.(\\lambda y.(x+y)))$, using parentheses as in the formal definition. But we often don't need them, and use the following rules:\n\n- We don't need outermost parentheses. We can write $\\lambda x.(\\lambda y.(x+y))$ instead of $(\\lambda x.(\\lambda y.(x+y)))$.\n- Application is, by default, left associative. $f\\ x\\ y$ means \"the function $f$ with inputs $x$ and $y$,\" i.e. $(f\\ x)\\ y$, not $f\\ (x\\ y)$.\n- The scope of any $\\lambda$ is as large is possible. $\\lambda x.\\lambda y.x+y$ means $\\lambda x.(\\lambda y.(x+y))$, not $(\\lambda x.\\lambda y.x)+y$ or $\\lambda x.(\\lambda y.x)+y$ or anything like that.\n- We can abbreviate sequences of $\\lambda$s. As mentioned before, we can write $\\lambda xy.x+y$ for $\\lambda x.\\lambda y.x+y$.\n\nOf course, we don't have to omit parentheses whenever we can. These rules govern what it means when we don't have all the parentheses strictly necessary, but we can include them if we want to be extra clear.\n\n#Reducing lambda expressions\n\nSo far, $\\lambda$ expressions don't have any meaning; we've said what we're allowed to write down but not how the relate to each other. What we usually do with a $\\lambda$ expression is reduce it, and there are three rules that we can use: [$\\beta$-reduction](https://arbital.com/p/beta_reduction), [$\\alpha$-conversion](https://arbital.com/p/alpha_conversion), and [$\\eta$-conversion](https://arbital.com/p/eta_conversion).\n\n[$\\beta$-reduction](https://arbital.com/p/beta_reduction) is the fancy name for \"stick in the inputs to a function.\" For example, to evaluate $(\\lambda x.\\lambda y.x+y)\\ 6\\ 3$, we first notice that $6$ is the input to a $\\lambda$ expression that starts $\\lambda x$, and substitute $6$ for $x$ in the expression, giving us $(\\lambda y.6+y)\\ 3$. Now $3$ is substituted for $y$, giving us $6+3=9$.\n\nFormally, for a variable $x$ and $\\lambda$ expressions $M$ and $N$, $\\beta$-reduction converts $((\\lambda x.M)\\ N)$ to $M[https://arbital.com/p/N/x](https://arbital.com/p/N/x)$, i.e. $M$ with every free instance of $x$ replaced with $N$. The expressions $((\\lambda x.M)\\ N)$ and $M[https://arbital.com/p/N/x](https://arbital.com/p/N/x)$ are equivalent. %%todo: is there a more standard notation for this?%%\n\n[$\\alpha$-conversion](https://arbital.com/p/alpha_conversion) says we can rename variables; $\\lambda x.f\\ x$ is the same as $\\lambda y.f\\ y$. Formally, if $M$ is a $\\lambda$ expression containing $\\lambda x$, and no instance of $x$ bound by that $\\lambda x$ is within the scope of a $\\lambda y$, then we can replace every $x$ in the scope of the $\\lambda x$ with a $y$ to get an equivalent expression. 
We need the second criterion because otherwise we could replace the $x$s in $\\lambda x.\\lambda y.x$ with $y$s to get $\\lambda y.\\lambda y.y$, and then replace the outer $y$ with $x$ to get $\\lambda x.\\lambda y.x$, which is not equivalent (the second substitution here is valid; the first one is the problem). \n\nFinally, [$\\eta$-conversion](https://arbital.com/p/eta_conversion) says that if two expressions give the same result on any input, then they're equivalent. We should have $\\lambda x.f\\ x$ and $f$ be the same, since $\\beta$-reduction converts $(\\lambda x.f\\ x)\\ x$ to $f\\ x$, so they're the same on any input. Formally, if $x$ isn't free in $M$, then $(\\lambda x.(M\\ x))$ is equivalent to $M$.\n\n#[https://arbital.com/p/50p](https://arbital.com/p/50p)\n\nLet's look again at the function that adds two numbers, $\\lambda x.\\lambda y.x+y$. There are two different ways we can think of this: first, as a function that takes in two numbers, and returns their sum. If we're talking about [natural numbers](https://arbital.com/p/45h), this is a function $\\mathbb N^2\\to\\mathbb N$. \n\nWe could also think of the function as taking one input at a time. It says \"take in input $x$, return the expression $\\lambda y.x+y$.\" For example, on input $6$ we have $(\\lambda x.\\lambda y.x+y)\\ 6=\\lambda y.6+y$. Now we have a function that takes in a number, and gives us a function that takes in a number and returns a number. We can write this as $\\mathbb N\\to(\\mathbb N\\to\\mathbb N)$.\n\nThis equivalence between functions on two (or more) inputs, and functions on one input that return a new function, is known as [https://arbital.com/p/50p](https://arbital.com/p/50p) and is important in $\\lambda$ calculus, where the only objects are functions. We have to get used to the idea that a function can take other functions as arguments and give functions as return values.\n\n#Defining variables\n\n$\\lambda$ calculus doesn't have a built in way to define a [https://arbital.com/p/-variable](https://arbital.com/p/-variable). Suppose we have the function $d=\\lambda f.\\lambda x.f\\ (f\\ x)$ which applies its first input to its second input twice. Currying, $d$ takes in a function and returns that function composed with itself. Say we want to know what $d\\ d$ is. We could just write $d$ twice, to get $(\\lambda f.\\lambda x.f\\ (f\\ x))\\ (\\lambda f.\\lambda x.f\\ (f\\ x))$. $\\beta$-reduction eventually reduces this to $\\lambda f.\\lambda x.f\\ (f\\ (f\\ (f\\ x)))$, the function that applies its first input to its second input four times (as you might have expected, applying a function twice, twice, is the same as applying it four times).\n\nWe can also get $d\\ d$ while only writing out $d$ once. Take the $\\lambda$ expression $(\\lambda x.x\\ x)\\ d$ or, writing out $d$, $(\\lambda x.x\\ x)\\ (\\lambda f.\\lambda x.f\\ (f\\ x))$. A single step of $\\beta$-reduction turns this into the previous version, where we wrote $d$ out twice. The expression we started with was shorter, and expressed better the idea that we wanted to use the same value of $d$ twice. In general, if we want to say something like $x = a; f\\ x$ (i.e. define the variable $x$, and then use it in some expression), we can write $(\\lambda x.f\\ x)\\ a$, first building the expression we want to use the value of $x$ in, and then substituting in the value using function application. 
This will be very useful in the next section when we make [recursive](https://arbital.com/p/recursion) functions.\n\n#Loops\n\nIt might seem like $\\lambda$ expressions are very finite, and $\\beta$-reduction will always finish after a reasonable amount of time. But for $\\lambda$ calculus to be as powerful as normal programming languages, there must be $\\lambda$ expressions that don't reduce in a [finite amount of time](https://arbital.com/p/46h). The way to resolve this is to figure out how to build while loops in $\\lambda$ calculus; if you can make a while loop, you can write a program that runs forever pretty easily.\n\nSince all we have is functions, we'll use [https://arbital.com/p/-recursion](https://arbital.com/p/-recursion) to make loopy programs. For example, consider this factorial function in Python:\n\n factorial = lambda n: 1 if n==0 else n*factorial(n-1)\n\nHow can we write this in lambda calculus? Let's assume we have [numbers](https://arbital.com/p/54y), [https://arbital.com/p/-arithmetic](https://arbital.com/p/-arithmetic), [booleans](https://arbital.com/p/57f), %%todo: possibly change these links to church encoding pages once they exist%%\nand a function $Z$ that returns True on $0$ and False on anything else. Let's also a assume we have an \"if\" function $I$ that takes in three inputs: $I = (\\lambda p.\\lambda x.\\lambda y.$if $p$, then $x$, else $y)$. So $I\\ True\\ x\\ y=x$ and $I\\ False\\ x\\ y=y$.\n\nStarting to translate from Python to $\\lambda$ calculus, we have $F = \\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(F\\ (n-1)))$, which says \"if $n$ is $0$, then $1$, else $n\\times F(n-1)$. But we can't define $F$ like this; we have to use the method described in the previous section. In order for each recursive call of the factorial function to know how to call further recursive calls, we need to pass in $F$ to itself. Consider the expression\n\n$$F=(\\lambda x.x\\ x)\\ (\\lambda r.\\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(r\\ r\\ (n-1))))$$\n\nLet $g=\\lambda r.\\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(r\\ r\\ (n-1)))$. Then $F=(\\lambda x.x\\ x)\\ g=g\\ g$ is $g$ applied to itself. Plugging in $g$ for $r$, $g\\ g=\\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(g\\ g\\ (n-1)))$. But since $g\\ g=F$, this is just $\\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(F\\ (n-1)))$, exactly what we wanted $F$ to be.\n\nLet's generalize this. Say we want to define a recursive formula $f=\\lambda x.h\\ f\\ x$, where $h$ is any expression (in the factorial example $h=\\lambda f.\\lambda n.I\\ (Z\\ n)\\ 1\\ (n\\times(f\\ (n-1)))$). We define\n\n\\begin{align*}\n g&=\\lambda r.h\\ (r\\ r)\\newline\n &=\\lambda r.\\lambda x.h\\ (r\\ r)\\ x\n\\end{align*}\n\nand use this to build\n\n\\begin{align*}\n f&=(\\lambda x.x\\ x)\\ g\\newline\n &=g\\ g\\newline\n &=(\\lambda r.\\lambda x.h\\ (r\\ r)\\ x)\\ g\\newline\n &=\\lambda x.h\\ (g\\ g)\\ x\\newline\n &=\\lambda x.h\\ f\\ x.\n\\end{align*}\n\nWe can express this process taking any $h$ to the recursive function $f$ as a $\\lambda$ expression:\n\n\\begin{align*}\n Y&=\\lambda h.(\\lambda x.x\\ x)\\ (\\lambda r.h\\ (r\\ r))\\newline\n &=\\lambda h.(\\lambda r.h\\ (r\\ r))\\ (\\lambda r.h\\ (r\\ r)).\n\\end{align*}\n\nFor any $h$, which takes as arguments $f$ and $x$ and returns some expression using them, $Y\\ h$ recursively calls $h$. Specifically, letting $f=Y\\ h$, we have $f=\\lambda x.h\\ f\\ x$.\n\nFor fun, let's make a $\\lambda$ expression that enters an infinite loop by reducing to itself. 
If we make $h$ the function that applies $f$ to $x$, recursively calling it should accomplish this, since it's analogous to the Python function `f = lambda x: f(x)`, which just loops forever. So let $h=\\lambda f.\\lambda x.f x=\\lambda f.f$. Now \\begin{align*}\n Y\\ h&=(\\lambda x.x\\ x)\\ (\\lambda r.(\\lambda f.f)\\ (r\\ r))\\newline\n &=(\\lambda x.x\\ x)\\ (\\lambda r.r\\ r)\\newline\n &=(\\lambda r.r\\ r)\\ (\\lambda r.r\\ r).\n\\end{align*}\n\nContinued $\\beta$-reduction doesn't get us anywhere. This expression is also interesting because it's a simple [https://arbital.com/p/-322](https://arbital.com/p/-322), a program that outputs its own code, and this is a sort of template you can use to make quines in other languages.\n\nThis strategy of building recursive functions results in some very long intermediate computations. If $F$ is the factorial function, we compute $F(100)$ as $100\\times F(99)=100\\times 99\\times F(98)$, etc. By the time the recursive calls finally stop, we have the very long expression $100\\times99\\times\\dots\\times2\\times1$, which takes a lot of space to write down or store in a computer. Many programming languages will give you an error if you try to do this, complaining that you exceeded the max recursion depth.\n\nThe fix is something called [https://arbital.com/p/-tail_recursion](https://arbital.com/p/-tail_recursion), which roughly says \"return only a function evaluation, so you don't need to remember to do something with it when it finishes.\" The general way do accomplish this is to pass along the intermediate computation to the recursive calls, instead of applying it to the result of the recursive calls. We can make a tail recursive version of our Python factorial like so %%note:Python will still give you a \"maximum recursion depth exceeded\" error with this, but the strategy works in lambda calculus and lazy languages like Lisp and Haskell, and Python is easier to demonstrate with.%%:\n\n f = lambda n, k: k if n==0 else f(n-1, k*n)\n\nWe pass along $k$ as an intermediate value. It has to start with $k=1$; we can do this by defining a new function `g = lambda n: f(n,1)` or using Python's default value feature to have $k$ be $1$ be default. Now if we want $100$ factorial, we get $f(100,1)=f(99,1*100)=f(99,100)=f(98,9900)=\\dots$. We compute the products as we go instead of all at the end, and only ever have one call of $f$ active at a time.\n\nWriting a tail recursive version of factorial in $\\lambda$ calculus is left as an exercise for the reader.\n\n#[https://arbital.com/p/Y_combinator](https://arbital.com/p/Y_combinator)\n\nYou might be wondering why we called the recursive-function-maker $Y$, so let's look at that again. We said earlier that $Y\\ h=\\lambda x.h\\ (Y\\ h)\\ x$. By $\\eta$-conversion, $Y\\ h=h\\ (Y\\ h)$. That means $Y\\ h$ is a fixed point of $h$! For any function $h$, $Y$ finds a value $f$ such that $h\\ f=f$. This kind of object is called a [https://arbital.com/p/-fixed_point_combinator](https://arbital.com/p/-fixed_point_combinator); this particular one is called the [https://arbital.com/p/Y_combinator](https://arbital.com/p/Y_combinator), conventionally denoted $Y$.\n\nWhy is a fixed point finder the right way to make recursive functions? Consider again the example of factorial. If a function $f$ says \"do factorial for $k$ recursive steps,\" then $h\\ f$ says \"do factorial for $k+1$ recursive steps;\" $h$ was designed to be the function which, calling recursively, gives factorial. 
Naming the [identity function](https://arbital.com/p/identity_map) $i=\\lambda n.n$, $i$ does factorial for $0$ steps, so $h\\ i$ does it for $1$ step, $h\\ (h\\ i)$ for $2$, and so on. To completely compute factorial, we need a function $F$ that says \"do factorial for infinity recursive steps.\" But doing one more than infinity is the same as just doing infinity, so it must be true that $h\\ F=F$; i.e. that $F$ is a fixed point of $h$. There may be other fixed points of $h$, but the Y combinator finds the fixed point that corresponds to recursively calling it.\n\n## With $Y$ and without $Y$\n\nWhat do we get, and what do we lose, if we allow the use of $Y$?\n\n$Y$ takes a lambda-term which may have corresponded to a [https://arbital.com/p/-primitive_recursive](https://arbital.com/p/-primitive_recursive) function, and outputs a lambda term which need not correspond to a primitive recursive function.\nThat is because it allows us to perform an *unbounded* search: it expresses fully-general looping strategies.\n(As discussed earlier, without a fixed-point finder of some kind, we can only create lambda-terms corresponding to programs which provably halt.)\n\nOn the other side of the coin, $Y$ may produce non-[well-typed](https://arbital.com/p/well_typed) terms, while terms without a fixed-point finder are guaranteed to be well-typed.\nThat means it's hard or even impossible to perform type-checking (one of the most basic forms of static analysis) on terms which invoke $Y$.\n\nStrongly relatedly, in a rather specific sense through the [Curry-Howard correspondence](https://arbital.com/p/curry_howard), the use of $Y$ corresponds to accepting non-[constructiveness](https://arbital.com/p/constructive_proof) in one's proofs.\nWithout a fixed-point finder, we can always build a constructive proof corresponding to a given lambda-term; but with $Y$ involved, we can no longer keep things constructive.", "date_published": "2016-12-06T03:33:25Z", "authors": ["Dylan Hendrickson"], "summaries": [], "tags": ["C-Class"], "alias": "6ld"} {"id": "cdf5b6ff8e407bd79d187b494a898039", "title": "Monotone function: exercises", "url": "https://arbital.com/p/order_monotone_exercises", "source": "arbital", "source_type": "text", "text": "Try these exercises and become a *deity* of monotonicity.\n\n\nMonotone composition\n-----\n\nLet $P, Q$, and $R$ be posets and let $f : P \\to Q$ and $g : Q \\to R$ be monotone\nfunctions. Prove that their composition $g \\circ f$ is a monotone function from $P$ to $R$.\n\n\nEvil twin \n-------\n\nLet $P$ and $Q$ be posets. A function $f : P \\to Q$ is called **antitone** if it *reverses* order: that is, $f$ is antitone whenever $p_1 \\leq_P p_2$ implies $f(p_1) \\geq_Q f(p_2)$. Prove that the composition of two antitone functions is monotone.\n\nPartial monotonicity\n----------\n\nA two argument function $f : P \\times A \\to Q$ is called *partially monotone in the 1st argument* whenever $P$ and $Q$ are posets and for all $a \\in A$, $p_1 \\leq_P p_2$ implies $f(a, p_1) \\leq_Q f(a, p_2)$. Likewise a 2-argument function $f : A \\times P \\to Q$ is called *partially monotone in the second argument* whenever $P$ and $Q$ are posets and for all $a \\in A$, $p_1 \\leq_P p_2$ implies $f(p_1, a) \\leq_Q f(p_2, a)$.\n\nLet $P, Q, R$, and $S$ be posets, and let $f : P \\times Q \\to R$ be a function that is partially monotone in both of its arguments. 
Furthermore, let $g_1 : S \\to P$ and $g_2 : S \\to Q$ be monotone functions.\n\nProve that the function $h : S \\to R$ defined as $h(s) \\doteq f(g_1(s), g_2(s))$ is monotone.\n\nBrain storm\n------\n\nList all of the commonly used two argument functions you can think of that are partially monotone in both arguments. Also, list all of the commonly used two argument functions you can think of that are partially monotone in one argument and partially antitone in the other.", "date_published": "2016-12-03T22:49:22Z", "authors": ["Kevin Clancy"], "summaries": [], "tags": [], "alias": "6pv"} {"id": "77effe1f2b8d5d7b73924bed15e908e9", "title": "Complete lattice", "url": "https://arbital.com/p/math_order_complete_lattice", "source": "arbital", "source_type": "text", "text": "A **complete lattice** is a [poset](https://arbital.com/p/3rb) that is closed under arbitrary [joins and meets](https://arbital.com/p/3rc). A complete lattice, being closed under arbitrary joins and meets, is closed in particular under binary joins and meets; a complete lattice is thus a specific type of [lattice](https://arbital.com/p/46c), and hence satisfies [associativity](https://arbital.com/p/3h4), [commutativity](https://arbital.com/p/3jb), idempotence, and absorption of joins and meets.\n\nComplete lattices can be equivalently formulated as posets which are closed under arbitrary joins; it then follows that complete lattices are closed under arbitrary meets as well.\n\n%%hidden(Proof):\nSuppose that $P$ is a poset which is closed under arbitrary joins. Let $A \\subseteq P$. Let $A^L$ be the set of lower bounds of $A$, i.e. the set $\\{ p \\in P \\mid \\forall a \\in A. p \\leq a \\}$. Since $P$ is closed under joins, we have the existence of $\\bigvee A^L$ in $P$. We will now show that $\\bigvee A^L$ is the meet of $A$.\n\nFirst, we show that $\\bigvee A^L$ is a lower bound of $A$. Let $a \\in A$. By the definition of $A^L$, $a$ is an upper bound of $A^L$. Because $\\bigvee A^L$ is less than or equal to any upper bound of $A^L$, we have $\\bigvee A^L \\leq a$. $\\bigvee A^L$ is therefore a lower bound of $A$.\n\nNow we will show that $\\bigvee A^L$ is greater than or equal to any lower bound of $A$. Let $p \\in P$ be a lower bound of $A$. Then $p \\in A^L$. Because $\\bigvee A^L$ is an upper bound of $A^L$, we have $p \\leq \\bigvee A^L$.\n%%\n\nComplete lattices are bounded\n========================\n\nAs a consequence of closure under arbitrary joins, a complete attice $L$ contains both $\\bigvee \\emptyset$ and $\\bigvee L$. The former is the least element of $L$ satisfying a vacuous set of constraints; every element of $L$ satisfies a vacuous set of constraints, so this is really the minimum element of $L$. The latter is an upper bound of all elements of $L$, and so it is a maximum. A lattice with both minimum and maximum elements is called bounded, and as this discussion has shown, all complete lattices are bounded.\n\nBasic examples\n=============\n\nFinite Lattices\n------------------------\n\nThe collection of all subsets of a finite lattice coincides with its collection of finite subsets. A finite lattice, being a finite poset that is closed under finite joins, is then necessarily closed under arbitrary joins. 
All finite lattices are therefore complete lattices.\n\nPowersets\n------------------\n\nFor any set $X$, consider the poset $\\langle \\mathcal P(X), \\subseteq \\rangle$ of $X$'s powerset ordered by inclusion.\nThis poset is a complete lattice in which for all $Y \\subset \\mathcal P(X)$, $\\bigvee Y = \\bigcup Y$.\n\nTo see that $\\bigvee Y = \\bigcup Y$, first note that because union contains all of its constituent sets, for all $A \\in Y$, $A \\subseteq \\bigcup Y$. This makes $\\bigcup Y$ an upper bound of $Y$. Now suppose that $B \\in \\mathcal P(X)$ is an upper bound of $Y$; i.e., for all $A \\in Y$, $A \\subseteq B$. Let $x \\in \\bigcup Y$. Then $x \\in A$ for some $A \\in Y$. Since $A \\subseteq B$, $x \\in B$. Hence, $\\bigcup Y \\subseteq B$, and so $\\bigcup Y$ is the least upper bound of $Y$.\n\n\nThe Knaster-Tarski fixpoint theorem\n=============================\n\nSuppose that we have a poset $X$ and a [monotone function](https://arbital.com/p/5jg) $F : X \\to X$. An element $x \\in X$ is called $F$**-consistent** if $x \\leq F(x)$ and is called $F$**-closed** if $F(x) \\leq x$. A fixpoint of $F$ is then an element of $X$ which is both $F$-consistent and $F$-closed.\n\nLet $A \\subseteq X$ be the set of all fixpoints of $F$. We are often interested in the maximum and minimum elements of $A$, if indeed it has such elements. Most often it is the minimum element of $A$, denoted $\\mu F$ and called the **least fixpoint** of $F$, that holds our interest. In the deduction system example from [https://arbital.com/p/5lf](https://arbital.com/p/5lf), the least fixpoint of the deduction system $F$ is equal to the set of all judgments which can be proven without assumptions. Knowing $\\mu F$ may be first step toward testing a judgment's membership in $\\mu F$, thus determining whether or not it is provable. In less pedestrian scenarios, we may be interested in the set of all judgments which can be proven without assumption using *possibly infinite proof trees*; in these cases, it is the **greatest fixpoint** of $F$, denoted $\\nu F$, that we are interested in.\n\nNow that we've established the notions of the least and greatest fixpoints, let's try an exercise. Namely, I'd like you to think of a lattice $L$ and a monotone function $F : L \\to L$ such that neither $\\mu F$ nor $\\nu F$ exists.\n\n%%hidden(Show solution):\nLet $L = \\langle \\mathbb R, \\leq \\rangle$ and let $F$ be the identity function $F(x) = x$. $x \\leq y \\implies F(x) = x \\leq y = F(y)$, and so $F$ is monotone. The fixpoints of $F$ are all elements of $\\mathbb R$. Because $\\mathbb R$ does not have a maximum or minimum element, neither $\\mu F$ nor $\\nu F$ exist.\n%%\n\nIf that was too easy, here is a harder exercise: think of a complete lattice $L$ and monotone function $F : L \\to L$ for which neither $\\mu F$ nor $\\nu F$ exist.\n\n%%hidden(Show solution):\nThere are none. :p\n%%\n\nIn fact, every monotone function on a complete lattice has both least and greatest fixpoints. This is a consequence of the **Knaster-Tarski fixpoint theorem**.\n\n**Theorem (The Knaster-Tarski fixpoint theorem)**: Let $L$ be a complete lattice and $F : L \\to L$ a monotone function on $L$. Then $\\mu F$ exists and is equal to $\\bigwedge \\{x \\in L \\mid F(x) \\leq x\\}$. 
Dually, $\\nu F$ exists and is equal to $\\bigvee \\{x \\in L \\mid x \\leq F(x) \\}$.\n\n%%hidden(Proof):\nWe know that both $\\bigwedge \\{x \\in L \\mid F(x) \\leq x\\}$ and $\\bigvee \\{x \\in L \\mid F(x) \\leq x \\}$ exist due to the closure of complete lattices under meets and joins. We therefore only need to prove that $\\bigwedge \\{x \\in L \\mid F(x) \\leq x\\}$ is a fixpoint of $F$ that is less or equal to all other fixpoints of $F$. The rest follows from duality.\n\nLet $U = \\{x \\in L \\mid F(x) \\leq x\\}$ and $y = \\bigwedge U$. We seek to show that $F(y) = y$. Let $V$ be the set of fixpoints of $F$. Clearly, $V \\subseteq U$. Because $y \\leq u$ for all $u \\in U$, $y \\leq v$ for all $v \\in V$. In other words, $y$ is less than or equal to all fixpoints of $F$.\n\nFor $u \\in U$, $y \\leq u$, and so $F(y) \\leq F(u) \\leq u$. Since $F(y)$ is a lower bound of $U$, the definition of $y$ gives $F(y) \\leq y$. Hence, $y \\in U$. Using the monotonicity of $F$ on the inequality $F(y) \\leq y$ gives $F(F(y)) \\leq F(y)$, and so $F(y) \\in U$. By the definition of $y$, we then have $y \\leq F(y)$. Since we have established $y \\leq F(y)$ and $F(y) \\leq y$, we can conclude that $F(y) = y$.\n%%\n \n\nTODO: Prove the knaster tarski theorem and explain these images\n\nadd !'s in front of the following two lines\n[A Knaster-Tarski-style view of complete latticess](http://i.imgur.com/wKq74gC.png)\n[More Knaster-Tarski-style view of complete latticess](http://i.imgur.com/AYKyxlF.png)", "date_published": "2017-02-10T20:53:34Z", "authors": ["Kevin Clancy"], "summaries": ["A **complete lattice** is a [poset](https://arbital.com/p/3rb) that is closed under arbitrary [joins and meets](https://arbital.com/p/3rc). A complete lattice, being closed under arbitrary joins and meets, is closed in particular under binary joins and meets. A complete lattice is thus a specific type of [lattice](https://arbital.com/p/46c), and hence satisfies [associativity](https://arbital.com/p/3h4), [commutativity](https://arbital.com/p/3jb), idempotence, and absorption of joins and meets. Complete lattices can be equivalently formulated as posets which are closed under arbitrary joins; it then follows that complete lattices are closed under arbitrary meets as well.\n\nBecause complete lattices are closed under all joins, a complete lattice $L$ must contain both $\\bigvee \\emptyset$ and $\\bigvee L$ as elements. Since $\\bigvee \\emptyset$ is a lower bound of $L$ and $\\bigvee L$ is an upper bound of $L$, complete lattices are bounded."], "tags": ["Work in progress"], "alias": "76m"} {"id": "d0062c75a2a143ac53e7a6f773296924", "title": "Square root", "url": "https://arbital.com/p/square_root", "source": "arbital", "source_type": "text", "text": "##Introduction \nWhen you try to get the area of a square %%note: A shape where all sides are equal%% and know the length of a **side** %%note: Otherwise known as its **root**%% you multiply the length of one side by itself or [square](https://arbital.com/p/4ts) it.\n\n![area of square](http://2.bp.blogspot.com/-hkA5woVs-NQ/VRw85xkXlDI/AAAAAAAAAFc/Xc7utStLO7E/s1600/C%2BProgram%2BArea%2BOf%2BSquare.jpg)\n\nWhat if you don't know the length of a side and instead only know the area of a square and want to get the length of a side?\nYou would have to do something to the area that transforms it in the opposite direction that squaring the length of a side does. 
This is just like how placing 2 apples to a pile transforms a pile of apples by increasing the amount of apples and eating 2 apples transforms the pile back to the way it was. So in other words adding is to subtracting as squaring is to.... what?\n\nWell this kind of transformation is called a **Square Root**.", "date_published": "2017-09-20T02:08:58Z", "authors": ["Travis Rivera"], "summaries": [], "tags": [], "alias": "8p3"} {"id": "5a1354bc6bde7cb5b692e7cbd55611db", "title": "Rationality", "url": "https://arbital.com/p/rationality", "source": "arbital", "source_type": "text", "text": "The subject domain for [epistemic](https://arbital.com/p/) and [instrumental](https://arbital.com/p/) rationality.", "date_published": "2015-12-16T16:54:08Z", "authors": ["Alexei Andreev", "Eliezer Yudkowsky", "Aaron Gertler"], "summaries": [], "tags": ["Stub"], "alias": "9l"} {"id": "c983cf55f09680ff79dbf9b44333a851", "title": "Central examples", "url": "https://arbital.com/p/central_example", "source": "arbital", "source_type": "text", "text": "The \"central examples\" for a subject are examples that are referred to over and over again in the course of making several different points. They should usually come with their own tutorials.", "date_published": "2015-12-16T16:22:15Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "103"} {"id": "5ff2a87b2aa80f3f293a54f471c0dc3b", "title": "The empiricist-theorist false dichotomy", "url": "https://arbital.com/p/108", "source": "arbital", "source_type": "text", "text": "No discussion here yet: See https://www.facebook.com/groups/674486385982694/permalink/784664101631588/", "date_published": "2015-07-14T18:51:58Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "108"} {"id": "a22f009b65ab266e45a677a01e303b50", "title": "Intension vs. extension", "url": "https://arbital.com/p/intension_extension", "source": "arbital", "source_type": "text", "text": "To give an \"intensional definition\" is to define a word or phrase in terms of other words, as a dictionary does. To give an \"extensional definition\" is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of \"extensional definition\", which makes it an extensional example of \"intensional definition\". See http://lesswrong.com/lw/nh/extensions_and_intensions/\n\nIn the context of AI, an \"intensional concept\" is the code or statistical pattern that executes to determine whether something is a member of the concept, while the \"extension\" is the set of things that are thus determined to belong to the concept. The intensional concept \"test: does 2 evenly divide x?\" recognizes the even numbers 0, 2, 4, 6... as its extension.\n\nGiven the modern level of visual recognition technology, a neural network that tries to classify cat photos vs. noncat photos would have some cat photos in its extension, but almost certainly also many things we think are 'cat photos' that would fail to be in its extension and many non-cat-photos that did end up in the extension of that particular neural network. 
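\n\nTo pin down the distinction with a simpler example than the cat classifier, here is a minimal Python sketch (the names are purely illustrative): the function is an intension - the rule that tests membership - and the set of values it accepts is its extension.\n\n    # Intension: the code that tests membership (\"does 2 evenly divide x?\").\n    def is_even(x):\n        return x % 2 == 0\n\n    # Extension: the collection of things the test accepts\n    # (restricted to 0..9 here so the set is finite and printable).\n    extension_of_is_even = {x for x in range(10) if is_even(x)}\n    print(extension_of_is_even)  # {0, 2, 4, 6, 8}\n\n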
The intensional concept would be the classifier network itself - its weights, propagation rules, and so on.", "date_published": "2015-12-16T16:20:57Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "10b"} {"id": "40524bc03817462941e0cd89772fc7fd", "title": "Strained argument", "url": "https://arbital.com/p/strained_argument", "source": "arbital", "source_type": "text", "text": "A phenomenological feeling associated with a step of reasoning going from X to Y where it feels like somebody is stretching a hand across a cliff and not quite making it.\n\nExample:\n\nAlice: \"Hey, do you think that guy there is stealing that bike?\" \nBob: \"They could be the bike's owner.\" \nAlice: \"They're cutting the lock off with a hacksaw.\" \nBob: \"Maybe they lost the key. They'd have to get the bike back somehow, after that.\" \nAlice: \"So what do you think will happen if I go over there and ask to see their driver's license and maybe take a picture of them using my cellphone, just in case?\" \nBob: \"Maybe they'll have a phobia of being photographed and so they'll react with anger or maybe even run away.\"\n\nAt the point where Bob suggested that maybe the bicycle thief(?) would have a phobia of being photographed, you might have detected a sense of *strained argument* - even by comparison to 'maybe they lost the key', which should sound improbable or like an excuse, but not have the same sense of 'somebody trying to reach across a cliff that is actually too far to cross', or overcomplicatedness, or 'at this point you're trying too hard', as \"Maybe they have a phobia of being photographed\".", "date_published": "2015-12-16T16:17:44Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "10m"} {"id": "2f65cbe5dc348c7800df7370f85c75c5", "title": "Perfect rolling sphere", "url": "https://arbital.com/p/perfect_rolling_sphere", "source": "arbital", "source_type": "text", "text": "A \"perfect rolling sphere\" is metaphorically an object that is too simple or idealized to exist in real life. It derives from an old joke about a physicist who tries to bet on horse races, starting from the assumption that each horse is a perfect rolling sphere. Nonetheless, in a lot of cases, a perfect rolling sphere is a pretty good approximation to a roughly spherical wooden ball moving down an iron trough inside an atmosphere. More importantly, you're probably not going to be able to analyze the 'realistic' case until you understand the perfect rolling sphere.\n\nsummary(Brief): A \"perfect rolling sphere\" is metaphorically an object that is too simple or idealized to exist in real life. 
(But which might be illuminating to consider.)", "date_published": "2016-03-31T19:05:37Z", "authors": ["Eric Rogstad", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": ["Stub"], "alias": "12b"} {"id": "15d6df3be3eccd42c26a6991d77f3c68", "title": "Aaron Gertler", "url": "https://arbital.com/p/AaronGertler", "source": "arbital", "source_type": "text", "text": "summary: Nothing here yet.\n\nAutomatically generated page for \"Aaron Gertler\" user.\nIf you are the owner, click [here to edit](https://arbital.com/edit/2d8).", "date_published": "2016-03-06T00:00:00Z", "authors": ["Aaron Gertler"], "summaries": [], "tags": [], "alias": "2d8"} {"id": "f3bf1a1d5f33f311589ae45aaf15e373", "title": "Introduction to Effective Altruism", "url": "https://arbital.com/p/ea_intro", "source": "arbital", "source_type": "text", "text": "Effective altruism (EA) means using evidence and reason to take actions that help others as much as possible. \n\nThe world has a lot of terrible problems, but we can't work on them all at once. And we don't agree on what the worst problems are.\n\nDespite this, most people want to \"make the world a better place\". What should those people actually be doing? \n\nSome of the questions EA sets out to answer:\n\n - Is it better to help people in your own country, or abroad? \n - Is it better to take a high-paying job and give money to charity, or to work for a charity directly?\n - Is it better to support an intervention with a lot of strong evidence, or to support research on a promising new intervention?\n\nThe answers to these kinds of questions will be different for each person. What you should do depends on who you are: What are your skills? What are your values? How much time do you have?\n\nBut no matter who you are, effective altruism can help you figure out how to improve the world. To learn more, follow the links below.\n\n##More on EA##\n\nAre you a high school student? Are you a college student?\n\nDo you work for a nonprofit or a business with a social mission?\n\nAre you religious?\n\nWhich of the following issues most concerns you? (Politics, poverty, health, education, animals, environment, far future)\n\nAre you interested in donating? In direct work?\n\nAre you financially comfortable?\n\n##Common Questions##\n\n - What beliefs do all EAs have in common?\n - What are the different causes EAs support?\n - What are some actions EAs take to improve the world?\n - What makes EA different from other ways of helping people?\n - What are some common criticisms of EA, and how do EAs respond?\n - What is the philosophical justification for EA?\n\n\nHere are some really good articles from outside Arbital:\n\n - [What is Effective Altruism?](https://whatiseffectivealtruism.com) (The best general introduction to the topic.)\n\n\n**This page is a work in progress. 
If you have questions, or you'd like to see a topic added, please contact the author: aaronlgertler (at) gmail.com.**", "date_published": "2017-01-04T01:09:15Z", "authors": ["Eric Rogstad", "Aaron Gertler"], "summaries": [], "tags": [], "alias": "2qj"} {"id": "e666f97c3683599d774e9fc9ecae9408", "title": "Strictly factual question", "url": "https://arbital.com/p/factual_question", "source": "arbital", "source_type": "text", "text": "A *strict question of fact* has an answer determined solely by the state of the material universe, and sufficiently straightforward math, in a clearly understandable way - none of our uncertainty about this question is about definitions, values, or viewpoints; we are just wondering which quarks go where.\n\nQuestions of strict fact:\n\n- What happens if I press the giant red button?(\\*)\n- Did Sally claim more charitable donations as an income tax deduction than Bob?\n- Are there unicorns anywhere on Earth?\n- Will Peano arithmetic prove a contradiction if I search all the proofs less than a billion steps long?\n\nQuestions not yet of strict fact:\n\n- Should I press the giant red button?\n- Is Sally a more charitable person than Bob?\n- Do unicorns exist?\n- Will Zermelo-Fraenkel set theory prove a contradiction if I search for one forever?\n\nWhy the second group aren't yet questions of strict fact:\n\n- Because the notion of *should* has not yet been fully specified or determined.\n- Because we haven't said exactly what it means to be charitable.\n- Because \"exist\" is a much more ambiguous notion than \"exist inside our galaxy\". For example, there could be unicorns $10^{1,000,000}$ lightyears away, or inside some other mathematical universe that has as much \"existence\" as this one does. The notion of \"X exists\" is less firmly nailed down than \"X is inside our closet\".\n- Because the notion of 'forever' is more mathematically fraught than 'for ten to the billionth power years'. For example, nailing down exactly which infinity we're talking about can't be done in a system of first-order logic.\n\n(\\*) At least, this is a straightforward question so long as we don't poke too hard at the nature of [what-if counterfactuals](https://arbital.com/p/). In most real-life situations, the question \"What happens if I turn on this blender with a fork inside?\" is something that has a sufficiently straightforward material answer for us to say that it's just a question of material facts.", "date_published": "2016-05-25T19:12:12Z", "authors": ["Eric Rogstad", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "3t3"} {"id": "0bdc61343a3fb1bd05b7b061ba3ec158", "title": "Proving too much", "url": "https://arbital.com/p/prove_too_much", "source": "arbital", "source_type": "text", "text": "An argument \"proves too much\" when you can just as easily or naturally extend it to prove false things as true things. 
For example, the argument \"Nobody should engage in genetic engineering because Nazis\" can just as easily be extended to say \"Nobody should eat vegetables because Hitler ate vegetables\", so the general rule *proves too much.*\n\nSee also:\n\n- http://en.wikipedia.org/wiki/Proving_too_much\n- http://slatestarcodex.com/2013/04/13/proving-too-much/", "date_published": "2016-05-25T19:32:02Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "3tc"} {"id": "aefa5ee781af6e7bf8dfac1b30f492cf", "title": "Bulverism", "url": "https://arbital.com/p/bulverism", "source": "arbital", "source_type": "text", "text": "[Bulverism](https://en.wikipedia.org/wiki/Bulverism) is when you [explain what goes so horribly wrong in people's minds](https://arbital.com/p/43h) when they believe X, before you've actually explained why X is wrong. It can be seen as a subspecies of the fallacy of \"[poisoning the well](https://en.wikipedia.org/wiki/Poisoning_the_well)\". **Bulverism is forbidden on Arbital.** Every time you talk about what goes wrong in people's minds that makes them believe X, make sure you first discuss why X is, in fact, wrong, or route your readers to a page that does discuss it. Also see the entry on [https://arbital.com/p/43h](https://arbital.com/p/43h).", "date_published": "2016-06-08T18:26:32Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "43k"} {"id": "876eb0ed2ddb2c05e450462267185f2c", "title": "Emphemeral premises", "url": "https://arbital.com/p/emphemeral_premises", "source": "arbital", "source_type": "text", "text": "A purported [psychological](https://arbital.com/p/43h) fallacy wherein people confronted with an incongruent proposition X will say \"Oh, not-X because Y\", and perhaps a week later, say, \"Oh, not-X because Z.\" [Ephemeral Premises](https://www.facebook.com/yudkowsky/posts/10154268184344228) is when you make up assumptions to deflect X, don't write them down or list them anywhere, and since there's no central list, you might be making up different assumptions when talking to someone else. It's a disposable premise being used as a parry in short-term combat, not part of a long-term edifice being made with consistent, well-defended materials. Needs to be paired with understanding of the [Multiple-Stage Fallacy](https://arbital.com/p/multiple_stage_fallacy) so that you don't fall for the reverse fallacy of making people list out lots of Ys and Zs to make arbitrary propositions sound very improbable.", "date_published": "2016-06-19T21:30:37Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": ["Stub"], "alias": "4m0"} {"id": "fbdebb1408a773f1af7149da471a9024", "title": "Multiple stage fallacy", "url": "https://arbital.com/p/multiple_stage_fallacy", "source": "arbital", "source_type": "text", "text": "In August 2015, renowned statistician and predictor Nate Silver wrote \"[Trump's Six Stages of Doom](http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/)\" in which he gave Donald Trump a 2% chance of getting the Republican *nomination* (not the presidency). 
Silver reasoned that Trump would need to pass through six stages to get the nomination, \"Free-for-all\", \"Heightened scrutiny\", \"Iowa and New Hampshire\", \"Winnowing\", \"Delegate accumulation\", and \"Endgame.\" Nate Silver argued that Trump had at best a 50% chance of passing each stage, implying a final nomination probability of at most 2%.\n\nIn late March, Trump had passed the first four stages, while prediction markets gave him a 75% chance of clinching the Republican nomination. By Nate Silver's logic, Trump's probability of passing the remaining stages should have been $0.50 \\cdot 0.50 = 0.25$ *conditional* on Trump passing the first four stages.\n\nNate Silver might not have been wrong to assign Trump a low advance probability of being nominated. Many people were surprised by Trump's nomination. But it seems more likely that Silver committed an error when he said, specifically, that if we'd observed Trump to pass the first four stages, this would probably be taking place in a world where Trump had a 50% probability of passing the two remaining stages.\n\nThe purported \"Multiple-Stage Fallacy\" is when you list multiple 'stages' that need to happen on the way to some final outcome, assign probabilities to each 'stage', multiply the probabilities together, and end up with a small final answer. The alleged problem is that you can do this to almost any kind of proposition by staring at it hard enough, including things that actually happen.\n\nOn a probability-theoretic level, the three problems at work in the usual Multiple-Stage Fallacy are as follows:\n\n- You need to multiply *conditional* probabilities rather than the absolute probabilities. When you're considering a later stage, you need to assume that the world was such that every prior stage went through. Nate Silver was [probably](https://arbital.com/p/43h) trying to simulate his prior model of Trump accumulating enough delegates in March through June, not imagining his *updated* beliefs about Trump and the world after seeing Trump be victorious up to March.\n - Even if you're aware in principle that you need to use conditional probabilities - Nate Silver certainly knew about them - it's hard to update far *enough* when you imagine the pure hypothetical possibility that Trump wins stages 1-4 for some reason - compared to how much you actually update when you actually see Trump winning! (Some sort of reverse hindsight bias or something? We don't realize how much we'd need to update our current model if we were already that surprised?)\n- Often, people neglect to consider disjunctive alternatives - there may be more than one way to reach a stage, so that not *all* the listed things need to happen. Trump accumulated enough delegates in Nate's fifth stage that there was no \"Endgame\" convention fight in the supposed sixth stage.\n- People have tendencies to assign middle-tending probabilities. So if you list enough stages, you can drive the apparent probability of anything down to zero, *even if you seem to be soliciting probabilities from the reader.*\n - If you're a [motivated skeptic](https://wiki.lesswrong.com/wiki/Motivated_skepticism), you will be tempted to list more 'stages'.\n\nThe Multiple-Stage Fallacy is particularly dangerous for people who've read studies on the dangers of probabilistic overconfidence. 
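\n\nBefore going further, here is a minimal arithmetic sketch (in Python) of the multiplication at work; the ten-stage 60% figures are purely illustrative, while the other numbers are the ones quoted above:\n\n    # Six stages, each judged "at most 50% likely": the product is about 1.6%,\n    # which is roughly the quoted 2% figure.\n    print(0.5 ** 6)   # 0.015625\n\n    # Middle-tending estimates make this worse: ten stages at ~60% each\n    # already drive the product below 1%.\n    print(0.6 ** 10)  # ~0.006\n\n    # Carrying the original 50%-per-stage numbers forward after four stages\n    # have passed gives 25% for the remaining two stages, versus the ~75%\n    # the prediction markets actually assigned at that point.\n    print(0.5 * 0.5)  # 0.25\n\n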
In late March, the 75% prediction-market probabilities must have corresponded to, e.g., something like an 80% chance of getting enough delegates and a 90% chance of passing the convention conditional on getting enough delegates. Imagine how overconfident this might have sounded without the prediction market to establish a definite probability - \"Oh, don't you know that what people assign 90% confidence doesn't usually happen 90% of the time?\"\n\nInstances of the Multiple-Stage Fallacy may also sound more persuasive to readers who've read about the [Conjunction Fallacy](http://lesswrong.com/lw/ji/conjunction_fallacy/).\n\nYudkowsky [argues](https://www.facebook.com/yudkowsky/posts/10154036150109228):\n\n> If you're not willing to make \"overconfident\" probability assignments like those, then you can drive the apparent probability of anything down to zero by breaking it down into enough 'stages'. In fact, even if someone hasn't heard about overconfidence, people's probability assignments often trend toward the middle, so you can drive down their \"personally assigned\" probability of anything just by breaking it down into more stages...\n\n> From beginning to end, I've never used this style of reasoning and I don't recommend that you do so either. Since even Nate Silver couldn't get away with it, I think we just shouldn't try. It's a doomed methodology.", "date_published": "2016-06-19T21:54:17Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["The purported \"Multiple-Stage Fallacy\" is when you list multiple 'stages' that need to happen on the way to some final outcome, assign probabilities to each 'stage', multiply the probabilities together, and end up with a small final answer. The alleged problem is that you can do this to almost *any* proposition by staring at it hard enough, including propositions that turn out to be true.\n\nAn alleged example would be that in August 2015, renowned forecaster Nate Silver wrote \"[Trump's Six Stages of Doom](http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/)\" in which he gave Donald Trump a 2% chance of getting the Republican *nomination* (not the presidency), based on six stages each with at most a 50% probability of being passed. In late March, Trump had passed the first four stages Nate listed, and prediction markets gave Trump a 75% chance of clinching the Republican nomination. By Nate Silver's logic, Trump's probability of passing the remaining stages should have been $0.50 \\cdot 0.50 = 0.25.$\n\nThe three main purported problems with the Multiple-Stage Fallacy are that:\n\n- To get the final estimate, you logically need to multiply [conditional probabilities](https://arbital.com/p/1rj), not unconditional probabilities. Getting your mind to give good conditional probabilities isn't easy; Nate Silver might have failed to imagine how much he'd *actually* update after *actually* seeing Trump pass the first four stages.\n- Often, people fail to consider disjunctive alternatives, or ways that not all the stages may need passing.\n- People tend to assign middle-tending probabilities. 
So if you list enough stages, you can drive the apparent probability of anything down to zero, *even if you seem to be soliciting probabilities from the reader.*"], "tags": [], "alias": "4m2"} {"id": "bdf7a89f9d99a2d4632f3b2f5cb16fa5", "title": "Mind projection fallacy", "url": "https://arbital.com/p/mind_projection", "source": "arbital", "source_type": "text", "text": "The \"mind projection fallacy\" occurs when somebody expects an overly direct resemblance between the intuitive language of the mind, and the language of physical reality.\n\nConsider the [map and territory](https://arbital.com/p/map_territory) metaphor, in which the world is like a territory and your mental model of the world is like a map of that territory. In this metaphor, the mind projection fallacy is analogous to thinking that the territory can be folded up and put into your pocket.\n\nAs an archetypal example: Suppose you flip a coin, slap it against your wrist, and don't yet look at it. Does it make sense to say that the probability of the coin being heads is 50%? How can this be true, when the coin itself is already either definitely heads or definitely tails?\n\nOne who says \"the coin is fundamentally uncertain; it is a feature of the coin that it is always 50% likely to be heads\" commits the mind projection fallacy. Uncertainty is in the mind, not in reality. If you're ignorant about a coin, that's not a fact about the coin, it's a fact about you. It makes sense that your brain, the map, has an internal measure of how it's more or less sure of something. But that doesn't mean the coin itself has to contain a corresponding quantity of increased or decreased sureness; it is just heads or tails.\n\nThe [https://arbital.com/p/-ontology](https://arbital.com/p/-ontology) of a system is the elementary or basic components of that system. The ontology of your model of the world may include intuitive measures of uncertainty that it can use to represent the state of the coin, used as primitives like [floating-point numbers](https://arbital.com/p/float) are primitive in computers. The mind projection fallacy occurs whenever someone reasons as if the territory, the physical universe and its laws, must have the same sort of ontology as the map, our models of reality.\n\nSee also:\n\n- [https://arbital.com/p/4vr](https://arbital.com/p/4vr)\n- The LessWrong sequence on [Reductionism](https://wiki.lesswrong.com/wiki/Reductionism_%28sequence%29), especially:\n - [How an algorithm feels from the inside](http://lesswrong.com/lw/no/how_an_algorithm_feels_from_inside/)\n - [The Mind Projection Fallacy](http://lesswrong.com/lw/oi/mind_projection_fallacy/)\n - [Probability is in the mind](http://lesswrong.com/lw/oj/probability_is_in_the_mind/)", "date_published": "2016-06-30T02:51:58Z", "authors": ["Nate Soares", "Eliezer Yudkowsky"], "summaries": ["One commits the mind projection fallacy when they postulate that features of how their model of the world works are actually features of the world.\n\nSuppose you flip a coin, slap it against your wrist, and don't look at the result. Does it make sense to say that the probability of the coin being heads is 50%? How can this be true, when the coin has already landed, and is either definitely heads or definitely tails? One who says \"the coin is fundamentally uncertain; it is a feature of the coin that it is always 50% likely to be heads\" commits the mind projection fallacy. Uncertainty is in the mind, not in reality. 
It makes sense that brains have an internal measure of how uncertain they are about the world, but that uncertainty is not a fact about the coin, it's a fact about the uncertain person. The coin itself is not sure or unsure."], "tags": ["Start"], "alias": "4yk"} {"id": "dcb306d083e7364c85a62b723a7b8317", "title": "Invisible background fallacies", "url": "https://arbital.com/p/invisible_background", "source": "arbital", "source_type": "text", "text": "Suppose that you went outdoors and jumped up into the air, and then, as you were jumping, the law of gravity suddenly switched off--for whatever weird reason, masses everywhere in the Universe stopped attracting one another. Would your feet hit the ground again?\n\nAs an initial thought about this problem, you might imagine that gravity no longer pulls you down toward the Earth, and that therefore your jump would keep you continuing up into the air. If you were thinking more broadly, you might realize that the concept of 'air pressure' means that the air on Earth's surface is kept pressurized by gravity, and in the absence of gravity holding down all that air, the atmosphere might start to rush off into space as it expanded to relieve the pressure. Although (you then reason further) the speed of sound is finite, so the atmosphere might start rushing out at the edges, and would need to disappear there before the air started to expand away from the Earth's surface. And then that rushing air would only carry you further away from the ground.\n\nThis answer is incorrect. Why? Because the ground under your feet is also currently being pulled downward by gravity.\n\nIndeed, the ground under your feet is currently in a state of [equilibrium](https://arbital.com/p/) despite being pulled downward by gravity at what would otherwise be an acceleration of $9.8m/s^2.$ It follows that other pressures on the ground immediately below your feet--from the air above it, and from the rock below--must net out to an upward force that would, if not for gravity, push the ground upward at $9.8m/s^2.$ This logic applies to the dirt below your feet, the air around you, and the rocks immediately underneath the dirt under your feet. Everything around you that seemed motionless was previously in equilibrium with gravity, so as soon as gravity vanishes, it all accelerates upward in near unison.\n\nSo your feet hit the ground again, as the Earth expands beneath you. And then the Earth continues to accelerate upward at $9.8m/s^2$, pressuring your feet and accelerating you as well. So in fact, if gravity had switched off everywhere in the universe 10 seconds earlier as you read this, you probably would not have noticed anything different, yet. (Though you *would* notice shortly afterward as things began to expand far enough to change the balance of forces, and as the Sun finished exploding.)\n\nIn many situations and scenarios, there are rules or laws or considerations that properly apply to *everything*--including background objects and processes and concepts we don't usually think about--and then considering the *complete* effect gives us a very different picture from thinking only about the effect on *visible* things, with the latter being a common species of fallacy.\n\nAn essay now famous in the field of [economics](https://arbital.com/p/) is Frederic Bastiat's \"[That Which Is Seen, And That Which Is Not Seen](http://bastiat.org/en/twisatwins.html)\" which illustrated this idea and introduced the now-famous 'broken window fallacy'. 
Suppose you heave a rock through somebody's window; have you done a good deed or a bad deed? You can imagine somebody arguing that they've done a good deed, despite the initial inconvenience, because the homeowner needs to pay a glazier to fix the window, and then the glazier spends money at the baker, and so the whole economy is helped. This is a fallacy because, if the window hadn't been broken, the homeowner would have spent the money somewhere else. But the broken window and the payment to the glazier are highly visible; while the disappearing of whatever economic activity would otherwise have occurred, is not seen. If we consider the effect on the whole picture, including things that are ordinarily hard to see or fading into the background or invisible for some other reason, we arrive at a very different picture of the net effect of breaking somebody's window.", "date_published": "2017-01-05T22:49:41Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["Considering *all* the implications often gives us a very different picture from considering *some* of the implications, and the latter is a common species of fallacy.\n\nSuppose that you went outdoors and jumped up into the air, and then, as you were jumping, gravity suddenly switched off. Would your feet hit the ground again?\n\nYou might initially imagine that gravity no longer pulls you down toward the Earth, and that therefore your jump would keep you continuing up into the air. If you were thinking more broadly, you might imagine that the air on Earth's surface, previously kept pressurized by gravity, might start to rush off into space.\n\nActually, the ground under your feet is also currently being pulled downward by gravity. It follows that other pressures on the ground immediately below your feet--from the air above it, and from the rock below--must net out to an upward force that would, if *not* for gravity, push the ground upward at $9.8m/s^2.$ So if you jumped up into the air, and then gravity switched off, the ground and everything previously motionless around you would accelerate upward at $9.8m/s^2,$ causing your feet to hit the ground at the expected time. That is, if gravity switched off everywhere in the universe, you wouldn't notice anything for a couple of minutes."], "tags": [], "alias": "78n"} {"id": "fde671c3508228923d6097d6b41fec1f", "title": "Ideological Turing test", "url": "https://arbital.com/p/ideological_turing_test", "source": "arbital", "source_type": "text", "text": "The [Ideological Turing Test](https://en.wikipedia.org/wiki/Ideological_Turing_Test) is when, to try to make sure you really understand an opposing position, you explain it such that *an advocate of the position cannot tell whether the explanation was written by you or a real advocate.* This is often run online as a formal test where an audience tries to distinguish which pro-X and anti-X essays have been written by real pro-X and anti-X advocates, and which essays were actually written by advocates for the opposing side. 
If you're pro-X, and you write an intelligent essay advocating anti-X, and other people can't tell that you were really pro-X, you win the Ideological Turing Test by having demonstrated that you really and honestly understand the arguments for anti-X.", "date_published": "2017-01-10T20:55:25Z", "authors": ["Sean Cummins", "Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "7cm"} {"id": "0941f81e062ebebedcacdfafd9294335", "title": "Gotcha button", "url": "https://arbital.com/p/gotcha_button", "source": "arbital", "source_type": "text", "text": "A \"gotcha button\" is a conversational point which, when pressed, causes some substantial fraction of listeners to leap up and shout \"Gotcha! That's obviously wrong and shows you don't know what you're talking about!\", after which they have [license to dismiss](https://arbital.com/p/motivated_skepticism) the conversation.\n\nThis behavior does not *necessarily* mean the listener is wrong. If you're describing your brilliant invention to a physics-literate person, and you mention that it can accelerate forward without pushing anything else in the opposite direction, the listener will shout \"Gotcha! You can't violate Conservation of Momentum!\" They'll be right, and their implied inference about your level of physics literacy will probably also be right.\n\nNonetheless, there are a *lot* of ideas that function as Gotcha Buttons for some significant fraction of various audiences, which do *not* correspond to a claimed known falsity.\n\n- Some true/plausible claims will be seen as 'obviously false' by many listeners.\n- Some phrases or ideas will be predictably misinterpreted to a claim that actually is false.\n\nExample of the first type: If you were explaining evolutionary biology to a novice audience, you might be wise to start with some example other than how hominids in particular evolved from primate ancestors. This isn't wrong, but a fraction of the audience thinks it's obviously wrong and diagnostic of you as someone they shouldn't listen to.\n\nExample of the second type: If you were discussing [Artificial General Intelligence](https://arbital.com/p/42g), or AI with [par-human performance](https://arbital.com/p/7mt) in some domain, you might be wise to avoid the term \"human-level AI\"; because that particular phrase will cause some fraction of the audience to jump up and shout \"Gotcha! AIs won't have a humanlike balance of abilities! There's no such thing as the human level!\" (They can then dismiss the rest of your argument, without trying to carefully consider whether your further arguments are actually refuted by accepting \"AIs won't have a humanlike balance of abilities\", because you've made such an obvious and severe mistake that you just lose.)", "date_published": "2017-01-29T20:51:43Z", "authors": ["Eliezer Yudkowsky"], "summaries": [], "tags": [], "alias": "7mz"} {"id": "e3e43c0354161942da30dfe60cccfc07", "title": "Harmless supernova fallacy", "url": "https://arbital.com/p/harmless_supernova", "source": "arbital", "source_type": "text", "text": "Harmless supernova fallacies are a class of arguments, usually a subspecies of false dichotomy or continuum fallacy, which [can equally be used to argue](https://arbital.com/p/3tc) that almost any physically real phenomenon--including a supernova--is harmless / manageable / safe / unimportant.\n\n- **Bounded, therefore harmless:** \"A supernova isn't infinitely energetic--that would violate the laws of physics! 
Just wear a flame-retardant jumpsuit and you'll be fine.\" (All physical phenomena are finitely energetic; some are nonetheless energetic enough that a flame-retardant jumpsuit won't stop them. \"Infinite or harmless\" is a false dichotomy.)\n- **Continuous, therefore harmless:** \"Temperature is continuous; there's no qualitative threshold where something becomes 'super' hot. We just need better versions of our existing heat-resistant materials, which is a well-understood engineering problem.\" (Direct instance of the standard continuum fallacy: the existence of a continuum of states between two points does not mean they are not distinct. Some temperatures, though not qualitatively distinct from lower temperatures, exceed what we can handle using methods that work for lower temperatures. \"Quantity has a quality all of its own.\")\n- **Varying, therefore harmless:** \"A supernova wouldn't heat up all areas of the solar system to exactly the same temperature! It'll be hotter closer to the center, and cooler toward the outside. We just need to stay on a part of Earth that's further away from the Sun.\" (The temperatures near a supernova vary, but they are all quantitatively high enough to be far above the greatest manageable level; there is no nearby temperature low enough to form a survivable valley.)\n- **Mundane, therefore harmless** or **straw superpower**: \"Contrary to what many non-astronomers seem to believe, a supernova can't burn hot enough to sear through time itself, so we'll be fine.\" (False dichotomy: the divine ability is not required for the phenomenon to be dangerous / non-survivable.)\n- **Precedented, therefore harmless:** \"Really, we've already had supernovas around for a while: there are already devices that produce 'super' amounts of heat by fusing elements low in the periodic table, and they're called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there's no reason the same shouldn't be true of supernovas.\" (Reference class tennis / noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn't make them able to be handled by similar strategies, nor does finding a class such that it contains both supernovas and hydrogen bombs.)\n- **Undefinable, therefore harmless:** \"What is 'heat', really? Somebody from Greenland might think 295 Kelvin was 'warm', somebody from the equator might consider the same weather 'cool'. And when exactly does a 'sun shining' become a 'supernova'? This whole idea seems ill-defined.\" (Someone finding it difficult to make exacting definitions about a physical process doesn't license the conclusion that the physical process is harmless. %%note: It also happens that the distinction between the runaway process in a supernova and a sun shining is empirically sharp; but this is not why the argument is invalid--a super-hot ordinary fire can also be harmful even if we personally are having trouble defining \"super\" and \"hot\". %%)", "date_published": "2018-02-16T17:57:02Z", "authors": ["Eliezer Yudkowsky"], "summaries": ["\"Harmless supernova\" fallacies are arguments that [can equally be used to conclude](https://arbital.com/p/3tc) that any real phenomenon, including a supernova, is harmless.\n\n- **Bounded, therefore harmless:** \"A supernova isn't *infinitely* energetic--that would violate the laws of physics! 
Just wear a flame-retardant jumpsuit and you'll be fine.\"\n- **Continuous, therefore harmless:** \"Temperature is continuous; there's no qualitative threshold where something becomes 'super' hot. We just need better versions of our existing heat-resistant materials.\"\n- **Varying, therefore harmless:** \"A supernova wouldn't heat up all areas of the solar system to exactly the same temperature! It'll be hotter closer to the center, and cooler toward the outside. We just need to stay on a part of Earth that's further away from the Sun.\"\n- **Mundane, therefore harmless** aka **straw superpower**: \"Contrary to what many non-astronomers seem to believe, a supernova can't burn hot enough to sear through time itself. So we'll be fine.\"\n- **Precedented, therefore harmless:** \"Really, we've already had supernovas around for a while; there already exist devices that produce tons of energy by fusion, and they're called thermonuclear weapons. Society has proven itself able to handle those.\"\n- **Undefinable, therefore harmless:** \"What exactly is heat? Different people could call different temperatures 'too warm'. When exactly does a sun shining become so 'super' that it's a supernova?\""], "tags": [], "alias": "7nf"} {"id": "e98682be818391a4e69e157871380987", "title": "Epistemology", "url": "https://arbital.com/p/epistemology", "source": "arbital", "source_type": "text", "text": "\"Epistemology\" is the subject matter that deals with truth on a meta level: e.g., questions such as \"What is truth?\" and \"What methods of reasoning are most likely to lead to true beliefs?\" Epistemology can be considered as a child subject of [https://arbital.com/p/9l](https://arbital.com/p/9l). Many [Arbital Discussion Practices](https://arbital.com/p/9n) have their roots in a thesis about epistemology - about what forms of reasoning are most likely to lead to true beliefs - and you may have ended up looking at this subject after trying to track down how a Discussion Practice is justified.", "date_published": "2016-08-08T15:01:53Z", "authors": ["Eric Rogstad", "Eric Bruylant", "Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "9c"} {"id": "998c35b391bc849a66cc769b26791aa1", "title": "Conceivability", "url": "https://arbital.com/p/conceivability", "source": "arbital", "source_type": "text", "text": "A hypothetical scenario is 'conceivable' or 'imaginable' when it is not *immediately* incoherent, although it may still contain some subtle flaw that makes it [logically impossible](https://arbital.com/p/). If you haven't yet checked for factors, it's conceivable to you that 91 is either a [prime number](https://arbital.com/p/) or a [composite number](https://arbital.com/p/), even though only one of these scenarios is [logically possible](https://arbital.com/p/).\n\nWhether 91 is prime or composite is a logical consequence of the definitions of 'prime' and 'composite', but it's not literally and explicitly written into our brain's representations of the definitions of 'prime' and 'composite' that 91 is a prime/composite number. If you can conceive of 91 being prime, this illustrates that, in your own mind, the property of 91-ness is not the same writing on your map as the property of composite-ness, but it doesn't demonstrate that it's logically or physically possible for 91 to be prime. 
If you can *conceive* of a scenario in which X is true and Y is false, this demonstrates mainly that X and Y have different [intensional definitions](https://arbital.com/p/10b) inside your own mind; you have separate concepts in your mind for X and Y. It doesn't demonstrate that a world where 'X but not Y' is logically possible, let alone physically possible, or that X and Y refer to two different things, etcetera. \n\nLabeling a scenario 'conceivable' or saying 'we can conceive that' introduces a scenario for discussion or refutation, *without* assuming in the introduction that the scenario is logically possible (let alone physically possible, or at all plausible in our world). Since many true facts are also plausible, physically possible, logically possible, and conceivable, introducing a scenario as 'conceivable' also does not imply that it is false. It means that someone found themselves able to imagine something, perhaps a true thing, perhaps without visualizing it in the complete detail that would reveal its impossibility; but if all we know at the start is that someone felt like they could imagine something, we'll call it 'conceivable' or 'imaginable'.", "date_published": "2015-12-16T16:53:16Z", "authors": ["Eliezer Yudkowsky", "Alexei Andreev"], "summaries": [], "tags": [], "alias": "9d"} {"id": "478019c6c0782d9d8af4829e74089ad5", "title": "Diamond maximizer", "url": "https://arbital.com/p/diamond_maximizer", "source": "arbital", "source_type": "text", "text": "A difficult, far-reaching open problem in [https://arbital.com/p/2v](https://arbital.com/p/2v) is to specify an [unbounded](https://arbital.com/p/) formula for an agent that would, if run on an [unphysically large finite computer](https://arbital.com/p/), create as much diamond material as possible. The goal of 'diamonds' was chosen to make it physically crisp as to what constitutes a 'diamond'. Supposing a crisp goal plus hypercomputation avoids some problems in value alignment, while still invoking many others, making it an interesting intermediate problem.\n\n### Importance\n\nThe diamond maximizer problem is to give an [unbounded](https://arbital.com/p/) description of a computer program such that, if it were instantiated on a sufficiently powerful but [physical computer](https://arbital.com/p/), the result of running the program would be the creation of an immense amount of diamond - around as much diamond as is physically possible for an agent to create.\n\nThe fact that this problem is still extremely hard shows that the value alignment problem is not just due to the [https://arbital.com/p/5l](https://arbital.com/p/5l). 
As a thought experiment, it helps to distinguish value-complexity-laden difficulties from those that arise even for simple goals.\n\nIt also helps to [illustrate the difficulty of value alignment](https://arbital.com/p/) by making the more clearly visible point that we can't even figure out how to create lots of diamond using unlimited computing power, never mind creating [value](https://arbital.com/p/55) using [bounded computing power](https://arbital.com/p/).\n\n### Problems avoided\n\nIf we can crisply define exactly what a 'diamond' is, in theory it seems like we should be able to avoid issues of [Edge Instantiation](https://arbital.com/p/2w), [Unforeseen Maximums](https://arbital.com/p/47), and trying to convey [complex values](https://arbital.com/p/5l) into the agent.\n\nThe amount of diamond is defined as the number of carbon atoms that are covalently bonded, by electrons, to exactly four other carbon atoms. A carbon atom is any nucleus containing six protons and any number of neutrons, bound by the strong force. The utility of a universal history is the total amount of Minkowskian interval spent by all carbon atoms being bound to exactly four other carbon atoms. More precise definitions of 'bound', or the amount of measure in a quantum system that is being bound, are left to the reader - any crisp definition will do, so long as we are confident that it has no [unforeseen maximum](https://arbital.com/p/47) at things we don't intuitively see as diamonds.\n\n### Problems challenged\n\nSince this diamond maximizer would hypothetically be implemented on a very large but physical computer, it would confront [reflective stability](https://arbital.com/p/), the [anvil problem](https://arbital.com/p/), and the problems of making [subagents](https://arbital.com/p/).\n\nTo the extent the diamond maximizer might need to worry about other agents in the environment that have a good ability to model, or that it may need to cooperate with other diamond maximizers, it must resolve [Newcomblike problems](https://arbital.com/p/) using some [logical decision theory](https://arbital.com/p/). This would also require it to confront [logical uncertainty](https://arbital.com/p/) despite possessing immense amounts of computing power.\n\nTo the extent the diamond maximizer must work well in a rich real universe that might operate according to any number of possible physical laws, it faces a problem of [naturalized induction](https://arbital.com/p/) and [ontology identification](https://arbital.com/p/5c). See the article on [ontology identification](https://arbital.com/p/5c) for the case that even for the goal of 'make diamonds', the problem of [goal identification](https://arbital.com/p/) remains difficult.\n\n### Unreflective diamond maximizer\n\nAs a further-simplified but still unsolved problem, an **unreflective diamond maximizer** is a diamond maximizer implemented on a [Cartesian hypercomputer](https://arbital.com/p/) in a [causal universe](https://arbital.com/p/) that does not face any [Newcomblike problems](https://arbital.com/p/). This further avoids problems of reflectivity and logical uncertainty. In this case, it seems plausible that the primary difficulty remaining is *just* the [ontology identification problem](https://arbital.com/p/5c). 
Thus the open problem of describing an unreflective diamond maximizer is a central illustration for the difficulty of ontology identification.", "date_published": "2015-12-17T21:58:47Z", "authors": ["Eliezer Yudkowsky", "Nate Soares", "Eric Bruylant"], "summaries": [], "tags": ["AI alignment open problem", "B-Class"], "alias": "5g"} {"id": "43cddf930baa0ad2618b95a6401e1aae", "title": "Introductory guide to logarithms", "url": "https://arbital.com/p/log_guide", "source": "arbital", "source_type": "text", "text": "Welcome to the Arbital introduction to logarithms! In modern education, logarithms are often mentioned but rarely motivated. At best, students are told that logarithms are just a tool for inverting [exponentials](https://arbital.com/p/4ts). At worst, they're told a bunch of properties of the logarithm that they're expected to memorize, just because. The goal of this tutorial is to explore what logarithms are actually doing, and help you build an intuition for how they work.\n\nFor example, one motivation we will explore is this: Logarithms measure how long a number is when you write it down, for a generalized notion of \"length\" that allows fractional lengths. The number 139 is three digits long:\n\n$$\\underbrace{139}_\\text{3 digits}$$\n\nand the logarithm (base 10) of 139 is pretty close to 3. It's actually closer to 2 than it is to 3, because 139 is closer to the largest 2-digit number than it is to the largest 3-digit number. Specifically, $\\log_{10}(139) \\approx 2.14$. We can interpret this as saying \"139 is three digits long in [https://arbital.com/p/-4sl](https://arbital.com/p/-4sl), but it's not really using its third digit to the fullest extent.\"\n\nYou might be thinking \"Wait, what do you mean it's closer to 2 digits than it is to 3? It plainly takes three digits: '1', '3', and '9'. What does it mean to say that 139 is 'almost' a 2-digit number?\"\n\nYou might also be wondering what it means to say that a number is \"two and a half digits long,\" and you might be surprised that it is 316 (rather than 500) that is most naturally seen as 2.5 digits long. Why? What does that mean?\n\nThese questions and others will be answered throughout the tutorial, as we explore what logarithms actually do.\n\n%box:\nThis path contains 9 pages:\n\n1. [What is a logarithm?](https://arbital.com/p/40j)\n2. [Log as generalized length](https://arbital.com/p/416)\n3. [Exchange rates between digits](https://arbital.com/p/427)\n4. [Fractional digits](https://arbital.com/p/44l)\n5. [Log as the change in the cost of communicating](https://arbital.com/p/45q)\n6. [The characteristic of the logarithm](https://arbital.com/p/47k)\n7. [The log lattice](https://arbital.com/p/4gp)\n8. [Life in logspace](https://arbital.com/p/4h0)\n9. [The End ](https://arbital.com/p/4h2)\n%", "date_published": "2016-10-21T18:09:55Z", "authors": ["Alexei Andreev", "Eric Rogstad", "Eric Bruylant", "Daniel Satanove", "Nate Soares"], "summaries": [], "tags": ["Guide", "B-Class"], "alias": "3wj"} {"id": "5ca8fe084354a4efdd05e75baff2410d", "title": "Universal property of the disjoint union", "url": "https://arbital.com/p/disjoint_union_universal_property", "source": "arbital", "source_type": "text", "text": "summary(Technical): The [https://arbital.com/p/-5z9](https://arbital.com/p/-5z9) may be described as the [https://arbital.com/p/-coproduct](https://arbital.com/p/-coproduct) in the [category](https://arbital.com/p/4c7) of [sets](https://arbital.com/p/3jz). 
Specifically, the disjoint union of sets $A$ and $B$ is a set denoted $A \\sqcup B$ together with maps $i_A : A \\to A \\sqcup B$ and $i_B: B \\to A \\sqcup B$, such that for every set $X$ and every pair of maps $f_A: A \\to X$ and $f_B: B \\to X$, there is a unique map $\\gamma: A \\sqcup B \\to X$ such that $\\gamma \\circ i_A = f_A$ and $\\gamma \\circ i_B = f_B$.\n\nWhere could we start if we were looking for a nice \"easy\" [https://arbital.com/p/-600](https://arbital.com/p/-600) describing the [union of sets](https://arbital.com/p/5s8)?\n\nThe first thing to notice is that universal properties only identify objects up to [https://arbital.com/p/-4f4](https://arbital.com/p/-4f4), but there's a sense in which the union is *not* [https://arbital.com/p/-5ss](https://arbital.com/p/-5ss) [https://arbital.com/p/-65y](https://arbital.com/p/-65y): it is possible to find sets $A$ and $B$, and sets $A'$ and $B'$, with $A$ isomorphic to $A'$ and $B$ isomorphic to $B'$, but where $A \\cup B$ is not isomorphic to $A' \\cup B'$. %%note: In this case, when we're talking about [sets](https://arbital.com/p/3jz), an isomorphism is just a [bijection](https://arbital.com/p/499).%%\n\n%%hidden(The union is not well-defined up to isomorphism in this sense):\nConsider $A = \\{ 1 \\}$ and $B = \\{ 1 \\}$.\nThen the union $A \\cup B$ is equal to $\\{1\\}$.\n\nOn the other hand, consider $X = \\{1\\}$ and $Y = \\{2\\}$.\nThen the union $X \\cup Y = \\{1,2\\}$.\n\nSo by replacing $A$ with the isomorphic $X$, and $B$ with the isomorphic $Y$, we have obtained $\\{1,2\\}$ which is *not* isomorphic to $\\{1\\} = A \\cup B$.\n%%\n\nSo it's not obvious whether the union could even in principle be defined by a universal property.%%note: Actually [it *is* possible](https://arbital.com/p/union_universal_property).%%\n\nBut we can make our job easier by taking the next best thing: the *[disjoint union](https://arbital.com/p/5z9)* $A \\sqcup B$, which is well-defined up to isomorphism in the above sense: the definition of $A \\sqcup B$ is constructed so that even if $A$ and $B$ overlap, the intersection still doesn't get treated any differently.\n\nBefore reading this, you should make sure you grasp the [https://arbital.com/p/-5z9](https://arbital.com/p/-5z9) well enough to know the difference between the two sets $\\{ 2, 3, 5 \\} \\cup \\{ 2, 6, 7 \\}$ and $\\{ 2, 3, 5 \\} \\sqcup \\{ 2, 6, 7 \\}$.\n\n%%hidden(Brief recap of the definition):\nGiven sets $A$ and $B$, we define their disjoint union to be $A \\sqcup B = A' \\cup B'$, where $A' = \\{ (a, 1) : a \\in A \\}$ and $B' = \\{ (b, 2) : b \\in B \\}$.\nThat is, \"tag each element of $A$ with the label $1$, and tag each element of $B$ with the label $2$; then take the union of the tagged sets\".\n%%\n\nThe important fact about the disjoint union here is that if $A \\cong A'$ and $B \\cong B'$, then $A \\sqcup B \\cong A' \\sqcup B'$.\nThis is the fact that makes the disjoint union a fairly accessible idea to bring under the umbrella of category theory, and it also means we are justified in using a convention that simplifies a lot of the notation: we will assume from now on that $A$ and $B$ are disjoint.\n(Since we only care about them up to isomorphism, this is fine to do: we can replace $A$ and $B$ with some disjoint pair of sets of the same size if necessary, to ensure that they *are* disjoint.)\n\n[https://arbital.com/p/toc:](https://arbital.com/p/toc:)\n\n# Universal property of disjoint union\n\nHow can we look at the disjoint union solely in terms of the maps to 
and from it?\nFirst of all, how can we look at the disjoint union *at all* in terms of those maps?\n\nThere are certainly two maps $A \\to A \\sqcup B$ and $B \\to A \\sqcup B$: namely, the [inclusions](https://arbital.com/p/inclusion_map) $i_A : a \\mapsto a$ and $i_B : b \\mapsto b$.\nBut that's not enough to pin down the disjoint union precisely, because those maps exist not just to $A \\sqcup B$ but also to any superset of $A \\sqcup B$. %%note: If this is not obvious, stop and think for a couple of minutes about the case $A = \\{ 1 \\}$, $B = \\{ 2 \\}$, $A \\sqcup B = \\{1,2\\}$, and the superset $\\{1,2,3\\}$ of $A \\sqcup B$.%%\nSo we need to find some way to limit ourselves to $A \\sqcup B$.\n\nThe key limiting fact about the disjoint union is that any map *from* the disjoint union can be expressed in terms of two other maps (one from $A$ and one from $B$), and vice versa.\nThe following discussion shows us how.\n\n- If we know $f: A \\sqcup B \\to X$, then we know the [restriction maps](https://arbital.com/p/restriction_map) $f \\big|_A : A \\to X$ %%note: This is defined by $f \\big|_A (a) = f(a)$; that is, it is the function we obtain by \"restricting\" $f$ so that its [domain](https://arbital.com/p/3js) is $A$ rather than the larger $A \\sqcup B$.%% and $f \\big|_B : B \\to X$.\n- If we know two maps $\\alpha: A \\to X$ and $\\beta: B \\to X$, then we can *uniquely* construct a map $f: A \\sqcup B \\to X$ which is \"$\\alpha$ and $\\beta$ glued together\". Formally, the map is defined as $f(x) = \\alpha(x)$ if $x \\in A$, and $f(x) = \\beta(x)$ if $x \\in B$.\n\nThe reason this pins down the disjoint union exactly (up to isomorphism) is because the disjoint union is the *only* set which leaves us with no choice about how we construct $f$ from $\\alpha$ and $\\beta$:\n\n- We rule out *strict subsets* of $A \\sqcup B$ because we use the fact that every element of $A$ is in $A \\sqcup B$ and every element of $B$ is in $A \\sqcup B$, to ensure that the restriction maps make sense.\n- We rule out *strict supersets* of $A \\sqcup B$ because we use the fact that any element in $A \\sqcup B$ is either in $A$ or it is in $B$ %%note: If we replaced $A \\sqcup B$ by any strict superset, this stops being true: by passing to a superset, we introduce an element which is neither in $A$ nor in $B$.%%, to ensure that $f$ is defined everywhere on its [domain](https://arbital.com/p/3js). 
## The property

One who is well-versed in category theory and universal properties would be able to take the above discussion and condense it into the following statement, which is the **universal property of the disjoint union**:

> Given sets $A$ and $B$, we define the *disjoint union* to be the following collection of three objects: $$\text{A set labelled }\ A \sqcup B \\ i_A : A \to A \sqcup B \\ i_B : B \to A \sqcup B$$ with the requirement that for every set $X$ and every pair of maps $f_A: A \to X$ and $f_B: B \to X$, there is a *unique* map $f: A \sqcup B \to X$ such that $f \circ i_A = f_A$ and $f \circ i_B = f_B$.

![Disjoint union universal property](https://www.patrickstevens.co.uk/images/Coproduct.png)

## Aside: relation to the product

Recall the [https://arbital.com/p/-5zv](https://arbital.com/p/-5zv):

> Given objects $A$ and $B$, we define the *product* to be the following collection of three objects, if it exists: $$A \times B \\ \pi_A: A \times B \to A \\ \pi_B : A \times B \to B$$ with the requirement that for every object $X$ and every pair of maps $f_A: X \to A, f_B: X \to B$, there is a *unique* map $f: X \to A \times B$ such that $\pi_A \circ f = f_A$ and $\pi_B \circ f = f_B$.

Notice that this is just the same as the universal property of the disjoint union, but we have reversed the [domain](https://arbital.com/p/3js) and [codomain](https://arbital.com/p/3lg) of every function, and we have correspondingly reversed every function composition.
This shows us that the product and the disjoint union are, in some sense, "two sides of the same coin".
If you remember the [https://arbital.com/p/-5zr](https://arbital.com/p/-5zr), we saw that the empty set's universal property also had a "flip side", and this flipped property characterises the one-element sets.
So in the same sense, the empty set and the one-element sets are "two sides of the same coin".
This is an instance of the concept of [duality](https://arbital.com/p/duality_category_theory), and it turns up all over the place.
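To see the arrow-reversal concretely, the following Haskell fragment (again an added illustration, with made-up names `fork` and `merge`) places the two mediating maps side by side: the product takes a pair of maps *into* $A$ and $B$, the disjoint union takes a pair of maps *out of* them:

```haskell
-- Product: a pair of maps into A and B from X corresponds to one map into
-- the pair type (a, b); the projections fst and snd recover the pieces.
fork :: (x -> a) -> (x -> b) -> (x -> (a, b))
fork f g x = (f x, g x)

-- Coproduct: a pair of maps out of A and B into X corresponds to one map out
-- of Either a b; pre-composing with Left and Right recovers the pieces.
merge :: (a -> x) -> (b -> x) -> (Either a b -> x)
merge = either

main :: IO ()
main = do
  print (fork (+ 1) (* 2) 10)           -- (11,20): one map into the product
  print (merge negate (* 2) (Left 10))  -- -10: one map out of the coproduct
  print (merge negate (* 2) (Right 10)) -- 20
```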
# Examples

## {1} ⊔ {2}

Let's think about the very first example that came from the top of the page: the disjoint union of $A=\{1\}$ and $B=\{2\}$.
(This is, of course, $\{1,2\}$, but we'll try to tie the *universal property* point of view in with the *elements* point of view.)

The definition becomes:

> The *disjoint union* of $\{1\}$ and $\{2\}$ is the following collection of three objects: $$\text{A set labelled } \{1\} \sqcup \{2\} \\ i_A : \{1\} \to \{1\} \sqcup \{2\} \\ i_B : \{2\} \to \{1\} \sqcup \{2\}$$ with the requirement that for every set $X$ and every pair of maps $f_A: \{1\} \to X$ and $f_B: \{2\} \to X$, there is a *unique* map $f: \{1\} \sqcup \{2\} \to X$ such that $f \circ i_A = f_A$ and $f \circ i_B = f_B$.

This is still long and complicated, but remember from [when we discussed the product](https://arbital.com/p/5zv) that a map $f_A: \{1\} \to X$ is precisely picking out a single element of $X$: namely $f_A(1)$.
To every element of $X$, there is exactly one such map; and for every map $\{1\} \to X$, there is exactly one element of $X$ it picks out.
(Of course, the same reasoning goes through with $B$ as well: $f_B$ is just picking out an element of $X$, too.)

Also, $i_A$ and $i_B$ are just picking out single elements of $\{1\} \sqcup \{2\}$, though we are currently pretending that we don't know what $\{1\} \sqcup \{2\}$ is yet.

So we can rewrite this talk of *maps* in terms of *elements*, to make it easier for us to understand this definition in the specific case that $A$ and $B$ have just one element each:

> The *disjoint union* of $\{1\}$ and $\{2\}$ is the following collection of three objects: $$\text{A set labelled }\ \{1\} \sqcup \{2\} \\ \text{An element } i_A \text{ of } \{1\} \sqcup \{2\} \\ \text{An element } i_B \text{ of } \{1\} \sqcup \{2\}$$ with the requirement that for every set $X$ and every pair of elements $x \in X$ and $y \in X$, there is a *unique* map $f: \{1\} \sqcup \{2\} \to X$ such that $f(i_A) = x$ and $f(i_B) = y$.

From this, we can see that $i_A$ and $i_B$ can't be the same element, because $f$ must be able to act differently on them (take $X = \{x,y\}$ with $x \not = y$, for instance).
But also, if there were any $z$ in $\{1\} \sqcup \{2\}$ *other* than $i_A$ and $i_B$, then for any attempt at our unique $f: \{1\} \sqcup \{2\} \to X$, we would be able to make a *different* $f$ by just changing where $z$ is sent.

So $\{1\} \sqcup \{2\}$ is precisely the set $\{i_A, i_B\}$; and this has determined it up to isomorphism as "the set with two elements".

### Aside: [https://arbital.com/p/-4w5](https://arbital.com/p/-4w5)

Notice that the disjoint union of "the set with one element" and "the set with one element" yields "the set with two elements".
It's actually true for general finite sets that the disjoint union of "the set with $m$ elements" and "the set with $n$ elements" yields "the set with $m+n$ elements"; a few minutes' thought should be enough to convince you of this, but it is an excellent exercise to derive it from the category-theoretic *universal properties* viewpoint as well as from the (much easier) *elements* viewpoint.
We will touch on this connection between "disjoint union" and "addition" a bit more later.
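As a sanity check of the counting claim (though not a substitute for the suggested exercise), here is a tiny Haskell snippet reusing the list model of the disjoint union from earlier; `disjointUnion` is again just an illustrative name:

```haskell
-- Tagging never collapses elements, so the disjoint union of an m-element
-- collection and an n-element one has m + n elements.  (This is one reason
-- Either is often called a "sum type".)
disjointUnion :: [a] -> [b] -> [Either a b]
disjointUnion as bs = map Left as ++ map Right bs

main :: IO ()
main = do
  let a = "xyz"          -- 3 elements
      b = [True, False]  -- 2 elements
  print (length (disjointUnion a b))  -- 5
  print (length a + length b)         -- 5
```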
## The empty set

What is the disjoint union of the [https://arbital.com/p/-5zc](https://arbital.com/p/-5zc) with itself?
(Of course, it's just the empty set, but let's pretend we don't know that.)
The advantage of this example is that we already know a lot about the [https://arbital.com/p/-5zr](https://arbital.com/p/-5zr), so it's a good testing ground to see if we can do this without thinking about *elements* at all.

Recall that the empty set is defined as follows:

> The empty set $\emptyset$ is the unique set such that for every set $X$, there is precisely one function $\emptyset \to X$.

The disjoint union of $\emptyset$ with itself would be defined as follows (where I've stuck to using $A$ and $B$ in certain places, because otherwise the whole thing just fills up with the $\emptyset$ symbol):

> The following collection of three objects: $$\text{A set labelled }\ A \sqcup B \\ i_A : \emptyset \to A \sqcup B \\ i_B : \emptyset \to A \sqcup B$$ with the requirement that for every set $X$ and every pair of maps $f_A: \emptyset \to X$ and $f_B: \emptyset \to X$, there is a *unique* map $f: A \sqcup B \to X$ such that $f \circ i_A = f_A$ and $f \circ i_B = f_B$.

Now, since we know that for every set $X$ there is a *unique* map $!_X: \emptyset \to X$ %%note: In general, the exclamation mark $!$ is used for a uniquely-defined map.%%, we can replace our talk of $f_A$ and $f_B$ by just the same $!_X$:

> The following collection of three objects: $$\text{A set labelled }\ A \sqcup B \\ i_A : \emptyset \to A \sqcup B \\ i_B : \emptyset \to A \sqcup B$$ with the requirement that for every set $X$, there is a *unique* map $f: A \sqcup B \to X$ such that $f \circ i_A = (!_{X})$ and $f \circ i_B = (!_{X})$.

Moreover, we know that $i_A$ and $i_B$ are also uniquely defined, because there is only one map $\emptyset \to A \sqcup B$ (no matter what $A \sqcup B$ might end up being); so we can remove them from the definition, because they're forced to exist anyway.
They're both just $!_{A \sqcup B}$.

> The set $$A \sqcup B$$ with the requirement that for every set $X$, there is a *unique* map $f: A \sqcup B \to X$ such that $f \circ (!_{A \sqcup B}) = (!_X)$.

A diagram is in order.

![Universal property of the disjoint union of the empty set with itself](http://i.imgur.com/B72uW9o.png)

The universal property is precisely saying that for every set $X$, there is a unique map $f: A \sqcup B \to X$ such that $f \circ (!_{A \sqcup B})$ is the map $!_X : \emptyset \to X$.

But since $!_X$ is the *only* map from $\emptyset$ to $X$, we have $f \circ (!_{A \sqcup B}) = (!_X)$ *whatever $f$ is*.
Indeed, $f \circ (!_{A \sqcup B})$ is a map from $\emptyset$ to $X$, and $!_X$ is the only such map.
So we can drop that condition from the requirements, because it automatically holds!

Therefore the disjoint union is:

> The set $$A \sqcup B$$ with the requirement that for every set $X$, there is a *unique* map $f: A \sqcup B \to X$.

Do you recognise that definition?
It's just the universal property of the empty set!

So $A \sqcup B$ is the empty set in this example.

# More general concept

We saw earlier that the disjoint union is characterised by the "reverse" (or "[dual](https://arbital.com/p/duality_category_theory)") of the universal property of the product.
In a more general [category](https://arbital.com/p/4c7), something which satisfies this property is called a *[https://arbital.com/p/-coproduct](https://arbital.com/p/-coproduct)*. %%note: Prepending "co" to a concept's name means "reverse the maps".%%

In general, the coproduct of objects $A$ and $B$ is written not as $A \sqcup B$ (which is specific to the category of [sets](https://arbital.com/p/3jz)), but as $A + B$.
Among other things, the coproduct unifies the ideas of addition of [integers](https://arbital.com/p/48l), [least upper bounds](https://arbital.com/p/least_upper_bound) in [posets](https://arbital.com/p/3rb), disjoint union of sets, [https://arbital.com/p/-65x](https://arbital.com/p/-65x) of [naturals](https://arbital.com/p/45h), [https://arbital.com/p/-3w3](https://arbital.com/p/-3w3), and [https://arbital.com/p/-free_product](https://arbital.com/p/-free_product) of [groups](https://arbital.com/p/3gd); all of these are coproducts in their respective categories.
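As one further illustration (not from the original article): in the category of Haskell types and functions, `Either` is the coproduct and the empty type `Void` from `Data.Void` plays the part of the empty set, with `absurd` as the unique map out of it. Mirroring the empty-set example above, the coproduct of `Void` with itself is again `Void`:

```haskell
import Data.Void (Void, absurd)

-- The unique glued map out of the coproduct of Void with itself.
to :: Either Void Void -> Void
to = either absurd absurd

-- The unique map out of Void into the coproduct.
from :: Void -> Either Void Void
from = absurd

-- `to` and `from` are mutually inverse (vacuously, since neither type has any
-- values), so Either Void Void is isomorphic to Void.
main :: IO ()
main = putStrLn "Either Void Void has no values, just like Void itself."
```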