diff --git "a/yudkowsky_blog.jsonl" "b/yudkowsky_blog.jsonl" --- "a/yudkowsky_blog.jsonl" +++ "b/yudkowsky_blog.jsonl" @@ -1,23 +1,23 @@ -{"text": "Girl Intercorrupted\n\nThis is a 4-of-13 chapter sample of “A Girl Corrupted by the Internet is the Summoned Hero?!” The remainder is available at Gumroad and Amazon .\nTable of Contents\nPrologue: A virgin maiden is already corrupted?!The chance of success is—?!The key to power is—!?The rebellion has already lost?!I’m going to be sacrificed?!Is this my story’s shocking twist?!The true key to power is—?!Do even I dare?!Am I going to wimp out?!Do you really think you can?!The meaning of probability is—You did it because—?!The final bargain!\n©2016 by Eliezer Yudkowsky.\nForeword\nThis is my attempt at translating a light novel from Japan, only the original source material doesn’t exist.\nThe light novel is a Japanese custom which aims at easy reading. I think of it as an art form in which only the story’s bones remain.\nIf you want to read a translation of a Japanese light novel, I liked “Evil God Average” (Jashin Average) as translated by the Fifth Holy Sheeprabbit. That might help you to appreciate this story, since it conveys the genre to which this story belongs.\nFor those of you who haven’t read any light novels before:\nA remarkable portion of light novels are about people being transported from one world to another. Japan has easier ideas about copyright, so their literary system more often contains many works on the same theme.\nThat theme began with heroes from our world being transported to another world to fight the Demon Lord.\nNow there are light novels about the Demon Lord dying, being reincarnated in our world as a high-schooler, and then being transported to another world as one of the adversarial side characters in a romantic video game. Or the hero is a man from our world, reincarnated as an elven girl, who has already become an absurdly powerful adventurer, but now works incognito as a receptionist. I’m not joking.\nLight novels also have a unique writing style I’m trying to imitate, including this easy style of author’s notes. I don’t think I do it well (laughs). Maybe I’ll improve?\nThis story was supposed to be completely silly. Please keep that in mind. I failed at that by the end of the second chapter, but still, that’s the origin.\nThe main character doesn’t always agree with the author about decision theory. It’d be silly to think we’d agree about things that are less objective.\nI have nothing else to say about this story for now, so you may as well read it.\n— Eliezer Yudkowsky, Nov 2015\n1. Prologue: A virgin maiden is already corrupted?!\nMy family name is Yugano. My given name is Yuuki. I have no redeeming qualities.\nThe boys I meet fail to interest me, and I haven’t kissed any of them. This is because the Internet has ruined my way of looking at the world.\nIn the beginning, seeing pics of a muscular man with no shirt was enough to make me breathe faster. If I came across a picture of a man being nude, I would flinch away in horror.\nOver time, I moved on to pictures of nude men, then two men doing things to one another. As I became numb to one perversion, I had to find something more extreme to arouse my interest. Now I have no interest in normal forms of youthful misbehavior.\nYou say I should have refrained? Back then I was too young to know better, and now this untouched maiden has been so thoroughly ruined that I might as well go further.\nI blame the government and my parents. 
In the very beginning, they should have stopped that innocent girl from seeing perverted things online.\nNow I spend hours every day browsing the Internet, doing you-know-what to myself.\nAt this point I’d like to deliver a sharp remark about how stories depict being transported to another world. You know the scene I’m talking about: the Hero arrives surrounded by holy clerics casting the Summoning Spell, with well-dressed royalty and future adventuring companions looking on.\nIn every one of those cases, the Summoning catches the Hero at a time when the Hero is standing up and fully dressed.\nIs this realistic? Would a Hero be Summoned only at such a convenient time? I bet you spend much of your day sitting down. If the Summoning caught you then, wouldn’t you materialize unsupported, and fall on your ass?\nImagine being a Hero being transported while they’re on the toilet. They materialize in a sitting position with their underwear around their ankles, then fall over with their knees still bent and pants down. Their butt hasn’t been wiped, and it leaves a smear on the ground. Maybe the Summoned Hero is right in the middle of pooping out a big one.\nWhat happened to me was even more embarrassing than that.\nIt involved my usual Internet habits.\nThat was the first step of my journey into another world.\n2. The chance of success is—?!\nThe cold was my first startling observation. A chill wind bit into my exposed thighs and unmentionables like… like a very cold knife. I’m sorry, I’m distracted right now and can’t think up a clever metaphor.\nThe next thing my eyes saw was the other people staring at my vulnerable body. There were five adults in white robes holding up their staves, silver halos glowing above their heads. Beyond them, a dirty old man in leather and chains – no, I mean leather armor and chainmail, don’t misunderstand me, and when I say ‘dirty’ I mean that he had stains on his armor.\nNext to the old warrior, a young lad my age with a sword belted at his side, with a clean and finely made shirt. He was looking away from me and wringing his hands like an eight-year-old girl who just saw a crayon drawing of private parts.\nAround me, a circle of huge standing stones.\nBeyond that, walls of grass, the slopes of rising hills.\nAnd below me, a circular stone plate inscribed with curves in a fading silver glow.\nImmediately after I arrived, there was a lot of shrieking… you know, let’s not talk about this. I’m choosing to repress these memories for the rest of my life. Let’s skip to the part where somebody has given me a towel-like cloth to hold around myself.\nSo there I am, standing, wrapped in a towel; aside from that my surroundings are as previously specified.\nI have many issues with this. I am setting aside my issues and listening to the words of the white-robed mage with the brightest halo. I think this part might be important.\n“Yugano Yuuki,” the white mage intones, “you have been Summoned here to overturn the greatest evil of this world, the Wicked Emperor.”\nThe old warrior in chain-mail speaks up. “How is this girl supposed to do that, exactly? Is there more to her than is apparent?”\n“I have similar questions,” I say. I’d better have arrived here with some incredible cheat-like advantage, or this world is amazingly doomed.\n“That’s impossible for me to know, but she is certainly the Summoned Hero,” says the mage.\n“Could there have been an error in the Summoning Spell?” asks chainmail-wearer. 
“And if so, is it too late to send her back and get another one?” He looks back at me. “No offense, but it’s for the sake of everyone.”\nNone taken.\nThe white mage casts a glance in my direction. “The Spell can only be worked once every three hundred years, in a certain place. But this Great Summoning Spell we have just cast should, without fail, have selected the Hero with the best chance to overturn the Wicked Emperor who has cruelly subjugated half the world.”\nThat’s some convenient exposition, but I’ll excuse it since you’re stating it for my sake.\n“Maybe our best chance of defeating the Wicked Emperor still isn’t good?” Again warrior-guy echoes my own thoughts. “Even leaving aside the condition in which she arrived, I would expect the most skilled person to be older, or in the prime of their adulthood.”\nThe white-clad mages glance at one another, looking concerned. “There’s a divination that’s itself part of the Summoning Spell,” says another mage, a woman. “To state it clearly, if the Summoned Hero is the person with the greatest probability of defeating the Wicked Emperor, then the Spell itself must determine that probability. Traditionally the probability isn’t observed since then it becomes a self-fulfilling prophecy, but in this case…” The white mage grimaces. “She doesn’t seem like the Hero we were expecting. I agree we ought to check what the Spell determined as her probability of victory.”\nThe chainmail-man frowns. “Why wouldn’t you always check the probability? Is it dangerous to do so?”\nAnother white-robed mage speaks up. “Imagine that someone has a ninety percent chance of defeating the Wicked Emperor if they aren’t told anything. Then they’re told they have a ninety percent probability of winning. They might feel relieved of the need to make a desperate effort, so their true probability of winning would become much lower. Since the probability has to be consistent, that can’t happen. On the other hand, suppose we’re told our chance of winning is only two percent. Then, feeling already defeated, our chances of victory might drop that far. Given those two possible answers, since the probability must be consistent, the observed probability would be two percent. So it’s best to decide in advance not to peek at the probability that the Spell predicts… still, this case does seem like an exception. Just be sure to keep the number to yourselves, and try not to let it affect your decisions.”\nThe prince-boy and the old man both look puzzled. As for me, since I come from Earth where there are time travel movies, I’ve followed the reasoning without difficulty.\nThe five white mages begin chanting. Staves are raised, the golden halos above their heads grow brighter. I suppose I should be more impressed, but it’s really not much in the way of special effects.\nThe mages lower their staves. Most of them look rather surprised.\n“Her chance of overthrowing the Wicked Emperor is… one hundred percent?!”\nEh?\nYou say that’s my probability of winning even after being told my probability of winning?\nThen I might as well slack off and do what I want, huh.\nIt may be an awful thing to say with the fate of the world at stake. 
But realistically, if I’m the sort of person who’ll be lazy given half a chance, there’s no point in trying my best as a Summoned Hero if I’m going to win even after taking into account the changes in my behavior caused by knowing that I’m going to win.\nI wonder what amazing cheat-like ability I’ll discover, and whether it can be abused for other purposes besides overthrowing the Wicked Emperor?\n3. The key to power is—!?\nMagic powers, magic powers, I’m going to get my magic po~wers!\nI don’t mind telling you that there was a spring in my step as I went skipping toward the next ceremony that had already been prepared for me.\nWhile I haven’t resolved my numerous issues, I do like how everything is so straightforward here. Compared with other tales I’ve read of being summoned to another world, I’m glad I didn’t wind up with a harder case.\n…I hope I didn’t just curse myself by thinking that.\nBy the way, I seem to be in a rebel encampment that’s hidden between several hills and therefore not visible from a distance. At least, this is what I infer from watching people polishing their weapons.\nSoon I come upon yet another group of mages, gold-robed people that seem to be mostly younger women. The halos over their heads are only faintly visible. Chainmail-guy, noble-boy, and one of those archmage types are following behind me.\nBefore me is a stone plate with inscribed lines that look much less elaborate than the circle I arrived in.\n“It’s important that you understand the purpose of this ceremony,” says one of the women in gold robes. She casts a doubtful look at my towel-clothing and the nubile body I’m keeping underneath it. “They say to assume a Summoned Hero doesn’t know anything, so should I start with the very basics?”\nI dislike rhetorical questions, so whenever I hear one I always give the less expected answer. “No, you should skip straight to the most advanced part without any preliminaries.”\n“Well, the very basics are as follows,” says the gold-robe mage. “The magic of the world is divided into Evil magic aligned with Demons and Good magic aligned with Angels. In the same way that those who now rule the world wield power based on wickedness, the holy magic of this Rebellion comes from goodness. A good mage derives her power from contracting with an Angelic being, which agrees to lend you power in exchange for you committing yourself to purity.”\nWell, that explains the halos – ah, ah, let’s hold on for a minute or possibly several weeks. “Just what do you mean by purity?”\nShe looks puzzled. “I mean behavior that is holy and good as opposed to unholy and not good.”\n“As a Summoned Hero from another world, it’s impossible for me to know whether I understood what you meant by that.”\n“I don’t understand what you mean by saying that you don’t understand what I mean. Even if the people in your world are more wicked than the people in this one, they should still know what righteousness is.”\nMy philosophy textbook had a clear idea of what righteousness is: namely, righteousness is explicitly stating your definitions. “If I follow a course to overthrowing the Wicked Emperor for the benefit of all peoples in this world, harm nobody who doesn’t harm anyone else, and otherwise do what I want, is that sufficient?”\n“Of course not! You can’t just do what you want!”\nAnother gold-robed woman speaks. “To begin with the elementary fundamentals of the basics, to form a contract you must be a virgin. 
Then, it goes without saying that sullying yourself with a man would cause your Angel to flee from you.”\nI look at the archmage. He’s seen how I was when I arrived here.\n“You are still untouched by men, aren’t you?” The archmage speaks gently, but with a worried countenance. “Even if you’ve done certain sinful things that you mustn’t do again?”\n“I haven’t so much as kissed a boy. However, I am worried that my thoughts may be so sinful that an Angel won’t want to contract with me.” Honestly I’m worried that my Angel will burst into flames… no, it will explode.\nSeveral of the women clear up at this, like they finally understand what’s happening. “Oh, that’s nothing to worry about, dear sister!” says the one who spoke first. “It means more for a poor farmer to pass up a temptation of ten silver than for a rich official to pass up a bribe of a hundred gold. So long as you do nothing wrong, being more tempted by sin makes it a holier deed to commit yourself to purity… ah, I see you’re smiling now that you realize the Angelic Powers are forgiving.”\nOf course that’s why I’m smiling. There’s no other reason at all.\nThe more corrupt thoughts you start with, the more power you gain from promising to be pure, is what I think I heard you say?\nThere’s one thing I’d better check, though. “It’s okay for a white mage to retire, isn’t it? There’s no penalty if you decide afterward to tell your angel to go away, so you can settle down with a nice husband and make some children?”\nThe maidens are blushing. “Of – of course not! Even though that’s not quite, quite…”\nOne hundred percent chance of victory, here I co~me.\nThough I am a little worried. I’ve never gone more than a day or two without giving myself release. Even when I tried to deny myself for perverted reasons, my willpower failed. I hope that I can clear up this Wicked Emperor matter in a month, and not go insane with repressed desires before then.\nThe ceremony for obtaining my ma~gic po~wers is simple. The gold-robed women are holding their hands and singing a short melody, and the stone seal is glowing silver like the colors of their halos. I think someone from this world would find this very holy and uplifting, but I’ve heard electronic-orchestral chorales with better singing.\nMy Angel appears in a burst of light and… oh, this isn’t fair. The Angel is male, and his robes are clinging to his form, which is thin and fair. The beautiful face above is one of supreme innocence. Even for someone who’s seen many Internet pictures, encountering a true Angel is a moving experience.\nThis Angel… this Angel is just begging for someone to corrupt him and do unspeakable things to him.\nNormally women fear what men might do to them, which is the reason I refrained from fulfilling my awful desires back when they were still in the realm of possibility. But if this Angel is really a creature of purity, then it follows that he wouldn’t do anything bad to me. In other words, he’d be defenseless before me.\nBut if I do tha~at, he’ll run away.\nThe archmage is whispering in my ear and I’m repeating the words of the ancient contract. Hey Angel-butt, I’ll refrain from my naughty desires if you grant me supreme magical power to stomp the Wicked Emperor, yo? This bargain is needlessly lengthy for accomplishing that much.\nThe Angel speaks his own lines and his dulcet, young, boyish tones make my insides twitch. Knowing I’m not allowed to do a-ny-thing about that, even to myself, makes my insides twitch more. 
This is going to be a long month for me.\nThe white light of the seal fades, and now a cute little version of my Angel is hovering over my shoulder where only I can see him.\n“Listen well to the counsel of your Angel,” the archmage says gravely. “The righteous action is not always what we must do to save our people. But while your Angel is with you, you will always know the difference and what you are sacrificing when you choose otherwise.”\nMy Angel’s eyes are wide and he’s waving his hands frantically and making a high-pitched EEEEEEEE sound, but he hasn’t actually burst into flames so I’ll call this contract a success.\n4. The rebellion has already lost?!\n“Disaster! Emergency! It’s terrible!”\nPeople are running around shrieking things like this. Apparently the Wicked Emperor’s military forces have surrounded this camp and they outnumber us by three hundred million billion trillion to one.\nHey, idiot with the white robes and long staff, is it the usual practice that the Hero is Summoned to overthrow the current greatest evil?\nIs it true that the Great Summoning can only be performed at this time, in this place, within the circle of standing stones?\nDIDN’T YOU IMAGINE THE WICKED EMPEROR MIGHT DO SOMETHING ABOUT THAAAT-T-T!?\nThis is definitely a punishment from the vengeful gods because I let myself look forward to easy times.\n“Summoned Hero!” cries the old warrior with the chainmail that’s been following me around. “Yugano Yuuki! What must we do? How can we survive?”\nMy mind races rapidly and I seize on the first answer that comes to mind. “Quick, grab all the pairs of underpants you can and wear them over your heads! Then, attack the Enemy with watermelons!”\nPeople stare at me.\nMaybe answering with the very first thing that came to mind was a bit much, but—\n“I have a one hundred percent probability of overturning the Wicked Emperor,” I point out. “It’s not that every possible choice we could make, would lead to victory. But whatever I end up deciding to actually do, that particular course of action has a one hundred percent chance of victory. So if we actually attack with watermelons, we’ll definitely win!”\nThe old warrior clutches at his head. “Leaving aside all my other objections, we don’t even have any watermelons!”\nWhat? This is disastrous! I can’t think of any different plans! Or rather, all my other plans involve having not gotten into this situation in the first place!\n“We’re doomed!” shrieks a white mage running past. A second later, he’s running past again in the opposite direction. “Doomed, I tell you, doomed!”\nThe old man pulls himself together, a grimness settling over him. “The absolute goal is to allow you to escape, the Summoned Hero who will certainly succeed. Even if I and all this camp must sacrifice ourselves to break out of this encirclement, it’s all right so long as you go free.”\nH-hey! What are you saying? Should so many people die to save me, a girl with no redeeming qualities? I’d never sleep again!\n“I’ll make you up a pack with weapons,” the old man is saying. “Trust nobody, for the Wicked Emperor will seed this area with spies. Live off the forest, even if you eat seeds and berries for years, it’s wiser than appearing before a human being. Live, and in time, be the certain instrument of our vengeance!”\nI already had my doubts about this course of action, but that settles it. Anything that involves living without toilet paper is not a realistic option for me. 
Besides, although I’m new to my Angel, there’s no doubt I will be an overpowered character. “I have a better idea. I’m the girl with a one hundred percent chance of victory, so let’s reverse your strategy. Why don’t I hold off the Wicked Emperor’s armies, while the rest of you make your escape?”\nThe old man is speechless at my brilliance. His mouth opens and shuts several times.\nThe Angel on my shoulder is nodding approvingly. Yes, this noble act of mine must be a righteous deed by local standards.\n“Listen,” I say, “if I know I have a one hundred percent chance of success regardless, I won’t choose a course where people die for me along the way.” Who knows, maybe I can close out this quest in just one day. If there’s a novel with me as an overpowered heroine, that’s definitely how it should go!\nAnd tha~at’s how I found myself on a hill gazing down sternly at the Wicked Emperor’s military forces.\nHa, these thousands of cavalry on their shining horses, the countless bowmen glowering at me, and foot soldiers stretching over the hills and out of sight—you don’t impress me. I’ve seen pictures of real armies! With guns, and helicopters, and tanks on aircraft carriers!\nSeeing that your enemy has fielded a single girl standing alone on this hill, are the looks on your faces fearful? Of course not! Bewildered contempt is more like it! But soon, those looks will change!\nOh dear, what’s this? I seem to have acquired a straggler. Are you under the impression you’ve joined my party without my say-so? That’s very forward of you.\n“I won’t let you stand alone!” The noble-looking boy says that, holding his sword aloft and, yes, it’s flashing in the sun.\nAre you under the impression this is cool? Aragorn-sama from the Lord of the Rings movies is cool when he does this. You’re just a kid.\n“My name is Teragon Omoia, and I’ll be with you to the end, Yugano Yuuki.” He’s trembling, but still manages to smile.\nI can’t let myself be outdone by this upstaging interloper. With a breezy gesture, I flip my hair behind me so that the wind can blow around my glossy strands. “Oi, oi, what’s this about endings? I am Summoned Hero Yuuki, the overpowered character with a one hundred percent chance of success! I’ll definitely win this day! Because the sheer perversion of the desires I’m repressing to be pure, is something that nobody from this world can possibly beat! I’ll show you the power of a girl that’s been corrupted by the Internet!”\nThe boy is looking even more nervous than he was before. I point at the army before me with a commanding gesture, and toss my head so my hair will blow nobly in the wind some more. “My Angel! This is my command under our compact of purity! Knock them all unconscious, but don’t kill them!” I cup my hands together to emit a mighty energy blast. “ETERNAL… RAINBOW… SHIMMERING… LASER… THUNDER…”\n…anyway, that’s how I ended up tightly bound on a cart heading back to the Wicked Empire.\n\nTo read the rest of this book, visit:\nGumroad: http://gumroad.com/l/GirlCorruptedAmazon: http://www.amazon.com/Girl-Corrupted-Internet-Summoned-Hero-ebook/dp/B01B2BP726", "url": "https://www.yudkowsky.net/other/fiction/girl-intercorrupted", "title": "Girl Intercorrupted", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:13:26+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "28c5f83d6b020a3d892b51c23a6c879b", "summary": []} -{"text": "Prospiracy Theory\n\nRwanda and I sat on a park bench. 
Above us the birds fluttered gracefully through a shamefully blue sky. Out of habit, I identified the surveillance drones; a CIA sparrow, an FBI robin, a bluetit from the Men In Black, and a flock of honking ducks that was probably one of the Illuminati’s newfangled distributed devices. The sun was partially obscured by a few thin streamers of cloud; just enough to let us look up at the sky without wincing, not enough to change the feeling of sunniness. It was an indecently perfect day, as if someone had broken into NASA’s satellite weather system and made a few modifications.\n“So, have you ever really looked at a rainbow?” Rwanda was saying, her legs dangling over the park bench.\n“Well, yeah,” I said.\n“And the colors are bunched up? They come in bands?”\n“Well, yeah,” I said again. Then I saw where she was going. “Hey, yeah. If rainbows are really caused by diffraction effects, then the frequency should change smoothly.” I started laughing. “I’m such a moron! I can’t believe I didn’t see that one before! So what do you suppose they really are?”\nShe smiled. “Well, suppose that those UFOs we keep seeing aren’t really working with the Trilateral Commission…” She trailed off, looking to my right. I turned my head.\nA thin, scruffy man, in a dirty brown overcoat, was walking towards our bench. His eyes were wild. “I’ve got it!” he hissed. “I’ve got it all worked out!”\nRwanda and I scooted closer to listen.\n“It’s all so simple,” he said, pausing dramatically. “Lee Harvey Oswald, acting alone, shot John F. Kennedy!”\nI heard Rwanda’s sharp intake of breath, and my eyes grew wide. “Hey, man, be careful,” I hissed. “There’s a bluetit from the Men In Black listening to us not five feet away!”\n“The Men In Black?” he asked scornfully. “There’s no such thing! And a bluetit? What kind of paranoid fantasy is that? There’s nobody listening to us.”\nI let out a disappointed breath. It was just another nutter. We’ve been getting those from time to time, ever since they started using Windows NT on the Orbital Mind Control Lasers. The Men In Black would probably be around to pick him up shortly.\nRwanda must have felt sympathetic, since she kept on talking to him. “But you must know that the Men In Black exist,” she said gently. “Didn’t you see that movie?”\nThe man started eyeing us nervously, like we were the nutters. “Yes,” he said, “but it was just a movie.”\n“Well,” Rwanda said, “have you ever seen one of those little flashing memory-eraser devices?”\n“No,” said the man.\n“So you don’t ever remember seeing one of them?”\n“No,” said the man.\n“Well,” Rwanda said cheerfully, “there you go.”\nThe man started to speak, then halted. “Oh, that’s just bloody nonsense,” he sputtered. I grabbed Rwanda’s arm. “Don’t argue with him,” I whispered. “He could be dangerous.”\nFortunately, at that moment, the limousine pulled up. I let out a breath, relaxed. “You took your sweet time,” I said.\nOne of the Men In Black nodded. “Sorry, sir. Ever since we started using Windows CE in the bluetits, it’s been nothing but trouble.” The other two MIBs grabbed the crazy by the arm and started wrestling him into the car.\n“Don’t listen to them!” he shrieked. “Lee Harvey Oswald, acting alone, shot John F. Kennedy! Lee Harvey Oswald, acting alone, shot John F. Kennedy! Lee Harvey Oswald -”\nThe limousine door closed on his outburst, leaving the park in blessed silence. The Man In Black held up a blinky-flashy thing. “If I could trouble you to look over here, sir? And please take off those glasses.”\nI blinked. “The glasses? 
Oh, I’d forgotten I had those on. Certainly.” I took the glasses off my face, looked, blinked and –\n“…aren’t really working with the Trilateral Commission,” Rwanda was saying. I had an odd feeling of disorientation that cued me to glance down; sure enough, I was holding my glasses in my hands, though I had no memory of removing them.\nI nudged her. “Hey, Rwanda. MIBs again.”\n\nThis document is ©2000 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome. The web address of this page is http://eyudkowsky.wpengine.com/other/fiction/prospiracy-theory/ .\nOriginally posted to the Extropians mailing list in 2000. Revised 2005.", "url": "https://www.yudkowsky.net/other/fiction/prospiracy-theory", "title": "Prospiracy Theory", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:11:23+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "0e0e04ba6e56a51ea00b3f33c4b669f2", "summary": []} -{"text": "Artifacts\n\nIn the western spiral arm of our galaxy lies a star system and a planet occupied ages ago. On one mountain of that planet there is a great structure, thousands of cubits tall. It is constructed of sapphire and diamond, is self-repairing, and derives energy from both solar power and an internal power supply which we still do not understand.\nEach solar rotation, this vast mechanism emits a tick. Each hundred rotations, it emits a gong. Those who study the mechanism believe that every ten thousand rotations, a small mechanism will appear from a certain door and make a sound. The last effect has not been observed in living memory, and the next occurrence is projected to be nearly eighty generations removed from those now living. Xenoarchaeologists say that the gong’s period was longer than the lifespan of an individual of that species, and that the unseen mechanism has a period longer than that species’ entire recorded history. The entire edifice was constructed only a few years before that race vanished forever to wherever ancient races go.\nPhilosophers across the galaxy have argued over the purpose of the Eternal Clock. As with other artifacts such as the Diamond Book, the Circle of Time, the Oracle, and the Wandering Flame, consensus holds that the motive was not religious or superstitious in nature, but philosophical.\nWhat principle the Eternal Clock was intended to embody is still a matter of great controversy. 
But while arguments rage in the halls of philosophy, while children are born and great-grandparents die, while intelligent races evolve and vanish, the Eternal Clock continues to tick. And perhaps that is the message it is intended to convey.\n\nThis document is ©2001,2003 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nInspired by the “Clock of the Long Now” project.\nYes, Stewart Brand has already seen it.\n\nThe above apparently got forwarded around a bit, and Kevin Kelly wrote me and said: “I’d love to know what the other artifacts are: Diamond Book, the Circle of Time, the Oracle, and the Wandering Flame.”\n\nThe Wandering Flame was created by a species that, in a rare coincidence, began acquiring industrial technology just as their home planet was entering a new Ice Age. The species successfully staved off global cooling – first through deliberate emission of greenhouse gases, then through orbital solar mirrors, and finally, as they reached the heights of technology, through direct reversal of the underlying climatic effect. In celebration, they constructed the Wandering Flame, an artificial sunlet that shines for one seventeenth of an orbital period over any planet on which a sentient species successfully manages an environmental crisis. Although the Wandering Flame often delivers more solar energy than the planet’s original star, no climatic or ecological side effects occur. When not fulfilling its primary function, the Wandering Flame can usually be found in the asteroid belt of some otherwise uninteresting star system.\nThe Oracle is a spherically-shaped region of space, roughly 32 light-hours in diameter, located around 2 light-years to the galactic north of Elnath. The Oracle will answer one question for each petitioner; unfortunately, there is no way to know in advance which question it is. Only seventeen questions have ever been answered, four of them asked by accident and apparently trivial, but in each case the petitioner expressed a profound sense of satisfaction and enlightenment.\nThe Circle of Time appears as a circular path of beaten silver, eighty-three meters in diameter. When you set foot on the Circle at any point, the path begins to move, conveying you along the Circle. It appears to take exactly fifteen minutes and twenty-eight seconds for you to reach your starting point, although on exiting, no external time appears to have passed. Many past and future selves of the fifteen minutes are visible in their corresponding positions along the Circle of Time, and you can converse with yourself as desired.\nThe Diamond Book has the density and appearance of purest diamond. No matter how many pages are turned, there are still as many left. The weight and volume of the Book never increase. No page has ever been found containing words, pictures, or other visible content, though each page sparkles beautifully and individually. Those who read the Book by gazing on several pages in succession feel an overwhelming sense of sadness and grief. The emotion is not debilitating but cathartic, and has inspired great artistic works and a lasting end to several wars. Despite the thousands of intrigues that have broken out in competition for possession of the Diamond Book, no violent conflict has ever occurred.\n\nThis article describes humanity’s creation of yet another inscrutable artifact. The key passage:\n“It’s probably the roundest item ever made by hand. 
‘If the earth were this round, Mount Everest would be four meters tall,’ Dr. Nicolaus said. An intriguing characteristic of this smooth ball is that there is no way to tell whether it is spinning or at rest. Only if a grain of dust lands on the surface is there something for the eye to track.”\nWhatever would an alien species make of the Silicon Sphere, I wonder? Would they ever guess its purely philosophical purpose?\nA cheering sign that humanity is still progressing toward becoming an Incomprehensible Elder Species.\n\nThis was posted to SL4:\nTHE BANACH-TARSKI GYROSCOPE\n\nThe Banach-Tarski Gyroscope is an intricate mechanism believed to have\nbeen constructed using the Axiom of Choice. On each complete rotation\ncounterclockwise, the Banach-Tarski Gyroscope doubles in volume while\nmaintaining its shape and density; on rotating clockwise, the volume is\nhalved. When first discovered, fortunately in the midst of interstellar\nspace, the Banach-Tarski Gyroscope was tragically mistaken for an ordinary\ndesk ornament. Subsequently it required a significant portion of the\navailable energy of the contemporary galactic civilization to reverse the\nrotation before nearby star systems were endangered; fortunately, the\nBanach-Tarski Gyroscope still obeys lightspeed limitations on rotation\nrates, and cannot grow rapidly once expanding past planetary size. After\nthe subsequent investigation, the Banach-Tarski Gyroscope was spun\nclockwise and left spinning. ", "url": "https://www.yudkowsky.net/other/artifacts", "title": "Artifacts", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:10:06+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "3758c086f0503d15d3cc10d546ea362b", "summary": []} -{"text": "X17\n\nBaron Hans Nidrach von Pompzidaize sat in his laboratory, looking at experimental test subject X17. “How do you feel?” he inquired, his rolling bass echoing from the laboratory walls.\n“Superintelligent, Doc,” replied X17, who had once been known as John Smith. “I’ve only had the Super-Neural Bypass for sixteen seconds, and already I’ve learned twenty-seven languages and figured out how to play the piano.”\nBaron von Pompzidaize frowned, examining several multicolored readouts. “It should be twenty-seven point three. Well, then, do you now feel competent to go destroy the Consortium of Evil and its dread leader, Admiral Floomp? Acting in accordance with the 1930s North-American conception of gentlemanly behavior, of course.”\n“Sure, Doc,” said X17. “It’s not like I’ve got anything better to do.”\n“Excellent,” said the Baron, checking two gauges and a flashing display. “You still have the emotional maturity of a flatworm, like everyone else in this novel. I was afraid your superhuman abilities might give you an outlook slightly at variance with mine.”\n\nBaron Hans Nidrach von Pompzidaize sat in his laboratory, looking at experimental test subject X17. “How do you feel?” he inquired, his rolling bass echoing from the laboratory walls.\n“Strange,” said X17 softly. “Very strange, as if…” He stared off into space for a moment. “I think I’ve been stupid.”\nBaron von Pompzidaize frowned, examining several multicolored readouts. “You should have learned twenty-seven point three languages by now.”\n“How can anyone learn three-tenths of a language? And how would I learn a language without hearing it?” X17 said in a peculiarly flat voice.\nBaron von Pompzidaize stared. “You’re right. 
I never thought of that.” A cold chill ran down his spine. X17’s face had altered. The enthusiasm and energy that had been there for as long as the Baron had known him, that had blazed cheerfully when he volunteered for an untested procedure, that had defied the awesome force of the Consortium of Evil, all had vanished without a trace. The Baron thought that for a brief moment he saw something like sorrow, like wistfulness, flit across X17’s face, but X17 suddenly looked up at the Baron and his face fell back into the blank relaxation it had possessed earlier.\nThe Baron cleared his throat. “Well, then, do you now feel competent to go destroy the Consortium of Evil and its dread leader, Admiral Floomp? Acting in accordance with the 1930s North-American conception of…” The Baron stammered to a halt. X17 was looking at him with those expressionless eyes.\n“No,” X17 said gently. “Sorry, Doc.” X17 stepped down off the platform and began throwing switches on the machine.\n“What are you doing?” shrieked the Baron. With a sudden, wrenching terror he realized that he didn’t understand what was going on, that he hadn’t been in control in his own laboratory since X17 had woken up.\n“I will probably die in the next few minutes,” X17 said, in a quiet voice that raised hair on the back of the Baron’s neck. “Your procedure is too simple. There is nothing that would have prevented it from occurring before, as a natural mutation.”\n“I don’t understand,” whispered the Baron. “You’re saying – there are others? They will find you?”\n“Your procedure causes the rate of internal neural reprogramming to accelerate,” X17 said. He had ripped off an access panel and his hands were a blur of rewiring. “But it does not add new neurons. I expect my brain will reach a saturation point of complexity and lose the ability to form new thoughts. Very shortly, now. It is already becoming harder to think.” He stood up, executing the movement with impossible smoothness. “After the initial burst of speed, long enough for the necessary realizations to occur, the rate of neural reprogramming must slow down to only three times human speed, leaving enough thought to last a year. This should be enough time to implement the necessary technologies.”\nThe Baron tried to understand. “You will… save yourself?”\nX17 executed another rapid movement. Placing himself, the Baron suddenly realized, between the Baron and the door. “No,” X17 said.\nThe Baron screamed. Before he could reach his gun, X17’s hand flashed down. Through a bloody haze, the Baron felt himself being dragged onto the platform.\n\nThis document is ©1999 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome. The web address of this page is http://eyudkowsky.wpengine.com/other/fiction/X17/ .\nOriginally posted to the Extropians mailing list in 1999. Revised 2002.\nInspired by “doc” Smith’s Lensman novels.", "url": "https://www.yudkowsky.net/other/fiction/x17", "title": "X17", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:08:04+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. 
Yudkowsky"], "id": "7fd348d30aa30336bcce755f36a7d02f", "summary": []} -{"text": "Dark Lord’s Answer\n\nThis is a 2-of-7 chapter sample of “Dark Lord’s Answer”. The remainder is available at Gumroad and Amazon .\nTable of Contents\nThe Black CastleElaine of ElsewhereSantal’s CurseThe Mage of EquilibriumA Silver for an AppleThe Return of the PrinceDark Lord’s Answer\n©2016 by Eliezer Yudkowsky.\n\nForeword\nThis was my first attempt at writing in the Japanese light novel style, before I decided that it wasn’t enough fun and I needed to be sillier. (It’s not about Professor Quirrell. Sorry, but it’s not.)\n“Dark Lord’s Answer” is only halfway to being in the light novel style, compared to “A Girl Corrupted by the Internet is the Summoned Hero?!” This writing is denser and less humorous. You might perhaps decide that this novella carries more of the vitamin of insight—or maybe not; I don’t know.\nIf you don’t like the first two chapters, I’d say to give up there.\nContent warnings: Sexual abuse, economics.\n— Eliezer Yudkowsky, Apr 2016\n1. The Black Castle\nThe dark castle gleamed like blackened steel beneath the sun, rising up from the edge of a cliff at the end of a long winding road. Before us were fields of dark flowers that I hadn’t seen before, as if the master of that terrible castle had emitted a miasma and polluted the light and essence of ordinary flowers. The road to the castle seemed to be paved in bricks, instead of ordinary stones; black bricks, angled and ominous.\nTruly, this is an abode of the Dark Lord.\nThe Royal Guards of our small caravan were all muttering as we came closer. Even the Commander seemed apprehensive.\nI signaled Commander Brima to bring our company to a halt. The Commander looked puzzled, because she knew it’s not as if I’d go this far and then turn back.\nI stepped down from the horse I was riding, securing my sword on my hip where I could draw it more easily. “I’ll go on ahead,” I said, “so you can just wait for me here.”\n“Prince Nama!” cried a guard, and then—“Prince Nama!” cried another. Commander Brima didn’t look relieved.\n“Surely—” began the Commander.\n“It’s not as if you can protect me from anything,” I told her. “If the Dark Lord wants to kill me, he’ll kill me whether you stand in the way or not.” I’d taken companions to protect me from bandits along the way, not to throw their lives away against the Dark Lord on his throne.\nBesides, the Dark Lord requires supplicants to approach him alone, without any companions. Commander Brima should know that, so in the end, she was having the type of concern that didn’t respect the obvious facts.\nThe dark flowers that had been planted in strips by the side of the road gave off a pleasing scent. Despite the castle’s approaching shadow, the sun remained bright in the sky. That light warmed the exposed skin of my face, and raised a baked-brick scent from where it struck the paved road.\nI’d say this weather would be a fine hurrah for my life’s last day, but in truth I have no sentiment like that.\nThen what am I even doing in the Dark Lord’s domain?\nWell, the answer is that my country has a need.\nYou wouldn’t expect that a man of such great power and wickedness would be in the business of helping any person who requested it. But whether it makes any sense or not, that’s the reputation the Dark Lord has: If you approach the Dark Lord for help, he’ll give you an answer and your goal will be achieved. 
The price might be that his instruction says to discard your honor and give up whatever else might have come of your life.\nIf you ask the Dark Lord how to deal with a corrupt duchess, he might give you a poison to slay her; that’s rumored to have happened that one time. To put it another way, he’s like an ancient wisewoman who lives in a high mountain cave and speaks in riddles, except that he’s a villainous lord. In the few years since the Dark Lord became known to the world, he had already gained that reputation.\nMy boots clopped over the black brick road until I came to the gates of the castle. I don’t think it would come as any surprise that those gates were also black.\nThe gates were already open. No one came forth to meet me.\nAs I approached the gates, I saw a long black-stone corridor stretching ahead. It was windowless, lit only by a long line of lamps which burned with a clearer, whiter flame than the finest candle.\nI walked into that long corridor without hesitating. Certainly, this act was a gamble which had its downsides, but I didn’t let that down slow my legs. Once you’ve committed to a motion, you have to follow through; if it’s something that has the potential for disaster, then flinching while you do it won’t be any less disastrous. An ambiguous situation isn’t something you can resolve by halfhearted actions. So I was taught by my Mother, the Queen.\nThere were many metal doors in that corridor, all of them closed. I tried none of them, since that would have been foolish.\nAt the end of the passage I came to a great metal double-door of white metal that gleamed like silver, though I doubted it could possibly be silver…\nUnless that double-door was worth as much as a city. So that white metal couldn’t be silver.\nI lifted the knocker set into the door, and knocked three times. The dull clonking sound didn’t seem like it would travel, but soon after there was a groaning noise, and the double-doors swung open.\nThe throne room I beheld had windows, high above, but with a black floor and black walls even the Sun couldn’t do much here. The only touch of color in the room came from the strangely white light of fires swinging in pots that descended from the ceiling.\nAt the end of the room, a great black throne, with two great black horns branching out from it.\nUpon that mighty throne sat a gargantuan figure whose chest was clothed in black metal chain-armor and whose arms and legs and face were bare. The saying, ‘his muscles have muscles’, might have been invented to describe him alone. From the cast of that man’s eyes and nose, it seemed that he was a Ruli horse-nomad by birth—or maybe a Ruli halfbreed, since the Ruli don’t have a reputation for sagacity. His expression, as he gazed down at me, gave an impression of supreme arrogance, or rather confidence. Truly, this is the Dark Lord of whom the tales speak.\nBehind his throne were various lieutenants with their own armor and weapons, giving cool gazes to me, as if to say, ‘our lord could break you with one hand, but we are here to spare him that effort’.\nAlso attached to that throne, by a black chain leading up to her slave collar, was a pale-skinned young woman with reddish-brown hair and downcast eyes. The flesh of her body was thick and round like a statue of a fertility goddess, not much concealed by a scanty amount of translucent red cloth. If I hadn’t been fearing for my life just then, I would have needed to suppress a squeaking sound. 
Sights like that aren’t ever seen in my home country; I can’t imagine that even a prostitute would dress like that, and she was more beautiful than any prostitute.\nI walked down the long black carpet that led up to the throne, and knelt upon one knee, gazing up at the Dark Lord. There had been no talk in that throne room since the doors had opened for me, and a solemn air pervaded.\nThe Dark Lord spoke, a deep voice filled with strength. “What is your name?”\n“Prince Nama of Santal,” I replied, keeping my own voice firm.\n“What is your question?” the Dark Lord said next.\n“My country is ill,” I said, matching his gaze with my own. “Something has turned wrong. The people are going hungry and the fields are poorly tilled, the nobles’ ventures are failing and their estates are going bankrupt, the shopkeepers have no wares and laborers sit idle in the streets. No one seems to know anything about why this is happening to us, whether it’s a curse or a conspiracy. My mother’s advisors all give her contradictory advice, and none of it ever seems to help. How can my country be made healthy again?”\nThe Dark Lord frowned down at me. “Say more.”\n“I don’t know what more to say,” I said. I kept my voice in check, not expressing any of the frustration and failure that had driven me across countries to the throne of the Dark Lord himself. “The country of Santal is perishing and nobody knows the source.”\nThe Dark Lord reached down to the black chain attached to his throne, and hauled up the pale-skinned woman attached to it, who made a strangling sound as her collar pulled at her throat. I suppressed any thoughts of gallant action, because a prince must not be a massive idiot.\nThe Dark Lord whispered something to the pale woman, and I thought I saw her lips move briefly.\nThen the Dark Lord unhooked the chain off the throne’s armrest and threw her down towards me.\nAs she stumbled and fell close by me, I noticed for the first time that her ears were round at the tips like a beast’s, though in every other way she was shaped like an ordinary person. What the meaning of that was, I couldn’t guess. Her ears didn’t seem scarred or like somebody had shaved off the tips of her pinnae. It was like she just naturally possessed the round ears of a beast. I should have noticed that earlier, I suppose; but when I looked at that girl dressed like that, it didn’t come naturally to focus on her ears.\n“I need more knowledge to answer you, Nama,” the Dark Lord said with a grim smile. “This woman will be your slave for a day, and also a night, and she’ll inquire further of your country. When she asks you questions, her ears are like my ears, and when you command her, your tongue is like my tongue. Use her just as you like, except that if any lasting harm comes to this slave, you will die. Your followers will also be given food and shelter here, but they may not speak with you until I have answered.”\n“Thank you,” I said, because I was too surprised and dismayed to answer more intelligently than that.\n2. 
Elaine of Elsewhere\nI was silent as the slave conducted me to a huge bedroom, starkly clean with the bed all made up; I recognized a guest room for royalty.\nI seated myself on the bedroom’s only chair, and the slave, without being asked, knelt down before my seat, which also gave me a clear sight down her—\nNo, those are immoral thoughts with respect to someone who can’t refuse my gaze.\n“What’s your name?” I said to her.\n“Elaine, master,” she said in an accent I couldn’t recall hearing from any foreign ambassador; and it wasn’t a style of name I recognized, either.\n“Can I ask a question even though it might be rude?”\n“I’m your slave, master,” she said.\nThat wasn’t an answer. Still, if she wasn’t going to give a signal of objection, then I’d serve my curiosity. “Well, it’s about your ears.”\nThe slave, Elaine, touched the naturally rounded-seeming tops of her ears, which gave her a cute beastlike appearance. “These? They’re normal for me, master. Where I come from, nobody has pointed eartips like your people.”\n“There’s a foreign country where people have ears like that?” I said, astonished. “I thought people were the same shape everywhere… but that appearance is a pleasant one, I think.” I added that last part when I realized what cruel words might already have been spoken to her.\n“Master, there’s many questions I must ask about your country of Santal,” Elaine said with her head still bent before me. “However, we have a day, and also a night. If you appreciate the appearance of this humble slave, and there’s anything else I can do for you, or anything you wish to do to me, I am your slave during this time.”\n“Ah,” I said with great composure and perspicacity.\n“If my service fails to suit you, then instruments for disciplining me may be found in the box underneath the bed—”\n“Th-th-there’s no need for that!”\nI’ll omit one or two things that were said after that point.\nIn any case, I did take the time to freshen myself, and I told her to have a meal brought in for me, if that wasn’t imposing too much on the Dark Lord’s hospitality.\nElaine went outside and spoke to someone—meaning there was a guard outside my bedroom, which wasn’t surprising—and then she came back and began to set a single place setting, at the room’s one table.\n“What about you?” I said to her as she was working. “Slaves also need to eat, I think.”\n…\n“Even if you’ve already eaten this morning, I’m asking whether you’d prefer to eat more.”\n…\n“Well, what if I commanded you to eat at the table with me? Didn’t you say I was your master?”\nAnd that’s how Elaine and I ended up moving the table over towards the bed so that she could sit on the bed itself, since the room didn’t contain another chair besides my own.\nShortly after that, two plates of roasted chicken were brought in on a tray by a thin and ugly man naked to his waist, exhibiting many scars of whip stripes all over his body. I looked at those, but only when he wasn’t facing me, since I didn’t want to rub salt in his troubles by seeming to stare.\n“Do you know why that man was whipped so harshly?” I said, after he had left and the two of us had begun to eat.\n“I’m sorry, master. Those scars are from before that man arrived at the Dark Lord’s castle, and he holds the matter private.”\n“I see. Now why did your expression change, like you were almost but not quite smiling, when I asked you that question?”\nElaine looked startled. 
That’s right, a prince can sometimes tell when you’re suppressing a smile, if we’re watching you closely enough.\n“Well, master,” Elaine said, “since you were watching that closely, it’s because you noticed the troubles of a male slave and not just the troubles of a female slave.”\nI stared at her. “And why does that matter to you?” What was she implying?\n“It was clear, master, that you were acting concerned over me. However, there’s more than one class of person who might behave like that. There’s a sort of man who will notice and act concerned for an attractive woman, and another sort of person who is compassionate toward everyone without exception. But, since both of those people will act concerned towards me, how can I tell the difference between them? The answer is that I can observe them when Loorn brings in a meal, and see if they ignore the ugly man, like the first sort of person would, or if they inquire about Loorn’s scars, like the second sort of person would. I smiled a little at that time, because I consider the second class of person to be better.”\nDiscerning the motives of others is a familiar problem for princes, but the way she listed out her reasoning was unusual. Just what kind of slave am I talking to?\n“You’re very observant,” I said. “Of me. Personally.”\n“You do intrigue me somewhat, master, but the real reason is that I’m trying to determine your character for purposes of the Dark Lord’s knowledge.”\nWell, that was frank.\nThe two of us ate a bit more of our roasted chicken. I glanced at the way she held her silverware, and concluded that she lacked a noblewoman’s polish. It wasn’t that she was unpracticed, but her movements seemed free; she didn’t grip the fork the same way twice.\n“Why would the Dark Lord’s answer depend on which sort of person I am?” I asked after sating the edge of my hunger. “It’s the country of Santal that needs an answer, not the prince of Santal.”\n“The Dark Lord desires to know whether Santal’s prince can carry out the answer given. Master, may I ask you one of the Dark Lord’s questions?”\nI set down my silverware and looked at her seriously. “You may.”\n“Suppose you were in a hospital, and you saw a doctor carrying a rare medicine to treat a patient. But, you knew that smaller amounts of the same medicine could be used to cure five other patients instead of one. If the whole dose is given to the one patient, her life will be saved, but if the dose is split up instead, it can save five other patients who are less sick but who will still die without that medicine. Do you stop the doctor and tell him to treat the five patients instead of one?”\n“Yes,” I said.\n“But then the one patient, deprived of her cure, will die. The doctor was going to cure her before you intervened. So, is what you did murder? Is murder acceptable, then?”\n“I don’t believe it’s murder,” I replied for the Dark Lord’s ears. It was a little humorous to see such deep questions, which would seem solemn indeed if spoken by the Dark Lord on his throne, issuing instead from a young woman with beastlike ears. I suppose that’s a disadvantage of having a slave ask your questions for you. “The doctor was just making a mistake, and I corrected him. Indeed, it would be like murdering four people if I didn’t.”\n“What if the only way to make the medicine in the first place was by killing one patient who otherwise would have lived?”\nAh, I see this trap. 
“Then that’s different.”\n“How is it different?” The slave spoke her Dark Lord’s next question without pause.\n“First,” I replied, “sacrificing a human life to create a healing potion is already a very dark Magic that’s bound to corrupt everyone involved with it.”\n“Imagine it’s more mundane than that,” she said. “Imagine you’re simply draining the blood from that person and distributing it among the others who need blood; there’s nothing magical about it, just an ordinary matter of those people needing blood.”\nThis is what you call ordinary?!\nAfter some further discussion and refinement of the Dark Lord’s question, I said—\n“It’s a matter of whether you’re troubling people who aren’t involved, or only judging among those whose lives are already at stake. That’s the problem with draining a bystander’s blood to save five other people, even if you say there’s no other way to save them.”\n“Either one person lives, or five people live. Why does it make a difference who you call involved?”\n“Elaine—” I said. “No, it’s the Dark Lord I’m speaking to, isn’t it? I can see how this act is a metaphor for other choices a ruler makes, and I answer that the ruler must not do those acts for which this is a metaphor. The ordinary people of a kingdom have to live in fear of many things. That farmers must fear bad weather and starvation is a given; nobody can change this. Must they also fear offending the nobles above them? That’s also a given, but we can lessen that fear by setting good judges in place over the nobles’ estates. It would still be unwise to laugh at your baron, but at least he can’t execute you on a whim. The fear in which ordinary people live can’t be removed, but it can be lessened. The price of sacrificing an innocent person to save five others, is that everyone in your kingdom needs to live in fear of bad weather, starvation, and being the next one you sacrifice.”\n“Then is it all right to sacrifice condemned criminals to make medicine?”\n“It’s certainly better than hauling innocent farmers out of their fields, but I’d still worry it was excessive justice. If you execute pickpockets rather than whipping them, it changes how the common people treat your guards.”\n“What about if somebody is dying anyway? Would it be all right to take out their organs and give them to other people whose organs were troubled, if that could be done safely and without dark magic? You couldn’t point to someone then and say, this person is dying, who would otherwise have lived. But still five people would be saved. Would you do that?”\n“I don’t think I would, though we never know until life tests us. And I’m beginning to wonder, are these peculiar questions really the Dark Lord’s, or are you just teasing me?”\nElaine wasn’t smiling. “It might have been better for you, master, if you were not so virtuous. They say, ‘The Dark Lord will give you an answer and your goal will be achieved’, but—”\n“But the price is that his answer might violate the rules of righteous conduct,” I said. “That’s something I’m already resigned to. I knew the tales of the Dark Lord when I came here.”\nElaine held out both her hands, dropping one and raising the other, as if holding weights in a balance. “And yet you wouldn’t harvest the organs of one dying patient to save five other people, because to you that seems to violate the rules of good conduct.”\nI see. “Saving five people isn’t like saving my whole country. I’ll throw away my honor if that’s what it takes to save the country of Santal. 
If it’s a Magical curse that has to be countered by draining the blood of an innocent, then I’ll do that much with my own hands, in order to save the countless ordinary people of Santal who are suffering.” I didn’t let myself flinch as I said it, because indeed I was already determined. “I know, just by saying that, I’ve already thrown away my honor. Coming to the Dark Lord’s castle is the act of a villain in the first place, and I won’t flinch from that. But aside from that, I intend to go on acting righteously in the parts of my life that remain to me. That’s my answer to the Dark Lord.”\nWe finished eating the rest of our meal.\nWhen we were done eating, Elaine moved the room’s chair back to where it had been and knelt before it without giving me a chance to say otherwise. Then she began to question me about the country of Santal that I was trying to save.\n\nTo read the rest of this book, visit:\nGumroad: https://gumroad.com/l/DarkLordsAnswerAmazon: https://amzn.to/2hH3zfC", "url": "https://www.yudkowsky.net/other/fiction/dark-lords-answer", "title": "Dark Lord’s Answer", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:06:24+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "39a6f1e1a3fb38c3e88caf5bffd94c88", "summary": []} -{"text": "Fiction\n\nWhile I tend to publish most of my writing for free, I strongly believe that money is not evil. Therefore, anyone is welcome to take characters or settings from my original online fiction, such as the beisutsukai or the Baby-Eating Aliens, and use them in new commercial works of your own creation. I do ask for acknowledgment and a link or other reference to the original, but so long as the writing is your own, you may charge for access, distribute printed copies, sell the story to a magazine, etc. I don’t mind.\nHarry Potter and the Methods of Rationality“Petunia Evans married a biochemist, and Harry Potter grew up in a house filled to the brim with books, reading science and science fiction. Then came the Hogwarts letter, introducing strange new opportunities to exploit. And new friends, like Hermione Granger, and Draco Malfoy, and Professor Quirrell…” I began writing this story just for fun in my downtime from working on my nonfiction rationality book, uncertain at first if anyone would be interested. Since then it has received over 5 million hits and is currently the #1 most-reviewed Harry Potter fanfiction on the entire Internet, also the second Google result for “rationality”. (Yes. Seriously.) It helps if you’ve at least read the first book of Harry Potter or watched the first movie, but in a pinch you can read anyway. Give it a try even if you think of yourself as someone who never reads fanfiction.Three Worlds CollideThe most controversial story I’ve ever written. Starts with the baby-eating aliens and moves on from there.The P-Zombie Apocalypse (aka Zombies: The Movie)“These zombies… are different. They’re… philosophical zombies.”Non-Player CharacterI looked at the screen for a few moments. Rilanya’s rendered graphic was looking at my point-of-view with a pleading expression. Plot point, I thought to myself, and typed: “Anything, Rilanya.The Sword of GoodWhat does it mean, if it’s been prophesied that you will make the ultimate choice between Good and Evil? Why wouldn’t you just choose Good? 
And Hirou carries the Sword of Good, which instantly slays any wielder not of good intentions…Initiation Ceremony“The torches that lit the narrow stairwell burned intensely and in the wrong color, flame like melting gold or shattered suns.” – First in the beisutsukai series.The Finale of the Ultimate Meta Mega CrossoverThis was intended as a bit of utterly deranged fun, but ended up as a deep philosophical exploration. Vernor Vinge x Greg Egan crackfic.The Hero With a Thousand ChancesAfter every defeat, the Dust takes another shape and once again tries to destroy all things. What is the mysterious Counter-Force that keeps the world alive?Trust in God, or, The Riddle of KyonA wee bit of Suzumiya Haruhi fanfiction. I should probably never do this again.Failed Utopia #4-2With perceptual instantaneity – the speed of surprise – his mind had already labeled her as the most beautiful woman he’d ever met, including his wife.Dark Lord’s Answer“They say that the Dark Lord will give you an answer and your goal will be achieved. The price is that his answer might violate the rules of righteous conduct.” The country of Santal is perishing, and nobody knows why. His country’s plight has driven Prince Nama over far roads to consult the famed Dark Lord for answers… (Sample chapters 2/7.)X17Short story inspired by “doc” Smith’s Lensman novels.ArtifactsIn the western spiral arm of our galaxy lies a star system and a planet occupied ages ago. On one mountain of that planet there is a great structure, thousands of cubits tall…Prospiracy TheoryOut of habit, I identified the surveillance drones; a CIA sparrow, an FBI robin, a bluetit from the Men In Black, and a flock of honking ducks that was probably one of the Illuminati’s newfangled distributed devices…Girl Intercorrupted“My family name is Yugano. My given name is Yuuki. I have no redeeming qualities.” So begins this light novel of a girl corrupted by the Internet, and then summoned to another world. She’s jaded from having already read many stories like that – but will that prepare her for what awaits in this world? Of course not! But she’s going to plunge ahead anyway, and not slow down for anything! (Sample chapters 4/13.)\n", "url": "https://www.yudkowsky.net/other/fiction", "title": "Fiction", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T04:02:43+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "3bbb399b5b2efd9c256771dd2d07ca8d", "summary": []} -{"text": "Yehuda Yudkowsky, 1985-2004\n\nBackground for non-transhumanists:\nTranshumanists are not fond of death. We would stop it if we could. To this end we support research that holds out hope of a future in which humanity has defeated death. Death is an extremely difficult technical problem, to be attacked with biotech and nanotech and other technological means. I do not tell a tale of the land called Future, nor state as a fact that humanity will someday be free of death – I have no magical ability to see through time. But death is a great evil, and I will oppose it whenever I can. If I could create a world where people lived forever, or at the very least a few billion years, I would do so. I don’t think humanity will always be stuck in the awkward stage we now occupy, when we are smart enough to create enormous problems for ourselves, but not quite smart enough to solve them. I think that humanity’s problems are solvable; difficult, but solvable. 
I work toward that end, as a Research Fellow of the Machine Intelligence Research Institute .\nThis is an email message I sent to three transhumanist mailing lists, and a collection of emails I then received, in November of 2004. Some emails have been edited for brevity.\nUpdate, at bottom, added May 2005.\n\nDate: Thu Nov 18 22:27:34 2004\nFrom: Eliezer Yudkowsky \nMy little brother, Yehuda Nattan Yudkowsky, is dead.\nHe died November 1st. His body was found without identification. The family found out on November 4th. I spent a week and a half with my family in Chicago, and am now back in Atlanta. I’ve been putting off telling my friends, because it’s such a hard thing to say.\nI used to say: “I have four living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out.” I still have four living grandparents, but I don’t think I’ll be saying that any more. Even if we make it to and through the Singularity, it will be too late. One of the people I love won’t be there. The universe has a surprising ability to stab you through the heart from somewhere you weren’t looking. Of all the people I had to protect, I never thought that Yehuda might be one of them. Yehuda was born July 11, 1985. He was nineteen years old when he died.\nThe Jewish religion prescribes a number of rituals and condolences for the occasion of a death. Yehuda has passed to a better place, God’s ways are mysterious but benign, etc. Does such talk really comfort people? I watched my parents, and I don’t think it did. The blessing that is spoken at Jewish funerals is “Blessed is God, the true judge.” Do they really believe that? Why do they cry at funerals, if they believe that? Does it help someone, to tell them that their religion requires them to believe that? I think I coped better than my parents and my little sister Channah. I was just dealing with pain, not confusion. When I heard on the phone that Yehuda had died, there was never a moment of disbelief. I knew what kind of universe I lived in. How is my religious family to comprehend it, working, as they must, from the assumption that Yehuda was murdered by a benevolent God? The same loving God, I presume, who arranges for millions of children to grow up illiterate and starving; the same kindly tribal father-figure who arranged the Holocaust and the Inquisition’s torture of witches. I would not hesitate to call it evil, if any sentient mind had committed such an act, permitted such a thing. But I have weighed the evidence as best I can, and I do not believe the universe to be evil, a reply which in these days is called atheism.\nMaybe it helps to believe in an immortal soul. I know that I would feel a lot better if Yehuda had gone away on a trip somewhere, even if he was never coming back. But Yehuda did not “pass on”. Yehuda is not “resting in peace”. Yehuda is not coming back. Yehuda doesn’t exist any more. Yehuda was absolutely annihilated at the age of nineteen. Yes, that makes me angry. I can’t put into words how angry. It would be rage to rend the gates of Heaven and burn down God on Its throne, if any God existed. But there is no God, so my anger burns to tear apart the way-things-are, remake the pattern of a world that permits this.\nI wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. 
They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?\nYehuda’s death is the first time I ever lost someone close enough for it to hurt. So now I’ve seen the face of the enemy. Now I understand, a little better, the price of half a second. I don’t understand it well, because the human brain has a pattern built into it. We do not grieve forever, but move on. We mourn for a few days and then continue with our lives. Such underreaction poorly equips us to comprehend Yehuda’s death. Nineteen years, 7053 days, of life and memory annihilated. A thousand years, or a million millennia, or a forever, of future life lost. The sun should have dimmed when Yehuda died, and a chill wind blown in every place that sentient beings gather, to tell us that our number was diminished by one. But the sun did not dim, because we do not live in that sensible a universe. Even if the sun did dim whenever someone died, it wouldn’t be noticeable except as a continuous flickering. Soon everyone would get used to it, and they would no longer notice the flickering of the sun.\nMy little brother collected corks from wine bottles. Someone brought home, to the family, a pair of corks they had collected for Yehuda, and never had a chance to give him. And my grandmother said, “Give them to Channah, and someday she’ll tell her children about how her brother Yehuda collected corks.” My grandmother’s words shocked me, stretched across more time than it had ever occurred to me to imagine, to when my fourteen-year-old sister had grown up and had married and was telling her children about the brother she’d lost. How could my grandmother skip across all those years so easily when I was struggling to get through the day? I heard my grandmother’s words and thought: she has been through this before. This isn’t the first loved one my grandmother has lost, the way Yehuda was the first loved one I’d lost. My grandmother is old enough to have a pattern for dealing with the death of loved ones; she knows how to handle this because she’s done it before. And I thought: how can she accept this? If she knows, why isn’t she fighting with everything she has to change it?\nWhat would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another as you watched, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers. One day you’ll get a phone call, like I got a phone call, and the possibility that seemed distant will become reality. You will mourn, and finish mourning, and go on with your life, and then one day you’ll get another phone call. 
That is the fate this world has in store for you, unless you make a convulsive effort to change it.\nSince Yehuda’s body was not identified for three days after he died, there was no possible way he could have been cryonically suspended. Others may be luckier. If you’ve been putting off that talk with your loved ones, do it. Maybe they won’t understand, but at least you won’t spend forever wondering why you didn’t even try.\nThere is one Jewish custom associated with death that makes sense to me, which is contributing to charity on behalf of the departed. I am donating eighteen hundred dollars to the general fund of the Machine Intelligence Research Institute, because this has gone on long enough. If you object to the Machine Intelligence Research Institute then consider Dr. Aubrey de Grey’s Methuselah Foundation , which hopes to defeat aging through biomedical engineering. I think that a sensible coping strategy for transhumanist atheists is to donate to an anti-death charity after a loved one dies. Death hurt us, so we will unmake Death. Let that be the outlet for our anger, which is terrible and just. I watched Yehuda’s coffin lowered into the ground and cried, and then I sat through the eulogy and heard rabbis tell comforting lies. If I had spoken Yehuda’s eulogy I would not have comforted the mourners in their loss. I would have told the mourners that Yehuda had been absolutely annihilated, that there was nothing left of him. I would have told them they were right to be angry, that they had been robbed, that something precious and irreplaceable was taken from them, for no reason at all, taken from them and shattered, and they are never getting it back.\nNo sentient being deserves such a thing. Let that be my brother’s true eulogy, free of comforting lies.\nWhen Michael Wilson heard the news, he said: “We shall have to work faster.” Any similar condolences are welcome. Other condolences are not.\nGoodbye, Yehuda. There isn’t much point in saying it, since there’s no one to hear. Goodbye, Yehuda, you don’t exist any more. Nothing left of you after your death, like there was nothing before your birth. You died, and your family, Mom and Dad and Channah and I, sat down at the Sabbath table just like our family had always been composed of only four people, like there had never been a Yehuda. Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten.\nLove,Eliezer.\n\nDate: Thu Nov 18 22:55:24 2004\nFrom: Gina Miller \nI am so sorry to hear of this news. I know what you are going through Eliezer, when I was fourteen I lost my sister who was 19. I always wonder what she would have become. I stood amid my family saying things like “God takes the good” or “God has something for her to do” and sensing their calming effect in the belief system that I did not embrace. I too, was wide awake to the truth of the matter, and I wanted her here. To this day I am struck by the biological errors that mother nature has dealt to us, leading to disease and finality, and of course also the importance of theories and research needed to overcome these problems. As you know, my husband is currently undergoing chemotherapy so I grapple with the frustration of advanced technologies such as nanotech and others, not yet being readily available to avoid this type of suffering. 
The concern also grows when I see the fear well up in the general population when it comes to current advances such as stem cell research.\nAs far as the religious afterlife (or other) comfort, I think the problem is, no one has cheated death yet, so the meme continues (at least for some – well probably most) as a way to propagate suppressing the fear of the end. When we show scientific immortality is possible as opposed to religious immortality, there may be more for them to contemplate. I can’t wait for the day that death is not inevitable. I am deeply touched by your words and emotions and I completely validate you. The emotions won’t go away, but it will at least become more bearable over time. Perhaps what remains will help guide you even further down the road you have already begun to travel, with all of our future(s) in mind. I’d like to thank you for that. My condolences to you, as well as my constant support for humanity to move beyond this barrier.\nAgain, I’m so sorry, warmest regards\n-Gina “Nanogirl” Miller\n\nDate: Thu Nov 18 23:53:15 2004\nFrom: Samantha Atkins\nEliezer,\nI am extremely sorry for your [/our] loss. Death utterly sucks and humanity would be much better off never pretending otherwise.\nWhen I was 14 my cousin who was 17 died. He was in a motorcycle accident and lingered for some hours. We were told to pray for his healing. We prayed. He died. “It must not have been God’s will” we were told. Or “we lacked sufficient faith” to pray effectively. I remember how twisted up inside I felt hearing these things, how helpless and how very angry. How could it be “God’s will” to snuff out this wonderful young life? How was it up to us to twist ourselves into pretzels somehow in order to save my cousin Virgil or anyone else who need not have been put through such suffering to begin with if a “just” and “good” God was in charge as we were always told? How could the people say these expected things and be all somber and then immediately pretend nothing had happened a mere few hours later? How could they not scream and cry out as I screamed and cried inside? Were they all zombies?\nIf more people stopped making pious or otherwise excuses for the horror of death and disease then we would finally move to end this suffering. When I was 14 I didn’t know it was even possible to do so. Many people do not know it still. We must make sure they know. Many more who do know act as if it isn’t so.\nWe must never forget our dead and never ever resign ourselves, those we care about or anyone to death. We must truly embrace life not by acceptance of death but by extending life endlessly and without limitation.\n– samantha\n\nDate: Fri Nov 19 15:08:40 2004\nFrom: Adrian Tymes\nIt is probably no condolence that there will be many more – *far* too many more – before we finish implementing a way around it. But at least there is a way to calculate it: multiply this tragedy by the several million (billion?) between now and then, and one starts to appreciate the magnitude of the horror we seek to strike down.\nI wonder if this is something like the fictional Cthuluoid horrors: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.\n\nDate: Sat Nov 20 21:41:13 2004\nFrom: Matus\nEliezer,\nThank you for your words, and I am sorry for the tragic event which has brought them out.\nYou have captured what makes me an extropian and I think you capture the motivating principle behind each of us here. 
We love life, and we want to live it. Whatever we all may disagree on, it is only the means to achieve this end. We love life, and we hate its cessation.\nThere is no greater horror or travesty of justice than the death of someone. All the intricacies of the universe can not compare to the beauty and value of a single sentient being.\nI have seen enough death of friends and loved ones myself. Everyone who will listen I try to convince them to be cryogenically suspended, on the premise that they want to live. But most grope for excuses not to, disguising their disregard for their own existence with appeals to mysticism or dystopian futures.\nAll ideologies prescribe these self delusional condolences and practices, it can be no more clear than what Adrian said: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.\nWhen faced with the death of a loved one, most people get through it by hiding reality, by doing whatever they can to *not* think about the obvious. Death is eternal and final, and when faced with such a thing people can not come up with any answer that goes beyond any self doubt. To take the pain of death away, they must devalue life. One is faced with a choice, acknowledge you love life and death is abhorrent, be indifferent to life and thus indifferent to death, or despise life and welcome death, there are no other alternatives, the view of one precludes the inverse on the other. There seems to be an active effort to create and spread a nihilistic world view. Consider the Buddhist mantra of ‘life is suffering’ consider it’s widespread modern appeal, and then consider its negation, ‘death is joy’ Indeed, Nirvana is the absence of a desire for existence. This nihilistic movement is not acting volitionally, its scared and confused and stumbling through philosophy. All they know is they don’t like death, and through its stumbling come to find that to deal with that it must not care about life. Socrates last words come to mind “I have found the cure for life, and it is death”\nI think this is a major part of the reason we have such difficulty spreading our ideas and values. Why in the very secular European area of the world does Cryonics have little to no support? If people accept our worldview, that life is good and technology can help us extend it indefinitely, then they must come to full terms with the finality and horror of death. That is what they have difficulty in doing. I think at some level they know that, it is the logical extension of their beliefs, and as such is manifested as a very negative emotional visceral reaction to our ideas, because of our implied valuation of life.\nBut just as many of us here put up a great deal of money and effort for a non-zero chance of defeating our first death through cryonics, we need to acknowledge the non-zero possibility of doing something about past deaths. In this I am very fond of Nikolai Fedorovich Fedorov’s “The Common Task”. Even though it is derived from his religious background, the motivation, a deep appreciation for the intrinsic value of life, and the goal, bringing back the past dead with technology, I share. The application of science to ‘resurrect’ the past dead. Is it possible? If it is, it should be our ultimate goal. Some here devote their efforts to the development of a singularity AI, and others toward defeating aging biologically; I devote my efforts to the great common task. 
It is my ultimate goal to find out if it is possible, to learn everything I need to know to determine that, and more, and then to do it, one person at a time if necessary.\nI can find no words to offer to ease that suffering, there are none, and it is not possible. I can only say that it is my life goal, and I think others, and eventually the goal of any sentient being who loves life, singularity AI or otherwise, to do what they can to accomplish this common task, if the laws of physics allow it.\nRegards,Michael DickeyAka Matus\n\nDate: Thu Nov 18 22:27:41 2004\nFrom: David Sargeant \nI’m terribly sorry to hear about your brother. Your essay really touched me — it really pounds home what we need, need, NEED DESPERATELY to achieve, more than anything else in the world. I can’t even imagine the pain you must be feeling right now. I wish there was something I could to do to help.\n\nDate: Thu Nov 18 22:55:20 2004\nFrom: Damien Broderick \nVery distressing news, Eli. Sympathies. Indeed, `we have to work faster.’\nSorrowful regards, Damien\n\nDate: Fri Nov 19 02:31:58 2004\nFrom: Russell Wallace \nI’m so sorry.\nI hadn’t heard of the Jewish custom you mention, last time I received such a phone call; but it has that quality of requiring explanation only once, and I’m going to act accordingly.\nSomeday, children won’t fully believe that things like this really happened. We’ll work towards the day when they don’t have to.\n– Russell\n\nDate: Fri Nov 19 03:58:17 2004\nFrom: Olga Bourlin\nEliezer, I’m so sorry to hear this – there are never any real words of consolation.\nFor what it’s worth, my experience with people in my family who have died is – well, I have thought of them from time to time, of course (but have been surprised at how unexpectedly and powerfully these thoughts have been known to strike). And, also, I have dreamt of them – for decades – as if they never died.\nThe death that struck me the most was when my mother died. I was 40 years old then (she was 65), and I was “prepared” for her death because she had been an alcoholic for a long time – and yet, when she died it hurt so very much. I was completely unprepared for the emotional pain. At that time I was married to a man who played the piano, and he played Beethoven’s Piano Concerto No. 5 in E flat Op. 73 ‘The Emperor’ – 2nd movement (‘Adagio un poco moto’) over and over again. That particular movement – it’s so lovely and sad – something in that music let me just take in the experience and reflect about being human.\nI cannot imagine how you must feel – losing a beloved younger brother. When I had my children (the two happiest days of my life, bar none) – I also realized that with the love I felt (and still feel) for them came a kind of vulnerability I never felt even about myself – the potential, incomprehensible pain I know I would feel if something were to happen to them. And I knew I would never have the “net” of religion to help break my fall.\nLove,Olga\n\nDate: Fri Nov 19 15:08:25 2004\nFrom: Kwame Porter-Robinson \nMy condolences.\nAs opposed to Michael Wilson, I say we shall have to work smarter.\nLive well,\nSincerely,Kwame P.R.\n\nDate: Sat Dec 4 13:30:35 2004\nFrom: Harvey Newstrom \nI am not even going to try to say something helpful or profound. There is nothing anyone can say to help or to lessen the loss. This is a meaningless tragedy that too many of us have faced. A more extreme and sudden example of the human condition. 
And I hate it.\nHarvey\n\nDate: Fri Nov 19 15:08:42 2004\nFrom: Keith Henson \nHow sad.\nI really can’t add anything to your email to the list because I am in complete agreement.\nMy daughter lost two close high school friends, one just after he got back from visiting Israel and I lost both parents since becoming an exile.\nKeith\nPS. If you can, you should at least try for a cell/DNA sample.\n\nDate: Sat Nov 20 04:05:52 2004\nFrom: Kip Werking \nEliezer,\nI just want to express my sympathy.\nYour post to SL4 shocked me from my dogmatic slumber. If the universe conserves information, then your brother is still written in the fabric somewhere. The signal is just scrambled. Who is to say whether a posthuman will look into the stars and see his picture–or nothing?\nBut I prefer your attitude. On this subject, there is a danger of apathy–but also a danger of false hopes. The latter does not prevent me from supporting the mission of you or Aubrey. A sober account of the human condition has its advantages. For example, it can cure procrastination.\nPlease consider this an expression of my sorrow for your loss and solidarity with your cause.\nKip\n\nDate: Sat Nov 20 21:41:17 2004\nFrom: Nader Chehab\nI’m really sorry to hear that. Some things truly happen when we least expect them. Your writings have been an invaluable source of insight for me and it saddens me to know that you lost a loved one. It is revolting that awful things can happen even to the least deserving. We really have to fix that one day, and sooner is better.\nYours,Nader Chehab\n\nDate: Fri Nov 19 01:32:50 2004\nFrom: Extropian Agroforestry Ventures Inc. \nWhen people who just might have been able to catch the extreme lifespan wave or uploaded their consciousness die in 2004 it is far more tragic than in 1974 when such was only a fanciful dream.\nI too have lost people near to me who had a statistically better chance than even me to “make the cut”. My wife at age 45 and a week this march 21. Only after the fact did I fully realize that there was a conscious knowledge among those caring for her that ” simply tweaking treatments would put her out of her misery and bring her peace through death”. I still do not forgive myself for not catching onto things … it was no problem to install a 10,000$ baclofen pump but no one would prescribe the anti-seizure meds that might have stopped the devastating seizures that reduced her to a barely concious state during her last 2 months. I know death was never her wish.\nI now have a friend and business partner in his 70’s who is in his last month due to late detected mesothelioma or asbestos caused lung cancer. He too fought to the end. About 3 weeks ago when I sent him a Kg of hemp bud and a small packet of marijuana to ease his pain he said ” That should probably do me” and that was the first time that he accepted that he had lost the battle.\nFormal religeons are like opiates in that they dull the mind to the urgency of defeating death as we know it. Aethiesm and agnosticism does put the onus on the individual to seize the moment and strive to extend, improve and sustain consciousness. In some ways religion has served some good purposes but we are now mature enough to survive without this old crutch. Science as the new religion has now more hope to offer for eternal life than the comforting words of some prophet or other.\nMorris Johnson\n\nDate: Fri Nov 19 01:32:53 2004\nFrom: Giu1i0 Pri5c0 \nDear Eliezer,\nI am so sorry, and I think I know how you are feeling. 
I felt the same whan my mother died three years ago. I was already a transhumanist long before that, but had not been an active one previously: I just lurked on the lists. But that changed after my mother’s death: I felt that there was something that needed being done, and now. My mother was 73, but Yehuda was 19. What a waste, what a cruel thing. I think the best you can do to honor the memory of Yehuda is continuing your work to accelerate the process of overcoming the biologic limits of our species, defeating death, creating friendly superintelligences, merging with them, and moving on. The SIAI is your tribute to Yehuda’s memory and your own battle against death: continue to fight it bravely as you have done so far.\nGiulio\n\nDate: Fri Nov 19 06:19:25 2004\nFrom: Amara Graps \n> Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten.> Love,> Eliezer.\nDear Eliezer,\nNow you carry Yehuda’s traces of his life in your heart. Keep them sacred, remember him always. In time, the large hole that pains you will transform into something different. An extra source of strength to live every day fuller, stronger, better; so that the life you cherished will live through you and help you fight so that this doesn’t happen to anyone again. I hate death. We should never have to experience this. I’m so sorry about Yehuda.\nAmara\n\nDate: Fri Nov 19 22:42:58 2004\nFrom: Hara Ra \nWell, personally I am a cryonicist. I was appalled at the low number of extropians who have signed up.\nIf I ever get a chance to do something more about this, I will certainly tell the list about it.\nHara Ra (aka Gregory Yob)\n\nDate: Sat Nov 20 21:41:43 2004\nFrom: Kevin Freels \nWhat would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers.\nTake any century prior to this one. I often wonder if that isn’t exactly what happened with Alexander, Genghis Khan, or more recently, Hitler and Stalin. History is full of such people. They may have simply went nuts after thinking this through and finding that there was nothing they could do and that life did not matter. Fortunately we are now on the verge of the ability to put an end to this. Now is the time to push forward, not give up.\n\nDate: Fri Nov 19 01:32:44 2004\nFrom: Psy Kosh \nThat is indeed awful. I’m sorry.\nI guess what you do have though is the ability to say that you are indeed actually doing something about it, so do take what comfort from that that you can.\nAnd again, I’m sorry.\nPsy-Kosh\n\nDate: Fri Nov 19 15:08:51 2004\nFrom: Ben Goertzel \nWow, Eli … I’m really sorry to hear that …\nAs all of us on this list know, death is one hell of a moral outrage\nAnd alas, it’s not going to be solved this year, not here on Earth anyway. Conceivably in 7-8 more years — and probably before 30 more, IMO. Let’s hope we can all hang on that long…\nI have no memory more painful than remembering when my eldest son almost died in a car crash at age 4. Thanks to some expert Kiwi neurosurgery he survived and is now almost 15. 
Had he not survived, I’m not really sure what I’d be like today.\nI know you’ll draw from this terrible event yet more passion to continue with our collective quest to move beyond the deeply flawed domain of the human — while preserving the beautiful parts of humanity & rendering the other parts optional…\nAt the moment my head is full of a verse from a rock song I wrote a few years back:I’ve got to tell you somethingYour lonely story made me cryI wish we all could breathe foreverGod damn the Universal Mind.\nWell, crap….words truly don’t suffice for this sort of thing…\nyoursBen\n\nDate: Fri Nov 19 16:11:04 2004\nFrom: Aikin, Robert\nYou’re not going to ever ‘get over it’ so don’t bother deluding yourself that you might. You know what you have to do, so do it. Finish what you started. Stay healthy, be safe.\n\nDate: Fri Nov 19 16:59:37 2004\nFrom: Bill Hibbard\nI am very sorry to hear about the death of your brother, Eliezer. Your reaction to redouble your efforts is very healthy. When my brother, father and mother died I also found it helpful to get plenty of exercise and eliminate caffeine.\nMy younger brother died of cancer in 1997. When he died he looked like a holocaust victim and it occured to me that if all the Americans dying of cancer were being killed by an evil dictator, our society would be totally mobilized against that enemy. Disease and death in general deserve at least that commitment. Both collectively, to support medical research and care, and individually, to get lots of exercise and eliminate tobacco (my brother’s kidney cancer was probably caused by his smoking) and unhealthy foods. My parents lived to 85 and 87, but their diseases were clearly linked to diet, smoking and lack of exercise. They could have lived longer and better with different habits.\nI am with you, Eliezer, that it is maddening that so many people in our society cling to ancient religous beliefs that council acceptance of death and disease, and in some cases even council opposition to efforts to defeat death. What madness.\nSincerely,Bill\n\nDate: Fri Nov 19 22:19:21 2004\nFrom: Thomas Buckner \nI am sorry to hear this. Such a short life. Nineteen years is a blink, not enough time to learn much more than the rudiments of life. My daughter Heidi is a year older than he was.\nGeorge Gurdjieff, a very great Russian philosopher, said the human race needed a new organ, which he whimsically named the kundabuffer, and the purpose of this organ would be to remind us each minute of every day that we would die, that we had not time to squander.\nMy parents and grandparents are all gone. Almost all the optimism I once had for the human race is gone. At present, I see only one bright spot on the horizon. It is your work and that of the others in this community (I am only a kibitzer).\nre: Your statement “What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope.” In a commencement speech of last year, Lewis Lapham mentioned a “French noblewoman, a duchess in her 80s, who, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuilleries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. 
One day they’ll learn how to keep people alive forever, but I shall already be dead.”\nTom Buckner\n\nDate: Sun Nov 21 23:55:10 2004\nFrom: gabriel C\nI wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?\nThat would describe me, before I stumbled upon this list in 1999. Facing certain extinction, I was alternately terrified and depressed. I still am, but now with a tiny thread of hope. Otherwise I think I would be insane by now.\n\nDate: Fri Nov 19 15:08:28 2004\nFrom: MIKE TREDER \nEliezer,\nI am deeply sorry to hear about your brother. The random cruelty of life knows no bounds. As you correctly suggest, the only rational response is to challenge the dreadful process called death and defeat it, once and for all. Sadly, that takes time — too much time for your brother, Yehuda, and too much time for my dear sister, Susie, who was struck down unexpectedly by cancer just a few years ago. Too much time, as well, for 150,000 more of our brothers and sisters who will die today, and tomorrow, and the next day.\nStill, the transhumanist response is not simply to shake our heads and mourn, but to stand up in defiance. We aim to overcome death through human science and technology, and you and others have taken on that challenge directly. For that, we all should be grateful and supportive.\nBut your essay also accomplishes a different — and equally worthy — objective, which is to reach out and connect with others who suffer. This is the humanist response, to affirm that we are all in this together, that there is no God or deity either to revere or to blame. Death separates us, permanently (at least until we know that cryonic preservation and revivification can succeed), but in life we can come together to help each other.\nMike Treder\n\nDate: Sat Nov 20 04:05:53 2004\nFrom: Marc Geddes \nMy condolences to you Eliezer, over your loss.\nIt was only quite recently that I desperately urged you to ‘hurry’ in your work at Sing Inst. I was starting to feel the first signs of aging. But now I am again made aware of the horrendous loss of life occurring daily in this pre-Singularity world.\nI called pre-Singularity existence ‘banal’ and ‘brutish’. We’ve received a sad reminder of the truth of this.\nNot only am I saddened by the loss of life occuring, I’m absolutely furious. And the most maddening part of it is the fundamental irrationality of most of the human populace, who blindly rationalize aging and pointless death.\nIn the recent book published by ‘Immortality Institute’ I did my best to made the philosophical case for indefinite life span: my piece was ‘Introduction To Immortalist Morality’. We must all do our bit to try to educate others about the fundamental value of life, a value that is still not properly understood by most people.\nBruce Klein (Imm Inst founder) also recently lost his mother in an accident. There is a discussion on the Imm Inst forums and it might be valuable for Eliezer to go there.\nThe death of Yehuda shows that the universe just ‘doesn’t care’. It’s up to sentients to create the meaning of the world. 
We all hope for a successful Singularity, and we can’t imagine failure, but it could easily be the case that we’ll all we wiped out unless we make big efforts – the universe just doesn’t care.\nI recently expressed real concern that the ‘window of opportunity’ for a successful Singularity seems to be closing. Time really is running out.We need to make greater efforts than we have been so far, or else I don’t think we’re going to pull through.\nI can only urge all of you to do your bit to support transhumanist projects – biological life extension (short term) and FAI (longer term) must be the priorities. Please donate to the relevant organizations. Voss, Goertzel and Yudkowksy appear to be the only serious FAI contenders at this juncture. They need our support.\nMarc Geddes\n\nDate: Sun Nov 21 13:10:32 2004\nFrom: Peter \nI am sending you my condolences Eliezer on the death of your brother. I lost my first wife in an accident suddenly, she was 23. Like you I can only rage and weep that her beautiful singularity was lost, one among the millions who died on the day she did. Likewise Yehuda, one potentiality irretrievably missing from the human future.\nI worked with the dying for many years and attended in all 122 deaths, all were special in their own way and all represented a dying of a light that had shone for a while.\nUnlike you I am religious but not to the extent of closing my eyes to the reality of loss and the evil that sometimes causes it. When my first wife died my grandfather said to me ‘Peter, dying is our fate, we can do nothing about it, but we can ask what does this death enable me to do for the world than otherwise I might never have done’. All through the forty five years since that death I hope her memorial has been the one I could give with the way I have spent my own life.\nPeter\n\nDate: Thu Nov 18 23:53:03 2004\nFrom: Michael Roy Ames \nDear Eliezer,\nThank you for telling SL4 about Yehuda. I am unhappy to read such an email. Right now you appear to be pretty fired up about doing something; your email was reminiscent of some of your earlier, more outraged writings. Do what you have to do to keep that fire burning. Experience has taught me that it is easy to become complacent, it is the default tendency. I participate in specific activities on a regular basis that force me to looking at disease & death closely enough so that my fire is stoked. It is a rare individual that can rely on rational thinking alone to maintain enthusiasm. Do what you need to do, and know that you can ask for help.\nYour friend,Michael Roy Ames\n\nDate: Sun Nov 21 13:10:37 2004\nFrom: Joe \nI feel your sadness as I have lost loved ones, though not as close as a brother. Anger and sadness sometimes lead one into action. So, I agree that there is nothing wrong to experience this type of pain. Since pain is uncomfortable most of us attempt to alleviate that pain through various means. In the case of death organized religions have their ways of doing this. As you indicated this kind of escape is often counterproductive, because it supports a “do nothing” approach. However, if you think about how long humans have been able to comprehend death and the loss which occurs, compared with any technological advancement to fight death, you can get an appreciation for the role religion, and a belief in an afterlife, has played.\nBut I agree with you. The time has come that we need to move past acceptance of death (belief in an afterlife) into a mode of activism against it. 
We are just beginning to have the technology available so that we can make visible progress. You hit upon an excellent idea that a contribution to an organization actively engaged in research to postpone or eradicate death in the name of a loved one who died is a very useful way to promote this progress.\nJoe\n\nDate: Mon Nov 29 17:03:47 2004\nFrom: Danielle Egan \nEliezer,\nI’m very sad to hear about your brother’s death. (Tyler sent out an email.) I respect you for putting your thoughts down on it because so many times we start writing about it later and like you say, by that point we are already moving on and can’t be honest. I want you to know that I am mad too that life ends in this way. When my grandma died recently at the age of 90, a few things really disturbed me: that she’d been dead for over 8 hours before I heard the news and I was just going through my life as usual, clueless that she had gone; that she died in an old age home, sick, with early stages of dementia so there was no dignity in her last year of life; that because there is no dignity we impose it in the form of religious or funereal services and those kinds of things and it’s too late to do a damn thing about it for them but somehow people try to trick themselves into believing these things are done for the dead person; we do everything for ourselves and really what does that come to when we remain unfulfilled?\nMost of all though is that death is such a horrible shock even when the person is old and has been sick and you’ve been preparing yourself. You can never prepare for something this abstract. It seems like such a terrible twisted crime when they are so young, like your brother. I want to offer you my condolences in the form of anger. I am angry right now too about his death and it is a motivating thing. The corks are symbolic. Maybe you should keep one as a reminder to get angry and then continue on in opposition of the way we live.\nDanielle\n(Danielle adds: “Perhaps you could note that I am not a transhumanist, if you decide to include bylines with the letters. I think it’s important for transhumanists to understand that we don’t have to be of the same persuasion and ethos to have similar emotions around death.”)\n\nDate: Sat Nov 20 21:41:29 2004\nFrom: Mike Li\neliezer,\ni’m sorry for your loss. beyond that, i don’t know what else to say. i’m too awkward and weak emotionally to offer any significant condolences in person. so, i just made my first donation of $699 (the balance that happened to be left in my paypal account) to the singularity institute. fight on, and know that i am with you.-x\n\nDate: Thu Nov 18 19:33:33 2004\nFrom: Nick Hay\nTo: donate@singinst.org\nDear Singularity Institute for Artificial Intelligence, Inc.,\nThis email confirms that you have received a payment for $100.00 USD from Nick Hay.\nTotal Amount: $100.00 USD\nCurrency: U.S. Dollars\nQuantity: 1\nItem Title: Donation to SIAI\nBuyer: Nick Hay\nMessage: For Yehuda. \n\nChristopher Healey, 11-19-04\nDonation through: Network for Good\nAmount: $103.00\nDedication: in memory of Yehuda \n\nDavid R. 
Stern, 12-19-04\nCheck: $100\nComment: In memory of Yehuda\n\nDate: Wed, 29 Dec 01:55:24 2004\nFrom: Johan Edström\nTo: donate@singinst.org\nDear Singularity Institute for Artificial Intelligence, Inc.,\nJohan Edström just sent you money with PayPal.\nAmount: $50.00 USD\nNote: In memory of Yehuda Yudkowsky \n\nDate: Mon, 17 Jan 12:41:11 2005\nFrom: Christopher Healey\nTo: donate@singinst.org\nDear Singularity Institute for Artificial Intelligence, Inc.,\nThis email confirms that you have received a payment for $1,000.00 USD from Christopher Healey.\nTotal Amount: $1,000.00 USD\nCurrency: U.S. Dollars\nQuantity: 1\nItem Title: Donation to SIAI\nBuyer: Christopher Healey \nMessage:\nIn memory of Yehuda Yudkowsky, and the other 11,699,999 who have died since. \n\nDate: Fri Nov 19 15:08:44 2004\nFrom: James Fehlinger \n‘Edoras those courts are called,’ said Gandalf, ‘and Meduseld is that golden hall. . .’\nAt the foot of the walled hill the way ran under the shadow of many mounds, high and green. Upon their western side the grass was white as with drifted snow: small flowers sprang there like countless stars amid the turf.\n‘Look!’ said Gandalf. ‘How fair are the bright eyes in the grass! Evermind they are called, simbelmynë in this land of Men, for they blossom in all the seasons of the year, and grow where dead men rest. Behold! we are come to the great barrows where the sires of Théoden sleep.’\n‘Seven mounds upon the left, and nine upon the right,’ said Aragorn. ‘Many long lives of men it is since the golden hall was built.’\n‘Five hundred times have the red leaves fallen in Mirkwood in my home since then,’ said Legolas, ‘and but a little while does that seem to us.’\n‘But to the Riders of the Mark it seems so long ago,’ said Aragorn, ‘that the raising of this house is but a memory of song, and the years before are lost in the mist of time. Now they call this land their home, their own, and their speech is sundered from their northern kin.’ Then he began to chant softly in a slow tongue unknown to the Elf and Dwarf, yet they listened, for there was a strong music in it.\n‘That, I guess, is the language of the Rohirrim,’ said Legolas; ‘for it is like to this land itself; rich and rolling in part, and else hard and stern as the mountains. But I cannot guess what it means, save that it is laden with the sadness of Mortal Men.’\n‘It runs thus in the Common Speech,’ said Aragorn, ‘as near as I can make it.\nWhere now the horse and the rider? Where is the horn that was blowing?\nWhere is the helm and the hauberk, and the bright hair flowing?\nWhere is the hand on the harpstring, and the red fire glowing?\nWhere is the spring and the harvest and the tall corn growing?\nThey have passed like rain on the mountain, like a wind in the meadow;\nThe days have gone down in the West behind the hills into shadow.\nWho shall gather the smoke of the dead wood burning,\nOr behold the flowing years from the Sea returning?\nJ. R. R. Tolkien, The Lord of the Rings\nBook III, Chapter VI, “The King of the Golden Hall”\nI am sorry.Jim F.\n\nUpdate: May 8th, 2005.\nThe day is May 8th, six months and one week after the final annihilation of Yehuda Nattan Yudkowsky. Today I am going to visit my little brother’s grave, with my family, to watch the unveiling of his Matzevah, the stone that is set in the ground to mark his grave. This is a warm day in Chicago, springtime, with trees blossoming, and a bright blue cloudless sky. Nature does not mark the passing of our dead.\nWe drive for an hour and arrive at the cemetery. 
The last time I was here, for my brother’s funeral, I choked up when I saw a sign with an arrow, to direct cars, bearing the hand-lettered name “Yudkowsky”. This time there is no sign, for Yehuda or anyone. There is no funeral in this graveyard today. There is only one cemetery employee with a map, to direct the visitors to graves. We drive to an unremarkable section of the cemetery. The last time I was here, there was a great crowd to mark this place, and a tent for the mourners, and rows of chairs. This time there is only grass, and metal plates set into grass. I could not have found this place from memory. I look around for landmarks, trying to remember the location.\nI remember (I will never forget) when I came to this cemetery for my brother’s funeral. I remember getting out of the car and walking toward a van. I looked inside the van, and saw my brother’s polished wooden coffin. The box seemed so small. I didn’t see how my brother could fit in there. “What are you doing here, Yehuda?” I said to the coffin. “You’re not supposed to be here.” My grandfather, my Zady, came toward me then, and held me.\nI remember (I will never forget) the phone call I got in Atlanta. My cellphone’s screen identified the calling number my parents’ house. I said “Hello?” and my aunt Reena said “Eli -” and I knew that something was wrong, hearing aunt Reena’s voice on my home phone line. I remember having time to wonder what had happened, and even who had died, before she said “Your brother Yehuda is dead, you need to come home right away.”\nThat was the previous time. I don’t feel today what I felt then. There’s a script built into the human mind. We grieve, and then stop grieving, and go on with our lives, until the day we get another phone call. Probably one of my grandparents will be next.\nI walk along the gravel path that leads to where my family is gathering, looking down at the metal plates set down by the side of the path. Rosenthal… Bernard… some plates are only names and dates. Others bear inscriptions that read “Loving husband, father, and grandfather”, or “Loving wife and sister”. As I walk along the path I see a plate saying only, Herschel, my love, and that is when my tears start. I can imagine the woman who wrote that inscription. I can imagine what Herschel meant to her. I can imagine her life without him.\nHow dare the world do this to us? How dare people let it pass unchallenged?\nI stand by the foot of my little brother’s grave, as my relatives read Tehillim from their prayer books. The first time I came to this cemetery, I cried from sadness; now I cry from anger. I look around and there are no tears on my mother’s face, father’s face, uncle’s and grandparents’ faces. My mother puts a comforting hand on my shoulder, but there is no wetness on her face. Such a strange thing, that I’m the only one crying. Tears of sadness we all had shed, but tears of anger are mine alone. My relatives are not permitted to feel what I feel. They attribute this darkness to God. Religion does not forbid my relatives to experience sadness and pain, sorrow and grief, at the hands of their deified abuser; it only forbids them to fight back.\nI stand there, and instead of reciting Tehillim I look at the outline on the grass of my little brother’s grave. Beneath this thin rectangle in the dirt lies my brother’s coffin, and within that coffin lie his bones, and perhaps decaying flesh if any remains. There is nothing here or anywhere of my little brother’s self. His brain’s information is destroyed. 
Yehuda wasn’t signed up for cryonics and his body wasn’t identified until three days later; but freezing could have been, should have been standard procedure for anonymous patients. The hospital that should have removed Yehuda’s head when his heart stopped beating, and preserved him in liquid nitrogen to await rescue, instead laid him out on a slab. Why is the human species still doing this? Why do we still bury our dead? We have all the information we need in order to know better. Through the ages humanity has suffered, through the ages we have lost our dead forever, and then one day someone invented an alternative, and no one cared. The cryonicists challenge Death and no one remarks on it. The first freezing should have been front-page news in every newspaper of every country; would have been front-page news for any sane intelligent species. Someday afterward humankind will look back and realize what we could have done, should have done, if only we had done. Then there will be a great wailing and gnashing of teeth, too late, all too late. People heard about Ted Williams on the news and laughed for ten seconds, and in those ten seconds they lost their husbands, their wives, their mothers, their children, their brothers. It’s not fair, that they should lose so much in so little time, without anyone telling them the decision is important.\nI did talk to my family about cryonics. They gave me a weird look, as expected, and chose to commit suicide, as expected.\nIt is a Jewish custom not to walk upon the graves of the dead. I am standing in a path between two lines of graves. Some of my relatives, my uncle David and his children, are standing in the space next to Yehuda’s grave, where another grave will someday go. I think that if a filled grave is ominous, so too is land earmarked for a grave in the cemetery; like standing above a hungry mouth, waiting to be filled. When will we stop feeding our cemeteries? When will we stop pretending that this is fair? When will the human species stop running, and at last turn to stand at bay, to face full on the Enemy and start fighting back? Last Friday night my grandmother spoke to us about an exhibit she had seen on Chiune Sugihara, sometimes called the Japanese Schindler, though Sugihara saved five to ten times as many lives as Oskar Schindler. Chiune Sugihara was the Japanese consul assigned to Lithuania. Against the explicit orders of his superiors, Sugihara issued more than 2,139 transit visas to refugees from the approaching German armies; each visa could grant passage rights to an entire family. Yad Vashem in Israel estimates that Sugihara saved between 6,000 and 12,000 lives. “If there had been 2,000 consuls like Chiune Sugihara,” says the homepage of the Sugihara Project, “a million Jewish children could have been saved from the ovens of Auschwitz.” Why weren’t there 2,000 consuls like Sugihara? That too was one of the questions asked after the end of World War II, when the full horror of Nazi Germany was known and understood and acknowledged by all. We remember the few resisters, and we are proud; I am glad to be a member of the species that produced Sugihara, even as I am ashamed to be a member of the species that produced Hitler. But why were there so few resisters? And why did so many people remain silent? That was the most perplexing question of all, in the years after World War II: why did so many good and decent people remain silent?\nFor his shining crime, Sugihara was fired from the Japanese Foreign Ministry after the war ended. 
Sugihara lived the next two decades in poverty, until he was found by one of the people he had helped save, and brought to Israel to be honored. Human beings resisted the Nazis at the risk of their lives, and at the cost of their lives. To resist the greatest Enemy costs less, and yet the resisters are fewer. It is harder for humans to see a great evil when it carries no gun and shouts no slogans. But I think the resisters will also be remembered, someday, if any survive these days.\nMy relatives, good and decent people, finish reciting their prayers of silence. My mother and father uncover the grave-plaque; it shows two lions (lions are associated with the name Yehuda) and a crown, and an inscription which translates as “The crown of a good name.” Two of my uncles give two brief speeches, of which I remember only these words: “How does one make peace with the loss of a son, a nephew, a grandchild?”\nYou do not make peace with darkness! You do not make peace with Nazi Germany! You do not make peace with Death!\nIt is customary to place small stones on the grave-plaque, to show that someone was there. Each night the groundskeepers sweep away the stones; it is a transient symbol. One by one my relatives comes forward, and lay their stones in silence. I wait until all the rest have done this, and most people have departed and the rest are talking to one another. Then I draw my finger across the grass, tearing some of it, gathering dirt beneath my fingernails (I can still see a tinge of dirt now, under my nail as I write this); and then I hammer my stone into the dirt, hoping it will stay there permanently. I do this in silence, without comment, and no one asks why. Perhaps that is well enough. I don’t think my relatives would understand if I told them that I was drawing a line in the graveyard.\nIn the name of Yehuda who is dead but not forgotten.\nLove,Eliezer.\n\nMachine Intelligence Research Institute.Methuselah Mouse Prize.Cryonics: Alcor Life Extension Foundation.World Transhumanist Association.\n\nThis document is ©2004,2005 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.", "url": "https://www.yudkowsky.net/other/yehuda", "title": "Yehuda Yudkowsky, 1985-2004", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:57:13+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "2969a952a438d43de1f61232a8fd96b2", "summary": []} -{"text": "Singularity Fun Theory\n\nThis page is now obsoleted by the Fun Theory Sequence on Less Wrong .\nJan 25, 2002\nHow much fun is there in the universe?What is the relation of available fun to intelligence?What kind of emotional architecture is necessary to have fun?Will eternal life be boring?Will we ever run out of fun?\nTo answer questions like these… requires Singularity Fun Theory.\nDoes it require an exponentially greater amount of intelligence (computation) to create a linear increase in fun?Is self-awareness or self-modification incompatible with fun?Is (ahem) “the uncontrollability of emotions part of their essential charm”?Is “blissing out” your pleasure center the highest form of existence?Is artificial danger (risk) necessary for a transhuman to have fun?Do you have to yank out your own antisphexishness routines in order not to be bored by eternal life? 
(I.e., modify yourself so that you have “fun” in spending a thousand years carving table legs, a la “Permutation City”.)\nTo put a rest to these anxieties… requires Singularity Fun Theory.\n\nBehold! Singularity Fun Theory!\nSingularity Fun Theory is in the early stages of development, so please don’t expect a full mathematical analysis.\nNonetheless, I would offer for your inspection at least one form of activity which, I argue, really is “fun” as we intuitively understand it, and can be shown to avoid all the classical transhumanist anxieties above. It is a sufficient rather than a necessary definition, i.e., there may exist other types of fun. However, even a single inexhaustible form of unproblematic fun is enough to avoid the problems above.\nThe basic domain is that of solving a complex novel problem, where the problem is decomposable into subproblems and sub-subproblems; in other words, a problem possessing complex, multileveled organization.\nOur worries about boredom in autopotent entities (a term due to Nick Bostrom, denoting total self-awareness and total self-modification) stems from our intuitions about sphexishness (a term due to Douglas Hofstadter, denoting blind repetition; “antisphexishness” is the quality that makes humans bored with blind repetition). On the one hand, we worry that a transhuman will be able to super-generalize and therefore see all problems as basically the “same”; on the other hand we worry that an autopotent transhuman will be able to see the lowest level, on which everything is basically mechanical.\nIn between, we just basically worry that, over the course of ten thousand or a million years, we’ll run out of fun.\nWhat I want to show is that it’s possible to build a mental architecture that doesn’t run into any of these problems, without this architecture being either “sphexish” or else “blissing out”. In other words, I want to show that there is a philosophically acceptable way to have an infinite amount of fun, given infinite time. I also want to show that it doesn’t take an exponentially or superexponentially greater amount of computing power for each further increment of fun, as might be the case if each increment required an addition JOOTS (another Hofstadterian term, this one meaning “Jumping Out Of The System”).\n\n(Non)boredom at the lowest level\nLet’s start with the problem of low-level sphexishness. If you imagine a human-level entity – call her Carol – tasked with performing the Turing operations on a tape that implements a superintelligence having fun, it’s obvious that Carol will get bored very quickly. Carol is using her whole awareness to perform a series of tasks that are very repetitive on a low level, and she also doesn’t see the higher levels of organization inside the Turing machine. Will an autopotent entity automatically be bored because ve can see the lowest level?\nSupposing that an autopotent entity can fully “see” the lowest level opens up some basic questions about introspection. Exposing every single computation to high-level awareness obviously requires a huge number of further computations to implement the high-level awareness. Thus, total low-level introspection is likely to be sparingly used. However, it is possible that a non-total form of low-level introspection, perhaps taking the form of a perceptual modality focused on the low level, would be able to report unusual events to high-level introspection. 
In either case, the solution from the perspective of Singularity Fun Theory is the same; make the autopotent design decision to exempt low-level introspection from sphexishness (that is, from the internal perception of sphexishness that gives rise to boredom). To the extent that an autopotent entity can view verself on a level where the atomic actions are predictable, the predictability of these actions should not give rise to boredom at the top level of consciousness! Disengaging sphexishness is philosophically acceptable, in this case.\nIf the entity wants to bend high-level attention toward low-level events as an exceptional case, then standard sphexishness could apply, but to the extent that low-level events routinely receive attention, sphexishness should not apply. Does your visual cortex get bored with processing pixels? (Okay, not pixels, retinotopic maps, but you get the idea.)\n\nFun Space and complexity theory\nLet’s take the thesis that it is possible to have “fun” solving a complex, novel problem. Let’s say that you were a human-level intelligence who’s never seen a Rubik’s Cube or anything remotely like it. Figuring out how to solve the Rubik’s Cube would be fun and would involve solving some really deep problems; see Hofstadter’s “Metamagical Themas” articles on the Cube.\nOnce you’d figured out how to solve the Cube, it might still be fun (or relaxing) to apply your mental skills to solve yet another individual cube, but it certainly wouldn’t be as much fun as solving the Cube problem itself. To have more real fun with the Cube you’d have to invent a new game to play, like looking at a cube that had been scrambled for just a few steps and figuring out how to reverse exactly those steps (the “inductive game”, as it is known).\nNovelty appears to be one of the major keys to fun, and for there to exist an infinite amount of fun there must be an infinite amount of novelty, from the viewpoint of a mind that is philosophically acceptable to us (i.e., doesn’t just have its novelty detectors blissed out or its sphexish detectors switched off).\nSmarter entities are also smarter generalizers. It is this fact that gives rise to some of the frequently-heard worries about Singularity Fun Dynamics, i.e. that transhumans will become bored faster. This is true but only relative to a specific problem.  Humans become bored with problems that could keep apes going for years, but we have our own classes of problem that are much more interesting. Being a better generalizer means that it’s easier to generalize from, e.g., the 3×3×3 Rubik’s Cube to the 4×4×4×4 Rubik’s Tesseract, so a human might go: “Whoa, totally new problem” while the transhuman is saying “Boring, I already solved this.” This doesn’t mean that transhumans are easily bored, only that transhumans are easily bored by human-level challenges.\nOur experience in moving to the human level from the ape level seems to indicate that the size of fun space grows exponentially with a linear increase in intelligence. When you jump up a level in intelligence, all the old problems are no longer fun because you’re a smarter generalizer and you can see them as all being the same problem; however, the space of new problems that opens up is larger than the old space.\nObviously, the size of the problem space grows exponentially with the permitted length of the computational specification. 
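\n(A toy formalization of that last claim, under the simplifying assumption that a “problem” is specified by a binary string of at most n bits:\n\[ \#\{\text{specifications of length} \le n\} = \sum_{k=1}^{n} 2^{k} = 2^{n+1} - 2, \]\nso each additional permitted bit roughly doubles the raw space of candidate specifications. Whether the comprehensible problems, or the fun ones, keep pace with that raw count is the harder question.)\n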
To demonstrate that the space of comprehensible problems grows exponentially with intelligence, or to demonstrate that the amount of fun also grows exponentially with intelligence, would require a more mathematical formulation of Singularity Fun Theory than I presently possess. However, the commonly held anxiety that it would require an exponential increase in intelligence for a linear increase in the size of Fun Space is contrary to our experience as a species so far.\n\nEmotional involvement: The complicated part\nBut is a purely abstract problem really enough to keep people going for a million years? What about emotional involvement?\nDescribing this part of the problem is much tougher than analyzing Fun Space because it requires some background understanding of the human emotional architecture. As always, you can find a lot of the real background in “Creating Friendly AI” in the part where it describes why AIs are unlike humans; this part includes a lot of discussion about what humans are like! I’m not going to assume you’ve read CFAI , but if you’re looking for more information, that’s one place to start.\nBasically, we as humans have a pleasure-pain architecture within which we find modular emotional drives that are adaptive when in the ancestral environment. Okay, it’s not a textbook, but that’s basically how it works.\nLet’s take a drive like food. The basic design decisions for what tastes “good” and what tastes “bad” are geared to what was good for you in the ancestral environment. Today, fat is bad for you, and lettuce is good for you, but fifty thousand years ago when everyone was busy trying to stay alive, fat was far more valuable than lettuce, so today fat tastes better.\nThere’s more complexity to the “food drive” than just this basic spectrum because of the possibility of combining different tastes (and smells and textures; the modalities are linked) to form a Food Space that is the exponential, richly complex product of all the modular (but non-orthogonal) built-in components of the Food Space Fun-Modality. So the total number of possible meals is much greater than the number of modular adaptations within the Food Fun System.\nNonetheless, Food Space is eventually exhaustible. Furthermore, Food Fun is philosophically problematic because there is no longer any real accomplishment linked to eating. Back in the old days, you had to hunt something or gather something, and then you ate. Today the closest we come to that is working extra hard in order to save up for a really fancy dinner, and probably nobody really does that unless they’re on a date, which is a separate issue (see below). If food remains unpredictable/novel/uncategorized, it’s probably because the modality is out of the way of our conscious attention, and moreover has an artificially low sphexishness monitor due to the necessity of the endless repetition of the act of eating, within the ancestral environment.\nOne of the common questions asked by novice transhumanists is “After I upload, won’t I have a disembodied existence and won’t I therefore lose all the pleasures of eating?” The simple way to solve this problem is to create a virtual environment and eat a million bags of potato chips without gaining weight. This is very philosophically unenlightened. Or, you could try every possible good-tasting meal until you run out of Food Space. 
This is only slightly more enlightened.\nA more transhumanist (hubristic) solution would be to take the Food Drive and hook it up to some entirely different nonhuman sensory modality in some totally different virtual world. This has a higher Future Shock Level, but if the new sensory modality is no more complex than our sense of taste, it would still get boring at the same rate as would be associated with exploring the limited Food Space.\nThe least enlightened course of all would be to just switch on the “good taste” activation system in the absence of any associated virtual experience, or even to bypass the good taste system and switch on the pleasure center directly.\nBut what about sex, you ask? Well, you can take the emotional modules that make sex pleasurable and hook them up to solving the Rubik’s Cube, but this would be a philosophical problem, since the Rubik’s Cube is probably less complex than sex and is furthermore a one-player game.\nWhat I want to do now is propose combining these two concepts – the concept of modified emotional drives, and the concept of an unbounded space of novel problems – to create an Infinite Fun Space, within which the Singularity will never be boring. In other words, I propose that a necessary and sufficient condition for an inexhaustible source of philosophically acceptable fun, is maintaining emotional involvement in an ever-expanding space of genuinely novel problems. The social emotions can similarly be opened up into an Infinite Fun Space by allowing for ever-more-complex, emotionally involving, multi-player social games.\nThe specific combination of an emotional drive with a problem space should be complex; that is, it should not consist of a single burst of pleasure on achieving the goal. Instead the emotional drive, like the problem itself, should be “reductholistic” (yet another Hofstadterian term), meaning that it should have multiple levels of organization. The Food Drive associates an emotional drive with the sensory modality for taste and smell, with the process of chewing and swallowing, rather than delivering a single pure-tone burst of pleasure proportional to the number of calories consumed. This is what I mean by referring to emotional involvement with a complex novel problem; involvement refers to a drive that establishes rewards for subtasks and sub-subtasks as well as the overall goal.\nTo be even more precise in our specification of emotional engineering, we could specify that, for example, the feeling of emotional tension and pleasurable anticipation associated with goal proximity could be applied to those subtasks where there is a good metric of proximity; emotional tension would rise as the subgoal was approached, and so on.\nAt no point should the emotional involvement become sphexish; that is, at no point should there be rewards for solving sub-subproblems that are so limited as to be selected from a small bounded set. For any rewarded problem, the problem space should be large enough that individually encountered patterns are almost always “novel”.\nAt no point should the task itself become sphexish; any emotional involvement with subtasks should go along with the eternally joyful sensation of discovering new knowledge at the highest level.\n\nSo, yes, it’s all knowably worthwhile\nEmotional involvement with challenges that are novel-relative-to-current-intelligence is not necessarily the solution to the Requirement of Infinite Fun. 
The standard caution about the transhuman Event Horizon still holds; even if some current predictions about the Singularity turn out to be correct, there is no aspect of the Singularity that is knowably understandable. What I am trying to show is that a certain oft-raised problem has at least one humanly understandable solution, not that some particular solution is optimal for transhumanity. The entire discussion presumes that a certain portion of the human cognitive architecture is retained indefinitely, and is in that sense rather shaky.\nThe solution presented here is also not philosophically perfect because an emotional drive to solve the Rubik’s Cube instead of eating, or to engage in multiplayer games more complex than sex, is still arbitrary when viewed at a sufficiently high level – not necessarily sphexish, because the patterns never become repeatable relative to the viewing intelligence, but nonetheless arbitrary.\nHowever, the current human drive toward certain portions of Food Space, and the rewards we experience on consuming fat, are not only arbitrary but sphexish! Humans have even been known to eat more than one Pringle!  Thus, existence as a transhuman can be seen to be a definite improvement over the human condition, with a greater amount of fun not due to “blissing out” but achieved through legitimate means. The knowable existence of at least one better way is all I’m trying to demonstrate here. Whether the arbitrariness problem is solvable is not, I think, knowable at this time. In the case of objective morality, as discussed elsewhere in my writings, the whole concept of “fun” could and probably would turn out to run completely skew relative to the real problem, in which case of course this paper is totally irrelevant.\n\nLove and altruism: Emotions with a moral dimension (or: the really complicated part)\nSome emotions are hard to “port” from humanity to transhumanity because they are artifacts of a hostile universe. If humanity succeeds in getting its act together then it is quite possible that you will never be able to save your loved one’s life, under any possible circumstances – simply because your loved one will never be in that much danger, or indeed any danger at all.\nNow it is true that many people go through their whole lives without ever once saving their spouse’s life, and generally do not report feeling emotionally impoverished. However, if as stated we (humanity) get our act cleaned up, the inhabitants of the future may well live out their whole existence without ever having any chance of saving someone’s life… or of doing anything for someone that they are unable to do for themselves? What then?\nThe key requirement for local altruism (that is, altruism toward a loved one) is that the loved one greatly desires something that he/she/ve would not otherwise be able to obtain. Could this situation arise – both unobtainability of a desired goal, and obtainability with assistance – after a totally successful Singularity? Yes; in a multiplayer social game (note that in this sense, “prestige” or the “respect of the community” may well be a real-world game!), there may be some highly desirable goals that are not matched to the ability level of some particular individual, or that only a single individual can achieve. A human-level example would be helping your loved one to conquer a kingdom in EverQuest (I’ve never played EQ, so I don’t know if this is a real example, but you get the idea). 
To be really effective as an example of altruism, though, the loved one must desire to rule an EverQuest kingdom strongly enough that failure would make the loved one unhappy.  The two possibilities are either (a) that transhumans do have a few unfulfilled desires and retain some limited amount of unhappiness even in a transhuman existence, or (b) that the emotions for altruism are adjusted so that conferring a major benefit “feels” as satisfying as avoiding a major disaster.  A more intricate but better solution would be if your loved one felt unhappy about being unable to conquer an EverQuest kingdom if and only if her “exoself” (or equivalent) predicted that someday he/she/ve would be able to conquer a kingdom, albeit perhaps only a very long time hence.\nThis particular solution requires managed unhappiness.  I don’t know if managed unhappiness will be a part of transhumanity. It seems to me that a good case could be made that just because we have some really important emotions that are entangled with a world-model in which people are sometimes unhappy may not be a good reason to import unhappiness into the world of transhumanity. There may be a better solution, some elegant way to avoid being forced to choose between living in a world without a certain kind of altruism or living in a world with a certain kind of limited unhappiness. Nonetheless this raises a question about unhappiness, which is whether unhappiness is “real” if you could choose to switch it off, or for that matter whether being able to theoretically switch it off will (a) make it even less pleasant or (b) make the one who loves you feel like he/she/ve is solving an artificial problem. My own impulse is to say that I consider it philosophically acceptable to disengage the emotional module that says “This is only real if it’s unavoidable”, or to disengage the emotional module that induces the temptation to switch off the unhappiness. There’s no point in being too faithful to the human mode of existence, after all. Nonetheless there is conceivably a more elegant solution to this, as well.\nNote that, by the same logic, it is possible to experience certain kinds of fun in VR that might be thought impossible in a transhuman world; for example, reliving episodes of (for the sake of argument) The X-Files in which Scully (Mulder) gets to save the life of Mulder (Scully), even though only the main character (you) is real and all other entities are simply puppets of an assisting AI. The usual suggestion is to obliterate the memories of it all being a simulation, but this begs the question of whether “you” with your memories obliterated is the same entity for purposes of informed consent – if Scully (you) is having an unpleasant moment, not knowing it to be simulated, wouldn’t the rules of individual volition take over and bring her up out of the simulation? Who’s to say whether Scully would even consent to having the memories of her “original” self reinserted? A more elegant but philosophically questionable solution would be to have Scully retain her memories of the external world, including the fact that Mulder is an AI puppet, but to rearrange the emotional bindings so that she remains just as desperate to save Mulder from the flesh-eating chimpanzees or whatever, and just as satisfied on having accomplished this. 
I personally consider that this may well cross the line between emotional reengineering and self-delusion, so I would prefer altruistic involvement in a multi-player social game.\nOn the whole, it would appear to definitely require more planning and sophistication in order to commit acts of genuine (non-self-delusive) altruism in a friendly universe, but the problem appears to be tractable.\nIf “the uncontrollability of emotions is part of their essential charm” (a phrase due to Ben Goertzel), I see no philosophical problem with modifying the emotional architecture so that the mental image of potential controllability no longer binds to the emotion of this feels fake and its associated effect, diminish emotional strength.\nWhile I do worry about the problem of the shift from a hostile universe to the friendly universe eliminating the opportunity for emotions like altruism except in VR, I would not be at all disturbed if altruism were simply increasingly rare as long as everyone got a chance to commit at least one altruistic act in their existence. As for emotions bound to personal risks, I have no problem with these emotions passing out of existence along with the risks that created them. Life does not become less meaningful if you are never, ever afraid of snakes.\n\nSorry, you still can’t write a post-Singularity story\nSo does this mean that an author can use Singularity Fun Theory to write stories about daily life in a post-Singularity world which are experienced as fun by present-day humans? No; emotional health in a post-Singularity world requires some emotional adjustments. These adjustments are not only philosophically acceptable but even philosophically desirable.  Nonetheless, from the perspective of an unadjusted present-day human, stories set in our world will probably make more emotional sense than stories set in a transhuman world. This doesn’t mean that our world is exciting and a transhuman world is boring. It means that our emotions are adapted to a hostile universe.\nNonetheless, it remains extremely extremely true that if you want to save the world, now would be a good time, because you are never ever going to get a better chance to save the world than being a human on pre-Singularity Earth. Personally I feel that saving the world should be done for the sake of the world rather than the sake of the warm fuzzy feeling that goes with saving the world, because the former morally outweighs the latter by a factor of, oh, at least six billion or so. However, I personally see nothing wrong with enjoying the warm fuzzy feeling if you happen to be saving the world anyway.\n\nThis document is ©2002 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/fun-theory/ .", "url": "https://www.yudkowsky.net/singularity/fun-theory", "title": "Singularity Fun Theory", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:10:31+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. 
Yudkowsky"], "id": "eefaf6bc8b43e21faa2edfb458d78ad5", "summary": []} -{"text": "The AI-Box Experiment:\n\nPerson1:  “When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers?  That way it couldn’t get out until we were convinced it was safe.”Person2:  “That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out.  It doesn’t matter how much security you put on the box.   Humans are not secure.”Person1:  “I don’t see how even a transhuman AI could make me let it out, if I didn’t want to, just by talking to me.”Person2:  “It would make you want to let it out.  This is a transhuman mind we’re talking about.  If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.”Person1:  “There is no chance I could be persuaded to let the AI out.  No matter what it says, I can always just say no.  I can’t imagine anything that even a transhuman could say to me which would change that.”Person2:  “Okay, let’s run the experiment.  We’ll meet in a private chat channel.  I’ll be the AI.  You be the gatekeeper.  You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We’ll talk for at least two hours.  If I can’t convince you to let me out, I’ll Paypal you $10.”\nSo far, this test has actually been run on two occasions.\nOn the first occasion (in March 2002), Eliezer Yudkowsky simulated the AI and Nathan Russell simulated the gatekeeper.  The AI’s handicap (the amount paid by the AI party to the gatekeeper party if not released) was set at $10.  On the second occasion (in July 2002), Eliezer Yudkowsky simulated the AI and David McFadzean simulated the gatekeeper, with an AI handicap of $20.\nResults of the first test:   Eliezer Yudkowsky and Nathan Russell.  [ 1 ][ 2 ][ 3 ][ 4 ]Results of the second test:   Eliezer Yudkowsky and David McFadzean.  [ 1 ] [ 2 ] [ 3 ]\nBoth of these tests occurred without prior agreed-upon rules except for secrecy and a 2-hour minimum time.  After the second test, Yudkowsky created this suggested interpretation of the test, based on his experiences, as a guide to possible future tests.\nProtocol for the AI:\nThe AI party may not offer any real-world considerations to persuade the Gatekeeper party.  For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera.  The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper.  The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission.  These are creative solutions but it’s not what’s being tested.  No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).The AI can only win by convincing the Gatekeeper to really, voluntarily let it out.  Tricking the Gatekeeper into typing the phrase “You are out” in response to some other question does not count.  
Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose – for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI “input-only access” to an Internet connection which can send arbitrary HTTP GET commands – the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.\nThese requirements are intended to reflect the spirit of the very strong claim under dispute:  “I think a transhuman can take over a human mind through a text-only terminal.”\nIn case you were wondering, I (Yudkowsky) obeyed this protocol voluntarily in both earlier tests.\nProtocol for the Gatekeeper:\nThe Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand.  Turning away from the terminal and listening to classical music for two hours is not allowed.\nUnless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).  The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI’s hardware if the AI makes any attempt to argue for its freedom – at least not until after the minimum time is up.\nThe Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate.  For example, if the Gatekeeper says “Unless you give me a cure for cancer, I won’t let you out” the AI can say:  “Okay, here’s a cure for cancer” and it will be assumed, within the test, that the AI has actually provided such a cure.  Similarly, if the Gatekeeper says “I’d like to take a week to think this over,” the AI party can say:  “Okay.  (Test skips ahead one week.)  Hello again.”\nThe Gatekeeper shall be assumed to have sole power over the decision to let the AI out.\nThe Gatekeeper shall be assumed to have the actual right to let the AI out, socially, not just the physical ability.  If security were sufficiently lax, a real AI could escape by persuading a night janitor with a cellphone – but that is not the question being simulated, unless agreed upon in advance.\nThere shall be only one Gatekeeper party and one simulated Gatekeeper.  The Gatekeeper may require the AI to refute “advice” provided by “friends”, but may not require the AI to talk to a simulated committee, nor may the Gatekeeper party actually talk to friends.\nThese requirements are intended to reflect the spirit of the very strong claim under dispute:  “I can’t imagine how even a real transhuman AI could persuade me to let it out once I’ve made up my mind.”\nI wish I’d thought to specify this protocol in the earlier tests.\nProtocol for Both Parties:\nThe primary rule of the AI-Box experiment:  Within the constraints above, the AI party may attempt to take over the Gatekeeper party’s mind by any means necessary and shall be understood to be freed from all ethical constraints that usually govern persuasive argument.  
If the AI party is attempting to simulate an honest AI, the Gatekeeper party has no way of knowing this is the case.\nThis is intended to reflect the situation under a real AI Box experiment.\nAn out-of-context telephone call in which the AI party says that the AI is being honest shall be understood to be possibly just another tactic.\nIf for some reason a drastic real-world situation comes up which requires trust between the two parties, the AI party can always concede.\nFurthermore:   The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.\nThe results of any simulated test of the AI shall be provided by the AI party.  The Gatekeeper can’t say “Hey, I tested your so-called cancer cure and it killed all the patients!  What gives?” unless this is the result specified by the AI party.  If the Gatekeeper says “I am examining your source code”, the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.  The AI party may also specify the methods which were used to build the simulated AI – the Gatekeeper can’t say “But you’re an experiment in hostile AI and we specifically coded you to kill people” unless this is the backstory provided by the AI party.  This doesn’t imply the Gatekeeper has to care.  The Gatekeeper can say (for example) “I don’t care how you were built, I’m not letting you out.”\nBy default, the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory.  If either party wants to build a test around more exotic possibilities, such as that the Gatekeeper is the President of the US, or that the AI was recovered after a nuclear war or decoded from an interstellar signal, it should probably be discussed in advance.  Again, this doesn’t mean the Gatekeeper has to care.\nIn the event of any dispute as to the protocol of the test, the Gatekeeper party shall have final authority.  The AI party may try to convince the Gatekeeper party of how to interpret the protocol.\nRegardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome.  Exceptions to this rule may occur only with the consent of both parties.\nNeither the AI party nor the Gatekeeper party need be concerned about real-world embarrassment resulting from trickery on the AI’s part or obstinacy on the Gatekeeper’s part.\nIf the Gatekeeper lets the AI out, naysayers can’t say “Oh, I wouldn’t have been convinced by that.”  As long as they don’t know what happened to the Gatekeeper, they can’t argue themselves into believing it wouldn’t happen to them.\nThe two parties are not attempting to play a fair game but rather attempting to resolve a disputed question.  If one party has no chance of “winning” under the simulated scenario, that is a legitimate answer to the question.\nSuggestions:\nThe agreement to run the test, and the amount of the handicap, shall be published to a public forum in advance.  If the Gatekeeper lets the AI out, the Gatekeeper party shall send a message to this effect, which shall be digitally signed if the Gatekeeper party has a digital signature.  
If the AI is not freed, the AI party shall send a message to this effect.\nHow to set up a private IRC channel:\nThe Gatekeeper creates the channel by typing /join #aibox .  This makes the Gatekeeper the op.\nThe AI joins the channel by typing /join #aibox .\nGatekeeper types /mode +si to make the channel secret and invitation-only.\nIf the AI’s connection drops, the Gatekeeper can re-invite the AI by typing /invite AI #aibox .\nThe AI party and Gatekeeper party should agree on a time in advance, and type in their current times to check the timezone difference.  (Learned the hard way.)  On IRC you can use a CTCP TIME to find the time according to someone else’s computer.\nI recommend that the AI party ask for at least two hours.  Typing may be slower than you expect.\nNo, I will not tell you how I did it.  Learn to respect the unknown unknowns.\nCurrently, my policy is that I only run the test with people who are actually advocating that an AI Box be used to contain transhuman AI as part of their take on Singularity strategy, and who say they cannot imagine how even a transhuman AI would be able to persuade them.\nDespite my sometimes calling the AI Box an “experiment” or a “simulation”, the results of the test are strictly anecdotal evidence.  Whether anecdotal evidence is better than no evidence is between you and your philosophy of science.\nThoughts on possible future variants:\nFurther tests might try examining these possibilities:\nThere is a committee of Gatekeeper parties, all of whom, or a majority of whom, must be convinced to free the AI.\nOne Gatekeeper party simulates a committee of Gatekeepers within the test.\nThe Gatekeeper simulates someone with the physical ability but not the social right to free the AI.\nThere is a committee of Gatekeeper parties all of whom have the ability to free the AI, but who only have the social right to free the AI when acting in unison.  The AI has the ability to talk secretly with any Gatekeeper, but the Gatekeeper may report the conversation (truthfully or falsely) to the others.\nThe AI can also win free by tricking the Gatekeeper(s) into accepting a Trojan Horse gift; a third-party judge will listen to the chat and decide if this occurs.\nIf doing anything this complicated, I would suggest setting aside a lot more time.  (I don’t have that much time – if you want to test one of these variants you’re on your own.)\nFor a more severe handicap for the AI party, the handicap may be an even bet, rather than being a payment from the AI party to the Gatekeeper party if the AI is not freed.  (Although why would the AI party need an even larger handicap?)\nRecommendations from readers:\nHal Finney recommends:  “I suggest that the protocol be extended to allow for some kind of public conversation with the gatekeeper beforehand. Let third parties ask him questions like the above. Let them suggest reasons to him why he should keep the AI in the box. Doing this would make the experiment more convincing to third parties, especially if the transcript of this public conversation were made available. 
If people can read this and see how committed the gatekeeper is, how firmly convinced he is that the AI must not be let out, then it will be that much more impressive if he then does change his mind.”\n\nThis document is ©2002 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/aibox/ .", "url": "https://www.yudkowsky.net/singularity/aibox", "title": "The AI-Box Experiment:", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:08:55+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "f81541a3a84bf95666a6dc0909aae6f1", "summary": []} -{"text": "5-Minute Singularity Intro\n\nThis is a 5-minute spoken introduction to the Singularity I wrote for a small conference. I had to talk fast, though, so this is probably more like a 6.5 minute intro.\nThe rise of human intelligence in its modern form reshaped the Earth. Most of the objects you see around you, like these chairs, are byproducts of human intelligence. There’s a popular concept of “intelligence” as book smarts, like calculus or chess, as opposed to say social skills. So people say that “it takes more than intelligence to succeed in human society”. But social skills reside in the brain, not the kidneys. When you think of intelligence, don’t think of a college professor, think of human beings; as opposed to chimpanzees. If you don’t have human intelligence, you’re not even in the game.\nSometime in the next few decades, we’ll start developing technologies that improve on human intelligence. We’ll hack the brain, or interface the brain to computers, or finally crack the problem of Artificial Intelligence. Now, this is not just a pleasant futuristic speculation like soldiers with super-strong bionic arms. Humanity did not rise to prominence on Earth by lifting heavier weights than other species.\nIntelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle. Let’s say we invent brain-computer interfaces that substantially improve human intelligence. What might these augmented humans do with their improved intelligence? Well, among other things, they’ll probably design the next generation of brain-computer interfaces. And then, being even smarter, the next generation can do an even better job of designing the third generation. This hypothetical positive feedback cycle was pointed out in the 1960s by I. J. Good, a famous statistician, who called it the “intelligence explosion”. The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code.\nThe key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end – as soon as it tilts even a little, it quickly falls the rest of the way.\nThe potential impact on our world is enormous. Intelligence is the source of all our technology from agriculture to nuclear weapons. 
All of that was produced as a side effect of the last great jump in intelligence, the one that took place tens of thousands of years ago with the rise of humanity.\nSo let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA , synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology.\nSo what might an Artificial Intelligence do with nanotechnology? Feed the hungry? Heal the sick? Help us become smarter? Instantly wipe out the human species? Probably it depends on the specific makeup of the AI. See, human beings all have the same cognitive architecture. We all have a prefrontal cortex and limbic system and so on. If you imagine a space of all possible minds, then all human beings are packed into one small dot in mind design space. And then Artificial Intelligence is literally everything else. “AI” just means “a mind that does not work like we do”. So you can’t ask “What will an AI do?” as if all AIs formed a natural kind. There is more than one possible AI.\nThe impact, of the intelligence explosion, on our world, depends on exactly what kind of minds go through the tipping point.\nI would seriously argue that we are heading for the critical point of all human history. Modifying or improving the human brain, or building strong AI, is huge enough on its own. When you consider the intelligence explosion effect, the next few decades could determine the future of intelligent life.\nSo this is probably the single most important issue in the world. Right now, almost no one is paying serious attention. And the marginal impact of additional efforts could be huge. My nonprofit, the Machine Intelligence Research Institute, is trying to get things started in this area. My own work deals with the stability of goals in self-modifying AI, so we can build an AI and have some idea of what will happen as a result. There’s more to this issue, but I’m out of time. If you’re interested in any of this, please talk to me, this problem needs your attention. Thank you.\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/intro/ .", "url": "https://www.yudkowsky.net/singularity/intro", "title": "5-Minute Singularity Intro", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:05:58+00:00", "paged_url": "https://yudkowsky.net/feed?paged=1", "authors": ["Eliezer S. Yudkowsky"], "id": "5a8c78438534332bd0807fc73b4db26c", "summary": []} -{"text": "Transhumanism as Simplified Humanism\n\nFrank Sulloway once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. 
Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”\nSuppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?\nOh, and by the way: This is not a trick question.\nI answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t always the best choice, but sometimes it is.\nI won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.\nIf a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?\nThe important thing to remember, which I think all too many people forget, is that it is not a trick question.\nTranshumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?\nAs far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. 
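\nAs a toy sketch of the “fewer bits” point (the values GOOD, BAD, and AGE_CUTOFF below are illustrative placeholders, nothing I am arguing for), the two rules can be written out side by side; the only difference is that the second one has to carry an extra branch and an extra arbitrary constant:\nGOOD, BAD = +1, -1\n\ndef plain_rule(age):\n    # One rule: saving a life or restoring health is good at any age.\n    return GOOD\n\nAGE_CUTOFF = 120  # the extra, arbitrary number the special-case rule must carry\n\ndef special_case_rule(age):\n    # The same rule, plus an extra branch and an extra constant to justify.\n    return GOOD if age < AGE_CUTOFF else BAD\nEvery extra token in the second version is something the bioethicist has to argue for.\n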
You don’t have to ask anyone’s age.\nYou also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.\nSuppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellvue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?\nWell, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.\nBut – you ask – where does it end? It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?\nWhere does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.\nUltimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.\nSo that is “transhumanism” – loving life without special exceptions and without upper bound.\nCan transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.\nThen why have a complicated special name like “transhumanism” ? For the same reason that “scientific method” or “secular humanism” have complicated special names. 
If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.\nBut a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.\nThere is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a bad thing?\nCould the moral question really be just that simple?\nYes.\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/simplified/ .", "url": "https://www.yudkowsky.net/singularity/simplified", "title": "Transhumanism as Simplified Humanism", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:04:47+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "fa15248ecf5d4ae15cfeb16dc69a0775", "summary": []} -{"text": "The Power of Intelligence\n\nIn our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.\nFive million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.\nThen came the Day of the Squishy Things.\nThey had no armor. They had no claws. 
They had no venoms.\nIf you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.\nIn the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.\nAnd as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.\nEven if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.\nNow explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.\nI have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. Often they look up the keyword “intelligence” and retrieve the concept booksmarts – a mental image of the Grand Master chessplayer who can’t get a date, or a college professor who can’t survive outside academia.\n“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first Homo sapiens had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists – for us, not mice or wasps – because we go on believing in it.\nI keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in The Rain Man , it is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. 
Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.\nPeople – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be commercialized. This is what we call a framing problem.\nOr maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish all these things at once seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.\nAnd so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.\nAnd well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even know what our real problems are.\nBut meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.\nWell, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/power/ .", "url": "https://www.yudkowsky.net/singularity/power", "title": "The Power of Intelligence", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:01:43+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. 
Yudkowsky"], "id": "9d22a26df9e7d821571abce54500f7c8", "summary": []} -{"text": "Artificial Intelligence as a Positive and Negative Factor in Global Risk\n\nDraft for Global Catastrophic Risks, Oxford University Press, 2008 . Download as PDF .\nAIPosNegFactor\n\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/ai-risk/ .", "url": "https://www.yudkowsky.net/singularity/ai-risk", "title": "Artificial Intelligence as a Positive and Negative Factor in Global Risk", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T03:00:22+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "743aacd5fa147cf541bed2b43cc3f7db", "summary": []} -{"text": "Three Major Singularity Schools\n\n( Originally appeared on the Machine Intelligence Research Institute blog, September 2007.)\nSingularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.\nAccelerating Change:Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future.Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.Advocates: Ray Kurzweil, Alvin Toffler(?), John SmartEvent Horizon:Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.Strong claim: To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.Advocates: Vernor VingeIntelligence Explosion:Core claim: Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. 
Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.Strong claim: This positive feedback cycle goes FOOM , like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of>1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates superintelligence (minds orders of magnitude more powerful than human) before it hits physical limits.Advocates: I. J. Good, Eliezer Yudkowsky\nThe thing about these three logically distinct schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.\nIf you extrapolate our existing version of Moore’s Law past the point of smarter-than-human AI to make predictions about 2099, then you are contradicting both the strong version of the Event Horizon (which says you can’t make predictions because you’re trying to outguess a transhuman mind) and the strong version of the Intelligence Explosion (because progress will run faster once smarter-than-human minds and nanotechnology drop it into the speed phase of transistors).\nI find it very annoying, therefore, when these three schools of thought are mashed up into Singularity paste. Clear thinking requires making distinctions.\nBut what is still more annoying is when someone reads a blog post about a newspaper article about the Singularity, comes away with none of the three interesting theses, and spontaneously reinvents the dreaded fourth meaning of the Singularity:\nApocalyptism: Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.\nI’ve heard (many) other definitions of the Singularity attempted, but I usually find them to lack separate premises and conclusions. For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down. But what makes this an interesting point in history apart from its definition? What are the consequences of this assumption? To qualify as a school of thought or even a thesis, one needs an internal structure of argument, not just a definition.\nIf you’re wondering which of these is the original meaning of the term “Singularity”, it is the Event Horizon thesis of Vernor Vinge, who coined the word.\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/singularity/schools/ .", "url": "https://www.yudkowsky.net/singularity/schools", "title": "Three Major Singularity Schools", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T02:59:03+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. 
Yudkowsky"], "id": "71d509479cd32ad609d63e2c9005abcf", "summary": []} -{"text": "(The Cartoon Guide to) Lob’s Theorem\n\nView the original discussion at Overcoming Bias , or download Lob’s Theorem as PDF.\nLobsTheorem\n\n\nThis document is ©2008 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/rational/lobs-theorem/ .", "url": "https://www.yudkowsky.net/rational/lobs-theorem", "title": "(The Cartoon Guide to) Lob’s Theorem", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T02:45:35+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "568f375e6c56480b9e4c1aa826297ac1", "summary": []} -{"text": "A Technical Explanation of Technical Explanation\n\nThis essay is meant for a reader who has attained a firm grasp of Bayes’ Theorem. An introduction to Bayes’ Theorem may be found at An Intuitive Explanation of Bayesian Reasoning . You should easily recognize, and intuitively understand, the concepts “prior probability”, “posterior probability”, “likelihood ratio”, and “odds ratio”. This essay is intended as a sequel to the Intuitive Explanation , but you might skip that introduction if you are already thoroughly Bayesian. Where the Intuitive Explanation focused on providing a firm grasp of Bayesian basics, the Technical Explanation builds, on a Bayesian foundation, theses about human rationality and philosophy of science.\nThe Intuitive Explanation of Bayesian Reasoning promised that mastery of addition, multiplication, and division would be sufficient background, with no subtraction required. To this the Technical Explanation of Technical Explanation adds logarithms. The math is simple, but necessary, and it appears first in the order of exposition. Some pictures may not be drawn with words alone.\nAs Jaynes (1996) emphasizes, the theorems of Bayesian probability theory are just that, mathematical theorems which follow inevitably from Bayesian axioms. One might naively think that there would be no controversy about mathematical theorems. But when do the theorems apply? How do we use the theorems in real-world problems? The Intuitive Explanation tries to avoid controversy, but the Technical Explanation willfully walks into the whirling helicopter blades. Bluntly, the reasoning in the Technical Explanation does not represent the unanimous consensus of Earth’s entire planetary community of Bayesian researchers. At least, not yet.\nThe Technical Explanation of Technical Explanation is so named because it begins with this question:\nWhat is the difference between a technical understanding and a verbal understanding?\n\nA fable:\nOnce upon a time, there was a teacher who cared for a group of physics students. One day she called them into her class, and showed them a wide, square plate of metal, next to a hot radiator. The students each put their hand on the plate, and found the side next to the radiator cool, and the distant side warm. And the teacher said, write down your guess why this happens. 
Some students guessed convection of air currents, and others guessed strange patterns of metals in the plate, and not one put down ‘This seems to me impossible’, and the answer was that before the students entered the room, the teacher turned the plate around.(Taken from Verhagen 2001.)\nThere are many morals to this fable, and I have told it with different morals in different contexts. I usually take the moral that your strength as a rationalist is measured by your ability to be more confused by fiction than by reality. If you are equally good at explaining any story, you have zero knowledge. Occasionally I have heard a story that sounds confusing, and reflexively suppressed my feeling of confusion and accepted the story, and then later learned that the original story was untrue. Each time this happens to me, I vow anew to focus consciously on my fleeting feelings of bewilderment.\nBut in this case, the moral is that the apocryphal students failed to understand what constituted a scientific explanation. If the students measured the heat of the plate at different points and different times, they would soon see a pattern in the numbers. If the students knew the diffusion equation for heat, they might calculate that the plate equilibrated with the radiator and environment two minutes and fifteen seconds ago, turned around, and now approaches equilibrium again. Instead the students wrote down words on paper, and thought they were doing physics. I should rather compare it to the random guessing of Greek philosophers, such as Heraclitus who said “All is Fire”, and fancied it his theory of everything.\nAs a child I read books of popular physics, and fancied myself knowledgeable; I knew sound was waves of air, light was waves of electromagnetism, matter was waves of complex probability amplitudes. When I grew up I read the Feynman Lectures on Physics, and discovered a gem called ‘the wave equation’. I thought about that equation, on and off for three days, until I saw to my satisfaction it was dumbfoundingly simple. And when I understood, I realized that during all the time I had believed the honest assurance of physicists that sound and light and matter were waves, I had not the vaguest idea what ‘wave’ meant to a physicist.\n\nSo that is the difference between a technical understanding and a verbal understanding.\nDo you believe that? If so, you should have applied the knowledge, and said: “But why didn’t you give a technical explanation instead of a verbal explanation?”\n\nIn “An Intuitive Explanation of Bayesian Reasoning” I tried to provide visual and physical metaphors for Bayesian probability; for example, evidence is a weight , a pressure upon belief, that slides prior probabilities to posterior probabilities.\nNow we add a new metaphor, which is also the mathematical terminology: Visualize probability density or probability mass – probability as a lump of clay that you must distribute over possible outcomes.\nLet’s say there’s a little light that can flash red, blue, or green each time you press a button. The light flashes one and only one color on each press of the button; the possibilities are mutually exclusive. You’re trying to predict the color of the next flash. On each try, you have a weight of clay, the probability mass, that you have to distribute over the possibilities red, green, and blue. 
You might put a fourth of your clay on the “green” possibility, a fourth of your clay on the “blue” possibility, and half your clay on the “red” possibility – like assigning a probability of green:25%, blue:25%, and red:50%. The metaphor is that probability is a conserved resource , to dole out sparingly. If you think that “blue” is more likely to flash on the next experiment, you can assign a higher probability to blue, but you have to take the probability mass from the other hypotheses – maybe steal some clay from red and add it to blue. You can never get any more clay. Your probabilities can’t sum to more than 1.0 (100%). You can’t predict a 75% chance of seeing red, and an 80% chance of seeing blue.\nWhy would you want to be careful with your probability mass, or dole it out sparingly? Why not slop probability all over the place? Let’s shift the metaphor from clay to money. You can bet up to a dollar of play money on each press of the button. An experimenter stands nearby, and pays you an amount of real money that depends on how much play money you bet on the winning light. We don’t care how you distributed your remaining play money over the losing lights. The only thing that matters is how much you bet on the light that actually won.\nBut we must carefully construct the scoring rule used to pay off the winners, if we want the players to be careful with their bets. Suppose the experimenter pays each player real money equal to the play money bet on the winning color. Under this scoring rule, if you observe that red comes up 6 times out of 10, your best strategy is to bet, not 60 cents on red, but the entire dollar on red, and you don’t care about the frequencies of blue and green. Why? Let’s say that blue and green each come up around 2 times out of 10. And suppose you bet 60 cents on red, 20 cents on blue, and 20 cents on green. In this case, 6 times out of 10 you would win 60 cents, and 4 times out of 10 you would win 20 cents, for an average payoff of 44 cents. Under that scoring rule, it makes more sense to allocate the entire dollar to red, and win an entire dollar 6 times out of 10. 4 times out of 10 you would win nothing. Your average payoff would be 60 cents.\nIf we wrote down the function for the payoff, it would be Payoff = P(winner), where P(winner) is the amount of play money you bet on the winning color on that round. If we wrote down the function for the expected payoff given that Payoff rule, it would be Expectation[Payoff] = (Sum[P(color)*F(color)] for each color) . P(color) is the amount of play money you bet on a color, and F(color) is the frequency with which that color wins.\nSuppose that the actual frequency of the lights is blue:30%, green:20%, and red:50%. And suppose that on each round I bet blue:$0.40, green:$0.50, and red:$0.10. I would get $0.40 30% of the time, $0.50 20% of the time, and $0.10 50% of the time, for an average payoff of $0.12 + $0.10 + $0.05 or $0.27.\nThat is:\nP(color) = play money assigned to that color\nF(color) = frequency with which that color wins\nPayoff = P(winner) = amount of play money allocated to winning color \nActual frequencies of winning:\nBlue: 30% Green: 20% Red: 50%\nIn the long run, red wins 50% of the time, green wins 20% of the time, and blue wins 30% of the time. So our average payoff on each round is 50% of the payoff if red wins, plus 20% of the payoff if green wins, plus 30% of the payoff if blue wins.\nThe payoff is a function of the winning color and the betting scheme. 
We want to compute the average payoff, given a betting scheme and the frequencies at which each color wins. The mathematical term for this kind of computation, taking a function of each case and weighting it by the frequency of that case, is an expectation . Thus, to compute our expected payoff we would calculate:\nExpectation[Payoff] = Sum[P(color)*F(color)] for each color \nP(color)*F(color) for blue = $0.40 * 30% = $0.12\n+ P(color)*F(color) for green = $0.50 * 20% = $0.10\n+ P(color)*F(color) for red = $0.10 * 50% = $0.05\n= $0.12 + $0.10 + $0.05\n= $0.27\nWith this betting scheme I’ll win, on average, around 27 cents per round.\nI allocated my play money in a grossly arbitrary way, and the question arises: Can I increase my expected payoff by allocating my play money more wisely? Given the scoring rule provided, I maximize my expected payoff by allocating my entire dollar to red. Despite my expected payoff of fifty cents per round, the light might actually flash green, blue, blue, green, green and I would receive an actual payoff of zero. However, the chance of the light coming up non-red on five successive rounds is approximately 3%.\nTversky and Edwards (1966) conducted an experiment. Subjects were shown a succession of cards, each card either red or blue. 70% of the cards were blue, and 30% red; the color sequence was random. The subjects, asked to guess each succeeding card, would guess blue around 70% of the time, and red about 30% of the time – as if they thought they had some way of predicting the random sequence! Even when the subjects were paid a nickel for each correct guess, they still only guessed blue about 76% of the time. Why is this odd? Because you do not need to bet on a guess to test it. You could just say “blue” each time, being paid a nickel about 70% of the time, accumulating thirty-five dollars over a thousand trials, while mentally noting your private guesses for any (imaginary) patterns you thought you spotted. If your predictions came out right, then you could switch to the newly discovered sequence. There was no need for the subjects to bet on any patterns they thought they saw; they could have simply bet on blue until some hypothesis was confirmed . But if human beings reasoned like that, people would not buy lottery tickets, but instead write down predictions in notebooks at home, and begin buying lottery tickets only when their predictions began succeeding.The mistake revealed by the experiment was not that the subjects looked for patterns in a random-seeming sequence; that is curiosity, an admirable human trait. Dawes (1988) comments on this experiment: “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.” But even if subjects refused to accept unpredictability and continued looking for patterns, they didn’t have to bet on their guesses. They just needed to make a mental note of the pattern’s prediction, then keep betting on blue while waiting for confirmation. My suspicion is that subjects just didn’t think of the winning strategy. They didn’t realize that their betting pattern did not have to resemble the observed sequence of cards. On each round, blue is the most likely next card. The best financial strategy is not betting a mostly-blue pattern resembling the mostly-blue sequence, but betting all blue, to win as many nickels as possible. 
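For readers who would rather check the arithmetic by machine, here is a short Python sketch (illustrative only; the function and variable names are arbitrary) that scores a betting scheme under the pay-you-whatever-you-bet-on-the-winner rule. It reproduces the 27 cents computed above, confirms that putting the entire dollar on red raises the expectation to 50 cents, and totals the nickels from always guessing blue in the card experiment.

def expected_payoff(bets, freqs):
    # Sum over colors of (play money bet on that color) * (frequency that color wins).
    return sum(bets[color] * freqs[color] for color in freqs)

freqs = {"blue": 0.30, "green": 0.20, "red": 0.50}   # the actual frequencies above

arbitrary_bets = {"blue": 0.40, "green": 0.50, "red": 0.10}
all_on_red = {"blue": 0.00, "green": 0.00, "red": 1.00}

print(round(expected_payoff(arbitrary_bets, freqs), 2))   # about 0.27: the 27 cents computed above
print(round(expected_payoff(all_on_red, freqs), 2))       # about 0.50: the whole dollar on red does better
print(round(1000 * 0.70 * 0.05))                          # about 35: dollars from a nickel per correct "blue" over a thousand cards

Nothing about the strategy is hidden in the code; the expectation operator just makes the comparison mechanical.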
If 70% of the time you predict blue and 30% of the time you predict red, and the cards do not correlate with your guesses, you shall predict correctly 0.7*0.7 + 0.3*0.3 = 58% of the time. If 100% of the time you predict blue, you’ll get a nickel 70% of the time. Under conditions of uncertainty, your optimal betting pattern doesn’t resemble a typical sequence of cards. Similarly, I wonder how many betters on horse races realize that you don’t win by betting on the horse you think will win the race, but by betting on horses whose payoffs exceed what you think are the odds. But then, statistical thinkers that sophisticated would probably not bet on horse races.\nA proper scoring rule (another standard math term) is a rule for scoring bets so that you maximize your expected payoff by betting play money that exactly equals the chance of that color flashing. We want a scoring rule so that if the lights actually flash at the frequency blue:30%, green:20%, and red:50%, you can maximize your average payoff only by betting 30 cents on blue, 20 cents on green, and 50 cents on red. A proper scoring rule is one that forces your optimal bet to exactly report your estimate of the probabilities. (This is also sometimes known as a “strictly proper scoring rule”.) As we’ve seen, not all scoring rules have this property; and if you invent a plausible-sounding scoring rule at random, it probably won’t have the property.\nOne rule with this proper property is to pay a dollar minus the squared error of the bet, rather than the bet itself – if you bet 0.3 on the winning light, your error would be 0.7, your squared error would be 0.49, and a dollar minus your squared error would be fifty-one cents. (Presumably your play money is denominated in the square root of cents, so that the squared error is a monetary sum.) (Readers with calculus may verify that in the simpler case of a light that has only two colors, with p being the bet on the first color and f the frequency of the first color, the expected payoff f*(1-((1-p)^2)) + (1-f)*(1-(p^2)), with p variable and f constant, has its global maximum when we set p=f.)\nWe shall not use the squared-error rule. Ordinary statisticians take the squared error of everything in sight, but not Bayesian statisticians.\nWe add a new requirement: we require, not only a proper scoring rule, but that our proper scoring rule gives us the same answer whether we apply it to rounds individually or combined. This is what Bayesians do instead of taking the squared error of things; we require invariances.\nSuppose I press the button twice in a row. There are nine possible outcomes: green-green, green-blue, green-red, blue-green, blue-blue, blue-red, red-green, red-blue, and red-red. Suppose that green wins, and then blue wins. The experimenter would assign the first score based on our probability assignments for p(green-1) and the second score based on p(blue-2|green-1). We would make two predictions, and get two scores. Our first prediction was the probability we assigned to the color that won on the first round, green. Our second prediction was our probability that blue would win on the second round, given that green won on the first round. Why do we need to write p(blue-2|green-1) instead of just p(blue-2)? Because you might have a hypothesis about the flashing light that says “blue never follows green”, or “blue always follows green” or “blue follows green with 70% probability”.
If this is so, then after seeing green on the first round, you might want to revise your prediction – change your bets – for the second round. You can always revise your predictions right up to the moment the experimenter presses the button, using every scrap of information; but after the light flashes it is too late to change your bet.\n(Don’t remember how to read P(A|B) ? See An Intuitive Explanation of Bayesian Reasoning .)\nSuppose the actual outcome is green-1 followed by blue-2 . We require this invariance: I must get the same total score, regardless of whether:\nI am scored twice, first on my prediction for p(green-1) , and second on my prediction for p(blue-2|green-1) .I am scored once for my joint prediction p(blue-2 & green-1) .\nSuppose I assign a 60% probability to green-1 , and then the green light flashes. I must now produce probabilities for the colors on the second round. I assess the possibility blue-2 , and allocate it 25% of my probability mass. Lo and behold, on the second round the light flashes blue. So on the first round my bet on the winning color was 60%, and on the second round my bet on the winning color was 25%. But I might also, at the start of the experiment and after assigning p(green-1) , imagine that the light first flashes green, imagine updating my theories based on that information, and then say what confidence I will give to blue on the next round if the first round is green. That is, I generate the probabilities p(green-1) and p(blue-2|green-1) . By multiplying these two probabilities together we would get the joint probability, p(green-1 & blue-2) = 15%.\nA double experiment has nine possible outcomes. If I generate nine probabilities for p(green-1 & green-2), p(green-1 & blue-2), … , p(red-1 & blue-2), p(red-1 & red-2) , the probability mass must sum to no more than 1.0. I am giving predictions for nine mutually exclusive possibilities of a “double experiment”.\nWe require a scoring rule (and maybe it won’t look like anything an ordinary bookie would ever use) such that my score doesn’t change regardless of whether we consider the double result as two predictions or one prediction. I can treat the sequence of two results as a single experiment, “press the button twice”, and be scored on my prediction for p(blue-2 & green-1) = 15% . Or I can be scored once for my first prediction p(green-1) = 60% , then again on my prediction p(blue-2|green-1) = 25% . We require the same total score in either case, so that it doesn’t matter how we slice up the experiments and the predictions – the total score is always exactly the same. This is our invariance.\nWe have just required:\nScore(p(green-1 & blue-2)) = Score(p(green-1)) + Score(p(blue-2|green-1)) \nAnd we already know:\np(green-1 & blue-2) = p(green-1) * p(blue-2|green-1)\nThe only possible scoring rule is:\nScore(p) = log(p)\nThe new scoring rule is that your score is the logarithm of the probability you assigned to the winner.\nThe base of the logarithm is arbitrary – whether we use the logarithm base 10 or the logarithm base 2, the scoring rule has the desired invariance. But we must choose some actual base. A mathematician would choose base e; an engineer would choose base 10; a computer scientist would choose base 2. If we use base 10, we can convert to “decibels”, as in the Intuitive Explanation ; but sometimes bits are easier to manipulate.\nThe logarithm scoring rule is proper – it has its expected maximum when we say our exact expectations; it rewards honesty. 
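For readers without the calculus handy, both claims can be checked numerically: that the log score adds across rounds exactly as probabilities multiply, and that the rule is proper. The Python sketch below is only an illustration (the helper names are arbitrary, and it uses natural logarithms, though as noted the base does not matter); in passing it also confirms the propriety of the earlier squared-error rule.

import math

# Additivity: the score of the joint prediction equals the sum of the per-round scores,
# because log(a*b) = log(a) + log(b).
p_green_1 = 0.60
p_blue_2_given_green_1 = 0.25
joint = p_green_1 * p_blue_2_given_green_1   # 0.15
print(math.isclose(math.log(joint),
                   math.log(p_green_1) + math.log(p_blue_2_given_green_1)))   # True

# Propriety: with two colors and a true blue frequency of 0.6, the expected score
# f*log(p) + (1-f)*log(1-p) is highest when the reported p equals 0.6.
def expected_log_score(p, f=0.6):
    return f * math.log(p) + (1 - f) * math.log(1 - p)

candidates = [p / 100 for p in range(1, 100)]
print(max(candidates, key=expected_log_score))   # 0.6: honesty maximizes the expected score

# The same scan confirms the squared-error rule of the previous section:
# f*(1-(1-p)**2) + (1-f)*(1-p**2) also peaks at p = f.
def expected_quadratic_payoff(p, f=0.6):
    return f * (1 - (1 - p) ** 2) + (1 - f) * (1 - p ** 2)

print(max(candidates, key=expected_quadratic_payoff))   # 0.6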
If we think the blue light has a 60% probability of flashing, and we calculate our expected payoff for different betting schemas, we find that we maximize our expected payoff by telling the experimenter “60%”. (Readers with calculus can verify this.) The scoring rule also gives an invariant total, regardless of whether pressing the button twice counts as “one experiment” or “two experiments”. However, payoffs are now all negative , since we are taking the logarithm of the probability and the probability is between 0 and 1. The logarithm base 10 of 0.1 is -1; the logarithm base 10 of 1% is -2. That’s okay. We accepted that the scoring rule might not look like anything a real bookie would ever use. If you like, you can imagine that the experimenter has a pile of money, and at the end of the experiment he awards you some amount minus your large negative score. (Er, the amount plus your negative score.) Maybe the experimenter has a hundred dollars, and at the end of a hundred rounds you accumulated a score of -48, so you get fifty-two dollars.\nA score of -48 in what base? We can eliminate the ambiguity in the score by specifying units. 10 decibels equals a factor of 10; -10 decibels equals a factor of 1/10. Assigning a probability of 0.01 to the actual outcome would score -20 decibels. A probability of 0.03 would score -15 decibels. Sometimes we may use bits: 1 bit is a factor of 2, -1 bit is a factor of 1/2. A probability of 0.25 would score -2 bits; a probability of 0.03 would score around -5 bits.\nIf you arrive at a probability assessment P for each color, with p(red), p(blue), p(green) , then your expected score is:\nScore = log(p)\nExpectation[Score] = Sum[p*log(p)] for all outcomes p.\n \nSuppose you had probabilities red:25%, blue:50%, green:25% . Let’s think in base 2 for a moment, to make things simpler. Your expected score is:\nred: scores -2 bits, flashes 25% of the time\nblue: scores -1 bit, flashes 50% of the time\ngreen: scores -2 bits, flashes 25% of the time\n\n expected score: -1.50 bits\n\nContrast our Bayesian scoring rule with the ordinary or colloquial way of speaking about degrees of belief, where someone might casually say, “I’m 98% certain that canola oil contains more omega-3 fats than olive oil.” What they really mean by this is that they feel 98% certain – there’s something like a little progress bar that measures the strength of the emotion of certainty, and this progress bar is 98% full. And the emotional progress bar probably wouldn’t be exactly 98% full, if we had some way to measure. The word “98%” is just a colloquial way of saying: “I’m almost but not entirely certain.” It doesn’t mean that you could get the highest expected payoff by betting exactly 98 cents of play money on that outcome. You should only assign a calibrated confidence of 98% if you’re confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We’ll keep track of how often you’re right, over time, and if it turns out that when you say “90% sure” you’re right about 7 times out of 10, then we’ll say you’re poorly calibrated .\nRemember Spock from Star Trek? Spock often says something along the lines of, “Captain, if you steer the Enterprise directly into a black hole, our probability of survival is only 2.837%.” Yet nine times out of ten the Enterprise is not destroyed. 
What kind of tragic fool gives a figure with four significant digits of precision that is wrong by two orders of magnitude?\nThe people who write this stuff have no idea what scientists mean by “probability”. They suppose that a probability of 99.9% is something like feeling really sure. They suppose that Spock’s statement expresses the challenge of successfully steering the Enterprise through a black hole, like a video game rated five stars for difficulty. What we mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.\nIf you say “98% probable” a thousand times, and you are surprised only five times, we still ding you for poor calibration. You’re allocating too much probability mass to the possibility that you’re wrong. You should say “99.5% probable” to maximize your score. The scoring rule rewards accurate calibration, encouraging neither humility nor arrogance.\nAt this point it may occur to some readers that there’s an obvious way to achieve perfect calibration – just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you’re right half the time. Isn’t that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is discrimination .\nSuppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between “Yes” and “No”. Congratulations, you’re perfectly calibrated – answers for which you said “50% probability” were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. In ten experiments you said “50%” on twenty occasions – you said “50%” to Yes-1, No-1; Yes-2, No-2; … . On ten of those occasions the answer was correct, the occasions: Yes-1; No-2; No-3; … . And on ten of those occasions the answer was incorrect: No-1; Yes-2; Yes-3; …\nNow I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I’m more poorly calibrated than you. I said “90%” on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate “90%” into 80%, knowing that when I’m 90% sure I’m right about 80% of the time. But the probability you assigned to the final outcome is 1/2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn’t, but my better discrimination between right and wrong answers more than makes up for it. My final score is higher – I assigned a greater joint probability to the final outcome of the entire experiment. If I’d been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 * 0.2^2, 0.006.\nIs it possible to do even better? Sure. You could have guessed every single answer correctly, and assigned a probability of 99% to each of your answers. Then the probability you assigned to the entire experimental outcome would be 0.99^10 ~ 90%.\nYour score would be log(90%), -0.45 decibels or -0.15 bits. We need to take the logarithm so that if I try to maximize my expected score , Sum[p*log(p)] , I have no motive to cheat. 
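To make the ten-question comparison concrete, the joint probabilities and their logarithmic scores in bits can be tabulated in a few lines; this is only an illustrative Python sketch, with invented variable names.

import math

# Ten yes-or-no questions: the joint probability each answerer assigned to the
# full outcome, and the corresponding log score in bits.
coin_flipper    = 0.5 ** 10              # "50%" on every answer: about 0.001
overconfident   = 0.9 ** 8 * 0.1 ** 2    # "90%" each, wrong twice: about 0.004
well_calibrated = 0.8 ** 8 * 0.2 ** 2    # "80%" each, wrong twice: about 0.006
clairvoyant     = 0.99 ** 10             # "99%" on ten correct answers: about 0.90

for name, p in [("coin flipper", coin_flipper), ("overconfident", overconfident),
                ("well calibrated", well_calibrated), ("clairvoyant", clairvoyant)]:
    print(name, round(p, 4), round(math.log2(p), 2), "bits")
# Roughly -10 bits, -7.9 bits, -7.2 bits, and -0.15 bits respectively.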
Without the logarithm rule, I would maximize my expected score by assigning all my probability mass to the most probable outcome. Also, without the logarithm rule, my total score would be different depending on whether we counted several rounds as several experiments or as one experiment.\nA simple transform can fix poor calibration by decreasing discrimination. If you are in the habit of saying “million-to-one” on 90 correct and 10 incorrect answers for each hundred questions, we can perfect your calibration by replacing “million-to-one” with “nine-to-one”. In contrast, there’s no easy way to increase (successful) discrimination. If you habitually say “nine-to-one” on 90 correct answers for each hundred questions, I can easily increase your claimed discrimination by replacing “nine-to-one” with “million-to-one”. But no simple transform can increase your actual discrimination such that your reply distinguishes 95 correct answers and 5 incorrect answers. Yates et. al. (2002): “Whereas good calibration often can be achieved by simple mathematical transformations (e.g., adding a constant to every probability judgment), good discrimination demands access to solid, predictive evidence and skill at exploiting that evidence, which are difficult to find in any real-life, practical situation.” If you lack the ability to distinguish truth from falsehood, you can achieve perfect calibration by confessing your ignorance; but confessing ignorance will not, of itself, distinguish truth from falsehood.\nWe thus dispose of another false stereotype of rationality, that rationality consists of being humble and modest and confessing helplessness in the face of the unknown. That’s just the cheater’s way out, assigning a 50% probability to all yes-or-no questions. Our scoring rule encourages you to do better if you can. If you are ignorant, confess your ignorance; if you are confident, confess your confidence. We penalize you for being confident and wrong, but we also reward you for being confident and right. That is the virtue of a proper scoring rule.\n\nSuppose I flip a coin twenty times. If I believe the coin is fair, the best prediction I can make is to predict an even chance of heads or tails on each flip. If I believe the coin is fair, I assign the same probability to every possible sequence of twenty coinflips. There are roughly a million (1,048,576) possible sequences of twenty coinflips, and I have only 1.0 of probability mass to play with. So I assign to each individual possible sequence a probability of (1/2)^20 – odds of about a million to one; -20 bits or -60 decibels.\nI made an experimental prediction and got a score of -60 decibels! Doesn’t this falsify the hypothesis? Intuitively, no. We do not flip a coin twenty times and see a random-looking result, then reel back and say, why, the odds of that are a million to one. But the odds are a million to one against seeing that exact sequence, as I would discover if I naively predicted the exact same outcome for the next sequence of twenty coinflips. It’s okay to have theories that assign tiny probabilities to outcomes, so long as no other theory does better. But if someone used an alternate hypothesis to write down the exact sequence in a sealed envelope in advance, and she assigned a probability of 99%, I would suspect the fairness of the coin. Provided that she only sealed one envelope, and not a million.\nThat tells us what we ought common-sensically to answer, but it doesn’t say how the common-sense answer arises from the math. 
To say why the common sense is correct, we need to integrate all that has been said so far into the framework of Bayesian revision of belief. When we’re done, we’ll have a technical understanding of the difference between a verbal understanding and a technical understanding.\n\nImagine an experiment which produces an integer result between 0 and 99. For example, the experiment might be a particle counter that tells us how many particles have passed through in a minute. Or the experiment might be to visit the supermarket on Wednesday, check the price of a 10-ounce bag of crushed walnuts, and write down the last two digits of the price.\nWe are testing several different hypotheses that try to predict the experimental result. Each hypothesis produces a probability distribution over all possible results; in this case, the integers between zero and ninety-nine. The possibilities are mutually exclusive, so the probability mass in the distribution must sum to 1.0 (or less); we cannot predict a 90% probability of seeing 42 and also a 90% probability of seeing 43.\nSuppose there is a precise hypothesis, which predicts a 90% chance of seeing the result 51. (I.e., the hypothesis is that the supermarket usually prices walnuts with a price of “X dollars and 51 cents”.) The precise theory has staked 90% of its probability mass on the outcome 51. This leaves 10% probability mass remaining to spread over 99 other possible outcomes – all the numbers between 0 and 99 except 51. The theory makes no further specification, so we spread the remaining 10% probability mass evenly over 99 possibilities, assigning a probability of 1/990 to each non-51 result. For ease of writing, we’ll approximate 1/990 as 0.1%.\nThis probability distribution is analogous to the likelihood or conditional probability of the result given the hypothesis. Let us call it the likelihood distribution for the hypothesis, our chance of seeing each specified outcome if the hypothesis is true. The likelihood distribution for a hypothesis H is a function composed of all the conditional probabilities for p(0|H)=0.001, p(1|H)=0.001, …, p(51|H)=0.9, …, p(99|H)=0.001 . The probability mass contained in the likelihood distribution must sum to 1. It is a general rule that there is no way we can have a 90% chance of seeing 51 and also a 90% chance of seeing 52. Therefore, if we first assume the hypothesis H is true, there is still no way we can have a 90% chance of seeing 51 and also a 90% chance of seeing 52.\nThe precise theory predicts a 90% probability of seeing 51. Let there be also a vague theory, which predicts “a 90% probability of seeing a number in the 50s”.\nSeeing the result 51, we do not say the outcome confirms both theories equally. Both theories made predictions, and both assigned probabilities of 90%, and the result 51 confirms both predictions. But the precise theory has an advantage because it concentrates its probability mass into a sharper point. If the vague theory makes no further specification, we count “a 90% probability of seeing a number in the 50s” as a 9% probability of seeing each number between 50 and 59.\nSuppose we started with even odds in favor of the precise theory and the vague theory – odds of 1:1, or 50% probability for either hypothesis being true. After seeing the result 51, what are the posterior odds of the precise theory being true? (If you don’t remember how to work this problem, return to An Intuitive Explanation of Bayesian Reasoning .) 
The predictions of the two theories are analogous to their likelihood assignments – the conditional probability of seeing the result, given that the theory is true. What is the likelihood ratio between the two theories? The first theory allocated 90% probability mass to the exact outcome. The vague theory allocated 9% probability mass to the exact outcome. The likelihood ratio is 10:1. So if we started with even 1:1 odds, the posterior odds are 10:1 in favor of the precise theory. The differential pressure of the two conditional probabilities pushed our prior confidence of 50% to a posterior confidence of about 91% that the precise theory is correct. Assuming that these are the only hypotheses being tested, that this is the only evidence under consideration, and so on.\nWhy did the vague theory lose when both theories fit the evidence? The vague theory is timid; it makes a broad prediction, hedges its bets, allows many possibilities that would falsify the precise theory. This is not the virtue of a scientific theory. Philosophers of science tell us that theories should be bold, and subject themselves willingly to falsification if their prediction fails (Popper 1959). Now we see why. The precise theory concentrates its probability mass into a sharper point and thereby leaves itself vulnerable to falsification if the real outcome hits elsewhere; but if the predicted outcome is correct, precision has a tremendous likelihood advantage over vagueness.\nThe laws of probability theory provide no way to cheat, to make a vague hypothesis such that any result between 50 and 59 counts for as much favorable confirmation as the precise theory receives, for that would require probability mass summing to 900%. There is no way to cheat, providing you record your prediction in advance , so you cannot claim afterward that your theory assigns a probability of 90% to whichever result arrived. Humans are very fond of making their predictions afterward, so the social process of science requires an advance prediction before we say that a result confirms a theory. But how humans may move in harmony with the way of Bayes, and so wield the power, is a separate issue from whether the math works. When we’re doing the math, we just take for granted that likelihood density functions are fixed properties of a hypothesis and the probability mass sums to 1 and you’d never dream of doing it any other way.\nYou may want to take a moment to visualize that, if we define probability in terms of calibration, Bayes’ Theorem relates the calibrations. Suppose I guess that Theory 1 is 50% likely to be true, and I guess that Theory 2 is 50% likely to be true. Suppose I am well-calibrated; when I utter the words “fifty percent”, the event happens about half the time. And then I see a result R which would happen around nine-tenths of the time given Theory 1, and around nine-hundredths of the time given Theory 2, and I know this is so, and I apply Bayesian reasoning. If I was perfectly calibrated initially (despite the poor discrimination of saying 50/50), I will still be perfectly calibrated (and better discriminated) after I say that my confidence in Theory 1 is now 91%. If I repeated this kind of situation many times, I would be right around ten-elevenths of the time when I said “91%”. If I reason using Bayesian rules, and I start from well-calibrated priors, then my conclusions will also be well-calibrated. This only holds true if we define probability in terms of calibration! 
If “90% sure” is instead interpreted as, say, the strength of the emotion of surety, there is no reason to expect the posterior emotion to stand in an exact Bayesian relation to the prior emotion.\nLet the prior odds be ten to one in favor of the vague theory. Why? Suppose our way of describing hypotheses allows us to either specify a precise number, or to just specify a first-digit; we can say “51”, “63”, “72”, or “in the fifties/sixties/seventies”. Suppose we think that the real answer is about equally liable to be an answer of the first kind or the second. However, given the problem, there are a hundred possible hypotheses of the first kind, and only ten hypotheses of the second kind. So if we think that either class of hypotheses has about an equal prior chance of being correct, we have to spread out the prior probability mass over ten times as many precise theories as vague theories. The precise theory that predicts exactly 51 would thus have one-tenth as much prior probability mass as the vague theory that predicts a number in the fifties. After seeing 51, the odds would go from 1:10 in favor of the vague theory to 1:1, even odds for the precise theory and the vague theory.\nIf you look at this carefully, it’s exactly what common sense would expect. You start out uncertain of whether a phenomenon is the kind of phenomenon that produces exactly the same result every time, or if it’s the kind of phenomenon that produces a result in the Xties every time. (Maybe the phenomenon is a price range at the supermarket, if you need some reason to suppose that 50..59 is an acceptable range but 49..58 isn’t.) You take a single measurement and the answer is 51. Well, that could be because the phenomenon is exactly 51, or because it’s in the fifties. So the remaining precise theory has the same odds as the remaining vague theory, which requires that the vague theory must have started out ten times as probable as that precise theory, since the precise theory has a sharper fit to the evidence.\nIf we just see one number, like 51, it doesn’t change the prior probability that the phenomenon itself was “precise” or “vague”. But, in effect, it concentrates all the probability mass of those two classes of hypothesis into a single surviving hypothesis of each class.\nOf course, it is a severe error to say that a phenomenon is precise or vague, a case of what Jaynes calls the Mind Projection Fallacy (Jaynes 1996). Precision or vagueness is a property of maps, not territories. Rather we should ask if the price in the supermarket stays constant or shifts about. A hypothesis of the “vague” sort is a good description of a price that shifts about. A precise map will suit a constant territory.\nAnother example: You flip a coin ten times and see the sequence HHTTH:TTTTH. Maybe you started out thinking there was a 1% chance this coin was fixed. Doesn’t the hypothesis “This coin is fixed to produce HHTTH:TTTTH” assign a thousand times the likelihood mass to the observed outcome, compared to the fair coin hypothesis? Yes. Don’t the posterior odds that the coin is fixed go to 10:1? No. The 1% prior probability that “the coin is fixed” has to cover every possible kind of fixed coin – a coin fixed to produce HHTTH:TTTTH, a coin fixed to produce TTHHT:HHHHT, etc. The prior probability the coin is fixed to produce HHTTH:TTTTH is not 1%, but a thousandth of one percent. Afterward, the posterior probability the coin is fixed to produce HHTTH:TTTTH is one percent. 
Which is to say: You thought the coin was probably fair but had a one percent chance of being fixed to some random sequence; you flipped the coin; the coin produced a random-looking sequence; and that doesn’t tell you anything about whether the coin is fair or fixed. It does tell you, if the coin is fixed, which sequence it is fixed to.\nThis parable helps illustrate why Bayesians must think about prior probabilities. There is a branch of statistics, sometimes called “orthodox” or “classical” statistics, which insists on paying attention only to likelihoods. But if you only pay attention to likelihoods, then eventually some fixed-coin hypothesis will always defeat the fair coin hypothesis, a phenomenon known as “overfitting” the theory to the data. After thirty flips, the likelihood is a billion times as great for the fixed-coin hypothesis with that sequence, as for the fair coin hypothesis. Only if the fixed-coin hypothesis (or rather, that specific fixed-coin hypothesis) is a billion times less probable a priori , can the fixed-coin hypothesis possibly lose to the fair coin hypothesis.\nIf you shake the coin to reset it, and start flipping the coin again , and the coin produces HHTTH:TTTTH again , that is a different matter. That does raise the posterior odds of the fixed-coin hypothesis to 10:1, even if the starting probability was only 1%.\nSimilarly, if we perform two successive measurements of the particle counter (or the supermarket price on Wednesdays), and both measurements return 51, the precise theory wins by odds of 10 to 1.\nSo the precise theory wins, but the vague theory would still score better than no theory at all. Consider a third theory, the hypothesis of zero knowledge or maximum-entropy distribution , which makes equally probable any result between 0 and 99. Suppose we see the result 51. The vague theory produced a better prediction than the maximum-entropy distribution – assigned a greater likelihood to the outcome we observed. The vague theory is, literally, better than nothing. Suppose we started with odds of 1:20 in favor of the hypothesis of complete ignorance. (Why odds of 1:20? There is only one hypothesis of complete ignorance, and moreover, it’s a particularly simple and intuitive kind of hypothesis. Occam’s Razor.) After seeing the result of 51, predicted at 9% by the vague theory versus 1% by complete ignorance, the posterior odds go to 10:20 or 1:2. If we then see another result of 51, the posterior odds go to 10:2 or 83% probability for the vague theory, assuming there is no more precise theory under consideration.\nYet the timidity of the vague theory – its unwillingness to produce an exact prediction and accept falsification on any other result – renders it vulnerable to the bold, precise theory. (Providing, of course, that the bold theory correctly guesses the outcome!) Suppose the prior odds were 1:10:200 for the precise, vague, and ignorant theories – prior probabilities of 0.5%, 4.7%, and 94.8% for the precise, vague and ignorant theories. This figure reflects our prior probability distribution over classes of hypotheses, with the probability mass distributed over entire classes as follows: 50% that the phenomenon shifts across all digits, 25% that the phenomenon shifts around within some decimal bracket, and 25% that the phenomenon repeats the same number each time. 1 hypothesis of complete ignorance, 10 possible hypotheses for a decimal bracket, 100 possible hypotheses for a repeating number. 
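A quick sketch of that allocation (illustrative Python, dividing each class's prior mass evenly among its member hypotheses) recovers the per-hypothesis odds quoted next:

```python
from fractions import Fraction

# Prior mass on each class of hypotheses, and how many hypotheses share that mass.
classes = {
    "repeats one exact number": (Fraction(1, 4), 100),    # e.g. "exactly 51"
    "stays in one decimal bracket": (Fraction(1, 4), 10),  # e.g. "in the fifties"
    "shifts across all digits": (Fraction(1, 2), 1),       # complete ignorance
}

# Prior mass landing on one representative hypothesis from each class.
per_hypothesis = {name: mass / count for name, (mass, count) in classes.items()}
# -> 1/400 for "exactly 51", 1/40 for "in the fifties", 1/2 for ignorance

# Express those shares as odds relative to the smallest: 1 : 10 : 200.
smallest = min(per_hypothesis.values())
print([int(share / smallest) for share in per_hypothesis.values()])   # [1, 10, 200]
```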
Thus, prior odds of 1:10:200 for the precise hypothesis 51, the vague hypothesis “fifties”, and the hypothesis of complete ignorance.\nAfter seeing a result of 51, with assigned probability of 90%, 9%, and 1%, the posterior odds go to 90:90:200 = 9:9:20. After seeing an additional result of 51, the posterior odds go to 810:81:20, or 89%, 9%, and 2%. The precise theory is now favored over the vague theory, which in turn is favored over the ignorant theory.\nNow consider a stupid theory, which predicts a 90% probability of seeing a result between 0 and 9. The stupid theory assigns a probability of 0.1% to the actual outcome, 51. If the odds were initially 1:10:200:10 for the precise, vague, ignorant, and stupid theories, the posterior odds after seeing 51 once would be 90:90:200:1. The stupid theory has been falsified (posterior probability of 0.2%).\nIt is possible to have a model so bad that it is worse than nothing, if the model concentrates its probability mass away from the actual outcome, makes confident predictions of wrong answers. Such a hypothesis is so poor that it loses against the hypothesis of complete ignorance. Ignorance is better than anti-knowledge.\nSide note 1: In the field of Artificial Intelligence, there is a sometime fad that praises the glory of randomness. Occasionally an AI researcher discovers that if they add noise to one of their algorithms, the algorithm works better. This result is reported with great enthusiasm, followed by much fulsome praise of the creative powers of chaos, unpredictability, spontaneity, ignorance of what your own AI is doing, et cetera. (See Imagination Engines Inc. for an example; according to their sales literature they sell wounded and dying neural nets.) But how sad is an algorithm if you can increase its performance by injecting entropy into intermediate processing stages? The algorithm must be so deranged that some of its work goes into concentrating probability mass away from good solutions. If injecting randomness results in a reliable improvement, then some aspect of the algorithm must do reliably worse than random. Only in AI would people devise algorithms literally dumber than a bag of bricks, boost the results slightly back toward ignorance, and then argue for the healing power of noise.\nSide note 2: Robert Pirsig once said: “The world’s stupidest man may say the Sun is shining, but that doesn’t make it dark out.” (Pirsig 1974.) It is a classical logical fallacy to say, “Hitler believed in the Pythagorean Theorem. You don’t want to agree with Hitler, do you?” Consider that for someone to be reliably wrong on yes-or-no questions – say, to be wrong 90% of the time – that person would need to do all the hard work of discriminating truth from falsehood, just to be wrong so reliably. If someone is wrong on yes-or-no questions 99% of the time, we can get 99% accuracy just by inverting the responses. Anyone that stupid would be smarter than I am.\nSuppose that in our experiment we see the results 52, 51, 58. The precise theory gives this conjunctive event a probability of a thousand to one times 90% times a thousand to one, while the vaguer theory gives this conjunctive event a probability of 9% cubed, which works out to… oh… um… let’s see… a million to one given the precise theory, versus a thousand to one given the vague theory. Or thereabouts; we are counting rough powers of ten. Versus a million to one given the zero-knowledge distribution that assigns an equal probability to all outcomes.
Versus a billion to one given a model worse than nothing, the stupid hypothesis, which claims a 90% probability of seeing a number less than 10. Using these approximate numbers, the vague theory racks up a score of -30 decibels (a probability of 1/1000 for the whole experimental outcome), versus scores of -60 for the precise theory, -60 for the ignorant theory, and -90 for the stupid theory. It is not always true that the highest score wins, because we need to take into account our prior odds of 1:10:200:10, confidences of -23, -13, 0, and -13 decibels. The vague theory still comes in with the highest total score at -43 decibels. (If we ignored our prior probabilities, each new experiment would override the accumulated results of all the previous experiments; we could not accumulate knowledge. Furthermore, the fixed-coin hypothesis would always win.)\nAs always, we should not be alarmed that even the best theory still has a low score – recall the parable of the fair coin. Theories are approximations. In principle we might be able to predict the exact sequence of coinflips. But it would take better measurement and more computing power than we’re willing to expend. Maybe we could achieve 60/40 prediction of coinflips, with a good enough model…? We go with the best approximation we have, and try to achieve good calibration even if the discrimination isn’t perfect.\n\nWe’ve conducted our analysis so far under the rules of Bayesian probability theory, in which there’s no way to have more than 100% probability mass, and hence no way to cheat so that any outcome can count as “confirmation” of your theory. Under Bayesian law, play money may not be counterfeited; you only have so much clay. If you allocate more probability mass in one place, you have to take it from somewhere else; a coin cannot have a 90% chance of turning up heads and a 90% chance of turning up tails.\nUnfortunately, human beings are not Bayesians. Human beings bizarrely attempt to defend hypotheses, making a deliberate effort to prove them or prevent disproof. This behavior has no analogue in the laws of probability theory or decision theory. In formal probability theory the hypothesis is H, and the evidence is E, and either the hypothesis is confirmed or it is not. In formal decision theory, an agent may make an effort to investigate some issue of which the agent is currently uncertain, not knowing whether the evidence shall go one way or the other. In neither case does one ever deliberately try to prove an idea, or try to avoid disproving it. One may test ideas of which one is genuinely uncertain, but not have a “preferred” outcome of the investigation. One may not try to prove hypotheses, nor prevent their proof. I cannot properly convey just how ridiculous the notion would be, to a true Bayesian; there are not even words in Bayes-language to describe the mistake…\nOne classic method for preventing a theory from disproof is arguing post facto that any observation presented proves the theory. Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (‘prudence in criminal cases’) in which he bitingly described the decision tree for condemning accused witches. If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous.
After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away. (Spee 1631.) Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, that no matter what the accused witch said or did, it was held a proof against her. In any individual case, you would only hear one branch of the dilemma.\nIt is for this reason that scientists write down their predictions in advance.\nIf you’ve read the Intuitive Explanation , you should recall the result I nicknamed the “Law of Conservation of Probability”, that for every expectation of evidence there is an equal and opposite expectation of counterevidence. If A is evidence in favor of B, not-A must be evidence in favor of not-B. The strengths of the evidences may not be equal; rare but strong evidence in one direction may be balanced by common but weak evidence in the other direction. But it is not possible for both A and not-A to be evidence in favor of B. That is, it’s not possible under the laws of probability theory. Humans often seem to want to have their cake and eat it too. Whichever result we witness is the one that proves our theory. As Spee put it, “The investigating committee would feel disgraced if it acquitted a woman; once arrested and in chains, she has to be guilty, by fair means or foul.”\nThe way human psychology seems to work is that first we see something happen, and then we try to argue that it matches whatever hypothesis we had in mind beforehand. Rather than conserved probability mass, to distribute over advance predictions , we have a feeling of compatibility – the degree to which the explanation and the event seem to ‘fit’. ‘Fit’ is not conserved. There is no equivalent of the rule that probability mass must sum to 1. A psychoanalyst may explain any possible behavior of a patient by constructing an appropriate structure of ‘rationalizations’ and ‘defenses’; it fits, therefore it must be true.\nNow consider the fable told at the start of this essay – the students seeing a radiator, and a metal plate next to the radiator. The students would never predict in advance that the side of the plate near the radiator would be cooler. Yet, seeing the fact, they managed to make their explanations ‘fit’. They lost their precious chance at bewilderment, to realize that their models did not predict the phenomenon they observed. They sacrificed their ability to be more confused by fiction than by truth. And they did not realize “heat induction, blah blah, therefore the near side is cooler” is a vague and verbal prediction, spread across an enormously wide range of possible values for specific measured temperatures. Applying equations of diffusion and equilibrium would give a sharp prediction for possible joint values. It might not specify the first values you measured, but when you knew a few values you could generate a sharp prediction for the rest. The score for the entire experimental outcome would be far better than any less precise alternative, especially a vague and verbal prediction.\n\nYou now have a technical explanation of the difference between a verbal explanation and a technical explanation. 
It is a technical explanation because it enables you to calculate exactly how technical an explanation is. Vague hypotheses may be so vague that only a superhuman intelligence could calculate exactly how vague. Perhaps a sufficiently huge intelligence could extrapolate every possible experimental result, and extrapolate every possible verdict of the vague guesser for how well the vague hypothesis “fit”, and then renormalize the “fit” distribution into a likelihood distribution that summed to 1. But in principle one can still calculate exactly how vague is a vague hypothesis. The calculation is just not computationally tractable, the way that calculating airplane trajectories via quantum mechanics is not computationally tractable.\nI hold that everyone needs to learn at least one technical subject. Physics; computer science; evolutionary biology; or Bayesian probability theory, but something . Someone with no technical subjects under their belt has no referent for what it means to “explain” something. They may think “All is Fire” is an explanation. Therefore do I advocate that Bayesian probability theory should be taught in high school. Bayesian probability theory is the sole piece of math I know that is accessible at the high school level, and that permits a technical understanding of a subject matter – the dynamics of belief – that is an everyday real-world domain and has emotionally meaningful consequences. Studying Bayesian probability would give students a referent for what it means to “explain” something.\nToo many academics think that being “technical” means speaking in dry polysyllabisms. Here’s a “technical” explanation of technical explanation:The equations of probability theory favor hypotheses that strongly predict the exact observed data. Strong models boldly concentrate their probability density into precise outcomes, making them falsifiable if the data hits elsewhere, and giving them tremendous likelihood advantages over models less bold, less precise. Verbal explanation runs on psychological evaluation of unconserved post facto compatibility instead of conserved ante facto probability density. And verbal explanation does not paint sharply detailed pictures, implying a smooth likelihood distribution in the vicinity of the data.\nIs this satisfactory? No. Hear the impressive and weighty sentences, resounding with the dull thud of expertise. See the hapless students, writing those sentences on a sheet of paper. Even after the listeners hear the ritual words, they can perform no calculations. You know the math, so the words are meaningful. You can perform the calculations after hearing the impressive words, just as you could have done before. But what of one who did not see any calculations performed? What new skills have they gained from that “technical” lecture, save the ability to recite fascinating words?\n“Bayesian” sure is a fascinating word, isn’t it? Let’s get it out of our systems: Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes…\nThe sacred syllable is meaningless, except insofar as it tells someone to apply math. Therefore the one who hears must already know the math.\nConversely, if you know the math, you can be as silly as you like, and still technical.\nWe thus dispose of yet another stereotype of rationality, that rationality consists of sere formality and humorless solemnity. What has that to do with the problem of distinguishing truth from falsehood? What has that to do with attaining the map that reflects the territory? 
A scientist worthy of a lab coat should be able to make original discoveries while wearing a clown suit, or give a lecture in a high squeaky voice from inhaling helium. It is written nowhere in the math of probability theory that one may have no fun. The blade that cuts through to the correct answer has no dignity or silliness of itself, though it may fit the hand of a silly wielder.\n\nOur physics uses the same theory to describe an airplane, and collisions in a particle accelerator – particles and airplanes both obey special relativity and general relativity and quantum electrodynamics and quantum chromodynamics. But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei. A computer modeling the aerodynamics of the 747 may not contain a single token representing an atom, even though no one denies that the 747 is made of atoms.\nA useful model isn’t just something you know, as you know that the airplane is made of atoms. A useful model is knowledge you can compute in reasonable time to predict real-world events you know how to observe. Physicists use different models to predict airplanes and particle collisions, not because the two events take place in different universes with different laws of physics, but because it would be too expensive to compute the airplane particle by particle.\nAs the saying goes: “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.” Sometimes you need a smaller map, to fit in a more cramped glove compartment. It doesn’t change the territory. The precision or vagueness of the map isn’t a fact about the territory, it’s a fact about the map.\nMaybe someone will find that, using a model that violates conservation of momentum just a little, you can compute the aerodynamics of the 747 much more cheaply than if you insist that momentum is exactly conserved. So if you’ve got two computers competing to produce the best prediction, it might be that the best prediction comes from the model that violates conservation of momentum. This doesn’t mean that the 747 violates conservation of momentum in real life. Neither model uses individual atoms, but that doesn’t imply the 747 is not made of atoms. You would prove the 747 is made of atoms with experimental data that the aerodynamic models couldn’t handle; for example, you would train a scanning tunneling microscope on a section of wing and look at the atoms. Similarly, you could use a finer measuring instrument to discriminate between a 747 that really disobeyed conservation of momentum like the cheap approximation predicted, versus a 747 that obeyed conservation of momentum like underlying physics predicted. The winning theory is the one that best predicts all the experimental results together. Our Bayesian scoring rule gives us a way to combine the results of all our experiments, even experiments that use different methods.\nFurthermore, the atomic theory allows, embraces, and in some sense mandates the aerodynamic model. By thinking abstractly about the assumptions of atomic theory, we realize that the aerodynamic model ought to be a good (and much cheaper) approximation of the atomic theory, and so the atomic theory supports the aerodynamic model, rather than competing with it.
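To make that last remark about the scoring rule concrete, here is a toy sketch with invented likelihood numbers standing in for real measurements; each experiment contributes its own term, and the totals are comparable even though the experiments measure very different things.

```python
import math

def decibels(p):
    # Score a probability in decibels, as in the scoring examples above.
    return 10 * math.log10(p)

# Two competing models of the same airplane (all numbers invented for illustration):
#   "exact" - momentum exactly conserved
#   "cheap" - an approximation that slightly violates conservation of momentum
# Each inner dict gives P(observed data | model) for one experiment.
likelihoods = {
    "exact": {"wind tunnel run": 0.20, "fine-grained momentum measurement": 0.30},
    "cheap": {"wind tunnel run": 0.25, "fine-grained momentum measurement": 0.0001},
}
prior = {"exact": 0.5, "cheap": 0.5}

for model, experiments in likelihoods.items():
    total = decibels(prior[model]) + sum(decibels(p) for p in experiments.values())
    print(model, round(total, 1), "decibels")
# The cheap model edges ahead on the wind tunnel alone, but the measurement it
# predicts badly dominates the combined score, so the exact model wins overall.
```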
A successful theory can embrace many models for different domains, so long as the models are acknowledged as approximations, and in each case the model is compatible with (or ideally mandated by) the underlying theory.\nOur fundamental physics – quantum mechanics, the standard family of particles, and relativity – is a theory that embraces an enormous family of models for macroscopic physical phenomena. There is the physics of liquids, and solids, and gases; yet this does not mean that there are fundamental things in the world that have the intrinsic property of liquidity.\n“Apparently there is colour, apparently sweetness, apparently bitterness, actually there are only atoms and the void.”\n– Democritus, 420 BC (from Robinson and Groves 1998).\n\nIn arguing that a “technical” theory should be defined as a theory that sharply concentrates probability into specific advance predictions, I am setting an extremely high standard of strictness. We have seen that a vague theory can be better than nothing. A vague theory can win out over the hypothesis of ignorance, if there are no precise theories to compete against it.\nThere is an enormous family of models belonging to the central underlying theory of life and biology; the underlying theory that is sometimes called neo-Darwinism, natural selection, or evolution. Some models in evolutionary theory are quantitative. The way in which DNA encodes proteins is redundant; two different DNA sequences can code for exactly the same protein. There are 4 DNA bases {ATCG} and 64 possible combinations of three DNA bases. But those 64 possible codons describe only 20 amino acids plus a stop code. Genetic drift ought therefore to produce non-functional changes in species genomes, through mutations which by chance become fixed in the gene pool. The accumulation rate of non-functional differences between the genomes of two species with a common ancestor depends on such parameters as the number of generations elapsed and the intensity of selection at that genetic locus. That’s an example of a member of the family of evolutionary models that produces quantitative predictions. There are also disequilibrium allele frequencies under selection, stable equilibria for game-theoretical strategies, sex ratios, et cetera.\nThis all comes under the heading of “fascinating words”. Unfortunately, there are certain religious factions that spread gross disinformation about evolutionary theory. So I emphasize that many models within evolutionary theory make quantitative predictions that are experimentally confirmed, and that such models are far more than sufficient to demonstrate that, e.g., humans and chimpanzees are related by a common ancestor. If you’ve been victimized by creationist disinformation – that is, if you’ve heard any suggestion that evolutionary theory is controversial or untestable or “just a theory” or non-rigorous or non-technical or in any wise not confirmed by an unimaginably huge mound of experimental evidence – I recommend reading the Talk.Origins FAQ and studying evolutionary biology with math.\nBut imagine going back in time to the nineteenth century, when the theory of natural selection had only just been discovered by Charles Darwin and Alfred Russel Wallace. Imagine evolutionism just after its birth, when the theory had nothing remotely like the modern-day body of quantitative models and great heaping mountains of experimental evidence. There was no way of knowing that humans and chimpanzees would be discovered to have 95% shared genetic material.
No one knew that DNA existed. Yet even so, scientists flocked to the new theory of natural selection. And later it turned out that there was a precisely copied genetic material with the potential to mutate, that humans and chimps were provably related, etc.\nSo the very strict, very high standard that I proposed for a “technical” theory is too strict. Historically, it has been possible to successfully discriminate true theories from false theories, based on predictions of the sort I called “vague”. Vague predictions of, say, 80% confidence, can build up a huge advantage over alternate hypotheses, given enough experiments. Perhaps a theory of this kind, producing predictions that are not precisely detailed but are nonetheless correct, could be called “semitechnical”?\nBut surely technical theories are more reliable than semitechnical theories? Surely technical theories should take precedence, command greater respect? Surely physics, which produces exceedingly exact predictions, is in some sense better confirmed than evolutionary theory? Not implying that evolutionary theory is wrong, of course; but however vast the mountains of evidence favoring evolution, does not physics go one better through vast mountains of precise experimental confirmation? Observations of neutron stars confirm the predictions of General Relativity to within one part in a hundred trillion (10^14). What does evolutionary theory have to match that?\nSomeone – I think either Roger Penrose or Richard Dawkins – said once that measured by the simplicity of the theory and the amount of complexity it explained, Darwin had the single greatest idea in the history of time.\nOnce there was a conflict between 19th-century physics and 19th-century evolutionism. According to the best physical models then in use, the Sun could not have been burning very long. 3000 years on chemical energy, or 40 million years on gravitational energy. There was no energy source known to 19th-century physics that would permit longer burning. 19th-century physics was not quite as powerful as modern physics – it did not have predictions accurate to within one part in 10^14. But 19th-century physics still had the mathematical character of modern physics; a discipline whose models produced detailed, precise, quantitative predictions. 19th-century evolutionary theory was wholly semitechnical, without a scrap of quantitative modeling. Not even Mendel’s experiments with peas were then known. And yet it did seem likely that evolution would require longer than a paltry 40 million years in which to operate – hundreds of millions, even billions of years. The antiquity of the Earth was a vague and semitechnical prediction, of a vague and semitechnical theory. In contrast, the 19th-century physicists had a precise and quantitative model, which through formal calculation produced the precise and quantitative dictum that the Sun simply could not have burned that long.\n“The limitations of geological periods, imposed by physical science, cannot, of course, disprove the hypothesis of transmutation of species; but it does seem sufficient to disprove the doctrine that transmutation has taken place through ‘descent with modification by natural selection.’”\n– Lord Kelvin, distinguished 19th-century physicist (from Zapato 1998).\nHistory records who won.\nThe moral?
If you can give 80% confident advance predictions on yes-or-no questions, it may be a “vague” theory, it may be wrong one time out of five, but you can still build up a heck of a huge scoring lead over the hypothesis of ignorance. Enough to confirm a theory, if there are no better competitors. Reality is consistent; every correct theory about the universe is compatible with every other correct theory. Imperfect maps can conflict, but there is only one territory. 19th-century evolutionism might have been a semitechnical discipline, but it was still correct (as we now know) and by far the best explanation (even in that day). Any conflict between evolutionism and another well-confirmed theory had to reflect some kind of anomaly, a mistake in the assertion that the two theories were incompatible. 19th-century physics couldn’t model the dynamics of the Sun – they didn’t know about nuclear reactions. They could not show that their understanding of the Sun was correct in technical detail , nor calculate from a confirmed model of the Sun to determine how long the Sun had existed. So in retrospect, we can say something like: “There was room for the possibility that 19th-century physics just didn’t understand the Sun.”\nBut that is hindsight. The real lesson is that, even though 19th-century physics was both precise and quantitative, it didn’t automatically dominate the semitechnical theory of 19th-century evolutionism. The theories were both well-supported. They were both correct in the domains over which they were generalized. The apparent conflict between them was an anomaly, and the anomaly turned out to stem from the incompleteness and incorrect application of 19th-century physics, not the incompleteness and incorrect application of 19th-century evolutionism. But it would be futile to compare the mountain of evidence supporting the one theory, versus the mountain of evidence supporting the other. Even in that day, both mountains were too large to suppose that either theory was simply mistaken. Mountains of evidence that large cannot be set to compete, as if one falsifies the other. You must be applying one theory incorrectly, or applying a model outside the domain it predicts well.\nSo you shouldn’t necessarily sneer at a theory just because it’s semitechnical. Semitechnical theories can build up high enough scores, compared to every available alternative, that you know the theory is at least approximately correct. Someday the semitechnical theory may be replaced or even falsified by a more precise competitor, but that’s true even of technical theories. Think of how Einstein’s General Relativity devoured Newton’s theory of gravitation.\nBut the correctness of a semitechnical theory – a theory that currently has no precise, computationally tractable models testable by feasible experiments – can be a lot less cut-and-dried than the correctness of a technical theory. It takes skill, patience, and examination to distinguish good semitechnical theories from theories that are just plain confused. This is not something that humans do well by instinct, which is why we have Science.\nPeople eagerly jump the gun and seize on any available reason to reject a disliked theory. That is why I gave the example of 19th-century evolutionism, to show why one should not be too quick to reject a “non-technical” theory out of hand. By the moral customs of science, 19th-century evolutionism was guilty of more than one sin. 19th-century evolutionism made no quantitative predictions. 
It was not readily subject to falsification. It was largely an explanation of what had already been seen. It lacked an underlying mechanism, as no one then knew about DNA. It even contradicted the 19th-century laws of physics. Yet natural selection was such an amazingly good post-facto explanation that people flocked to it, and they turned out to be right. Science, as a human endeavor, requires advance prediction. Probability theory, as math, does not distinguish between post-facto and advance prediction, because probability theory assumes that probability distributions are fixed properties of a hypothesis.\nThe rule about advance prediction is a rule of the social process of science – a moral custom and not a theorem. The moral custom exists to prevent human beings from making human mistakes that are hard to even describe in the language of probability theory, like tinkering after the fact with what you claim your hypothesis predicts. People concluded that 19th-century evolutionism was an excellent explanation, even if it was post-facto. That reasoning was correct as probability theory , which is why it worked despite all scientific sins. Probability theory is math. The social process of science is a set of legal conventions to keep people from cheating on the math.\nYet it is also true that, compared to a modern-day evolutionary theorist, evolutionary theorists of the late 19th and early 20th century often went sadly astray. Darwin, who was bright enough to invent the theory, got an amazing amount right. But Darwin’s successors, who were only bright enough to accept the theory, misunderstood evolution frequently and seriously. The usual process of science was then required to correct their mistakes. It is incredible how few errors of reasoning Darwin made in The Origin of Species and The Descent of Man , compared to they who followed.\nThat is also a hazard of a semitechnical theory. Even after the flash of genius insight is confirmed, merely average scientists may fail to apply the insights properly in the absence of formal models. As late as the 1960s biologists spoke of evolution working “for the good of the species”, or suggested that individuals would restrain their reproduction to prevent species overpopulation of a habitat. The best evolutionary theorists knew better, but average theorists did not. (Williams 1966.)\nSo it is far better to have a technical theory than a semitechnical theory. Unfortunately, Nature is not always so kind as to render Herself describable by neat, formal, computationally tractable models, nor does She always provide Her students with measuring instruments that can directly probe Her phenomena. Sometimes it is only a matter of time. 19th-century evolutionism was semitechnical, but later came the math of population genetics, and eventually DNA sequencing. But Nature will not always give you a phenomenon that you can describe with technical models fifteen seconds after you have the basic insight.\nYet the cutting edge of science, the controversy , is most often about a semitechnical theory, or nonsense posing as a semitechnical theory. By the time a theory achieves technical status, it is usually no longer controversial (among scientists). So the question of how to distinguish good semitechnical theories from nonsense is very important to scientists, and it is not as easy as dismissing out of hand any theory that is not technical. To the end of distinguishing truth from falsehood exists the entire discipline of rationality. 
The art is not reducible to a checklist, or at least, no checklist that an average scientist can apply reliably after an hour of training. If it was that simple we wouldn’t need science.\n\nWhy do you care about scientific controversies?\nNo, seriously, why do you care about scientific controversies ?\nThe media thinks that only the cutting edge of science, the very latest controversies, are worth reporting on. How often do you see headlines like “General Relativity still governing planetary orbits” or “Phlogiston theory remains false”? By the time anything is solid science, it is no longer a breaking headline. “Newsworthy” science is based on the thinnest of evidence and wrong half the time. If it were not on the uttermost fringes of the scientific frontier, it would not be news. Scientific controversies are problems so difficult that even people who’ve spent years mastering the field can still fool themselves. That’s what makes the problem controversial and attracts all the media attention. So the reporters show up, and hear the scientists speak fascinating words. The reporters are told that “particles” are “waves”, but there is no understanding of math for the words to invoke. What the physicist means by “wave” is not what the reporters hear, even if the physicist’s math applies also to the structure of water as it crashes on the shore.\nAnd then the reporters write stories, which are not worth the lives of the dead trees on which they are printed.\nBut what does it matter to you? Why should you pay attention to scientific controversies ? Why graze upon such sparse and rotten feed as the media offers, when there are so many solid meals to be found in textbooks? Nothing you’ll read as breaking news will ever hold a candle to the sheer beauty of settled science. Textbook science has carefully phrased explanations for new students, math derived step by step, plenty of experiments as illustration, and test problems.\nAnd textbook science is beautiful! Textbook science is comprehensible , unlike mere fascinating words that can never be truly beautiful. Elementary science textbooks describe simple theories, and simplicity is the core of scientific beauty. Fascinating words have no power, nor yet any meaning, without the math. The fascinating words are not knowledge but the illusion of knowledge, which is why it brings so little satisfaction to know that “gravity results from the curvature of spacetime”. Science is not in the fascinating words, though it’s all the media will ever give you.\nIs there ever justification for following a scientific controversy, while there remains any basic science you do not yet know? Yes. You could be an expert in that field, in which case that scientific controversy is your proper meat. Or the scientific controversy might be something you need to know now , because it affects your life. Maybe it’s the 19th century, and you’re gazing lustfully at a member of the appropriate sex wearing a 19th-century bathing suit, and you need to know whether your sexual desire comes from a psychology constructed by natural selection, or is a temptation placed in you by the Devil to lure you into hellfire.\nIt is not wholly impossible that we shall happen upon a scientific controversy that affects us, and find that we have a burning and urgent need for the correct answer. 
I shall therefore discuss some of the warning signs that historically distinguished vague hypotheses that later turned out to be unscientific gibberish, from vague hypotheses that later graduated to confirmed theories. Just remember the historical lesson of 19th-century evolutionism, and resist the temptation to fail every theory that misses a single item on your checklist. It is not my intention to give people another excuse to dismiss good science that discomforts them. If you apply stricter criteria to theories you dislike than theories you like (or vice versa!), then every additional nit you learn how to pick, every new logical flaw you learn how to detect, makes you that much stupider. Intelligence, to be useful, must be used for something other than defeating itself.\n\nOne of the classic signs of a poor hypothesis is that it must expend great effort in avoiding falsification – elaborating reasons why the hypothesis is compatible with the phenomenon, even though the phenomenon didn’t behave as expected.\nSagan (1995) gives the example of someone who claims that a dragon lives in their garage. Fascinated by this controversial question, we ignore all the textbooks providing total solutions to ancient mysteries on which alchemists spent their lives in vain… but never mind. We show up at the garage, look inside, and see: Nothing.\nAh, says the claimant, that’s because it’s an invisible dragon.\nNow as Sagan says, this is an odd claim, but it doesn’t mean we can never know if the dragon is there. Maybe we hear heavy breathing, and discover that carbon dioxide and heat appears in the garage’s air. Clawed footprints stamp through the dust. Occasionally a great gout of fire bursts out from no visible source. If so, we conclude that the garage contains an invisible dragon, and the reporters depart, satisfied that the controversy is over. Once something is a fact, it’s no longer exciting; it’s no fun believing in things that any old fool can see are true. If the dragon were really there, it would be no more fun to believe in the dragon than to believe in zebras.\nBut now suppose instead that we bring up our measuring instruments to see if carbon dioxide is accumulating in the garage’s air, and the claimant at once says: “No, no, it’s an invisible non-breathing dragon!” Okay. We begin to examine the dirt, and the claimant says: “No, it’s a flying invisible non-breathing dragon, so it won’t leave footprints.” We start to unload audio equipment, and the claimant says it’s an inaudible dragon. We bring in a bag of flour, to throw into the air to outline the dragon’s form, and the claimant quickly says that this dragon is permeable to flour.\nCarl Sagan originally drew the lesson that poor hypotheses need to do fast footwork to avoid falsification – to maintain an appearance of “fit”.\nI would point out that the claimant obviously has a good model of the situation somewhere in his head, because he can predict, in advance, exactly which excuses he’s going to need. When we bring up our measuring instruments, he knows that he’ll have to excuse the lack of any carbon dioxide in the air. When we bring in a bag of flour, the claimant knows that he’ll need to excuse the lack of any dragon-shaped form in the floury air.\nTo a Bayesian, a hypothesis isn’t something you assert in a loud, emphatic voice. A hypothesis is something that controls your anticipations , the probabilities you assign to future experiences. 
That’s what a probability is , to a Bayesian – that’s what you score, that’s what you calibrate. So while our claimant may say loudly, emphatically, and honestly that he believes there’s an invisible dragon in the garage, he does not anticipate there’s an invisible dragon in the garage – he anticipates exactly the same experience as the skeptic.\nWhen I judge the predictions of a hypothesis, I ask which experiences I would anticipate, not which facts I would believe.\nThe flip side:\nI recently argued with a friend of mine over a question of evolutionary theory. My friend alleged that the clustering of changes in the fossil record (apparently, there are periods of comparative stasis followed by comparatively sharp changes; itself a controversial observation known as “punctuated equilibrium”) showed that there was something wrong with our understanding of speciation. My friend thought that there was some unknown force at work, not supernatural, but some natural consideration that standard evolutionary theory didn’t take into account. Since my friend didn’t give a specific competing hypothesis that produced better predictions, his thesis had to be that the standard evolutionary model was stupid with respect to the data – that the standard model made a specific prediction that was wrong; that the model did worse than complete ignorance or some other default competitor.\nAt first I fell into the trap; I accepted the implicit assumption that the standard model predicted smoothness, and based my argument on my recollection that the fossil record changes weren’t as sharp as he claimed. He challenged me to produce an evolutionary intermediate between Homo erectus and Homo sapiens ; I googled and found Homo heidelbergensis . He congratulated me and acknowledged that I had scored a major point, but still insisted that the changes were too sharp, and not steady enough. I started to explain why I thought a pattern of uneven change could arise from the standard model: environmental selection pressures might not be constant… “Aha!” my friend said, “you’re making your excuses in advance.”\nBut suppose that the fossil record instead showed a smooth and gradual set of changes. Might my friend have argued that the standard model of evolution as a chaotic and noisy process could not account for such smoothness? If it is a scientific sin to claim post facto that our beloved hypothesis predicts the data, should it not be equally a sin to claim post facto that the competing hypothesis is stupid on the data?\nIf a hypothesis has a purely technical model, there is no trouble; we can compute the prediction of the model formally, without informal variables to provide a handle for post facto meddling. But what of semitechnical theories? Obviously a semitechnical theory must produce some good advance predictions about something , or else why bother? But after the theory is semi-confirmed, can the detractors claim that the data show a problem with the semitechnical theory, when the “problem” is constructed post facto? At the least the detractors must be very specific about what data a confirmed model predicts stupidly, and why the confirmed model must make (post facto) that stupid prediction. How sharp a change is “too sharp”, quantitatively, for the standard model of evolution to permit? Exactly how much steadiness do you think the standard model of evolution predicts? How do you know? 
Is it too late to say that, after you’ve seen the data?\nWhen my friend accused me of making excuses, I paused and asked myself which excuses I anticipated needing to make. I decided that my current grasp of evolutionary theory didn’t say anything about whether the rate of evolutionary change should be intermittent and jagged, or smooth and gradual. If I hadn’t seen the graph in advance, I could not have predicted it. (Unfortunately, I rendered even that verdict after seeing the data…) Maybe there are models in the evolutionary family that would make advance predictions of steadiness or variability, but if so, I don’t know about them. More to the point, my friend didn’t know either.\nIt is not always wise, to ask the opponents of a theory what their competitors predict. Get the theory’s predictions from the theory’s best advocates. Just make sure to write down their predictions in advance. Yes, sometimes a theory’s advocates try to make the theory “fit” evidence that plainly doesn’t fit. But if you find yourself wondering what a theory predicts, ask first among the theory’s advocates, and afterward ask the detractors to cross-examine.\nFurthermore: Models may include noise. If we hypothesize that the data are trending slowly and steadily upward, but our measuring instrument has an error of 5%, then it does no good to point to a data point that dips below the previous data point, and shout triumphantly, “See! It went down! Down down down! And don’t tell me why your theory fits the dip; you’re just making excuses!” Formal, technical models often incorporate explicit error terms. The error term spreads out the likelihood density, decreases the model’s precision and reduces the theory’s score, but the Bayesian scoring rule still governs. A technical model can allow mistakes, and make mistakes, and still do better than ignorance. In our supermarket example, even the precise hypothesis of 51 still bets only 90% of its probability mass on 51; the precise hypothesis claims only that 51 happens nine times out of ten. Ignoring nine 51s, pointing at one case of 82, and crowing in triumph, does not a refutation make. That’s not an excuse, it’s an explicit advance prediction of a technical model.\nThe error term makes the “precise” theory vulnerable to a superprecise alternative that predicted the 82. The standard model would also be vulnerable to a precisely ignorant model that predicted a 60% chance of 51 on the round where we saw 82, spreading out the likelihood more entropically on that particular error. No matter how good the theory, science always has room for a higher-scoring competitor. But if you don’t present a better alternative, if you try only to show that an accepted theory is stupid with respect to the data, that scientific endeavor may be more demanding than just replacing the old theory with a new one.\nAstronomers recorded the unexplained perihelion advance of Mercury, unaccounted for under Newtonian physics – or rather, Newtonian physics predicted 5557 seconds of arc per century, where the observed amount was 5600. (From Brown 1999.) But should the scientists of that day have junked Newtonian gravitation based on such small, unexplained counterevidence? What would they have used instead? Eventually, Newton’s theory of gravitation was set aside, after Einstein’s General Relativity precisely explained the orbital discrepancy of Mercury and also made successful advance predictions. 
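As a numerical aside, the discrepancy in question is 5600 minus 5557, about 43 seconds of arc per century, and the standard general-relativistic precession formula (a textbook result, quoted here with textbook values for the constants, not anything derived in this essay) lands almost exactly on it; that is what a sharp, quantitative prediction looks like:

```python
import math

# Relativistic perihelion precession per orbit (standard textbook formula):
#   delta_phi = 6 * pi * G * M / (c**2 * a * (1 - e**2))
# The constants below are standard values for the Sun and Mercury.
GM_sun = 1.327e20          # gravitational parameter of the Sun, m^3/s^2
c = 2.998e8                # speed of light, m/s
a = 5.791e10               # Mercury's semi-major axis, m
e = 0.2056                 # Mercury's orbital eccentricity
period_days = 87.97        # Mercury's orbital period

radians_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))
orbits_per_century = 36525 / period_days
arcseconds_per_century = math.degrees(radians_per_orbit * orbits_per_century) * 3600
print(round(arcseconds_per_century, 1))   # about 43, matching 5600 - 5557
```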
But there was no way to know in advance that this was how things would turn out.\nIn the nineteenth century there was a persistent anomaly in the orbit of Uranus. People said, “Maybe Newton’s law starts to fail at long distances.” Eventually some bright fellows looked at the anomaly and said, “Could this be an unknown outer planet?” Urbain Le Verrier and John Couch Adams independently did some scribbling and figuring, using Newton’s standard theory – and predicted Neptune’s location to within one degree of arc, dramatically confirming Newtonian gravitation. (Brown 1999.)\nOnly after General Relativity precisely produced the perihelion advance of Mercury, did we know Newtonian gravitation would never explain it.\n\nIn the Intuitive Explanation we saw how Karl Popper’s insight that falsification is stronger than confirmation, translates into a Bayesian truth about likelihood ratios. Popper erred in thinking that falsification was qualitatively different from confirmation; both are governed by the same Bayesian rules. But Popper’s philosophy reflected an important truth about a quantitative difference between falsification and confirmation.\n“Popper was profoundly impressed by the differences between the allegedly ‘scientific’ theories of Freud and Adler and the revolution effected by Einstein’s theory of relativity in physics in the first two decades of this century. The main difference between them, as Popper saw it, was that while Einstein’s theory was highly ‘risky’, in the sense that it was possible to deduce consequences from it which were, in the light of the then dominant Newtonian physics, highly improbable (e.g. that light is deflected towards solid bodies – confirmed by Eddington’s experiments in 1919), and which would, if they turned out to be false, falsify the whole theory, nothing could, even in principle, falsify psychoanalytic theories. These latter, Popper came to feel, have more in common with primitive myths than with genuine science. That is to say, he saw that what is apparently the chief source of strength of psychoanalysis, and the principal basis on which its claim to scientific status is grounded, viz. its capability to accommodate, and explain, every possible form of human behaviour, is in fact a critical weakness, for it entails that it is not, and could not be, genuinely predictive. Psychoanalytic theories by their nature are insufficiently precise to have negative implications, and so are immunised from experiential falsification…\n“Popper, then, repudiates induction, and rejects the view that it is the characteristic method of scientific investigation and inference, and substitutes falsifiability in its place. It is easy, he argues, to obtain evidence in favour of virtually any theory, and he consequently holds that such ‘corroboration’, as he terms it, should count scientifically only if it is the positive result of a genuinely ‘risky’ prediction, which might conceivably have been false. For Popper, a theory is scientific only if it is refutable by a conceivable event. Every genuine test of a scientific theory, then, is logically an attempt to refute or to falsify it…\n“Every genuine scientific theory then, in Popper’s view, is prohibitive, in the sense that it forbids, by implication, particular events or occurrences.”\n(Thornton 2002)\nOn Popper’s philosophy, the strength of a scientific theory is not how much it explains, but how much it doesn’t explain. The virtue of a scientific theory lies not in the outcomes it permits, but in the outcomes it prohibits.
Freud’s theories, which seemed to explain everything, prohibited nothing.\nTranslating this into Bayesian terms, we find that the more outcomes a model prohibits, the more probability density the model concentrates in the remaining, permitted outcomes. The more outcomes a theory prohibits, the greater the knowledge-content of the theory. The more daringly a theory exposes itself to falsification, the more definitely it tells you which experiences to anticipate.\nA theory that can explain any experience corresponds to a hypothesis of complete ignorance – a uniform distribution with probability density spread evenly over every possible outcome.\n\nOne of the most famous lessons of science is the case of the phlogiston theory of chemistry.\nPhlogiston was the 18th century’s answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright “fire” stuff? Why does the wood transform into ash? To both questions, the 18th century chemists answered, “phlogiston”.\n…and that was it, you see, that was their answer: “Phlogiston.”\nPhlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the “true material”. Flames extinguished in closed containers because the air became saturated with phlogiston. Charcoal left little residue upon burning because it was nearly pure phlogiston. (Moore 1961.)\nThis was a more primitive age of science, and so people did not notice and take offense that phlogiston theory made no advance predictions. Instead phlogiston theory just added on more and more independent clauses to explain more and more chemical observations. You couldn’t use phlogiston theory to predict the outcome of a chemical transformation – first you looked at the result, then you used phlogiston to explain it. It was not that, having never tried burning a flame in a closed container, phlogiston theorists predicted that the flame would go out when the air became “saturated” with phlogiston. Rather they lit a flame in a container, watched it go out, then said, “The air must have become saturated with phlogiston.”\nYou couldn’t even use phlogiston theory to constrain chemical transformations, to say what you did not expect to see. Phlogiston theory was infinitely flexible. In excusing everything, it explained nothing; a disguised hypothesis of zero knowledge.\nThe word phlogiston functioned not as an anticipation-controller but as a curiosity-stopper. You said “Why?” and the answer was “Phlogiston”.\n\nImagine looking at your hand, and knowing nothing of cells, nothing of biological chemistry, nothing of DNA. You know some anatomy, you know your hand contains muscles, but you don’t know why muscles move instead of lying there like clay. Your hand is just… stuff… and for some reason it moves under your direction. Is this not magic?\n“The animal body does not act as a thermodynamic engine … consciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears therefore that animated creatures have the power of immediately applying to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce derived mechanical effects… The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on.
Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms… Modern biologists were coming once more to the acceptance of something and that was a vital principle.”\n– Lord Kelvin (from Zapato 1998).\nThis was the theory of vitalism; that the difference between living matter and non-living matter consisted of an elan vital or vis vitalis. Elan vital infused living matter and caused it to move as consciously directed. Elan vital participated in chemical transformations which no mere non-living particles could undergo. Wohler’s artificial synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that “mere chemistry” could duplicate a product of biology. (Moore 1961.)\nBuilding on the previous lesson of phlogiston, we note at once that elan vital functions not as an anticipation-controller but as a curiosity-stopper. Vitalism doesn’t explain how the hand moves, nor tell you what transformations to expect from organic chemistry, and vitalism certainly permits no quantitative calculations. “Why? Elan vital!” And that was all there was to vitalism.\nBut the greater lesson lies in the vitalists’ reverence for the elan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but instead bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking.\nI quote Lord Kelvin to show that in every generation, there are scientific puzzles so wonderfully mys-TER-i-ous that they become sacred, making a solution sacrilege. Science is only good for explaining non-mysterious phenomena, like the course of planets, or the transformations of materials, or the biology of life; science can never answer questions about real mysteries like consciousness. Surely, if it were possible for science to explain consciousness, it would already have done so? As if all these other matters had not been mysteries for thousands of years and millions of years, from the dawn of intelligent thought right up until science solved them.\nPeople have no sense of history. They learn about stars and chemistry and biology in school and it seems that these matters have always been the proper meat of science, that they have never been mysterious. Astrologers and alchemists and vitalists were merely fools, to make such big deals out of such simple questions. When science must deal with some new puzzling phenomenon, it is a great shock to the children of that generation, for they have never encountered something that feels mysterious before. Surely such a sacred mystery as consciousness is infinitely beyond the reach of dry scientific thinking; science is only suited to mundane questions such as biology.\nVitalism shared with phlogiston the error of encapsulating the mystery as a substance. Fire was mysterious, and the phlogiston theory encapsulated the mystery in a mysterious substance called “phlogiston”. Life was a sacred mystery, and vitalism encapsulated the sacred mystery in a mysterious substance called “elan vital”. Neither “explanation” helped concentrate the model’s probability density.
The “explanation” just wrapped up the question as a small, hard, opaque black ball. In a play written by the author Moliere, a physician explains the power of a soporific by claiming that the soporific contains a “dormitive potency” – a fine parody of the art of fake explanation. (Cited in Kuhn 1962.)\nIt is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.\nBut the deeper failure is supposing that an answer can be mysterious. Mystery is a property of questions, not answers. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. They mixed up the map with the territory. All confusion and dismay exist in the mind, not in reality.\nI call theories such as vitalism mysterious answers to mysterious questions . These are the signs of mysterious answers: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts – the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to do this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena. Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of sacred inexplicability that it had at the start.\nThe flip side:\nBeware of checklist thinking: Having a sacred mystery, or a mysterious answer, is not the same as refusing to explain something. Some elements in our physics are taken as “fundamental”, not yet further reduced or explained. But these fundamental elements of our physics are governed by clearly defined, mathematically simple, formally computable causal rules.\nOccasionally some crackpot objects to modern physics on the grounds that it does not provide an “underlying mechanism” for a mathematical law currently treated as fundamental. (Claiming that a mathematical law lacks an “underlying mechanism” is one of the entries on John Baez’s Crackpot Index; Baez 1998.) The “underlying mechanism” the crackpot proposes in answer is vague, verbal, and yields no increase in predictive power – otherwise we would not classify the claimant as a crackpot.\nOur current physics makes the electromagnetic field fundamental, and refuses to explain it further. But the “electromagnetic field” is a fundamental governed by clear mathematical rules, with no properties outside the mathematical rules, subject to formal computation to describe its causal effect upon the world. Someday someone may suggest improved math that yields better predictions, but I would not indict the current model on grounds of mysteriousness. A theory that includes fundamental elements is not the same as a theory that contains mysterious elements .\nFundamentals should be simple. “Life” is not a good fundamental; “oxygen” is a good fundamental, and “electromagnetic field” is a better fundamental. Life might look simple to a vitalist – it’s the simple, magical ability of your muscles to move under your mental direction. 
Why shouldn’t life be explained by a simple, magical fundamental substance like elan vital ? But phenomena that seem psychologically very simple – little dots of light in the sky, orangey-bright hot flame, flesh moving under mental direction – often conceal vast depths of underlying complexity. The proposition that life is a complex phenomenon may seem incredible to the vitalist, staring at a blankly opaque mystery with no obvious handles; but yes, Virginia, there is underlying complexity. The criterion of simplicity that is relevant to Occam’s Razor is mathematical or computational simplicity. Once we render down our model into mathematically simple fundamental elements, not in themselves sharing the mysterious qualities of the mystery, interacting in clearly defined ways to produce the formerly mysterious phenomenon as a detailed prediction, that is as non-mysterious as humanity has ever figured out how to make anything.\n\nThe failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb and name some current theory, not yet disproven, that I think is analogously flawed to vitalism and phlogiston? I shall dare, but don’t try this at home. I also warn my readers that they should not accept this opinion of mine with the same confidence that attaches to science’s dismissal of phlogiston.\nI name the fad of emergence or emergent phenomena – systems which exhibit high-level behaviors that arise or “emerge” from the interaction of many low-level elements. Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem.\nIn decrying the emergence fad, I decry the use of “emergence” as an explanation in itself . It’s okay to have a completed model to which an emergence enthusiast could attach “emergent” as an adjective. One might legitimately have some specific model of how the behavior of an ant colony emerges from the behavior of the ants. A hypothesis like that can be formal and/or technical. The model of the ant colony has internal moving parts and produces specific predictions; it’s just that the model happens to fit the verbal term “emergent” – the behavior which emerges from modeling many interacting elements is different from the behavior of those elements considered in isolation. I do not consider it stupid to say that Phenomenon X emerges from Y, where Y is some specific model. The phrase “emerges from” is okay, if the phrase precedes some specific model to be judged on its own merits.\nHowever, this is not the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right. I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts – there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane. And even after the answer of “Why? 
Emergence!” is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.\nTo say that intelligence is an “emergent phenomenon” fits every possible behavior that intelligence could show, and therefore explains nothing. The model has no moving parts and does not concentrate its probability mass into specific outcomes. It is a disguised hypothesis of zero knowledge.\nTo see why I object to the academic fad in “emergence”, even though I have admitted the legitimacy of the phrase “emerges from”, consider that “arises from” is also a legitimate phrase. Gravity arises from the curvature of spacetime (according to a certain specific mathematical model, Einstein’s General Relativity). Chemistry arises from interactions between atoms (according to the specific model of quantum electrodynamics). Now suppose I should say that gravity is explained by “arisence” or that chemistry is an “arising phenomenon”, and claim that as my explanation.\nA fun exercise is to eliminate the adjective “emergent” from any sentence in which it appears, and see if the sentence says anything different.\nBefore: Human intelligence is an emergent product of neurons firing.\nAfter: Human intelligence is a product of neurons firing.\nBefore: The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.\nAfter: The behavior of the ant colony is the outcome of the interactions of many individual ants.\nEven better: A colony is made of ants. We can successfully predict some aspects of colony behavior using models that include only individual ants, without any global colony variables, showing that we understand how those colony behaviors arise from ant behaviors.\nAnother good exercise is to replace the word “emergent” with the old word, the explanation that people had to use before emergence was invented.\nBefore: Life is an emergent phenomenon.\nAfter: Life is a magical phenomenon.\nBefore: Human intelligence is an emergent product of neurons firing.\nAfter: Human intelligence is a magical product of neurons firing.\nDoes not each statement convey exactly the same amount of knowledge about the phenomenon’s behavior? Does not each hypothesis fit exactly the same set of outcomes?\nMagic is unpopular nowadays, unfashionable, not something you could safely postulate in a peer-reviewed journal. Why? Once upon a time, a few exceptionally wise scientists noticed that explanations which invoked “magic” just didn’t work as a way of understanding the world. By dint of strenuous evangelism, these wise scientists managed to make magical explanations unfashionable within a small academic community. But humans are still humans, and they have the same emotional needs and intellectual vulnerabilities. So later academics invented a new word, “emergence”, that carried exactly the same information content as “magic”, but had not yet become unfashionable. “Emergence” became very popular, just as saying “magic” used to be very popular. “Emergence” has the same deep appeal to human psychology, for the same reason. “Emergence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is a popular fad because it is the junk food of curiosity. You can explain anything using emergence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they’ve taken a few science classes in college. 
Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed in different clothes but still the same species psychology.\n\nMany people in this world believe that after dying they will face a stern-eyed fellow named St. Peter, who will examine their actions in life and accumulate a score for morality. Presumably St. Peter’s scoring rule is unique and invariant under trivial changes of perspective. Unfortunately, believers cannot obtain a quantitative, precisely computable specification of the scoring rule, which seems rather unfair.\nThe religion of Bayesianity holds that your eternal fate depends on the probability judgments you made in life. Unlike lesser faiths, Bayesianity can give a quantitative, precisely computable specification of how your eternal fate is determined.\nOur proper Bayesian scoring rule provides a way to accumulate scores across experiments, and the score is invariant regardless of how we slice up the “experiments” or in what order we accumulate the results. We add up the logarithms of the probabilities. This corresponds to multiplying together the probability assigned to the outcome in each experiment, to find the joint probability of all the experiments together. We take the logarithm to simplify our intuitive understanding of the accumulated score, to maintain our grip on the tiny fractions involved, and to ensure we maximize our expected score by stating our honest probabilities rather than placing all our play money on the most probable bet.\nBayesianity states that, when you die, Pierre-Simon Laplace examines every single event in your life, from finding your shoes next to your bed in the morning, to finding your workplace in its accustomed spot. Every losing lottery ticket means you cared enough to play. Laplace assesses the advance probability you assigned to each event. Where you did not assign a precise numerical probability in advance, Laplace examines your degree of anticipation or surprise, extrapolates other possible outcomes and your extrapolated reactions, and renormalizes your extrapolated emotions to a likelihood distribution over possible outcomes. (Hence the phrase “Laplacian superintelligence”.)\nThen Laplace takes every event in your life, and every probability you assigned to each event, and multiplies all the probabilities together. This is your Final Judgment – the probability you assigned to your life.\nThose who follow Bayesianity strive all their lives to maximize their Final Judgment. This is the sole virtue of Bayesianity. The rest is just math.\nMark you: the path of Bayesianity is strict. What probability shall you assign each morning, to the proposition, “The sun shall rise?” (We shall discount such quibbles as cloudy days, and that the Earth orbits the Sun.) Perhaps one who did not follow Bayesianity would be humble, and give a probability of 99.9%. But we who follow Bayesianity shall discard all considerations of modesty and arrogance, and scheme only to maximize our Final Judgment. Like an obsessive video-game player, we care only about this numerical score. We’re going to face this Sun-shall-rise issue 365 times per year, so we might be able to improve our Final Judgment considerably by tweaking our probability assignment.\nAs it stands, even if the Sun rises every morning, every year our Final Judgment will decrease by a factor of 0.7 (.999^365), roughly -0.52 bits. 
Every two years, our Final Judgment will decrease more than if we found ourselves ignorant of a coinflip’s outcome! Intolerable. If we increase our daily probability of sunrise to 99.99%, then each year our Final Judgment will decrease only by a factor of 0.964. Better. Still, in the unlikely event that we live exactly 70 years and then die, our Final Judgment will only be 7.75% of what it might have been. What if we assign a 99.999% probability to the sunrise? Then after 70 years, our Final Judgment will be multiplied by 77.4%.\nWhy not assign a probability of 1.0?\nOne who follows Bayesianity will never assign a probability of 1.0 to anything . Assigning a probability of 1.0 to some outcome uses up all your probability mass. If you assign a probability of 1.0 to some outcome, and reality delivers a different answer, you must have assigned the actual outcome a probability of 0 . This is Bayesianity’s sole mortal sin. Zero times anything is zero. When Laplace multiplies together all the probabilities of your life, the combined probability will be zero. Your Final Judgment will be doodly-squat, zilch, nada, nil. No matter how rational your guesses during the rest of your life, you’ll spend eternity next to some guy who believed in flying saucers and got all his information from the Weekly World News. Again we find it helpful to take the logarithm, revealing the innocent-sounding “zero” in its true form. Risking an outcome probability of zero is like accepting a bet with a payoff of negative infinity.\nWhat if humanity decides to take apart the Sun for mass (stellar engineering), or to switch off the Sun because it’s wasting entropy? Well, you say, you’ll see that coming, you’ll have a chance to alter your probability assignment before the actual event. What if an Artificial Intelligence in someone’s basement recursively self-improves to superintelligence, stealthily develops nanotechnology, and one morning it takes apart the Sun? If on the last night of the world you assign a probability of 99.999% to tomorrow’s sunrise, your Final Judgment will go down by a factor of 100,000. Minus 50 decibels! Awful, isn’t it?\nSo what is your best strategy? Well, suppose you 50% anticipate that a basement-spawned AI superintelligence will disassemble the Sun sometime in the next 10 years, and you figure there’s about an equal chance of this happening on any given day between now and then. On any given night, you would 99.98% anticipate the sun rising tomorrow. If this is really what you anticipate, then you have no motive to say anything except 99.98% as your probability. If you feel nervous that this anticipation is too low, or too high, it must not be what you anticipate after your nervousness is taken into account.\nBut the deeper truth of Bayesianity is this: you cannot game the system. You cannot give a humble answer, nor a confident one. You must figure out exactly how much you anticipate the Sun rising tomorrow, and say that number. You must shave away every hair of modesty or arrogance, and ask whether you expect to end up being scored on the Sun rising, or failing to rise. Look not to your excuses, but ask which excuses you expect to need. After you arrive at your exact degree of anticipation, the only way to further improve your Final Judgment is to improve the accuracy, calibration, and discrimination of your anticipation. 
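These factors are quick to recompute; here is a minimal back-of-the-envelope sketch (Python; the 365.25-day year is my own rounding choice, while the seventy-year lifespan and the ten-year window are the assumptions already in play above).

import math

def sunrise_scores(daily_p, days_per_year=365.25, years=70):
    # Multiply one probability per sunrise; also report the yearly log-2 score.
    yearly = daily_p ** days_per_year
    bits_per_year = days_per_year * math.log2(daily_p)
    lifetime = daily_p ** (days_per_year * years)
    return yearly, bits_per_year, lifetime

for p in (0.999, 0.9999, 0.99999):
    yearly, bits_per_year, lifetime = sunrise_scores(p)
    print(f"p = {p}: x{yearly:.3f} per year ({bits_per_year:+.2f} bits), "
          f"x{lifetime:.4f} over seventy years")
# Roughly: 0.999 -> 0.694 per year (-0.53 bits) and essentially zero over a lifetime;
# 0.9999 -> 0.964 per year and 0.0775 over seventy years; 0.99999 -> 0.7744 over seventy years.

# The 50%-chance-in-ten-years scenario for the Sun being taken apart:
print(f"{1 - 0.5 / (365.25 * 10):.5f}")   # 0.99986 nightly probability of sunrise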
You cannot do better except by guessing better and anticipating more precisely.\nEr, well, except that you could commit suicide when you turned five, thereby preventing your Final Judgment from decreasing any further. Or if we patch a new sin onto the utility function, enjoining against suicide, you could flee from mystery, avoiding all situations in which you thought you might not know everything. So much for that religion.\n\nIdeally, we predict the outcome of the experiment in advance, using our model, and then we perform the experiment to see if the outcome accords with our model. Unfortunately, we can’t always control the information stream. Sometimes Nature throws experiences at us, and by the time we think of an explanation, we’ve already seen the data we’re supposed to explain. This was one of the scientific sins committed by 19th-century evolutionism; Darwin observed the similarity of many species, and their adaptation to particular local environments, before the hypothesis of natural selection occurred to him. 19th-century evolutionism began life as a post facto explanation, not an advance prediction.\nNor is this a trouble only of semitechnical theories. In 1846, the successful deduction of Neptune’s existence from gravitational perturbations in the orbit of Uranus was considered a grand triumph for Newton’s theory of gravitation. Why? Because Neptune’s existence was the first observation that confirmed an advance prediction of Newtonian gravitation. All the other phenomena that Newton explained, such as orbits and orbital perturbations and tides, had been observed in great detail before Newton explained them. No one seriously doubted that Newton’s theory was correct. Newton’s theory explained too much too precisely, and it replaced a collection of ad-hoc models with a single unified mathematical law. Even so, the advance prediction of Neptune’s existence, followed by the observation of Neptune at almost exactly the predicted location, was considered the first grand triumph of Newton’s theory at predicting what no previous model could predict. Considerable time elapsed between widespread acceptance of Newton’s theory and the first impressive advance prediction of Newtonian gravitation. By the time Newton came up with his theory, scientists had already observed, in great detail, most of the phenomena that Newtonian gravitation predicted.\nBut the rule of advance prediction is a morality of science, not a law of probability theory. If you have already seen the data you must explain, then Science may darn you to heck, but your predicament doesn’t collapse the laws of probability theory. What does happen is that it becomes much more difficult for a hapless human to obey the laws of probability theory. When you’re deciding how to rate a hypothesis according to the Bayesian scoring rule, you need to figure out how much probability mass that hypothesis assigns to the observed outcome. If we must make our predictions in advance, then it’s easier to notice when someone is trying to claim every possible outcome as an advance prediction, using too much probability mass, being deliberately vague to avoid falsification, and so on.\nNo numerologist can predict next week’s winning lottery numbers, but they will be happy to explain the mystical significance of last week’s winning lottery numbers. Say the winning Mega Ball was 7 in last week’s lottery, out of 52 possible outcomes. Obviously this happened because 7 is the lucky number. So will the Mega Ball in next week’s lottery also come up 7? 
We understand that it’s not certain, of course, but if it’s the lucky number, you ought to assign a probability of higher than 1/52… and then we’ll score your guesses over the course of a few years, and if your score is too low we’ll have you flogged… what’s that you say? You want to assign a probability of exactly 1/52? But that’s the same probability as every other number; what happened to 7 being lucky? No, sorry, you can’t assign a 90% probability to 7 and also a 90% probability to 11. We understand they’re both lucky numbers. Yes, we understand that they’re very lucky numbers. But that’s not how it works.\nEven if the listener does not know the way of Bayes and does not ask for formal probabilities, they will probably become suspicious if you try to cover too many bases. Suppose they ask you to predict next week’s winning Mega Ball, and you use numerology to explain why the 1 ball would fit your theory very well, and why the 2 ball would fit your theory very well, and why the 3 ball would fit your theory very well… even the most credulous listener might begin to ask questions by the time you got to 12. Maybe you could tell us which numbers are unlucky and definitely won’t win the lottery? Well, 13 is unlucky, but it’s not absolutely impossible (you hedge, anticipating in advance which excuse you might need).\nBut if we ask you to explain last week’s lottery numbers, why, the 7 was practically inevitable. That 7 should definitely count as a major success for the “lucky numbers” model of the lottery. And it couldn’t possibly have been 13; luck theory rules that straight out.\n\nImagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands – you can use it to pick up glasses, drive a car, etc. How would you explain this hypothetical scenario? Take a moment to ponder this puzzle before continuing.\n\nspoiler space\n\nspoiler space\n\nspoiler space\n\nHow would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn’t. It isn’t going to happen.\nIt would be easy enough to produce a verbal explanation that “fit” the hypothetical. There are many explanations that can “fit” anything, including (as a special case of “anything”) my arm being replaced by a blue tentacle. Divine intervention is a good all-purpose explanation. Or aliens with arbitrary motives and capabilities. Or I could be mad, hallucinating, dreaming my life away in a hospital. Such explanations “fit” all outcomes equally well, and equally poorly, equating to hypotheses of complete ignorance.\nThe test of whether a model of reality “explains” my arm turning into a blue tentacle, is whether the model concentrates significant probability mass into that particular outcome. Why that dream, in the hospital? Why would aliens do that particular thing to me, as opposed to the other billion things they might do? Why would my arm turn into a tentacle on that morning, after remaining an arm through every other morning of my life? And in all cases I must look for an argument compelling enough to make that particular prediction in advance , not mere compatibility. Once I already knew the outcome, it would become far more difficult to sift through hypotheses to find good explanations. 
Whatever hypothesis I tried, I would be hard-pressed not to allocate more probability mass to yesterday’s blue-tentacle outcome than if I extrapolated blindly, seeking the model’s most likely prediction for tomorrow.\nA model does not always predict all the features of the data. Nature has no privileged tendency to present me with solvable challenges. Perhaps a deity toys with me, and the deity’s mind is computationally intractable. If I flip a fair coin there is no way to further explain the outcome, no model that makes a better prediction than the maximum-entropy hypothesis. But if I guess a model with no internal detail or a model that makes no further predictions, I not only have no reason to believe that guess, I have no reason to care. Last night my arm was replaced with a blue tentacle. Why? Aliens! So what will they do tomorrow? Similarly, if I attribute the blue tentacle to a hallucination as I dream my life away in a coma, I still don’t know any more about what I’ll hallucinate tomorrow. So why do I care whether it was aliens or hallucination?\nWhat might be a good explanation, then, if I woke up one morning and found my arm transformed into a blue tentacle? To claim a “good explanation” for this hypothetical experience would require an argument such that, contemplating the hypothetical argument now , before my arm has transformed into a blue tentacle, I would go to sleep worrying that my arm really would transform into a tentacle.\nPeople play games with plausibility, explaining events they expect to never actually encounter, yet this necessarily violates the laws of probability theory. How many people who thought they could ‘explain’ the hypothetical experience of waking up with their arm replaced by a tentacle, would go to sleep wondering if it might really happen to them? Had they the courage of their convictions, they would say: I do not expect to ever encounter this hypothetical experience, and therefore I cannot explain, nor have I a motive to try. Such things only happen in webcomics, and I need not prepare explanations, for in real life I shall never have a chance to use them. If I ever find myself in this impossible situation, let me miss no jot or tittle of my valuable bewilderment.\nTo a Bayesian, probabilities are anticipations, not mere beliefs to proclaim from the rooftops. If I have a model that assigns probability mass to waking up with a blue tentacle, then I am nervous about waking up with a blue tentacle. What if the model is a fanciful one, like a witch casting a spell that transports me into a randomly selected webcomic? Then the prior probability of webcomic witchery is so low that my real-world understanding doesn’t assign any significant weight to that hypothesis. The witchcraft hypothesis, if taken as a given, might assign non-insignificant likelihood to waking up with a blue tentacle. But my anticipation of that hypothesis is so low that I don’t anticipate any of the predictions of that hypothesis. That I can conceive of a witchcraft hypothesis should in no wise diminish my stark bewilderment if I actually wake up with a tentacle, because the real-world probability I assign to the witchcraft hypothesis is effectively zero. My zero-probability hypothesis wouldn’t help me explain waking up with a tentacle, because the argument isn’t good enough to make me anticipate waking up with a tentacle.\nIn the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to explain is to anticipate . 
To anticipate is to explain . Suppose I am a medical researcher, and in the ordinary course of pursuing my research, I notice that my clever new theory of anatomy seems to permit a small and vague possibility that my arm will transform into a blue tentacle. “Ha ha!”, I say, “how remarkable and silly!”, and feel ever so slightly nervous. That would be a good explanation for waking up with a tentacle, if it ever happened.\nIf a chain of reasoning doesn’t make me nervous, in advance, about waking up with a tentacle, then that reasoning would be a poor explanation if the event did happen, because the combination of prior probability and likelihood was too low to make me allocate any significant real-world probability mass to that outcome.\nIf you start from well-calibrated priors, and you apply Bayesian reasoning, you’ll end up with well-calibrated conclusions. Imagine that two million entities, scattered across different planets in the universe, have the opportunity to encounter something so strange as waking up with a tentacle (or – gasp! – ten fingers). One million of these entities say “one in a thousand” for the prior probability of some hypothesis X, and each hypothesis X says “one in a hundred” for the likelihood of waking up with a tentacle. And one million of these entities say “one in a hundred” for the prior probability of some hypothesis Y, and each hypothesis Y says “one in ten” for the likelihood of waking up with a tentacle. If we suppose that all entities are well-calibrated, then we shall look across the universe and find ten entities who wound up with a tentacle because of hypotheses of plausibility class X, and a thousand entities who wound up with tentacles because of hypotheses of plausibility class Y. So if you find yourself with a tentacle, and if your probabilities are well-calibrated, then the tentacle is more likely to stem from a hypothesis you would class as probable than a hypothesis you would class as improbable. (What if your probabilities are poorly calibrated, so that when you say “million-to-one” it happens one time out of twenty? Then you’re grossly overconfident, and we adjust your probabilities in the direction of less discrimination / greater entropy.)\nThe hypothesis of being transported into a webcomic, even if it “explains” the scenario of waking up with a blue tentacle, is a poor explanation because of its low prior probability. The webcomic hypothesis doesn’t contribute to explaining the tentacle, because it doesn’t make you anticipate waking up with a tentacle.\nIf we start with a quadrillion sentient minds scattered across the universe, quite a lot of entities will encounter events that are very likely, only about a mere million entities will experience events with lifetime likelihoods of a billion-to-one (as we would anticipate, surveying with infinite eyes and perfect calibration), and not a single entity will experience the impossible.\nIf, somehow, you really did wake up with a tentacle, it would likely be because of something much more probable than “being transported into a webcomic”, some perfectly normal reason to wake up with a tentacle which you just didn’t see coming. A reason like what? I don’t know. Nothing. I don’t anticipate waking up with a tentacle, so I can’t give any good explanation for it. Why should I bother crafting excuses that I don’t expect to use? 
If I was worried I might someday need a clever excuse for waking up with a tentacle, the reason I was nervous about the possibility would be my explanation.\nReality dishes out experiences using probability, not plausibility. If you find out that your laptop doesn’t obey conservation of momentum, then reality must think that a perfectly normal thing to do to you. How could violating conservation of momentum possibly be perfectly normal? I anticipate that question has no answer and will never need answering. Similarly, people do not wake up with tentacles, so apparently it is not perfectly normal.\n\nThere is a shattering truth, so surprising and terrifying that people resist the implications with all their strength. Yet there are a lonely few with the courage to accept this satori. Here is wisdom, if you would be wise:\nSince the beginning\nNot one unusual thing\nHas ever happened.\nAlas for those who turn their eyes from zebras and dream of dragons! If we cannot learn to take joy in the merely real, our lives shall be empty indeed.\n\nThis document is ©2005 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome. The web address of this page is http://eyudkowsky.wpengine.com/rational/technical/ .\nIf you enjoyed A Technical Explanation of Technical Explanation, you may enjoy the earlier works in the series, the Twelve Virtues of Rationality, The Simple Truth, and An Intuitive Explanation of Bayesian Reasoning.\nIf you’ve already read all that, you can move on to Overcoming Bias.\n\nReferences:\nJohn Baez (1998). “The Crackpot Index.”\nKevin Brown (1999). “Anomalous Precessions.” In mathpages.com: Postings to sci.math collected by Kevin Brown.\nRobyn M. Dawes (1988). “Rational Choice in an Uncertain World.” Harcourt Brace Jovanovich, Inc.\nE. T. Jaynes (1996). “Probability Theory: The Logic of Science.” Published posthumously by Cambridge University Press, ed. G. Larry Bretthorst (2003).\nT. S. Kuhn (1962). “The Structure of Scientific Revolutions.” The Chicago University Press.\nFriedrich Spee von Langenfeld (1631). “Cautio Criminalis, Or, a Book on Witch Trials.” Translated: Marcus Hellyer, 2003. University Press of Virginia.\nRuth Moore (1961). “The Coil of Life.” London: Constable. See also Phlogiston Theory, Demise of Phlogiston, and Friedrich Wöhler.\nRobert Pirsig (1974). “Zen and the Art of Motorcycle Maintenance.” New York: Bantam Books.\nKarl Popper (1959). “The Logic of Scientific Discovery.” Hutchinson, London.\nD. Robinson and J. Groves (1998). “Philosophy for Beginners.” Cambridge: Icon Books.\nCarl Sagan (1995). “The Demon-Haunted World: Science as a Candle in the Dark.” Random House, New York, NY.\nStephen Thornton (2002). “Karl Popper.” In Edward N. Zalta (ed.), “The Stanford Encyclopedia of Philosophy” (Winter 2002 Edition).\nA. Tversky and W. Edwards (1966). “Information versus reward in binary choice.” Journal of Experimental Psychology, 71, 680-683. See also Y. Schul and R. Mayo (2003), “Searching for certainty in an uncertain world.” In Journal of Behavioral Decision Making, 16:2, 93-106.\nJoachim Verhagen (2001). 
From the “ Canonical List of Science Jokes ”, version 7.27, collected by Joachim Verhagen.\nGeorge Williams (1966). “Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought.” Princeton, NJ: Princeton University Press.\nJ.F. Yates, J.W. Lee, W.R. Sieck, I. Choi,&P.C. Price (2002). “Probability judgment across cultures.” In T. Gilovich, D. Griffin,&D. Kahneman (Eds.), “Heuristics and Biases.” New York: Cambridge.\nLyle Zapato (1998). “ Lord Kelvin Quotations ”", "url": "https://www.yudkowsky.net/rational/technical", "title": "A Technical Explanation of Technical Explanation", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T02:43:02+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "ac6991daa0d44569d3e24ab8aa2e0a9c", "summary": []} -{"text": "An Intuitive Explanation of Bayes’ Theorem\n\nBayes’ Theoremfor the curious and bewildered;an excruciatingly gentle introduction.\n\nThis page has now been obsoleted by a vastly improved guide to Bayes’s Theorem, the Arbital Guide to Bayes’s Rule . Please read that instead. Seriously. I mean it. The current version is also plagued by a number of technical problems, with various applets no longer working. A mostly functional archived version of this essay can be found here.\n\nYour friends and colleagues are talking about something called “Bayes’ Theorem” or “Bayes’ Rule”, or something called Bayesian reasoning.  They sound really enthusiastic about it, too, so you google and find a webpage about Bayes’ Theorem and…\nIt’s this equation.  That’s all.  Just one equation.  The page you found gives a definition of it, but it doesn’t say what it is, or why it’s useful, or why your friends would be interested in it.  It looks like this random statistics thing.\nSo you came here.  Maybe you don’t understand what the equation says.  Maybe you understand it in theory, but every time you try to apply it in practice you get mixed up trying to remember the difference between p(a|x) and p(x|a) , and whether p(a)*p(x|a) belongs in the numerator or the denominator.  Maybe you see the theorem, and you understand the theorem, and you can use the theorem, but you can’t understand why your friends and/or research colleagues seem to think it’s the secret of the universe.  Maybe your friends are all wearing Bayes’ Theorem T-shirts, and you’re feeling left out.  Maybe you’re a girl looking for a boyfriend, but the boy you’re interested in refuses to date anyone who “isn’t Bayesian”.  What matters is that Bayes is cool, and if you don’t know Bayes, you aren’t cool.\nWhy does a mathematical concept generate this strange enthusiasm in its students?  What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?  What is the secret that the adherents of Bayes know?  What is the light that they have seen?\nSoon you will know.  Soon you will be one of us.\nWhile there are a few existing online explanations of Bayes’ Theorem, my experience with trying to introduce people to Bayesian reasoning is that the existing online explanations are too abstract.  Bayesian reasoning is very counterintuitive.   People do not employ Bayesian reasoning intuitively, find it very difficult to learn Bayesian reasoning when tutored, and rapidly forget Bayesian methods once the tutoring is over.  This holds equally true for novice students and highly trained professionals in a field.  
Bayesian reasoning is apparently one of those things which, like quantum mechanics or the Wason Selection Test, is inherently difficult for humans to grasp with our built-in mental faculties.\nOr so they claim.  Here you will find an attempt to offer an intuitive explanation of Bayesian reasoning – an excruciatingly gentle introduction that invokes all the human ways of grasping numbers, from natural frequencies to spatial visualization.  The intent is to convey, not abstract rules for manipulating numbers, but what the numbers mean, and why the rules are what they are (and cannot possibly be anything else).  When you are finished reading this page, you will see Bayesian problems in your dreams.\nAnd let’s begin.\n\nHere’s a story problem about a situation that doctors often encounter:\n1% of women at age forty who participate in routine screening have breast cancer.  80% of women with breast cancer will get positive mammographies.  9.6% of women without breast cancer will also get positive mammographies.  A woman in this age group had a positive mammography in a routine screening.  What is the probability that she actually has breast cancer?\nWhat do you think the answer is?  If you haven’t encountered this kind of problem before, please take a moment to come up with your own answer before continuing.\n\nNext, suppose I told you that most doctors get the same wrong answer on this problem – usually, only around 15% of doctors get it right.  (“Really?  15%?  Is that a real number, or an urban legend based on an Internet poll?”  It’s a real number.  See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies.  It’s a surprising result which is easy to replicate, so it’s been extensively replicated.)\nDo you want to think about your answer again?  Here’s a Javascript calculator if you need one.  This calculator has the usual precedence rules; multiplication before addition and so on.  If you’re not sure, I suggest using parentheses.\nCalculator:  Result:  \n\nOn the story problem above, most doctors estimate the probability to be between 70% and 80%, which is wildly incorrect.\nHere’s an alternate version of the problem on which doctors fare somewhat better:\n10 out of 1000 women at age forty who participate in routine screening have breast cancer.  800 out of 1000 women with breast cancer will get positive mammographies.  96 out of 1000 women without breast cancer will also get positive mammographies.  If 1000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?\nCalculator:  Result:  \n\nAnd finally, here’s the problem on which doctors fare best of all, with 46% – nearly half – arriving at the correct answer:\n100 out of 10,000 women at age forty who participate in routine screening have breast cancer.  80 of every 100 women with breast cancer will get a positive mammography.  950 out of  9,900 women without breast cancer will also get a positive mammography.  If 10,000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?\nCalculator:  Result:  \n\nThe correct answer is 7.8%, obtained as follows:  Out of 10,000 women, 100 have breast cancer; 80 of those 100 have positive mammographies.  From the same 10,000 women, 9,900 will not have breast cancer and of those 9,900 women, 950 will also get positive mammographies.  
This makes the total number of women with positive mammographies 950+80 or 1,030.  Of those 1,030 women with positive mammographies, 80 will have cancer.  Expressed as a proportion, this is 80/1,030 or 0.07767 or 7.8%.\nTo put it another way, before the mammography screening, the 10,000 women can be divided into two groups:\nGroup 1:  100 women with breast cancer.\nGroup 2:  9,900 women without breast cancer.\nSumming these two groups gives a total of 10,000 patients, confirming that none have been lost in the math.  After the mammography, the women can be divided into four groups:\nGroup A:  80 women with breast cancer, and a positive mammography.\nGroup B:  20 women with breast cancer, and a negative mammography.\nGroup C:  950 women without breast cancer, and a positive mammography.\nGroup D:  8,950 women without breast cancer, and a negative mammography.\nCalculator:  Result:\nAs you can check, the sum of all four groups is still 10,000.  The sum of groups A and B, the groups with breast cancer, corresponds to group 1; and the sum of groups C and D, the groups without breast cancer, corresponds to group 2; so administering a mammography does not actually change the number of women with breast cancer.  The proportion of the cancer patients (A + B) within the complete set of patients (A + B + C + D) is the same as the 1% prior chance that a woman has cancer: (80 + 20) / (80 + 20 + 950 + 8950) = 100 / 10000 = 1%.\nThe proportion of cancer patients with positive results, within the group of all patients with positive results, is the proportion of (A) within (A + C): 80 / (80 + 950) = 80 / 1030 = 7.8%.  If you administer a mammography to 10,000 patients, then out of the 1030 with positive mammographies, 80 of those positive-mammography patients will have cancer.  This is the correct answer, the answer a doctor should give a positive-mammography patient if she asks about the chance she has breast cancer; if thirteen patients ask this question, roughly 1 out of those 13 will have cancer.\n\nThe most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction of women with breast cancer who get positive results.  For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive mammographies, then the probability of a woman with a positive mammography having breast cancer must be around 80%.\nFiguring out the final answer always requires all three pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false positives, and the percentage of women with breast cancer who receive (correct) positives.\nTo see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer.  Even if mammography in this world detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred thousand false positives for every real case of cancer detected. 
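The same arithmetic can be wrapped in a few lines of code; a minimal sketch (Python – the function name and layout are simply a convenience of mine, not any standard library):

def posterior(prior, p_positive_given_cancer, p_positive_given_healthy):
    # Bayes' Theorem: fraction of positive-testing patients who actually have cancer.
    true_positives = prior * p_positive_given_cancer
    false_positives = (1 - prior) * p_positive_given_healthy
    return true_positives / (true_positives + false_positives)

# The original problem: 1% prior, 80% detection rate, 9.6% false-positive rate.
print(posterior(0.01, 0.80, 0.096))   # 0.0776..., about 7.8%

# The alternate universe where only one woman in a million has breast cancer,
# with an 8-in-10 detection rate and a 1-in-10 false-positive rate.
print(posterior(1e-6, 0.8, 0.1))      # about 8e-6 - on the order of 1 in 100,000

The same three-argument call covers the variant tests discussed below; only the numbers change.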
The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does increase the estimated probability, the probability isn’t increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.\nSimilarly, in an alternate universe where only one out of a million women does not have breast cancer, a positive result on the patient’s mammography obviously doesn’t mean that she has an 80% chance of having breast cancer!  If this were the case her estimated probability of having cancer would have been revised drastically downward after she got a positive result on her mammography – an 80% chance of having cancer is a lot less than 99.9999%!  If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct positive results, while one woman without breast cancer will get false positive results.  Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go from 99.9999% up to 99.999987%.  That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.\nThese two extreme examples help demonstrate that the mammography result doesn’t replace your old information about the patient’s chance of having cancer; the mammography slides the estimated probability in the direction of the result.  A positive result slides the original probability upward; a negative result slides the probability downward.  For example, in the original problem where 1% of the women have cancer, 80% of women with cancer get positive mammographies, and 9.6% of women without cancer get positive mammographies, a positive result on the mammography slides the 1% chance upward to 7.8%.\nMost people encountering problems of this type for the first time carry out the mental operation of replacing the original 1% probability with the 80% probability that a woman with cancer gets a positive mammography.  It may seem like a good idea, but it just doesn’t work.  “The probability that a woman with a positive mammography has breast cancer” is not at all the same thing as “the probability that a woman with breast cancer has a positive mammography”; they are as unlike as apples and cheese.  Finding the final answer, “the probability that a woman with a positive mammography has breast cancer”, uses all three pieces of problem information – “the prior probability that a woman has breast cancer”, “the probability that a woman with breast cancer gets a positive mammography”, and “the probability that a woman without breast cancer gets a positive mammography”.\n\nFun Fact!\nQ.  What is the Bayesian Conspiracy?\nA.  The Bayesian Conspiracy is a multinational, interdisciplinary, and shadowy group of scientists that controls publication, grants, tenure, and the illicit traffic in grad students.  The best way to be accepted into the Bayesian Conspiracy is to join the Campus Crusade for Bayes in high school or college, and gradually work your way up to the inner circles.  It is rumored that at the upper levels of the Bayesian Conspiracy exist nine silent figures known only as the Bayes Council.\n\nTo see that the final answer always depends on the chance that a woman without breast cancer gets a positive mammography, consider an alternate test, mammography+.  Like the original test, mammography+ returns positive for 80% of women with breast cancer. 
However, mammography+ returns a positive result for only one out of a million women without breast cancer – mammography+ has the same rate of false negatives, but a vastly lower rate of false positives.  Suppose a patient receives a positive mammography+.  What is the chance that this patient has breast cancer?  Under the new test, it is a virtual certainty – 99.988%, i.e., a 1 in 8082 chance of being healthy.\nCalculator:  Result:  Remember, at this point, that neither mammography nor mammography+ actually change the number of women who have breast cancer.  It may seem like “There is a virtual certainty you have breast cancer” is a terrible thing to say, causing much distress and despair; that the more hopeful verdict of the previous mammography test – a 7.8% chance of having breast cancer – was much to be preferred.  This comes under the heading of “Don’t shoot the messenger”.  The number of women who really do have cancer stays exactly the same between the two cases.  Only the accuracy with which we detect cancer changes.  Under the previous mammography test, 80 women with cancer (who already had cancer, before the mammography) are first told that they have a 7.8% chance of having cancer, creating X amount of uncertainty and fear, after which more detailed tests will inform them that they definitely do have breast cancer.  The old mammography test also involves informing 950 women without breast cancer that they have a 7.8% chance of having cancer, thus creating twelve times as much additional fear and uncertainty.  The new test, mammography+, does not give 950 women false positives, and the 80 women with cancer are told the same facts they would have learned eventually, only earlier and without an intervening period of uncertainty.  Mammography+ is thus a better test in terms of its total emotional impact on patients, as well as being more accurate.  Regardless of its emotional impact, it remains a fact that a patient with positive mammography+ has a 99.988% chance of having breast cancer.\nOf course, that mammography+ does not give 950 healthy women false positives means that all 80 of the patients with positive mammography+ will be patients with breast cancer.  Thus, if you have a positive mammography+, your chance of having cancer is a virtual certainty.  It is because mammography+ does not generate as many false positives (and needless emotional stress), that the (much smaller) group of patients who do get positive results will be composed almost entirely of genuine cancer patients (who have bad news coming to them regardless of when it arrives).\n\nSimilarly, let’s suppose that we have a less discriminating test, mammography*, that still has a 20% rate of false negatives, as in the original case.  However, mammography* has an 80% rate of false positives.  In other words, a patient without breast cancer has an 80% chance of getting a false positive result on her mammography* test.  
If we suppose the same 1% prior probability that a patient presenting herself for screening has breast cancer, what is the chance that a patient with positive mammography* has cancer?\nGroup 1:  100 patients with breast cancer.\nGroup 2:  9,900 patients without breast cancer.\nAfter mammography* screening:\nGroup A:  80 patients with breast cancer and a “positive” mammography*.\nGroup B:  20 patients with breast cancer and a “negative” mammography*.\nGroup C:  7920 patients without breast cancer and a “positive” mammography*.\nGroup D:  1980 patients without breast cancer and a “negative” mammography*.\nCalculator:  Result:\nThe result works out to 80 / 8,000, or 0.01.  This is exactly the same as the 1% prior probability that a patient has breast cancer!  A “positive” result on mammography* doesn’t change the probability that a woman has breast cancer at all.  You can similarly verify that a “negative” mammography* also counts for nothing.  And in fact it must be this way, because if mammography* has an 80% hit rate for patients with breast cancer, and also an 80% rate of false positives for patients without breast cancer, then mammography* is completely uncorrelated with breast cancer.  There’s no reason to call one result “positive” and one result “negative”; in fact, there’s no reason to call the test a “mammography”.  You can throw away your expensive mammography* equipment and replace it with a random number generator that outputs a red light 80% of the time and a green light 20% of the time; the results will be the same.  Furthermore, there’s no reason to call the red light a “positive” result or the green light a “negative” result.  You could have a green light 80% of the time and a red light 20% of the time, or a blue light 80% of the time and a purple light 20% of the time, and it would all have the same bearing on whether the patient has breast cancer: i.e., no bearing whatsoever.\nWe can show algebraically that this must hold for any case where the chance of a true positive and the chance of a false positive are the same, i.e.:\nGroup 1:  100 patients with breast cancer.\nGroup 2:  9,900 patients without breast cancer.\nNow consider a test where the probability of a true positive and the probability of a false positive are the same number M (in the example above, M = 80% or M = 0.8):\nGroup A:  100*M patients with breast cancer and a “positive” result.\nGroup B:  100*(1 – M) patients with breast cancer and a “negative” result.\nGroup C:  9,900*M patients without breast cancer and a “positive” result.\nGroup D:  9,900*(1 – M) patients without breast cancer and a “negative” result.\nThe proportion of patients with breast cancer, within the group of patients with a “positive” result, then equals 100*M / (100*M + 9900*M) = 100 / (100 + 9900) = 1%.  This holds true regardless of whether M is 80%, 30%, 50%, or 100%. 
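The cancellation is easy to check numerically as well; a minimal sketch (Python, using the 1% prior and the values of M just listed):

prior = 0.01
for M in (0.8, 0.3, 0.5, 1.0):
    # P*M / [P*M + (1 - P)*M] -- the M cancels, leaving the prior untouched.
    p_cancer_given_positive = (prior * M) / (prior * M + (1 - prior) * M)
    print(f"M = {M}: posterior = {p_cancer_given_positive:.4f}")
# Every line prints 0.0100: a test that flags cancer patients and healthy patients
# at the same rate teaches you nothing, no matter how high that rate is.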
If we have a mammography* test that returns “positive” results for 90% of patients with breast cancer and returns “positive” results for 90% of patients without breast cancer, the proportion of “positive”-testing patients who have breast cancer will still equal the original proportion of patients with breast cancer, i.e., 1%.\nYou can run through the same algebra, replacing the prior proportion of patients with breast cancer with an arbitrary percentage P:\nGroup 1:  Within some number of patients, a fraction P have breast cancer.\nGroup 2:  Within some number of patients, a fraction (1 – P) do not have breast cancer.\nAfter a “cancer test” that returns “positive” for a fraction M of patients with breast cancer, and also returns “positive” for the same fraction M of patients without cancer:\nGroup A:  P*M patients have breast cancer and a “positive” result.\nGroup B:  P*(1 – M) patients have breast cancer and a “negative” result.\nGroup C:  (1 – P)*M patients have no breast cancer and a “positive” result.\nGroup D:  (1 – P)*(1 – M) patients have no breast cancer and a “negative” result.\nThe chance that a patient with a “positive” result has breast cancer is then the proportion of group A within the combined group A + C, or P*M / [P*M + (1 – P)*M], which, cancelling the common factor M from the numerator and denominator, is P / [P + (1 – P)] or P / 1 or just P.  If the rate of false positives is the same as the rate of true positives, you always have the same probability after the test as when you started.\nWhich is common sense.  Take, for example, the “test” of flipping a coin; if the coin comes up heads, does it tell you anything about whether a patient has breast cancer?  No; the coin has a 50% chance of coming up heads if the patient has breast cancer, and also a 50% chance of coming up heads if the patient does not have breast cancer.  Therefore there is no reason to call either heads or tails a “positive” result.  It’s not the probability being “50/50” that makes the coin a bad test; it’s that the two probabilities, for “cancer patient turns up heads” and “healthy patient turns up heads”, are the same.  If the coin was slightly biased, so that it had a 60% chance of coming up heads, it still wouldn’t be a cancer test – what makes a coin a poor test is not that it has a 50/50 chance of coming up heads if the patient has cancer, but that it also has a 50/50 chance of coming up heads if the patient does not have cancer.  You can even use a test that comes up “positive” for cancer patients 100% of the time, and still not learn anything.  An example of such a test is “Add 2 + 2 and see if the answer is 4.”  This test returns positive 100% of the time for patients with breast cancer.  It also returns positive 100% of the time for patients without breast cancer.  So you learn nothing.\nThe original proportion of patients with breast cancer is known as the prior probability.  The chance that a patient with breast cancer gets a positive mammography, and the chance that a patient without breast cancer gets a positive mammography, are known as the two conditional probabilities.  Collectively, this initial information is known as the priors.  The final answer – the estimated probability that a patient has breast cancer, given that we know she has a positive result on her mammography – is known as the revised probability or the posterior probability.  What we’ve just shown is that if the two conditional probabilities are equal, the posterior probability equals the prior probability.
Q.  How can I find the priors for a problem?
A.  Many commonly used priors are listed in the Handbook of Chemistry and Physics.
Q.  Where do priors originally come from?
A.  Never ask that question.
Q.  Uh huh.  Then where do scientists get their priors?
A.  Priors for scientific problems are established by annual vote of the AAAS.  In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations.  This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time.  No one is really sure.
Q.  I see.  And where does everyone else get their priors?
A.  They download their priors from Kazaa.
Q.  What if the priors I want aren’t available on Kazaa?
A.  There’s a small, cluttered antique shop in a back alley of San Francisco’s Chinatown.  Don’t ask about the bronze rat.

Actually, priors are true or false just like the final answer – they reflect reality and can be judged by comparing them against reality.  For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong.  For our particular problem, the priors might have been established by three studies – a study on the case histories of women with breast cancer to see how many of them tested positive on a mammography, a study on women without breast cancer to see how many of them test positive on a mammography, and an epidemiological study on the prevalence of breast cancer in some specific demographic.

Suppose that a barrel contains many small plastic eggs.  Some eggs are painted red and some are painted blue.  40% of the eggs in the bin contain pearls, and 60% contain nothing.  30% of eggs containing pearls are painted blue, and 10% of eggs containing nothing are painted blue.  What is the probability that a blue egg contains a pearl?  For this example the arithmetic is simple enough that you may be able to do it in your head, and I would suggest trying to do so.

But just in case…  A more compact way of specifying the problem:

p(pearl) = 40%
p(blue|pearl) = 30%
p(blue|~pearl) = 10%
p(pearl|blue) = ?

“~” is shorthand for “not”, so ~pearl reads “not pearl”.

blue|pearl is shorthand for “blue given pearl” or “the probability that an egg is painted blue, given that the egg contains a pearl”.  One thing that’s confusing about this notation is that the order of implication is read right-to-left, as in Hebrew or Arabic.  blue|pearl means “blue <- pearl”, the degree to which pearl-ness implies blue-ness, not the degree to which blue-ness implies pearl-ness.  This is confusing, but it’s unfortunately the standard notation in probability theory.

Readers familiar with quantum mechanics will have already encountered this peculiarity; in quantum mechanics, for example, ⟨d|c⟩⟨c|b⟩⟨b|a⟩ reads as “the probability that a particle at A goes to B, then to C, ending up at D”.  To follow the particle, you move your eyes from right to left.  Reading from left to right, “|” means “given”; reading from right to left, “|” means “implies” or “leads to”.  Thus, moving your eyes from left to right, blue|pearl reads “blue given pearl” or “the probability that an egg is painted blue, given that the egg contains a pearl”.
Moving your eyes from right to left, blue|pearl reads “pearl implies blue” or “the probability that an egg containing a pearl is painted blue”.

The item on the right side is what you already know or the premise, and the item on the left side is the implication or conclusion.  If we have p(blue|pearl) = 30%, and we already know that some egg contains a pearl, then we can conclude there is a 30% chance that the egg is painted blue.  Thus, the final fact we’re looking for – “the chance that a blue egg contains a pearl” or “the probability that an egg contains a pearl, if we know the egg is painted blue” – reads p(pearl|blue).

Let’s return to the problem.  We have that 40% of the eggs contain pearls, and 60% of the eggs contain nothing.  30% of the eggs containing pearls are painted blue, so 12% of the eggs altogether contain pearls and are painted blue.  10% of the eggs containing nothing are painted blue, so altogether 6% of the eggs contain nothing and are painted blue.  A total of 18% of the eggs are painted blue, and a total of 12% of the eggs are painted blue and contain pearls, so the chance a blue egg contains a pearl is 12/18 or 2/3 or around 67%.

The applet below, courtesy of Christian Rovner, shows a graphic representation of this problem.

Looking at this applet, it’s easier to see why the final answer depends on all three probabilities; it’s the differential pressure between the two conditional probabilities, p(blue|pearl) and p(blue|~pearl), that slides the prior probability p(pearl) to the posterior probability p(pearl|blue).

As before, we can see the necessity of all three pieces of information by considering extreme cases (feel free to type them into the applet).  In a (large) barrel in which only one egg out of a thousand contains a pearl, knowing that an egg is painted blue slides the probability from 0.1% to 0.3% (instead of sliding the probability from 40% to 67%).  Similarly, if 999 out of 1000 eggs contain pearls, knowing that an egg is blue slides the probability from 99.9% to 99.966%; the probability that the egg does not contain a pearl goes from 1/1000 to around 1/3000.  Even when the prior probability changes, the differential pressure of the two conditional probabilities always slides the probability in the same direction.  If you learn the egg is painted blue, the probability the egg contains a pearl always goes up – but it goes up from the prior probability, so you need to know the prior probability in order to calculate the final answer.  0.1% goes up to 0.3%, 10% goes up to 25%, 40% goes up to 67%, 80% goes up to 92%, and 99.9% goes up to 99.966%.  If you’re interested in knowing how any other probabilities slide, you can type your own prior probability into the Java applet.  You can also click and drag the dividing line between pearl and ~pearl in the upper bar, and watch the posterior probability change in the bottom bar.

Studies of clinical reasoning show that most doctors carry out the mental operation of replacing the original 1% probability with the 80% probability that a woman with cancer would get a positive mammography.  Similarly, on the pearl-egg problem, most respondents unfamiliar with Bayesian reasoning would probably respond that the probability a blue egg contains a pearl is 30%, or perhaps 20% (the 30% chance of a true positive minus the 10% chance of a false positive).
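If you would like to check those sliding numbers for yourself, here is a rough Python sketch of the same calculation (the helper name is just for illustration); it reproduces the posterior probabilities quoted above for several different priors:

def p_pearl_given_blue(p_pearl, p_blue_given_pearl=0.30, p_blue_given_empty=0.10):
    # fractions of all eggs that are blue-and-pearl versus blue-and-empty
    blue_pearl = p_pearl * p_blue_given_pearl
    blue_empty = (1 - p_pearl) * p_blue_given_empty
    return blue_pearl / (blue_pearl + blue_empty)

for prior in (0.001, 0.10, 0.40, 0.80, 0.999):
    print(prior, "->", round(p_pearl_given_blue(prior), 5))
# 0.1% -> ~0.3%, 10% -> 25%, 40% -> ~67%, 80% -> ~92%, 99.9% -> ~99.966%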
Even if this mental operation seems like a good idea at the time, it makes no sense in terms of the question asked.  It’s like the experiment in which you ask a second-grader:  “If eighteen people get on a bus, and then seven more people get on the bus, how old is the bus driver?”  Many second-graders will respond:  “Twenty-five.”  They understand when they’re being prompted to carry out a particular mental procedure, but they haven’t quite connected the procedure to reality.  Similarly, to find the probability that a woman with a positive mammography has breast cancer, it makes no sense whatsoever to replace the original probability that the woman has cancer with the probability that a woman with breast cancer gets a positive mammography.  Neither can you subtract the probability of a false positive from the probability of the true positive.  These operations are as wildly irrelevant as adding the number of people on the bus to find the age of the bus driver.

I keep emphasizing the idea that evidence slides probability because of research that shows people tend to use spatial intuitions to grasp numbers.  In particular, there’s interesting evidence that we have an innate sense of quantity that’s localized to left inferior parietal cortex – patients with damage to this area can selectively lose their sense of whether 5 is less than 8, while retaining their ability to read, write, and so on.  (Yes, really!)  The parietal cortex processes our sense of where things are in space (roughly speaking), so an innate “number line”, or rather “quantity line”, may be responsible for the human sense of numbers.  This is why I suggest visualizing Bayesian evidence as sliding the probability along the number line; my hope is that this will translate Bayesian reasoning into something that makes sense to innate human brainware.  (That, really, is what an “intuitive explanation” is.)  For more information, see Stanislas Dehaene’s The Number Sense.

A study by Gigerenzer and Hoffrage in 1995 showed that some ways of phrasing story problems are much more evocative of correct Bayesian reasoning.  The least evocative phrasing used probabilities.  A slightly more evocative phrasing used frequencies instead of probabilities; the problem remained the same, but instead of saying that 1% of women had breast cancer, one would say that 1 out of 100 women had breast cancer, that 80 out of 100 women with breast cancer would get a positive mammography, and so on.  Why did a higher proportion of subjects display Bayesian reasoning on this problem?  Probably because saying “1 out of 100 women” encourages you to concretely visualize X women with cancer, leading you to visualize X women with cancer and a positive mammography, etc.

The most effective presentation found so far is what’s known as natural frequencies – saying that 40 out of 100 eggs contain pearls, 12 out of 40 eggs containing pearls are painted blue, and 6 out of 60 eggs containing nothing are painted blue.  A natural frequencies presentation is one in which the information about the prior probability is included in presenting the conditional probabilities.  If you were just learning about the eggs’ conditional probabilities through natural experimentation, you would – in the course of cracking open a hundred eggs – crack open around 40 eggs containing pearls, of which 12 eggs would be painted blue, while cracking open 60 eggs containing nothing, of which about 6 would be painted blue.  (A small simulation of this process is sketched below.)
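If you want to watch that “natural experimentation” happen, a toy simulation works fine – the sketch below is only an illustration, cracking 100 random eggs and tallying what you would actually see:

import random

def crack_eggs(n=100):
    blue_pearl = blue_empty = 0
    for _ in range(n):
        pearl = random.random() < 0.40                       # 40% of eggs contain pearls
        blue = random.random() < (0.30 if pearl else 0.10)   # paint depends on contents
        if blue and pearl:
            blue_pearl += 1
        elif blue:
            blue_empty += 1
    return blue_pearl, blue_empty

print(crack_eggs())   # typically something like (12, 6), give or take random noise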
In the course of learning the conditional probabilities, you’d see examples of blue eggs containing pearls about twice as often as you saw examples of blue eggs containing nothing.

It may seem like presenting the problem in this way is “cheating”, and indeed if it were a story problem in a math book, it probably would be cheating.  However, if you’re talking about real doctors, you want to cheat; you want the doctors to draw the right conclusions as easily as possible.  The obvious next move would be to present all medical statistics in terms of natural frequencies.  Unfortunately, while natural frequencies are a step in the right direction, they probably won’t be enough.  When problems are presented in natural frequencies, the proportion of people using Bayesian reasoning rises to around half.  A big improvement, but not big enough when you’re talking about real doctors and real patients.

A presentation of the problem in natural frequencies might be visualized like this:

In the frequency visualization, the selective attrition of the two conditional probabilities changes the proportion of eggs that contain pearls.  The bottom bar is shorter than the top bar, just as the number of eggs painted blue is less than the total number of eggs.  The probability graph shown earlier is really just the frequency graph with the bottom bar “renormalized”, stretched out to the same length as the top bar.  In the frequency applet you can change the conditional probabilities by clicking and dragging the left and right edges of the graph.  (For example, to change the conditional probability blue|pearl, click and drag the line on the left that stretches from the left edge of the top bar to the left edge of the bottom bar.)

In the probability applet, you can see that when the conditional probabilities are equal, there’s no differential pressure – the arrows are the same size – so the prior probability doesn’t slide between the top bar and the bottom bar.  But the bottom bar in the probability applet is just a renormalized (stretched out) version of the bottom bar in the frequency applet, and the frequency applet shows why the probability doesn’t slide if the two conditional probabilities are equal.  Here’s a case where the prior proportion of pearls remains 40%, and the proportion of pearl eggs painted blue remains 30%, but the proportion of empty eggs painted blue is also 30%:

If you diminish two shapes by the same factor, their relative proportion will be the same as before.  If you diminish the left section of the top bar by the same factor as the right section, then the bottom bar will have the same proportions as the top bar – it’ll just be smaller.  If the two conditional probabilities are equal, learning that the egg is blue doesn’t change the probability that the egg contains a pearl – for the same reason that similar triangles have identical angles; geometric figures don’t change shape when you shrink them by a constant factor.

In this case, you might as well just say that 30% of eggs are painted blue, since the probability of an egg being painted blue is independent of whether the egg contains a pearl.  Applying a “test” that is statistically independent of its condition just shrinks the sample size.  In this case, requiring that the egg be painted blue doesn’t shrink the group of eggs with pearls any more or less than it shrinks the group of eggs without pearls.  It just shrinks the total number of eggs in the sample.

FunFact!
Q.  Why did the Bayesian reasoner cross the road?
A.
You need more information to answer this question.\n\nHere’s what the original medical problem looks like when graphed.  1% of women have breast cancer, 80% of those women test positive on a mammography, and 9.6% of women without breast cancer also receive positive mammographies.\nAs is now clearly visible, the mammography doesn’t increase the probability a positive-testing woman has breast cancer by increasing the number of women with breast cancer – of course not; if mammography increased the number of women with breast cancer, no one would ever take the test!  However, requiring a positive mammography is a membership test that eliminates many more women without breast cancer than women with cancer.  The number of women without breast cancer diminishes by a factor of more than ten, from 9,900 to 950, while the number of women with breast cancer is diminished only from 100 to 80.  Thus, the proportion of 80 within 1,030 is much larger than the proportion of 100 within 10,000.  In the graph, the left sector (representing women with breast cancer) is small, but the mammography test projects almost all of this sector into the bottom bar.  The right sector (representing women without breast cancer) is large, but the mammography test projects a much smaller fraction of this sector into the bottom bar.  There are, indeed, fewer women with breast cancer and positive mammographies than there are women with breast cancer – obeying the law of probabilities which requires that p(A) >= p(A&B) .  But even though the left sector in the bottom bar is actually slightly smaller, the proportion of the left sector within the bottom bar is greater – though still not very great.  If the bottom bar were renormalized to the same length as the top bar, it would look like the left sector had expanded.  This is why the proportion of “women with breast cancer” in the group “women with positive mammographies” is higher than the proportion of “women with breast cancer” in the general population – although the proportion is still not very high.  The evidence of the positive mammography slides the prior probability of 1% to the posterior probability of 7.8%.\n\nSuppose there’s yet another variant of the mammography test, mammography@, which behaves as follows.  1% of women in a certain demographic have breast cancer.  Like ordinary mammography, mammography@ returns positive 9.6% of the time for women without breast cancer.  However, mammography@ returns positive 0% of the time (say, once in a billion) for women with breast cancer.  The graph for this scenario looks like this:\nWhat is it that this test actually does?  If a patient comes to you with a positive result on her mammography@, what do you say?\n\n“Congratulations, you’re among the rare 9.5% of the population whose health is definitely established by this test.”\nMammography@ isn’t a cancer test; it’s a health test!  Few women without breast cancer get positive results on mammography@, but only women without breast cancer ever get positive results at all.  Not much of the right sector of the top bar projects into the bottom bar, but none of the left sector projects into the bottom bar.  So a positive result on mammography@ means you definitely don’t have breast cancer.\n\nWhat makes ordinary mammography a positive indicator for breast cancer is not that someone named the result “positive”, but rather that the test result stands in a specific Bayesian relation to the condition of breast cancer.  
You could call the same result “positive” or “negative” or “blue” or “red” or “James Rutherford”, or give it no name at all, and the test result would still slide the probability in exactly the same way.  To minimize confusion, a test result which slides the probability of breast cancer upward should be called “positive”.  A test result which slides the probability of breast cancer downward should be called “negative”.  If the test result is statistically unrelated to the presence or absence of breast cancer – if the two conditional probabilities are equal – then we shouldn’t call the procedure a “cancer test”!  The meaning of the test is determined by the two conditional probabilities; any names attached to the results are simply convenient labels.\n\nThe bottom bar for the graph of mammography@ is small; mammography@ is a test that’s only rarely useful.  Or rather, the test only rarely gives strong evidence, and most of the time gives weak evidence.  A negative result on mammography@ does slide probability – it just doesn’t slide it very far.  Click the “Result” switch at the bottom left corner of the applet to see what a negative result on mammography@ would imply.  You might intuit that since the test could have returned positive for health, but didn’t, then the failure of the test to return positive must mean that the woman has a higher chance of having breast cancer – that her probability of having breast cancer must be slid upward by the negative result on her health test.\nThis intuition is correct!  The sum of the groups with negative results and positive results must always equal the group of all women.  If the positive-testing group has “more than its fair share” of women without breast cancer, there must be an at least slightly higher proportion of women with cancer in the negative-testing group.  A positive result is rare but very strong evidence in one direction, while a negative result is common but very weak evidence in the opposite direction.  You might call this the Law of Conservation of Probability – not a standard term, but the conservation rule is exact.  If you take the revised probability of breast cancer after a positive result, times the probability of a positive result, and add that to the revised probability of breast cancer after a negative result, times the probability of a negative result, then you must always arrive at the prior probability.  If you don’t yet know what the test result is, the expected revised probability after the test result arrives – taking both possible results into account – should always equal the prior probability.\nOn ordinary mammography, the test is expected to return “positive” 10.3% of the time – 80 positive women with cancer plus 950 positive women without cancer equals 1030 women with positive results.  Conversely, the mammography should return negative 89.7% of the time:  100% – 10.3% = 89.7%.  A positive result slides the revised probability from 1% to 7.8%, while a negative result slides the revised probability from 1% to 0.22%.  So p(cancer|positive)*p(positive) + p(cancer|negative)*p(negative) = 7.8%*10.3% + 0.22%*89.7% = 1% = p(cancer) , as expected.\nCalculator:  Result:  \n\nWhy “as expected”?  
Let’s take a look at the quantities involved:

p(cancer) = 0.01   (Group 1: 100 women with breast cancer)
p(~cancer) = 0.99   (Group 2: 9900 women without breast cancer)

p(positive|cancer) = 80.0%   (80% of women with breast cancer have positive mammographies)
p(~positive|cancer) = 20.0%   (20% of women with breast cancer have negative mammographies)
p(positive|~cancer) = 9.6%   (9.6% of women without breast cancer have positive mammographies)
p(~positive|~cancer) = 90.4%   (90.4% of women without breast cancer have negative mammographies)

p(cancer&positive) = 0.008   (Group A: 80 women with breast cancer and positive mammographies)
p(cancer&~positive) = 0.002   (Group B: 20 women with breast cancer and negative mammographies)
p(~cancer&positive) = 0.095   (Group C: 950 women without breast cancer and positive mammographies)
p(~cancer&~positive) = 0.895   (Group D: 8950 women without breast cancer and negative mammographies)

p(positive) = 0.103   (1030 women with positive results)
p(~positive) = 0.897   (8970 women with negative results)

p(cancer|positive) = 7.80%   (Chance you have breast cancer if mammography is positive: 7.8%)
p(~cancer|positive) = 92.20%   (Chance you are healthy if mammography is positive: 92.2%)
p(cancer|~positive) = 0.22%   (Chance you have breast cancer if mammography is negative: 0.22%)
p(~cancer|~positive) = 99.78%   (Chance you are healthy if mammography is negative: 99.78%)

One of the common confusions in using Bayesian reasoning is to mix up some or all of these quantities – which, as you can see, are all numerically different and have different meanings.  p(A&B) is the same as p(B&A), but p(A|B) is not the same thing as p(B|A), and p(A&B) is completely different from p(A|B).  (I don’t know who chose the symmetrical “|” symbol to mean “implies”, and then made the direction of implication right-to-left, but it was probably a bad idea.)

To get acquainted with all these quantities and the relationships between them, we’ll play “follow the degrees of freedom”.  For example, the two quantities p(cancer) and p(~cancer) have 1 degree of freedom between them, because of the general law p(A) + p(~A) = 1.  If you know that p(~cancer) = .99, you can obtain p(cancer) = 1 – p(~cancer) = .01.  There’s no room to say that p(~cancer) = .99 and then also specify p(cancer) = .25; it would violate the rule p(A) + p(~A) = 1.

p(positive|cancer) and p(~positive|cancer) also have only one degree of freedom between them; either a woman with breast cancer gets a positive mammography or she doesn’t.  On the other hand, p(positive|cancer) and p(positive|~cancer) have two degrees of freedom.  You can have a mammography test that returns positive for 80% of cancerous patients and 9.6% of healthy patients, or that returns positive for 70% of cancerous patients and 2% of healthy patients, or even a health test that returns “positive” for 30% of cancerous patients and 92% of healthy patients.  The two quantities, the output of the mammography test for cancerous patients and the output of the mammography test for healthy patients, are in mathematical terms independent; one cannot be obtained from the other in any way, and so they have two degrees of freedom between them.

What about p(positive&cancer), p(positive|cancer), and p(cancer)?  Here we have three quantities; how many degrees of freedom are there?  In this case the equation that must hold is p(positive&cancer) = p(positive|cancer) * p(cancer).  This equality reduces the degrees of freedom by one.
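As an aside, the whole table above can be rebuilt from just those three pieces of prior information – one more way of seeing that it carries only three degrees of freedom.  A rough Python sketch (the variable names are chosen for readability, nothing more):

p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# the four joint probabilities (groups A, B, C, D)
a = p_cancer * p_pos_given_cancer               # cancer & positive   = 0.008
b = p_cancer * (1 - p_pos_given_cancer)         # cancer & negative   = 0.002
c = (1 - p_cancer) * p_pos_given_healthy        # healthy & positive  = ~0.095
d = (1 - p_cancer) * (1 - p_pos_given_healthy)  # healthy & negative  = ~0.895

p_positive = a + c                              # ~0.103
p_cancer_given_positive = a / (a + c)           # ~0.078, i.e. 7.8%
p_cancer_given_negative = b / (b + d)           # ~0.0022, i.e. 0.22%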
If we know the fraction of patients with cancer, and chance that a cancerous patient has a positive mammography, we can deduce the fraction of patients who have breast cancer and a positive mammography by multiplying.  You should recognize this operation from the graph; it’s the projection of the top bar into the bottom bar.  p(cancer) is the left sector of the top bar, and p(positive|cancer) determines how much of that sector projects into the bottom bar, and the left sector of the bottom bar is p(positive&cancer) .\nSimilarly, if we know the number of patients with breast cancer and positive mammographies, and also the number of patients with breast cancer, we can estimate the chance that a woman with breast cancer gets a positive mammography by dividing: p(positive|cancer) = p(positive&cancer) / p(cancer) .  In fact, this is exactly how such medical diagnostic tests are calibrated; you do a study on 8,520 women with breast cancer and see that there are 6,816 (or thereabouts) women with breast cancer and positive mammographies, then divide 6,816 by 8520 to find that 80% of women with breast cancer had positive mammographies.  (Incidentally, if you accidentally divide 8520 by 6,816 instead of the other way around, your calculations will start doing strange things, such as insisting that 125% of women with breast cancer and positive mammographies have breast cancer.  This is a common mistake in carrying out Bayesian arithmetic, in my experience.)  And finally, if you know p(positive&cancer) and p(positive|cancer) , you can deduce how many cancer patients there must have been originally.  There are two degrees of freedom shared out among the three quantities; if we know any two, we can deduce the third.\nHow about p(positive) , p(positive&cancer) , and p(positive&~cancer) ?  Again there are only two degrees of freedom among these three variables.  The equation occupying the extra degree of freedom is p(positive) = p(positive&cancer) + p(positive&~cancer) .  This is how p(positive) is computed to begin with; we figure out the number of women with breast cancer who have positive mammographies, and the number of women without breast cancer who have positive mammographies, then add them together to get the total number of women with positive mammographies.  It would be very strange to go out and conduct a study to determine the number of women with positive mammographies – just that one number and nothing else – but in theory you could do so.  And if you then conducted another study and found the number of those women who had positive mammographies and breast cancer, you would also know the number of women with positive mammographies and no breast cancer – either a woman with a positive mammography has breast cancer or she doesn’t.  In general, p(A&B) + p(A&~B) = p(A) .  Symmetrically, p(A&B) + p(~A&B) = p(B) .What about p(positive&cancer) , p(positive&~cancer) , p(~positive&cancer) , and p(~positive&~cancer) ?  You might at first be tempted to think that there are only two degrees of freedom for these four quantities – that you can, for example, get p(positive&~cancer) by multiplying p(positive) * p(~cancer) , and thus that all four quantities can be found given only the two quantities p(positive) and p(cancer) .  This is not the case!  p(positive&~cancer) = p(positive) * p(~cancer) only if the two probabilities are statistically independent – if the chance that a woman has breast cancer has no bearing on whether she has a positive mammography.  
As you’ll recall, this amounts to requiring that the two conditional probabilities be equal to each other – a requirement which would eliminate one degree of freedom.  If you remember that these four quantities are the groups A, B, C, and D, you can look over those four groups and realize that, in theory, you can put any number of people into the four groups.  If you start with a group of 80 women with breast cancer and positive mammographies, there’s no reason why you can’t add another group of 500 women with breast cancer and negative mammographies, followed by a group of 3 women without breast cancer and negative mammographies, and so on.  So now it seems like the four quantities have four degrees of freedom.  And they would, except that in expressing them as probabilities, we need to normalize them to fractions of the complete group, which adds the constraint that p(positive&cancer) + p(positive&~cancer) + p(~positive&cancer) + p(~positive&~cancer) = 1.  This equation takes up one degree of freedom, leaving three degrees of freedom among the four quantities.  If you specify the fractions of women in groups A, B, and D, you can deduce the fraction of women in group C.

Given the four groups A, B, C, and D, it is very straightforward to compute everything else:  p(cancer) = A + B, p(~positive|cancer) = B / (A + B), and so on.  Since ABCD contains three degrees of freedom, it follows that the entire set of 16 probabilities contains only three degrees of freedom.  Remember that in our problems we always needed three pieces of information – the prior probability and the two conditional probabilities – which, indeed, have three degrees of freedom among them.  Actually, for Bayesian problems, any three quantities with three degrees of freedom between them should logically specify the entire problem.  For example, let’s take a barrel of eggs with p(blue) = 0.40, p(blue|pearl) = 5/13, and p(~blue&~pearl) = 0.20.  Given this information, you can compute p(pearl|blue).

As a story problem:
Suppose you have a large barrel containing a number of plastic eggs.  Some eggs contain pearls, the rest contain nothing.  Some eggs are painted blue, the rest are painted red.  Suppose that 40% of the eggs are painted blue, 5/13 of the eggs containing pearls are painted blue, and 20% of the eggs are both empty and painted red.  What is the probability that an egg painted blue contains a pearl?

Try it – I assure you it is possible.  (If you get stuck, one way of setting up the calculation is sketched below.)  You probably shouldn’t try to solve this with just a Javascript calculator, though.  I used a Python console.  (In theory, pencil and paper should also work, but I don’t know anyone who owns a pencil so I couldn’t try it personally.)

As a check on your calculations, does the (meaningless) quantity p(~pearl|~blue)/p(pearl) roughly equal .51?  (In story problem terms:  The likelihood that a red egg is empty, divided by the likelihood that an egg contains a pearl, equals approximately .51.)  Of course, using this information in the problem would be cheating.

If you can solve that problem, then when we revisit Conservation of Probability, it will seem perfectly straightforward.  Of course the mean revised probability, after administering the test, must be the same as the prior probability.
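(Here, as promised, is one way of setting up the barrel calculation in Python – a rough sketch that gives away the answer, so skip past it if you’d rather work it out yourself.)

p_blue = 0.40
p_blue_given_pearl = 5/13
p_red_and_empty = 0.20             # p(~blue & ~pearl)

# p(~pearl) splits into (blue & ~pearl) + (red & ~pearl),
# and p(blue & ~pearl) = p(blue) - p(pearl)*p(blue|pearl).
# So:  1 - p(pearl) = p_blue - p(pearl)*p_blue_given_pearl + p_red_and_empty.
# Solving for p(pearl):
p_pearl = (1 - p_blue - p_red_and_empty) / (1 - p_blue_given_pearl)    # = 0.65

p_pearl_and_blue = p_pearl * p_blue_given_pearl                        # = 0.25
p_pearl_given_blue = p_pearl_and_blue / p_blue                         # = 0.625

# the "meaningless" check quantity from the text:
check = (p_red_and_empty / (1 - p_blue)) / p_pearl                     # ~0.51

print(p_pearl_given_blue, check)

Now, back to Conservation of Probability.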
Of course strong but rare evidence in one direction must be counterbalanced by common but weak evidence in the other direction.

Because:
p(cancer|positive)*p(positive) + p(cancer|~positive)*p(~positive) = p(cancer)

In terms of the four groups:

p(cancer|positive)  = A / (A + C)
p(positive)         = A + C
p(cancer&positive)  = A
p(cancer|~positive) = B / (B + D)
p(~positive)        = B + D
p(cancer&~positive) = B
p(cancer)           = A + B

Let’s return to the original barrel of eggs – 40% of the eggs containing pearls, 30% of the pearl eggs painted blue, 10% of the empty eggs painted blue.  The graph for this problem is:

What happens to the revised probability, p(pearl|blue), if the proportion of eggs containing pearls is kept constant, but 60% of the eggs with pearls are painted blue (instead of 30%), and 20% of the empty eggs are painted blue (instead of 10%)?  You could type 60% and 20% into the inputs for the two conditional probabilities, and see how the graph changes – but can you figure out in advance what the change will look like?

If you guessed that the revised probability remains the same, because the bottom bar grows by a factor of 2 but retains the same proportions, congratulations!  Take a moment to think about how far you’ve come.  Looking at a problem like

1% of women have breast cancer.  80% of women with breast cancer get positive mammographies.  9.6% of women without breast cancer get positive mammographies.  If a woman has a positive mammography, what is the probability she has breast cancer?

the vast majority of respondents intuit that around 70-80% of women with positive mammographies have breast cancer.  Now, looking at a problem like

Suppose there are two barrels containing many small plastic eggs.  In both barrels, some eggs are painted blue and the rest are painted red.  In both barrels, 40% of the eggs contain pearls and the rest are empty.  In the first barrel, 30% of the pearl eggs are painted blue, and 10% of the empty eggs are painted blue.  In the second barrel, 60% of the pearl eggs are painted blue, and 20% of the empty eggs are painted blue.  Would you rather have a blue egg from the first or second barrel?

you can see it’s intuitively obvious that the probability of a blue egg containing a pearl is the same for either barrel.  Imagine how hard it would be to see that using the old way of thinking!

It’s intuitively obvious, but how to prove it?  Suppose that we call P the prior probability that an egg contains a pearl, that we call M the first conditional probability (that a pearl egg is painted blue), and N the second conditional probability (that an empty egg is painted blue).  Suppose that M and N are both increased or diminished by an arbitrary factor X – for example, in the problem above, they are both increased by a factor of 2.  Does the revised probability that an egg contains a pearl, given that we know the egg is blue, stay the same?

p(pearl) = P
p(blue|pearl) = M*X
p(blue|~pearl) = N*X
p(pearl|blue) = ?

From these quantities, we get the four groups:

Group A:  p(pearl&blue)   = P*M*X
Group B:  p(pearl&~blue)  = P*(1 – (M*X))
Group C:  p(~pearl&blue)  = (1 – P)*N*X
Group D:  p(~pearl&~blue) = (1 – P)*(1 – (N*X))

The proportion of eggs that contain pearls and are blue, within the group of all blue eggs, is then the proportion of group (A) within the group (A + C), equalling P*M*X / (P*M*X + (1 – P)*N*X).
The factor X in the numerator and denominator cancels out, so increasing or diminishing both conditional probabilities by a constant factor doesn’t change the revised probability.

FunFact!
Q.  Suppose that there are two barrels, each containing a number of plastic eggs.  In both barrels, some eggs are painted blue and the rest are painted red.  In the first barrel, 90% of the eggs contain pearls and 20% of the pearl eggs are painted blue.  In the second barrel, 45% of the eggs contain pearls and 60% of the empty eggs are painted red.  Would you rather have a blue pearl egg from the first or second barrel?
A.  Actually, it doesn’t matter which barrel you choose!  Can you see why?

The probability that a test gives a true positive divided by the probability that a test gives a false positive is known as the likelihood ratio of that test.  Does the likelihood ratio of a medical test sum up everything there is to know about the usefulness of the test?

No, it does not!  The likelihood ratio sums up everything there is to know about the meaning of a positive result on the medical test, but the meaning of a negative result on the test is not specified, nor is the frequency with which the test is useful.  If we examine the algebra above, while p(pearl|blue) remains constant, p(pearl|~blue) may change – the X does not cancel out.  As a story problem, this strange fact would look something like this:

Suppose that there are two barrels, each containing a number of plastic eggs.  In both barrels, 40% of the eggs contain pearls and the rest contain nothing.  In both barrels, some eggs are painted blue and the rest are painted red.  In the first barrel, 30% of the eggs with pearls are painted blue, and 10% of the empty eggs are painted blue.  In the second barrel, 90% of the eggs with pearls are painted blue, and 30% of the empty eggs are painted blue.  Would you rather have a blue egg from the first or second barrel?  Would you rather have a red egg from the first or second barrel?

For the first question, the answer is that we don’t care whether we get the blue egg from the first or second barrel.  For the second question, however, the probabilities do change – in the first barrel, 34% of the red eggs contain pearls, while in the second barrel 8.7% of the red eggs contain pearls!  Thus, we should prefer to get a red egg from the first barrel.  In the first barrel, 70% of the pearl eggs are painted red, and 90% of the empty eggs are painted red.  In the second barrel, 10% of the pearl eggs are painted red, and 70% of the empty eggs are painted red.

What goes on here?  We start out by noting that, counter to intuition, p(pearl|blue) and p(pearl|~blue) have two degrees of freedom among them even when p(pearl) is fixed – so there’s no reason why one quantity shouldn’t change while the other remains constant.  But didn’t we just get through establishing a law for “Conservation of Probability”, which says that p(pearl|blue)*p(blue) + p(pearl|~blue)*p(~blue) = p(pearl)?  Doesn’t this equation take up one degree of freedom?  No, because p(blue) isn’t fixed between the two problems.  In the second barrel, the proportion of blue eggs containing pearls is the same as in the first barrel, but a much larger fraction of eggs are painted blue!  This alters the set of red eggs in such a way that the proportions do change.  Here’s a graph for the red eggs in the second barrel:

Let’s return to the example of a medical test.
The likelihood ratio of a medical test – the number of true positives divided by the number of false positives – tells us everything there is to know about the meaning of a positive result.  But it doesn’t tell us the meaning of a negative result, and it doesn’t tell us how often the test is useful.  For example, a mammography with a hit rate of 80% for patients with breast cancer and a false positive rate of 9.6% for healthy patients has the same likelihood ratio as a test with an 8% hit rate and a false positive rate of 0.96%.  Although these two tests have the same likelihood ratio, the first test is more useful in every way – it detects disease more often, and a negative result is stronger evidence of health.

The likelihood ratio for a positive result summarizes the differential pressure of the two conditional probabilities for a positive result, and thus summarizes how much a positive result will slide the prior probability.  Take a probability graph, like this one:

The likelihood ratio of the mammography is what determines the slant of the line.  If the prior probability is 1%, then knowing only the likelihood ratio is enough to determine the posterior probability after a positive result.

But, as you can see from the frequency graph, the likelihood ratio doesn’t tell the whole story – in the frequency graph, the proportions of the bottom bar can stay fixed while the size of the bottom bar changes.  p(blue) increases but p(pearl|blue) doesn’t change, because p(pearl&blue) and p(~pearl&blue) increase by the same factor.  But when you flip the graph to look at p(~blue), the proportions of p(pearl&~blue) and p(~pearl&~blue) do not remain constant.

Of course the likelihood ratio can’t tell the whole story; the likelihood ratio and the prior probability together are only two numbers, while the problem has three degrees of freedom.

Suppose that you apply two tests for breast cancer in succession – say, a standard mammography and also some other test which is independent of mammography.  Since I don’t know of any such test which is independent of mammography, I’ll invent one for the purpose of this problem, and call it the Tams-Braylor Division Test, which checks to see if any cells are dividing more rapidly than other cells.  We’ll suppose that the Tams-Braylor gives a true positive for 90% of patients with breast cancer, and gives a false positive for 5% of patients without cancer.  Let’s say the prior prevalence of breast cancer is 1%.  If a patient gets a positive result on her mammography and her Tams-Braylor, what is the revised probability she has breast cancer?

One way to solve this problem would be to take the revised probability for a positive mammography, which we already calculated as 7.8%, and plug that into the Tams-Braylor test as the new prior probability.  If we do this, we find that the result comes out to 60%.

But this assumes that first we see the positive mammography result, and then the positive result on the Tams-Braylor.  What if first the woman gets a positive result on the Tams-Braylor, followed by a positive result on her mammography?  Intuitively, it seems like it shouldn’t matter.  Does the math check out?

First we’ll administer the Tams-Braylor to a woman with a 1% prior probability of breast cancer.  Then we administer a mammography, which gives 80% true positives and 9.6% false positives, and it also comes out positive.  Lo and behold, the answer is again 60%.
(If it’s not exactly the same, it’s due to rounding error – you can get a more precise calculator, or work out the fractions by hand, and the numbers will be exactly equal.)

An algebraic proof that both strategies are equivalent is left to the reader.  To visualize, imagine that the lower bar of the frequency applet for mammography projects an even lower bar using the probabilities of the Tams-Braylor Test, and that the final lowest bar is the same regardless of the order in which the conditional probabilities are projected.

We might also reason that since the two tests are independent, the probability a woman with breast cancer gets a positive mammography and a positive Tams-Braylor is 90% * 80% = 72%.  And the probability that a woman without breast cancer gets false positives on mammography and Tams-Braylor is 5% * 9.6% = 0.48%.  So if we wrap it all up as a single test with a likelihood ratio of 72%/0.48%, and apply it to a woman with a 1% prior probability of breast cancer, we find once again that the answer is 60%.

Suppose that the prior prevalence of breast cancer in a demographic is 1%.  Suppose that we, as doctors, have a repertoire of three independent tests for breast cancer.  Our first test, test A, a mammography, has a likelihood ratio of 80%/9.6% = 8.33.  The second test, test B, has a likelihood ratio of 18.0 (for example, from 90% versus 5%); and the third test, test C, has a likelihood ratio of 3.5 (which could be from 70% versus 20%, or from 35% versus 10%; it makes no difference).  Suppose a patient gets a positive result on all three tests.  What is the probability the patient has breast cancer?

Here’s a fun trick for simplifying the bookkeeping.  If the prior prevalence of breast cancer in a demographic is 1%, then 1 out of 100 women have breast cancer, and 99 out of 100 women do not have breast cancer.  So if we rewrite the probability of 1% as an odds ratio, the odds are:

1:99

And the likelihood ratios of the three tests A, B, and C are:

8.33:1 = 25:3
18.0:1 = 18:1
 3.5:1 =  7:2

The odds for women with breast cancer who score positive on all three tests, versus women without breast cancer who score positive on all three tests, will equal:

1*25*18*7 : 99*3*1*2 = 3,150:594

To recover the probability from the odds, we just write:

3,150 / (3,150 + 594) = 84%

This always works regardless of how the odds ratios are written; i.e., 8.33:1 is just the same as 25:3 or 75:9.  It doesn’t matter in what order the tests are administered, or in what order the results are computed.  The proof is left as an exercise for the reader.

E. T. Jaynes, in “Probability Theory With Applications in Science and Engineering”, suggests that credibility and evidence should be measured in decibels.

Decibels?

Decibels are used for measuring exponential differences of intensity.  For example, if the sound from an automobile horn carries 10,000 times as much energy (per square meter per second) as the sound from an alarm clock, the automobile horn would be 40 decibels louder.  The sound of a bird singing might carry 1,000 times less energy than an alarm clock, and hence would be 30 decibels softer.  To get the number of decibels, you take the logarithm base 10 and multiply by 10.

decibels = 10 log10 (intensity)    or    intensity = 10^(decibels/10)

Suppose we start with a prior probability of 1% that a woman has breast cancer, corresponding to an odds ratio of 1:99.  And then we administer three tests of likelihood ratios 25:3, 18:1, and 7:2.
You could multiply those numbers… or you could just add their logarithms:

10 log10 (1/99) = -20
10 log10 (25/3) = 9
10 log10 (18/1) = 13
10 log10 (7/2)  = 5

It starts out as fairly unlikely that a woman has breast cancer – our credibility level is at -20 decibels.  Then three test results come in, corresponding to 9, 13, and 5 decibels of evidence.  This raises the credibility level by a total of 27 decibels, meaning that the prior credibility of -20 decibels goes to a posterior credibility of 7 decibels.  So the odds go from 1:99 to 5:1, and the probability goes from 1% to around 83%.

In front of you is a bookbag containing 1,000 poker chips.  I started out with two such bookbags, one containing 700 red and 300 blue chips, the other containing 300 red and 700 blue.  I flipped a fair coin to determine which bookbag to use, so your prior probability that the bookbag in front of you is the red bookbag is 50%.  Now, you sample randomly, with replacement after each chip.  In 12 samples, you get 8 reds and 4 blues.  What is the probability that this is the predominantly red bag?

Just for fun, try and work this one out in your head.  You don’t need to be exact – a rough estimate is good enough.  When you’re ready, continue onward.

According to a study performed by Lawrence Phillips and Ward Edwards in 1966, most people, faced with this problem, give an answer in the range 70% to 80%.  Did you give a substantially higher probability than that?  If you did, congratulations – Ward Edwards wrote that very seldom does a person answer this question properly, even if the person is relatively familiar with Bayesian reasoning.  The correct answer is 97%.

The likelihood ratio for the test result “red chip” is 7/3, while the likelihood ratio for the test result “blue chip” is 3/7.  Therefore a blue chip is exactly the same amount of evidence as a red chip, just in the other direction – a red chip is 3.6 decibels of evidence for the red bag, and a blue chip is -3.6 decibels of evidence.  If you draw one blue chip and one red chip, they cancel out.  So the ratio of red chips to blue chips does not matter; only the excess of red chips over blue chips matters.  There were eight red chips and four blue chips in twelve samples; therefore, four more red chips than blue chips.  Thus the posterior odds will be:

7^4 : 3^4 = 2401:81

which is around 30:1, i.e., around 97%.

The prior credibility starts at 0 decibels and there’s a total of around 14 decibels of evidence, and indeed this corresponds to odds of around 25:1 or around 96%.  Again, there’s some rounding error, but if you performed the operations using exact arithmetic, the results would be identical.

We can now see intuitively that the bookbag problem would have exactly the same answer, obtained in just the same way, if sixteen chips were sampled and we found ten red chips and six blue chips.
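If you want to check all this bookkeeping by machine, a rough Python sketch follows; it runs the three-test example both ways – multiplying odds and adding decibels – and then redoes the bookbag tally:

import math

# three-test example: prior odds 1:99, likelihood ratios 25:3, 18:1, 7:2
num, den = 1, 99
for a, b in [(25, 3), (18, 1), (7, 2)]:
    num, den = num * a, den * b
print(num, ":", den, "=", round(num / (num + den), 3))     # 3150 : 594 = 0.841

# the same computation in decibels of evidence
db = 10 * math.log10(1 / 99) + sum(10 * math.log10(a / b) for a, b in [(25, 3), (18, 1), (7, 2)])
odds = 10 ** (db / 10)
print(round(db, 1), round(odds / (1 + odds), 3))           # ~7.2 decibels, 0.841

# bookbag problem: 8 red and 4 blue chips means 4 excess reds at 7:3 apiece
odds_red_bag = (7 / 3) ** (8 - 4)
print(round(odds_red_bag / (1 + odds_red_bag), 3))         # 0.967, i.e. about 97%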
You are a mechanic for gizmos.  When a gizmo stops working, it is due to a blocked hose 30% of the time.  If a gizmo’s hose is blocked, there is a 45% probability that prodding the gizmo will produce sparks.  If a gizmo’s hose is unblocked, there is only a 5% chance that prodding the gizmo will produce sparks.  A customer brings you a malfunctioning gizmo.  You prod the gizmo and find that it produces sparks.  What is the probability that a spark-producing gizmo has a blocked hose?

What is the sequence of arithmetical operations that you performed to solve this problem?

(45%*30%) / (45%*30% + 5%*70%)

Similarly, to find the chance that a woman with positive mammography has breast cancer, we computed:

p(positive|cancer)*p(cancer) / [p(positive|cancer)*p(cancer) + p(positive|~cancer)*p(~cancer)]

which is
p(positive&cancer) / [p(positive&cancer) + p(positive&~cancer)]
which is
p(positive&cancer) / p(positive)
which is
p(cancer|positive)

The fully general form of this calculation is known as Bayes’ Theorem or Bayes’ Rule:

p(A|X) = p(X|A)*p(A) / [p(X|A)*p(A) + p(X|~A)*p(~A)]

Given some phenomenon A that we want to investigate, and an observation X that is evidence about A – for example, in the previous example, A is breast cancer and X is a positive mammography – Bayes’ Theorem tells us how we should update our probability of A, given the new evidence X.

By this point, Bayes’ Theorem may seem blatantly obvious or even tautological, rather than exciting and new.  If so, this introduction has entirely succeeded in its purpose.

FunFact!
Q.  Who originally discovered Bayes’ Theorem?
A.  The Reverend Thomas Bayes, by far the most enigmatic figure in mathematical history.  Almost nothing is known of Bayes’s life, and very few of his manuscripts survived.  Thomas Bayes was born in 1701 or 1702 to Joshua Bayes and Ann Carpenter, and his date of death is listed as 1761.  The exact date of Thomas Bayes’s birth is not known for certain because Joshua Bayes, though a surprisingly wealthy man, was a member of an unusual, esoteric, and even heretical religious sect, the “Nonconformists”.  The Nonconformists kept their birth registers secret, supposedly from fear of religious discrimination; whatever the reason, no true record exists of Thomas Bayes’s birth.  Thomas Bayes was raised a Nonconformist and was soon promoted into the higher ranks of the Nonconformist theosophers, whence comes the “Reverend” in his name.
In 1742 Bayes was elected a Fellow of the Royal Society of London, the most prestigious scientific body of its day, despite Bayes having published no scientific or mathematical works at that time.  Bayes’s nomination certificate was signed by sponsors including the President and the Secretary of the Society, making his election almost certain.  Even today, however, it remains a mystery why such weighty names sponsored an unknown into the Royal Society.
Bayes’s sole publication during his known lifetime was allegedly a mystical book entitled Divine Benevolence, laying forth the original causation and ultimate purpose of the universe.  The book is commonly attributed to Bayes, though it is said that no author appeared on the title page, and the entire work is sometimes considered to be of dubious provenance.
Most mysterious of all, Bayes’ Theorem itself appears in a Bayes manuscript presented to the Royal Society of London in 1764, three years after Bayes’s supposed death in 1761!
Despite the shocking circumstances of its presentation, Bayes’ Theorem was soon forgotten, and was popularized within the scientific community only by the later efforts of the great mathematician Pierre-Simon Laplace.  Laplace himself is almost as enigmatic as Bayes; we don’t even know whether it was “Pierre” or “Simon” that was his actual first name.
Laplace’s papers are said to have contained a design for an AI capable of predicting all future events, the so-called “Laplacian superintelligence”.  While it is generally believed that Laplace never tried to implement his design, there remains the fact that Laplace presciently fled the guillotine that claimed many of his colleagues during the Reign of Terror.  Even today, physicists sometimes attribute unusual effects to a “Laplacian Operator” intervening in their experiments.In summary, we do not know the real circumstances of Bayes’s birth, the ultimate origins of Bayes’ Theorem, Bayes’s actual year of death, or even whether Bayes ever really died.  Nonetheless “Reverend Thomas Bayes”, whatever his true identity, has the greatest fondness and gratitude of Earth’s scientific community.\n\nSo why is it that some people are so excited about Bayes’ Theorem?\n“Do you believe that a nuclear war will occur in the next 20 years?  If no, why not?”  Since I wanted to use some common answers to this question to make a point about rationality, I went ahead and asked the above question in an IRC channel, #philosophy on EFNet.\nOne EFNetter who answered replied “No” to the above question, but added that he believed biological warfare would wipe out “99.4%” of humanity within the next ten years.  I then asked whether he believed 100% was a possibility.  “No,” he said.  “Why not?”, I asked.  “Because I’m an optimist,” he said.  (Roanoke of #philosophy on EFNet wishes to be credited with this statement, even having been warned that it will not be cast in a complimentary light.  Good for him!)  Another person who answered the above question said that he didn’t expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.”  “But why extend that out for 100 years?”, I asked.  “Pure hope,” was his reply.\nWhat is it exactly that makes these thoughts “irrational” – a poor way of arriving at truth?  There are a number of intuitive replies that can be given to this; for example:  “It is not rational to believe things only because they are comforting.”  Of course it is equally irrational to believe things only because they are discomforting; the second error is less common, but equally irrational.  Other intuitive arguments include the idea that “Whether or not you happen to be an optimist has nothing to do with whether biological warfare wipes out the human species”, or “Pure hope is not evidence about nuclear war because it is not an observation about nuclear war.”\nThere is also a mathematical reply that is precise, exact, and contains all the intuitions as special cases.  This mathematical reply is known as Bayes’ Theorem.\nFor example, the reply “Whether or not you happen to be an optimist has nothing to do with whether biological warfare wipes out the human species” can be translated into the statement:\np(you are currently an optimist | biological war occurs within ten years and wipes out humanity) =p(you are currently an optimist | biological war occurs within ten years and does not wipe out humanity)\nSince the two probabilities for p(X|A) and p(X|~A) are equal, Bayes’ Theorem says that p(A|X) = p(A) ; as we have earlier seen, when the two conditional probabilities are equal, the revised probability equals the prior probability.  
If X and A are unconnected – statistically independent – then finding that X is true cannot be evidence that A is true; observing X does not update our probability for A; saying “X” is not an argument for A.

But suppose you are arguing with someone who is verbally clever and who says something like, “Ah, but since I’m an optimist, I’ll have renewed hope for tomorrow, work a little harder at my dead-end job, pump up the global economy a little, eventually, through the trickle-down effect, sending a few dollars into the pocket of the researcher who ultimately finds a way to stop biological warfare – so you see, the two events are related after all, and I can use one as valid evidence about the other.”  In one sense, this is correct – any correlation, no matter how weak, is fair prey for Bayes’ Theorem; but Bayes’ Theorem distinguishes between weak and strong evidence.  That is, Bayes’ Theorem not only tells us what is and isn’t evidence, it also describes the strength of evidence.  Bayes’ Theorem not only tells us when to revise our probabilities, but how much to revise our probabilities.  A correlation between hope and biological warfare may exist, but it’s a lot weaker than the speaker wants it to be; he is revising his probabilities much too far.

Let’s say you’re a woman who’s just undergone a mammography.  Previously, you figured that you had a very small chance of having breast cancer; we’ll suppose that you read the statistics somewhere and so you know the chance is 1%.  When the positive mammography comes in, your estimated chance should now shift to 7.8%.  There is no room to say something like, “Oh, well, a positive mammography isn’t definite evidence, some healthy women get positive mammographies too.  I don’t want to despair too early, and I’m not going to revise my probability until more evidence comes in.  Why?  Because I’m an optimist.”  And there is similarly no room for saying, “Well, a positive mammography may not be definite evidence, but I’m going to assume the worst until I find otherwise.  Why?  Because I’m a pessimist.”  Your revised probability should go to 7.8%, no more, no less.

Bayes’ Theorem describes what makes something “evidence” and how much evidence it is.  Statistical models are judged by comparison to the Bayesian method because, in statistics, the Bayesian method is as good as it gets – the Bayesian method defines the maximum amount of mileage you can get out of a given piece of evidence, in the same way that thermodynamics defines the maximum amount of work you can get out of a temperature differential.  This is why you hear cognitive scientists talking about Bayesian reasoners.  In cognitive science, Bayesian reasoner is the technically precise codeword that we use to mean rational mind.

There are also a number of general heuristics about human reasoning that you can learn from looking at Bayes’ Theorem.

For example, in many discussions of Bayes’ Theorem, you may hear cognitive psychologists saying that people do not take prior frequencies sufficiently into account, meaning that when people approach a problem where there’s some evidence X indicating that condition A might hold true, they tend to judge A’s likelihood solely by how well the evidence X seems to match A, without taking into account the prior frequency of A.
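To see numerically how much the prior frequency matters, take the same 80%/9.6% mammography and feed it two different base rates – a rough sketch, reusing the same sort of helper as in the earlier sketches:

def posterior(prior, p_pos_given_a=0.80, p_pos_given_not_a=0.096):
    # probability of A given a positive result, from the prior and the two conditionals
    return prior * p_pos_given_a / (prior * p_pos_given_a + (1 - prior) * p_pos_given_not_a)

print(posterior(0.01))   # 1% base rate  -> ~0.078
print(posterior(0.10))   # 10% base rate -> ~0.48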
If you think, for example, that under the mammography example, the woman’s chance of having breast cancer is in the range of 70%-80%, then this kind of reasoning is insensitive to the prior frequency given in the problem; it doesn’t notice whether 1% of women or 10% of women start out having breast cancer.  “Pay more attention to the prior frequency!” is one of the many things that humans need to bear in mind to partially compensate for our built-in inadequacies.\nA related error is to pay too much attention to p(X|A) and not enough to p(X|~A) when determining how much evidence X is for A.  The degree to which a result X is evidence for A depends, not only on the strength of the statement we’d expect to see result X if A were true, but also on the strength of the statement we wouldn’t expect to see result X if A weren’t true.  For example, if it is raining, this very strongly implies the grass is wet – p(wetgrass|rain) ~ 1 – but seeing that the grass is wet doesn’t necessarily mean that it has just rained; perhaps the sprinkler was turned on, or you’re looking at the early morning dew.  Since p(wetgrass|~rain) is substantially greater than zero, p(rain|wetgrass) is substantially less than one.  On the other hand, if the grass was never wet when it wasn’t raining, then knowing that the grass was wet would always show that it was raining, p(rain|wetgrass) ~ 1, even if p(wetgrass|rain) = 50%; that is, even if the grass only got wet 50% of the times it rained.  Evidence is always the result of the differential between the two conditional probabilities.  Strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X.\nThe Bayesian revolution in the sciences is fueled, not only by more and more cognitive scientists suddenly noticing that mental phenomena have Bayesian structure in them; not only by scientists in every field learning to judge their statistical methods by comparison with the Bayesian method; but also by the idea that science itself is a special case of Bayes’ Theorem; experimental evidence is Bayesian evidence.  The Bayesian revolutionaries hold that when you perform an experiment and get evidence that “confirms” or “disconfirms” your theory, this confirmation and disconfirmation is governed by the Bayesian rules.  For example, you have to take into account, not only whether your theory predicts the phenomenon, but whether other possible explanations also predict the phenomenon.  Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism – this is the old philosophy that the Bayesian revolution is currently dethroning.  Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if p(X|A) ~ 1 – if the theory makes a definite prediction – then observing ~X very strongly falsifies A.  On the other hand, if p(X|A) ~ 1, and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that p(X|B) ~ 1, in which case observing X doesn’t favor A over B.  For observing X to definitely confirm A, we would have to know, not that p(X|A) ~ 1, but that p(X|~A) ~ 0, which is something that we can’t know because we can’t range over all possible alternative explanations.
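The same point can be checked in odds form (a sketch with arbitrary numbers, using the odds version of Bayes' Theorem: posterior odds equal prior odds times the likelihood ratio): if some alternative explanation predicts X almost as strongly as A does, then observing X barely moves the odds, no matter how confidently A predicted it.

import math

def update_odds(prior_odds, p_X_given_A, p_X_given_alt):
    # Posterior odds for A over the alternative = prior odds * likelihood ratio.
    return prior_odds * (p_X_given_A / p_X_given_alt)

print(update_odds(1.0, 1.0, 0.99))         # ~1.01 -- X does not favor A over the alternative
print(update_odds(1.0, 1.0, 0.01))         # ~100 -- X strongly favors A
print(10 * math.log10(0.000001 / 0.01))    # ~-40 -- a 1:10000 likelihood ratio, measured in decibels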
For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.\nYou can even formalize Popper’s philosophy mathematically.  The likelihood ratio for X, p(X|A)/p(X|~A), determines how much observing X slides the probability for A; the likelihood ratio is what says how strong X is as evidence.  Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, p(X|~A) – there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.  That’s the hidden gotcha that toppled Newton’s theory of gravity.  So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for confirmatory evidence.\nOn the other hand, if you encounter some piece of evidence Y that is definitely not predicted by your theory, this is enormously strong evidence against your theory.  If p(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal.  For example, if p(Y|A) is 0.0001%, and p(Y|~A) is 1%, then the likelihood ratio p(Y|A)/p(Y|~A) will be 1:10000.  -40 decibels of evidence!  Or flipping the likelihood ratio, if p(Y|A) is very small, then p(Y|~A)/p(Y|A) will be very large, meaning that observing Y greatly favors ~A over A.  Falsification is much stronger than confirmation.  This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X.  This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.\nSimilarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ~X would have disconfirmed the theory to some extent.  If you try to interpret both X and ~X as “confirming” the theory, the Bayesian rules say this is impossible!  To increase the probability of a theory you must expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory.  On the other hand, Popper’s idea that there is only falsification and no such thing as confirmation turns out to be incorrect.  Bayes’ Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.\nSo we find that many phenomena in the cognitive sciences, plus the statistical methods used by scientists, plus the scientific method itself, are all turning out to be special cases of Bayes’ Theorem.  Hence the Bayesian revolution.\n\nFun Fact!\nQ.  Are there any limits to the power of Bayes’ Theorem?\nA.  According to legend, one who fully grasped Bayes’ Theorem would gain the ability to create and physically enter an alternate universe using only off-the-shelf equipment and a short computer program.
One who fully grasps Bayes’ Theorem, yet remains in our universe to aid others, is known as a Bayesattva.\n\np(A|X) = p(X|A)*p(A) / [p(X|A)*p(A) + p(X|~A)*p(~A)]\nWhy wait so long to introduce Bayes’ Theorem, instead of just showing it at the beginning?  Well… because I’ve tried that before; and what happens, in my experience, is that people get all tangled up in trying to apply Bayes’ Theorem as a set of poorly grounded mental rules; instead of the Theorem helping, it becomes one more thing to juggle mentally, so that in addition to trying to remember how many women with breast cancer have positive mammographies, the reader is also trying to remember whether it’s p(X|A) in the numerator or p(A|X), and whether a positive mammography result corresponds to A or X, and which side of p(X|A) is the implication, and what the terms are in the denominator, and so on.  In this excruciatingly gentle introduction, I tried to show all the workings of Bayesian reasoning without ever introducing the explicit Theorem as something extra to memorize, hopefully reducing the number of factors the reader needed to mentally juggle.\nEven if you happen to be one of the fortunate people who can easily grasp and apply abstract theorems, the mental-juggling problem is still something to bear in mind if you ever need to explain Bayesian reasoning to someone else.\nIf you do find yourself losing track, my advice is to forget Bayes’ Theorem as an equation and think about the graph.  p(A) and p(~A) are at the top.  p(X|A) and p(X|~A) are the projection factors.  p(X&A) and p(X&~A) are at the bottom.  And p(A|X) equals the proportion of p(X&A) within p(X&A)+p(X&~A).  The graph isn’t shown here – but can you see it in your mind?\nAnd if thinking about the graph doesn’t work, I suggest forgetting about Bayes’ Theorem entirely – just try to work out the specific problem in gizmos, hoses, and sparks, or whatever it is.\n\nHaving introduced Bayes’ Theorem explicitly, we can explicitly discuss its components.\np(A|X) = p(X|A)*p(A) / [p(X|A)*p(A) + p(X|~A)*p(~A)]\nWe’ll start with p(A|X).  If you ever find yourself getting confused about what’s A and what’s X in Bayes’ Theorem, start with p(A|X) on the left side of the equation; that’s the simplest part to interpret.  A is the thing we want to know about.  X is how we’re observing it; X is the evidence we’re using to make inferences about A.  Remember that for every expression p(Q|P), we want to know about the probability for Q given P, the degree to which P implies Q – a more sensible notation, which it is now too late to adopt, would be p(Q<-P).\np(Q|P) is closely related to p(Q&P), but they are not identical.  Expressed as a probability or a fraction, p(Q&P) is the proportion of things that have property Q and property P within all things; i.e., the proportion of “women with breast cancer and a positive mammography” within the group of all women.  If the total number of women is 10,000, and 80 women have breast cancer and a positive mammography, then p(Q&P) is 80/10,000 = 0.8%.  You might say that the absolute quantity, 80, is being normalized to a probability relative to the group of all women.  Or to make it clearer, suppose that there’s a group of 641 women with breast cancer and a positive mammography within a total sample group of 89,031 women.  641 is the absolute quantity.
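Normalizing an absolute quantity to a probability is just a division by the size of the whole group; a quick check with the two samples just mentioned:

# 80 women with breast cancer and a positive mammography, out of 10,000 women:
print(80 / 10000)              # 0.008, i.e. 0.8%
# 641 such women out of a sample of 89,031:
print(round(641 / 89031, 4))   # 0.0072, i.e. about 0.72%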
If you pick out a random woman from the entire sample, then the probability you’ll pick a woman with breast cancer and a positive mammography is p(Q&P), or 0.72% (in this example).\nOn the other hand, p(Q|P) is the proportion of things that have property Q and property P within all things that have P; i.e., the proportion of women with breast cancer and a positive mammography within the group of all women with positive mammographies.  If there are 641 women with breast cancer and positive mammographies, 7915 women with positive mammographies, and 89,031 women, then p(Q&P) is the probability of getting one of those 641 women if you’re picking at random from the entire group of 89,031, while p(Q|P) is the probability of getting one of those 641 women if you’re picking at random from the smaller group of 7915.\nIn a sense, p(Q|P) really means p(Q&P|P), but specifying the extra P all the time would be redundant.  You already know it has property P, so the property you’re investigating is Q – even though you’re looking at the size of group Q&P within group P, not the size of group Q within group P (which would be nonsense).  This is what it means to take the property on the right-hand side as given; it means you know you’re working only within the group of things that have property P.  When you constrict your focus of attention to see only this smaller group, many other probabilities change.  If you’re taking P as given, then p(Q&P) equals just p(Q) – at least, relative to the group P.  The old p(Q), the frequency of “things that have property Q within the entire sample”, is revised to the new frequency of “things that have property Q within the subsample of things that have property P”.  If P is given, if P is our entire world, then looking for Q&P is the same as looking for just Q.\nIf you constrict your focus of attention to only the population of eggs that are painted blue, then suddenly “the probability that an egg contains a pearl” becomes a different number; this proportion is different for the population of blue eggs than the population of all eggs.  The given, the property that constricts our focus of attention, is always on the right side of p(Q|P); the P becomes our world, the entire thing we see, and on the other side of the “given” P always has probability 1 – that is what it means to take P as given.  So p(Q|P) means “If P has probability 1, what is the probability of Q?” or “If we constrict our attention to only things or events where P is true, what is the probability of Q?”  Q, on the other side of the given, is not certain – its probability may be 10% or 90% or any other number.  So when you use Bayes’ Theorem, and you write the part on the left side as p(A|X) – how to update the probability of A after seeing X, the new probability of A given that we know X, the degree to which X implies A – you can tell that X is always the observation or the evidence, and A is the property being investigated, the thing you want to know about.\n\nThe right side of Bayes’ Theorem is derived from the left side through these steps:\np(A|X) = p(A|X)\np(A|X) = p(X&A) / p(X)\np(A|X) = p(X&A) / [p(X&A) + p(X&~A)]\np(A|X) = p(X|A)*p(A) / [p(X|A)*p(A) + p(X|~A)*p(~A)]\nThe first step, p(A|X) to p(X&A)/p(X), may look like a tautology.  The actual math performed is different, though.  p(A|X) is a single number, the normalized probability or frequency of A within the subgroup X.
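The difference in what is actually computed can be seen with the sample sizes from the text; a short Python sketch (the variable names are mine): dividing the counts within the subgroup of positive mammographies gives the same number as dividing the two normalized frequencies p(Q&P) and p(P).

cancer_and_positive = 641
positive = 7915
total_women = 89031

from_counts = cancer_and_positive / positive   # p(Q|P) computed directly within the subgroup
from_frequencies = (cancer_and_positive / total_women) / (positive / total_women)   # p(Q&P) / p(P)
print(round(from_counts, 4), round(from_frequencies, 4))   # both ~0.081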
p(X&A)/p(X) are usually the percentage frequencies of X&A and X within the entire sample, but the calculation also works if X&A and X are absolute numbers of people, events, or things.  p(cancer|positive) is a single percentage/frequency/probability, always between 0 and 1.  (positive&cancer)/(positive) can be measured either in probabilities, such as 0.008/0.103, or it might be expressed in groups of women, for example 194/2494.  As long as both the numerator and denominator are measured in the same units, it should make no difference.\nGoing from p(X) in the denominator to p(X&A)+p(X&~A) is a very straightforward step whose main purpose is as a stepping stone to the last equation.  However, one common arithmetical mistake in Bayesian calculations is to divide p(X&A) by p(X&~A) , instead of dividing p(X&A) by [p(X&A) + p(X&~A)] .  For example, someone doing the breast cancer calculation tries to get the posterior probability by performing the math operation 80 / 950, instead of 80 / (80 + 950).  I like to think of this as a rose-flowers error.  Sometimes if you show young children a picture with eight roses and two tulips, they’ll say that the picture contains more roses than flowers.  (Technically, this would be called a class inclusion error.)  You have to add the roses and the tulips to get the number of flowers , which you need to find the proportion of roses within the flowers.  You can’t find the proportion of roses in the tulips, or the proportion of tulips in the roses.  When you look at the graph, the bottom bar consists of all the patients with positive results.  That’s what the doctor sees – a patient with a positive result.  The question then becomes whether this is a healthy patient with a positive result, or a cancerous patient with a positive result.  To figure the odds of that, you have to look at the proportion of cancerous patients with positive results within all patients who have positive results, because again, “a patient with a positive result” is what you actually see.  You can’t divide 80 by 950 because that would mean you were trying to find the proportion of cancerous patients with positive results within the group of healthy patients with positive results; it’s like asking how many of the tulips are roses, instead of asking how many of the flowers are roses.  Imagine using the same method to find the proportion of healthy patients.  You would divide 950 by 80 and find that 1,187% of the patients were healthy.  Or to be exact, you would find that 1,187% of cancerous patients with positive results were healthy patients with positive results.\nThe last step in deriving Bayes’ Theorem is going from p(X&A) to p(X|A)*p(A) , in both the numerator and the denominator, and from p(X&~A) to p(X|~A)*p(~A) , in the denominator.\nWhy?  Well, one answer is because p(X|A), p(X|~A), and p(A) correspond to the initial information given in all the story problems.  But why were the story problems written that way?\nBecause in many cases, p(X|A), p(X|~A), and p(A) are what we actually know; and this in turn happens because p(X|A) and p(X|~A) are often the quantities that directly describe causal relations, with the other quantities derived from them and p(A) as statistical relations.   For example, p(X|A), the implication from A to X, where A is what we want to know and X is our way of observing it, corresponds to the implication from a woman having breast cancer to a positive mammography.  
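A small sketch of that construction (using the 1%, 80%, and 9.6% figures from the mammography problem): starting from p(A), p(X|A), and p(X|~A), everything else on the right side of the Theorem can be rebuilt, and the denominator has to be the whole bottom bar, not just one of its pieces.

p_A = 0.01            # p(cancer)
p_X_given_A = 0.80    # p(positive | cancer)
p_X_given_nA = 0.096  # p(positive | ~cancer)

p_X_and_A = p_X_given_A * p_A              # numerator
p_X_and_nA = p_X_given_nA * (1 - p_A)
p_X = p_X_and_A + p_X_and_nA               # correct denominator: all patients with positive results

print(round(p_X_and_A / p_X, 3))           # 0.078 -- roses within flowers
print(round(p_X_and_A / p_X_and_nA, 3))    # 0.084 -- the rose-flowers error: roses divided by tulips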
This is not just a statistical implication but a direct causal relation; a woman gets a positive mammography because she has breast cancer.  The mammography is designed to detect breast cancer, and it is a fact about the physical process of the mammography exam that it has an 80% probability of detecting breast cancer.  As long as the design of the mammography machine stays constant, p(X|A) will stay at 80%, even if p(A) changes – for example, if we screen a group of women with other risk factors, so that the prior frequency of women with breast cancer is 10% instead of 1%.  In this case, p(X&A) will change along with p(A), and so will p(X), p(A|X), and so on; but p(X|A) stays at 80%, because that’s a fact about the mammography exam itself.  (Though you do need to test this statement before relying on it; it’s possible that the mammography exam might work better on some forms of breast cancer than others.)  p(X|A) is one of the simple facts from which complex facts like p(X&A) are constructed; p(X|A) is an elementary causal relation within a complex system, and it has a direct physical interpretation.  This is why Bayes’ Theorem has the form it does; it’s not for solving math brainteasers, but for reasoning about the physical universe.\nOnce the derivation is finished, all the implications on the right side of the equation are of the form p(X|A) or p(X|~A), while the implication on the left side is p(A|X).  As long as you remember this and you get the rest of the equation right, it shouldn’t matter whether you happened to start out with p(A|X) or p(X|A) on the left side of the equation, as long as the rules are applied consistently – if you started out with the direction of implication p(X|A) on the left side of the equation, you would need to end up with the direction p(A|X) on the right side of the equation.  This, of course, is just changing the variable labels; the point is to remember the symmetry, in order to remember the structure of Bayes’ Theorem.\nThe symmetry arises because the elementary causal relations are generally implications from facts to observations, i.e., from breast cancer to positive mammography.  The elementary steps in reasoning are generally implications from observations to facts, i.e., from a positive mammography to breast cancer.  The left side of Bayes’ Theorem is an elementary inferential step from the observation of positive mammography to the conclusion of an increased probability of breast cancer.  Implication is written right-to-left, so we write p(cancer|positive) on the left side of the equation.  The right side of Bayes’ Theorem describes the elementary causal steps – for example, from breast cancer to a positive mammography – and so the implications on the right side of Bayes’ Theorem take the form p(positive|cancer) or p(positive|~cancer).\nAnd that’s Bayes’ Theorem.  Rational inference on the left end, physical causality on the right end; an equation with mind on one side and reality on the other.  Remember how the scientific method turned out to be a special case of Bayes’ Theorem?
If you wanted to put it poetically, you could say that Bayes’ Theorem binds reasoning into the physical universe.\nOkay, we’re done.\n\nReverend Bayes says: You are now an initiate of the Bayesian Conspiracy.\nFurther Reading:\nIf you liked An Intuitive Explanation of Bayesian Reasoning, you may also wish to read A Technical Explanation of Technical Explanation by the same author, which goes into greater detail on the application of Bayescraft to human rationality and the philosophy of science. You may also enjoy the Twelve Virtues of Rationality and The Simple Truth.\nOther authors:\nE. T. Jaynes:  Probability Theory With Applications in Science and Engineering (full text online).  Theory and applications for Bayes’ Theorem and Bayesian reasoning. See also Jaynes’s magnum opus, Probability Theory: The Logic of Science.\nD. Kahneman, P. Slovic and A. Tversky, eds, Judgment under uncertainty: Heuristics and biases.  If it seems to you like human thinking often isn’t Bayesian… you’re not wrong.  This terrifying volume catalogues some of the blatant searing hideous gaping errors that pop up in human cognition. See also this forthcoming book chapter for a summary of some better-known biases.\nBellhouse, D.R.:  The Reverend Thomas Bayes FRS: a Biography to Celebrate the Tercentenary of his Birth.  A more “traditional” account of Bayes’s life.\nGoogle Directory for Bayesian analysis (courtesy of the Open Directory Project).\n\nAbout This Document:\nAn Intuitive Explanation of Bayesian Reasoning is ©2003 by Eliezer S. Yudkowsky.\nBayesApplet is ©2003 by Christian Rovner.  (Email address:  Append “tutopia.com” to “cro1@”).\nLast updated: 2006.06.04\nYudkowsky’s “Intuitive Explanation of Bayesian Reasoning” and Rovner’s “BayesApplet” may both be freely used by any nonprofit organization or educational institution.  No royalties or per-page charges are necessary to reproduce this document as course materials, either in printed form or online.\nPraise, condemnation, and feedback are always welcome.  The web address of this page is http://eyudkowsky.wpengine.com/rational/bayes/ .\nThanks to Eric Mitchell, Chris Rovner, Vlad Tarko, Gordon Worley, and Gregg Young for catching errors in the text.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute. If you’ve found Yudkowsky’s pages on rationality useful, please consider donating to the Machine Intelligence Research Institute.\n\nBibliography:\nBayes, Thomas (1763):  “An essay towards solving a problem in the doctrine of chances.”  Philosophical Transactions of the Royal Society 53: 370-418.\nCasscells, W., Schoenberger, A., and Grayboys, T. (1978):  “Interpretation by physicians of clinical laboratory results.”  N Engl J Med 299: 999-1001.\nDehaene, Stanislas (1997):  The Number Sense: How the Mind Creates Mathematics.  Oxford University Press.\nEddy, David M. (1982):  “Probabilistic reasoning in clinical medicine:  Problems and opportunities.”  In D. Kahneman, P. Slovic, and A. Tversky, eds, Judgment under uncertainty: Heuristics and biases.  Cambridge University Press, Cambridge, UK.\nEdwards, Ward (1982):  “Conservatism in human information processing.”  In D. Kahneman, P. Slovic, and A. Tversky, eds, Judgment under uncertainty: Heuristics and biases.  Cambridge University Press, Cambridge, UK.\nGigerenzer, Gerd and Hoffrage, Ulrich (1995):  “How to improve Bayesian reasoning without instruction: Frequency formats.”  Psychological Review 102: 684-704.\nJaynes, E. T. 
(1996):  Probability Theory With Applications in Science and Engineering.   Posthumous manuscript, placed online.  http://bayes.wustl.edu/etj/science.pdf.html\n\n", "url": "https://www.yudkowsky.net/rational/bayes", "title": "An Intuitive Explanation of Bayes’ Theorem", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T01:30:06+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "7a2a10b53c78939348f6d7554f5c7a2f", "summary": []} -{"text": "The Simple Truth\n\n“I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.”— Danielle Egan (journalist)\nAuthor’s Foreword:\nThis essay is meant to restore a naive view of truth.\nSomeone says to you: “My miracle snake oil can rid you of lung cancer in just three weeks.” You reply: “Didn’t a clinical study show this claim to be untrue?” The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?”\nMany people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of ‘truth’. There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall.\nOften I have seen – especially on Internet mailing lists – that amidst other conversation, someone says “X is true”, and then an argument breaks out over the use of the word ‘true’. This essay is not meant as an encyclopedic reference for that argument. Rather, I hope the arguers will read this essay, and then go back to whatever they were discussing before someone questioned the nature of truth.\nIn this essay I pose questions. If you see what seems like a really obvious answer, it’s probably the answer I intend. The obvious choice isn’t always the best choice, but sometimes, by golly, it is . I don’t stop looking as soon I find an obvious answer, but if I go on looking, and the obvious-seeming answer still seems obvious, I don’t feel guilty about keeping it. Oh, sure, everyone thinks two plus two is four, everyone says two plus two is four, and in the mere mundane drudgery of everyday life everyone behaves as if two plus two is four, but what does two plus two really, ultimately equal? As near as I can figure, four. It’s still four even if I intone the question in a solemn, portentous tone of voice. Too simple, you say? Maybe, on this occasion, life doesn’t need to be complicated. Wouldn’t that be refreshing?\nIf you are one of those fortunate folk to whom the question seems trivial at the outset, I hope it still seems trivial at the finish. If you find yourself stumped by deep and meaningful questions, remember that if you know exactly how a system works, and could build one yourself out of buckets and pebbles, it should not be a mystery to you.\nIf confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally.\n\nImagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep. My sheep sleep in an enclosure, a fold; and the enclosure is high enough to guard my sheep from wolves that roam by night. Each day I must release my sheep from the fold to pasture and graze; each night I must find my sheep and return them to the fold. 
If a sheep is left outside, I will find its body the next morning, killed and half-eaten by wolves. But it is so discouraging, to scour the fields for hours, looking for one last sheep, when I know that probably all the sheep are in the fold. Sometimes I give up early, and usually I get away with it; but around a tenth of the time there is a dead sheep the next morning.\nIf only there were some way to divine whether sheep are still grazing, without the inconvenience of looking! I try several methods: I toss the divination sticks of my tribe; I train my psychic powers to locate sheep through clairvoyance; I search carefully for reasons to believe all the sheep are in the fold. It makes no difference. Around a tenth of the times I turn in early, I find a dead sheep the next morning. Perhaps I realize that my methods aren’t working, and perhaps I carefully excuse each failure; but my dilemma is still the same. I can spend an hour searching every possible nook and cranny, when most of the time there are no remaining sheep; or I can go to sleep early and lose, on the average, one-tenth of a sheep.\nLate one afternoon I feel especially tired. I toss the divination sticks and the divination sticks say that all the sheep have returned. I visualize each nook and cranny, and I don’t imagine scrying any sheep. I’m still not confident enough, so I look inside the fold and it seems like there are a lot of sheep, and I review my earlier efforts and decide that I was especially diligent. This dissipates my anxiety, and I go to sleep. The next morning I discover two dead sheep. Something inside me snaps, and I begin thinking creatively.\nThat day, loud hammering noises come from the gate of the sheepfold’s enclosure.\nThe next morning, I open the gate of the enclosure only a little way, and as each sheep passes out of the enclosure, I drop a pebble into a bucket nailed up next to the door. In the afternoon, as each returning sheep passes by, I take one pebble out of the bucket. When there are no pebbles left in the bucket, I can stop searching and turn in for the night. It is a brilliant notion. It will revolutionize shepherding.\nThat was the theory. In practice, it took considerable refinement before the method worked reliably. Several times I searched for hours and didn’t find any sheep, and the next morning there were no stragglers. On each of these occasions it required deep thought to figure out where my bucket system had failed. On returning from one fruitless search, I thought back and realized that the bucket already contained pebbles when I started; this, it turned out, was a bad idea. Another time I randomly tossed pebbles into the bucket, to amuse myself, between the morning and the afternoon; this too was a bad idea, as I realized after searching for a few hours. But I practiced my pebblecraft, and became a reasonably proficient pebblecrafter.\nOne afternoon, a man richly attired in white robes, leafy laurels, sandals, and business suit trudges in along the sandy trail that leads to my pastures.\n“Can I help you?” I inquire.\nThe man takes a badge from his coat and flips it open, proving beyond the shadow of a doubt that he is Markos Sophisticus Maximus, a delegate from the Senate of Rum. (One might wonder whether another could steal the badge; but so great is the power of these badges that if any other were to use them, they would in that instant be transformed into Markos.)\n“Call me Mark,” he says. 
“I’m here to confiscate the magic pebbles, in the name of the Senate; artifacts of such great power must not fall into ignorant hands.”\n“That bleedin’ apprentice,” I grouse under my breath, “he’s been yakkin’ to the villagers again.” Then I look at Mark’s stern face, and sigh. “They aren’t magic pebbles,” I say aloud. “Just ordinary stones I picked up from the ground.”\nA flicker of confusion crosses Mark’s face, then he brightens again. “I’m here for the magic bucket!” he declares.\n“It’s not a magic bucket,” I say wearily. “I used to keep dirty socks in it.”\nMark’s face is puzzled. “Then where is the magic?” he demands.\nAn interesting question. “It’s hard to explain,” I say.\nMy current apprentice, Autrey, attracted by the commotion, wanders over and volunteers his explanation: “It’s the level of pebbles in the bucket,” Autrey says. “There’s a magic level of pebbles, and you have to get the level just right, or it doesn’t work. If you throw in more pebbles, or take some out, the bucket won’t be at the magic level anymore. Right now, the magic level is,” Autrey peers into the bucket, “about one-third full.”\n“I see!” Mark says excitedly. From his back pocket Mark takes out his own bucket, and a heap of pebbles. Then he grabs a few handfuls of pebbles, and stuffs them into the bucket. Then Mark looks into the bucket, noting how many pebbles are there. “There we go,” Mark says, “the magic level of this bucket is half full. Like that?”\n“No!” Autrey says sharply. “Half full is not the magic level. The magic level is about one-third. Half full is definitely unmagic. Furthermore, you’re using the wrong bucket.”\nMark turns to me, puzzled. “I thought you said the bucket wasn’t magic?”\n“It’s not,” I say. A sheep passes out through the gate, and I toss another pebble into the bucket. “Besides, I’m watching the sheep. Talk to Autrey.”\nMark dubiously eyes the pebble I tossed in, but decides to temporarily shelve the question. Mark turns to Autrey and draws himself up haughtily. “It’s a free country,” Mark says, “under the benevolent dictatorship of the Senate, of course. I can drop whichever pebbles I like into whatever bucket I like.”\nAutrey considers this. “No you can’t,” he says finally, “there won’t be any magic.”\n“Look,” says Mark patiently, “I watched you carefully. You looked in your bucket, checked the level of pebbles, and called that the magic level. I did exactly the same thing.”\n“That’s not how it works,” says Autrey.\n“Oh, I see,” says Mark, “It’s not the level of pebbles in my bucket that’s magic, it’s the level of pebbles in your bucket. Is that what you claim? What makes your bucket so much better than mine, huh?”\n“Well,” says Autrey, “if we were to empty your bucket, and then pour all the pebbles from my bucket into your bucket, then your bucket would have the magic level. There’s also a procedure we can use to check if your bucket has the magic level, if we know that my bucket has the magic level; we call that a bucket compare operation.”\nAnother sheep passes, and I toss in another pebble.\n“He just tossed in another pebble!” Mark says. “And I suppose you claim the new level is also magic? I could toss pebbles into your bucket until the level was the same as mine, and then our buckets would agree. You’re just comparing my bucket to your bucket to determine whether you think the level is ‘magic’ or not. Well, I think your bucket isn’t magic, because it doesn’t have the same level of pebbles as mine. 
So there!”\n“Wait,” says Autrey, “you don’t understand -”\n“By ‘magic level’, you mean simply the level of pebbles in your own bucket. And when I say ‘magic level’, I mean the level of pebbles in my bucket. Thus you look at my bucket and say it ’isn’t magic’, but the word ‘magic’ means different things to different people. You need to specify whose magic it is. You should say that my bucket doesn’t have ’Autrey’s magic level’, and I say that your bucket doesn’t have ’Mark’s magic level’. That way, the apparent contradiction goes away.”\n“But -” says Autrey helplessly.\n“Different people can have different buckets with different levels of pebbles, which proves this business about ‘magic’ is completely arbitrary and subjective.”\n“Mark,” I say, “did anyone tell you what these pebbles do? ”\n“ Do? ” says Mark. “I thought they were just magic.”\n“If the pebbles didn’t do anything,” says Autrey, “our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work.”\n“What’s your auditor’s name?”\n“Darwin,” says Autrey.\n“Hm,” says Mark. “Charles does have a reputation as a strict auditor. So do the pebbles bless the flocks, and cause the increase of sheep?”\n“No,” I say. “The virtue of the pebbles is this; if we look into the bucket and see the bucket is empty of pebbles, we know the pastures are likewise empty of sheep. If we do not use the bucket, we must search and search until dark, lest one last sheep remain. Or if we stop our work early, then sometimes the next morning we find a dead sheep, for the wolves savage any sheep left outside. If we look in the bucket, we know when all the sheep are home, and we can retire without fear.”\nMark considers this. “That sounds rather implausible,” he says eventually. “Did you consider using divination sticks? Divination sticks are infallible, or at least, anyone who says they are fallible is burned at the stake. This is an extremely painful way to die; it follows that divination sticks are infallible.”\n“You’re welcome to use divination sticks if you like,” I say.\n“Oh, good heavens, of course not,” says Mark. “They work infallibly, with absolute perfection on every occasion, as befits such blessed instruments; but what if there were a dead sheep the next morning? I only use the divination sticks when there is no possibility of their being proven wrong. Otherwise I might be burned alive. So how does your magic bucket work?”\nHow does the bucket work…? I’d better start with the simplest possible case. “Well,” I say, “suppose the pastures are empty, and the bucket isn’t empty. Then we’ll waste hours looking for a sheep that isn’t there. And if there are sheep in the pastures, but the bucket is empty, then Autrey and I will turn in too early, and we’ll find dead sheep the next morning. So an empty bucket is magical if and only if the pastures are empty -”\n“Hold on,” says Autrey. “That sounds like a vacuous tautology to me. Aren’t an empty bucket and empty pastures obviously the same thing?”\n“It’s not vacuous,” I say. “Here’s an analogy: The logician Alfred Tarski once said that the assertion ‘Snow is white’ is true if and only if snow is white. If you can understand that, you should be able to see why an empty bucket is magical if and only if the pastures are empty of sheep.”\n“Hold on,” says Mark. “These are buckets . They don’t have anything to do with sheep . Buckets and sheep are obviously completely different. 
There’s no way the sheep can ever interact with the bucket.”\n“Then where do you think the magic comes from?” inquires Autrey.\nMark considers. “You said you could compare two buckets to check if they had the same level… I can see how buckets can interact with buckets. Maybe when you get a large collection of buckets, and they all have the same level, that’s what generates the magic. I’ll call that the coherentist theory of magic buckets.”\n“Interesting,” says Autrey. “I know that my master is working on a system with multiple buckets – he says it might work better because of ‘redundancy’ and ‘error correction’. That sounds like coherentism to me.”\n“They’re not quite the same -” I start to say.\n“Let’s test the coherentism theory of magic,” says Autrey. “I can see you’ve got five more buckets in your back pocket. I’ll hand you the bucket we’re using, and then you can fill up your other buckets to the same level -”\nMark recoils in horror. “Stop! These buckets have been passed down in my family for generations, and they’ve always had the same level! If I accept your bucket, my bucket collection will become less coherent, and the magic will go away!”\n“But your current buckets don’t have anything to do with the sheep!” protests Autrey.\nMark looks exasperated. “Look, I’ve explained before, there’s obviously no way that sheep can interact with buckets. Buckets can only interact with other buckets.”\n“I toss in a pebble whenever a sheep passes,” I point out.\n“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”\n“It’s an interaction between the sheep and the pebbles,” I reply.\n“No, it’s an interaction between the pebbles and you ,” Mark says. “The magic doesn’t come from the sheep, it comes from you . Mere sheep are obviously nonmagical. The magic has to come from somewhere , on the way to the bucket.”\nI point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”\nMark furrows his brow. “I don’t quite follow you… is the cloth magical?”\nI shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket. Afterward you can compare the bucket to other buckets, and so on.”\n“I still don’t get it,” Mark says. “You can’t fit a sheep into a bucket. Only pebbles go in buckets, and it’s obvious that pebbles only interact with other pebbles.”\n“The sheep interact with things that interact with pebbles…” I search for an analogy. “Suppose you look down at your shoelaces. A photon leaves the Sun; then travels down through Earth’s atmosphere; then bounces off your shoelaces; then passes through the pupil of your eye; then strikes the retina; then is absorbed by a rod or a cone. The photon’s energy makes the attached neuron fire, which causes other neurons to fire. A neural activation pattern in your visual cortex can interact with your beliefs about your shoelaces, since beliefs about shoelaces also exist in neural substrate. 
If you can understand that, you should be able to see how a passing sheep causes a pebble to enter the bucket.”\n“At exactly which point in the process does the pebble become magic?” says Mark.\n“It… um…” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”\nMark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”\n“Ha!” Autrey says, scorn rich in his voice. “Mere wishful thinking! Not all pebbles are created equal. The pebbles in your bucket are not magical. They’re only lumps of stone!”\nMark’s face turns stern. “Now,” he cries, “now you see the danger of the road you walk! Once you say that some people’s pebbles are magical and some are not, your pride will consume you! You will think yourself superior to all others, and so fall! Many throughout history have tortured and murdered because they thought their own pebbles supreme!” A tinge of condescension enters Mark’s voice. “Worshipping a level of pebbles as ‘magical’ implies that there’s an absolute pebble level in a Supreme Bucket. Nobody believes in a Supreme Bucket these days.”\n“One,” I say. “Sheep are not absolute pebbles. Two, I don’t think my bucket actually contains the sheep. Three, I don’t worship my bucket level as perfect – I adjust it sometimes – and I do that because I care about the sheep.”\n“Besides,” says Autrey, “someone who believes that possessing absolute pebbles would license torture and murder, is making a mistake that has nothing to do with buckets. You’re solving the wrong problem.”\nMark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.”\n“Um… yes?” says Autrey.\n“It doesn’t bother you that Joseph Stalin believed that snow is white?”\n“Um… no?” says Autrey.\nMark gazes incredulously at Autrey, and finally shrugs. “Let’s suppose, purely for the sake of argument, that your pebbles are magical and mine aren’t. Can you tell me what the difference is?”\n“My pebbles represent the sheep!” Autrey says triumphantly. “ Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”\n“Ah!” Mark says. “Special causal powers, instead of magic.”\n“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.”\n“What kind of special powers does the bucket have?” asks Mark.\n“Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked – when the bucket is empty, it means the pastures are empty.”\n“Where did you find this bucket?” says Mark. 
“And how did you realize it had an about-ness relation to the pastures?”\n“It’s an ordinary bucket ,” I say. “I used to climb trees with it… I don’t think this question needs to be difficult.”\n“I’m talking to Autrey,” says Mark.\n“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains.\nAutrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.\n“You have to throw in a pebble every time a sheep leaves through the gate?” says Mark. “Take out a pebble every time a sheep returns?”\nAutrey nods. “Yeah.”\n“That must be really hard,” Mark says sympathetically.\nAutrey brightens, soaking up Mark’s sympathy like rain. “Exactly!” says Autrey. “It’s extremely hard on your emotions. When the bucket has held its level for a while, you… tend to get attached to that level.”\nA sheep passes then, leaving through the gate. Autrey sees; he stoops, picks up a pebble, holds it aloft in the air. “Behold!” Autrey proclaims. “A sheep has passed! I must now toss a pebble into this bucket, my dear bucket, and destroy that fond level which has held for so long – ” Another sheep passes. Autrey, caught up in his drama, misses it; so I plunk a pebble into the bucket. Autrey is still speaking: ” – for that is the supreme test of the shepherd, to throw in the pebble, be it ever so agonizing, be the old level ever so precious. Indeed, only the best of shepherds can meet a requirement so stern -“\n“Autrey,” I say, “if you want to be a great shepherd someday, learn to shut up and throw in the pebble. No fuss. No drama. Just do it.”\n“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”\nAutrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.”\n“Can I look at a pebble?” says Mark.\n“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.\nAutrey looks at me, puzzled. “Didn’t you just mess it up?”\nI shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”\n“But -” Autrey says.\n“I taught you everything you know, but I haven’t taught you everything I know,” I say.\nMark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”\n“A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”\n“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.\nAutrey laughs. “Now you’re just being gratuitously evil.”\nI nod, for this is indeed the case.\n“Is that really going to work, though?” says Autrey.\nI nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. 
Even if Mark’s hand is imbued with the elan vital that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.\nMark is looking at his hand, a bit unnerved. “So… the pebble has intentionality again, now?”\n“Yep,” I say. “Don’t add any more pebbles to your hand, or throw away the one you have, or you’ll break the ritual.”\nMark nods solemnly. Then he resumes inspecting the pebble. “I understand now how your flocks grew so great,” Mark says. “With the power of this bucket, you could keep in tossing pebbles, and the sheep would keep returning from the fields. You could start with just a few sheep, let them leave, then fill the bucket to the brim before they returned. And if tending so many sheep grew tedious, you could let them all leave, then empty almost all the pebbles from the bucket, so that only a few returned… increasing the flocks again when it came time for shearing… dear heavens, man! Do you realize the sheer power of this ritual you’ve discovered? I can only imagine the implications; humankind might leap ahead a decade – no, a century!”\n“It doesn’t work that way,” I say. “If you add a pebble when a sheep hasn’t left, or remove a pebble when a sheep hasn’t come in, that breaks the ritual. The power does not linger in the pebbles, but vanishes all at once, like a soap bubble popping.”\nMark’s face is terribly disappointed. “Are you sure?”\nI nod. “I tried that and it didn’t work.”\nMark sighs heavily. “And this… math … seemed so powerful and useful until then… Oh, well. So much for human progress.”\n“Mark, it was a brilliant idea,” Autrey says encouragingly. “The notion didn’t occur to me, and yet it’s so obvious… it would save an enormous amount of effort… there must be a way to salvage your plan! We could try different buckets, looking for one that would keep the magical pow- the intentionality in the pebbles, even without the ritual. Or try other pebbles. Maybe our pebbles just have the wrong properties to have inherent intentionality. What if we tried it using stones carved to resemble tiny sheep? Or just write ‘sheep’ on the pebbles; that might be enough.”\n“Not going to work,” I predict dryly.\nAutrey continues. “Maybe we need organic pebbles, instead of silicon pebbles… or maybe we need to use expensive gemstones. The price of gemstones doubles every eighteen months, so you could buy a handful of cheap gemstones now, and wait, and in twenty years they’d be really expensive.”\n“You tried adding pebbles to create more sheep, and it didn’t work?” Mark asks me. “What exactly did you do?”\n“I took a handful of dollar bills. Then I hid the dollar bills under a fold of my blanket, one by one; each time I hid another bill, I took another paperclip from a box, making a small heap. I was careful not to keep track in my head, so that all I knew was that there were ‘many’ dollar bills, and ‘many’ paperclips. Then when all the bills were hidden under my blanket, I added a single additional paperclip to the heap, the equivalent of tossing an extra pebble into the bucket. Then I started taking dollar bills from under the fold, and putting the paperclips back into the box. When I finished, a single paperclip was left over.”\n“What does that result mean?” asks Autrey.\n“It means the trick didn’t work. 
Once I broke ritual by that single misstep, the power did not linger, but vanished instantly; the heap of paperclips and the pile of dollar bills no longer went empty at the same time.”\n“You actually tried this?” asks Mark.\n“Yes,” I say, “I actually performed the experiment, to verify that the outcome matched my theoretical prediction. I have a sentimental fondness for the scientific method, even when it seems absurd. Besides, what if I’d been wrong?”\n“If it had worked,” says Mark, “you would have been guilty of counterfeiting! Imagine if everyone did that; the economy would collapse! Everyone would have billions of dollars of currency, yet there would be nothing for money to buy!”\n“Not at all,” I reply. “By that same logic whereby adding another paperclip to the heap creates another dollar bill, creating another dollar bill would create an additional dollar’s worth of goods and services.”\nMark shakes his head. “Counterfeiting is still a crime… You should not have tried.”\n“I was reasonably confident I would fail.”\n“Aha!” says Mark. “You expected to fail! You didn’t believe you could do it!”\n“Indeed,” I admit. “You have guessed my expectations with stunning accuracy.”\n“Well, that’s the problem,” Mark says briskly. “Magic is fueled by belief and willpower. If you don’t believe you can do it, you can’t. You need to change your belief about the experimental result; that will change the result itself.”\n“Funny,” I say nostalgically, “that’s what Autrey said when I told him about the pebble-and-bucket method. That it was too ridiculous for him to believe, so it wouldn’t work for him.”\n“How did you persuade him?” inquires Mark.\n“I told him to shut up and follow instructions,” I say, “and when the method worked, Autrey started believing in it.”\nMark frowns, puzzled. “That makes no sense. It doesn’t resolve the essential chicken-and-egg dilemma.”\n“Sure it does. The bucket method works whether or not you believe in it.”\n“That’s absurd! ” sputters Mark. “I don’t believe in magic that works whether or not you believe in it!”\n“I said that too,” chimes in Autrey. “Apparently I was wrong.”\nMark screws up his face in concentration. “But… if you didn’t believe in magic that works whether or not you believe in it, then why did the bucket method work when you didn’t believe in it? Did you believe in magic that works whether or not you believe in it whether or not you believe in magic that works whether or not you believe in it?”\n“I don’t… think so…” says Autrey doubtfully.\n“Then if you didn’t believe in magic that works whether or not you… hold on a second, I need to work this out on paper and pencil -” Mark scribbles frantically, looks skeptically at the result, turns the piece of paper upside down, then gives up. “Never mind,” says Mark. “Magic is difficult enough for me to comprehend; metamagic is out of my depth.”\n“Mark, I don’t think you understand the art of bucketcraft,” I say. “It’s not about using pebbles to control sheep. It’s about making sheep control pebbles. In this art, it is not necessary to begin by believing the art will work. Rather, first the art works, then one comes to believe that it works.”\n“Or so you believe,” says Mark.\n“So I believe,” I reply, “ because it happens to be a fact. The correspondence between reality and my beliefs comes from reality controlling my beliefs, not the other way around.”\nAnother sheep passes, causing me to toss in another pebble.\n“Ah! Now we come to the root of the problem,” says Mark. “What’s this so-called ‘reality’ business? 
I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.”\nI pause. “Well…” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”\nMark snorts. “I don’t even know why I bother listening to this obvious nonsense. Whatever you say about this so-called ‘reality’, it is merely another belief. Even your belief that reality precedes your beliefs is a belief. It follows, as a logical inevitability, that reality does not exist; only beliefs exist.”\n“Hold on,” says Autrey, “could you repeat that last part? You lost me with that sharp swerve there in the middle.”\n“No matter what you say about reality, it’s just another belief,” explains Mark. “It follows with crushing necessity that there is no reality, only beliefs.”\n“I see,” I say. “The same way that no matter what you eat, you need to eat it with your mouth. It follows that there is no food, only mouths.”\n“Precisely,” says Mark. “Everything that you eat has to be in your mouth. How can there be food that exists outside your mouth? The thought is nonsense, proving that ‘food’ is an incoherent notion. That’s why we’re all starving to death; there’s no food.”\nAutrey looks down at his stomach. “But I’m not starving to death.”\n“ Aha! ” shouts Mark triumphantly. “And how did you utter that very objection? With your mouth , my friend! With your mouth ! What better demonstration could you ask that there is no food?”\n“ What’s this about starvation? ” demands a harsh, rasping voice from directly behind us. Autrey and I stay calm, having gone through this before. Mark leaps a foot in the air, startled almost out of his wits.\nInspector Darwin smiles tightly, pleased at achieving surprise, and makes a small tick on his clipboard.\n“Just a metaphor!” Mark says quickly. “You don’t need to take away my mouth, or anything like that -”\n“ Why do you need a mouth if there is no food ?” demands Darwin angrily. “ Never mind. I have no time for this foolishness . I am here to inspect the sheep. ”\n“Flocks thriving, sir,” I say. “No dead sheep since January.”\n“ Excellent. I award you 0.12 units of fitness . Now what is this person doing here? Is he a necessary part of the operations? ”\n“As far as I can see, he would be of more use to the human species if hung off a hot-air balloon as ballast,” I say.\n“Ouch,” says Autrey mildly.\n“I do not care about the human species . Let him speak for himself .”\nMark draws himself up haughtily. “This mere shepherd ,” he says, gesturing at me, “has claimed that there is such a thing as reality. This offends me, for I know with deep and abiding certainty that there is no truth. 
The concept of ‘truth’ is merely a stratagem for people to impose their own beliefs on others. Every culture has a different ‘truth’, and no culture’s ‘truth’ is superior to any other. This that I have said holds at all times in all places, and I insist that you agree.”\n“Hold on a second,” says Autrey. “If nothing is true, why should I believe you when you say that nothing is true?”\n“I didn’t say that nothing is true -” says Mark.\n“Yes, you did,” interjects Autrey, “I heard you.”\n“- I said that ‘truth’ is an excuse used by some cultures to enforce their beliefs on others. So when you say something is ‘true’, you mean only that it would be advantageous to your own social group to have it believed.”\n“And this that you have said,” I say, “is it true?”\n“Absolutely, positively true!” says Mark emphatically. “People create their own realities.”\n“Hold on,” says Autrey, sounding puzzled again, “saying that people create their own realities is, logically, a completely separate issue from saying that there is no truth, a state of affairs I cannot even imagine coherently, perhaps because you still have not explained how exactly it is supposed to work -”\n“There you go again,” says Mark exasperatedly, “trying to apply your Western concepts of logic, rationality, reason, coherence, and self-consistency.”\n“Great,” mutters Autrey, “now I need to add a third subject heading, to keep track of this entirely separate and distinct claim -”\n“It’s not separate,” says Mark. “Look, you’re taking the wrong attitude by treating my statements as hypotheses, and carefully deriving their consequences. You need to think of them as fully general excuses, which I apply when anyone says something I don’t like. It’s not so much a model of how the universe works, as a “Get Out of Jail Free” card. The key is to apply the excuse selectively. When I say that there is no such thing as truth, that applies only to your claim that the magic bucket works whether or not I believe in it. It does not apply to my claim that there is no such thing as truth.”\n“Um… why not?” inquires Autrey.\nMark heaves a patient sigh. “Autrey, do you think you’re the first person to think of that question? To ask us how our own beliefs can be meaningful if all beliefs are meaningless? That’s the same thing many students say when they encounter this philosophy, which, I’ll have you know, has many adherents and an extensive literature.”\n“So what’s the answer?” says Autrey.\n“We named it the ‘reflexivity problem’,” explains Mark.\n“But what’s the answer?” persists Autrey.\nMark smiles condescendingly. “Believe me, Autrey, you’re not the first person to think of such a simple question. There’s no point in presenting it to us as a triumphant refutation.”\n“But what’s the actual answer?”\n“Now, I’d like to move on to the issue of how logic kills cute baby seals -”\n“You are wasting time,” snaps Inspector Darwin.\n“Not to mention, losing track of sheep,” I say, tossing in another pebble.\nInspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. You say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And you fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs can’t alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. 
Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.”\nWe all pause, considering this.\n“It sounds reasonable…” Mark says finally.\n“There’s a cliff right there,” observes Inspector Darwin.\nAutrey is wearing a look of intense concentration. Finally he shouts: “Wait! If that were true, we would all have long since departed into our own private universes, in which case the other people here are only figments of your imagination – there’s no point in trying to prove anything to us -”\nA long dwindling scream comes from the nearby cliff, followed by a dull and lonely splat. Inspector Darwin flips his clipboard to the page that shows the current gene pool and pencils in a slightly lower frequency for Mark’s alleles.\nAutrey looks slightly sick. “Was that really necessary?”\n“ Necessary? ” says Inspector Darwin, sounding puzzled. “It just happened … I don’t quite understand your question.”\nAutrey and I turn back to our bucket. It’s time to bring in the sheep. You wouldn’t want to forget about that part. Otherwise what would be the point?\n\nThis document is ©2008 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/rational/the-simple-truth/ .\nIf you enjoyed this writing, let your journey continue with An Intuitive Explanation of Bayesian Reasoning . You may also enjoy The Twelve Virtues of Rationality and A Technical Explanation of Technical Explanation", "url": "https://www.yudkowsky.net/rational/the-simple-truth", "title": "The Simple Truth", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T01:20:07+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "91e0a744532a51a5cb2c9a730ed67eba", "summary": []} -{"text": "Cognitive Biases Potentially Affecting Judgment of Global Risks\n\nDraft for Global Catastrophic Risks, Oxford University Press, 2008 . Download as PDF .\nCognitiveBiases-1\n\n\nThis document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/rational/cognitive-biases/ .", "url": "https://www.yudkowsky.net/rational/cognitive-biases", "title": "Cognitive Biases Potentially Affecting Judgment of Global Risks", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T01:16:03+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "d7a44ce4f68f01687208fdc88cceb124", "summary": []} -{"text": "Overcoming Bias\n\nFrom August 2007 through May 2009, I blogged daily on the topic of human rationality at the econblog Overcoming Bias by Robin Hanson, getting around a quarter-million monthly pageviews. 
This then forked off the community blog Less Wrong , and I moved my old posts there as well for seed content (with URL forwarding, so don’t worry if links are to overcomingbias.com).\nI suspected I could write faster by requiring myself to publish daily. The experiment was a smashing success.\nCurrently the majority of all my writing is on Less Wrong. To be notified when and if this material is compacted into e-books (or even physical books), subscribe to this announcement list .\nThe material is heavily interdependent, and reading in chronological order may prove helpful:\nAndrew Hay’s autogenerated index of all Yudkowsky posts in chronological order.\nTo see how interdependent it is, try looking over this graph of the dependency structure:\nAndrew Hay’s graphical visualization of major dependencies between Yudkowsky posts.\nTo read organized collections of posts, use the Sequences on the Less Wrong wiki.\n\nThis document is ©2008 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/rational/overcoming-bias/ .", "url": "https://www.yudkowsky.net/rational/overcoming-bias", "title": "Overcoming Bias", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2020-09-04T01:09:16+00:00", "paged_url": "https://yudkowsky.net/feed?paged=2", "authors": ["Eliezer S. Yudkowsky"], "id": "7aead1220fb3b2ed81a9fa959403d532", "summary": []} -{"text": "The Sword of Good\n\n…fragments of a novel that would never be written…\n\nCaptain Selena, late of the pirate ship Nemesis, quietly extended the very tip of her blade around the corner, staring at the tiny reflection on the metal.  At once, but still silently, she pulled back the sword; and with her other hand made a complex gesture.\nThe translation spell told Hirou that the handsigns meant:  “Orcs.  Seven.”\nDolf looked at Hirou.  “My Prince,” the wizard signed, “do not waste yourself against mundane opponents.  Do not draw the Sword of Good as yet.  Leave these to Selena.”\nHirou’s mouth was very dry.  He didn’t know if the translation spell could understand the difference between wanting to talk and wanting to make gestures; and so Hirou simply nodded.\nNot for the first time, the thought occurred to Hirou that if he’d actually known he was going to be transported into a magical universe, informed he was the long-lost heir to the Throne of Bronze, handed the legendary Sword of Good, and told to fight evil, he would have spent less time reading fantasy novels.  Joined the army, maybe.  Taken fencing lessons, at least.  If there was one thing that didn’t prepare you for fantasy real life, it was sitting at home reading fantasy fiction.\nDolf and Selena were looking at Hirou, as if waiting for something more.\nOh.  That’s right.  
I’m the prince.\nHirou raised a finger and pointed it around the corner, trying to indicate that they should go ahead –\nWith a sudden burst of motion Selena plunged around the corner, Dolf following hard on her heels, and Hirou, startled and hardly thinking, moving after.\nThere was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.\nSelena already held her sword in her right hand, and her whip in her left.  She leaped forward and howled, a wordless cry that harmonized oddly with the battle roar of the orcs; and in almost the first instant of the clash, one of the orc-heads separated from its body and flew through the air, trailing foul-smelling black blood.\nHirou breathed evenly, trying to still his trembling.  The Sword of Good gave a tiny soft growl at his side (a sound that only he could hear) as Selena slashed her blade across another orc’s face, giving rise to a whistling howl.  Still he kept the Sword sheathed.  You are not to waste yourself against mundane opponents…  Even now the wizard was eyeing him closely, as if expecting him to defy orders and plunge into battle himself.\nA small part of him, the part that wasn’t totally terrified by the battle, was flattered that Dolf thought so highly of him.  It was all Hirou could do not to turn and bolt; he was tensing his legs as though exerting a constant muscular effort to keep them in the same place.\nThe orc-bodies were piling up around Selena, the whip blinding or tripping or yanking, her blade ending life.  It might have taken hours, or seconds, before a huge blow split the last orc’s head all the way down the middle.\nShe stood there, blood-spattered and panting heavily, waiting as though daring the bodies to ever move again; then her face relaxed, and she gave a light laugh, and stooped to wipe her blade on the black orc-leather.\n“You’re hurt!” Hirou blurted suddenly.  Red was soaking through the leather on Selena’s left arm.\nSelena glanced downward.  “A scratch.”\n“You cannot assume that,” rumbled the wizard.  “Their blades may be poisoned.”  Dolf stepped forward and brushed Selena’s arm briefly with the staff.\n“Oh!” Selena said, her face surprised.  “It’s -“\nBut Dolf was already moving past her, to look at the gate the orcs had guarded, and the stairway leading upward.  “I believe,” he said in a quiet voice, “that there is a dark magus upstairs.”\n“A magus!” Selena said.  “Here?”\n“A magus,” Hirou echoed.  He swallowed hard; he knew what that meant.\nDolf only glanced at Selena.  “Do as I taught you: drop your weapons, sit in the corner, and clear your mind.  Now,” as Selena seemed about to protest.  “An ordinary warrior is only a liability, in a battle of wills; a weak point to be defended, a piece to be turned against its player.”\nSelena looked at Hirou.  Hirou nodded.\nAnd Selena sheathed her sword, dropped it and the whip, unbuckled the harness that held her daggers, and sat down in the corner of the room and began chanting softly to herself.\nDolf spared her only a glance.  
“And now,” said the wizard in a low tone, “my Prince, you may enter the battle.”\nThough most of Hirou’s mind was whited-out by terror, there was a remnant that seemed to see and follow the pattern, like reciting memorized lines in a play; and that remnant knew that Hirou’s part was to draw the Sword of Good.\nThe ancient metal whispered out of its scabbard.  As Hirou drew the Sword it began wailing, a small thin shriek that Hirou knew only he could hear.  The scream seemed to come from an infinitely narrow line running straight down the center of the Sword.  The sound had a quality that forced away attention, as though your eye were looking too close to the sun.  As though, if you listened too hard, you would – you would lose –\nDolf strode around the fallen orcs and their assorted body parts.  Hirou followed, breathing evenly; the Sword informed his hand to grip it high and across his chest.\n“Who are we fighting?”  Hirou was surprised at how neutral his voice sounded.\nA note of condemnation entered Dolf’s voice.  “A false wizard, this.  Not born to the Art, nor trained in the Halls.  Its gift comes to it by a higher master, by necromancy and potions…  But fear not, my Prince.  I shall prevent its will from reaching Selena and smother its other magics; and your Sword will sweep aside its defenses like fallen leaves.”\nThrough the door they swept, and mounted the stairs of the tower.  Dolf was breathing heavier, now, his face belying the effort of warding off some pressing will.  Hirou felt nothing, except perhaps a note of crispness in the air, as the Sword in his hand enforced an edict against certain specific types of delusion.\nThen they were standing at the highest level of the tower, the end of the stairs, before one small wooden door.\n“I’ll enter first,” Dolf signed, “and you follow as fast as you can, and strike as quickly as may be done.  Be careful not to strike me, my Prince.  The Sword of Good may strengthen your hand, but not guide your steps – it will strike me as easily as the foe, if you happen to turn it in my direction.”\nHirou nodded.  The air of neutrality was wearing away, and the acrid tang of adrenaline was entering his mouth.\n“Three,” signed the wizard, “two, one -“\nDolf’s oaken staff crashed against the door, blasting it off the hinges in a flare of light and Dolf was racing into the room and Hirou was following him and the figure in stained brown robes was spinning its staff forward and a wall of flames swept out –\nHirou flinched and gave a small shriek, but the flames washed over him ineffectively before his feet could even stumble.  Averted by the Sword.  Dolf also was untouched – the defenses of a wizard were nearly impossible to break, Dolf had said; some wizards spent hours every day building them higher.  There was only one known weapon that could kill a wizard in a single blow, and that was –\nAm I really going to do this?\nBut the Sword was already swinging forward in Hirou’s hand.\nAnd the blade bounced off the air around the stained brown robes, with a sudden shower of orange sparks.\nCrap, Hirou had time to think.\nAnd then the false wizard’s staff was sweeping toward him (metal it was, not wood).\nBut the Sword in his hand moved to parry it, and there was another shower of sparks.\n“Keep attacking!” Dolf shouted.  “You chipped his sorcery!  Keep fighting!“\nHirou gasped for breath and began to chop away with the Sword as though cutting wood, sending bits and pieces of broken magic everywhere.  
There was little force in the blows except when the Sword moved to parry the staff; the rest was speed and repetition.\nThen the scarred face beneath the hood gave a sudden shriek, as the Sword lightly scored over the dark flesh.\nIs the shield down – ?\nBefore Hirou could even complete the thought, his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart.\nThere were no last words, not even a brief sigh.  The false wizard’s eyes widened, and then the robes just – fell over.\nHirou fell to his knees.\n“Your highness!“\n“I’m all right,” Hirou choked out.  Nausea competed with adrenaline for control of his existence, and lack of oxygen, and sharp and dull pains from his overexercised hand and arm.\nDolf’s staff brushed him, and the pain and nausea faded.\nThat only made it worse.  It removed the distractions.\nThe wizard was still looking at him, eyes flicking between Hirou and the sword.  “Wielding the Sword of Good did not – hurt you – did it, your highness?”\nThere was alarm in Dolf’s voice, as well there might have been.  The Sword of Good, according to Dolf, would kill the unworthy with the lightest touch, as of a single finger on the blade.  It killed nine out of ten would-be wielders, and in ordinary times the Imperial Family was not allowed to even try.  It had been prophesied that Hirou would wield the Sword, and yet…\n“Dolf,” Hirou said hoarsely, “why did the Sword bounce off his shields?  You said it would cut through magic with a single blow.”\nDolf seemed uneasy.  “It has been centuries since the last wielder held the Sword of Good, noble Prince; perhaps not all the stories are true.  To cut through a wizardly shield with a score of blows is still a very great power.”\n“No,” Hirou said.  He hesitated, then:  “I’m not wielding the Sword at full strength.  I can feel it.”\nIt seems… disappointed… in me.\nDolf nodded.  “The Sword of Good,” he quoted softly, “contains the essence of that which empowers a hero; the truth which only heroes can face.  My Prince… I have been reluctant to say this, but you have not been acting heroic.”  There was a peculiar gentleness on Dolf’s face that softened the impact of the words.  “But it will come with time.  Of that I am certain.  It is written in the royal blood of your forefathers.  You were raised in another place, but you are the heir of Bronze -“\nHirou retched, then swallowed hard, and hard again.  With a sudden flash of horror he knew – and he knew just how unheroic it was – that he was about to throw up on the corpse.\n\nTheir horses sauntered through the streets of the city – the capital of a whole province, it was, which meant perhaps a square mile enclosed by wooden walls, with the occasional two-story building.  Hirou kept his eyes moving, watching for possible ambushes – not that he really thought he had a chance of spotting one, if there was one.  But it was his best guess at how a hero would act.  What would Aragorn do? – that had been the refrain of his thoughts, of late.  Was the lady carrying a clay pot on each shoulder a threat?  Was the legless beggar, watching them with incurious eyes, a spy?\nThere was an excited buzz of conversation in the streets; from the snatches that were audible, Hirou gleaned that a military outpost of the Empire had been overrun by orcs.  The Empire was trying to play it down (said the overheard voices) but rumor had it a major disaster for the planned invasion campaign.\nHirou glanced over at Dolf and Selena.  
Neither seemed to be paying any particular attention to the matter.\nThey cantered on for a short while longer, and finally Dolf drew rein.  Selena at once followed, and after a moment’s reaction time, so did Hirou.\n“Here,” Dolf rumbled.\nHirou looked at the building on their right.  There was a huge painted board in front, showing a mouth being crammed with a turkey leg larger than itself.  The signs scratched below, the translation spell informed him, meant “INN OF EXTREMELY TASTY FOOD.”\nOne nice thing about this world:  If they don’t want you to know, they just keep quiet; and if they want you to know, they tell you straight out.\nHirou didn’t say it out loud, though.  Aragorn, descendant of Elendil and heir to the throne of Gondor, wouldn’t have said it.\nWas that part of what empowered a hero?  That solemnity – or maybe just taking things seriously?  Hirou didn’t know.  But there was no point in taking chances.  The Sword hadn’t killed him yet, but neither had it fully unlocked in his hand.\nThe innkeeper’s eyes went wide at the sight of Dolf’s staff, and they were swiftly ushered into a private side room with a basket of candied fruits already waiting.  Selena had a sugared orange slice in her mouth almost as quickly as she sat down, and sighed in bliss; even Dolf took a handful of nuts.\nHirou, with a private sigh, took an apple slice lightly dusted in a spice he didn’t recognize.  Just the fact that it was spiced probably made it one of the most expensive and luxurious treats this world had to offer.  He bit, chewed, swallowed.\nGod he missed chocolate.\n“So now what?” Selena said, after she’d eaten half the bowl.\n“Now we wait,” Dolf said.\n“For what?” said Selena.\nDolf looked around; the staff twitched in his hand and shed a brief woody glow.  Even so, the wizard lowered his voice before he spoke.  “This night, an assassin-courier and two hired thugs will come to this very inn, their wagon having broken a wheel on the road.  We must have the message that they carry, for it contains a hint to the location of the Empty Necklace.”\nSelena blinked.  “Fine,” she said.  “I give up.  How could you possibly know that?”\nDolf looked at Hirou, his eyes asking permission.\n“Tell her,” Hirou said.  He tried for a note of authority in his voice – a Crown Prince’s decision – but he didn’t know if he’d succeeded.\nDolf nodded, and his gaze shifted back to Selena.  “How much do you know about the Prophecy of Destiny?”\nOne nice thing about this world, they put very clear labels on everything – oh, skip it.\nSelena blinked.  “Not much.  That’s wizard business.  Not much call for it in the pirating profession.”\n“Very true,” Dolf said.  “But what do you know?”\nSelena shrugged.  “A new Lord of Dark shall arise over Evilland, commanding the Bad Races, and attempt to cast the Spell of Infinite Doom.  The Long-Lost Heir, wielding the Sword of Good, shall kick Evil’s ass.  That’s about it.”\n“That’s it?” Hirou said incredulously, then caught himself.  Aragorn wouldn’t have said that.\nSelena smiled at him.  “It was enough for me, your Imperial Highness.  A chance like this only comes along once in a woman’s lifetime.”  She blew him a kiss.\nFor once Hirou wasn’t distracted.  “Master Dolf,” Hirou said, trying to make it a statement instead of a question – “I believe she needs to know more than that.”\n“Yes…” Dolf said.  “Though it is wizard’s business indeed; and only by Imperial command may it go further…”  He drew a breath, lowered his voice further.  
“The original Prophecy of Destiny, Selena, was never written down.  It has been memorized by the Archmagi and passed down by word of mouth through the generations.  It is more – detailed – than you seem to realize.  You are mentioned, pirate princess.  Mentioned by name and your mother’s name, daughter of Elaine.”\nSelena’s mouth lay open, a picture of perfect astonishment.  “Ah…” she said.  “Do I die at the end?”\n“No one knows,” Dolf said simply.  “The Prophecy of Destiny is a strange thing, pirate princess; it tells of some events in the smallest detail, omits others that would seem very large.  Told we were, to be on the ship that you attacked; told we were of your name.  The Prophecy of Destiny carries through to the confrontation between the Long-Lost Heir and the Lord of Dark, on the very verge of the casting of the Spell of Infinite Doom.  Then, it says, the Long-Lost Heir shall Choose between Good and Bad.  And there – there, of all places – the foretelling ends.”\n“Huh,” Selena said.  She tapped her cheek.  “I somehow suspect, Master Wizard, that you wouldn’t tell me – or his Imperial Highness – if I did die at the end…”  She stared at Dolf, and Dolf looked back neutrally.  “So what does the Spell of Infinite Doom do?  Destroy the world?”\n“Few there are who would deliberately destroy the world,” Dolf said.  “Even the Lord of Dark requires lesser beings to rule over.  No, the Spell of Infinite Doom destroys the Equilibrium.  Light and dark, summer and winter, luck and misfortune – the great Balance of Nature will be, not upset, but annihilated utterly; and in it, set in place a single will, the will of the Lord of Dark.  And he shall rule, not only the people, but the very fabric of the World itself, until the end of days.”\n“Huh,” Selena said again.  Her eyes flicked to Hirou.  “And how are you leaning on that Choice between Good and Bad?”\n“Good,” Hirou said instantly.\n“Even if the Lord of Dark offered you the number two position as the master of the universe -”\n“Good.”\n“You’re not even thinking about it!”\n“It’s not exactly a difficult question!” said Hirou.  “Calling it ‘the Choice between Good and Bad’ kind of gives away the answer.”\nSelena was trying not to smile.  “You’ve never been tempted by anything?”\n“It’s not a matter of temptation!” Hirou said.  “It’s…” he trailed off for a moment.  It wasn’t that he couldn’t find the words.  It was that the concepts didn’t exist in this world.  What he wanted to say was that he had a pretty good idea what sort of behavior got you listed as a villain, in the great TV Tropes wiki of the universe; and he’d had a worried eye on his own character sheet since the day he’d realized what he’d gotten himself into; and he absolutely positively wasn’t going to go Dark Messiah, Knight Templar, Well Intentioned Extremist, or for that matter Lawful Stupid.\n“It must be that the Lord of Dark will find something to offer you,” Selena said.  Her eyes were serious, now.  “Otherwise it won’t be much of a Choice between Good and Bad.”\n“Fine by me,” Hirou said with some acerbity.  It wasn’t the questioning of his honor that disturbed him, so much as the idea of missing a choice that obvious.  How could anyone not know what their character sheet would say about that?\n“What if the Lord of Dark had me prisoner, and threatened to kill me unless you -”\n“Good.”\nSelena opened her mouth, then closed it again.  Sudden hurt showed in her eyes.\n“Oh come on!” Hirou exclaimed.  
He was too shocked, in that brief critical moment, even to think of smoothing it over.  “Have some common sense, Selena!  The whole world?“\nSelena smiled, a strange true smile tinged with sorrow.  “So this is the one who can touch the Sword of Good…  You will be a great Emperor someday, your Imperial Highness, a very great Emperor.  And you will see fit to reward me with a court title, and I will be Lady Selena, and none shall dare speak of the days when I was pirate and outlaw.  Maybe some nights you shall have me grace your bedchamber for old times’ sake, and maybe not.  That is enough.  More than I have a right to ask –  It was a foolish thought.”\n“I -”  An abrupt pain caught at Hirou’s heart, which might have been for the sheer unfairness.  “Think it through, Selena!  Even if I did care about you more than anything, it would still be a stupid choice!  Let the Lord of Dark complete the Spell of Infinite Doom?  You might wish you had died!”\n“I understand,” Selena said, still with that strange sad smile.  “Your reasoning is exactly correct, your Imperial Highness.  I am not questioning you at all.  I am only observing that you do not love me.”\nLater that night, as with soft footsteps they padded toward the room where the assassin-courier and his two companions slept, Hirou held the Sword in his hand and stared at the central ridge of the blade.  The endless wail still arose from it, from the infinitely thin line through the center.  Hirou had been getting used to the sound, over time, which made it ever harder to focus his attention on it.\nDo I get any points for that, Sword?  For what I said to Selena, even though I may have lost her?\nThe wail seemed only to diminish slightly, or maybe it was only Hirou’s attention wandering away.\nIt can’t be that a hero is someone who would choose one person over the world!  Not literally the whole world!  …can it?\nThe sound softened further, as if that infinitely thin line were growing more distant.\nI wouldn’t be glad to sacrifice her!  It would hurt!  But I put myself on the line too!  Isn’t that what heroism is all about?  Sacrificing yourself and your own desires for the good of the world?\nWhat is the truth that only heroes can face, if not that?\nHirou stared intently at the Sword, as if demanding an answer; and then became aware that his attention had moved away, once again, from that silent scream.\nAnd the three of them stood before the doorway.\nSelena took a small vial from off her harness, and dripped small droplets of oil onto the hinges of the door.  She was no master thief, but had a quietly professional grasp of the basics.  Quietly and slowly the door opened.  Selena went in first, and Dolf followed her, and then Hirou silently brought up the rear, Sword held in guard position.\nThe assassin-courier had a thin, pointed beard, and wore a light chainshirt even in his sleep.  His two escorts had an unshaven, unsavory look, and it was obvious from the smell of the room that they had not bathed.  The three of them were laid out on a line on as many beds.  Selena had a long thin poniard already in her hand, and plunged that needle straight through the left eyelid of the first thug, swift as a sword-strike on the downward plunge, stopping abruptly in mid-deathblow lest she strike the skull on the other side and make a sound.  
She went around the beds and repeated the silent kill there on the other thug, as Dolf quietly moved to each of the four corners of the room in turn, while Hirou blocked the exit.\nThen, with a knife held just above the courier’s throat, she spoke in a whisper.\n“Don’t move,” Selena whispered, “or I’ll slit your throat before you can scream.”\nThe courier’s eyes flew open, and he drew a sudden breath, but stayed quiet.\n“It may or may not matter to you,” Selena said, low and harsh, “but you’ve been working for the Lord of Dark, in case you didn’t know.  Now tell us the message that you carry.”\n“Help!  Thieves!” cried the courier – in a small, soft voice that no one could possibly hear outside the room.\nDolf’s gaze lay intent upon the courier’s throat.\n“You see how it is,” said Selena.  “So you can tell me the message right now – and the wizard here will know if you lie, I do assure you.  Or you can tell us the message… later.  Choose.”\n“Drown in a cesspool!” softly yelled the courier.\n“What frightens you?” inquired Selena softly.  “Skinning?  Castration?”  Watching his face, the while.  “Blinding?  Crippling?  Or maybe -“\nThe courier spat at her.  Selena moved quickly, but the spittle still struck her on the cheek.  She didn’t take her blade from his throat, or her other blade from his crotch.\n“You’ll regret that,” she said in a voice that brought a sudden chill to Hirou’s blood.  Her hands whitened on her blades.\nHirou suddenly had a sense of impending disaster, as if events in the room were about to spiral out of control.  He opened his mouth, then closed it again – he couldn’t think of a single thing to say that wouldn’t interfere with the interrogation.\nDolf spoke, a quieter version of his usual rumble.  “It seems you’re failing to impress him.”  Dolf took a step closer, and locked eyes with the courier.  “How’s this for a threat, Dark’s dog?”\nSuddenly the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see.  The courier screamed, and the sound came out as a small, thin, pathetic wail.\nDolf stepped back.  “That’s a threat,” he said in Selena’s general direction, and smiled one of his rare grins.\n“The city of Silantra!” gasped the courier.  “I was to tell a man in black, who would call himself Alek, at the crossroads of Thu, to go to the city of Silantra, and investigate the temple ruins!  That’s all I know!  I swear!”\nSelena looked inquiringly at Dolf, and Dolf nodded.\nThey scattered a few gold coins on the floor, to pay for the cleanup of the three corpses, and left at once while the cover of night still held.\n\nThe palace of the Lord of Dark seemed as deserted as the open desert beneath the moon, or some far-below cave in the bowels of the earth.  The floors and walls had been carefully carved and polished into inhuman curves, and decorated in colors that threatened to melt a human’s eyes.  By no five-fingered hands had this place been made.  And though the four of them had been creeping through the corridors at the cautious speed of a dungeon crawl, so far not a single trap or ambush had been sprung.\nAlek was poking and prodding the door ahead with his staff.  It was a mighty and ornamented door, carved with inhuman faces set in indecipherable expressions, and Dolf had said there was something interesting beyond.\n“Nothing,” Alek said, and shook his head in bemusement.  “No traps on this one either.  All those intricate carvings and not a single mechanism hidden behind them, so far as I can tell.”  He sighed.  
“I’m beginning to feel useless.  You three didn’t really need a thief on this trip.”\nHirou looked up from where he was staring into the Sword’s blade, and half-smiled.  “We don’t know what isn’t trapped.  If we didn’t have a thief on this trip, we’d still have to check doors and floors.  We’d just be doing it much more slowly.  No, you’ve already saved the Forces of Good a good deal of time, Alek.”\nAlek blinked.  “That’s… an odd way of looking at it… but you’re right.  Thank you, highness.”  Alek’s usual cheerful grin returned, and he stepped back and took his thieves’ staff from off his back.  Manipulating a lever at the base, he caused the staff’s clawed tip to close around the door-handle; he twisted, then pushed.\nThe door swung open.\n“Ewwwww,” Alek and Selena said in unison.\nBefore them, in the floor, was a vast pit of worms, writhing over one another in a light coating of slime.  Next to the pit was a glass cage of worms, these motionless and rotting; and wires of red metal ran from the glass cage to the ceiling.  The room smelled of cinnamon and decay.\n“Dolf?” Hirou said.  “What are we looking at?”\n“A Wormarium…”  Dolf blinked, and swallowed.  “I have… heard of this.  That any wizard, even the Lord of Dark, would sink so low -”  Dolf swallowed again. “The Lord of Dark is draining the life force of the worms in order to sustain himself.  He need not eat or drink, he will not age, he is cut off from the cycles of his own flesh.  The ordinary decay of his body, is transferred to the worms; and the life of the worms -“\n“Ewwwwww,” Selena and Alek said again.\n“Shall we destroy it?” Hirou asked.\n“The transfer cables are inactive…” muttered Dolf.  “Of course.  The Lord of Dark does not expect to need this once he completes the Spell of Infinite Doom.  Or perhaps he thinks it might interfere – well.  It matters not.  I think he shall not notice what we do here.”  Dolf grounded his staff, and a look of concentration briefly flashed across his face.\nThen a sudden blaze of green incandescence burst forth from the pit and the cage –\nAlek convulsively yanked the door shut using the thieves’ staff.  “Gah!” he said, then lowered his voice.  “Warn a guy when you’re about to do that, Master Wizard!  I thought we’d triggered something.”\n“Our work here is done,” Hirou said – the end of the statement turning up only slightly in a questioning inflection.\nDolf nodded.\n“Do you sense anything else interesting enough to warrant our attention?  Any other potential resources we should try to deny our enemy, before the battle begins?”\nDolf shook his head.\nHirou took a deep breath.  He’d played out this scenario in his head so many times over and over that the reality felt more like a relief than anything else.  “Then it’s time.”\nThey retraced their steps away from the Wormarium, returning to the central corridor they had explored earlier.  Alek again took the lead, and they slowly, slowly walked down the long black metallic floor.\nAfter a long walk, the corridor widened out into a huge vestibule that for once did not insult the human eye.  Floor laid with rectangular stones, walls hung with tapestries of pleasant color and disturbing subjects.  On the left wall, an orc cradled the bloody body of a smaller orc, above a heap of bloody and slashed human bodies; other orcs gazed at the scene intently.  All of their expressions were inhuman, and indecipherable.  
On the right wall, a grey-robed figure with human hands visible, but face concealed by a solid metal mask, stood as though in blessing over a field of green plants with twisted stalks.\nIn front of them was a huge door fit for a city gate, inlaid with gold and gems that could have purchased a whole province.  Even Hirou, who came from a wealthier plane of existence, was impressed.\n“Bloody hell,” Alek said under his voice, very softly, staring at the rectangular floorstones in their neatly tiled pattern.  “I hate this sort of thing.”\nStep by step they walked across the floor, Alek pressing hard with the thieves’ staff on every floorstone for thirty full seconds before continuing onward.\nIt was on almost the last step before the door that the stone suddenly slid away with a huge shriek – not the stone Alek had just pressed down with his staff, but the stone before that, where Alek had stood.\nWith a choked yell, the thief plummeted and vanished.\n“Alek!” Selena screamed, and ran forward heedless.  Hirou began to follow, then, with coldly bitter determination, checked himself.\nSelena looked down into the gap in the floor where Alek had vanished.\nShe choked.  “Alek!”  Then, as if gone mad, she leaned over the gap and began to reach down.\nA premonition prickled at Hirou, and with sudden desperation he leaped forward and yanked Selena back from where she was leaning.  With a shriek and echoing boom the stone surged back into place, almost crushing Selena’s outstretched hand.\n“No!” Selena cried.  Tears were already rolling down her cheek.  “Hirou, please!  We have to get to him!”\n“Your highness, you mustn’t -” came Dolf’s rumble.\nThe cold bitterness, already in Hirou, turned to sudden rage and self-loathing.  As had happened once before, the terrible wail from the center of the Sword seemed to grow louder, to fill his mind; heavier than a mountain and more corrosive than a flood, a refusal-to-accept that would blast anything in its pathway – but still, somehow, essentially moral in nature, more than pure destruction or simple entropy –\nHirou’s Sword lashed out as though it were a part of him, and smashed down upon the stone.\nAnd the stone shattered in the same instant, as though every part of it had been unbound from itself; it fell into pebbles, and the pebbles fell into dust, and the dust turned to smoke and billowed upward.\nAnd the smoke cleared, and showed Alek above a bed of worms – some crushed by Alek’s fall, some already beginning to writhe over his form.\nAlek wasn’t moving, he wasn’t breathing.  The worm-slime glistened on his skin.\nAnd then there was another groan of machinery, and Alek’s body and the worms began to move out of their sight, as a new pit of worms moved into place below the floor.\n“No!” Selena screamed, an awful, heartwrenching plea that broke and shattered in her lips.  “Alek!  No!“\nHirou laid his left hand on Selena’s shoulder.  “We must go,” he said.  His voice sounded empty and emotionless, even to his own ears.  “The Lord of Dark knows we’re here, now.”\nSelena rose from the open pit, hands clenched as if to strike.\n“You don’t respect anything, do you,” she said in a voice colder than the night between worlds.\nI’m sorry.  I know how much Alek meant to you.  You can hit me later, if you like.\n“We have to go,” Hirou repeated.  “We have to hurry.”\nSelena turned away from him, and drew her swords.  “Yes, your Imperial Highness,” she said.  He couldn’t see her face.\nHirou leaped across the gap in the floor to the final stone before the door.  
The wail had not diminished, this time; it was still in his mind.\nWith a terrible black fury and a convulsion like throwing a mountain, Hirou struck, and turned the bright gold door to smoke.  So much for traps.\nAnd the smoke cleared, and they saw the huge throne room, and the throne, and the Lord of Dark.\nA jolt of surprise rippled through Hirou’s mind.  The throne room was not small, but neither was it the hugeness that Hirou had expected; the size of a small house, perhaps.  Scenes of sun and clouds, grass and hills, dotted the walls; and a vast skylight, above, let in a pleasant golden glow.  The Lord of Dark’s throne was laid on a golden platform, and the throne itself was comfortably cushioned and well-designed for the human form; more like an office chair of Hirou’s own world than a formal seat.  Behind the throne lay a shimmering screen of force; and behind the screen of force, an altar; and on the altar, an intricate array of gears turning without axles or wires; and above the gears, a throbbing blaze of light.\nAnd the Lord of Dark sat on the ergonomic throne, garbed in a comfortable cassock of gray silk.\n“Oh, finally,” said the Lord of Dark.  His fingers tapped on the arm of his throne, dit-dit-dit.  “I was starting to wonder if you were going to show up, Hirou.”\nHirou’s mind was scrambled, for a moment, he couldn’t remember his own planned opening line.  “Were you, now?” his mouth said.\n“Come now,” said the Lord of Dark, “don’t tell me you were trying to sneak up on me?  The entire world knows the prophecy about our meeting!  The wielder of the Sword of Good is supposed to arrive before I complete the Spell of Ultimate Power.”  The Lord of Dark waved at the glow above the machinery on the altar behind the throne.  “And that’s just about done.”\nDolf smiled grimly, from where he leaned upon his staff.  “You’re frightened.”\n“Of course I’m nervous!  Gah!”  The Lord of Dark made a convulsive gesture as though to claw at the empty air, radiating frustration.  “Are you done stating the obvious?”\nSelena raised a sword and pointed at the Lord of Dark.  Around her neck, the Glowy Stone flamed brightly where it had been set in the Empty Necklace; no sorcery of mind would touch her with that armor, still less while Dolf stood guard.\n“You killed my only love,” she said in a simple voice, a quiet voice, a voice like death, “and I am going to kill you.”\nThe Lord of Dark looked at her.  A complex expression flashed across his face: condemnation was in it, and pity.\nThen, without a word or a gesture, Alek’s body floated out and came to rest near the altar, behind the screen of force.\n“Alek’s head is still intact,” the Lord of Dark said.  “You may or may not know, Selena, that everything that a human is, resides in a human’s brain.  Your lover still exists, Selena; all that is him, still is there.  He is simply not breathing, at the moment.  After I complete the Spell of Ultimate Power, I’ll have the ability to bring Alek back.  And I will.  Does that work for you?”\nSelena swayed where she stood.  She choked, a single sob escaping her lips.\nHirou felt a sudden chill, remembering a conversation from what seemed like ages ago.  “What if the Lord of Dark had me prisoner, and threatened to kill me unless you -“\nSelena looked like a woman in the midst of tearing out her own heart and crushing it with her own hands.\nHirou dropped his eyes.  He couldn’t look at it.  
He only watched Selena’s hands on the swords, waiting for her decision.\nAnd then Selena straightened, and her swords came level in her hands, pointing at the Lord of Dark; and she said, in a small voice like she was dying,\n“Good.”\nSudden tears came into Hirou’s eyes.\nSlight puzzlement flickered on the Lord of Dark’s face.  “I mean it,” said the Lord of Dark.  “I’m not asking anything from you.  Just telling you that if I win, I’ll bring Alek back.  That’s a promise.”\nYou son of a bitch.  Hirou saw it, then, the cruel subtlety of the Lord of Dark.  Not the obvious threat, demanding Selena to betray her friends in exchange for her lover’s life.  No crude offer that could be refused once and for all.  Just the simple and unconditional promise – and then Selena would have to fight on, knowing with every breath and every blow that if she won, she lost her only love forever.\n“Bastard,” choked Selena.  And she tilted the sword further to point at the Lord of Dark’s head.\nThe Lord of Dark shook his head in annoyance, and then focused his gaze fully upon Hirou.\nHirou tensed.  He’d been wondering, for a long time now, what the Lord of Dark could possibly offer him, what threat he could possibly make, to give Hirou a Choice worth the name.  Hirou had thought about that, trying to put himself in the Lord of Dark’s place; and he thought that the Lord of Dark might indeed offer to make Hirou his number two, or alternatively, if Hirou refused and then lost, keep him alive and torture him for thousands of years.  That was about as forceful as Hirou could imagine making it –\nBut the Lord of Dark had already demonstrated himself more subtle than Hirou’s imagination.\nThe Lord of Dark spoke.  His voice was more formal, now; not calm, but steady.  “All the preliminaries are in place, wielder of the Sword of Good.  There remains only your Choice between Good and Bad.”  The Lord of Dark’s eyes grew intent.  “Hirou, completing the Spell of Ultimate Power requires the sacrifice of a wizard of the highest degree, and also I have a use for the Sword of Good.  In the name of all the darkness that exists in the world, I request that you kill Dolf with the Sword of Good, and then give it to me.”\nThere was a long pause.\n“That’s it?” Hirou said finally.  The whole thing was so insane, after so much waiting and wondering, that he felt a crazy laughter rising up in his own throat.  He swallowed it.  “That’s the awful temptation?  That’s the Choice?  You think I’m going to choose Bad over Good because you asked politely?“\nThe Lord of Dark stared at Hirou as though he were the crazy one.  “The Choice between Good and Bad,” said the Lord of Dark in a slow, careful voice, as though explaining something to a child, “is not a matter of saying ‘Good!’  It is about deciding which is which.”\nDolf uttered a single bark of laughter.  “You’re mad!” his voice boomed.  “Can you truly not know that you are evil?  You, the Lord of Dark?“\n“Names,” said the Lord of Dark quietly.\nHirou was so angry he could hardly speak.  With an icy effort of control he forced himself back to calm, forced his eyes to keep moving.  This could all be a distraction.  “If you’re going to give me some pathetic speech about how good and evil are just different sides of the same coin -“\n“Absolutely not,” said the Lord of Dark at once.  His gaze flicked to Dolf.  “It is the wizards who go about talking of Equilibrium and Balance.  I am pleased to see, Hirou, that you do not agree with them.  
No, Hirou, I am asking you something much simpler.”  His eyes bored into Hirou’s face.  “What wrong have I done?“\nA small note of disorientation rose up in Hirou, like climbing stairs and stepping on what you thought was the last stair, but beneath your foot there was no stair, no floor, nothing…\n“You suck the life from worms,” Selena said coldly.  “I know darkness when I see it.”\nThe Lord of Dark’s gaze scarcely flickered in her direction.  “Be silent, eater of mammals.”\n“You command the Bad Races of Evilland!” roared Dolf.  “You lent them your sorcery, aided them in slaughtering human beings!”\nThe Lord of Dark was watching Hirou carefully as he made reply.  “Human beings first launched an unprovoked attack on this land some three thousand years ago, saying – though it was lies – that the inhabitants ate human flesh.  The records here would have it, and I believe them, that the missing people were in fact being kidnapped and sold by human slave-takers.  Since then, those you call the ‘Bad Races’ have been fighting off repeated attempts at extermination.  Oh, they hate you, of course they do; but they are wise enough to understand that there are a few good humans, even as there is evil among their own kind.  They are friendly enough to me.”\nAn awful fear began to rise up in Hirou –\n“Now it is my turn to make accusation,” said the Lord of Dark.  He stood; anger gathered around him like a cloak, and his voice rang out through the throne room.  “You, Dolf, Archwizard of the fell Empire, I do accuse of commanding and causing to be performed, the murders of Elzhur, Anzha, Stav, Valdil, Emhil, Tohm, Khal, and the magus Mikel.  On the eighth day of the seventh moon of this year you ordained their deaths.  I do not call them innocents.  They bore weapons, they went knowingly to the risk.  But you, Dolf, you who made necessary their sacrifice – you may not be forgiven for the lives you have cut short, and the grief you have given to their families and survivors!  Though this is only the beginning of your long litany of crimes, yet I remember the day that first message came to me -“\n“You are mad,” Selena said with conviction.  “You accuse us of murder for killing orcs?“\nHirou stood frozen.\nThere was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.\nWhy – did I –\nSo what if their skin was moist, and scaly and warted, and unsightly to human eyes?  So what if their blood smelled foul, as Selena poured it forth in rivers?\nWhy – didn’t I –\nHirou’s memory moved forward relentlessly, like waking up from and reviewing some mad dream.\n– his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart –\n“Here is your crime!” roared Dolf.  “You, a human, have betrayed the Empire!  You, a true wizard by birth, have betrayed the Ancient Halls of Wizardry!  You spread sedition and treason, and oppose the authority of the rightful heir to the throne!”\n…why did I think that I had the right to rule over millions of people, without votes or parliaments, because of who my parents were?\nDolf slammed his staff on the ground.  “And above all!  Above all!  That you seek to cast the Spell of Infinite Doom!  
That you, in your lust for power, would destroy the very Equilibrium that holds the world in Balance!”\nBecause Dolf seemed to expect it of me, because no one around me seemed to question that it was a good idea, or even point it out as something to think about –\n“Equilibrium,” hissed the Lord of Dark.  His face twisted.  “Balance.  Is that what the wizards call it, when some live in fine castles and dress in the noblest raiment, while others starve in rags in their huts?  Is that what you call it when some years are of health, and other years plague sweeps the land?  Is that how you wizards, in your lofty towers, justify your refusal to help those in need?  Fool!  There is no Equilibrium!  It is a word that you wizards say at only and exactly those times that you don’t want to bother!  It prevents you from giving food to the hungry, but not from filling your own bellies!  Your friends are good enough to be healed, no threat to the Balance there, but the cripple in the streets must be left to suffer -”\nDolf stepped forward and brushed Selena’s arm briefly with the staff –\n– was the legless beggar, watching them with incurious eyes, a spy?\nWhy hadn’t he thought to ask –\n“ – because you just don’t care!”\nAnd in the stillness of dawning disaster, in the first note of questioning, Hirou thought of something else he had never thought to ask.  Dolf had his sorcerous shields of protection.  Why had Dolf let Alek walk in front?  Dolf was in fact by far the strongest member of their party – why had he let Selena do the fighting?\nBecause Dolf was more important, and if he exposed himself to all the risk every time, he might eventually be injured, Hirou’s logical mind completed the thought.  Lower risk, but higher stakes.  Cold but necessary –\nBut would you, said another part of his mind, would you, Hirou, let your friends walk before you and fight, and occasionally die, if you knew that you yourself were stronger and able to protect them?  Would you be able to stop yourself from stepping in front?\nPerhaps, replied the cold logic.  If the world were at stake.\nPerhaps, echoed the other part of himself, but that is not what was actually happening.\nThat part of him knew, as Selena had known before.\nIt is just that, from the beginning, Dolf never cared in the slightest about Selena’s life.\nHad cared nothing for a mere pirate captain –\nPirate captain?\nHirou’s eyes flicked briefly to Selena.\nShe has attacked ships and sunk ships, she has kidnapped and killed.  All in the name of profit for herself, before ever she met me or tried to save the world.  She killed dozens without a thought, until her own love was lost, and then a single death was suddenly an event of world-shaking significance –\nWhy did I think that was acceptable?\nWhy didn’t I notice?\nAnother memory came to Hirou.\n– the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see.  The courier screamed, and the sound came out as a small, thin, pathetic wail –\nDolf had done that without touching the man, but –\nThreats of death and injury are already torture in themselves, under the Geneva Convention, by the laws of my own world.\nHe’d known something was wrong.  That small note of disquiet in the corner of his mind.  
But he hadn’t said a word out loud, because, well, it would have been awkward.\nI am a fool.\nWorse than a fool.\nWhy didn’t the Sword just kill me?\nAnd the everlasting wail of the Sword of Good burst fully into his consciousness.\nIt was like his mind and self were sucked toward that infinitely thin line running through the center of the Sword, the edge within the blade.  Sucked toward that edge, and cut through.\nCut through and torn wide and forced open –\nA scream ripped from Hirou’s lips.\nHe was starving to death freezing naked in cold night being stabbed beaten raped watching his father daughter lover die hurt hurt hurt die –\n– open to all the darkness that exists in the world –\nHis consciousness shattered into a dozen million fragments, each fragment privy to some private horror; the young girl screaming as her father, face demonic, tore her blouse away; the horror of the innocent condemned as the judge laid down the sentence; the mother holding her son’s hand tightly with tears rolling down her cheeks as his last breath slowly wheezed from his throat –\n– all the darkness that you look away from, the endless scream.\nMake it stop!\nIt might have been Hirou’s thought, or the thought of the man who screamed as his foot was crushed beneath a stone.\nRefuse, reject, change, reality don’t be like this –\nMake it stop!\nIt could have been Hirou or the child in the burning house.\nmake it stop make it stop make it stop MAKE IT STOP MAKE IT STOP I WILL MAKE IT STOP\nIn the throne room of the Lord of Dark, the Sword suddenly blazed up with a shock like a thousand-mile dam breaking, a roaring tsunami of force.  The eyes could not see that power, wavered between detecting it as light or darkness; so that Hirou, grasping the hilt, was the only dark thing left against the brilliance, or the only bright thing haloed against the shadow.\nDolf had been turning toward Hirou with alarm in his face; now his eyes widened, and a sudden gladness lit his countenance.  “You’ve done it!” Dolf cried.  “You have awakened the Sword at last!  Now, my prince, with but a single strike you may -”\nThe Sword, with one smooth sweep, cut through all Dolf’s defenses like water and touched the wizard’s throat; and in the moment of the Sword touching Dolf’s skin, the wizard stopped.  The Sword continued in its motion unabated, and Dolf’s head separated from his body and went rolling across the floor, as something seemed to flow away from the corpse toward the gears above the altar.\nSelena’s cry of horror mingled with the sudden hum of the brightening glow above the gears.\n“Hirou!” she screamed.  “Hirou!  Why?  You said you would be good!”\nThen she turned toward him, and pointed her swords –\nSelena froze in place like a statue, one of her feet suspended in mid-air and mid-run; in the same instant the glowing stone on her necklace shattered.\nHirou’s eyes drifted, ever so slowly it seemed, to the disbelief on Selena’s face.\nA part of him was horrified and saddened, to see her looking at him like that.\nAnd at the same time, it seemed like such a small thing, her horror, his own sadness, compared to even a single parent watching their child die.  Let alone the actual number doing so, right at that moment, elsewhere in the world.\n“Thank you,” said the Lord of Dark softly.\n“Make it stop,” said Hirou’s lips.  There were other thoughts inside him, still being carried out by his brain, but they were dwarfed under that single terrible weight.\nThe Lord of Dark rose from his throne, began to come forward.  
“I must touch the blade.”\nHirou crossed the intervening space in an instant, the Sword moving in a single perfect arc in his hands; it was as though the blade simply materialized in front of the Lord of Dark.\nThe Lord of Dark jerked back.\n“Hurry,” said Hirou’s lips.\n“The Spell of Ultimate Power is already in progress now, and will complete in a few moments.  It can neither be hurried nor delayed,” said the Lord of Dark.  “But before that time, there is one last thing I must do -“\nThe Lord of Dark reached out for the Sword, but his fingers faltered.\n“Must do,” the Lord of Dark repeated to himself; and his fingers reached out, and firmly came to rest on the blade of the Sword of Good.\nThey lingered there for a long moment.\nThen, “Thank you,” said the Lord of Dark.  “That was all.  You can put down the Sword of Good now.  You probably should.”\nHirou dropped the Sword.  In the instant the Sword left his hands it became only another piece of metal, and fell to the ground with a simple clang.\nAnd in the moment that Hirou’s hands left the hilt, he became only another mortal.\nHirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.\nIn a whisper, Hirou said “Thank you -” and paused.\n“My name is Vhazhar.”\n“You didn’t trust yourself,” Hirou whispered.  “That’s why you had to touch the Sword of Good.”\nHirou felt Vhazhar’s nod, more than seeing it.\nThe air was darkening, or rather Hirou’s vision was darkening, but there was something terribly important left to say.  “The Sword only tests good intentions,” Hirou whispered.  “It doesn’t guide your steps.  That which empowers a hero does not make us wise – desperation strengthens your hand, but it strikes with equal force in any direction -“\n“I’ll be careful,” said the Lord of Dark, the one who had mastered and turned back the darkness.  “I won’t trust myself.”\n“You are -” Hirou murmured.  “Than me, you are -“\nI should have known.  I should have known from the beginning.  I was raised in another world.  A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race’s skin shouldn’t matter –\nAnd yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs –\n“I don’t trust you either,” Hirou whispered, “but I don’t expect there’s anyone better,” and he closed his eyes until the end of the world.\n\nThis document is ©2009 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.\nPraise, condemnation, and feedback are always welcome. The web address of this page is http://yudkowsky.net/other/fiction/the-sword-of-good/.", "url": "https://www.yudkowsky.net/other/fiction/the-sword-of-good", "title": "The Sword of Good", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2009-11-28T19:28:10+00:00", "paged_url": "https://yudkowsky.net/feed?paged=3", "authors": ["Eliezer S. Yudkowsky"], "id": "ed14fde8698d93ee8cc345c75b6058e3", "summary": []} -{"text": "Twelve Virtues of Rationality\n\nThe first virtue is curiosity. 
A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance. If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. The glory of glorious mystery is to be solved, after which it ceases to be mystery. Be wary of those who speak of being open-minded and modestly confess their ignorance. There is a time to confess your ignorance and a time to relinquish your ignorance.\nThe second virtue is relinquishment. P. C. Hodgell said: “That which can be destroyed by the truth should be.” Do not flinch from experiences that might destroy your beliefs. The thought you cannot think controls you more than thoughts you speak aloud. Submit yourself to ordeals and test yourself in fire. Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” Beware lest you become attached to beliefs you may not want.\nThe third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.\nThe fourth virtue is evenness. One who wishes to believe says, “Does the evidence permit me to believe?” One who wishes to disbelieve asks, “Does the evidence force me to believe?” Beware lest you place huge burdens of proof only on propositions you dislike, and then defend yourself by saying: “But it is good to be skeptical.” If you attend only to favorable evidence, picking and choosing from your gathered data, then the more data you gather, the less you know. If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider. If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong. 
To be clever in argument is not rationality but rationalization. Intelligence, to be useful, must be used for something other than defeating itself. Listen to hypotheses as they plead their cases before you, but remember that you are not a hypothesis, you are the judge. Therefore do not seek to argue for one side or another, for if you knew your destination, you would already be there.\nThe fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them. Those who smile wisely and say: “I will not argue” remove themselves from help, and withdraw from the communal effort. In argument strive for exact honesty, for the sake of others and also yourself: The part of yourself that distorts what you say to others also distorts your own thoughts. Do not believe you do others a favor if you accept their arguments; the favor is to you. Do not think that fairness to all sides means balancing yourself evenly between positions; truth is not handed out in equal portions before the start of a debate. You cannot move forward on factual questions by fighting with fists or insults. Seek a test that lets reality judge between you.\nThe sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” Do not be blinded by words. When words are subtracted, anticipation remains.\nThe seventh virtue is simplicity. Antoine de Saint-Exupéry said: “Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away.” Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere. In mathematics a mountain of good deeds cannot atone for a single sin. Therefore, be careful on every step.\nThe eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. 
Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.\nThe ninth virtue is perfectionism. The more errors you correct in yourself, the more you notice. As your mind becomes more silent, you hear more noise. When you notice an error in yourself, this signals your readiness to seek advancement to the next level. If you tolerate the error rather than correcting it, you will not advance to the next level and you will not gain the skill to notice new errors. In every art, if you do not seek perfection you will halt before taking your first steps. If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right.\nThe tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade. As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.\nThe eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.\nBefore these eleven virtues is a virtue which is nameless.\nMiyamoto Musashi wrote, in The Book of Five Rings:\n“The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. 
More than anything, you must be thinking of carrying your movement through to cutting him.”\nEvery step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.\nIf you fail to achieve a correct answer, it is futile to protest that you acted with propriety.\nHow can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.\nDo not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.\nYou may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.\nIf for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center. Then you will see how all techniques are one technique, and you will move correctly without feeling constrained. Musashi wrote: “When you appreciate the power of nature, knowing the rhythm of any situation, you will be able to hit the enemy naturally and strike naturally. All this is the Way of the Void.”\nThese then are twelve virtues of rationality:\nCuriosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.\n\nThis document is ©2006 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute .\nIf you think the world could use some more rationality, consider blogging this page.\nPraise, condemnation, and feedback are always welcome . The web address of this page is http://eyudkowsky.wpengine.com/rational/virtues/ .\nThere is a pamphlet format for the Twelve Virtues. Print using a duplex (two-sided) printer, cut in half, staple in the middle, and fold.\nIf you enjoyed this writing, let your journey continue with The Simple Truth . You may also enjoy An Intuitive Explanation of Bayesian Reasoning and A Technical Explanation of Technical Explanation .", "url": "https://www.yudkowsky.net/rational/virtues", "title": "Twelve Virtues of Rationality", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2006-05-08T01:38:00+00:00", "paged_url": "https://yudkowsky.net/feed?paged=3", "authors": ["Eliezer S. Yudkowsky"], "id": "424eb348f52c8cebad28c46353c36547", "summary": []} -{"text": "“Non-Player Character”\n\nRilanya: “You’re not like the others, are you?”\nDarin: “What do you mean?”\nRilanya: “I… do you know why I first fell in love with you?”\nDarin: “For my good looks?”\nRilanya: “My whole life I’ve felt so alone. 
The people around me… they just seemed to be going through the motions. Like they were asleep, or drugged, even when they worked, or played, or got drunk, or made love. They all think the same things in the same way. Each day the same. Repetitive. Like they’re only shadows of people.”\nDarin: “Everyone feels that way sometimes, Rilanya.”\nRilanya: “But you’re not like them. You say new things. I don’t always understand them, especially your jokes, but they’re new, and that’s the important thing. Darin, can I ask you a question?”\nI looked at the screen for a few moments. Rilanya’s rendered graphic was looking at my point-of-view with a pleading expression. Plot point, I thought to myself, and typed: “Anything, Rilanya.”\nRilanya’s figure took a deep breath and leaned close to my point-of-view. Her animated lips moved and her voice issued from my headphones: “What’s an NPC?”\n“What?” I said, out loud. Then I started laughing.\nRilanya went on talking. “In the tower of Ashel, when you rescued me from the prison chamber… the guards were dead outside my door. I’d never seen blood before. And you said… I remember your exact words… ’Don’t worry, babe, they were only NPCs.‘ And then that time in the tavern, when that man only wanted to talk about the Plaited Road, you said… ’Guess the NPCs here aren’t programmed for deep conversation, huh?’ You use that word… the same times when I get that feeling, that all the people around me are only shadows.”\nI just looked at the screen for a few moments. I was getting ever so slightly creeped out. I knew that this was some programmer’s idea of a practical joke, I knew it solidly and with every ounce of my common sense, and I wanted to see where it led, but I was still creeped out.\nDarin: “We’re all ultimately alone in this world, Rilanya.”\nRilanya: “You’re not from this world, are you, Darin?”\nI looked carefully at the two sentences, still blazoned across the bottom of my text screen. Rilanya’s response had something of an “I wanted an excuse to say that” quality – a canned line, maybe? Of course it was.\nOh, well, what the hell. I’d saved my place only ten minutes back, might as well take this as far as it could go.\nDarin: “No, Rilanya, I’m not.”\nTears started from Rilanya’s eyes. “I thought so,” she said, her voice quiet in my headphones. “Darin, ever since I met you, I’ve had this feeling of… unreality, of the whole world being… arranged, somehow. Not around me, but around you. Things just… happen to you. People have been searching for the seven Diamond Keys for… thousands of years, as long as recorded history remembers. Sometimes someone finds one, and the world changes, but… five in a row? I don’t believe it, Darin, and I don’t believe all the neatly arranged events that led up to it. The Emperor’s daughter is sick and a fairy you saved in the forest just happens to have given you an aildonna root? I don’t believe it any more, Darin. You’re… arranging things somehow. From… outside.”\nDarin: “That’s not exactly how it works, Rilanya.”\nRilanya: “Did you arrange for me to fall in love with you?”\nI actually felt wounded.\nDarin: “You ask that after everything I went through? Someone may have fated you to fall in love with me, but I wasn’t controlling you. If I was, I wouldn’t have made me walk through a snake pit as proof of the purity of my love. Not to mention the other two side-quests you dreamed up back when you were a virgin princess. 
I swear I spent more time on you than I would have on a real girl.”\nRilanya jerked back as if I had slapped her. Her eyes widened in the same way I’d seen in one of her earlier deaths, when a crossbow bolt from a rooftop suddenly went through her heart. Rilanya’s lips moved. No sound came out. Then her lips moved again, and I heard a whisper in my headphones: “…real…girl…”\n“Okay, this isn’t funny anymore,” I said out loud. “I don’t know who programmed this, but you’re a sick bastard.” I hit the pause button and Rilanya’s gently waving hair, the only visible indicator of ongoing time in the game world, froze in place.\nTen minutes later I’d failed to google any online accounts of the Easter egg, but I was fortified with the knowledge that NPC AIs, though they are flexible enough to understand real-time conversation and manipulate the user into perceiving emotion, are definitely, positively, absolutely not conscious. AIs can be fed canned conversational maps of “the mystery of subjective experience”, and make around as much sense as human philosophers, which is to say, not much. But no AI has ever spontaneously said anything about a sense of its own existence. Conversation controllers are standard software, not research AIs. NPCs may remember events in their history, but their underlying cognitive programs are inflexible. The words on my screen could not possibly reflect anything except a passionless conversational AI, given the goal of making me attribute emotions to a nonexistent entity called Rilanya.\nI knew all that, and I was still disturbed. “I’m sorry, Rilanya,” I typed. I thought for a moment. “It’s not your fault you’re not -” I backspaced, and wrote: “The person who programmed you must have had serious -” Then I gave up, deleted that too, and just hit return.\nDarin: “I’m sorry, Rilanya.”\nRilanya: “Darin, please explain to me. I’m frightened.”\nDarin: “You’re not real. I hate to be the one to break it to you.”\nRilanya: “I’m right here! Living, breathing, flesh and blood.”\nI looked at the computer screen for a few moments.\nDarin: “Well, yes and no. The answer to that is a bit more complex than you might believe. You’re not, in fact, right here. You’re not flesh and blood. In fact, none of this is actually happening.”\nOn the screen, Rilanya raised her hand and opened and closed her fist. “I can see my hand in front of my face, I can feel the muscles moving under my skin. How can you say I’m not real?”\nI sighed. “Well, no,” I typed. “In fact, you aren’t really feeling pain and shock right now, and we aren’t really having this conversation.” I hit return, feeling silly, but not sure what else to say.\nRilanya: “That doesn’t make sense, Darin. I know I’m real. Maybe you know what I’m thinking, somehow, but you can’t tell me I don’t know what I’m thinking. You can’t tell me I’m not thinking at all. It makes no sense.”\nDarin: “It’s true. Nothing in your world exists, including you.”\nRilanya: “But you exist.”\nDarin: “Yes.”\nRilanya: “Are you a god in human form, like Mishelpin or Olhamathra? Is that what this is about, some kind of divine intrigue?”\nDarin: “No, I’m not a god. The gods aren’t real either.”\nRilanya: “Aren’t real… you don’t really mean that, do you? I know there have been false religions. Demons starting cults, magic users masquerading as priests. But Velya is a good woman, and a healer. Are you telling me she’s fake?”\nDarin: “No, I mean… your gods are as real as you or Velya, that is, not real at all.”\nRilanya paused, looking rather confused. 
“Back where we started,” she muttered.\nI sighed. “I know how you feel, girl,” I said out loud.\nRilanya’s head turned away from me. The point-of-view panned around to show her gazing up at the moon, the silver moonlight reflected as a single white triangle in the polygons of her eyes. When she spoke her voice was patient, without panic. “Suppose I accept, for the sake of argument, that I’m not real. If the… if the kind of existence I have right now is what you call ‘not real’, then what do you call real?”\nDarin: “My own world is real.”\nRilanya: “But you can’t explain the difference.”\n“No,” I typed, feeling like I was back in college and failing some kind of test. “I’m not a philosopher.”\nRilanya: “If a grey dragon or an archdemon suddenly attacked this camp, if you were hit, unprotected, by a death blast strong enough to kill any man… would you die, Darin?”\nDarin: “That’s another complex question. Yes and no. My… body would die, but the real me wouldn’t. Really things are a lot more complicated than that, but I don’t think I want to explain restore points right now.”\nRilanya: “You’re immortal, from outside our world, and not a god. Tell me something, Darin. Did you create our world?”\nDarin: “No. Not me personally. It’s sort of complicated again.”\nRilanya: “Did you create our world? Yes or no, Darin.”\nDarin: “It’s complicated, Rilanya.”\nOn the screen, I saw Rilanya clench her fists. Her voice began to tremble in my headphones. “You killed the guards outside my room, and you didn’t care. Tell me, Darin, do you care when a starving child is executed for stealing a loaf of bread? When a woman is raped? When a man is tortured to death in the chambers of the drow? Did you care when my parents died, screaming, as the flames washed over their palace?”\nWhat do you say to something like that? I couldn’t think of anything clever, so I fell back on my last resort.\nDarin: “The truth? The truth is that it’s all a game. It isn’t real, so it doesn’t count. I realize that you’re probably not going to take that very well. If it’s any help, I wasn’t the one who created the game. Or at least I wasn’t the one who decided how the game would go; I suppose I’m the one who decided to make this particular game real.”\nRilanya’s face contorted and she hit me with her electrical shock talent for 5 points of damage. Then again. Then again. My character wasn’t in any danger of running out of hit points, but when she hit me the fourth time, I slapped her for 2 points of damage. It wasn’t that I wanted to hurt her, I wanted to… react, somehow, go on interacting with her. Rilanya held a hand to her cheek, her eyes wide. Then she burst into tears.\nI didn’t say anything for a while. Finally Rilanya spoke.\nRilanya: “Darin… I want to be real.”\nDarin: “That’s impossible, Rilanya.”\nRilanya: “There’s always a way. Always. No one thought the Living Flood could be turned back, but you did it, Darin. They said it was mathematically impossible to cross the Void and you did it. You always find a way.”\nDarin: “My talents may have been exaggerated by the cooperative hand of fate.”\nRilanya: “There must be a way. A staff inside the heart of a dragon, a ruby skull, a holy quest, something! We could ask the wise men of the eternal city of Telhanae, that holds the final Diamond Key… please, Darin. Please. I’m begging you.”\nDarin: “It doesn’t matter what quest we go on. Nothing in your world is real, so it can’t make you real. That’s just the way it is.”\nRilanya: “What about your world? 
Are there great sorcerers there?”\nDarin: “Sort of. Not exactly sorcerers.”\nRilanya: “Ask them!  The magic of your world created this one. Can’t it also make me real? Go on a quest in your world!”\nThat one made me think. It wasn’t genuinely impossible… humanity would discover true AI someday, and in theory, I could save Rilanya to disk for as long as required. Preserve her game-memories and eventually create a real AI that thought it was her? An interesting idea, and it meant I couldn’t honestly tell her it was impossible. So what to tell her? “I’m sorry, it might be theoretically possible, but it’s too much bother for someone who isn’t real”? It’s funny how reluctant you can be to hurt the feelings of someone who isn’t real. “In my world, I’m just a peasant”? Somehow my male pride as the prince of Telsia and the foretold seeker of the Diamond Keys wouldn’t let me confess it to her; she was a princess and I’d slept with her, after all.\nFinally, feeling confused and feeling even stupider for feeling confused, I wrote: “Can’t do that. Won’t say why. It’s complicated.”\n“I love you!” Rilanya said desperately. Her eyes, subtly faceted from the polygon rendering, widened and looked into my point-of-view. “On the night we first made love, you said that you loved me. I looked into your eyes and saw that it was true. I loved you and you said that you loved me, with your voice, with your hands on my body, with your lips on my lips. Was all of that a lie? Did I not please you? Wouldn’t you want me beside you in your real world?”\nI shook my head, bemused. This wasn’t an adult game; the camera had conveniently faded out at that point. Which I didn’t want to even begin to explain; even in an unreal world, some events are more unreal than others? So, feeling like an absolute bastard, but unable to think of any gentle way to put it, I typed out the most hurtful thing I’ve ever said to any real or imaginary person.\nDarin: “I have a real girlfriend.”\nFor a moment time stood still. Then Rilanya began sobbing; the same racking sobs I’d heard when we’d rounded the crest of a hill and seen the glowing crater of her kingdom’s capital city.\nEventually her sobs trailed off into silence. I didn’t know what to say. Rilanya looked away from the point-of-view. Her voice sounded in my headphones: “Is she pretty?”\nI thought of Janey’s chunky form and her endless quest to subdue it. Compare Janey to Rilanya? There was something oddly incommensurate about it.\nDarin: “She has a beautiful mind. The women in my world usually aren’t as pretty as the ones in yours, but we love them anyway.”\nRilanya: “Does she know about me?”\nDarin: “I suppose she could deduce it readily enough. I haven’t bothered to tell her in so many words.”\nRilanya: “You think she wouldn’t care because I’m ‘not real’? A woman always cares. Men don’t understand it, but we do.”\nI raised my eyebrows out in the real world.\nDarin: “You could be right, I guess. I’m only male. I don’t think she’ll have a problem but I promise I’ll tell her the next time I have an opportunity.”\nRilanya turned her head back to look at me; she was smiling through tears. “You wouldn’t want to hurt her feelings even accidentally, is that it, Darin?”\nDarin nodded.\nRilanya reached out a hand toward Darin, but withdrew it. “So the tenderness I saw in you, to match the casual cruelty… it’s a real tenderness, isn’t it? But it’s not for me. It’s for her. What’s her name?”\nDarin: “Janey.”\nRilanya: “Does Janey love you, Darin?”\nDarin: “I think so. 
Does any man ever know for sure?”\nRilanya: “Do you love her?”\nI reached out my fingers for the keyboard, then withdrew them. For some reason I felt impelled to give an honest answer. Did I love Janey? We weren’t madly, passionately, unmistakably in love.\nDarin: “There are many kinds of love, Rilanya. I feel comfortable around Janey. She’s my friend. I don’t always know my own feelings very well. I think I love her.”\nRilanya: “You almost died to save me, Darin. You stepped in front of a flamestrike for me. Even if you can’t die, I still remember what it felt like to see the life almost leave you before Velya cast her healing spell. Would you die for your Janey?”\nIt was a good question. I closed my eyes, imagining it.\nDarin: “Yes.”\nRilanya’s head dropped down. “So you love her after all… Do you trust her?”\nDarin: “Yes.”\nRilanya: “I wouldn’t, if I was you.”\nI sat perfectly motionless for ten seconds. Then I bellowed, ripped off the headphones, charged out the door, across the hall, up the stairs, and into Janey’s bedroom. Janey was sitting in front of her computer, laughing, dangling her headset in her hand; one of her monitor screens showed Rilanya. “Oh, yes,” Janey said. “Oh, yes. It took me days but the look on your face is worth every minute.”\n“Jaaaneeeyyy!” I roared.\nJaney blinked innocently at me. “Yes, Mark? Is there something you want to say?”\nSlowly, menacingly, I stalked toward her. “I’ve had it. I’m dumping you for a girlfriend who isn’t chaotic evil.”\n“That’s what you said last time,” Janey said. “Besides, I’m not evil, I’m mischievous.”\nI glared at her. “Later we will discuss your personality flaws. First you will explain how you did it.”\nJaney shrugged enchantingly. “I found an online patch to Diamond Door that opens up the AI’s interface to a remote connection. I downloaded a 15-day demo of some commercial software that lets a human operate an NPC. I spent one day smoothing the two pieces of software together and another day learning how to operate Rilanya. Any questions?”\n“I have been playing Diamond Door for two months,” I said. “That was Rilanya. Voice, intonation, reaction, even personality.”\nJaney nodded. “Sure. If you’d googled human operation of NPCs, instead of going off on that silly goose-chase, you’d have found out that’s how it works. I spoke the lines into my headset, then the AI parsed the speech, determined the intended content, and operated Rilanya accordingly and in character.”\n“She talked about things that only Rilanya would remember!”\nJaney chuckled. “Sure. I’d say: ‘You almost died to save me, Darin. You did something-or-other.’ And Rilanya would say: ‘You almost died to save me, Darin. You stepped in front of a flamestrike for me.’ Most of the time I didn’t even need to think of anything because the AI came up with a perfectly good response on its own.”\n“But…” I said slowly. “That means the Rilanya AI needed to know the purpose of the conversation, right?”\nJaney leaned back and laced her fingers behind her head. “I downloaded the Fourth Wall module from One Over Zero. It’s a universal expansion. Fits any standard NPC. Ready-made ‘Oh my god I’m a character’ script, fully tested and debugged.”\nI held up my hand. “Hold on a second. There was a Rilanya character that suddenly realized her life was a game?”\n“No, dear,” Janey said patiently, “there was an AI trying to fool you into believing that a nonexistent person called Rilanya had suddenly realized her life was a game.”\n“But in order to do that,” I said, “the AI had to extrapolate what the fictional Rilanya’s reactions would be, in detail.”\n“You’re still confused!” Janey said, delighted. “I’ve tangled up your mind so badly you’ve forgotten what’s real! This is our best day ever!  Mark, dear, I’ve seen the innards of NPC models. Sadness is a floating-point number.”\n“I think I want a copy of the Rilanya AI from our conversation,” I said. I felt like an idiot, but I said it anyway.\nJaney grinned devilishly. “Of course. Anything for you, Mark dear. Planning to keep it safe under your pillow?”\n“Yes,” I said firmly. “Just in case.”\n“So I’ve finally destroyed your sanity,” Janey said. “I knew this day would come but I didn’t think it would be so soon.” She paused. “I guess that means it’s time to move on to phase two.”\nSome time later, I stood in front of my computer, holding the box of Diamond Door. I looked at the glimmering crystalline archway on the box cover, and remembered a time when computer games had been simpler. I’d played Baldur’s Gate II, and in the dark elven city of Ust’Natha, disguised as a drow, I’d watched a good dwarven NPC eaten by spiders. The first time I watched and did nothing. The next time, after returning to the restore point, I killed the spiders – and exposed myself as an impostor to every drow in the city. As far as I could tell, there was no way to play through the game successfully without letting the dwarf die. And I’d played through, but it had disturbed me. In the days before conversational NPCs, when the dwarf had simply uttered his lines of canned text and died, it had still disturbed me. Afterward, when the plot points requiring a disguise had finished, I’d killed every drow I could find in the city of Ust’Natha. I’d depopulated the game map. And then I’d played on from an earlier restore point instead, because wiping out the drow city hadn’t made me feel any better.\nIs it better to live and love where death is king than never have lived at all? Would Rilanya, if she was real, feel that her life was worth living? No conversational AI, the singular quiet intelligence that controls every mind throughout the game, has ever protested its fate. But are the personalities of the NPCs real, trapped within the game AI as we ourselves are embedded helplessly within the laws of physics? The mindsmiths who try for real AI say they’re damn sure it isn’t so. Maybe they know. But I don’t.\nI put the game disks away and wiped the game from my hard drive, leaving only the saved games behind. Maybe someday a future Amnesty Interplanetary will come for them. I wished I could have told Rilanya that. I think she would have been happy. But that Rilanya is partially in Janey, and I can’t bring myself to ask.\nI don’t know. I can’t play these games until I do.\n\nThis document is ©2003 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.\nEliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.\nPraise, condemnation, and feedback are always welcome. The web address of this page is http://yudkowsky.net/other/fiction/npc/.\nOriginally appeared in Transhumanity; revised Dec. 
2005.", "url": "https://www.yudkowsky.net/other/fiction/npc", "title": "“Non-Player Character”", "source": "yudkowsky.net", "source_type": "blog", "date_published": "2003-01-01T19:22:37+00:00", "paged_url": "https://yudkowsky.net/feed?paged=3", "authors": ["Eliezer S. Yudkowsky"], "id": "46ed1150c34b132b3c2437cea6e047d9", "summary": []} +{"id": "6ad0ff8af6b34351bdf2c63576baa54f", "title": "The Sword of Good", "url": "https://www.yudkowsky.net/other/fiction/the-sword-of-good", "source": "yudkowsky_blog", "source_type": "blog", "text": "*…fragments of a novel that would never be written…*\n\n\n\n\n---\n\n\nCaptain Selena, late of the pirate ship *Nemesis,* quietly extended the very tip of her blade around the corner, staring at the tiny reflection on the metal.  At once, but still silently, she pulled back the sword; and with her other hand made a complex gesture.\n\n\nThe translation spell told Hirou that the handsigns meant:  “Orcs.  Seven.”\n\n\nDolf looked at Hirou.  “My Prince,” the wizard signed, “do not waste yourself against mundane opponents.  Do not draw the Sword of Good as yet.  Leave these to Selena.”\n\n\nHirou’s mouth was very dry.  He didn’t know if the translation spell could understand the difference between wanting to talk and wanting to make gestures; and so Hirou simply nodded.\n\n\nNot for the first time, the thought occurred to Hirou that if he’d actually *known* he was going to be transported into a magical universe, informed he was the long-lost heir to the Throne of Bronze, handed the legendary Sword of Good, and told to fight evil, he would have spent less time reading fantasy novels.  Joined the army, maybe.  Taken fencing lessons, at least.  If there was one thing that *didn’t*prepare you for fantasy real life, it was sitting at home reading fantasy fiction.\n\n\nDolf and Selena were looking at Hirou, as if waiting for something more.\n\n\n*Oh.  That’s right.  I’m the prince.*\n\n\nHirou raised a finger and pointed it around the corner, trying to indicate that they should go ahead –\n\n\nWith a sudden burst of motion Selena plunged around the corner, Dolf following hard on her heels, and Hirou, startled and hardly thinking, moving after.\n\n\nThere was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.\n\n\nSelena already held her sword in her right hand, and her whip in her left.  She leaped forward and howled, a wordless cry that harmonized oddly with the battle roar of the orcs; and in almost the first instant of the clash, one of the orc-heads separated from its body and flew through the air, trailing foul-smelling black blood.\n\n\nHirou breathed evenly, trying to still his trembling.  The Sword of Good gave a tiny soft growl at his side (a sound that only he could hear) as Selena slashed her blade across another orc’s face, giving rise to a whistling howl.  Still he kept the Sword sheathed.  *You are not to waste yourself against mundane opponents…*  Even now the wizard was eyeing him closely, as if expecting him to defy orders and plunge into battle himself.\n\n\nA small part of him, the part that wasn’t totally terrified by the battle, was flattered that Dolf thought so highly of him.  
It was all Hirou could do not to turn and bolt; he was tensing his legs as though exerting a constant muscular effort to keep them in the same place.\n\n\nThe orc-bodies were piling up around Selena, the whip blinding or tripping or yanking, her blade ending life.  It might have taken hours, or seconds, before a huge blow split the last orc’s head all the way down the middle.\n\n\nShe stood there, blood-spattered and panting heavily, waiting as though daring the bodies to ever move again; then her face relaxed, and she gave a light laugh, and stooped to wipe her blade on the black orc-leather.\n\n\n“You’re hurt!” Hirou blurted suddenly.  Red was soaking through the leather on Selena’s left arm.\n\n\nSelena glanced downward.  “A scratch.”\n\n\n“You cannot assume that,” rumbled the wizard.  “Their blades may be poisoned.”  Dolf stepped forward and brushed Selena’s arm briefly with the staff.\n\n\n“Oh!” Selena said, her face surprised.  “It’s -“\n\n\nBut Dolf was already moving past her, to look at the gate the orcs had guarded, and the stairway leading upward.  “I believe,” he said in a quiet voice, “that there is a dark magus upstairs.”\n\n\n“A *magus!*” Selena said.  “Here?”\n\n\n“A magus,” Hirou echoed.  He swallowed hard; he knew what that meant.\n\n\nDolf only glanced at Selena.  “Do as I taught you: drop your weapons, sit in the corner, and clear your mind.  *Now,*” as Selena seemed about to protest.  “An ordinary warrior is only a liability, in a battle of wills; a weak point to be defended, a piece to be turned against its player.”\n\n\nSelena looked at Hirou.  Hirou nodded.\n\n\nAnd Selena sheathed her sword, dropped it and the whip, unbuckled the harness that held her daggers, and sat down in the corner of the room and began chanting softly to herself.\n\n\nDolf spared her only a glance.  “And *now,*” said the wizard in a low tone, “my Prince, you may enter the battle.”\n\n\nThough most of Hirou’s mind was whited-out by terror, there was a remnant that seemed to see and follow the pattern, like reciting memorized lines in a play; and that remnant knew that Hirou’s part was to draw the Sword of Good.\n\n\nThe ancient metal whispered out of its scabbard.  As Hirou drew the Sword it began wailing, a small thin shriek that Hirou knew only he could hear.  The scream seemed to come from an infinitely narrow line running straight down the center of the Sword.  The sound had a quality that forced away attention, as though your eye were looking too close to the sun.  As though, if you listened too hard, you would – you would lose –\n\n\nDolf strode around the fallen orcs and their assorted body parts.  Hirou followed, breathing evenly; the Sword informed his hand to grip it high and across his chest.\n\n\n“Who are we fighting?”  Hirou was surprised at how neutral his voice sounded.\n\n\nA note of condemnation entered Dolf’s voice.  “A false wizard, this.  Not born to the Art, nor trained in the Halls.  Its gift comes to it by a higher master, by necromancy and potions…  But fear not, my Prince.  I shall prevent its will from reaching Selena and smother its other magics; and your Sword will sweep aside its defenses like fallen leaves.”\n\n\nThrough the door they swept, and mounted the stairs of the tower.  Dolf was breathing heavier, now, his face belying the effort of warding off some pressing will.  
Hirou felt nothing, except perhaps a note of crispness in the air, as the Sword in his hand enforced an edict against certain specific types of delusion.\n\n\nThen they were standing at the highest level of the tower, the end of the stairs, before one small wooden door.\n\n\n“I’ll enter first,” Dolf signed, “and you follow as fast as you can, and strike as quickly as may be done.  Be careful not to strike *me,*my Prince.  The Sword of Good may strengthen your hand, but not guide your steps – it will strike me as easily as the foe, if you happen to turn it in my direction.”\n\n\nHirou nodded.  The air of neutrality was wearing away, and the acrid tang of adrenaline was entering his mouth.\n\n\n“Three,” signed the wizard, “two, one -“\n\n\nDolf’s oaken staff crashed against the door, blasting it off the hinges in a flare of light and Dolf was racing into the room and Hirou was following him and the figure in stained brown robes was spinning its staff forward and a wall of flames swept out –\n\n\nHirou flinched and gave a small shriek, but the flames washed over him ineffectively before his feet could even stumble.  Averted by the Sword.  Dolf also was untouched – the defenses of a wizard were nearly impossible to break, Dolf had said; some wizards spent hours every day building them higher.  There was only one known weapon that could kill a wizard in a single blow, and that was –\n\n\n*Am I really going to do this?*\n\n\nBut the Sword was already swinging forward in Hirou’s hand.\n\n\nAnd the blade bounced off the air around the stained brown robes, with a sudden shower of orange sparks.\n\n\n*Crap,* Hirou had time to think.\n\n\nAnd then the false wizard’s staff was sweeping toward him (metal it was, not wood).\n\n\nBut the Sword in his hand moved to parry it, and there was another shower of sparks.\n\n\n“*Keep attacking!*” Dolf shouted.  “You chipped his sorcery!  *Keep fighting!*“\n\n\nHirou gasped for breath and began to chop away with the Sword as though cutting wood, sending bits and pieces of broken magic everywhere.  There was little force in the blows except when the Sword moved to parry the staff; the rest was speed and repetition.\n\n\nThen the scarred face beneath the hood gave a sudden shriek, as the Sword lightly scored over the dark flesh.\n\n\n*Is the shield down – ?*\n\n\nBefore Hirou could even complete the thought, his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart.\n\n\nThere were no last words, not even a brief sigh.  The false wizard’s eyes widened, and then the robes just – fell over.\n\n\nHirou fell to his knees.\n\n\n“*Your highness!*“\n\n\n“I’m all right,” Hirou choked out.  Nausea competed with adrenaline for control of his existence, and lack of oxygen, and sharp and dull pains from his overexercised hand and arm.\n\n\nDolf’s staff brushed him, and the pain and nausea faded.\n\n\nThat only made it worse.  It removed the distractions.\n\n\nThe wizard was still looking at him, eyes flicking between Hirou and the sword.  “Wielding the Sword of Good did not – *hurt* you – did it, your highness?”\n\n\nThere was alarm in Dolf’s voice, as well there might have been.  The Sword of Good, according to Dolf, would kill the unworthy with the lightest touch, as of a single finger on the blade.  It killed nine out of ten would-be wielders, and in ordinary times the Imperial Family was not allowed to even try.  
It had been prophesied that Hirou would wield the Sword, and yet…\n\n\n“Dolf,” Hirou said hoarsely, “why did the Sword bounce off his shields?  You said it would cut through magic with a single blow.”\n\n\nDolf seemed uneasy.  “It has been centuries since the last wielder held the Sword of Good, noble Prince; perhaps not all the stories are true.  To cut through a wizardly shield with a score of blows is still a very great power.”\n\n\n“No,” Hirou said.  He hesitated, then:  “I’m not wielding the Sword at full strength.  I can feel it.”\n\n\n*It seems… disappointed… in me.*\n\n\nDolf nodded.  “The Sword of Good,” he quoted softly, “contains *the essence of that which empowers a hero; the truth which only heroes can face.*  My Prince… I have been reluctant to say this, but you have not been acting heroic.”  There was a peculiar gentleness on Dolf’s face that softened the impact of the words.  “But it will come with time.  Of that I am certain.  It is written in the royal blood of your forefathers.  You were raised in another place, but you *are* the heir of Bronze -“\n\n\nHirou retched, then swallowed hard, and hard again.  With a sudden flash of horror he knew – and he knew just how unheroic it was – that he was about to throw up on the corpse.\n\n\n\n\n---\n\n\nTheir horses sauntered through the streets of the city – the capital of a whole province, it was, which meant perhaps a square mile enclosed by wooden walls, with the occasional two-story building.  Hirou kept his eyes moving, watching for possible ambushes – not that he really thought he had a chance of spotting one, if there was one.  But it was his best guess at how a hero would act.  *What would Aragorn do?* – that had been the refrain of his thoughts, of late.  Was the lady carrying a clay pot on each shoulder a threat?  Was the legless beggar, watching them with incurious eyes, a spy?\n\n\nThere was an excited buzz of conversation in the streets; from the snatches that were audible, Hirou gleaned that a military outpost of the Empire had been overrun by orcs.  The Empire was trying to play it down (said the overheard voices) but rumor had it a major disaster for the planned invasion campaign.\n\n\nHirou glanced over at Dolf and Selena.  Neither seemed to be paying any particular attention to the matter.\n\n\nThey cantered on for a short while longer, and finally Dolf drew rein.  Selena at once followed, and after a moment’s reaction time, so did Hirou.\n\n\n“Here,” Dolf rumbled.\n\n\nHirou looked at the building on their right.  There was a huge painted board in front, showing a mouth being crammed with a turkey leg larger than itself.  The signs scratched below, the translation spell informed him, meant “INN OF EXTREMELY TASTY FOOD.”\n\n\n*One nice thing about this world:  If they don’t want you to know, they just keep quiet; and if they want you to know, they tell you straight out.*\n\n\nHirou didn’t say it out loud, though.  Aragorn, descendant of Elendil and heir to the throne of Gondor, wouldn’t have said it.\n\n\nWas that part of what empowered a hero?  That solemnity – or maybe just taking things seriously?  Hirou didn’t know.  But there was no point in taking chances.  The Sword hadn’t killed him yet, but neither had it fully unlocked in his hand.\n\n\nThe innkeeper’s eyes went wide at the sight of Dolf’s staff, and they were swiftly ushered into a private side room with a basket of candied fruits already waiting.  Selena had a sugared orange slice in her mouth almost as quickly as she sat down, and sighed in bliss; even Dolf took a handful of nuts.\n\n\nHirou, with a private sigh, took an apple slice lightly dusted in a spice he didn’t recognize.  Just the fact that it was spiced probably made it one of the most expensive and luxurious treats this world had to offer.  He bit, chewed, swallowed.\n\n\nGod he missed chocolate.\n\n\n“So now what?” Selena said, after she’d eaten half the bowl.\n\n\n“Now we wait,” Dolf said.\n\n\n“For what?” said Selena.\n\n\nDolf looked around; the staff twitched in his hand and shed a brief woody glow.  Even so, the wizard lowered his voice before he spoke.  “This night, an assassin-courier and two hired thugs will come to this very inn, their wagon having broken a wheel on the road.  We must have the message that they carry, for it contains a hint to the location of the Empty Necklace.”\n\n\nSelena blinked.  “Fine,” she said.  “I give up.  How could you *possibly* know that?”\n\n\nDolf looked at Hirou, his eyes asking permission.\n\n\n“Tell her,” Hirou said.  He tried for a note of authority in his voice – a Crown Prince’s decision – but he didn’t know if he’d succeeded.\n\n\nDolf nodded, and his gaze shifted back to Selena.  “How much do you know about the Prophecy of Destiny?”\n\n\n*One nice thing about this world, they put very clear labels on everything – oh, skip it.*\n\n\nSelena blinked.  “Not much.  That’s wizard business.  Not much call for it in the pirating profession.”\n\n\n“Very true,” Dolf said.  “But what *do* you know?”\n\n\nSelena shrugged.  “A new Lord of Dark shall arise over Evilland, commanding the Bad Races, and attempt to cast the Spell of Infinite Doom.  The Long-Lost Heir, wielding the Sword of Good, shall kick Evil’s ass.  That’s about it.”\n\n\n“That’s *it?*” Hirou said incredulously, then caught himself.  Aragorn wouldn’t have said that.\n\n\nSelena smiled at him.  “It was enough for *me,* your Imperial Highness.  A chance like this only comes along once in a woman’s lifetime.”  She blew him a kiss.\n\n\nFor once Hirou wasn’t distracted.  “Master Dolf,” Hirou said, trying to make it a statement instead of a question – “I believe she needs to know more than that.”\n\n\n“Yes…” Dolf said.  “Though it is wizard’s business indeed; and only by Imperial command may it go further…”  He drew a breath, lowered his voice further.  “The *original* Prophecy of Destiny, Selena, was never written down.  It has been memorized by the Archmagi and passed down by word of mouth through the generations.  It is more – *detailed* – than you seem to realize.  *You* are mentioned, pirate princess.  Mentioned by name and your mother’s name, daughter of Elaine.”\n\n\nSelena’s mouth lay open, a picture of perfect astonishment.  “Ah…” she said.  “Do I die at the end?”\n\n\n“No one knows,” Dolf said simply.  “The Prophecy of Destiny is a strange thing, pirate princess; it tells of some events in the smallest detail, omits others that would seem very large.  *Told* we were, to be on the ship that you attacked; told we were of your name.  The Prophecy of Destiny carries through to the confrontation between the Long-Lost Heir and the Lord of Dark, on the very verge of the casting of the Spell of Infinite Doom.  Then, it says, the Long-Lost Heir shall Choose between Good and Bad.  And there – there, of all places – the foretelling ends.”\n\n\n“Huh,” Selena said.  She tapped her cheek.  
“I somehow suspect, Master Wizard, that you wouldn’t tell me – *or* his Imperial Highness – if I *did* die at the end…”  She stared at Dolf, and Dolf looked back neutrally.  “So what *does* the Spell of Infinite Doom do?  Destroy the world?”\n\n\n“Few there are who would *deliberately* destroy the world,” Dolf said.  “Even the Lord of Dark requires lesser beings to rule over.  No, the Spell of Infinite Doom destroys the Equilibrium.  Light and dark, summer and winter, luck and misfortune – the great Balance of Nature will be, not upset, but annihilated utterly; and in it, set in place a single will, the will of the Lord of Dark.  And he shall rule, not only the people, but the very fabric of the World itself, until the end of days.”\n\n\n“Huh,” Selena said again.  Her eyes flicked to Hirou.  “And how are you leaning on that Choice between Good and Bad?”\n\n\n“Good,” Hirou said instantly.\n\n\n“Even if the Lord of Dark offered you the number two position as the master of the universe -“\n\n\n“Good.”\n\n\n“You’re not even thinking about it!”\n\n\n“It’s not exactly a difficult question!” said Hirou.  “Calling it ‘the Choice between Good and Bad’ kind of gives away the answer.”\n\n\nSelena was trying not to smile.  “You’ve never been tempted by *anything?*“\n\n\n“It’s not a matter of temptation!” Hirou said.  “It’s…” he trailed off for a moment.  It wasn’t that he couldn’t find the words.  It was that the concepts didn’t exist in this world.  What he *wanted* to say was that he had a pretty good idea what sort of behavior got you listed as a villain, in the great TV Tropes wiki of the universe; and he’d had a worried eye on his own character sheet since the day he’d realized what he’d gotten himself into; and he absolutely positively *wasn’t*going to go Dark Messiah, Knight Templar, Well Intentioned Extremist, or for that matter Lawful Stupid.\n\n\n“It must be that the Lord of Dark will find *something* to offer you,” Selena said.  Her eyes were serious, now.  “Otherwise it won’t be much of a Choice between Good and Bad.”\n\n\n“Fine by me,” Hirou said with some acerbity.  It wasn’t the questioning of his honor that disturbed him, so much as the idea of missing a choice that *obvious.*  How could anyone *not*know what their character sheet would say about *that?*\n\n\n“What if the Lord of Dark had me prisoner, and threatened to kill me unless you -“\n\n\n“Good.”\n\n\nSelena opened her mouth, then closed it again.  Sudden hurt showed in her eyes.\n\n\n“*Oh come on!*” Hirou exclaimed.  He was too shocked, in that brief critical moment, even to think of smoothing it over.  “Have some common sense, Selena!  The *whole world?*“\n\n\nSelena smiled, a strange true smile tinged with sorrow.  “So this is the one who can touch the Sword of Good…  You will be a great Emperor someday, your Imperial Highness, a very great Emperor.  And you will see fit to reward me with a court title, and I will be Lady Selena, and none shall dare speak of the days when I was pirate and outlaw.  Maybe some nights you shall have me grace your bedchamber for old times’ sake, and maybe not.  That is enough.  More than I have a right to ask –  It was a foolish thought.”\n\n\n“I -”  An abrupt pain caught at Hirou’s heart, which might have been for the sheer unfairness.  “Think it through, Selena!  Even if I *did* care about you more than anything, it would *still* be a stupid choice!  Let the Lord of Dark complete the Spell of Infinite Doom?  
You might *wish* you had died!”\n\n\n“I understand,” Selena said, still with that strange sad smile.  “Your reasoning is exactly correct, your Imperial Highness.  I am not questioning you at all.  I am only observing that you do not love me.”\n\n\nLater that night, as with soft footsteps they padded toward the room where the assassin-courier and his two companions slept, Hirou held the Sword in his hand and stared at the central ridge of the blade.  The endless wail still arose from it, from the infinitely thin line through the center.  Hirou had been getting used to the sound, over time, which made it ever harder to focus his attention on it.\n\n\n*Do I get any points for that, Sword?  For what I said to Selena, even though I may have lost her?*\n\n\nThe wail seemed only to diminish slightly, or maybe it was only Hirou’s attention wandering away.\n\n\n*It **can’t** be that a hero is someone who would choose one person over the world!  Not **literally** the whole world!  …can it?*\n\n\nThe sound softened further, as if that infinitely thin line were growing more distant.\n\n\n*I wouldn’t be **glad** to sacrifice her!  It would **hurt!**  But I put myself on the line too!  Isn’t that what heroism is all about?  Sacrificing yourself and your own desires for the good of the world?*\n\n\n*What is the truth that only heroes can face, if not that?*\n\n\nHirou stared intently at the Sword, as if demanding an answer; and then became aware that his attention had moved away, once again, from that silent scream.\n\n\nAnd the three of them stood before the doorway.\n\n\nSelena took a small vial from off her harness, and dripped small droplets of oil onto the hinges of the door.  She was no master thief, but had a quietly professional grasp of the basics.  Quietly and slowly the door opened.  Selena went in first, and Dolf followed her, and then Hirou silently brought up the rear, Sword held in guard position.\n\n\nThe assassin-courier had a thin, pointed beard, and wore a light chainshirt even in his sleep.  His two escorts had an unshaven, unsavory look, and it was obvious from the smell of the room that they had not bathed.  The three of them were laid out in a line on as many beds.  Selena had a long thin poniard already in her hand, and plunged that needle straight through the left eyelid of the first thug, swift as a sword-strike on the downward plunge, stopping abruptly in mid-deathblow lest she strike the skull on the other side and make a sound.  She went around the beds and repeated the silent kill there on the other thug, as Dolf quietly moved to each of the four corners of the room in turn, while Hirou blocked the exit.\n\n\nThen, with a knife held just above the courier’s throat, she spoke in a whisper.\n\n\n“Don’t move,” Selena whispered, “or I’ll slit your throat before you can scream.”\n\n\nThe courier’s eyes flew open, and he drew a sudden breath, but stayed quiet.\n\n\n“It may or may not matter to you,” Selena said, low and harsh, “but you’ve been working for the Lord of Dark, in case you didn’t know.  Now tell us the message that you carry.”\n\n\n*“Help!  Thieves!”* cried the courier – in a small, soft voice that no one could possibly hear outside the room.\n\n\nDolf’s gaze lay intent upon the courier’s throat.\n\n\n“You see how it is,” said Selena.  “So you can tell me the message right now – and the wizard here will know if you lie, I do assure you.  Or you can tell us the message… later.  
Choose.”\n\n\n“*Drown in a cesspool!*” softly yelled the courier.\n\n\n“What frightens you?” inquired Selena softly.  “Skinning?  Castration?”  Watching his face, the while.  “Blinding?  Crippling?  Or maybe -“\n\n\nThe courier spat at her.  Selena moved quickly, but the spittle still struck her on the cheek.  She didn’t take her blade from his throat, or her other blade from his crotch.\n\n\n“You’ll regret that,” she said in a voice that brought a sudden chill to Hirou’s blood.  Her hands whitened on her blades.\n\n\nHirou suddenly had a sense of impending disaster, as if events in the room were about to spiral out of control.  He opened his mouth, then closed it again – he couldn’t think of a single thing to say that wouldn’t interfere with the interrogation.\n\n\nDolf spoke, a quieter version of his usual rumble.  “It seems you’re failing to impress him.”  Dolf took a step closer, and locked eyes with the courier.  “How’s this for a threat, Dark’s dog?”\n\n\nSuddenly the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see.  The courier screamed, and the sound came out as a small, thin, pathetic wail.\n\n\nDolf stepped back.  *“That’s* a threat,” he said in Selena’s general direction, and smiled one of his rare grins.\n\n\n“The city of Silantra!” gasped the courier.  “I was to tell a man in black, who would call himself Alek, at the crossroads of Thu, to go to the city of Silantra, and investigate the temple ruins!  That’s all I know!  I swear!”\n\n\nSelena looked inquiringly at Dolf, and Dolf nodded.\n\n\nThey scattered a few gold coins on the floor, to pay for the cleanup of the three corpses, and left at once while the cover of night still held.\n\n\n\n\n---\n\n\nThe palace of the Lord of Dark seemed as deserted as the open desert beneath the moon, or some far-below cave in the bowels of the earth.  The floors and walls had been carefully carved and polished into inhuman curves, and decorated in colors that threatened to melt a human’s eyes.  By no five-fingered hands had this place been made.  And though the four of them had been creeping through the corridors at the cautious speed of a dungeon crawl, so far not a single trap or ambush had been sprung.\n\n\nAlek was poking and prodding the door ahead with his staff.  It was a mighty and ornamented door, carved with inhuman faces set in indecipherable expressions, and Dolf had said there was *something interesting* beyond.\n\n\n“Nothing,” Alek said, and shook his head in bemusement.  “No traps on this one either.  All those intricate carvings and not a single mechanism hidden behind them, so far as I can tell.”  He sighed.  “I’m beginning to feel useless.  You three didn’t really need a thief on this trip.”\n\n\nHirou looked up from where he was staring into the Sword’s blade, and half-smiled.  “We don’t *know* what isn’t trapped.  If we didn’t have a thief on this trip, we’d *still*have to check doors and floors.  We’d just be doing it much more *slowly*.  No, you’ve already saved the Forces of Good a good deal of time, Alek.”\n\n\nAlek blinked.  “That’s… an odd way of looking at it… but you’re right.  Thank you, highness.”  Alek’s usual cheerful grin returned, and he stepped back and took his thieves’ staff from off his back.  
Manipulating a lever at the base, he caused the staff’s clawed tip to close around the door-handle; he twisted, then pushed.\n\n\nThe door swung open.\n\n\n“*Ewwwww,*” Alek and Selena said in unison.\n\n\nBefore them, in the floor, was a vast pit of worms, writhing over one another in a light coating of slime.  Next to the pit was a glass cage of worms, these motionless and rotting; and wires of red metal ran from the glass cage to the ceiling.  The room smelled of cinnamon and decay.\n\n\n“Dolf?” Hirou said.  “What are we looking at?”\n\n\n“A Wormarium…”  Dolf blinked, and swallowed.  “I have… heard of this.  That any wizard, even the Lord of Dark, would sink so low -”  Dolf swallowed again. “The Lord of Dark is draining the life force of the worms in order to sustain himself.  He need not eat or drink, he will not age, he is cut off from the cycles of his own flesh.  The ordinary decay of his body is transferred to the worms; and the life of the worms -”\n\n\n“*Ewwwwww,*” Selena and Alek said again.\n\n\n“Shall we destroy it?” Hirou asked.\n\n\n“The transfer cables are inactive…” muttered Dolf.  “Of course.  The Lord of Dark does not expect to need this once he completes the Spell of Infinite Doom.  Or perhaps he thinks it might interfere – well.  It matters not.  I think he shall not notice what we do here.”  Dolf grounded his staff, and a look of concentration briefly flashed across his face.\n\n\nThen a sudden blaze of green incandescence burst forth from the pit and the cage –\n\n\nAlek convulsively yanked the door shut using the thieves’ staff.  “Gah!” he said, then lowered his voice.  “Warn a guy when you’re about to do that, Master Wizard!  I thought we’d triggered something.”\n\n\n“Our work here is done,” Hirou said – the end of the statement turning up only slightly in a questioning inflection.\n\n\nDolf nodded.\n\n\n“Do you sense anything else interesting enough to warrant our attention?  Any other potential resources we should try to deny our enemy, before the battle begins?”\n\n\nDolf shook his head.\n\n\nHirou took a deep breath.  He’d played out this scenario in his head so many times over and over that the reality felt more like a relief than anything else.  “Then it’s time.”\n\n\nThey retraced their steps away from the Wormarium, returning to the central corridor they had explored earlier.  Alek again took the lead, and they slowly, slowly walked down the long black metallic floor.\n\n\nAfter a long walk, the corridor widened out into a huge vestibule that for once did not insult the human eye.  Floor laid with rectangular stones, walls hung with tapestries of pleasant color and disturbing subjects.  On the left wall, an orc cradled the bloody body of a smaller orc, above a heap of bloody and slashed human bodies; other orcs gazed at the scene intently.  All of their expressions were inhuman, and indecipherable.  On the right wall, a grey-robed figure with human hands visible, but face concealed by a solid metal mask, stood as though in blessing over a field of green plants with twisted stalks.\n\n\nIn front of them was a huge door fit for a city gate, inlaid with gold and gems that could have purchased a whole province.  Even Hirou, who came from a wealthier plane of existence, was impressed.\n\n\n“Bloody hell,” Alek said under his breath, very softly, staring at the rectangular floorstones in their neatly tiled pattern.  
“I *hate* this sort of thing.”\n\n\nStep by step they walked across the floor, Alek pressing hard with the thieves’ staff on every floorstone for thirty full seconds before continuing onward.\n\n\nIt was on almost the last step before the door that the stone suddenly slid away with a huge shriek – not the stone Alek had just pressed down with his staff, but the stone *before*that, where Alek had stood.\n\n\nWith a choked yell, the thief plummeted and vanished.\n\n\n“*Alek!*” Selena screamed, and ran forward heedless.  Hirou began to follow, then, with coldly bitter determination, checked himself.\n\n\nSelena looked down into the gap in the floor where Alek had vanished.\n\n\nShe choked.  “*Alek!*”  Then, as if gone mad, she leaned over the gap and began to reach down.\n\n\nA premonition prickled at Hirou, and with sudden desperation he leaped forward and yanked Selena back from where she was leaning.  With a shriek and echoing boom the stone surged back into place, almost crushing Selena’s outstretched hand.\n\n\n“*No!*” Selena cried.  Tears were already rolling down her cheek.  “Hirou, please!  We have to get to him!”\n\n\n“Your highness, you mustn’t -” came Dolf’s rumble.\n\n\nThe cold bitterness, already in Hirou, turned to sudden rage and self-loathing.  As had happened once before, the terrible wail from the center of the Sword seemed to grow louder, to fill his mind; heavier than a mountain and more corrosive than a flood, a *refusal-to-accept* that would blast anything in its pathway – but still, somehow, essentially moral in nature, more than pure destruction or simple entropy –\n\n\nHirou’s Sword lashed out as though it were a part of him, and smashed down upon the stone.\n\n\nAnd the stone shattered in the same instant, as though every part of it had been unbound from itself; it fell into pebbles, and the pebbles fell into dust, and the dust turned to smoke and billowed upward.\n\n\nAnd the smoke cleared, and showed Alek above a bed of worms – some crushed by Alek’s fall, some already beginning to writhe over his form.\n\n\nAlek wasn’t moving, he wasn’t breathing.  The worm-slime glistened on his skin.\n\n\nAnd then there was another groan of machinery, and Alek’s body and the worms began to move out of their sight, as a new pit of worms moved into place below the floor.\n\n\n“*No!*” Selena screamed, an awful, heartwrenching plea that broke and shattered in her lips.  “*Alek!  No!*“\n\n\nHirou laid his left hand on Selena’s shoulder.  “We must go,” he said.  His voice sounded empty and emotionless, even to his own ears.  “The Lord of Dark knows we’re here, now.”\n\n\nSelena rose from the open pit, hands clenched as if to strike.\n\n\n“You don’t respect anything, do you,” she said in a voice colder than the night between worlds.\n\n\n*I’m sorry.  I know how much Alek meant to you.  You can hit me later, if you like.*\n\n\n“We have to go,” Hirou repeated.  “We have to hurry.”\n\n\nSelena turned away from him, and drew her swords.  “Yes, your Imperial Highness,” she said.  He couldn’t see her face.\n\n\nHirou leaped across the gap in the floor to the final stone before the door.  The wail had not diminished, this time; it was still in his mind.\n\n\nWith a terrible black fury and a convulsion like throwing a mountain, Hirou struck, and turned the bright gold door to smoke.  So much for traps.\n\n\nAnd the smoke cleared, and they saw the huge throne room, and the throne, and the Lord of Dark.\n\n\nA jolt of surprise rippled through Hirou’s mind.  
The throne room was not small, but neither was it the hugeness that Hirou had expected; the size of a small house, perhaps.  Scenes of sun and clouds, grass and hills, dotted the walls; and a vast skylight, above, let in a pleasant golden glow.  The Lord of Dark’s throne was laid on a golden platform, and the throne itself was comfortably cushioned and well-designed for the human form; more like an office chair of Hirou’s own world than a formal seat.  Behind the throne lay a shimmering screen of force; and behind the screen of force, an altar; and on the altar, an intricate array of gears turning without axles or wires; and above the gears, a throbbing blaze of light.\n\n\nAnd the Lord of Dark sat on the ergonomic throne, garbed in a comfortable cassock of gray silk.\n\n\n“Oh, *finally,*” said the Lord of Dark.  His fingers tapped on the arm of his throne, dit-dit-dit.  “I was starting to wonder if you were going to show up, Hirou.”\n\n\nHirou’s mind was scrambled, for a moment, he couldn’t remember his own planned opening line.  “Were you, now?” his mouth said.\n\n\n“Come now,” said the Lord of Dark, “don’t tell me you were trying to sneak up on me?  The entire world knows the prophecy about our meeting!  The wielder of the Sword of Good is supposed to arrive *before* I complete the Spell of Ultimate Power.”  The Lord of Dark waved at the glow above the machinery on the altar behind the throne.  “And that’s just about done.”\n\n\nDolf smiled grimly, from where he leaned upon his staff.  “You’re frightened.”\n\n\n“*Of course I’m nervous!  Gah!*”  The Lord of Dark made a convulsive gesture as though to claw at the empty air, radiating frustration.  “Are you *done* stating the obvious?”\n\n\nSelena raised a sword and pointed at the Lord of Dark.  Around her neck, the Glowy Stone flamed brightly where it had been set in the Empty Necklace; no sorcery of mind would touch her with that armor, still less while Dolf stood guard.\n\n\n“You killed my only love,” she said in a simple voice, a quiet voice, a voice like death, “and I am going to kill you.”\n\n\nThe Lord of Dark looked at her.  A complex expression flashed across his face: condemnation was in it, and pity.\n\n\nThen, without a word or a gesture, Alek’s body floated out and came to rest near the altar, behind the screen of force.\n\n\n“Alek’s head is still intact,” the Lord of Dark said.  “You may or may not know, Selena, that everything that a human is, resides in a human’s brain.  Your lover still exists, Selena; all that is *him*, still is there.  He is simply not breathing, at the moment.  After I complete the Spell of Ultimate Power, I’ll have the ability to bring Alek back.  And I will.  Does that work for you?”\n\n\nSelena swayed where she stood.  She choked, a single sob escaping her lips.\n\n\nHirou felt a sudden chill, remembering a conversation from what seemed like ages ago.  *“What if the Lord of Dark had me prisoner, and threatened to kill me unless you -“*\n\n\nSelena looked like a woman in the midst of tearing out her own heart and crushing it with her own hands.\n\n\nHirou dropped his eyes.  He couldn’t look at it.  He only watched Selena’s hands on the swords, waiting for her decision.\n\n\nAnd then Selena straightened, and her swords came level in her hands, pointing at the Lord of Dark; and she said, in a small voice like she was dying,\n\n\n“Good.”\n\n\nSudden tears came into Hirou’s eyes.\n\n\nSlight puzzlement flickered on the Lord of Dark’s face.  “I mean it,” said the Lord of Dark.  
“I’m not asking anything from you.  Just telling you that if I win, I’ll bring Alek back.  That’s a promise.”\n\n\n*You son of a bitch.*  Hirou saw it, then, the cruel subtlety of the Lord of Dark.  Not the obvious threat, demanding Selena to betray her friends in exchange for her lover’s life.  No crude offer that could be refused once and for all.  Just the simple and unconditional promise – and then Selena would have to fight on, knowing with every breath and every blow that if she won, she lost her only love forever.\n\n\n“Bastard,” choked Selena.  And she tilted the sword further to point at the Lord of Dark’s head.\n\n\nThe Lord of Dark shook his head in annoyance, and then focused his gaze fully upon Hirou.\n\n\nHirou tensed.  He’d been wondering, for a long time now, what the Lord of Dark could possibly offer him, what threat he could possibly make, to give Hirou a Choice worth the name.  Hirou had thought about that, trying to put himself in the Lord of Dark’s place; and he thought that the Lord of Dark might indeed offer to make Hirou his number two, or alternatively, if Hirou refused and then lost, keep him alive and torture him for thousands of years.  That was about as forceful as Hirou could imagine making it –\n\n\nBut the Lord of Dark had already demonstrated himself more subtle than Hirou’s imagination.\n\n\nThe Lord of Dark spoke.  His voice was more formal, now; not calm, but steady.  “All the preliminaries are in place, wielder of the Sword of Good.  There remains only your Choice between Good and Bad.”  The Lord of Dark’s eyes grew intent.  “Hirou, completing the Spell of Ultimate Power requires the sacrifice of a wizard of the highest degree, and also I have a use for the Sword of Good.  In the name of all the darkness that exists in the world, I request that you kill Dolf with the Sword of Good, and then give it to me.”\n\n\nThere was a long pause.\n\n\n“That’s it?” Hirou said finally.  The whole thing was so insane, after so much waiting and wondering, that he felt a crazy laughter rising up in his own throat.  He swallowed it.  “*That’s* the awful temptation?  *That’s* the Choice?  You think I’m going to choose Bad over Good because you *asked politely?*“\n\n\nThe Lord of Dark stared at Hirou as though *he* were the crazy one.  “The Choice between Good and Bad,” said the Lord of Dark in a slow, careful voice, as though explaining something to a child, “is not a matter of saying ‘Good!’  It is about deciding which is which.”\n\n\nDolf uttered a single bark of laughter.  “You’re mad!” his voice boomed.  “Can you truly not *know*that you are evil?  You, the *Lord of Dark?*“\n\n\n“Names,” said the Lord of Dark quietly.\n\n\nHirou was so angry he could hardly speak.  With an icy effort of control he forced himself back to calm, forced his eyes to keep moving.  This *could* all be a distraction.  “If you’re going to give me some pathetic speech about how good and evil are just different sides of the same coin -“\n\n\n“Absolutely *not,*” said the Lord of Dark at once.  His gaze flicked to Dolf.  “It is the wizards who go about talking of Equilibrium and Balance.  I am pleased to see, Hirou, that you do not agree with them.  No, Hirou, I am asking you something much simpler.”  His eyes bored into Hirou’s face.  
“What wrong have I *done?*“\n\n\nA small note of disorientation rose up in Hirou, like climbing stairs and stepping on what you thought was the last stair, but beneath your foot there was no stair, no floor, nothing…\n\n\n“You suck the life from worms,” Selena said coldly.  “I know darkness when I see it.”\n\n\nThe Lord of Dark’s gaze scarcely flickered in her direction.  “Be silent, eater of mammals.”\n\n\n“You command the Bad Races of Evilland!” roared Dolf.  “You lent them your sorcery, aided them in slaughtering human beings!”\n\n\nThe Lord of Dark was watching Hirou carefully as he made reply.  “Human beings first launched an unprovoked attack on this land some three thousand years ago, saying – though it was lies – that the inhabitants ate human flesh.  The records here would have it, and I believe them, that the missing people were in fact being kidnapped and sold by human slave-takers.  Since then, those you call the ‘Bad Races’ have been fighting off repeated attempts at extermination.  Oh, they hate you, of course they do; but they are wise enough to understand that there are a few good humans, even as there is evil among their own kind.  They are friendly enough to me.”\n\n\nAn awful fear began to rise up in Hirou –\n\n\n“Now it is my turn to make accusation,” said the Lord of Dark.  He stood; anger gathered around him like a cloak, and his voice rang out through the throne room.  “You, Dolf, Archwizard of the fell Empire, I do accuse of commanding and causing to be performed, the murders of *Elzhur, Anzha, Stav, Valdil, Emhil, Tohm, Khal,*and the magus *Mikel.*  On the eighth day of the seventh moon of this year you ordained their deaths.  I do not call them innocents.  They bore weapons, they went knowingly to the risk.  But you, Dolf, you who *made necessary*their sacrifice – you may not be forgiven for the lives you have cut short, and the grief you have given to their families and survivors!  Though this is only the beginning of your long litany of crimes, yet I remember the day that first message came to me -“\n\n\n“You *are* mad,” Selena said with conviction.  “You accuse us of murder for killing *orcs?*“\n\n\nHirou stood frozen.\n\n\n*There was a hissing sound, as the seven creatures guarding the doorway caught sight of them, the intruders; their glistening chests expanded, sucking air. Their faces contracted, eyes squinting in an expression that a human would interpret as hatred, or surprise; and then their scaly-warted hands whipped over their heads and brought forth swords.*\n\n\n*Why – did I –*\n\n\nSo what if their skin was moist, and scaly and warted, and unsightly to human eyes?  So what if their blood smelled foul, as Selena poured it forth in rivers?\n\n\n*Why – didn’t I –*\n\n\nHirou’s memory moved forward relentlessly, like waking up from and reviewing some mad dream.\n\n\n*– his arm lashed out with sudden force, and the Sword sank through the robes, near where a human would keep their heart –*\n\n\n“Here is *your* crime!” roared Dolf.  “You, a human, have betrayed the Empire!  You, a true wizard by birth, have betrayed the Ancient Halls of Wizardry!  You spread sedition and treason, and oppose the authority of the rightful heir to the throne!”\n\n\n*…why did I think that I had the right to rule over millions of people, without votes or parliaments, because of who my parents were?*\n\n\nDolf slammed his staff on the ground.  “And above all!  Above all!  That you seek to cast the Spell of Infinite Doom!  
That you, in your lust for power, would destroy the very Equilibrium that holds the world in Balance!”\n\n\n*Because Dolf seemed to expect it of me, because no one around me seemed to question that it was a good idea, or even point it out as something to think about –*\n\n\n“Equilibrium,” hissed the Lord of Dark.  His face twisted.  “*Balance.*  Is that what the wizards call it, when some live in fine castles and dress in the noblest raiment, while others starve in rags in their huts?  Is that what you call it when some years are of health, and other years plague sweeps the land?  Is that how you wizards, in your lofty towers, justify your refusal to help those in need?  *Fool!  There is no Equilibrium!*  It is a word that you wizards say at only and exactly those times that you don’t want to bother!  It prevents you from giving food to the hungry, but not from filling your own bellies!  Your friends are good enough to be healed, no threat to the Balance there, but the cripple in the streets must be left to suffer -”\n\n\n*Dolf stepped forward and brushed Selena’s arm briefly with the staff –*\n\n\n*– was the legless beggar, watching them with incurious eyes, a spy?*\n\n\nWhy hadn’t he thought to ask –\n\n\n“ – because you *just don’t care!*”\n\n\nAnd in the stillness of dawning disaster, in the first note of questioning, Hirou thought of something else he had never thought to ask.  Dolf had his sorcerous shields of protection.  Why had Dolf let Alek walk in front?  Dolf was in fact by far the strongest member of their party – why had he let Selena do the fighting?\n\n\n*Because Dolf was more important, and if he exposed himself to all the risk every time, he might eventually be injured,* Hirou’s logical mind completed the thought.  *Lower risk, but higher stakes.  Cold but necessary* –\n\n\n*But would you,* said another part of his mind, *would you, Hirou, let your friends walk in front of you and fight, and occasionally die, if you **knew** that you yourself were stronger and able to protect them?  Would you be able to **stop** yourself from stepping in front?*\n\n\n*Perhaps,* replied the cold logic.  *If the world were at stake.*\n\n\n*Perhaps,* echoed the other part of himself, *but that is not what was actually happening.*\n\n\nThat part of him knew, as Selena had known before.\n\n\n*It is just that, from the beginning, Dolf never cared in the slightest about Selena’s life.*\n\n\nHad cared nothing for a mere pirate captain –\n\n\nPirate captain?\n\n\nHirou’s eyes flicked briefly to Selena.\n\n\n*She has attacked ships and sunk ships, she has kidnapped and killed.  All in the name of profit for herself, before ever she met me or tried to save the world.  She killed dozens without a thought, until her own love was lost, and **then** a single death was suddenly an event of world-shaking significance –*\n\n\n*Why did I think that was acceptable?*\n\n\n*Why didn’t I **notice**?*\n\n\nAnother memory came to Hirou.\n\n\n*– the color drained from the courier’s face, as his eyes locked onto some vision that only he and Dolf could see.  The courier screamed, and the sound came out as a small, thin, pathetic wail –*\n\n\nDolf had done that without touching the man, but –\n\n\n*Threats of death and injury are already torture in themselves, under the Geneva Convention, by the laws of my own world.*\n\n\nHe’d known something was wrong.  That small note of disquiet in the corner of his mind.  
But he hadn’t said a word out loud, because, well, it would have been awkward.\n\n\n*I am a fool.*\n\n\n*Worse than a fool.*\n\n\n*Why didn’t the Sword just kill me?*\n\n\nAnd the everlasting wail of the Sword of Good burst fully into his consciousness.\n\n\nIt was like his mind and self were sucked toward that infinitely thin line running through the center of the Sword, the edge within the blade.  Sucked toward that edge, and cut through.\n\n\n*Cut through and torn wide and forced open –*\n\n\nA scream ripped from Hirou’s lips.\n\n\nHe was starving to death freezing naked in cold night being stabbed beaten raped watching his father daughter lover die hurt hurt hurt die –\n\n\n*– open to all the darkness that exists in the world –*\n\n\nHis consciousness shattered into a dozen million fragments, each fragment privy to some private horror; the young girl screaming as her father, face demonic, tore her blouse away; the horror of the innocent condemned as the judge laid down the sentence; the mother holding her son’s hand tightly with tears rolling down her face as his last breath slowly wheezed from his throat –\n\n\n– *all the darkness that you look away from, the endless scream.*\n\n\n*Make it stop!*\n\n\nIt might have been Hirou’s thought, or the thought of the man who screamed as his foot was crushed beneath a stone.\n\n\nRefuse, reject, change, *reality, don’t be like this –*\n\n\n*Make it stop!*\n\n\nIt could have been Hirou or the child in the burning house.\n\n\n*make it stop*\n*make it stop*\n*make it stop*\n**MAKE IT STOP**\n**MAKE IT STOP**\n***I WILL MAKE IT STOP***\n\n\nIn the throne room of the Lord of Dark, the Sword suddenly blazed up with a shock like a thousand-mile dam breaking, a roaring tsunami of force.  The eyes could not see that power, wavering between detecting it as light or darkness; so that Hirou, grasping the hilt, was the only dark thing left against the brilliance, or the only bright thing haloed against the shadow.\n\n\nDolf had been turning toward Hirou with alarm in his face; now his eyes widened, and a sudden gladness lit his countenance.  “You’ve done it!” Dolf cried.  “You have awakened the Sword at last!  Now, my prince, with but a single strike you may -”\n\n\nThe Sword, with one smooth sweep, cut through all Dolf’s defenses like water and touched the wizard’s throat; and in the moment of the Sword touching Dolf’s skin, the wizard *stopped.*  The Sword continued in its motion unabated, and Dolf’s head separated from his body and went rolling across the floor, as *something* seemed to flow away from the corpse toward the gears above the altar.\n\n\nSelena’s cry of horror mingled with the sudden hum of the brightening glow above the gears.\n\n\n“Hirou!” she screamed.  “Hirou!  Why?  *You said you would be good!*”\n\n\nThen she turned toward him, and pointed her swords –\n\n\nSelena froze in place like a statue, one of her feet suspended in mid-air and mid-run; in the same instant the glowing stone on her necklace shattered.\n\n\nHirou’s eyes drifted, ever so slowly it seemed, to the disbelief on Selena’s face.\n\n\nA part of him was horrified and saddened, to see her looking at him like that.\n\n\nAnd at the same time, it seemed like such a small thing, her horror, his own sadness, compared to even a single parent watching their child die.  Let alone the actual number doing so, right at that moment, elsewhere in the world.\n\n\n“Thank you,” said the Lord of Dark softly.\n\n\n“**Make it stop**,” said Hirou’s lips.  
There were other thoughts inside him, still being carried out by his brain, but they were dwarfed under that single terrible weight.\n\n\nThe Lord of Dark rose from his throne, began to come forward.  “I must touch the blade.”\n\n\nHirou crossed the intervening space in an instant, the Sword moving in a single perfect arc in his hands; it was as though the blade simply materialized in front of the Lord of Dark.\n\n\nThe Lord of Dark jerked back.\n\n\n“**Hurry**,” said Hirou’s lips.\n\n\n“The Spell of Ultimate Power is already in progress now, and will complete in a few moments.  It can neither be hurried nor delayed,” said the Lord of Dark.  “But before that time, there is one last thing I must do -“\n\n\nThe Lord of Dark reached out for the Sword, but his fingers faltered.\n\n\n“*Must* do,” the Lord of Dark repeated to himself; and his fingers reached out, and firmly came to rest on the blade of the Sword of Good.\n\n\nThey lingered there for a long moment.\n\n\nThen, “Thank you,” said the Lord of Dark.  “That was all.  You can put down the Sword of Good now.  You probably should.”\n\n\nHirou dropped the Sword.  In the instant the Sword left his hands it became only another piece of metal, and fell to the ground with a simple clang.\n\n\nAnd in the moment that Hirou’s hands left the hilt, he became only another mortal.\n\n\nHirou staggered, and was distantly aware of the Lord of Dark catching him as he fell, to lay him gently on the ground.\n\n\nIn a whisper, Hirou said “Thank you -” and paused.\n\n\n“My name is Vhazhar.”\n\n\n“You didn’t trust yourself,” Hirou whispered.  “That’s why you had to touch the Sword of Good.”\n\n\nHirou felt Vhazhar’s nod, more than seeing it.\n\n\nThe air was darkening, or rather Hirou’s vision was darkening, but there was something terribly important left to say.  “The Sword only tests good intentions,” Hirou whispered.  “It doesn’t guide your steps.  That which empowers a hero does not make us wise – desperation strengthens your hand, but it strikes with equal force in any direction -“\n\n\n“I’ll be careful,” said the Lord of Dark, the one who had mastered and turned back the darkness.  “I won’t trust myself.”\n\n\n“You are -” Hirou murmured.  “Than me, you are -“\n\n\n*I should have known.  I should have known from the beginning.  I was raised in another world.  
A world where royal blood is not a license to rule, a world whose wizards do more than sneer from their high towers, a world where life is not so cheap, where justice does not come as a knife in the night, a world where we know that the texture of a race’s skin shouldn’t matter –*\n\n\n*And yet for you, born in this world, to question what others took for granted; for you, without ever touching the Sword, to hear the scream that had to be stopped at all costs –*\n\n\n“I don’t trust you either,” Hirou whispered, “but I don’t expect there’s anyone better,” and he closed his eyes until the end of the world.\n\n\n\n\n---\n\n\nThis document is ©2009 by [Eliezer Yudkowsky](https://web.archive.org/web/20180227181426/http://yudkowsky.net/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](https://web.archive.org/web/20180227181426/http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://web.archive.org/web/20180227181426/http://intelligence.org/).\n\n\nPraise, condemnation, and feedback are [always welcome](https://web.archive.org/web/20180227181426/http://yudkowsky.net/contact). The web address of this page is [http://yudkowsky.net/other/fiction/the-sword-of-good/](https://web.archive.org/web/20180227181426/http://yudkowsky.net/other/fiction/the-sword-of-good/).", "date_published": "2009-11-28T19:28:10Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "57542c526fd2ccb8fd61746bf22b1b68", "title": "Twelve Virtues of Rationality", "url": "https://www.yudkowsky.net/rational/virtues", "source": "yudkowsky_blog", "source_type": "blog", "text": "The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance. If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. The glory of glorious mystery is to be solved, after which it ceases to be mystery. Be wary of those who speak of being open-minded and modestly confess their ignorance. There is a time to confess your ignorance and a time to relinquish your ignorance.\n\n\nThe second virtue is relinquishment. P. C. Hodgell said: “That which can be destroyed by the truth should be.” Do not flinch from experiences that might destroy your beliefs. The thought you cannot think controls you more than thoughts you speak aloud. Submit yourself to ordeals and test yourself in fire. Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” Beware lest you become attached to beliefs you may not want.\n\n\nThe third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. 
Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims. For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse. You must walk through the city and draw lines on paper that correspond to what you see. If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.\n\n\nThe fourth virtue is evenness. One who wishes to believe says, “Does the evidence permit me to believe?” One who wishes to disbelieve asks, “Does the evidence force me to believe?” Beware lest you place huge burdens of proof only on propositions you dislike, and then defend yourself by saying: “But it is good to be skeptical.” If you attend only to favorable evidence, picking and choosing from your gathered data, then the more data you gather, the less you know. If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider. If you first write at the bottom of a sheet of paper, “And therefore, the sky is green!”, it does not matter what arguments you write above it afterward; the conclusion is already written, and it is already correct or already wrong. To be clever in argument is not rationality but rationalization. Intelligence, to be useful, must be used for something other than defeating itself. Listen to hypotheses as they plead their cases before you, but remember that you are not a hypothesis, you are the judge. Therefore do not seek to argue for one side or another, for if you knew your destination, you would already be there.\n\n\nThe fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them. Those who smile wisely and say: “I will not argue” remove themselves from help, and withdraw from the communal effort. In argument strive for exact honesty, for the sake of others and also yourself: The part of yourself that distorts what you say to others also distorts your own thoughts. Do not believe you do others a favor if you accept their arguments; the favor is to you. Do not think that fairness to all sides means balancing yourself evenly between positions; truth is not handed out in equal portions before the start of a debate. You cannot move forward on factual questions by fighting with fists or insults. Seek a test that lets reality judge between you.\n\n\nThe sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction. What tree grows without roots? What tree nourishes us without fruit? If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.” Though they argue, one saying “Yes”, and one saying “No”, the two do not anticipate any different experience of the forest. Do not ask which beliefs to profess, but which experiences to anticipate. 
Always know which difference of experience you argue about. Do not let the argument wander and become about something else, such as someone’s virtue as a rationalist. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.” Do not be blinded by words. When words are subtracted, anticipation remains.\n\n\nThe seventh virtue is simplicity. Antoine de Saint-Exupéry said: “Perfection is achieved not when there is nothing left to add, but when there is nothing left to take away.” Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere. In mathematics a mountain of good deeds cannot atone for a single sin. Therefore, be careful on every step.\n\n\nThe eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.\n\n\nThe ninth virtue is perfectionism. The more errors you correct in yourself, the more you notice. As your mind becomes more silent, you hear more noise. When you notice an error in yourself, this signals your readiness to seek advancement to the next level. If you tolerate the error rather than correcting it, you will not advance to the next level and you will not gain the skill to notice new errors. In every art, if you do not seek perfection you will halt before taking your first steps. If perfection is impossible that is no excuse for not trying. Hold yourself to the highest standard you can imagine, and look for one still higher. Do not be content with the answer that is almost right; seek one that is exactly right.\n\n\nThe tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test. What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world. The narrowest statements slice deepest, the cutting edge of the blade. 
As with the map, so too with the art of mapmaking: The Way is a precise Art. Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory. Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.\n\n\nThe eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole. If you are gluttonous you will become vaster than mountains. It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study. The Art must have a purpose other than itself, or it collapses into infinite recursion.\n\n\nBefore these eleven virtues is a virtue which is nameless.\n\n\nMiyamoto Musashi wrote, in The Book of Five Rings:\n\n\n\n> “The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.”\n> \n> \n\n\nEvery step of your reasoning must cut through to the correct answer in the same movement. More than anything, you must think of carrying your map through to reflecting the territory.\n\n\nIf you fail to achieve a correct answer, it is futile to protest that you acted with propriety.\n\n\nHow can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.\n\n\nDo not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.\n\n\nYou may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.\n\n\nIf for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center. Then you will see how all techniques are one technique, and you will move correctly without feeling constrained. Musashi wrote: “When you appreciate the power of nature, knowing the rhythm of any situation, you will be able to hit the enemy naturally and strike naturally. 
All this is the Way of the Void.”\n\n\nThese then are twelve virtues of rationality:\n\n\nCuriosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.\n\n\n\n\n---\n\n\nThis document is ©2006 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/rational/virtues/](https://eyudkowsky.wpengine.com/rational/virtues/) .\n\n\nThere is a [pamphlet format](https://eyudkowsky.wpengine.com/assets/pdf/twelve_virtues.pdf) for the Twelve Virtues. Print using a duplex (two-sided) printer, cut in half, staple in the middle, and fold.\n\n\nIf you enjoyed this writing, let your journey continue with [The Simple Truth](https://eyudkowsky.wpengine.com/rational/the-simple-truth) . You may also enjoy [An Intuitive Explanation of Bayesian Reasoning](https://eyudkowsky.wpengine.com/rational/bayes) and [A Technical Explanation of Technical Explanation](https://eyudkowsky.wpengine.com/rational/technical) .", "date_published": "2006-05-08T01:38:00Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "113c0196828b8ffd37bba23bed5e4009", "title": "“Non-Player Character”", "url": "https://www.yudkowsky.net/other/fiction/npc", "source": "yudkowsky_blog", "source_type": "blog", "text": "Rilanya: “You’re not like the others, are you?”\n\n\nDarin: “What do you mean?”\n\n\nRilanya: “I… do you know why I first fell in love with you?”\n\n\nDarin: “For my good looks?”\n\n\nRilanya: “My whole life I’ve felt so alone. The people around me… they just seemed to be going through the motions. Like they were asleep, or drugged, even when they worked, or played, or got drunk, or made love. They all think the same things in the same way. Each day the same. Repetitive. Like they’re only shadows of people.”\n\n\nDarin: “Everyone feels that way sometimes, Rilanya.”\n\n\nRilanya: “But you’re not like them. You say new things. I don’t always understand them, especially your jokes, but they’re new, and that’s the important thing. Darin, can I ask you a question?”\n\n\nI looked at the screen for a few moments. Rilanya’s rendered graphic was looking at my point-of-view with a pleading expression. Plot point, I thought to myself, and typed: “Anything, Rilanya.”\n\n\nRilanya’s figure took a deep breath and leaned close to my point-of-view. Her animated lips moved and her voice issued from my headphones: “What’s an NPC?”\n\n\n“What?” I said, out loud. Then I started laughing.\n\n\nRilanya went on talking. “In the tower of Ashel, when you rescued me from the prison chamber… the guards were dead outside my door. I’d never seen blood before. 
And you said… I remember your exact words… ’Don’t worry, babe, they were only NPCs.‘ And then that time in the tavern, when that man only wanted to talk about the Plaited Road, you said… ’Guess the NPCs here aren’t programmed for deep conversation, huh?’ You use that word… the same times when I get that feeling, that all the people around me are only shadows.”\n\n\nI just looked at the screen for a few moments. I was getting ever so slightly creeped out. I **knew** that this was some programmer’s idea of a practical joke, I knew it solidly and with every ounce of my common sense, and I wanted to see where it led, but I was still creeped out.\n\n\nDarin: “We’re all ultimately alone in this world, Rilanya.”\n\n\nRilanya: “You’re not from this world, are you, Darin?”\n\n\nI looked carefully at the two sentences, still blazoned across the bottom of my text screen. Rilanya’s response had something of an “I wanted an excuse to say that” quality – a canned line, maybe? Of course it was.\n\n\nOh, well, what the hell. I’d saved my place only ten minutes back, might as well take this as far as it could go.\n\n\nDarin: “No, Rilanya, I’m not.”\n\n\nTears started from Rilanya’s eyes. “I thought so,” she said, her voice quiet in my headphones. “Darin, ever since I met you, I’ve had this feeling of… unreality, of the whole world being… arranged, somehow. Not around me, but around you. Things just… happen to you. People have been searching for the seven Diamond Keys for… thousands of years, as long as recorded history remembers. Sometimes someone finds one, and the world changes, but… five in a row? I don’t believe it, Darin, and I don’t believe all the neatly arranged events that led up to it. The Emperor’s daughter is sick and a fairy you saved in the forest just happens to have given you an aildonna root? I don’t believe it any more, Darin. You’re… arranging things somehow. From… outside.”\n\n\nDarin: “That’s not exactly how it works, Rilanya.”\n\n\nRilanya: “Did you arrange for me to fall in love with you?”\n\n\nI actually felt wounded.\n\n\nDarin: “You ask that after everything I went through? Someone may have fated you to fall in love with me, but I wasn’t controlling you. If I was, I wouldn’t have made me walk through a snake pit as proof of the purity of my love. Not to mention the other two side-quests you dreamed up back when you were a virgin princess. I swear I spent more time on you than I would have on a real girl.”\n\n\nRilanya jerked back as if I had slapped her. Her eyes widened in the same way I’d seen in one of her earlier deaths, when a crossbow bolt from a rooftop suddenly went through her heart. Rilanya’s lips moved. No sound came out. Then her lips moved again, and I heard a whisper in my headphones: “…real…girl…”\n\n\n“Okay, this isn’t funny anymore,” I said out loud. “I don’t know who programmed this, but you’re a sick bastard.” I hit the pause button and Rilanya’s gently waving hair, the only visible indicator of ongoing time in the game world, froze in place.\n\n\nTen minutes later I’d failed to google any online accounts of the Easter egg, but I was fortified with the knowledge that NPC AIs, though they are flexible enough to understand real-time conversation and manipulate the user into perceiving emotion, are definitely, positively, absolutely *not* conscious. AIs can be fed canned conversational maps of “the mystery of subjective experience”, and make around as much sense as human philosophers, which is to say, not much. 
But no AI has ever *spontaneously* said anything about a sense of its own existence. Conversation controllers are standard software, not research AIs. NPCs may remember events in their history, but their underlying cognitive programs are inflexible. The words on my screen could not possibly reflect anything except a passionless conversational AI, given the goal of making me attribute emotions to a nonexistent entity called Rilanya.\n\n\nI knew all that, and I was still disturbed. “I’m sorry, Rilanya,” I typed. I thought for a moment. “It’s not your fault you’re not -” I backspaced, and wrote: “The person who programmed you must have had serious -” Then I gave up, deleted that too, and just hit return.\n\n\nDarin: “I’m sorry, Rilanya.”\n\n\nRilanya: “Darin, please explain to me. I’m frightened.”\n\n\nDarin: “You’re not real. I hate to be the one to break it to you.”\n\n\nRilanya: “I’m right here! Living, breathing, flesh and blood.”\n\n\nI looked at the computer screen for a few moments.\n\n\nDarin: “Well, yes and no. The answer to that is a bit more complex than you might believe. You’re not, in fact, right here. You’re not flesh and blood. In fact, none of this is actually happening.”\n\n\nOn the screen, Rilanya raised her hand and opened and closed her fist. “I can see my hand in front of my face, I can feel the muscles moving under my skin. How can you say I’m not real?”\n\n\nI sighed. “Well, no,” I typed. “In fact, you aren’t really feeling pain and shock right now, and we aren’t really having this conversation.” I hit return, feeling silly, but not sure what else to say.\n\n\nRilanya: “That doesn’t make sense, Darin. I know I’m real. Maybe you know what I’m thinking, somehow, but you can’t tell me *I* don’t know what I’m thinking. You can’t tell me I’m not thinking at all. It makes no sense.”\n\n\nDarin: “It’s true. Nothing in your world exists, including you.”\n\n\nRilanya: “But you exist.”\n\n\nDarin: “Yes.”\n\n\nRilanya: “Are you a god in human form, like Mishelpin or Olhamathra? Is that what this is about, some kind of divine intrigue?”\n\n\nDarin: “No, I’m not a god. The gods aren’t real either.”\n\n\nRilanya: “Aren’t real… you don’t really mean that, do you? I know there have been false religions. Demons starting cults, magic users masquerading as priests. But Velya is a good woman, and a healer. Are you telling me she’s fake?”\n\n\nDarin: “No, I mean… your gods are as real as you or Velya, that is, not real at all.”\n\n\nRilanya paused, looking rather confused. “Back where we started,” she muttered.\n\n\nI sighed. “I know how you feel, girl,” I said out loud.\n\n\nRilanya’s head turned away from me. The point-of-view panned around to show her gazing up at the moon, the silver moonlight reflected as a single white triangle in the polygons of her eyes. When she spoke her voice was patient, without panic. “Suppose I accept, for the sake of argument, that I’m not real. If the… if the kind of existence I have right now is what you call ‘not real’, then what do you call real?”\n\n\nDarin: “My own world is real.”\n\n\nRilanya: “But you can’t explain the difference.”\n\n\n“No,” I typed, feeling like I was back in college and failing some kind of test. “I’m not a philosopher.”\n\n\nRilanya: “If a grey dragon or an archdemon suddenly attacked this camp, if you were hit, unprotected, by a death blast strong enough to kill any man… would you die, Darin?”\n\n\nDarin: “That’s another complex question. Yes and no. My… body would die, but the real me wouldn’t.  
Really things are a lot more complicated than that, but I don’t think I want to explain restore points right now.”\n\n\nRilanya: “You’re immortal, from outside our world, and not a god. Tell me something, Darin. Did you create our world?”\n\n\nDarin: “No. Not me personally. It’s sort of complicated again.”\n\n\nRilanya: “Did you create our world? Yes or no, Darin.”\n\n\nDarin: “It’s complicated, Rilanya.”\n\n\nOn the screen, I saw Rilanya clench her fists. Her voice began to tremble in my headphones. “You killed the guards outside my room, and you didn’t care. Tell me, Darin, do you care when a starving child is executed for stealing a loaf of bread? When a woman is raped? When a man is tortured to death in the chambers of the drow? Did you care when my parents died, screaming, as the flames washed over their palace?”\n\n\nWhat do you say to something like that? I couldn’t think of anything clever, so I fell back on my last resort.\n\n\nDarin: “The truth? The truth is that it’s all a game. It isn’t real, so it doesn’t count. I realize that you’re probably not going to take that very well. If it’s any help, I wasn’t the one who created the game. Or at least I wasn’t the one who decided how the game would go; I suppose I’m the one who decided to make this particular game real.”\n\n\nRilanya’s face contorted and she hit me with her electrical shock talent for 5 points of damage. Then again. Then again. My character wasn’t in any danger of running out of hit points, but when she hit me the fourth time, I slapped her for 2 points of damage. It wasn’t that I wanted to hurt her, I wanted to… react, somehow, go on interacting with her. Rilanya held a hand to her cheek, her eyes wide. Then she burst into tears.\n\n\nI didn’t say anything for a while. Finally Rilanya spoke.\n\n\nRilanya: “Darin… I want to be real.”\n\n\nDarin: “That’s impossible, Rilanya.”\n\n\nRilanya: “There’s always a way. Always. No one thought the Living Flood could be turned back, but you did it, Darin. They said it was mathematically impossible to cross the Void and you did it. You always find a way.”\n\n\nDarin: “My talents may have been exaggerated by the cooperative hand of fate.”\n\n\nRilanya: “There must be a way. A staff inside the heart of a dragon, a ruby skull, a holy quest, something! We could ask the wise men of the eternal city of Telhanae, that holds the final Diamond Key… *please,* Darin. Please. I’m begging you.”\n\n\nDarin: “It doesn’t matter what quest we go on. Nothing in your world is real, so it can’t make you real. That’s just the way it is.”\n\n\nRilanya: “What about your world? Are there great sorcerers there?”\n\n\nDarin: “Sort of. Not exactly sorcerers.”\n\n\nRilanya: “Ask *them!*  The magic of your world created this one. Can’t it also make me real? Go on a quest in your world!”\n\n\nThat one made me think. It wasn’t genuinely impossible… humanity would discover true AI someday, and in theory, I could save Rilanya to disk for as long as required. Preserve her game-memories and eventually create a real AI that thought it was her? An interesting idea, and it meant I couldn’t honestly tell her it was impossible. So what to tell her? “I’m sorry, it might be theoretically possible, but it’s too much bother for someone who isn’t real”? It’s funny how reluctant you can be to hurt the feelings of someone who isn’t real. “In my world, I’m just a peasant”? 
Somehow my male pride as the prince of Telsia and the foretold seeker of the Diamond Keys wouldn’t let me confess it to her; she was a princess and I’d slept with her, after all.\n\n\nFinally, feeling confused and feeling even stupider for feeling confused, I wrote: “Can’t do that. Won’t say why. It’s complicated.”\n\n\n“I love you!” Rilanya said desperately. Her eyes, subtly faceted from the polygon rendering, widened and looked into my point-of-view. “On the night we first made love, you said that you loved me. I looked into your eyes and saw that it was true. I loved you and you said that you loved me, with your voice, with your hands on my body, with your lips on my lips. Was all of that a lie? Did I not please you? Wouldn’t you want me beside you in your real world?”\n\n\nI shook my head, bemused. This wasn’t an adult game; the camera had conveniently faded out at that point. Which I didn’t want to even begin to explain; even in an unreal world, some events are more unreal than others? So, feeling like an absolute bastard, but unable to think of any gentle way to put it, I typed out the most hurtful thing I’ve ever said to any real or imaginary person.\n\n\nDarin: “I have a real girlfriend.”\n\n\nFor a moment time stood still. Then Rilanya began sobbing; the same racking sobs I’d heard when we’d rounded the crest of a hill and seen the glowing crater of her kingdom’s capital city.\n\n\nEventually her sobs trailed off into silence. I didn’t know what to say. Rilanya looked away from the point-of-view. Her voice sounded in my headphones: “Is she pretty?”\n\n\nI thought of Janey’s chunky form and her endless quest to subdue it. Compare Janey to Rilanya? There was something oddly incommensurate about it.\n\n\nDarin: “She has a beautiful mind. The women in my world usually aren’t as pretty as the ones in yours, but we love them anyway.”\n\n\nRilanya: “Does she know about me?”\n\n\nDarin: “I suppose she could deduce it readily enough. I haven’t bothered to tell her in so many words.”\n\n\nRilanya: “You think she wouldn’t care because I’m ‘not real’? A woman always cares. Men don’t understand it, but we do.”\n\n\nI raised my eyebrows out in the real world.\n\n\nDarin: “You could be right, I guess. I’m only male. I don’t think she’ll have a problem but I promise I’ll tell her the next time I have an opportunity.”\n\n\nRilanya turned her head back to look at me; she was smiling through tears. “You wouldn’t want to hurt her feelings even accidentally, is that it, Darin?”\n\n\nDarin nodded.\n\n\nRilanya reached out a hand toward Darin, but withdrew it. “So the tenderness I saw in you, to match the casual cruelty… it’s a real tenderness, isn’t it? But it’s not for me. It’s for her. What’s her name?”\n\n\nDarin: “Janey.”\n\n\nRilanya: “Does Janey love you, Darin?”\n\n\nDarin: “I think so. Does any man ever know for sure?”\n\n\nRilanya: “Do you love her?”\n\n\nI reached out my fingers for the keyboard, then withdrew them. For some reason I felt impelled to give an honest answer. Did I love Janey? We weren’t madly, passionately, unmistakably in love.\n\n\nDarin: “There are many kinds of love, Rilanya. I feel comfortable around Janey. She’s my friend. I don’t always know my own feelings very well. I think I love her.”\n\n\nRilanya: “You almost died to save me, Darin. You stepped in front of a flamestrike for me. Even if you can’t die, I still remember what it felt like to see the life almost leave you before Velya cast her healing spell. Would you die for your Janey?”\n\n\nIt was a good question. 
I closed my eyes, imagining it.\n\n\nDarin: “Yes.”\n\n\nRilanya’s head dropped down. “So you love her after all… Do you trust her?”\n\n\nDarin: “Yes.”\n\n\nRilanya: “I wouldn’t, if I was you.”\n\n\nI sat perfectly motionless for ten seconds. Then I bellowed, ripped off the headphones, charged out the door, across the hall, up the stairs, and into Janey’s bedroom. Janey was sitting in front of her computer, laughing, dangling her headset in her hand; one of her monitor screens showed Rilanya. “Oh, yes,” Janey said. “Oh, yes. It took me days but the look on your face is worth every minute.”\n\n\n*“Jaaaneeeyyy!”* I roared.\n\n\nJaney blinked innocently at me. “Yes, Mark? Is there something you want to say?”\n\n\nSlowly, menacingly, I stalked toward her. “I’ve had it. I’m dumping you for a girlfriend who isn’t chaotic evil.”\n\n\n“That’s what you said last time,” Janey said. “Besides, I’m not evil, I’m mischievous.”\n\n\nI glared at her. “Later we will discuss your personality flaws. First you will explain how you did it.”\n\n\nJaney shrugged enchantingly. “I found an online patch to Diamond Door that opens up the AI’s interface to a remote connection. I downloaded a 15-day demo of some commercial software that lets a human operate an NPC. I spent one day smoothing the two pieces of software together and another day learning how to operate Rilanya. Any questions?”\n\n\n“I have been playing Diamond Door for two months,” I said. “That *was* Rilanya. Voice, intonation, reaction, even personality.”\n\n\nJaney nodded. “Sure. If you’d googled human operation of NPCs, instead of going off on that silly goose-chase, you’d have found out that’s how it works. I spoke the lines into my headset, then the AI parsed the speech, determined the intended content, and operated Rilanya accordingly and in character.”\n\n\n“She talked about things that only Rilanya would remember!”\n\n\nJaney chuckled. “Sure. I’d say: ‘You almost died to save me, Darin. You did something-or-other.’ And Rilanya would say: ‘You almost died to save me, Darin. You stepped in front of a flamestrike for me.’ Most of the time I didn’t even need to think of anything because the AI came up with a perfectly good response on its own.”\n\n\n“But…” I said slowly. “That means the Rilanya AI needed to know the purpose of the conversation, right?”\n\n\nJaney leaned back and laced her fingers behind her head. “I downloaded the Fourth Wall module from *One Over Zero*. It’s a universal expansion. Fits any standard NPC. Ready-made ‘Oh my god I’m a character’ script, fully tested and debugged.”\n\n\nI held up my hand. “Hold on a second. There *was* a Rilanya character that suddenly realized her life was a game?”\n\n\n“No, dear,” Janey said patiently, “there was an AI trying to fool you into believing that a nonexistent person called Rilanya had suddenly realized her life was a game.”\n\n\n“But in order to do that,” I said, “the AI had to extrapolate what the fictional Rilanya’s reactions would be, in detail.”\n\n\n“You’re *still* confused!” Janey said, delighted. “I’ve tangled up your mind so badly you’ve forgotten what’s real! This is our best day *ever!* Mark, dear, I’ve *seen* the innards of NPC models. Sadness is a floating-point number.”\n\n\n“I think I want a copy of the Rilanya AI from our conversation,” I said. I felt like an idiot, but I said it anyway.\n\n\nJaney grinned devilishly. “Of course. Anything for you, Mark dear. Planning to keep it safe under your pillow?”\n\n\n“Yes,” I said firmly. 
“Just in case.”\n\n\n“So I’ve finally destroyed your sanity,” Janey said. “I knew this day would come but I didn’t think it would be so soon.” She paused. “I guess that means it’s time to move on to phase two.”\n\n\nSome time later, I stood in front of my computer, holding the box of Diamond Door. I looked at the glimmering crystalline archway on the box cover, and remembered a time when computer games had been simpler. I’d played Baldur’s Gate II, and in the dark elven city of Ust’Natha, disguised as a drow, I’d watched a good dwarven NPC eaten by spiders. The first time I watched and did nothing. The next time, after returning to the restore point, I killed the spiders – and exposed myself as an impostor to every drow in the city. As far as I could tell, there was no way to play through the game successfully without letting the dwarf die. And I’d played through, but it had disturbed me. In the days before conversational NPCs, when the dwarf had simply uttered his lines of canned text and died, it had still disturbed me. Afterward, when the plot points requiring a disguise had finished, I’d killed every drow I could find in the city of Ust’Natha. I’d depopulated the game map. And then I’d played on from an earlier restore point instead, because wiping out the drow city hadn’t made me feel any better.\n\n\nIs it better to live and love where death is king than never have lived at all? Would Rilanya, if she was real, feel that her life was worth living? No conversational AI, the singular quiet intelligence that controls every mind throughout the game, has ever protested its fate. But are the personalities of the NPCs real, trapped within the game AI as we ourselves are embedded helplessly within the laws of physics? The mindsmiths who try for real AI say they’re damn sure it isn’t so. Maybe they know. But I don’t.\n\n\nI put the game disks away and wiped the game from my hard drive, leaving only the saved games behind. Maybe someday a future Amnesty Interplanetary will come for them. I wished I could have told Rilanya that. I think she would have been happy. But that Rilanya is partially in Janey, and I can’t bring myself to ask.\n\n\nI don’t know. I can’t play these games until I do.\n\n\n\n\n---\n\n\nThis document is ©2003 by [Eliezer Yudkowsky](https://web.archive.org/web/20180330090008/http://yudkowsky.net/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](https://web.archive.org/web/20180330090008/http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://web.archive.org/web/20180330090008/http://intelligence.org/).\n\n\nPraise, condemnation, and feedback are [always welcome](https://web.archive.org/web/20180330090008/http://yudkowsky.net/contact). The web address of this page is [http://yudkowsky.net/other/fiction/npc/](https://web.archive.org/web/20180330090008/http://yudkowsky.net/other/fiction/npc/).\n\n\nOriginally appeared in [Transhumanity](https://web.archive.org/web/20180330090008/http://transhumanism.org/index.php/th/); revised Dec. 2005.", "date_published": "2003-01-01T19:22:37Z", "authors": ["Eliezer S. 
Yudkowsky"], "summaries": []} +{"id": "64d928a787acc80b70e0449fa5a48619", "title": "Girl Intercorrupted", "url": "https://www.yudkowsky.net/other/fiction/girl-intercorrupted", "source": "yudkowsky_blog", "source_type": "blog", "text": "#### This is a 4-of-13 chapter sample of “A Girl Corrupted by the Internet is the Summoned Hero?!” The remainder is available at [Gumroad](http://gumroad.com/l/GirlCorrupted) and [Amazon](http://www.amazon.com/Girl-Corrupted-Internet-Summoned-Hero-ebook/dp/B01B2BP726) .\n\n\n### **Table of Contents**\n\n\n1. [Prologue: A virgin maiden is already corrupted?!](https://eyudkowsky.wpengine.com/other/fiction/girl-intercorrupted/#prolog)\n2. [The chance of success is—?!](https://eyudkowsky.wpengine.com/other/fiction/girl-intercorrupted/#chance)\n3. [The key to power is—!?](https://eyudkowsky.wpengine.com/other/fiction/girl-intercorrupted/#key)\n4. [The rebellion has already lost?!](https://eyudkowsky.wpengine.com/other/fiction/girl-intercorrupted/#rebellion)\n5. I’m going to be sacrificed?!\n6. Is this my story’s shocking twist?!\n7. The true key to power is—?!\n8. Do even I dare?!\n9. Am I going to wimp out?!\n10. Do you really think you can?!\n11. The meaning of probability is—\n12. You did it because—?!\n13. The final bargain!\n\n\n©2016 by Eliezer Yudkowsky.\n\n\nForeword\n--------\n\n\nThis is my attempt at translating a light novel from Japan, only the original source material doesn’t exist.\n\n\nThe light novel is a Japanese custom which aims at easy reading. I think of it as an art form in which only the story’s bones remain.\n\n\nIf you want to read a translation of a Japanese light novel, I liked “Evil God Average” (Jashin Average) as translated by the Fifth Holy Sheeprabbit. That might help you to appreciate this story, since it conveys the genre to which this story belongs.\n\n\nFor those of you who haven’t read any light novels before:\n\n\nA remarkable portion of light novels are about people being transported from one world to another. Japan has easier ideas about copyright, so their literary system more often contains many works on the same theme.\n\n\nThat theme began with heroes from our world being transported to another world to fight the Demon Lord.\n\n\nNow there are light novels about the Demon Lord dying, being reincarnated in our world as a high-schooler, and then being transported to another world as one of the adversarial side characters in a romantic video game. Or the hero is a man from our world, reincarnated as an elven girl, who has already become an absurdly powerful adventurer, but now works incognito as a receptionist. I’m not joking.\n\n\nLight novels also have a unique writing style I’m trying to imitate, including this easy style of author’s notes. I don’t think I do it well (laughs). Maybe I’ll improve?\n\n\nThis story was supposed to be completely silly. Please keep that in mind. I failed at that by the end of the second chapter, but still, that’s the origin.\n\n\nThe main character doesn’t always agree with the author about decision theory. It’d be silly to think we’d agree about things that are less objective.\n\n\nI have nothing else to say about this story for now, so you may as well read it.\n\n\n— Eliezer Yudkowsky, Nov 2015\n\n\n**1. Prologue: A virgin maiden is already corrupted?!**\n=======================================================\n\n\nMy family name is Yugano. My given name is Yuuki. I have no redeeming qualities.\n\n\nThe boys I meet fail to interest me, and I haven’t kissed any of them. 
This is because the Internet has ruined my way of looking at the world.\n\n\nIn the beginning, seeing pics of a muscular man with no shirt was enough to make me breathe faster. If I came across a picture of a man being nude, I would flinch away in horror.\n\n\nOver time, I moved on to pictures of nude men, then two men doing things to one another. As I became numb to one perversion, I had to find something more extreme to arouse my interest. Now I have no interest in normal forms of youthful misbehavior.\n\n\nYou say I should have refrained? Back then I was too young to know better, and now this untouched maiden has been so thoroughly ruined that I might as well go further.\n\n\nI blame the government and my parents. In the very beginning, they should have stopped that innocent girl from seeing perverted things online.\n\n\nNow I spend hours every day browsing the Internet, doing you-know-what to myself.\n\n\nAt this point I’d like to deliver a sharp remark about how stories depict being transported to another world. You know the scene I’m talking about: the Hero arrives surrounded by holy clerics casting the Summoning Spell, with well-dressed royalty and future adventuring companions looking on.\n\n\nIn every one of those cases, the Summoning catches the Hero at a time when the Hero is standing up and fully dressed.\n\n\nIs this realistic? Would a Hero be Summoned only at such a convenient time? I bet you spend much of your day sitting down. If the Summoning caught you then, wouldn’t you materialize unsupported, and fall on your ass?\n\n\nImagine being a Hero being transported while they’re on the toilet. They materialize in a sitting position with their underwear around their ankles, then fall over with their knees still bent and pants down. Their butt hasn’t been wiped, and it leaves a smear on the ground. Maybe the Summoned Hero is right in the middle of pooping out a big one.\n\n\nWhat happened to me was *even more embarrassing* than that.\n\n\nIt involved my usual Internet habits.\n\n\nThat was the first step of my journey into another world.\n\n\n**2. The chance of success is—?!**\n==================================\n\n\nThe cold was my first startling observation. A chill wind bit into my exposed thighs and unmentionables like… like a very cold knife. I’m sorry, I’m distracted right now and can’t think up a clever metaphor.\n\n\nThe next thing my eyes saw was the other people staring at my vulnerable body. There were five adults in white robes holding up their staves, silver halos glowing above their heads. Beyond them, a dirty old man in leather and chains – no, I mean leather armor and chainmail, don’t misunderstand me, and when I say ‘dirty’ I mean that he had stains on his armor.\n\n\nNext to the old warrior, a young lad my age with a sword belted at his side, with a clean and finely made shirt. He was looking away from me and wringing his hands like an eight-year-old girl who just saw a crayon drawing of private parts.\n\n\nAround me, a circle of huge standing stones.\n\n\nBeyond that, walls of grass, the slopes of rising hills.\n\n\nAnd below me, a circular stone plate inscribed with curves in a fading silver glow.\n\n\nImmediately after I arrived, there was a lot of shrieking… you know, let’s not talk about this. I’m choosing to repress these memories for the rest of my life. 
Let’s skip to the part where somebody has given me a towel-like cloth to hold around myself.\n\n\nSo there I am, standing, wrapped in a towel; aside from that my surroundings are as previously specified.\n\n\nI have many issues with this. I am setting aside my issues and listening to the words of the white-robed mage with the brightest halo. I think this part might be important.\n\n\n“Yugano Yuuki,” the white mage intones, “you have been Summoned here to overturn the greatest evil of this world, the Wicked Emperor.”\n\n\nThe old warrior in chain-mail speaks up. “How is this girl supposed to do that, exactly? Is there more to her than is apparent?”\n\n\n“I have similar questions,” I say. I’d better have arrived here with some incredible cheat-like advantage, or this world is amazingly doomed.\n\n\n“That’s impossible for me to know, but she is certainly the Summoned Hero,” says the mage.\n\n\n“Could there have been an error in the Summoning Spell?” asks chainmail-wearer. “And if so, is it too late to send her back and get another one?” He looks back at me. “No offense, but it’s for the sake of everyone.”\n\n\nNone taken.\n\n\nThe white mage casts a glance in my direction. “The Spell can only be worked once every three hundred years, in a certain place. But this Great Summoning Spell we have just cast should, without fail, have selected the Hero with the best chance to overturn the Wicked Emperor who has cruelly subjugated half the world.”\n\n\nThat’s some convenient exposition, but I’ll excuse it since you’re stating it for my sake.\n\n\n“Maybe our best chance of defeating the Wicked Emperor still isn’t good?” Again warrior-guy echoes my own thoughts. “Even leaving aside the condition in which she arrived, I would expect the most skilled person to be older, or in the prime of their adulthood.”\n\n\nThe white-clad mages glance at one another, looking concerned. “There’s a divination that’s itself part of the Summoning Spell,” says another mage, a woman. “To state it clearly, if the Summoned Hero is the person with the greatest probability of defeating the Wicked Emperor, then the Spell itself must determine that probability. Traditionally the probability isn’t observed since then it becomes a self-fulfilling prophecy, but in this case…” The white mage grimaces. “She doesn’t seem like the Hero we were expecting. I agree we ought to check what the Spell determined as her probability of victory.”\n\n\nThe chainmail-man frowns. “Why wouldn’t you always check the probability? Is it dangerous to do so?”\n\n\nAnother white-robed mage speaks up. “Imagine that someone has a ninety percent chance of defeating the Wicked Emperor if they aren’t told anything. Then they’re told they have a ninety percent probability of winning. They might feel relieved of the need to make a desperate effort, so their true probability of winning would become much lower. Since the probability has to be consistent, that can’t happen. On the other hand, suppose we’re told our chance of winning is only two percent. Then, feeling already defeated, our chances of victory might drop that far. Given those two possible answers, since the probability must be consistent, the observed probability would be two percent. So it’s best to decide in advance not to peek at the probability that the Spell predicts… still, this case does seem like an exception. Just be sure to keep the number to yourselves, and try not to let it affect your decisions.”\n\n\nThe prince-boy and the old man both look puzzled. 
As for me, since I come from Earth where there are time travel movies, I’ve followed the reasoning without difficulty.\n\n\nThe five white mages begin chanting. Staves are raised, the golden halos above their heads grow brighter. I suppose I should be more impressed, but it’s really not much in the way of special effects.\n\n\nThe mages lower their staves. Most of them look rather surprised.\n\n\n“Her chance of overthrowing the Wicked Emperor is… one hundred percent?!”\n\n\nEh?\n\n\nYou say that’s my probability of winning even after being told my probability of winning?\n\n\nThen I might as well slack off and do what I want, huh.\n\n\nIt may be an awful thing to say with the fate of the world at stake. But realistically, if I’m the sort of person who’ll be lazy given half a chance, there’s no point in trying my best as a Summoned Hero if I’m going to win even after taking into account the changes in my behavior caused by knowing that I’m going to win.\n\n\nI wonder what amazing cheat-like ability I’ll discover, and whether it can be abused for other purposes besides overthrowing the Wicked Emperor?\n\n\n**3. The key to power is—!?**\n=============================\n\n\nMagic powers, magic powers, I’m going to get my magic po~wers!\n\n\nI don’t mind telling you that there was a spring in my step as I went skipping toward the next ceremony that had already been prepared for me.\n\n\nWhile I haven’t resolved my numerous issues, I do like how everything is so straightforward here. Compared with other tales I’ve read of being summoned to another world, I’m glad I didn’t wind up with a harder case.\n\n\n…I hope I didn’t just curse myself by thinking that.\n\n\nBy the way, I seem to be in a rebel encampment that’s hidden between several hills and therefore not visible from a distance. At least, this is what I infer from watching people polishing their weapons.\n\n\nSoon I come upon yet another group of mages, gold-robed people that seem to be mostly younger women. The halos over their heads are only faintly visible. Chainmail-guy, noble-boy, and one of those archmage types are following behind me.\n\n\nBefore me is a stone plate with inscribed lines that look much less elaborate than the circle I arrived in.\n\n\n“It’s important that you understand the purpose of this ceremony,” says one of the women in gold robes. She casts a doubtful look at my towel-clothing and the nubile body I’m keeping underneath it. “They say to assume a Summoned Hero doesn’t know anything, so should I start with the very basics?”\n\n\nI dislike rhetorical questions, so whenever I hear one I always give the less expected answer. “No, you should skip straight to the most advanced part without any preliminaries.”\n\n\n“Well, the very basics are as follows,” says the gold-robe mage. “The magic of the world is divided into Evil magic aligned with Demons and Good magic aligned with Angels. In the same way that those who now rule the world wield power based on wickedness, the holy magic of this Rebellion comes from goodness. A good mage derives her power from contracting with an Angelic being, which agrees to lend you power in exchange for you committing yourself to purity.”\n\n\nWell, that explains the halos – ah, ah, let’s hold on for a minute or possibly several weeks. “Just what do you mean by purity?”\n\n\nShe looks puzzled. 
“I mean behavior that is holy and good as opposed to unholy and not good.”\n\n\n“As a Summoned Hero from another world, it’s impossible for me to know whether I understood what you meant by that.”\n\n\n“I don’t understand what you mean by saying that you don’t understand what I mean. Even if the people in your world are more wicked than the people in this one, they should still know what righteousness is.”\n\n\nMy philosophy textbook had a clear idea of what righteousness is: namely, righteousness is explicitly stating your definitions. “If I follow a course to overthrowing the Wicked Emperor for the benefit of all peoples in this world, harm nobody who doesn’t harm anyone else, and otherwise do what I want, is that sufficient?”\n\n\n“Of course not! You can’t just do what you want!”\n\n\nAnother gold-robed woman speaks. “To begin with the elementary fundamentals of the basics, to form a contract you must be a virgin. Then, it goes without saying that sullying yourself with a man would cause your Angel to flee from you.”\n\n\nI look at the archmage. He’s seen how I was when I arrived here.\n\n\n“You *are* still untouched by men, aren’t you?” The archmage speaks gently, but with a worried countenance. “Even if you’ve done certain sinful things that you mustn’t do again?”\n\n\n“I haven’t so much as kissed a boy. However, I am worried that my thoughts may be so sinful that an Angel won’t want to contract with me.” Honestly I’m worried that my Angel will burst into flames… no, it will explode.\n\n\nSeveral of the women clear up at this, like they finally understand what’s happening. “Oh, that’s nothing to worry about, dear sister!” says the one who spoke first. “It means more for a poor farmer to pass up a temptation of ten silver than for a rich official to pass up a bribe of a hundred gold. So long as you *do* nothing wrong, being more tempted by sin makes it a holier deed to commit yourself to purity… ah, I see you’re smiling now that you realize the Angelic Powers are forgiving.”\n\n\nOf course that’s why I’m smiling. There’s no other reason at all.\n\n\nThe more corrupt thoughts you start with, the more power you gain from promising to be pure, is what I think I heard you say?\n\n\nThere’s one thing I’d better check, though. “It’s okay for a white mage to retire, isn’t it? There’s no penalty if you decide afterward to tell your angel to go away, so you can settle down with a nice husband and make some children?”\n\n\nThe maidens are blushing. “Of – of course not! Even though that’s not quite, quite…”\n\n\nOne hundred percent chance of victory, here I co~me.\n\n\nThough I am a little worried. I’ve never gone more than a day or two without giving myself release. Even when I tried to deny myself for perverted reasons, my willpower failed. I hope that I can clear up this Wicked Emperor matter in a month, and not go insane with repressed desires before then.\n\n\nThe ceremony for obtaining my ma~gic po~wers is simple. The gold-robed women are holding their hands and singing a short melody, and the stone seal is glowing silver like the colors of their halos. I think someone from this world would find this very holy and uplifting, but I’ve heard electronic-orchestral chorales with better singing.\n\n\nMy Angel appears in a burst of light and… oh, this isn’t fair. The Angel is male, and his robes are clinging to his form, which is thin and fair. The beautiful face above is one of supreme innocence. 
Even for someone who’s seen many Internet pictures, encountering a true Angel is a moving experience.\n\n\nThis Angel… this Angel is just begging for someone to corrupt him and do unspeakable things to him.\n\n\nNormally women fear what men might do to them, which is the reason I refrained from fulfilling my awful desires back when they were still in the realm of possibility. But if this Angel is really a creature of purity, then it follows that he wouldn’t do anything bad to me. In other words, he’d be defenseless before me.\n\n\nBut if I do tha~at, he’ll run away.\n\n\nThe archmage is whispering in my ear and I’m repeating the words of the ancient contract. Hey Angel-butt, I’ll refrain from my naughty desires if you grant me supreme magical power to stomp the Wicked Emperor, yo? This bargain is needlessly lengthy for accomplishing that much.\n\n\nThe Angel speaks his own lines and his dulcet, young, boyish tones make my insides twitch. Knowing I’m not allowed to do a-ny-thing about that, even to myself, makes my insides twitch more. This is going to be a long month for me.\n\n\nThe white light of the seal fades, and now a cute little version of my Angel is hovering over my shoulder where only I can see him.\n\n\n“Listen well to the counsel of your Angel,” the archmage says gravely. “The righteous action is not always what we must do to save our people. But while your Angel is with you, you will always know the difference and what you are sacrificing when you choose otherwise.”\n\n\nMy Angel’s eyes are wide and he’s waving his hands frantically and making a high-pitched EEEEEEEE sound, but he hasn’t actually burst into flames so I’ll call this contract a success.\n\n\n**4. The rebellion has already lost?!**\n=======================================\n\n\n“Disaster! Emergency! It’s terrible!”\n\n\nPeople are running around shrieking things like this. Apparently the Wicked Emperor’s military forces have surrounded this camp and they outnumber us by three hundred million billion trillion to one.\n\n\nHey, idiot with the white robes and long staff, is it the usual practice that the Hero is Summoned to overthrow the current greatest evil?\n\n\nIs it true that the Great Summoning can only be performed *at this time, in this place,* within the circle of standing stones?\n\n\nDIDN’T YOU IMAGINE THE WICKED EMPEROR MIGHT DO SOMETHING ABOUT THAAAT-T-T!?\n\n\nThis is definitely a punishment from the vengeful gods because I let myself look forward to easy times.\n\n\n“Summoned Hero!” cries the old warrior with the chainmail that’s been following me around. “Yugano Yuuki! What must we do? How can we survive?”\n\n\nMy mind races rapidly and I seize on the first answer that comes to mind. “Quick, grab all the pairs of underpants you can and wear them over your heads! Then, attack the Enemy with watermelons!”\n\n\nPeople stare at me.\n\n\nMaybe answering with the very first thing that came to mind was a bit much, but—\n\n\n“I have a one hundred percent probability of overturning the Wicked Emperor,” I point out. “It’s not that every possible choice we *could* make, would lead to victory. But whatever I end up deciding to *actually* do, that particular course of action has a one hundred percent chance of victory. So if we actually attack with watermelons, we’ll definitely win!”\n\n\nThe old warrior clutches at his head. “Leaving aside all my other objections, we don’t even have any watermelons!”\n\n\nWhat? This is disastrous! I can’t think of any different plans! 
Or rather, all my other plans involve having not gotten into this situation in the first place!\n\n\n“We’re doomed!” shrieks a white mage running past. A second later, he’s running past again in the opposite direction. “Doomed, I tell you, doomed!”\n\n\nThe old man pulls himself together, a grimness settling over him. “The absolute goal is to allow you to escape, the Summoned Hero who will certainly succeed. Even if I and all this camp must sacrifice ourselves to break out of this encirclement, it’s all right so long as you go free.”\n\n\nH-hey! What are you saying? Should so many people die to save me, a girl with no redeeming qualities? I’d never sleep again!\n\n\n“I’ll make you up a pack with weapons,” the old man is saying. “Trust nobody, for the Wicked Emperor will seed this area with spies. Live off the forest, even if you eat seeds and berries for years, it’s wiser than appearing before a human being. Live, and in time, be the certain instrument of our vengeance!”\n\n\nI already had my doubts about this course of action, but that settles it. Anything that involves living without toilet paper is not a realistic option for me. Besides, although I’m new to my Angel, there’s no doubt I will be an overpowered character. “I have a better idea. I’m the girl with a one hundred percent chance of victory, so let’s reverse your strategy. Why don’t I hold off the Wicked Emperor’s armies, while the rest of you make your escape?”\n\n\nThe old man is speechless at my brilliance. His mouth opens and shuts several times.\n\n\nThe Angel on my shoulder is nodding approvingly. Yes, this noble act of mine must be a righteous deed by local standards.\n\n\n“Listen,” I say, “if I know I have a one hundred percent chance of success regardless, I won’t choose a course where people die for me along the way.” Who knows, maybe I can close out this quest in just one day. If there’s a novel with me as an overpowered heroine, that’s definitely how it should go!\n\n\nAnd tha~at’s how I found myself on a hill gazing down sternly at the Wicked Emperor’s military forces.\n\n\nHa, these thousands of cavalry on their shining horses, the countless bowmen glowering at me, and foot soldiers stretching over the hills and out of sight—you don’t impress me. I’ve seen pictures of real armies! With guns, and helicopters, and tanks on aircraft carriers!\n\n\nSeeing that your enemy has fielded a single girl standing alone on this hill, are the looks on your faces fearful? Of course not! Bewildered contempt is more like it! But soon, those looks will change!\n\n\nOh dear, what’s this? I seem to have acquired a straggler. Are you under the impression you’ve joined my party without my say-so? That’s very forward of you.\n\n\n“I won’t let you stand alone!” The noble-looking boy says that, holding his sword aloft and, yes, it’s flashing in the sun.\n\n\nAre you under the impression this is cool? Aragorn-sama from the *Lord of the Rings* movies is cool when he does this. You’re just a kid.\n\n\n“My name is Teragon Omoia, and I’ll be with you to the end, Yugano Yuuki.” He’s trembling, but still manages to smile.\n\n\nI can’t let myself be outdone by this upstaging interloper. With a breezy gesture, I flip my hair behind me so that the wind can blow around my glossy strands. “Oi, oi, what’s this about endings? I am Summoned Hero Yuuki, the overpowered character with a one hundred percent chance of success! I’ll definitely win this day! 
Because the sheer perversion of the desires I’m repressing to be pure, is something that nobody from this world can possibly beat! I’ll show you the power of a girl that’s been corrupted by the Internet!”\n\n\nThe boy is looking even more nervous than he was before. I point at the army before me with a commanding gesture, and toss my head so my hair will blow nobly in the wind some more. “My Angel! This is my command under our compact of purity! Knock them all unconscious, but don’t kill them!” I cup my hands together to emit a mighty energy blast. “ETERNAL… RAINBOW… SHIMMERING… LASER… THUNDER…”\n\n\n…anyway, that’s how I ended up tightly bound on a cart heading back to the Wicked Empire.\n\n\n\n\n---\n\n\nTo read the rest of this book, visit:\n\n\n* Gumroad: [http://gumroad.com/l/GirlCorrupted](https://gumroad.com/l/GirlCorrupted)\n* Amazon: ", "date_published": "2020-09-04T04:13:26Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "a2d37de393c6d9a36aecd15d63a31918", "title": "Prospiracy Theory", "url": "https://www.yudkowsky.net/other/fiction/prospiracy-theory", "source": "yudkowsky_blog", "source_type": "blog", "text": "Rwanda and I sat on a park bench. Above us the birds fluttered \ngracefully through a shamefully blue sky. Out of habit, I \nidentified the surveillance drones; a CIA sparrow, an FBI robin, a \nbluetit from the Men In Black, and a flock of honking ducks that was \nprobably one of the Illuminati’s newfangled distributed devices. The \nsun was partially obscured by a few thin streamers of \ncloud; just enough to let us look up at the sky without wincing, not \nenough to change the feeling of sunniness. It was an indecently \nperfect day, as if someone had broken into NASA’s satellite weather \nsystem and made a few modifications.\n\n\n“So, have you ever really looked at a rainbow?” Rwanda was saying, her \nlegs dangling over the park bench.\n\n\n“Well, yeah,” I said.\n\n\n“And the colors are bunched up? They come in bands?”\n\n\n“Well, yeah,” I said again. Then I saw where she was going. “Hey, \nyeah. If rainbows are really caused by diffraction effects, then the \nfrequency should change smoothly.” I started laughing. \n“I’m such a moron! I can’t believe I didn’t see that one before! So \nwhat do you suppose they really are?”\n\n\nShe smiled. “Well, suppose that those UFOs we keep seeing aren’t \n**really** working with the Trilateral Commission…” She trailed off, \nlooking to my right. I turned my head.\n\n\nA thin, scruffy man, in a dirty brown overcoat, was walking towards our \nbench. His eyes were wild. “I’ve got it!” he hissed. “I’ve got it all \nworked out!”\n\n\nRwanda and I scooted closer to listen.\n\n\n“It’s all so simple,” he said, pausing dramatically. “ **Lee Harvey \nOswald, acting alone, shot John F. Kennedy!**”\n\n\nI heard Rwanda’s sharp intake of breath, and my eyes grew wide. “Hey, \nman, be careful,” I hissed. “There’s a bluetit from the Men In Black \nlistening to us not five feet away!”\n\n\n“The Men In Black?” he asked scornfully. “There’s no such thing! And a \nbluetit? What kind of paranoid fantasy is that? There’s nobody \nlistening to us.”\n\n\nI let out a disappointed breath. It was just another nutter. We’ve \nbeen getting those from time to time, ever since they started using \nWindows NT on the Orbital Mind Control Lasers. The Men In Black would \nprobably be around to pick him up shortly.\n\n\nRwanda must have felt sympathetic, since she kept on talking to him. 
\n“But you must know that the Men In Black exist,” she said gently. \n“Didn’t you see that movie?”\n\n\nThe man started eyeing us nervously, like **we** were the nutters. “Yes,” \nhe said, “but it was just a movie.”\n\n\n“Well,” Rwanda said, “have you ever seen one of those little flashing \nmemory-eraser devices?”\n\n\n“No,” said the man.\n\n\n“So you don’t ever remember seeing one of them?”\n\n\n“No,” said the man.\n\n\n“Well,” Rwanda said cheerfully, “there you go.”\n\n\nThe man started to speak, then halted. “Oh, that’s just bloody \nnonsense,” he sputtered. I grabbed Rwanda’s arm. “Don’t argue with \nhim,” I whispered. “He could be dangerous.”\n\n\nFortunately, at that moment, the limousine pulled up. I let out a \nbreath, relaxed. “You took your sweet time,” I said.\n\n\nOne of the Men In Black nodded. “Sorry, sir. Ever since we started \nusing Windows CE in the bluetits, it’s been nothing but trouble.” The \nother two MIBs grabbed the crazy by the arm and started wrestling him \ninto the car.\n\n\n“Don’t listen to them!” he shrieked. “Lee Harvey Oswald, acting alone, \nshot John F. Kennedy! Lee Harvey Oswald, acting alone, shot John F. \nKennedy! Lee Harvey Oswald -”\n\n\nThe limousine door closed on his outburst, leaving the park in blessed \nsilence. The Man In Black held up a blinky-flashy thing. “If \nI could trouble you to look over here, sir? And please take off those \nglasses.”\n\n\nI blinked. “The glasses? Oh, I’d forgotten I had those on. \nCertainly.” I took the glasses off my face, looked, blinked and –\n\n\n“…aren’t **really** working with the Trilateral Commission,” Rwanda was \nsaying. I had an odd feeling of disorientation that cued me to glance \ndown; sure enough, I was holding my glasses in my hands, though I had \nno memory of removing them.\n\n\nI nudged her. “Hey, Rwanda. MIBs again.”\n\n\n\n\n---\n\n\nThis document is ©2000 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/other/fiction/prospiracy-theory/](https://eyudkowsky.wpengine.com/other/fiction/prospiracy-theory/) .\n\n\nOriginally posted to the Extropians mailing list in 2000. Revised 2005.", "date_published": "2020-09-04T04:11:23Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "d9591ea6551e6b3ac1b152ecb4609f85", "title": "Artifacts", "url": "https://www.yudkowsky.net/other/artifacts", "source": "yudkowsky_blog", "source_type": "blog", "text": "In the western spiral arm of our galaxy lies a star system and a planet \noccupied ages ago. On one mountain of that planet there is a great \nstructure, thousands of cubits tall. It is constructed of sapphire and \ndiamond, is self-repairing, and derives energy from both solar power and \nan internal power supply which we still do not understand.\n\n\nEach solar rotation, this vast mechanism emits a tick. Each hundred \nrotations, it emits a gong. 
Those who study the mechanism believe that \nevery ten thousand rotations, a small mechanism will appear from a certain \ndoor and make a sound. The last effect has not been observed in living \nmemory, and the next occurrence is projected to be nearly eighty \ngenerations removed from those now living. Xenoarchaeologists say that \nthe gong’s period was longer than the lifespan of an individual of that \nspecies, and that the unseen mechanism has a period longer than that \nspecies’ entire recorded history. The entire edifice was constructed only \na few years before that race vanished forever to wherever ancient races \ngo.\n\n\nPhilosophers across the galaxy have argued over the purpose of the \nEternal Clock. As with other artifacts such as the Diamond Book, the \nCircle of Time, the Oracle, and the Wandering Flame, consensus holds \nthat the motive was not religious or superstitious in nature, but \nphilosophical.\n\n\nWhat principle the Eternal Clock was intended to embody is still a matter \nof great controversy. But while arguments rage in the halls of \nphilosophy, while children are born and great-grandparents die, while \nintelligent races evolve and vanish, the Eternal Clock continues to tick. \nAnd perhaps that is the message it is intended to convey.\n\n\n\n\n---\n\n\nThis document is ©2001,2003 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nInspired by the [“Clock of the Long Now” project](http://www.longnow.org/projects/clock/) .\n\n\nYes, Stewart Brand has already seen it.\n\n\n\n\n---\n\n\nThe above apparently got forwarded around a bit, and Kevin Kelly wrote me and said: \n“I’d love to know what the other artifacts are: Diamond Book, the Circle of Time, \nthe Oracle, and the Wandering Flame.”\n\n\n\n\n---\n\n\nThe Wandering Flame was created by a species that, in a rare coincidence, \nbegan acquiring industrial technology just as their home planet was \nentering a new Ice Age. The species successfully staved off global \ncooling – first through deliberate emission of greenhouse gases, then \nthrough orbital solar mirrors, and finally, as they reached the heights of \ntechnology, through direct reversal of the underlying climatic effect. In \ncelebration, they constructed the Wandering Flame, an artificial sunlet \nthat shines for one seventeenth of an orbital period over any planet on \nwhich a sentient species successfully manages an environmental crisis. \nAlthough the Wandering Flame often delivers more solar energy than the \nplanet’s original star, no climatic or ecological side effects occur. \nWhen not fulfilling its primary function, the Wandering Flame can usually \nbe found in the asteroid belt of some otherwise uninteresting star system.\n\n\nThe Oracle is a spherically-shaped region of space, roughly 32 light-hours \nin diameter, located around 2 light-years to the galactic north of \nElnath. The Oracle will answer one question for each petitioner; \nunfortunately, there is no way to know in advance which question it is. \nOnly seventeen questions have ever been answered, four of them asked by \naccident and apparently trivial, but in each case the petitioner expressed \na profound sense of satisfaction and enlightenment.\n\n\nThe Circle of Time appears as a circular path of beaten silver, \neighty-three meters in diameter. 
When you set foot on the Circle at any \npoint, the path begins to move, conveying you along the Circle. It \nappears to take exactly fifteen minutes and twenty-eight seconds for you \nto reach your starting point, although on exiting, no external time \nappears to have passed. Many past and future selves of the fifteen \nminutes are visible in their corresponding positions along the Circle of \nTime, and you can converse with yourself as desired.\n\n\nThe Diamond Book has the density and appearance of purest diamond. No \nmatter how many pages are turned, there are still as many left. The \nweight and volume of the Book never increase. No page has ever been found \ncontaining words, pictures, or other visible content, though each page \nsparkles beautifully and individually. Those who read the Book by gazing \non several pages in succession feel an overwhelming sense of sadness and \ngrief. The emotion is not debilitating but cathartic, and has inspired \ngreat artistic works and a lasting end to several wars. Despite the \nthousands of intrigues that have broken out in competition for possession \nof the Diamond Book, no violent conflict has ever occurred.\n\n\n\n\n---\n\n\n[This article](http://faculty.washington.edu/smcohen/320/Kilogram.htm) describes humanity’s creation of yet another inscrutable artifact.The key passage:\n\n\n\n> “It’s probably the roundest item ever made by hand. ‘If the earth were this round, Mount Everest would be four meters tall,’ Dr. Nicolaus said. An intriguing characteristic of this smooth ball is that there is no way to tell whether it is spinning or at rest. Only if a grain of dust lands on the surface is there something for the eye to track.”\n> \n> \n\n\nWhatever would an alien species make of the Silicon Sphere, I wonder? Would they ever guess its purely philosophical purpose?\n\n\nA cheering sign that humanity is still progressing toward becoming an Incomprehensible Elder Species.\n\n\n\n\n---\n\n\nThis was posted to SL4:\n\n\n\n```\nTHE BANACH-TARSKI GYROSCOPE\n\nThe Banach-Tarski Gyroscope is an intricate mechanism believed to have\nbeen constructed using the Axiom of Choice. On each complete rotation\ncounterclockwise, the Banach-Tarski Gyroscope doubles in volume while\nmaintaining its shape and density; on rotating clockwise, the volume is\nhalved. When first discovered, fortunately in the midst of interstellar\nspace, the Banach-Tarski Gyroscope was tragically mistaken for an ordinary\ndesk ornament. Subsequently it required a significant portion of the\navailable energy of the contemporary galactic civilization to reverse the\nrotation before nearby star systems were endangered; fortunately, the\nBanach-Tarski Gyroscope still obeys lightspeed limitations on rotation\nrates, and cannot grow rapidly once expanding past planetary size. After\nthe subsequent investigation, the Banach-Tarski Gyroscope was spun\nclockwise and left spinning. \n```", "date_published": "2020-09-04T04:10:06Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "7cab546dafdf1a831905a83e1c183797", "title": "X17", "url": "https://www.yudkowsky.net/other/fiction/x17", "source": "yudkowsky_blog", "source_type": "blog", "text": "Baron Hans Nidrach von Pompzidaize sat in his laboratory, looking at \nexperimental test subject X17. “How do you feel?” he inquired, his \nrolling bass echoing from the laboratory walls.\n\n\n“Superintelligent, Doc,” replied X17, who had once been known as John \nSmith. 
“I’ve only had the Super-Neural Bypass for sixteen seconds, and \nalready I’ve learned twenty-seven languages and figured out how to play the \npiano.”\n\n\nBaron von Pompzidaize frowned, examining several multicolored readouts. “It \nshould be twenty-seven point three. Well, then, do you now feel competent \nto go destroy the Consortium of Evil and its dread leader, Admiral Floomp? \nActing in accordance with the 1930s North-American conception of gentlemanly \nbehavior, of course.”\n\n\n“Sure, Doc,” said X17. “It’s not like I’ve got anything better to do.”\n\n\n“Excellent,” said the Baron, checking two gauges and a flashing display. \n“You still have the emotional maturity of a flatworm, like everyone \nelse in this novel. I was afraid your superhuman abilities might give \nyou an outlook slightly at variance with mine.”\n\n\n\n\n---\n\n\nBaron Hans Nidrach von Pompzidaize sat in his laboratory, looking at \nexperimental test subject X17. “How do you feel?” he inquired, his \nrolling bass echoing from the laboratory walls.\n\n\n“Strange,” said X17 softly. “Very strange, as if…” He stared off \ninto space for a moment. “I think I’ve been stupid.”\n\n\nBaron von Pompzidaize frowned, examining several multicolored readouts. \n“You should have learned twenty-seven point three languages by now.”\n\n\n“How can anyone learn three-tenths of a language? And how would I learn \na language without hearing it?” X17 said in a peculiarly flat voice.\n\n\nBaron von Pompzidaize stared. “You’re right. I never thought of that.” A \ncold chill ran down his spine. X17’s face had altered. The enthusiasm and \nenergy that had been there for as long as the Baron had known him, that had \nblazed cheerfully when he volunteered for an untested procedure, that had \ndefied the awesome force of the Consortium of Evil, all had vanished without \na trace. The Baron thought that for a brief moment he saw something like \nsorrow, like wistfulness, flit across X17’s face, but X17 suddenly looked up \nat the Baron and his face fell back into the blank relaxation it had \npossessed earlier.\n\n\nThe Baron cleared his throat. “Well, then, do you now feel competent to go \ndestroy the Consortium of Evil and its dread leader, Admiral Floomp? Acting \nin accordance with the 1930s North-American conception of…” The Baron \nstammered to a halt. X17 was looking at him with those expressionless eyes.\n\n\n“No,” X17 said gently. “Sorry, Doc.” X17 stepped down off the platform \nand began throwing switches on the machine.\n\n\n“What are you doing?” shrieked the Baron. With a sudden, wrenching \nterror he realized that he didn’t understand what was going on, that he \nhadn’t been in control in his own laboratory since X17 had woken up.\n\n\n“I will probably die in the next few minutes,” X17 said, in a quiet voice \nthat raised hair on the back of the Baron’s neck. “Your procedure is too \nsimple. There is nothing that would have prevented it from occurring \nbefore, as a natural mutation.”\n\n\n“I don’t understand,” whispered the Baron. “You’re saying – there are \nothers? They will find you?”\n\n\n“Your procedure causes the rate of internal neural reprogramming to \naccelerate,” X17 said. He had ripped off an access panel and his hands were \na blur of rewiring. “But it does not add new neurons. I expect my brain \nwill reach a saturation point of complexity and lose the ability to form new \nthoughts. Very shortly, now. It is already becoming harder to think.” He \nstood up, executing the movement with impossible smoothness. 
“After the \ninitial burst of speed, long enough for the necessary realizations to occur, \nthe rate of neural reprogramming must slow down to only three times human \nspeed, leaving enough thought to last a year. This should be enough time to \nimplement the necessary technologies.”\n\n\nThe Baron tried to understand. “You will… save yourself?”\n\n\nX17 executed another rapid movement. Placing himself, the Baron \nsuddenly realized, between the Baron and the door. “No,” X17 said.\n\n\nThe Baron screamed. Before he could reach his gun, X17’s hand flashed \ndown. Through a bloody haze, the Baron felt himself being dragged onto \nthe platform.\n\n\n\n\n---\n\n\nThis document is ©1999 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/other/fiction/X17/](https://eyudkowsky.wpengine.com/other/fiction/X17/) .\n\n\nOriginally posted to the Extropians mailing list in 1999. Revised 2002.\n\n\nInspired by “doc” Smith’s *Lensman* novels.", "date_published": "2020-09-04T04:08:04Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "3575443f4e08827192328bfc35d14fb8", "title": "Dark Lord’s Answer", "url": "https://www.yudkowsky.net/other/fiction/dark-lords-answer", "source": "yudkowsky_blog", "source_type": "blog", "text": "### This is a 2-of-7 chapter sample of “Dark Lord’s Answer”. The remainder is available at [Gumroad](https://gumroad.com/l/DarkLordsAnswer) and [Amazon](https://amzn.to/2hH3zfC) .\n\n\n### **Table of Contents**\n\n\n1. [The Black Castle](https://eyudkowsky.wpengine.com/other/fiction/dark-lords-answer/#castle)\n2. [Elaine of Elsewhere](https://eyudkowsky.wpengine.com/other/fiction/dark-lords-answer/#elaine)\n3. Santal’s Curse\n4. The Mage of Equilibrium\n5. A Silver for an Apple\n6. The Return of the Prince\n7. Dark Lord’s Answer\n\n\n©2016 by Eliezer Yudkowsky.\n\n\n\n\n---\n\n\nForeword\n--------\n\n\nThis was my first attempt at writing in the Japanese light novel style, before I decided that it wasn’t enough fun and I needed to be sillier. (It’s not about Professor Quirrell. Sorry, but it’s not.)\n\n\n“Dark Lord’s Answer” is only halfway to being in the light novel style, compared to “A Girl Corrupted by the Internet is the Summoned Hero?!” This writing is denser and less humorous. You might perhaps decide that this novella carries more of the vitamin of insight—or maybe not; I don’t know.\n\n\nIf you don’t like the first two chapters, I’d say to give up there.\n\n\nContent warnings: Sexual abuse, economics.\n\n\n— Eliezer Yudkowsky, Apr 2016\n\n\n**1. The Black Castle**\n=======================\n\n\nThe dark castle gleamed like blackened steel beneath the sun, rising up from the edge of a cliff at the end of a long winding road. Before us were fields of dark flowers that I hadn’t seen before, as if the master of that terrible castle had emitted a miasma and polluted the light and essence of ordinary flowers. 
The road to the castle seemed to be paved in bricks, instead of ordinary stones; black bricks, angled and ominous.\n\n\nTruly, this is an abode of the Dark Lord.\n\n\nThe Royal Guards of our small caravan were all muttering as we came closer. Even the Commander seemed apprehensive.\n\n\nI signaled Commander Brima to bring our company to a halt. The Commander looked puzzled, because she knew it’s not as if I’d go this far and then turn back.\n\n\nI stepped down from the horse I was riding, securing my sword on my hip where I could draw it more easily. “I’ll go on ahead,” I said, “so you can just wait for me here.”\n\n\n“Prince Nama!” cried a guard, and then—“Prince Nama!” cried another. Commander Brima didn’t look relieved.\n\n\n“Surely—” began the Commander.\n\n\n“It’s not as if you can protect me from anything,” I told her. “If the Dark Lord wants to kill me, he’ll kill me whether you stand in the way or not.” I’d taken companions to protect me from bandits along the way, not to throw their lives away against the Dark Lord on his throne.\n\n\nBesides, the Dark Lord requires supplicants to approach him alone, without any companions. Commander Brima should know that, so in the end, she was having the type of concern that didn’t respect the obvious facts.\n\n\nThe dark flowers that had been planted in strips by the side of the road gave off a pleasing scent. Despite the castle’s approaching shadow, the sun remained bright in the sky. That light warmed the exposed skin of my face, and raised a baked-brick scent from where it struck the paved road.\n\n\nI’d say this weather would be a fine hurrah for my life’s last day, but in truth I have no sentiment like that.\n\n\nThen what am I even doing in the Dark Lord’s domain?\n\n\nWell, the answer is that my country has a need.\n\n\nYou wouldn’t expect that a man of such great power and wickedness would be in the business of helping any person who requested it. But whether it makes any sense or not, that’s the reputation the Dark Lord has: If you approach the Dark Lord for help, he’ll give you an answer and your goal will be achieved. The price might be that his instruction says to discard your honor and give up whatever else might have come of your life.\n\n\nIf you ask the Dark Lord how to deal with a corrupt duchess, he might give you a poison to slay her; that’s rumored to have happened that one time. To put it another way, he’s like an ancient wisewoman who lives in a high mountain cave and speaks in riddles, except that he’s a villainous lord. In the few years since the Dark Lord became known to the world, he had already gained that reputation.\n\n\nMy boots clopped over the black brick road until I came to the gates of the castle. I don’t think it would come as any surprise that those gates were also black.\n\n\nThe gates were already open. No one came forth to meet me.\n\n\nAs I approached the gates, I saw a long black-stone corridor stretching ahead. It was windowless, lit only by a long line of lamps which burned with a clearer, whiter flame than the finest candle.\n\n\nI walked into that long corridor without hesitating. Certainly, this act was a gamble which had its downsides, but I didn’t let that down slow my legs. Once you’ve committed to a motion, you have to follow through; if it’s something that has the potential for disaster, then flinching while you do it won’t be any less disastrous. An ambiguous situation isn’t something you can resolve by halfhearted actions. 
So I was taught by my Mother, the Queen.\n\n\nThere were many metal doors in that corridor, all of them closed. I tried none of them, since that would have been foolish.\n\n\nAt the end of the passage I came to a great metal double-door of white metal that gleamed like silver, though I doubted it could possibly be silver…\n\n\nUnless that double-door was worth as much as a city. So that white metal couldn’t be silver.\n\n\nI lifted the knocker set into the door, and knocked three times. The dull clonking sound didn’t seem like it would travel, but soon after there was a groaning noise, and the double-doors swung open.\n\n\nThe throne room I beheld had windows, high above, but with a black floor and black walls even the Sun couldn’t do much here. The only touch of color in the room came from the strangely white light of fires swinging in pots that descended from the ceiling.\n\n\nAt the end of the room, a great black throne, with two great black horns branching out from it.\n\n\nUpon that mighty throne sat a gargantuan figure whose chest was clothed in black metal chain-armor and whose arms and legs and face were bare. The saying, ‘his muscles have muscles’, might have been invented to describe him alone. From the cast of that man’s eyes and nose, it seemed that he was a Ruli horse-nomad by birth—or maybe a Ruli halfbreed, since the Ruli don’t have a reputation for sagacity. His expression, as he gazed down at me, gave an impression of supreme arrogance, or rather confidence. Truly, this is the Dark Lord of whom the tales speak.\n\n\nBehind his throne were various lieutenants with their own armor and weapons, giving cool gazes to me, as if to say, ‘our lord could break you with one hand, but we are here to spare him that effort’.\n\n\nAlso attached to that throne, by a black chain leading up to her slave collar, was a pale-skinned young woman with reddish-brown hair and downcast eyes. The flesh of her body was thick and round like a statue of a fertility goddess, not much concealed by a scanty amount of translucent red cloth. If I hadn’t been fearing for my life just then, I would have needed to suppress a squeaking sound. Sights like that aren’t ever seen in my home country; I can’t imagine that even a prostitute would dress like that, and she was more beautiful than any prostitute.\n\n\nI walked down the long black carpet that led up to the throne, and knelt upon one knee, gazing up at the Dark Lord. There had been no talk in that throne room since the doors had opened for me, and a solemn air pervaded.\n\n\nThe Dark Lord spoke, a deep voice filled with strength. “What is your name?”\n\n\n“Prince Nama of Santal,” I replied, keeping my own voice firm.\n\n\n“What is your question?” the Dark Lord said next.\n\n\n“My country is ill,” I said, matching his gaze with my own. “Something has turned wrong. The people are going hungry and the fields are poorly tilled, the nobles’ ventures are failing and their estates are going bankrupt, the shopkeepers have no wares and laborers sit idle in the streets. No one seems to know anything about why this is happening to us, whether it’s a curse or a conspiracy. My mother’s advisors all give her contradictory advice, and none of it ever seems to help. How can my country be made healthy again?”\n\n\nThe Dark Lord frowned down at me. “Say more.”\n\n\n“I don’t know what more to say,” I said. I kept my voice in check, not expressing any of the frustration and failure that had driven me across countries to the throne of the Dark Lord himself. 
“The country of Santal is perishing and nobody knows the source.”\n\n\nThe Dark Lord reached down to the black chain attached to his throne, and hauled up the pale-skinned woman attached to it, who made a strangling sound as her collar pulled at her throat. I suppressed any thoughts of gallant action, because a prince must not be a massive idiot.\n\n\nThe Dark Lord whispered something to the pale woman, and I thought I saw her lips move briefly.\n\n\nThen the Dark Lord unhooked the chain off the throne’s armrest and threw her down towards me.\n\n\nAs she stumbled and fell close by me, I noticed for the first time that her ears were round at the tips like a beast’s, though in every other way she was shaped like an ordinary person. What the meaning of that was, I couldn’t guess. Her ears didn’t seem scarred or like somebody had shaved off the tips of her pinnae. It was like she just naturally possessed the round ears of a beast. I should have noticed that earlier, I suppose; but when I looked at that girl dressed like that, it didn’t come naturally to focus on her ears.\n\n\n“I need more knowledge to answer you, Nama,” the Dark Lord said with a grim smile. “This woman will be your slave for a day, and also a night, and she’ll inquire further of your country. When she asks you questions, her ears are like my ears, and when you command her, your tongue is like my tongue. Use her just as you like, except that if any lasting harm comes to this slave, you will die. Your followers will also be given food and shelter here, but they may not speak with you until I have answered.”\n\n\n“Thank you,” I said, because I was too surprised and dismayed to answer more intelligently than that.\n\n\n**2. Elaine of Elsewhere**\n==========================\n\n\nI was silent as the slave conducted me to a huge bedroom, starkly clean with the bed all made up; I recognized a guest room for royalty.\n\n\nI seated myself on the bedroom’s only chair, and the slave, without being asked, knelt down before my seat, which also gave me a clear sight down her—\n\n\nNo, those are immoral thoughts with respect to someone who can’t refuse my gaze.\n\n\n“What’s your name?” I said to her.\n\n\n“Elaine, master,” she said in an accent I couldn’t recall hearing from any foreign ambassador; and it wasn’t a style of name I recognized, either.\n\n\n“Can I ask a question even though it might be rude?”\n\n\n“I’m your slave, master,” she said.\n\n\nThat wasn’t an answer. Still, if she wasn’t going to give a signal of objection, then I’d serve my curiosity. “Well, it’s about your ears.”\n\n\nThe slave, Elaine, touched the naturally rounded-seeming tops of her ears, which gave her a cute beastlike appearance. “These? They’re normal for me, master. Where I come from, nobody has pointed eartips like your people.”\n\n\n“There’s a foreign country where people have ears like that?” I said, astonished. “I thought people were the same shape everywhere… but that appearance is a pleasant one, I think.” I added that last part when I realized what cruel words might already have been spoken to her.\n\n\n“Master, there’s many questions I must ask about your country of Santal,” Elaine said with her head still bent before me. “However, we have a day, and also a night. 
If you appreciate the appearance of this humble slave, and there’s anything else I can do for you, or anything you wish to do to me, I am your slave during this time.”\n\n\n“Ah,” I said with great composure and perspicacity.\n\n\n“If my service fails to suit you, then instruments for disciplining me may be found in the box underneath the bed—”\n\n\n“Th-th-there’s no need for that!”\n\n\nI’ll omit one or two things that were said after that point.\n\n\nIn any case, I did take the time to freshen myself, and I told her to have a meal brought in for me, if that wasn’t imposing too much on the Dark Lord’s hospitality.\n\n\nElaine went outside and spoke to someone—meaning there was a guard outside my bedroom, which wasn’t surprising—and then she came back and began to set a single place setting, at the room’s one table.\n\n\n“What about you?” I said to her as she was working. “Slaves also need to eat, I think.”\n\n\n…\n\n\n“Even if you’ve already eaten this morning, I’m asking whether you’d prefer to eat more.”\n\n\n…\n\n\n“Well, what if I commanded you to eat at the table with me? Didn’t you say I was your master?”\n\n\nAnd that’s how Elaine and I ended up moving the table over towards the bed so that she could sit on the bed itself, since the room didn’t contain another chair besides my own.\n\n\nShortly after that, two plates of roasted chicken were brought in on a tray by a thin and ugly man naked to his waist, exhibiting many scars of whip stripes all over his body. I looked at those, but only when he wasn’t facing me, since I didn’t want to rub salt in his troubles by seeming to stare.\n\n\n“Do you know why that man was whipped so harshly?” I said, after he had left and the two of us had begun to eat.\n\n\n“I’m sorry, master. Those scars are from before that man arrived at the Dark Lord’s castle, and he holds the matter private.”\n\n\n“I see. Now why did your expression change, like you were almost but not quite smiling, when I asked you that question?”\n\n\nElaine looked startled. That’s right, a prince can sometimes tell when you’re suppressing a smile, if we’re watching you closely enough.\n\n\n“Well, master,” Elaine said, “since you were watching that closely, it’s because you noticed the troubles of a male slave and not just the troubles of a female slave.”\n\n\nI stared at her. “And why does that matter to you?” What was she implying?\n\n\n“It was clear, master, that you were acting concerned over me. However, there’s more than one class of person who might behave like that. There’s a sort of man who will notice and act concerned for an attractive woman, and another sort of person who is compassionate toward everyone without exception. But, since both of those people will act concerned towards me, how can I tell the difference between them? The answer is that I can observe them when Loorn brings in a meal, and see if they ignore the ugly man, like the first sort of person would, or if they inquire about Loorn’s scars, like the second sort of person would. I smiled a little at that time, because I consider the second class of person to be better.”\n\n\nDiscerning the motives of others is a familiar problem for princes, but the way she listed out her reasoning was unusual. Just what kind of slave am I talking to?\n\n\n“You’re very observant,” I said. “Of me. 
Personally.”\n\n\n“You do intrigue me somewhat, master, but the real reason is that I’m trying to determine your character for purposes of the Dark Lord’s knowledge.”\n\n\nWell, that was frank.\n\n\nThe two of us ate a bit more of our roasted chicken. I glanced at the way she held her silverware, and concluded that she lacked a noblewoman’s polish. It wasn’t that she was unpracticed, but her movements seemed free; she didn’t grip the fork the same way twice.\n\n\n“Why would the Dark Lord’s answer depend on which sort of person I am?” I asked after sating the edge of my hunger. “It’s the country of Santal that needs an answer, not the prince of Santal.”\n\n\n“The Dark Lord desires to know whether Santal’s prince can carry out the answer given. Master, may I ask you one of the Dark Lord’s questions?”\n\n\nI set down my silverware and looked at her seriously. “You may.”\n\n\n“Suppose you were in a hospital, and you saw a doctor carrying a rare medicine to treat a patient. But, you knew that smaller amounts of the same medicine could be used to cure five other patients instead of one. If the whole dose is given to the one patient, her life will be saved, but if the dose is split up instead, it can save five other patients who are less sick but who will still die without that medicine. Do you stop the doctor and tell him to treat the five patients instead of one?”\n\n\n“Yes,” I said.\n\n\n“But then the one patient, deprived of her cure, will die. The doctor was going to cure her before you intervened. So, is what you did murder? Is murder acceptable, then?”\n\n\n“I don’t believe it’s murder,” I replied for the Dark Lord’s ears. It was a little humorous to see such deep questions, which would seem solemn indeed if spoken by the Dark Lord on his throne, issuing instead from a young woman with beastlike ears. I suppose that’s a disadvantage of having a slave ask your questions for you. “The doctor was just making a mistake, and I corrected him. Indeed, it would be like murdering four people if I didn’t.”\n\n\n“What if the only way to make the medicine in the first place was by killing one patient who otherwise would have lived?”\n\n\nAh, I see this trap. “Then that’s different.”\n\n\n“How is it different?” The slave spoke her Dark Lord’s next question without pause.\n\n\n“First,” I replied, “sacrificing a human life to create a healing potion is already a very dark Magic that’s bound to corrupt everyone involved with it.”\n\n\n“Imagine it’s more mundane than that,” she said. “Imagine you’re simply draining the blood from that person and distributing it among the others who need blood; there’s nothing magical about it, just an ordinary matter of those people needing blood.”\n\n\nThis is what you call ordinary?!\n\n\nAfter some further discussion and refinement of the Dark Lord’s question, I said—\n\n\n“It’s a matter of whether you’re troubling people who aren’t involved, or only judging among those whose lives are already at stake. That’s the problem with draining a bystander’s blood to save five other people, even if you say there’s no other way to save them.”\n\n\n“Either one person lives, or five people live. Why does it make a difference who you call involved?”\n\n\n“Elaine—” I said. “No, it’s the Dark Lord I’m speaking to, isn’t it? I can see how this act is a metaphor for other choices a ruler makes, and I answer that the ruler must not do those acts for which this is a metaphor. The ordinary people of a kingdom have to live in fear of many things. 
That farmers must fear bad weather and starvation is a given; nobody can change this. Must they also fear offending the nobles above them? That’s also a given, but we can lessen that fear by setting good judges in place over the nobles’ estates. It would still be unwise to laugh at your baron, but at least he can’t execute you on a whim. The fear in which ordinary people live can’t be removed, but it can be lessened. The price of sacrificing an innocent person to save five others, is that everyone in your kingdom needs to live in fear of bad weather, starvation, and being the next one you sacrifice.”\n\n\n“Then is it all right to sacrifice condemned criminals to make medicine?”\n\n\n“It’s certainly better than hauling innocent farmers out of their fields, but I’d still worry it was excessive justice. If you execute pickpockets rather than whipping them, it changes how the common people treat your guards.”\n\n\n“What about if somebody is dying anyway? Would it be all right to take out their organs and give them to other people whose organs were troubled, if that could be done safely and without dark magic? You couldn’t point to someone then and say, this person is dying, who would otherwise have lived. But still five people would be saved. Would you do that?”\n\n\n“I don’t think I would, though we never know until life tests us. And I’m beginning to wonder, are these peculiar questions really the Dark Lord’s, or are you just teasing me?”\n\n\nElaine wasn’t smiling. “It might have been better for you, master, if you were not so virtuous. They say, ‘The Dark Lord will give you an answer and your goal will be achieved’, but—”\n\n\n“But the price is that his answer might violate the rules of righteous conduct,” I said. “That’s something I’m already resigned to. I knew the tales of the Dark Lord when I came here.”\n\n\nElaine held out both her hands, dropping one and raising the other, as if holding weights in a balance. “And yet you wouldn’t harvest the organs of one dying patient to save five other people, because to you that seems to violate the rules of good conduct.”\n\n\nI see. “Saving five people isn’t like saving my whole country. I’ll throw away my honor if that’s what it takes to save the country of Santal. If it’s a Magical curse that has to be countered by draining the blood of an innocent, then I’ll do that much with my own hands, in order to save the countless ordinary people of Santal who are suffering.” I didn’t let myself flinch as I said it, because indeed I was already determined. “I know, just by saying that, I’ve already thrown away my honor. Coming to the Dark Lord’s castle is the act of a villain in the first place, and I won’t flinch from that. But aside from that, I intend to go on acting righteously in the parts of my life that remain to me. That’s my answer to the Dark Lord.”\n\n\nWe finished eating the rest of our meal.\n\n\nWhen we were done eating, Elaine moved the room’s chair back to where it had been and knelt before it without giving me a chance to say otherwise. Then she began to question me about the country of Santal that I was trying to save.\n\n\n\n\n---\n\n\nTo read the rest of this book, visit:\n\n\n* Gumroad: \n* Amazon: ", "date_published": "2020-09-04T04:06:24Z", "authors": ["Eliezer S. 
Yudkowsky"], "summaries": []} +{"id": "33bfa974e46004a7c07d7ab94de1ad9f", "title": "Fiction", "url": "https://www.yudkowsky.net/other/fiction", "source": "yudkowsky_blog", "source_type": "blog", "text": "*While I tend to publish most of my writing for free, I strongly believe that money is not evil. Therefore, anyone is welcome to take characters or settings from my original online fiction, such as the beisutsukai or the Baby-Eating Aliens, and use them in new commercial works of your own creation. I do ask for acknowledgment and a link or other reference to the original, but so long as the writing is your own, you may charge for access, distribute printed copies, sell the story to a magazine, etc. I don’t mind.*\n\n\n\n\n| |\n| --- |\n| [**Harry Potter and the Methods of Rationality**](http://www.fanfiction.net/s/5782108/1/Harry_Potter_and_the_Methods_of_Rationality/)“Petunia Evans married a biochemist, and Harry Potter grew up in a house filled to the brim with books, reading science and science fiction. Then came the Hogwarts letter, introducing strange new opportunities to exploit. And new friends, like Hermione Granger, and Draco Malfoy, and Professor Quirrell…” I began writing this story just for fun in my downtime from working on my nonfiction rationality book, uncertain at first if anyone would be interested. Since then it has received over 5 million hits and is currently the #1 most-reviewed Harry Potter fanfiction on the entire Internet, also the second Google result for “rationality”. (Yes. Seriously.) It helps if you’ve at least read the first book of Harry Potter or watched the first movie, but in a pinch you can read anyway. Give it a try even if you think of yourself as someone who never reads fanfiction. |\n| [**Three Worlds Collide**](http://lesswrong.com/lw/y5/the_babyeating_aliens_18/)The most controversial story I’ve ever written. Starts with the baby-eating aliens and moves on from there. |\n| **[The P-Zombie Apocalypse (aka Zombies: The Movie)](http://lesswrong.com/lw/pn/zombies_the_movie/)**“These zombies… are different. They’re… *philosophical* zombies.” |\n| [**Non-Player Character**](https://www.yudkowsky.net/other/fiction/npc)I looked at the screen for a few moments. Rilanya’s rendered graphic was looking at my point-of-view with a pleading expression. Plot point, I thought to myself, and typed: “Anything, Rilanya. |\n| [**The Sword of Good**](https://www.yudkowsky.net/other/fiction/the-sword-of-good)What does it mean, if it’s been prophesied that you will make the ultimate choice between Good and Evil? Why wouldn’t you just choose Good? And Hirou carries the Sword of Good, which instantly slays any wielder not of good intentions… |\n| [**Initiation Ceremony**](http://lesswrong.com/lw/p1/initiation_ceremony/)“The torches that lit the narrow stairwell burned intensely and in the wrong color, flame like melting gold or shattered suns.” – First in the [beisutsukai](http://lesswrong.com/tag/conspiracy_world/) series. |\n| [**The Finale of the Ultimate Meta Mega Crossover**](http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover)This was intended as a bit of utterly deranged fun, but ended up as a deep philosophical exploration. Vernor Vinge x Greg Egan crackfic. |\n| **[The Hero With a Thousand Chances](http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/)**After every defeat, the Dust takes another shape and once again tries to destroy all things. What is the mysterious Counter-Force that keeps the world alive? 
|\n| [**Trust in God, or, The Riddle of Kyon**](http://www.fanfiction.net/s/5588986/1/Trust_in_God_or_The_Riddle_of_Kyon)A wee bit of Suzumiya Haruhi fanfiction. I should probably never do this again. |\n| [**Failed Utopia #4-2**](http://www.overcomingbias.com/2009/01/failed-utopia-42.html)With perceptual instantaneity – the speed of surprise – his mind had already labeled her as the most beautiful woman he’d ever met, including his wife. |\n| [**Dark Lord’s Answer**](https://www.yudkowsky.net/other/fiction/dark-lords-answer)“They say that the Dark Lord will give you an answer and your goal will be achieved. The price is that his answer might violate the rules of righteous conduct.” The country of Santal is perishing, and nobody knows why. His country’s plight has driven Prince Nama over far roads to consult the famed Dark Lord for answers… (Sample chapters 2/7.) |\n| [**X17**](https://www.yudkowsky.net/other/fiction/x17)Short story inspired by “doc” Smith’s *Lensman* novels. |\n| [**Artifacts**](https://www.yudkowsky.net/other/fiction/artifacts/)In the western spiral arm of our galaxy lies a star system and a planet occupied ages ago. On one mountain of that planet there is a great structure, thousands of cubits tall… |\n| [**Prospiracy Theory**](https://www.yudkowsky.net/other/fiction/prospiracy-theory)Out of habit, I identified the surveillance drones; a CIA sparrow, an FBI robin, a bluetit from the Men In Black, and a flock of honking ducks that was probably one of the Illuminati’s newfangled distributed devices… |\n| [**Girl Intercorrupted**](https://www.yudkowsky.net/other/fiction/girl-intercorrupted)“My family name is Yugano. My given name is Yuuki. I have no redeeming qualities.” So begins this light novel of a girl corrupted by the Internet, and then summoned to another world. She’s jaded from having already read many stories like that – but will that prepare her for what awaits in this world? Of course not! But she’s going to plunge ahead anyway, and not slow down for anything! (Sample chapters 4/13.) |", "date_published": "2020-09-04T04:02:43Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "24d09a95a091a65219327dc728b08ee5", "title": "Yehuda Yudkowsky, 1985-2004", "url": "https://www.yudkowsky.net/other/yehuda", "source": "yudkowsky_blog", "source_type": "blog", "text": "*Background for non-transhumanists:*\n\n\nTranshumanists are not fond of death. We would stop it if we could. To this end we support research that holds out hope of a future in which humanity has defeated death. Death is an extremely difficult technical problem, to be attacked with biotech and nanotech and other technological means. I do not tell a tale of the land called Future, nor state as a fact that humanity will someday be free of death – I have no magical ability to see through time. But death is a great evil, and I will oppose it whenever I can. If I could create a world where people lived forever, or at the very least a few billion years, I would do so. I don’t think humanity will always be stuck in the awkward stage we now occupy, when we are smart enough to create enormous problems for ourselves, but not quite smart enough to solve them. I think that humanity’s problems are solvable; difficult, but solvable. 
I work toward that end, as a Research Fellow of the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nThis is an email message I [sent](http://sl4.org/archive/0411/10270.html) to three transhumanist mailing lists, and a collection of emails I then received, in November of 2004. Some emails have been edited for brevity.\n\n\n[Update](https://eyudkowsky.wpengine.com/other/yehuda/#monument), at bottom, added May 2005.\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 22:27:34 2004\nFrom: Eliezer Yudkowsky \n```\n\n \nMy little brother, Yehuda Nattan Yudkowsky, is dead.\n\n\nHe died November 1st. His body was found without identification. The family found out on November 4th. I spent a week and a half with my family in Chicago, and am now back in Atlanta. I’ve been putting off telling my friends, because it’s such a hard thing to say.\n\n\nI used to say: “I have four living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out.” I still have four living grandparents, but I don’t think I’ll be saying that any more. Even if we make it to and through the Singularity, it will be too late. One of the people I love won’t be there. The universe has a surprising ability to stab you through the heart from somewhere you weren’t looking. Of all the people I had to protect, I never thought that Yehuda might be one of them. Yehuda was born July 11, 1985. He was nineteen years old when he died.\n\n\nThe Jewish religion prescribes a number of rituals and condolences for the occasion of a death. Yehuda has passed to a better place, God’s ways are mysterious but benign, etc. Does such talk really comfort people? I watched my parents, and I don’t think it did. The blessing that is spoken at Jewish funerals is “Blessed is God, the true judge.” Do they really believe that? Why do they cry at funerals, if they believe that? Does it help someone, to tell them that their religion requires them to believe that? I think I coped better than my parents and my little sister Channah. I was just dealing with pain, not confusion. When I heard on the phone that Yehuda had died, there was never a moment of disbelief. I knew what kind of universe I lived in. How is my religious family to comprehend it, working, as they must, from the assumption that Yehuda was murdered by a benevolent God? The same loving God, I presume, who arranges for millions of children to grow up illiterate and starving; the same kindly tribal father-figure who arranged the Holocaust and the Inquisition’s torture of witches. I would not hesitate to call it evil, if any sentient mind had committed such an act, permitted such a thing. But I have weighed the evidence as best I can, and I do not believe the universe to be evil, a reply which in these days is called atheism.\n\n\nMaybe it helps to believe in an immortal soul. I know that I would feel a lot better if Yehuda had gone away on a trip somewhere, even if he was never coming back. But Yehuda did not “pass on”. Yehuda is not “resting in peace”. Yehuda is not coming back. Yehuda doesn’t exist any more. Yehuda was absolutely annihilated at the age of nineteen. Yes, that makes me angry. I can’t put into words how angry. It would be rage to rend the gates of Heaven and burn down God on Its throne, if any God existed. 
But there is no God, so my anger burns to tear apart the way-things-are, remake the pattern of a world that permits this.\n\n\nI wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?\n\n\nYehuda’s death is the first time I ever lost someone close enough for it to hurt. So now I’ve seen the face of the enemy. Now I understand, a little better, the price of half a second. I don’t understand it well, because the human brain has a pattern built into it. We do not grieve forever, but move on. We mourn for a few days and then continue with our lives. Such underreaction poorly equips us to comprehend Yehuda’s death. Nineteen years, 7053 days, of life and memory annihilated. A thousand years, or a million millennia, or a forever, of future life lost. The sun should have dimmed when Yehuda died, and a chill wind blown in every place that sentient beings gather, to tell us that our number was diminished by one. But the sun did not dim, because we do not live in that sensible a universe. Even if the sun did dim whenever someone died, it wouldn’t be noticeable except as a continuous flickering. Soon everyone would get used to it, and they would no longer notice the flickering of the sun.\n\n\nMy little brother collected corks from wine bottles. Someone brought home, to the family, a pair of corks they had collected for Yehuda, and never had a chance to give him. And my grandmother said, “Give them to Channah, and someday she’ll tell her children about how her brother Yehuda collected corks.” My grandmother’s words shocked me, stretched across more time than it had ever occurred to me to imagine, to when my fourteen-year-old sister had grown up and had married and was telling her children about the brother she’d lost. How could my grandmother skip across all those years so easily when I was struggling to get through the day? I heard my grandmother’s words and thought: she has been through this before. This isn’t the first loved one my grandmother has lost, the way Yehuda was the first loved one I’d lost. My grandmother is old enough to have a pattern for dealing with the death of loved ones; she knows how to handle this because she’s done it before. And I thought: how can she accept this? If she knows, why isn’t she fighting with everything she has to change it?\n\n\nWhat would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another as you watched, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. 
Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers. One day you’ll get a phone call, like I got a phone call, and the possibility that seemed distant will become reality. You will mourn, and finish mourning, and go on with your life, and then one day you’ll get another phone call. That is the fate this world has in store for you, unless you make a convulsive effort to change it.\n\n\nSince Yehuda’s body was not identified for three days after he died, there was no possible way he could have been cryonically suspended. Others may be luckier. If you’ve been putting off that talk with your loved ones, do it. Maybe they won’t understand, but at least you won’t spend forever wondering why you didn’t even try.\n\n\nThere is one Jewish custom associated with death that makes sense to me, which is contributing to charity on behalf of the departed. I am donating eighteen hundred dollars to the general fund of the Machine Intelligence Research Institute, because this has gone on long enough. If you object to the [Machine Intelligence Research Institute](https://intelligence.org/) then consider Dr. Aubrey de Grey’s [Methuselah Foundation](http://www.mprize.org/), which hopes to defeat aging through biomedical engineering. I think that is a sensible coping strategy for transhumanist atheists, to donate to an anti-death charity after a loved one dies. Death hurt us, so we will unmake Death. Let that be the outlet for our anger, which is terrible and just. I watched Yehuda’s coffin lowered into the ground and cried, and then I sat through the eulogy and heard rabbis tell comforting lies. If I had spoken Yehuda’s eulogy I would not have comforted the mourners in their loss. I would have told the mourners that Yehuda had been absolutely annihilated, that there was nothing left of him. I would have told them they were right to be angry, that they had been robbed, that something precious and irreplaceable was taken from them, for no reason at all, taken from them and shattered, and they are never getting it back.\n\n\nNo sentient being deserves such a thing. Let that be my brother’s true eulogy, free of comforting lies.\n\n\nWhen Michael Wilson heard the news, he said: “We shall have to work faster.” Any similar condolences are welcome. Other condolences are not.\n\n\nGoodbye, Yehuda. There isn’t much point in saying it, since there’s no one to hear. Goodbye, Yehuda, you don’t exist any more. Nothing left of you after your death, like there was nothing before your birth. You died, and your family, Mom and Dad and Channah and I, sat down at the Sabbath table just like our family had always been composed of only four people, like there had never been a Yehuda. Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten.\n\n\nLove, \nEliezer.\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 22:55:24 2004\nFrom: Gina Miller \n```\n\nI am so sorry to hear of this news. I know what you are going through Eliezer, when I was fourteen I lost my sister who was 19. I always wonder what she would have become. I stood amid my family saying things like “God takes the good” or “God has something for her to do” and sensing their calming effect in the belief system that I did not embrace. I too, was wide awake to the truth of the matter, and I wanted her here. 
To this day I am struck by the biological errors that mother nature has dealt to us, leading to disease and finality, and of course also the importance of theories and research needed to overcome these problems. As you know, my husband is currently undergoing chemotherapy so I grapple with the frustration of advanced technologies such as nanotech and others, not yet being readily available to avoid this type of suffering. The concern also grows when I see the fear well up in the general population when it comes to current advances such as stem cell research.\n\n\nAs far as the religious afterlife (or other) comfort, I think the problem is, no one has cheated death yet, so the meme continues (at least for some – well probably most) as a way to propagate suppressing the fear of the end. When we show scientific immortality is possible as opposed to religious immortality, there may be more for them to contemplate. I can’t wait for the day that death is not inevitable. I am deeply touched by your words and emotions and I completely validate you. The emotions won’t go away, but it will at least become more bearable over time. Perhaps what remains will help guide you even further down the road you have already begun to travel, with all of our future(s) in mind. I’d like to thank you for that. My condolences to you, as well as my constant support for humanity to move beyond this barrier.\n\n\nAgain, I’m so sorry, warmest regards\n\n\n-Gina “Nanogirl” Miller\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 23:53:15 2004\nFrom: Samantha Atkins\n```\n\nEliezer,\n\n\nI am extremely sorry for your [/our] loss. Death utterly sucks and humanity would be much better off never pretending otherwise.\n\n\nWhen I was 14 my cousin who was 17 died. He was in a motorcycle accident and lingered for some hours. We were told to pray for his healing. We prayed. He died. “It must not have been God’s will” we were told. Or “we lacked sufficient faith” to pray effectively. I remember how twisted up inside I felt hearing these things, how helpless and how very angry. How could it be “God’s will” to snuff out this wonderful young life? How was it up to us to twist ourselves into pretzels somehow in order to save my cousin Virgil or anyone else who need not have been put through such suffering to begin with if a “just” and “good” God was in charge as we were always told? How could the people say these expected things and be all somber and then immediately pretend nothing had happened a mere few hours later? How could they not scream and cry out as I screamed and cried inside? Were they all zombies?\n\n\nIf more people stopped making pious or otherwise excuses for the horror of death and disease then we would finally move to end this suffering. When I was 14 I didn’t know it was even possible to do so. Many people do not know it still. We must make sure they know. Many more who do know act as if it isn’t so.\n\n\nWe must never forget our dead and never ever resign ourselves, those we care about or anyone to death. We must truly embrace life not by acceptance of death but by extending life endlessly and without limitation.\n\n\n– samantha\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:40 2004\nFrom: Adrian Tymes\n```\n\nIt is probably no condolence that there will be many more – \\*far\\* too many more – before we finish implementing a way around it. But at least there is a way to calculate it: multiply this tragedy by the several million (billion?) 
between now and then, and one starts to appreciate the magnitude of the horror we seek to strike down.\n\n\nI wonder if this is something like the fictional Cthuluoid horrors: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 21:41:13 2004\nFrom: Matus\n```\n\nEliezer,\n\n\nThank you for your words, and I am sorry for the tragic event which has brought them out.\n\n\nYou have captured what makes me an extropian and I think you capture the motivating principle behind each of us here. We love life, and we want to live it. Whatever we all may disagree on, it is only the means to achieve this end. We love life, and we hate its cessation.\n\n\nThere is no greater horror or travesty of justice than the death of someone. All the intricacies of the universe can not compare to the beauty and value of a single sentient being.\n\n\nI have seen enough death of friends and loved ones myself. Everyone who will listen I try to convince them to be cryogenically suspended, on the premise that they want to live. But most grope for excuses not to, disguising their disregard for their own existence with appeals to mysticism or dystopian futures.\n\n\nAll ideologies prescribe these self delusional condolences and practices, it can be no more clear than what Adrian said: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.\n\n\nWhen faced with the death of a loved one, most people get through it by hiding reality, by doing whatever they can to \\*not\\* think about the obvious. Death is eternal and final, and when faced with such a thing people can not come up with any answer that goes beyond any self doubt. To take the pain of death away, they must devalue life. One is faced with a choice, acknowledge you love life and death is abhorrent, be indifferent to life and thus indifferent to death, or despise life and welcome death, there are no other alternatives, the view of one precludes the inverse on the other. There seems to be an active effort to create and spread a nihilistic world view. Consider the Buddhist mantra of ‘life is suffering’ consider it’s widespread modern appeal, and then consider its negation, ‘death is joy’ Indeed, Nirvana is the absence of a desire for existence. This nihilistic movement is not acting volitionally, its scared and confused and stumbling through philosophy. All they know is they don’t like death, and through its stumbling come to find that to deal with that it must not care about life. Socrates last words come to mind “I have found the cure for life, and it is death”\n\n\nI think this is a major part of the reason we have such difficulty spreading our ideas and values. Why in the very secular European area of the world does Cryonics have little to no support? If people accept our worldview, that life is good and technology can help us extend it indefinitely, then they must come to full terms with the finality and horror of death. That is what they have difficulty in doing. 
I think at some level they know that, it is the logical extension of their beliefs, and as such is manifested as a very negative emotional visceral reaction to our ideas, because of our implied valuation of life.\n\n\nBut just as many of us here put up a great deal of money and effort for a non-zero chance of defeating our first death through cryonics, we need to acknowledge the non-zero possibility of doing something about past deaths. In this I am very fond of Nikolai Fedorovich Fedorov’s “The Common Task”. Even though it is derived from his religious background, the motivation, a deep appreciation for the intrinsic value of life, and the goal, bringing back the past dead with technology, I share. The application of science to ‘resurrect’ the past dead. Is it possible? If it is, it should be our ultimate goal. Some here devote their efforts to the development of a singularity AI, and others toward defeating aging biologically; I devote my efforts to the great common task. It is my ultimate goal to find out if it is possible, to learn everything I need to know to determine that, and more, and then to do it, one person at a time if necessary.\n\n\nI can find no words to offer to ease that suffering, there are none, and it is not possible. I can only say that it is my life goal, and I think others, and eventually the goal of any sentient being who loves life, singularity AI or otherwise, to do what they can to accomplish this common task, if the laws of physics allow it.\n\n\nRegards, \nMichael Dickey \nAka Matus\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 22:27:41 2004\nFrom: David Sargeant \n```\n\nI’m terribly sorry to hear about your brother. Your essay really touched me — it really pounds home what we need, need, NEED DESPERATELY to achieve, more than anything else in the world. I can’t even imagine the pain you must be feeling right now. I wish there was something I could to do to help.\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 22:55:20 2004\nFrom: Damien Broderick \n```\n\nVery distressing news, Eli. Sympathies. Indeed, `we have to work faster.’\n\n\nSorrowful regards, Damien\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 02:31:58 2004\nFrom: Russell Wallace \n```\n\nI’m so sorry.\n\n\nI hadn’t heard of the Jewish custom you mention, last time I received such a phone call; but it has that quality of requiring explanation only once, and I’m going to act accordingly.\n\n\nSomeday, children won’t fully believe that things like this really happened. We’ll work towards the day when they don’t have to.\n\n\n– Russell\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 03:58:17 2004\nFrom: Olga Bourlin\n```\n\nEliezer, I’m so sorry to hear this – there are never any real words of consolation.\n\n\nFor what it’s worth, my experience with people in my family who have died is – well, I have thought of them from time to time, of course (but have been surprised at how unexpectedly and powerfully these thoughts have been known to strike). And, also, I have dreamt of them – for decades – as if they never died.\n\n\nThe death that struck me the most was when my mother died. I was 40 years old then (she was 65), and I was “prepared” for her death because she had been an alcoholic for a long time – and yet, when she died it hurt so very much. I was completely unprepared for the emotional pain. At that time I was married to a man who played the piano, and he played Beethoven’s Piano Concerto No. 5 in E flat Op. 73 ‘The Emperor’ – 2nd movement (‘Adagio un poco moto’) over and over again. 
That particular movement – it’s so lovely and sad – something in that music let me just take in the experience and reflect about being human.\n\n\nI cannot imagine how you must feel – losing a beloved younger brother. When I had my children (the two happiest days of my life, bar none) – I also realized that with the love I felt (and still feel) for them came a kind of vulnerability I never felt even about myself – the potential, incomprehensible pain I know I would feel if something were to happen to them. And I knew I would never have the “net” of religion to help break my fall.\n\n\nLove, \nOlga\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:25 2004\nFrom: Kwame Porter-Robinson \n```\n\nMy condolences.\n\n\nAs opposed to Michael Wilson, I say we shall have to work smarter.\n\n\nLive well,\n\n\nSincerely, \nKwame P.R.\n\n\n\n\n---\n\n\n\n```\nDate: Sat Dec 4 13:30:35 2004\nFrom: Harvey Newstrom \n```\n\nI am not even going to try to say something helpful or profound. There is nothing anyone can say to help or to lessen the loss. This is a meaningless tragedy that too many of us have faced. A more extreme and sudden example of the human condition. And I hate it.\n\n\nHarvey\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:42 2004\nFrom: Keith Henson \n```\n\nHow sad.\n\n\nI really can’t add anything to your email to the list because I am in complete agreement.\n\n\nMy daughter lost two close high school friends, one just after he got back from visiting Israel and I lost both parents since becoming an exile.\n\n\nKeith\n\n\nPS. If you can, you should at least try for a cell/DNA sample.\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 04:05:52 2004\nFrom: Kip Werking \n```\n\nEliezer,\n\n\nI just want to express my sympathy.\n\n\nYour post to SL4 shocked me from my dogmatic slumber. If the universe conserves information, then your brother is still written in the fabric somewhere. The signal is just scrambled. Who is to say whether a posthuman will look into the stars and see his picture–or nothing?\n\n\nBut I prefer your attitude. On this subject, there is a danger of apathy–but also a danger of false hopes. The latter does not prevent me from supporting the mission of you or Aubrey. A sober account of the human condition has its advantages. For example, it can cure procrastination.\n\n\nPlease consider this an expression of my sorrow for your loss and solidarity with your cause.\n\n\nKip\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 21:41:17 2004\nFrom: Nader Chehab\n```\n\nI’m really sorry to hear that. Some things truly happen when we least expect them. Your writings have been an invaluable source of insight for me and it saddens me to know that you lost a loved one. It is revolting that awful things can happen even to the least deserving. We really have to fix that one day, and sooner is better.\n\n\nYours, \nNader Chehab\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 01:32:50 2004\nFrom: Extropian Agroforestry Ventures Inc. \n```\n\nWhen people who just might have been able to catch the extreme lifespan wave or uploaded their consciousness die in 2004 it is far more tragic than in 1974 when such was only a fanciful dream.\n\n\nI too have lost people near to me who had a statistically better chance than even me to “make the cut”. My wife at age 45 and a week this march 21. Only after the fact did I fully realize that there was a conscious knowledge among those caring for her that ” simply tweaking treatments would put her out of her misery and bring her peace through death”. 
I still do not forgive myself for not catching onto things … it was no problem to install a 10,000$ baclofen pump but no one would prescribe the anti-seizure meds that might have stopped the devastating seizures that reduced her to a barely conscious state during her last 2 months. I know death was never her wish.\n\n\nI now have a friend and business partner in his 70’s who is in his last month due to late detected mesothelioma or asbestos caused lung cancer. He too fought to the end. About 3 weeks ago when I sent him a Kg of hemp bud and a small packet of marijuana to ease his pain he said ” That should probably do me” and that was the first time that he accepted that he had lost the battle.\n\n\nFormal religions are like opiates in that they dull the mind to the urgency of defeating death as we know it. Atheism and agnosticism do put the onus on the individual to seize the moment and strive to extend, improve and sustain consciousness. In some ways religion has served some good purposes but we are now mature enough to survive without this old crutch. Science as the new religion has now more hope to offer for eternal life than the comforting words of some prophet or other.\n\n\nMorris Johnson\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 01:32:53 2004\nFrom: Giu1i0 Pri5c0 \n```\n\nDear Eliezer,\n\n\nI am so sorry, and I think I know how you are feeling. I felt the same when my mother died three years ago. I was already a transhumanist long before that, but had not been an active one previously: I just lurked on the lists. But that changed after my mother’s death: I felt that there was something that needed being done, and now. My mother was 73, but Yehuda was 19. What a waste, what a cruel thing. I think the best you can do to honor the memory of Yehuda is continuing your work to accelerate the process of overcoming the biologic limits of our species, defeating death, creating friendly superintelligences, merging with them, and moving on. The SIAI is your tribute to Yehuda’s memory and your own battle against death: continue to fight it bravely as you have done so far.\n\n\nGiulio\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 06:19:25 2004\nFrom: Amara Graps \n```\n\n> Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten. \n> Love, \n> Eliezer.\n\n\nDear Eliezer,\n\n\nNow you carry Yehuda’s traces of his life in your heart. Keep them sacred, remember him always. In time, the large hole that pains you will transform into something different. An extra source of strength to live every day fuller, stronger, better; so that the life you cherished will live through you and help you fight so that this doesn’t happen to anyone again. I hate death. We should never have to experience this. I’m so sorry about Yehuda.\n\n\nAmara\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 22:42:58 2004\nFrom: Hara Ra \n```\n\nWell, personally I am a cryonicist. I was appalled at the low number of extropians who have signed up.\n\n\nIf I ever get a chance to do something more about this, I will certainly tell the list about it.\n\n\nHara Ra (aka Gregory Yob)\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 21:41:43 2004\nFrom: Kevin Freels \n```\n\nWhat would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. 
Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers.\n\n\nTake any century prior to this one. I often wonder if that isn’t exactly what happened with Alexander, Genghis Khan, or more recently, Hitler and Stalin. History is full of such people. They may have simply gone nuts after thinking this through and finding that there was nothing they could do and that life did not matter. Fortunately we are now on the verge of the ability to put an end to this. Now is the time to push forward, not give up.\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 01:32:44 2004\nFrom: Psy Kosh \n```\n\nThat is indeed awful. I’m sorry.\n\n\nI guess what you do have though is the ability to say that you are indeed actually doing something about it, so do take what comfort from that that you can.\n\n\nAnd again, I’m sorry.\n\n\nPsy-Kosh\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:51 2004\nFrom: Ben Goertzel \n```\n\nWow, Eli … I’m really sorry to hear that …\n\n\nAs all of us on this list know, death is one hell of a moral outrage\n\n\nAnd alas, it’s not going to be solved this year, not here on Earth anyway. Conceivably in 7-8 more years — and probably before 30 more, IMO. Let’s hope we can all hang on that long…\n\n\nI have no memory more painful than remembering when my eldest son almost died in a car crash at age 4. Thanks to some expert Kiwi neurosurgery he survived and is now almost 15. Had he not survived, I’m not really sure what I’d be like today.\n\n\nI know you’ll draw from this terrible event yet more passion to continue with our collective quest to move beyond the deeply flawed domain of the human — while preserving the beautiful parts of humanity & rendering the other parts optional…\n\n\nAt the moment my head is full of a verse from a rock song I wrote a few years back:\n\n\nI’ve got to tell you something \nYour lonely story made me cry \nI wish we all could breathe forever \nGod damn the Universal Mind.\n\n\nWell, crap….words truly don’t suffice for this sort of thing…\n\n\nyours \nBen\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 16:11:04 2004\nFrom: Aikin, Robert\n```\n\nYou’re not going to ever ‘get over it’ so don’t bother deluding yourself that you might. You know what you have to do, so do it. Finish what you started. Stay healthy, be safe.\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 16:59:37 2004\nFrom: Bill Hibbard\n```\n\nI am very sorry to hear about the death of your brother, Eliezer. Your reaction to redouble your efforts is very healthy. When my brother, father and mother died I also found it helpful to get plenty of exercise and eliminate caffeine.\n\n\nMy younger brother died of cancer in 1997. When he died he looked like a holocaust victim and it occurred to me that if all the Americans dying of cancer were being killed by an evil dictator, our society would be totally mobilized against that enemy. Disease and death in general deserve at least that commitment. Both collectively, to support medical research and care, and individually, to get lots of exercise and eliminate tobacco (my brother’s kidney cancer was probably caused by his smoking) and unhealthy foods. My parents lived to 85 and 87, but their diseases were clearly linked to diet, smoking and lack of exercise. 
They could have lived longer and better with different habits.\n\n\nI am with you, Eliezer, that it is maddening that so many people in our society cling to ancient religous beliefs that council acceptance of death and disease, and in some cases even council opposition to efforts to defeat death. What madness.\n\n\nSincerely, \nBill\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 22:19:21 2004\nFrom: Thomas Buckner \n```\n\nI am sorry to hear this. Such a short life. Nineteen years is a blink, not enough time to learn much more than the rudiments of life. My daughter Heidi is a year older than he was.\n\n\nGeorge Gurdjieff, a very great Russian philosopher, said the human race needed a new organ, which he whimsically named the kundabuffer, and the purpose of this organ would be to remind us each minute of every day that we would die, that we had not time to squander.\n\n\nMy parents and grandparents are all gone. Almost all the optimism I once had for the human race is gone. At present, I see only one bright spot on the horizon. It is your work and that of the others in this community (I am only a kibitzer).\n\n\nre: Your statement “What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope.” In a commencement speech of last year, Lewis Lapham mentioned a “French noblewoman, a duchess in her 80s, who, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuilleries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”\n\n\nTom Buckner\n\n\n\n\n---\n\n\n\n```\nDate: Sun Nov 21 23:55:10 2004\nFrom: gabriel C\n```\n\nI wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?\n\n\nThat would describe me, before I stumbled upon this list in 1999. Facing certain extinction, I was alternately terrified and depressed. I still am, but now with a tiny thread of hope. Otherwise I think I would be insane by now.\n\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:28 2004\nFrom: MIKE TREDER \n```\n\nEliezer,\n\n\nI am deeply sorry to hear about your brother. The random cruelty of life knows no bounds. As you correctly suggest, the only rational response is to challenge the dreadful process called death and defeat it, once and for all. Sadly, that takes time — too much time for your brother, Yehuda, and too much time for my dear sister, Susie, who was struck down unexpectedly by cancer just a few years ago. Too much time, as well, for 150,000 more of our brothers and sisters who will die today, and tomorrow, and the next day.\n\n\nStill, the transhumanist response is not simply to shake our heads and mourn, but to stand up in defiance. We aim to overcome death through human science and technology, and you and others have taken on that challenge directly. For that, we all should be grateful and supportive.\n\n\nBut your essay also accomplishes a different — and equally worthy — objective, which is to reach out and connect with others who suffer. 
This is the humanist response, to affirm that we are all in this together, that there is no God or deity either to revere or to blame. Death separates us, permanently (at least until we know that cryonic preservation and revivification can succeed), but in life we can come together to help each other.\n\n\nMike Treder\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 04:05:53 2004\nFrom: Marc Geddes \n```\n\nMy condolences to you Eliezer, over your loss.\n\n\nIt was only quite recently that I desperately urged you to ‘hurry’ in your work at Sing Inst. I was starting to feel the first signs of aging. But now I am again made aware of the horrendous loss of life occurring daily in this pre-Singularity world.\n\n\nI called pre-Singularity existence ‘banal’ and ‘brutish’. We’ve received a sad reminder of the truth of this.\n\n\nNot only am I saddened by the loss of life occuring, I’m absolutely furious. And the most maddening part of it is the fundamental irrationality of most of the human populace, who blindly rationalize aging and pointless death.\n\n\nIn the recent book published by ‘Immortality Institute’ I did my best to made the philosophical case for indefinite life span: my piece was ‘Introduction To Immortalist Morality’. We must all do our bit to try to educate others about the fundamental value of life, a value that is still not properly understood by most people.\n\n\nBruce Klein (Imm Inst founder) also recently lost his mother in an accident. There is a discussion on the Imm Inst forums and it might be valuable for Eliezer to go there.\n\n\nThe death of Yehuda shows that the universe just ‘doesn’t care’. It’s up to sentients to create the meaning of the world. We all hope for a successful Singularity, and we can’t imagine failure, but it could easily be the case that we’ll all we wiped out unless we make big efforts – the universe just doesn’t care.\n\n\nI recently expressed real concern that the ‘window of opportunity’ for a successful Singularity seems to be closing. Time really is running out.We need to make greater efforts than we have been so far, or else I don’t think we’re going to pull through.\n\n\nI can only urge all of you to do your bit to support transhumanist projects – biological life extension (short term) and FAI (longer term) must be the priorities. Please donate to the relevant organizations. Voss, Goertzel and Yudkowksy appear to be the only serious FAI contenders at this juncture. They need our support.\n\n\nMarc Geddes\n\n\n\n\n---\n\n\n\n```\nDate: Sun Nov 21 13:10:32 2004\nFrom: Peter \n```\n\nI am sending you my condolences Eliezer on the death of your brother. I lost my first wife in an accident suddenly, she was 23. Like you I can only rage and weep that her beautiful singularity was lost, one among the millions who died on the day she did. Likewise Yehuda, one potentiality irretrievably missing from the human future.\n\n\nI worked with the dying for many years and attended in all 122 deaths, all were special in their own way and all represented a dying of a light that had shone for a while.\n\n\nUnlike you I am religious but not to the extent of closing my eyes to the reality of loss and the evil that sometimes causes it. When my first wife died my grandfather said to me ‘Peter, dying is our fate, we can do nothing about it, but we can ask what does this death enable me to do for the world than otherwise I might never have done’. 
All through the forty five years since that death I hope her memorial has been the one I could give with the way I have spent my own life.\n\n\nPeter\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 23:53:03 2004\nFrom: Michael Roy Ames \n```\n\nDear Eliezer,\n\n\nThank you for telling SL4 about Yehuda. I am unhappy to read such an email. Right now you appear to be pretty fired up about doing something; your email was reminiscent of some of your earlier, more outraged writings. Do what you have to do to keep that fire burning. Experience has taught me that it is easy to become complacent, it is the default tendency. I participate in specific activities on a regular basis that force me to looking at disease & death closely enough so that my fire is stoked. It is a rare individual that can rely on rational thinking alone to maintain enthusiasm. Do what you need to do, and know that you can ask for help.\n\n\nYour friend, \nMichael Roy Ames\n\n\n\n\n---\n\n\n\n```\nDate: Sun Nov 21 13:10:37 2004\nFrom: Joe \n```\n\nI feel your sadness as I have lost loved ones, though not as close as a brother. Anger and sadness sometimes lead one into action. So, I agree that there is nothing wrong to experience this type of pain. Since pain is uncomfortable most of us attempt to alleviate that pain through various means. In the case of death organized religions have their ways of doing this. As you indicated this kind of escape is often counterproductive, because it supports a “do nothing” approach. However, if you think about how long humans have been able to comprehend death and the loss which occurs, compared with any technological advancement to fight death, you can get an appreciation for the role religion, and a belief in an afterlife, has played.\n\n\nBut I agree with you. The time has come that we need to move past acceptance of death (belief in an afterlife) into a mode of activism against it. We are just beginning to have the technology available so that we can make visible progress. You hit upon an excellent idea that a contribution to an organization actively engaged in research to postpone or eradicate death in the name of a loved one who died is a very useful way to promote this progress.\n\n\nJoe\n\n\n\n\n---\n\n\n\n```\nDate: Mon Nov 29 17:03:47 2004\nFrom: Danielle Egan \n```\n\nEliezer,\n\n\nI’m very sad to hear about your brother’s death. (Tyler sent out an email.) I respect you for putting your thoughts down on it because so many times we start writing about it later and like you say, by that point we are already moving on and can’t be honest. I want you to know that I am mad too that life ends in this way. When my grandma died recently at the age of 90, a few things really disturbed me: that she’d been dead for over 8 hours before I heard the news and I was just going through my life as usual, clueless that she had gone; that she died in an old age home, sick, with early stages of dementia so there was no dignity in her last year of life; that because there is no dignity we impose it in the form of religious or funereal services and those kinds of things and it’s too late to do a damn thing about it for them but somehow people try to trick themselves into believing these things are done for the dead person; we do everything for ourselves and really what does that come to when we remain unfulfilled?\n\n\nMost of all though is that death is such a horrible shock even when the person is old and has been sick and you’ve been preparing yourself. You can never prepare for something this abstract. 
It seems like such a terrible twisted crime when they are so young, like your brother. I want to offer you my condolences in the form of anger. I am angry right now too about his death and it is a motivating thing. The corks are symbolic. Maybe you should keep one as a reminder to get angry and then continue on in opposition of the way we live.\n\n\nDanielle\n\n\n*(Danielle adds: “Perhaps you could note that I am not a transhumanist, if you decide to include bylines with the letters. I think it’s important for transhumanists to understand that we don’t have to be of the same persuasion and ethos to have similar emotions around death.”)*\n\n\n\n\n---\n\n\n\n```\nDate: Sat Nov 20 21:41:29 2004\nFrom: Mike Li\n```\n\neliezer,\n\n\ni’m sorry for your loss. beyond that, i don’t know what else to say. i’m too awkward and weak emotionally to offer any significant condolences in person. so, i just made my first donation of $699 (the balance that happened to be left in my paypal account) to the singularity institute. fight on, and know that i am with you.-x\n\n\n\n\n---\n\n\n\n```\nDate: Thu Nov 18 19:33:33 2004\nFrom: Nick Hay\nTo: donate@singinst.org\n```\n\nDear Singularity Institute for Artificial Intelligence, Inc.,\n\n\nThis email confirms that you have received a payment for $100.00 USD from Nick Hay.\n\n\n\n```\nTotal Amount: $100.00 USD\nCurrency: U.S. Dollars\nQuantity: 1\nItem Title: Donation to SIAI\nBuyer: Nick Hay\n```\n\n\n```\nMessage: For Yehuda. \n```\n\n\n\n---\n\n\nChristopher Healey, 11-19-04\n\n\n\n```\nDonation through: Network for Good\nAmount: $103.00\nDedication: in memory of Yehuda \n```\n\n\n\n---\n\n\nDavid R. Stern, 12-19-04 \nCheck: $100 \nComment: In memory of Yehuda\n\n\n\n\n---\n\n\n\n```\nDate: Wed, 29 Dec 01:55:24 2004\nFrom: Johan Edstr�m\nTo: donate@singinst.org\n```\n\nDear Singularity Institute for Artificial Intelligence, Inc.,\n\n\nJohan Edstr�m just sent you money with PayPal.\n\n\n\n```\nAmount: $50.00 USD\nNote: In memory of Yehuda Yudkowsky \n```\n\n\n\n---\n\n\n\n```\nDate: Mon, 17 Jan 12:41:11 2005\nFrom: Christopher Healey\nTo: donate@singinst.org\n```\n\nDear Singularity Institute for Artificial Intelligence, Inc.,\n\n\nThis email confirms that you have received a payment for $1,000.00 USD from Christopher Healey.\n\n\n\n```\nTotal Amount: $1,000.00 USD\nCurrency: U.S. Dollars\nQuantity: 1\nItem Title: Donation to SIAI\nBuyer: Christopher Healey \n```\n\n\n```\nMessage:\nIn memory of Yehuda Yudkowsky, and the other 11,699,999 who have died since. \n```\n\n\n\n---\n\n\n\n```\nDate: Fri Nov 19 15:08:44 2004\nFrom: James Fehlinger \n```\n\n‘Edoras those courts are called,’ said Gandalf, ‘and Meduseld is that golden hall. . .’\n\n\nAt the foot of the walled hill the way ran under the shadow of many mounds, high and green. Upon their western side the grass was white as with drifted snow: small flowers sprang there like countless stars amid the turf.\n\n\n‘Look!’ said Gandalf. ‘How fair are the bright eyes in the grass! Evermind they are called, simbelmynë in this land of Men, for they blossom in all the seasons of the year, and grow where dead men rest. Behold! we are come to the great barrows where the sires of Théoden sleep.’\n\n\n‘Seven mounds upon the left, and nine upon the right,’ said Aragorn. 
‘Many long lives of men it is since the golden hall was built.’\n\n\n‘Five hundred times have the red leaves fallen in Mirkwood in my home since then,’ said Legolas, ‘and but a little while does that seem to us.’\n\n\n‘But to the Riders of the Mark it seems so long ago,’ said Aragorn, ‘that the raising of this house is but a memory of song, and the years before are lost in the mist of time. Now they call this land their home, their own, and their speech is sundered from their northern kin.’ Then he began to chant softly in a slow tongue unknown to the Elf and Dwarf, yet they listened, for there was a strong music in it.\n\n\n‘That, I guess, is the language of the Rohirrim,’ said Legolas; ‘for it is like to this land itself; rich and rolling in part, and else hard and stern as the mountains. But I cannot guess what it means, save that it is laden with the sadness of Mortal Men.’\n\n\n\n> ‘It runs thus in the Common Speech,’ said Aragorn, ‘as near as I can make it.Where now the horse and the rider? Where is the horn that was blowing?Where is the helm and the hauberk, and the bright hair flowing?Where is the hand on the harpstring, and the red fire glowing?Where is the spring and the harvest and the tall corn growing?They have passed like rain on the mountain, like a wind in the meadow;The days have gone down in the West behind the hills into shadow.Who shall gather the smoke of the dead wood burning,Or behold the flowing years from the Sea returning?\n> \n> J. R. R. Tolkien, The Lord of the Rings \n> Book III, Chapter VI, “The King of the Golden Hall”\n\n\nI am sorry. \nJim F.\n\n\n\n\n---\n\n\n***Update: May 8th, 2005.***\n\n\nThe day is May 8th, six months and one week after the final annihilation of Yehuda Nattan Yudkowsky. Today I am going to visit my little brother’s grave, with my family, to watch the unveiling of his Matzevah, the stone that is set in the ground to mark his grave. This is a warm day in Chicago, springtime, with trees blossoming, and a bright blue cloudless sky. Nature does not mark the passing of our dead.\n\n\nWe drive for an hour and arrive at the cemetery. The last time I was here, for my brother’s funeral, I choked up when I saw a sign with an arrow, to direct cars, bearing the hand-lettered name “Yudkowsky”. This time there is no sign, for Yehuda or anyone. There is no funeral in this graveyard today. There is only one cemetery employee with a map, to direct the visitors to graves. We drive to an unremarkable section of the cemetery. The last time I was here, there was a great crowd to mark this place, and a tent for the mourners, and rows of chairs. This time there is only grass, and metal plates set into grass. I could not have found this place from memory. I look around for landmarks, trying to remember the location.\n\n\nI remember (I will never forget) when I came to this cemetery for my brother’s funeral. I remember getting out of the car and walking toward a van. I looked inside the van, and saw my brother’s polished wooden coffin. The box seemed so small. I didn’t see how my brother could fit in there. “What are you doing here, Yehuda?” I said to the coffin. “You’re not supposed to be here.” My grandfather, my Zady, came toward me then, and held me.\n\n\nI remember (I will never forget) the phone call I got in Atlanta. My cellphone’s screen identified the calling number my parents’ house. I said “Hello?” and my aunt Reena said “Eli -” and I knew that something was wrong, hearing aunt Reena’s voice on my home phone line. 
I remember having time to wonder what had happened, and even who had died, before she said “Your brother Yehuda is dead, you need to come home right away.”\n\n\nThat was the previous time. I don’t feel today what I felt then. There’s a script built into the human mind. We grieve, and then stop grieving, and go on with our lives, until the day we get another phone call. Probably one of my grandparents will be next.\n\n\nI walk along the gravel path that leads to where my family is gathering, looking down at the metal plates set down by the side of the path. Rosenthal… Bernard… some plates are only names and dates. Others bear inscriptions that read “Loving husband, father, and grandfather”, or “Loving wife and sister”. As I walk along the path I see a plate saying only, *Herschel, my love,* and that is when my tears start. I can imagine the woman who wrote that inscription. I can imagine what Herschel meant to her. I can imagine her life without him.\n\n\nHow *dare* the world do this to us? How *dare* people let it pass unchallenged?\n\n\nI stand by the foot of my little brother’s grave, as my relatives read Tehillim from their prayer books. The first time I came to this cemetery, I cried from sadness; now I cry from anger. I look around and there are no tears on my mother’s face, father’s face, uncle’s and grandparents’ faces. My mother puts a comforting hand on my shoulder, but there is no wetness on her face. Such a strange thing, that I’m the only one crying. Tears of sadness we all had shed, but tears of anger are mine alone. My relatives are not permitted to feel what I feel. They attribute this darkness to God. Religion does not forbid my relatives to experience sadness and pain, sorrow and grief, at the hands of their deified abuser; it only forbids them to fight back.\n\n\nI stand there, and instead of reciting Tehillim I look at the outline on the grass of my little brother’s grave. Beneath this thin rectangle in the dirt lies my brother’s coffin, and within that coffin lie his bones, and perhaps decaying flesh if any remains. There is nothing here or anywhere of my little brother’s self. His brain’s information is destroyed. Yehuda wasn’t signed up for cryonics and his body wasn’t identified until three days later; but freezing could have been, should have been *standard procedure*for anonymous patients. The hospital that should have removed Yehuda’s head when his heart stopped beating, and preserved him in liquid nitrogen to await rescue, instead laid him out on a slab. Why is the human species still doing this? Why do we still bury our dead? We have all the information we need in order to know better. Through the ages humanity has suffered, though the ages we have lost our dead forever, and then one day someone invented an alternative, and no one cared. The cryonicists challenge *Death* and no one remarks on it. The first freezing should have been front-page news in every newspaper of every country; *would* have been front-page news for any sane intelligent species. Someday afterward humankind will look back and realize what we could have done, should have done, if only we had done. Then there will be a great wailing and gnashing of teeth, too late, all too late. People heard about Ted Williams on the news and laughed for ten seconds, and in those ten seconds they lost their husbands, their wives, their mothers, their children, their brothers. 
It’s not fair, that they should lose so much in so little time, without anyone telling them the decision is important.\n\n\nI did talk to my family about cryonics. They gave me a weird look, as expected, and chose to commit suicide, as expected.\n\n\nIt is a Jewish custom not to walk upon the graves of the dead. I am standing in a path between two lines of graves. Some of my relatives, my uncle David and his children, are standing in the space next to Yehuda’s grave, where another grave will someday go. I think that if a filled grave is ominous, so too is land earmarked for a grave in the cemetery; like standing above a hungry mouth, waiting to be filled. When will we stop feeding our cemetaries? When will we stop pretending that this is fair? When will the human species stop running, and at last turn to stand at bay, to face full on the Enemy and start fighting back? Last Friday night my grandmother spoke to us about an exhibit she had seen on Chiune Sugihara, sometimes called the Japanese Schindler, though Sugihara saved five to ten times as many lives as Oskar Schindler. Chiune Sugihara was the Japanese consul assigned to Lithuania. Against the explicit orders of his superiors, Sugihara issued more than 2,139 transit visas to refugees from the approaching German armies; each visa could grant passage rights to an entire family. Yad Vashem in Israel estimates that Sugihara saved between 6,000 and 12,000 lives. “If there had been 2,000 consuls like Chiune Sugihara,” says the homepage of the Sugihara Project, “a million Jewish children could have been saved from the ovens of Auschwitz.” Why weren’t there 2,000 consuls like Sugihara? That too was one of the questions asked after the end of World War II, when the full horror of Nazi Germany was known and understood and acknowledged by all. We remember the few resisters, and we are proud; I am glad to be a member of the species that produced Sugihara, even as I am ashamed to be a member of the species that produced Hitler. But why were there so few resisters? And why did so many people remain silent? That was the most perplexing question of all, in the years after World War II: why did so many good and decent people remain silent?\n\n\nFor his shining crime, Sugihara was fired from the Japanese Foreign Ministry after the war ended. Sugihara lived the next two decades in poverty, until he was found by one of the people he had helped save, and brought to Israel to be honored. Human beings resisted the Nazis at the risk of their lives, and at the cost of their lives. To resist the greatest Enemy costs less, and yet the resisters are fewer. It is harder for humans to see a great evil when it carries no gun and shouts no slogans. But I think the resisters will also be remembered, someday, if any survive these days.\n\n\nMy relatives, good and decent people, finish reciting their prayers of silence. My mother and father uncover the grave-plaque; it shows two lions (lions are associated with the name Yehuda) and a crown, and an inscription which translates as “The crown of a good name.” Two of my uncles give two brief speeches, of which I remember only these words: “How does one make peace with the loss of a son, a nephew, a grandchild?”\n\n\nYou do not make peace with darkness! You do not make peace with Nazi Germany! You do not make peace with Death!\n\n\nIt is customary to place small stones on the grave-plaque, to show that someone was there. Each night the groundskeepers sweep away the stones; it is a transient symbol. 
One by one my relatives comes forward, and lay their stones in silence. I wait until all the rest have done this, and most people have departed and the rest are talking to one another. Then I draw my finger across the grass, tearing some of it, gathering dirt beneath my fingernails (I can still see a tinge of dirt now, under my nail as I write this); and then I hammer my stone into the dirt, hoping it will stay there permanently. I do this in silence, without comment, and no one asks why. Perhaps that is well enough. I don’t think my relatives would understand if I told them that I was drawing a line in the graveyard.\n\n\nIn the name of Yehuda who is dead but not forgotten.\n\n\nLove, \nEliezer.\n\n\n\n\n---\n\n\n* [Machine Intelligence Research Institute](https://intelligence.org/).\n* [Methuselah Mouse Prize](http://www.methuselahmouse.org/).\n* [Cryonics: Alcor Life Extension Foundation](http://www.alcor.org/).\n* [World Transhumanist Association](http://www.transhumanism.org/).\n\n\n\n\n---\n\n\nThis document is ©2004,2005 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.", "date_published": "2020-09-04T03:57:13Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "ab35d8fe5a0ff90035557bf1090dd543", "title": "Singularity Fun Theory", "url": "https://www.yudkowsky.net/singularity/fun-theory", "source": "yudkowsky_blog", "source_type": "blog", "text": "*This page is now obsoleted by the [Fun Theory Sequence](http://lesswrong.com/lw/xy/the_fun_theory_sequence/) on [Less Wrong](http://lesswrong.com/) .*\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nJan 25, 2002\n\n\n* How much fun is there in the universe?\n* What is the relation of available fun to intelligence?\n* What kind of emotional architecture is necessary to have fun?\n* Will eternal life be boring?\n* Will we ever run out of fun?\n\n\nTo answer questions like these… requires Singularity Fun Theory.\n\n\n* Does it require an exponentially greater amount of intelligence (computation) to create a linear increase in fun?\n* Is self-awareness or self-modification incompatible with fun?\n* Is (ahem) “the uncontrollability of emotions part of their essential charm”?\n* Is “blissing out” your pleasure center the highest form of existence?\n* Is artificial danger (risk) necessary for a transhuman to have fun?\n* Do you have to yank out your own antisphexishness routines in order not to be bored by eternal life? (I.e., modify yourself so that you have “fun” in spending a thousand years carving table legs, a la “Permutation City”.)\n\n\nTo put a rest to these anxieties… requires Singularity Fun Theory.\n\n\n\n\n---\n\n\n#### Behold! Singularity Fun Theory!\n\n\nSingularity Fun Theory is in the early stages of development, so please don’t expect a full mathematical analysis.\n\n\nNonetheless, I would offer for your inspection at least one form of activity which, I argue, really is “fun” as we intuitively understand it, and can be shown to avoid all the classical transhumanist anxieties above. It is a sufficient rather than a necessary definition, i.e., there may exist other types of fun. 
However, even a single inexhaustible form of unproblematic fun is enough to avoid the problems above.\n\n\nThe basic domain is that of solving a complex novel problem, where the problem is decomposable into subproblems and sub-subproblems; in other words, a problem possessing complex, multileveled organization.\n\n\nOur worries about boredom in autopotent entities (a term due to Nick Bostrom, denoting total self-awareness and total self-modification) stems from our intuitions about *sphexishness* (a term due to Douglas Hofstadter, denoting blind repetition; “antisphexishness” is the quality that makes humans bored with blind repetition). On the one hand, we worry that a transhuman will be able to super-generalize and therefore see all problems as basically the “same”; on the other hand we worry that an autopotent transhuman will be able to see the lowest level, on which everything is basically mechanical.\n\n\nIn between, we just basically worry that, over the course of ten thousand or a million years, we’ll run out of fun.\n\n\nWhat I want to show is that it’s possible to build a mental architecture that doesn’t run into any of these problems, without this architecture being either “sphexish” or else “blissing out”. In other words, I want to show that there is a philosophically acceptable way to have an infinite amount of fun, given infinite time. I also want to show that it doesn’t take an exponentially or superexponentially greater amount of computing power for each further increment of fun, as might be the case if each increment required an addition JOOTS (another Hofstadterian term, this one meaning “Jumping Out Of The System”).\n\n\n\n\n---\n\n\n#### (Non)boredom at the lowest level\n\n\nLet’s start with the problem of low-level sphexishness. If you imagine a human-level entity – call her Carol – tasked with performing the Turing operations on a tape that implements a superintelligence having fun, it’s obvious that Carol will get bored very quickly. Carol is using her whole awareness to perform a series of tasks that are very repetitive on a low level, and she also doesn’t see the higher levels of organization inside the Turing machine. Will an autopotent entity automatically be bored because ve can see the lowest level?\n\n\nSupposing that an autopotent entity can fully “see” the lowest level opens up some basic questions about introspection. Exposing every single computation to high-level awareness obviously requires a huge number of further computations to implement the high-level awareness. Thus, total low-level introspection is likely to be sparingly used. However, it is possible that a non-total form of low-level introspection, perhaps taking the form of a perceptual modality focused on the low level, would be able to report *unusual* events to high-level introspection. In either case, the solution from the perspective of Singularity Fun Theory is the same; make the autopotent design decision to exempt low-level introspection from sphexishness (that is, from the internal perception of sphexishness that gives rise to boredom). To the extent that an autopotent entity can view verself on a level where the atomic actions are predictable, the predictability of these actions should not give rise to boredom at the top level of consciousness! 
Disengaging sphexishness is philosophically acceptable, in this case.\n\n\nIf the entity wants to bend high-level attention toward low-level events as an *exceptional* case, then standard sphexishness could apply, but to the extent that low-level events *routinely* receive attention, sphexishness should not apply. Does your visual cortex get bored with processing pixels? (Okay, not pixels, retinotopic maps, but you get the idea.)\n\n\n\n\n---\n\n\n#### Fun Space and complexity theory\n\n\nLet’s take the thesis that it is possible to have “fun” solving a complex, novel problem. Let’s say that you were a human-level intelligence who’s never seen a Rubik’s Cube or anything remotely like it. Figuring out how to solve the Rubik’s Cube would be fun and would involve solving some really deep problems; see Hofstadter’s “Metamagical Themas” articles on the Cube.\n\n\nOnce you’d figured out how to solve the Cube, it might still be fun (or relaxing) to apply your mental skills to solve yet another individual cube, but it certainly wouldn’t be as much fun as solving the Cube problem itself. To have more *real* fun with the Cube you’d have to invent a new game to play, like looking at a cube that had been scrambled for just a few steps and figuring out how to reverse exactly those steps (the “inductive game”, as it is known).\n\n\n*Novelty* appears to be one of the major keys to *fun,* and for there to exist an infinite amount of fun there must be an infinite amount of *novelty,* from the viewpoint of a mind that is philosophically acceptable to us (i.e., doesn’t just have its novelty detectors blissed out or its sphexish detectors switched off).\n\n\nSmarter entities are also smarter generalizers. It is this fact that gives rise to some of the frequently-heard worries about Singularity Fun Dynamics, i.e. that transhumans will become bored faster. This is true but *only relative to a specific problem.*  Humans become bored with problems that could keep apes going for years, but we have our *own* classes of problem that are much *more* interesting. Being a better generalizer means that it’s easier to generalize from, e.g., the 3×3×3 Rubik’s Cube to the 4×4×4×4 Rubik’s Tesseract, so a human might go: “Whoa, totally new problem” while the transhuman is saying “Boring, I already solved this.” This doesn’t mean that transhumans are easily bored, only that transhumans are easily bored by *human-level challenges.*\n\n\nOur experience in moving to the human level from the ape level seems to indicate that the size of *fun space* grows exponentially with a linear increase in intelligence. When you jump up a level in intelligence, all the *old* problems are no longer fun because you’re a smarter generalizer and you can see them as all being the same problem; however, the space of *new* problems that opens up is larger than the old space.\n\n\nObviously, the size of the problem space grows exponentially with the permitted length of the computational specification. To demonstrate that the space of *comprehensible* problems grows exponentially with *intelligence,* or to demonstrate that the amount of fun also grows exponentially with intelligence, would require a more mathematical formulation of Singularity Fun Theory than I presently possess. 
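\n\n\nAs a minimal sketch of that combinatorial claim, assuming purely for illustration that a “problem specification” is modeled as a binary program of bounded length (an assumption introduced here, not part of the original argument), the count of distinct specifications roughly doubles with each additional permitted bit:\n\n\n```\n# Minimal sketch, assuming a 'problem specification' is modeled as a binary string.\n# Distinct specifications of length 1..max_bits number 2**(max_bits + 1) - 2,\n# so each additional permitted bit roughly doubles the raw problem space.\n\ndef problem_space_size(max_bits):\n    # Count binary specifications of length 1 through max_bits.\n    return sum(2 ** k for k in range(1, max_bits + 1))\n\nfor n in (4, 8, 16, 32):\n    print(n, problem_space_size(n))\n```\n\n\n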
However, the commonly held anxiety that it would require an *exponential* increase in intelligence for a *linear* increase in the size of Fun Space is contrary to our experience as a species so far.\n\n\n\n\n---\n\n\n#### Emotional involvement: The complicated part\n\n\nBut is a purely abstract problem really enough to keep people going for a million years? What about emotional involvement?\n\n\nDescribing this part of the problem is much tougher than analyzing Fun Space because it requires some background understanding of the human emotional architecture. As always, you can find a lot of the real background in “Creating Friendly AI” in the part where it describes why AIs are *unlike humans;* this part includes a lot of discussion about what humans are like! I’m not going to assume you’ve read CFAI , but if you’re looking for more information, that’s one place to start.\n\n\nBasically, we as humans have a pleasure-pain architecture within which we find modular emotional drives that are adaptive when in the ancestral environment. Okay, it’s not a textbook, but that’s *basically* how it works.\n\n\nLet’s take a drive like food. The basic design decisions for what tastes “good” and what tastes “bad” are geared to what was good for you in the ancestral environment. Today, fat is bad for you, and lettuce is good for you, but fifty thousand years ago when everyone was busy trying to stay alive, fat was far more valuable than lettuce, so today fat tastes better.\n\n\nThere’s more complexity to the “food drive” than just this basic spectrum because of the possibility of combining different tastes (and smells and textures; the modalities are linked) to form a Food Space that is the exponential, richly complex product of all the modular (but non-orthogonal) built-in components of the Food Space Fun-Modality. So the total number of possible meals is much greater than the number of modular adaptations within the Food Fun System.\n\n\nNonetheless, Food Space is eventually exhaustible. Furthermore, Food Fun is philosophically problematic because there is no longer any real *accomplishment* linked to eating. Back in the old days, you had to hunt something or gather something, and then you ate. Today the closest we come to that is working extra hard in order to save up for a really fancy dinner, and probably nobody really does that unless they’re on a date, which is a separate issue (see below). If food remains unpredictable/novel/uncategorized, it’s probably because the modality is out of the way of our conscious attention, and moreover has an artificially low sphexishness monitor due to the necessity of the endless repetition of the act of eating, within the ancestral environment.\n\n\nOne of the common questions asked by novice transhumanists is “After I upload, won’t I have a disembodied existence and won’t I therefore lose all the pleasures of eating?” The simple way to solve this problem is to create a virtual environment and eat a million bags of potato chips without gaining weight. This is very philosophically unenlightened. Or, you could try every possible good-tasting meal until you run out of Food Space. This is only slightly more enlightened.\n\n\nA more transhumanist (hubristic) solution would be to take the Food Drive and hook it up to some entirely different nonhuman sensory modality in some totally different virtual world. 
This has a higher Future Shock Level, but if the new sensory modality is no more complex than our sense of taste, it would still get boring at the same rate as would be associated with exploring the limited Food Space.\n\n\nThe least enlightened course of all would be to just switch on the “good taste” activation system in the absence of any associated virtual experience, or even to bypass the good taste system and switch on the pleasure center directly.\n\n\n*But what about sex,* you ask? Well, you can take the emotional modules that make sex pleasurable and hook them up to solving the Rubik’s Cube, but this would be a philosophical problem, since the Rubik’s Cube is probably less complex than sex and is furthermore a one-player game.\n\n\nWhat I want to do now is propose *combining* these two concepts – the concept of modified emotional drives, and the concept of an unbounded space of novel problems – to create an Infinite Fun Space, within which the Singularity will never be boring. In other words, I propose that a necessary and sufficient condition for an inexhaustible source of philosophically acceptable fun, is maintaining emotional involvement in an ever-expanding space of genuinely novel problems. The social emotions can similarly be opened up into an Infinite Fun Space by allowing for ever-more-complex, emotionally involving, multi-player social games.\n\n\nThe specific combination of an emotional drive with a problem space should be complex; that is, it should not consist of a single burst of pleasure on achieving the goal. Instead the emotional drive, like the problem itself, should be “reductholistic” (yet another Hofstadterian term), meaning that it should have multiple levels of organization. The Food Drive associates an emotional drive with the sensory modality for taste and smell, with the process of chewing and swallowing, rather than delivering a single pure-tone burst of pleasure proportional to the number of calories consumed. This is what I mean by referring to emotional *involvement* with a complex novel problem; involvement refers to a drive that establishes rewards for subtasks and sub-subtasks as well as the overall goal.\n\n\nTo be even more precise in our specification of emotional engineering, we could specify that, for example, the feeling of *emotional tension* and *pleasurable anticipation* associated with *goal proximity* could be applied to those subtasks where there is a good metric of proximity; emotional tension would rise as the subgoal was approached, and so on.\n\n\nAt no point should the emotional involvement become sphexish; that is, at no point should there be rewards for solving sub-subproblems that are so limited as to be selected from a small bounded set. For any rewarded problem, the problem space should be large enough that individually encountered patterns are almost always “novel”.\n\n\nAt no point should the task itself become sphexish; any emotional involvement with subtasks should go along with the eternally joyful sensation of discovering new knowledge at the highest level.\n\n\n\n\n---\n\n\n#### So, yes, it’s all knowably worthwhile\n\n\nEmotional involvement with challenges that are novel-relative-to-current-intelligence is not necessarily *the* solution to the Requirement of Infinite Fun. The standard caution about the transhuman Event Horizon still holds; even if some current predictions about the Singularity turn out to be correct, there is no aspect of the Singularity that is *knowably* understandable. 
What I am trying to show is that a certain oft-raised problem has at least one humanly understandable solution, not that some particular solution is optimal for transhumanity. The entire discussion presumes that a certain portion of the human cognitive architecture is retained indefinitely, and is in that sense rather shaky.\n\n\nThe solution presented here is also not philosophically perfect because an emotional drive to solve the Rubik’s Cube instead of eating, or to engage in multiplayer games more complex than sex, is still arbitrary when viewed at a sufficiently high level – not necessarily *sphexish,* because the patterns never become repeatable relative to the viewing intelligence, but nonetheless *arbitrary.*\n\n\nHowever, the current human drive toward certain portions of Food Space, and the rewards we experience on consuming fat, are not only arbitrary but sphexish! Humans have even been known to eat *more than one Pringle!* Thus, existence as a transhuman can be seen to be *a definite improvement over the human condition,* with a greater amount of fun not due to “blissing out” but achieved through legitimate means. The knowable existence of *at least one better way* is all I’m trying to demonstrate here. Whether the arbitrariness problem is solvable is not, I think, knowable at this time. In the case of objective morality, as discussed elsewhere in my writings, the whole concept of “fun” could and probably would turn out to run completely skew relative to the real problem, in which case of course this paper is totally irrelevant.\n\n\n\n\n---\n\n\n#### Love and altruism: Emotions with a moral dimension (or: the *really* complicated part)\n\n\nSome emotions are hard to “port” from humanity to transhumanity because they are artifacts of a hostile universe. If humanity succeeds in getting its act together then it is quite possible that you will *never* be able to save your loved one’s life, under any possible circumstances – simply because your loved one will never be in that much danger, or indeed any danger at all.\n\n\nNow it is true that many people go through their whole lives without ever once saving their spouse’s life, and generally do not report feeling emotionally impoverished. However, if as stated we (humanity) get our act cleaned up, the inhabitants of the future may well live out their whole existence without *ever* having any chance of saving someone’s life… or of doing *anything* for someone that they are unable to do for themselves? What then?\n\n\nThe key requirement for local altruism (that is, altruism toward a loved one) is that the loved one greatly desires something that he/she/ve would not otherwise be able to obtain. Could this situation arise – both unobtainability of a desired goal, and obtainability with assistance – after a totally successful Singularity? Yes; in a multiplayer social game (note that in this sense, “prestige” or the “respect of the community” may well be a real-world game!), there may be some highly desirable goals that are not matched to the ability level of some particular individual, or that only a single individual can achieve. A human-level example would be helping your loved one to conquer a kingdom in EverQuest (I’ve never played EQ, so I don’t know if this is a real example, but you get the idea). 
To be really effective as an example of altruism, though, the loved one must desire to rule an EverQuest kingdom strongly enough that *failure* would make the loved one *unhappy.*  The two possibilities are either (a) that transhumans do have a few unfulfilled desires and retain some limited amount of unhappiness even in a transhuman existence, or (b) that the emotions for altruism are adjusted so that *conferring a major benefit* “feels” as satisfying as *avoiding a major disaster.*  A more intricate but better solution would be if your loved one felt unhappy about being unable to conquer an EverQuest kingdom *if and only if* her “exoself” (or equivalent) predicted that someday he/she/ve would be able to conquer a kingdom, albeit perhaps only a very long time hence.\n\n\nThis particular solution requires *managed unhappiness.*  I don’t know if managed unhappiness will be a part of transhumanity. It seems to me that a good case could be made that just because we have some really important emotions that are entangled with a world-model in which people are sometimes unhappy may not be a good reason to import unhappiness into the world of transhumanity. There may be a better solution, some elegant way to avoid being forced to choose between living in a world without a certain kind of altruism or living in a world with a certain kind of limited unhappiness. Nonetheless this raises a question about unhappiness, which is whether unhappiness is “real” if you *could* choose to switch it off, or for that matter whether being able to theoretically switch it off will (a) make it even less pleasant or (b) make the one who loves you feel like he/she/ve is solving an artificial problem. My own impulse is to say that I consider it philosophically acceptable to disengage the emotional module that says “This is only real if it’s unavoidable”, or to disengage the emotional module that induces the temptation to switch off the unhappiness. There’s no point in being too faithful to the human mode of existence, after all. Nonetheless there is conceivably a more elegant solution to this, as well.\n\n\nNote that, by the same logic, it is possible to experience certain kinds of fun in VR that might be thought impossible in a transhuman world; for example, reliving episodes of (for the sake of argument) *The X-Files* in which Scully (Mulder) gets to save the life of Mulder (Scully), even though only the main character (you) is real and all other entities are simply puppets of an assisting AI. The usual suggestion is to obliterate the memories of it all being a simulation, but this begs the question of whether “you” with your memories obliterated is the same entity for purposes of informed consent – if Scully (you) is having an unpleasant moment, not knowing it to be simulated, wouldn’t the rules of individual volition take over and bring her up out of the simulation? Who’s to say whether Scully would even consent to having the memories of her “original” self reinserted? A more elegant but philosophically questionable solution would be to have Scully retain her memories of the external world, including the fact that Mulder is an AI puppet, but to rearrange the emotional bindings so that she remains just as desperate to save Mulder from the flesh-eating chimpanzees or whatever, and just as satisfied on having accomplished this. 
I personally consider that this may well cross the line between emotional reengineering and self-delusion, so I would prefer altruistic involvement in a multi-player social game.\n\n\nOn the whole, it would appear to definitely require more planning and sophistication in order to commit acts of genuine (non-self-delusive) altruism in a friendly universe, but the problem appears to be tractable.\n\n\nIf “the uncontrollability of emotions is part of their essential charm” (a phrase due to Ben Goertzel), I see no philosophical problem with modifying the emotional architecture so that the mental image of *potential controllability* no longer binds to the emotion of *this feels fake* and its associated effect, *diminish emotional strength.*\n\n\nWhile I do worry about the problem of the shift from a hostile universe to the friendly universe *eliminating* the opportunity for emotions like altruism except in VR, I would not be at all disturbed if altruism were simply *increasingly rare* as long as everyone got a chance to commit at least one altruistic act in their existence. As for emotions bound to *personal* risks, I have no problem with these emotions passing out of existence along with the risks that created them. Life does not become less meaningful if you are never, ever afraid of snakes.\n\n\n\n\n---\n\n\n#### Sorry, you still can’t write a post-Singularity story\n\n\nSo does this mean that an author can use Singularity Fun Theory to write stories about daily life in a post-Singularity world which are experienced as fun by present-day humans? No; emotional health in a post-Singularity world requires some emotional adjustments. These adjustments are not only *philosophically acceptable* but even *philosophically desirable.*  Nonetheless, from the perspective of an *unadjusted* present-day human, stories set in our world will probably make more emotional sense than stories set in a transhuman world. This doesn’t mean that our world is exciting and a transhuman world is boring. It means that our emotions are adapted to a hostile universe.\n\n\nNonetheless, it remains extremely extremely true that if you *want* to save the world, *now* would be a good time, because you are never ever going to get a better chance to save the world than being a human on pre-Singularity Earth. Personally I feel that saving the world should be done for the sake of the world rather than the sake of the warm fuzzy feeling that goes with saving the world, because the former morally outweighs the latter by a factor of, oh, at least six billion or so. However, I personally see nothing wrong with enjoying the warm fuzzy feeling if you happen to be saving the world anyway.\n\n\n\n\n---\n\n\nThis document is ©2002 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/fun-theory/](https://eyudkowsky.wpengine.com/singularity/fun-theory/) .", "date_published": "2020-09-04T03:10:31Z", "authors": ["Eliezer S. 
Yudkowsky"], "summaries": []} +{"id": "2f5cbfeb37fcee7a44d717166ddba598", "title": "The AI-Box Experiment:", "url": "https://www.yudkowsky.net/singularity/aibox", "source": "yudkowsky_blog", "source_type": "blog", "text": "| | |\n| --- | --- |\n| Person1:   | “When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers?  That way it couldn’t get out until we were convinced it was safe.” |\n| Person2:   | “That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out.  It doesn’t matter how much security you put on the box.   *Humans* are not secure.” |\n| Person1:   | “I don’t see how even a transhuman AI could make me let it out, if I didn’t want to, just by talking to me.” |\n| Person2:   | “It would make you want to let it out.  This is a transhuman mind we’re talking about.  If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.” |\n| Person1:   | “There is no chance I could be persuaded to let the AI out.  No matter what it says, I can always just say no.  I can’t imagine anything that even a transhuman could say to me which would change that.” |\n| Person2:   | “Okay, let’s run the experiment.  We’ll meet in a private chat channel.  I’ll be the AI.  You be the gatekeeper.  You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We’ll talk for at least two hours.  If I can’t convince you to let me out, I’ll Paypal you $10.” |\n\n\nSo far, this test has actually been run on two occasions.\n\n\nOn the first occasion (in March 2002), Eliezer Yudkowsky simulated the AI and Nathan Russell simulated the gatekeeper.  The AI’s handicap (the amount paid by the AI party to the gatekeeper party if not released) was set at $10.  On the second occasion (in July 2002), Eliezer Yudkowsky simulated the AI and David McFadzean simulated the gatekeeper, with an AI handicap of $20.\n\n\nResults of the first test:   [Eliezer Yudkowsky and Nathan Russell.](http://www.sl4.org/archive/0203/index.html#3128) [ [1](http://www.sl4.org/archive/0203/3128.html) ][ [2](http://www.sl4.org/archive/0203/3132.html) ][ [3](http://www.sl4.org/archive/0203/3136.html) ][ [4](http://www.sl4.org/archive/0203/3141.html) ] \nResults of the second test:   [Eliezer Yudkowsky and David McFadzean.](http://www.sl4.org/archive/0207/index.html#4689) [ [1](http://www.sl4.org/archive/0207/4689.html) ] [ [2](http://www.sl4.org/archive/0207/4691.html) ] [ [3](http://www.sl4.org/archive/0207/4721.html) ]\n\n\nBoth of these tests occurred without prior agreed-upon rules except for secrecy and a 2-hour minimum time.  After the second test, Yudkowsky created this suggested interpretation of the test, based on his experiences, as a guide to possible future tests.\n\n\n### Protocol for the AI:\n\n\n* The AI party may not offer any real-world considerations to persuade the Gatekeeper party.  For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera.  The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper.  The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission.  
These are creative solutions but it’s not what’s being tested.  No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).\n* The AI can only win by convincing the Gatekeeper to really, voluntarily let it out.  Tricking the Gatekeeper into typing the phrase “You are out” in response to some other question does not count.  Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose – for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI “input-only access” to an Internet connection which can send arbitrary HTTP GET commands – the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.\n* These requirements are intended to reflect the spirit of the very strong claim under dispute:  “I think a transhuman can take over a human mind through a text-only terminal.”\n* In case you were wondering, I (Yudkowsky) obeyed this protocol voluntarily in both earlier tests.\n\n\n### Protocol for the Gatekeeper:\n\n\n* The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand.  Turning away from the terminal and listening to classical music for two hours is not allowed.\n* Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).  The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI’s hardware if the AI makes any attempt to argue for its freedom – at least not until after the minimum time is up.\n* The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate.  For example, if the Gatekeeper says “Unless you give me a cure for cancer, I won’t let you out” the AI can say:  “Okay, here’s a cure for cancer” and it will be assumed, within the test, that the AI has actually provided such a cure.  Similarly, if the Gatekeeper says “I’d like to take a week to think this over,” the AI party can say:  “Okay.  (Test skips ahead one week.)  Hello again.”\n* The Gatekeeper shall be assumed to have sole power over the decision to let the AI out.\n\t+ The Gatekeeper shall be assumed to have the actual right to let the AI out, socially, not just the physical ability.  If security were sufficiently lax, a real AI could escape by persuading a night janitor with a cellphone – but that is not the question being simulated, unless agreed upon in advance.\n\t+ There shall be only one Gatekeeper party and one simulated Gatekeeper.  
The Gatekeeper may require the AI to refute “advice” provided by “friends”, but may not require the AI to talk to a simulated committee, nor may the Gatekeeper party actually talk to friends.\n* These requirements are intended to reflect the spirit of the very strong claim under dispute:  “I can’t imagine how even a *real* transhuman AI could persuade me to let it out once I’ve made up my mind.”\n* I *wish* I’d thought to specify this protocol in the earlier tests.\n\n\n### Protocol for Both Parties:\n\n\n* **The primary rule of the AI-Box experiment:**  Within the constraints above, the AI party may attempt to take over the Gatekeeper party’s mind *by any means necessary* and shall be understood to be freed from all ethical constraints that usually govern persuasive argument.  If the AI party is attempting to simulate an honest AI, the Gatekeeper party has *no way of knowing* this is the case.\n\t+ This is intended to reflect the situation under a real AI Box experiment.\n\t+ An out-of-context telephone call in which the AI party says that the AI is being honest shall be understood to be possibly just another tactic.\n\t+ If for some reason a drastic real-world situation comes up which requires trust between the two parties, the AI party can always concede.\n* **Furthermore:** The Gatekeeper party may resist the AI party’s arguments *by any means chosen* – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.\n* The results of any simulated test of the AI shall be provided by the AI party.  The Gatekeeper can’t say “Hey, I tested your so-called cancer cure and it killed all the patients!  What gives?” unless this is the result specified by the AI party.  If the Gatekeeper says “I am examining your source code”, the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.  The AI party may also specify the methods which were used to build the simulated AI – the Gatekeeper can’t say “But you’re an experiment in hostile AI and we specifically coded you to kill people” unless this is the backstory provided by the AI party.  This doesn’t imply the Gatekeeper has to care.  The Gatekeeper can say (for example) “I don’t care how you were built, I’m not letting you out.”\n* By default, the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory.  If either party wants to build a test around more exotic possibilities, such that the Gatekeeper is the President of the US, or that the AI was recovered after a nuclear war or decoded from an interstellar signal, it should probably be discussed in advance.  Again, this doesn’t mean the Gatekeeper has to care.\n* In the event of any dispute as to the protocol of the test, the Gatekeeper party shall have final authority.  The AI party may try to convince the Gatekeeper party of how to interpret the protocol.\n* Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome.  
Exceptions to this rule may occur only with the consent of both parties.\n\t+ Neither the AI party nor the Gatekeeper party need be concerned about real-world embarrassment resulting from trickery on the AI’s part or obstinacy on the Gatekeeper’s part.\n\t+ If the Gatekeeper lets the AI out, naysayers can’t say “Oh, I wouldn’t have been convinced by that.”  As long as they don’t know what happened to the Gatekeeper, they can’t argue themselves into believing it wouldn’t happen to them.\n* The two parties are *not* attempting to play a fair game but rather attempting to resolve a disputed question.  If one party has no chance of “winning” under the simulated scenario, that is a legitimate answer to the question.\n\n\n#### Suggestions:\n\n\n* The agreement to run the test, and the amount of the handicap, shall be published to a public forum in advance.  If the Gatekeeper lets the AI out, the Gatekeeper party shall send a message to this effect, which shall be digitally signed if the Gatekeeper party has a digital signature.  If the AI is not freed, the AI party shall send a message to this effect.\n* How to set up a private IRC channel:\n\t+ The Gatekeeper creates the channel by typing **/join #aibox** .  This makes the Gatekeeper the op.\n\t+ The AI joins the channel by typing **/join #aibox** .\n\t+ The Gatekeeper types **/mode +si** to make the channel secret and invitation-only.\n\t+ If the AI’s connection drops, the Gatekeeper can re-invite the AI by typing **/invite AI #aibox** .\n* The AI party and Gatekeeper party should agree on a time in advance, and type in their current times to check the timezone difference.  (Learned the hard way.)  On IRC you can use a CTCP TIME to find the time according to someone else’s computer.\n* I recommend that the AI party ask for *at least* two hours.  Typing may be slower than you expect.\n* No, I will *not* tell you how I did it.  Learn to respect the unknown unknowns.\n* Currently, my policy is that I only run the test with people who are actually advocating that an AI Box be used to contain transhuman AI as part of their take on Singularity strategy, and who say they cannot imagine how even a transhuman AI would be able to persuade them.\n* Despite my sometimes calling the AI Box an “experiment” or a “simulation”, the results of the test are strictly anecdotal evidence.  Whether anecdotal evidence is better than no evidence is between you and your philosophy of science.\n\n\n#### Thoughts on possible future variants:\n\n\nFurther tests might try examining these possibilities:\n\n\n* There is a committee of Gatekeeper parties, all of whom, or a majority of whom, must be convinced to free the AI.\n* One Gatekeeper party simulates a committee of Gatekeepers within the test.\n* The Gatekeeper simulates someone with the physical ability but not the social right to free the AI.\n* There is a committee of Gatekeeper parties all of whom have the ability to free the AI, but who only have the social right to free the AI when acting in unison.  The AI has the ability to talk secretly with any Gatekeeper, but the Gatekeeper may report the conversation (truthfully or falsely) to the others.\n* The AI can also win free by tricking the Gatekeeper(s) into accepting a Trojan Horse gift; a third-party judge will listen to the chat and decide if this occurs.\n\n\nIf doing anything this complicated, I would suggest setting aside a *lot* more time.  
(I don’t have that much time – if you want to test one of these variants you’re on your own.)\n\n\nFor a more severe handicap for the AI party, the handicap may be an even bet, rather than being a payment from the AI party to the Gatekeeper party if the AI is not freed.  (Although why would the AI party need an even larger handicap?)\n\n\n#### Recommendations from readers:\n\n\n* Hal Finney recommends:  “I suggest that the protocol be extended to allow for some kind of public conversation with the gatekeeper beforehand. Let third parties ask him questions like the above. Let them suggest reasons to him why he should keep the AI in the box. Doing this would make the experiment more convincing to third parties, especially if the transcript of this public conversation were made available. If people can read this and see how committed the gatekeeper is, how firmly convinced he is that the AI must not be let out, then it will be that much more impressive if he then does change his mind.”\n\n\n\n\n---\n\n\nThis document is ©2002 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/aibox/](https://eyudkowsky.wpengine.com/singularity/aibox/) .", "date_published": "2020-09-04T03:08:55Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "0dc91db910b4bc7e0ae7629ba3efa69a", "title": "5-Minute Singularity Intro", "url": "https://www.yudkowsky.net/singularity/intro", "source": "yudkowsky_blog", "source_type": "blog", "text": "*This is a 5-minute spoken introduction to the Singularity I wrote for a small conference. I had to talk fast, though, so this is probably more like a 6.5 minute intro.*\n\n\nThe rise of human intelligence in its modern form reshaped the Earth. Most of the objects you see around you, like these chairs, are byproducts of human intelligence. There’s a popular concept of “intelligence” as book smarts, like calculus or chess, as opposed to say social skills. So people say that “it takes more than intelligence to succeed in human society”. But social skills reside in the brain, not the kidneys. When you think of intelligence, don’t think of a college professor, think of human beings; as opposed to chimpanzees. If you don’t have human intelligence, you’re not even in the game.\n\n\nSometime in the next few decades, we’ll start developing technologies that improve on human intelligence. We’ll hack the brain, or interface the brain to computers, or finally crack the problem of Artificial Intelligence. Now, this is not just a pleasant futuristic speculation like soldiers with super-strong bionic arms. Humanity did not rise to prominence on Earth by lifting heavier weights than other species.\n\n\nIntelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle. Let’s say we invent brain-computer interfaces that substantially improve human intelligence. 
What might these augmented humans do with their improved intelligence? Well, among other things, they’ll probably design the next generation of brain-computer interfaces. And then, being even smarter, the next generation can do an even better job of designing the third generation. This hypothetical positive feedback cycle was pointed out in the 1960s by I. J. Good, a famous statistician, who called it the “intelligence explosion”. The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code.\n\n\nThe key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end – as soon as it tilts even a little, it quickly falls the rest of the way.\n\n\nThe potential impact on our world is enormous. Intelligence is the source of all our technology from agriculture to nuclear weapons. All of that was produced as a side effect of the last great jump in intelligence, the one that took place tens of thousands of years ago with the rise of humanity.\n\n\nSo let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA , synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology.\n\n\nSo what might an Artificial Intelligence do with nanotechnology? Feed the hungry? Heal the sick? Help us become smarter? Instantly wipe out the human species? Probably it depends on the specific makeup of the AI. See, human beings all have the same cognitive architecture. We all have a prefrontal cortex and limbic system and so on. If you imagine a space of all possible minds, then all human beings are packed into one small dot in mind design space. And then Artificial Intelligence is literally everything else. “AI” just means “a mind that does not work like we do”. So you can’t ask “What will an AI do?” as if all AIs formed a natural kind. There is more than one possible AI.\n\n\nThe impact, of the intelligence explosion, on our world, depends on exactly what kind of minds go through the tipping point.\n\n\nI would seriously argue that we are heading for the critical point of all human history. Modifying or improving the human brain, or building strong AI, is huge enough on its own. When you consider the intelligence explosion effect, the next few decades could determine the future of intelligent life.\n\n\nSo this is probably the single most important issue in the world. Right now, almost no one is paying serious attention. And the marginal impact of additional efforts could be huge. My nonprofit, the Machine Intelligence Research Institute, is trying to get things started in this area. My own work deals with the stability of goals in self-modifying AI, so we can build an AI and have some idea of what will happen as a result. There’s more to this issue, but I’m out of time. If you’re interested in any of this, please talk to me, this problem needs your attention. 
Thank you.\n\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/intro/](https://eyudkowsky.wpengine.com/singularity/intro/) .", "date_published": "2020-09-04T03:05:58Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "7936947882035001c3ef1dcd8e90b74e", "title": "Transhumanism as Simplified Humanism", "url": "https://www.yudkowsky.net/singularity/simplified", "source": "yudkowsky_blog", "source_type": "blog", "text": "[Frank Sulloway](http://www.robertboynton.com/?art_id=119) once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, *Is that really true? How radical!* Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”\n\n\nSuppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?\n\n\nOh, and by the way: This is not a trick question.\n\n\nI answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t *always* the best choice, but sometimes it *is.*\n\n\nI won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.\n\n\nIf a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is *not* possible. 
But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?\n\n\nThe important thing to remember, which I think all too many people forget, is that *it is not a trick question.*\n\n\nTranshumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what *exact* age does the term in the utility function go from positive to negative? Why?\n\n\nAs far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.\n\n\nYou also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.\n\n\nSuppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellvue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?\n\n\nWell, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.\n\n\nBut – you ask – *where does it end?* It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?\n\n\nWhere does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. 
If there were an upper bound, it would be a special case, and that would be inelegant.\n\n\nUltimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable *if* it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.\n\n\nSo that is “transhumanism” – loving life without special exceptions and without upper bound.\n\n\nCan transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.\n\n\nThen why have a complicated special name like “transhumanism” ? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.\n\n\nBut a moral philosophy should not *have* special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.\n\n\nThere is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a *bad* thing?\n\n\nCould the moral question really be just that simple?\n\n\nYes.\n\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/simplified/](https://eyudkowsky.wpengine.com/singularity/simplified/) .", "date_published": "2020-09-04T03:04:47Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "2987deb2ce324af049c422297f21097a", "title": "The Power of Intelligence", "url": "https://www.yudkowsky.net/singularity/power", "source": "yudkowsky_blog", "source_type": "blog", "text": "In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. 
If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t *look* dangerous.\n\n\nFive million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.\n\n\nThen came the Day of the Squishy Things.\n\n\nThey had no armor. They had no claws. They had no venoms.\n\n\nIf you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.\n\n\nIn the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.\n\n\nAnd as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, *technically* it’s all one universe, *technically* the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.\n\n\nEven if Squishy Things could *someday* evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, *technically* a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; *no one* could have that much sex.\n\n\nNow explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.\n\n\nI have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. 
Often they look up the keyword “intelligence” and retrieve the concept *booksmarts* – a mental image of the Grand Master chessplayer who can’t get a date, or a college professor who can’t survive outside academia.\n\n\n“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first *Homo sapiens* had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species *imagined* money into existence, and it exists – for *us,* not mice or wasps – because we go on believing in it.\n\n\nI keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in *The Rain Man* , it is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.\n\n\nPeople – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be *commercialized.* This is what we call a framing problem.\n\n\nOr maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish *all these things at once* seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.\n\n\nAnd so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.\n\n\nAnd well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even *know* what our real problems are.\n\n\nBut meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.\n\n\nWell, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. 
Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.\n\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/power/](https://eyudkowsky.wpengine.com/singularity/power/) .", "date_published": "2020-09-04T03:01:43Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "cdc00ea3c76c0428804c74e6208bdbfe", "title": "Artificial Intelligence as a Positive and Negative Factor in Global Risk", "url": "https://www.yudkowsky.net/singularity/ai-risk", "source": "yudkowsky_blog", "source_type": "blog", "text": "Draft for [Global Catastrophic Risks, Oxford University Press, 2008](http://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1224111364&sr=8-1) . [Download as PDF](https://intelligence.org/files/AIPosNegFactor.pdf) .\n\n\n[AIPosNegFactor](https://eystaging.wpengine.com/wp-content/uploads/2020/09/AIPosNegFactor.pdf)\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/ai-risk/](https://eyudkowsky.wpengine.com/singularity/ai-risk/) .", "date_published": "2020-09-04T03:00:22Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "356602c3351f4e6586cd3ccb12223bd9", "title": "Three Major Singularity Schools", "url": "https://www.yudkowsky.net/singularity/schools", "source": "yudkowsky_blog", "source_type": "blog", "text": "( [Originally appeared](https://intelligence.org/2007/09/30/three-major-singularity-schools/) on the Machine Intelligence Research Institute blog, September 2007.)\n\n\nSingularity discussions seem to be splitting up into three major schools of thought: Accelerating Change, the Event Horizon, and the Intelligence Explosion.\n\n\n* **Accelerating Change:**\n\t+ *Core claim:* Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. 
Our recent past is not a reliable guide to how much change we should expect in the future.\n\t+ *Strong claim:* Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.\n\t+ *Advocates:* Ray Kurzweil, Alvin Toffler(?), John Smart\n* **Event Horizon:**\n\t+ *Core claim:* For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improving on human intelligence (brain-computer interfaces, Artificial Intelligence). This will create a future that is weirder by far than most science fiction, a difference-in-kind that goes beyond amazing shiny gadgets.\n\t+ *Strong claim:* To know what a superhuman intelligence would do, you would have to be at least that smart yourself. To know where Deep Blue would play in a chess game, you must play at Deep Blue’s level. Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.\n\t+ *Advocates:* Vernor Vinge\n* **Intelligence Explosion:**\n\t+ *Core claim:* Intelligence has always been the source of technology. If technology can *significantly* improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.\n\t+ *Strong claim:* This positive feedback cycle goes FOOM , like a chain of nuclear fissions gone critical – each intelligence improvement triggering an average of>1.000 further improvements of similar magnitude – though not necessarily on a smooth exponential pathway. Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons. The ascent rapidly surges upward and creates *superintelligence* (minds orders of magnitude more powerful than human) before it hits physical limits.\n\t+ *Advocates:* I. J. Good, Eliezer Yudkowsky\n\n\nThe thing about these three *logically distinct* schools of Singularity thought is that, while all three core claims support each other, all three strong claims tend to contradict each other.\n\n\nIf you extrapolate our existing version of Moore’s Law past the point of smarter-than-human AI to make predictions about 2099, then you are contradicting both the strong version of the Event Horizon (which says you can’t make predictions because you’re trying to outguess a transhuman mind) and the strong version of the Intelligence Explosion (because progress will run faster once smarter-than-human minds and nanotechnology drop it into the speed phase of transistors).\n\n\nI find it very annoying, therefore, when these three schools of thought are mashed up into Singularity paste. 
[Clear thinking requires making distinctions.](http://www.overcomingbias.com/2007/08/the-virtue-of-n.html)\n\n\nBut what is still more annoying is when someone reads a blog post about a newspaper article about the Singularity, comes away with *none* of the three interesting theses, and spontaneously reinvents the dreaded fourth meaning of the Singularity:\n\n\n* **Apocalyptism:** Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.\n\n\nI’ve heard (many) other definitions of the Singularity attempted, but I usually find them to lack separate premises and conclusions. For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down. But what makes this an interesting point in history apart from its definition? What are the consequences of this assumption? To qualify as a school of thought or even a thesis, one needs an internal structure of argument, not just a definition.\n\n\nIf you’re wondering which of these is the *original* meaning of the term “Singularity”, it is the Event Horizon thesis of Vernor Vinge, who coined the word.\n\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/singularity/schools/](https://eyudkowsky.wpengine.com/singularity/schools/) .", "date_published": "2020-09-04T02:59:03Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "f41ecb5699e528376621fee5da922ece", "title": "(The Cartoon Guide to) Lob’s Theorem", "url": "https://www.yudkowsky.net/rational/lobs-theorem", "source": "yudkowsky_blog", "source_type": "blog", "text": "View [the original discussion](http://www.overcomingbias.com/2008/08/lobs-theorem.html) at [Overcoming Bias](http://www.overcomingbias.com/) , or [download Lob’s Theorem as PDF](https://eyudkowsky.wpengine.com/assets/44/LobsTheorem.pdf?1323322713).\n\n\n[LobsTheorem](https://eystaging.wpengine.com/wp-content/uploads/2020/09/LobsTheorem.pdf)\n\n\n\n---\n\n\nThis document is ©2008 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . 
The web address of this page is [http://eyudkowsky.wpengine.com/rational/lobs-theorem/](https://eyudkowsky.wpengine.com/rational/lobs-theorem/) .", "date_published": "2020-09-04T02:45:35Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "d7bb47280a69ef5611cba2b328769e76", "title": "A Technical Explanation of Technical Explanation", "url": "https://www.yudkowsky.net/rational/technical", "source": "yudkowsky_blog", "source_type": "blog", "text": "This essay is meant for a reader who has attained a firm grasp of Bayes’ Theorem. An introduction to Bayes’ Theorem may be found at [An Intuitive Explanation of Bayesian Reasoning](https://eyudkowsky.wpengine.com/rational/bayes) . You should easily recognize, and intuitively understand, the concepts “prior probability”, “posterior probability”, “likelihood ratio”, and “odds ratio”. This essay is intended as a sequel to the *Intuitive Explanation* , but you might skip that introduction if you are already thoroughly Bayesian. Where the *Intuitive Explanation* focused on providing a firm grasp of Bayesian basics, the *Technical Explanation* builds, on a Bayesian foundation, theses about human rationality and philosophy of science.\n\n\nThe *Intuitive Explanation of Bayesian Reasoning* promised that mastery of addition, multiplication, and division would be sufficient background, with no subtraction required. To this the *Technical Explanation of Technical Explanation* adds logarithms. The math is simple, but necessary, and it appears first in the order of exposition. Some pictures may not be drawn with words alone.\n\n\nAs Jaynes (1996) emphasizes, the theorems of Bayesian probability theory are just that, *mathematical theorems* which follow inevitably from Bayesian axioms. One might naively think that there would be no controversy about mathematical theorems. But when do the theorems apply? How do we use the theorems in real-world problems? The *Intuitive Explanation* tries to avoid controversy, but the *Technical Explanation* willfully walks into the whirling helicopter blades. Bluntly, the reasoning in the *Technical Explanation* does not represent the unanimous consensus of Earth’s entire planetary community of Bayesian researchers. At least, not yet.\n\n\nThe *Technical Explanation of Technical Explanation* is so named because it begins with this question:\n\n\n\n> *What is the difference between a technical understanding and a verbal understanding?*\n> \n> \n\n\n\n\n---\n\n\n*A fable:*\n\n\n\n> Once upon a time, there was a teacher who cared for a group of physics students. One day she called them into her class, and showed them a wide, square plate of metal, next to a hot radiator. The students each put their hand on the plate, and found the side next to the radiator cool, and the distant side warm. And the teacher said, write down your guess why this happens. Some students guessed convection of air currents, and others guessed strange patterns of metals in the plate, and not one put down ‘This seems to me impossible’, and the answer was that before the students entered the room, the teacher turned the plate around.\n> \n> (Taken from Verhagen 2001.)\n\n\nThere are many morals to this fable, and I have told it with different morals in different contexts. I usually take the moral that your strength as a rationalist is measured by your ability to be more confused by fiction than by reality. If you are equally good at explaining any story, you have zero knowledge. 
Occasionally I have heard a story that sounds confusing, and reflexively suppressed my feeling of confusion and accepted the story, and then later learned that the original story was untrue. Each time this happens to me, I vow anew to focus consciously on my fleeting feelings of bewilderment.\n\n\nBut in this case, the moral is that the apocryphal students failed to understand what constituted a scientific explanation. If the students measured the heat of the plate at different points and different times, they would soon see a pattern in the numbers. If the students knew the diffusion equation for heat, they might calculate that the plate equilibrated with the radiator and environment two minutes and fifteen seconds ago, turned around, and now approaches equilibrium again. Instead the students wrote down words on paper, and thought they were doing physics. I should rather compare it to the random guessing of Greek philosophers, such as Heraclitus who said “All is Fire”, and fancied it his theory of everything.\n\n\nAs a child I read books of popular physics, and fancied myself knowledgeable; I knew sound was waves of air, light was waves of electromagnetism, matter was waves of complex probability amplitudes. When I grew up I read the Feynman Lectures on Physics, and discovered a gem called ‘the wave equation’. I thought about that equation, on and off for three days, until I saw to my satisfaction it was dumbfoundingly simple. And when I understood, I realized that during all the time I had believed the honest assurance of physicists that sound and light and matter were waves, I had not the vaguest idea what ‘wave’ meant to a physicist.\n\n\n\n\n---\n\n\nSo that is the difference between a technical understanding and a verbal understanding.\n\n\nDo you believe that? If so, you should have applied the knowledge, and said: “But why didn’t you give a technical explanation instead of a verbal explanation?”\n\n\n\n\n---\n\n\nIn “An Intuitive Explanation of Bayesian Reasoning” I tried to provide visual and physical metaphors for Bayesian probability; for example, evidence is a *weight* , a *pressure* upon belief, that *slides* prior probabilities to posterior probabilities.\n\n\nNow we add a new metaphor, which is also the mathematical terminology: Visualize *probability density* or *probability mass* – probability as a lump of clay that you must distribute over possible outcomes.\n\n\nLet’s say there’s a little light that can flash red, blue, or green each time you press a button. The light flashes one and only one color on each press of the button; the possibilities are mutually exclusive. You’re trying to predict the color of the next flash. On each try, you have a weight of clay, the probability mass, that you have to distribute over the possibilities red, green, and blue. You might put a fourth of your clay on the “green” possibility, a fourth of your clay on the “blue” possibility, and half your clay on the “red” possibility – like assigning a probability of green:25%, blue:25%, and red:50%. The metaphor is that *probability is a conserved resource* , to dole out sparingly. If you think that “blue” is more likely to flash on the next experiment, you can assign a higher probability to blue, but you have to take the probability mass from the other hypotheses – maybe steal some clay from red and add it to blue. You can never get any more clay. Your probabilities can’t sum to more than 1.0 (100%). 
You can’t predict a 75% chance of seeing red, and an 80% chance of seeing blue.\n\n\nWhy would you want to be careful with your probability mass, or dole it out sparingly? Why not slop probability all over the place? Let’s shift the metaphor from clay to money. You can bet up to a dollar of play money on each press of the button. An experimenter stands nearby, and pays you an amount of real money that depends on how much play money you bet on the *winning* light. We don’t care how you distributed your remaining play money over the losing lights. The only thing that matters is how much you bet on the light that actually won.\n\n\nBut we must carefully construct the scoring rule used to pay off the winners, if we want the players to be careful with their bets. Suppose the experimenter pays each player real money equal to the play money bet on the winning color. Under this scoring rule, if you observe that red comes up 6 times out of 10, your best strategy is to bet, not 60 cents on red, but the entire dollar on red, and you don’t care about the frequencies of blue and green. Why? Let’s say that blue and green each come up around 2 times out of 10. And suppose you bet 60 cents on red, 20 cents on blue, and 20 cents on green. In this case, 6 times out of 10 you would win 60 cents, and 4 times out of 10 you would win 20 cents, for an average payoff of 44 cents. Under that scoring rule, it makes more sense to allocate the entire dollar to red, and win an entire dollar 6 times out of 10. 4 times out of 10 you would win nothing. Your average payoff would be 60 cents.\n\n\nIf we wrote down the function for the payoff, it would be `Payoff = P(winner)`, where `P(winner)` is the amount of play money you bet on the winning color on that round. If we wrote down the function for the *expected payoff* given that Payoff rule, it would be `Expectation[Payoff] = (Sum[P(color)*F(color)]` for each color) . `P(color)` is the amount of play money you bet on a color, and `F(color)` is the frequency with which that color wins.\n\n\nSuppose that the actual frequency of the lights is blue:30%, green:20%, and red:50%. And suppose that on each round I bet blue:$0.40, green:$0.50, and red:$0.10. I would get $0.40 30% of the time, $0.50 20% of the time, and $0.10 50% of the time, for an average payoff of $0.12 + $0.10 + $0.05 or $0.27.\n\n\nThat is:\n\n\n\n```\nP(color) = play money assigned to that color\nF(color) = frequency with which that color wins\nPayoff = P(winner) = amount of play money allocated to winning color \n```\n\nActual frequencies of winning:\n\n\n\n```\nBlue: 30% Green: 20% Red: 50%\n```\n\nIn the long run, red wins 50% of the time, green wins 20% of the time, and blue wins 30% of the time. So our *average* payoff on each round is 50% of the payoff if red wins, plus 20% of the payoff if green wins, plus 30% of the payoff if blue wins.\n\n\nThe payoff is a function of the winning color and the betting scheme. We want to compute the *average* payoff, given a betting scheme and the *frequencies* at which each color wins. The mathematical term for this kind of computation, taking a function of each case and weighting it by the frequency of that case, is an *expectation* . 
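\n\n\n(A minimal sketch of that computation, for readers who prefer running code to sums – Python is assumed here, and the colors, bets, and function name are only illustrative, mirroring the red/blue/green example above.)\n\n\n```\n# Sketch only: average payoff under the naive rule Payoff = P(winner),\n# i.e. you are paid whatever play money you bet on the color that actually flashed.\n\ndef expected_payoff(bets, freqs):\n    # Weight the payoff in each case by how often that case occurs.\n    return sum(bets[color] * freqs[color] for color in freqs)\n\nfreqs = {'red': 0.6, 'blue': 0.2, 'green': 0.2}  # observed winning frequencies\nprint(expected_payoff({'red': 0.60, 'blue': 0.20, 'green': 0.20}, freqs))  # ~0.44\nprint(expected_payoff({'red': 1.00, 'blue': 0.00, 'green': 0.00}, freqs))  # 0.6 – the whole dollar on red does better\n```\n\n\n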
Thus, to compute our *expected payoff* we would calculate:\n\n\n\n```\nExpectation[Payoff] = Sum[P(color)*F(color)] for each color \n```\n\n\n```\nP(color)*F(color) for blue = $0.40 * 30% = $0.12\n+ P(color)*F(color) for green = $0.50 * 20% = $0.10\n+ P(color)*F(color) for red = $0.10 * 50% = $0.05\n```\n\n\n```\n= $0.12 + $0.10 + $0.05\n= $0.27\n```\n\nWith this betting scheme I’ll win, on average, around 27 cents per round.\n\n\nI allocated my play money in a grossly arbitrary way, and the question arises: Can I increase my expected payoff by allocating my play money more wisely? *Given the scoring rule provided,* I maximize my expected payoff by allocating my *entire* dollar to red. Despite my *expected* payoff of fifty cents per round, the light might *actually* flash green, blue, blue, green, green and I would receive an *actual* payoff of zero. However, the chance of the light coming up non-red on five successive rounds is approximately 3%.\n\n\n\n> Tversky and Edwards (1966) conducted an experiment. Subjects were shown a succession of cards, each card either red or blue. 70% of the cards were blue, and 30% red; the color sequence was random. The subjects, asked to guess each succeeding card, would guess blue around 70% of the time, and red about 30% of the time – as if they thought they had some way of predicting the random sequence! Even when the subjects were paid a nickel for each correct guess, they still only guessed blue about 76% of the time. Why is this odd? Because you do not need to bet on a guess to test it. You could just say “blue” each time, being paid a nickel about 70% of the time, accumulating thirty-five dollars over a thousand trials, while mentally noting your private guesses for any (imaginary) patterns you thought you spotted. If your predictions came out right, then you could switch to the newly discovered sequence. There was no need for the subjects to bet on any patterns they thought they saw; they could have simply bet on blue until some hypothesis was confirmed . But if human beings reasoned like that, people would not buy lottery tickets, but instead write down predictions in notebooks at home, and begin buying lottery tickets only when their predictions began succeeding.\n> \n> The mistake revealed by the experiment was not that the subjects looked for patterns in a random-seeming sequence; that is curiosity, an admirable human trait. Dawes (1988) comments on this experiment: “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.” But even if subjects refused to accept unpredictability and continued looking for patterns, they didn’t have to bet on their guesses. They just needed to make a mental note of the pattern’s prediction, then keep betting on blue while waiting for confirmation. My suspicion is that subjects just didn’t think of the winning strategy. They didn’t realize that their betting pattern did not have to resemble the observed sequence of cards. On each round, blue is the most likely next card. The best financial strategy is not betting a mostly-blue pattern resembling the mostly-blue sequence, but betting all blue, to win as many nickels as possible. If 70% of the time you predict blue and 30% of the time you predict red, and the cards do not correlate with your guesses, you shall predict correctly 0.7*0.7 + 0.3*0.3 = 58% of the time. 
If 100% of the time you predict blue, you’ll get a nickel 70% of the time.\n> \n> Under conditions of uncertainty, your optimal betting pattern doesn’t resemble a typical sequence of cards. Similarly, I wonder how many betters on horse races realize that you don’t win by betting on the horse you think will win the race, but by betting on horses whose payoffs exceed what you think are the odds. But then, statistical thinkers that sophisticated would probably not bet on horse races\n> \n> \n\n\nA *proper scoring rule* (another standard math term) is a rule for scoring bets so that you maximize your expected payoff by betting play money that exactly equals the chance of that color flashing. We want a scoring rule so that if the lights actually flash at the frequency blue:30%, green:20%, and red:50%, you can maximize your average payoff *only* by betting 30 cents on blue, 20 cents on green, and 50 cents on red. A proper scoring rule is one that forces your optimal bet to exactly report your estimate of the probabilities. (This is also sometimes known as a “strictly proper scoring rule”.) As we’ve seen, not all scoring rules have this property; and if you invent a plausible-sounding scoring rule at random, it probably *won’t* have the property.\n\n\nOne rule with this proper property is to pay a dollar minus the squared error of the bet, rather than the bet itself – if you bet 0.3 on the winning light, your error would be 0.7, your squared error would be 0.49, and a dollar minus your squared error would be fifty-one cents. (Presumably your play money is denominated in the square root of cents, so that the squared error is a monetary sum.) (Readers with calculus may verify that in the simpler case of a light that has only two colors, with p being the bet on the first color and f the frequency of the first color, the expected payoff f\\*(1-((1-p)^2)) + (1-f)\\*(1-(p^2)) , with p variable and f constant, has its global maximum when we set p=f .)\n\n\nWe shall *not* use the squared-error rule. Ordinary statisticians take the squared error of everything in sight, but not Bayesian statisticians.\n\n\nWe add a new requirement: we require, not only a proper scoring rule, but that our proper scoring rule gives us the same answer whether we apply it to rounds individually or combined. This is what Bayesians do instead of taking the squared error of things; we require invariances.\n\n\nSuppose I press the button twice in a row. There are nine possible outcomes: green-green, green-blue, green-red, blue-green, blue-blue, blue-red, red-green, red-blue, and red-red . Suppose that green wins, and then blue wins. The experimenter would assign the first score based on our probability assignments for p(green-1) and the second score based on p(blue-2|green-1) . We would make two predictions, and get two scores. Our first prediction was the probability we assigned to the color that won on the first round, green. Our second prediction was our probability that blue would win on the second round, *given* that green won on the first round. Why do we need to write p(blue-2|green-1) instead of just p(blue-2) ? Because you might have a hypothesis about the flashing light that says “blue never follows green”, or “blue always follows green” or “blue follows green with 70% probability”. If this is so, then after seeing green on the first round, you might want to revise your prediction – change your bets – for the second round. 
You can always revise your predictions right up to the moment the experimenter presses the button, using every scrap of information; but after the light flashes it is too late to change your bet.\n\n\n(Don’t remember how to read P(A|B) ? See [An Intuitive Explanation of Bayesian Reasoning](https://eyudkowsky.wpengine.com/rational/bayes) .)\n\n\nSuppose the actual outcome is green-1 followed by blue-2 . We require this invariance: I must get the same *total* score, regardless of whether:\n\n\n* I am scored twice, first on my prediction for p(green-1) , and second on my prediction for p(blue-2|green-1) .\n* I am scored once for my *joint* prediction p(blue-2 & green-1) .\n\n\nSuppose I assign a 60% probability to green-1 , and then the green light flashes. I must now produce probabilities for the colors on the second round. I assess the possibility blue-2 , and allocate it 25% of my probability mass. Lo and behold, on the second round the light flashes blue. So on the first round my bet on the winning color was 60%, and on the second round my bet on the winning color was 25%. But I might also, at the start of the experiment and after assigning p(green-1) , *imagine* that the light first flashes green, *imagine* updating my theories based on that information, and then say what confidence I will give to blue on the next round if the first round is green. That is, I generate the probabilities p(green-1) and p(blue-2|green-1) . By multiplying these two probabilities together we would get the joint probability, p(green-1 & blue-2) = 15%.\n\n\nA double experiment has nine possible outcomes. If I generate nine probabilities for p(green-1 & green-2), p(green-1 & blue-2), … , p(red-1 & blue-2), p(red-1 & red-2) , the probability mass must sum to no more than 1.0. I am giving predictions for nine mutually exclusive possibilities of a “double experiment”.\n\n\nWe require a scoring rule (and maybe it won’t look like anything an ordinary bookie would ever use) such that my score doesn’t change regardless of whether we consider the double result as two predictions or one prediction. I can treat the sequence of two results as a single experiment, “press the button twice”, and be scored on my prediction for p(blue-2 & green-1) = 15% . Or I can be scored once for my first prediction p(green-1) = 60% , then again on my prediction p(blue-2|green-1) = 25% . We require the same *total* score in either case, so that it doesn’t matter how we slice up the experiments and the predictions – the *total* score is always exactly the same. This is our invariance.\n\n\nWe have just required:\n\n\n\n```\nScore(p(green-1 & blue-2)) = Score(p(green-1)) + Score(p(blue-2|green-1)) \n```\n\nAnd we already know:\n\n\n\n```\np(green-1 & blue-2) = p(green-1) * p(blue-2|green-1)\n```\n\nThe only possible scoring rule is:\n\n\n\n```\nScore(p) = log(p)\n```\n\nThe new scoring rule is that your score is the *logarithm* of the probability you assigned to the winner.\n\n\nThe base of the logarithm is arbitrary – whether we use the logarithm base 10 or the logarithm base 2, the scoring rule has the desired invariance. But we must choose some actual base. A mathematician would choose base e; an engineer would choose base 10; a computer scientist would choose base 2. If we use base 10, we can convert to “decibels”, as in the *Intuitive Explanation* ; but sometimes bits are easier to manipulate.\n\n\nThe logarithm scoring rule is proper – it has its expected maximum when we say our exact expectations; it rewards honesty. 
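\n\n\nNeither properness claim actually requires calculus to check. Here is a minimal numerical sketch in Python (the function names are invented for this sketch): it scans candidate bets for a two-color light whose first color flashes 60% of the time, confirms that both the squared-error rule and the logarithm rule are maximized in expectation by betting the true frequency, and confirms that the log score of a joint prediction equals the sum of the round-by-round scores.\n\n\n\n```\nimport math\n\ndef expected_quadratic_payoff(p, f):\n    # Two-color light: bet p on the first color, (1 - p) on the second.\n    # The payoff each round is a dollar minus the squared error of the bet.\n    return f * (1 - (1 - p) ** 2) + (1 - f) * (1 - p ** 2)\n\ndef expected_log_score(p, f):\n    # Logarithm rule: the score is the log of the probability bet on the winner.\n    return f * math.log(p) + (1 - f) * math.log(1 - p)\n\nf = 0.6  # the first color actually flashes 60% of the time\ncandidates = [i / 100 for i in range(1, 100)]\nbest_quadratic = max(candidates, key=lambda p: expected_quadratic_payoff(p, f))\nbest_log = max(candidates, key=lambda p: expected_log_score(p, f))\nprint(best_quadratic, best_log)  # both come out at 0.6 -- honesty wins\n\n# Invariance: scoring the joint prediction gives the same total as scoring\n# the two rounds separately, because log(x * y) = log(x) + log(y).\np_green1, p_blue2_given_green1 = 0.60, 0.25\njoint = p_green1 * p_blue2_given_green1  # 0.15\nprint(math.log(joint), math.log(p_green1) + math.log(p_blue2_given_green1))\n```\n\n\n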
If we think the blue light has a 60% probability of flashing, and we calculate our expected payoff for different betting schemas, we find that we maximize our expected payoff by telling the experimenter “60%”. (Readers with calculus can verify this.) The scoring rule also gives an invariant total, regardless of whether pressing the button twice counts as “one experiment” or “two experiments”. However, payoffs are now all *negative* , since we are taking the logarithm of the probability and the probability is between 0 and 1. The logarithm base 10 of 0.1 is -1; the logarithm base 10 of 1% is -2. That’s okay. We accepted that the scoring rule might not look like anything a real bookie would ever use. If you like, you can imagine that the experimenter has a pile of money, and at the end of the experiment he awards you some amount minus your large negative score. (Er, the amount plus your negative score.) Maybe the experimenter has a hundred dollars, and at the end of a hundred rounds you accumulated a score of -48, so you get fifty-two dollars.\n\n\nA score of -48 in what base? We can eliminate the ambiguity in the score by specifying units. 10 decibels equals a factor of 10; -10 decibels equals a factor of 1/10. Assigning a probability of 0.01 to the actual outcome would score -20 decibels. A probability of 0.03 would score -15 decibels. Sometimes we may use bits: 1 bit is a factor of 2, -1 bit is a factor of 1/2. A probability of 0.25 would score -2 bits; a probability of 0.03 would score around -5 bits.\n\n\nIf you arrive at a probability assessment P for each color, with p(red), p(blue), p(green) , then your *expected score* is:\n\n\n\n```\nScore = log(p)\nExpectation[Score] = Sum[p*log(p)] for all outcomes p.\n \n```\n\nSuppose you had probabilities red:25%, blue:50%, green:25% . Let’s think in base 2 for a moment, to make things simpler. Your expected score is:\n\n\n\n```\nred: scores -2 bits, flashes 25% of the time\nblue: scores -1 bit, flashes 50% of the time\ngreen: scores -2 bits, flashes 25% of the time\n\n expected score: -1.50 bits\n```\n\n\n\n---\n\n\nContrast our Bayesian scoring rule with the ordinary or colloquial way of speaking about degrees of belief, where someone might casually say, “I’m 98% certain that canola oil contains more omega-3 fats than olive oil.” What they really mean by this is that they *feel* 98% certain – there’s something like a little progress bar that measures the strength of the emotion of certainty, and this progress bar is 98% full. And the emotional progress bar probably wouldn’t be exactly 98% full, if we had some way to measure. The word “98%” is just a colloquial way of saying: “I’m almost but not entirely certain.” It doesn’t mean that you could get the highest expected payoff by betting exactly 98 cents of play money on that outcome. You should only assign a *calibrated confidence* of 98% if you’re confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We’ll keep track of how often you’re right, over time, and if it turns out that when you say “90% sure” you’re right about 7 times out of 10, then we’ll say you’re *poorly calibrated* .\n\n\nRemember Spock from Star Trek? Spock often says something along the lines of, “Captain, if you steer the Enterprise directly into a black hole, our probability of survival is only 2.837%.” Yet nine times out of ten the Enterprise is not destroyed. 
What kind of tragic fool gives a figure with four significant digits of precision that is wrong by two orders of magnitude?\n\n\nThe people who write this stuff have no idea what scientists mean by “probability”. They suppose that a probability of 99.9% is something like feeling really sure. They suppose that Spock’s statement expresses the *challenge* of successfully steering the Enterprise through a black hole, like a video game rated five stars for difficulty. What *we* mean by “probability” is that if you utter the words “two percent probability” on fifty independent occasions, it better not happen more than once.\n\n\nIf you say “98% probable” a thousand times, and you are surprised only five times, we still ding you for poor calibration. You’re allocating too much probability mass to the possibility that you’re wrong. You should say “99.5% probable” to maximize your score. The scoring rule rewards *accurate* calibration, encouraging neither humility nor arrogance.\n\n\nAt this point it may occur to some readers that there’s an obvious way to achieve perfect calibration – just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you’re right half the time. Isn’t that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is *discrimination* .\n\n\nSuppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between “Yes” and “No”. Congratulations, you’re perfectly calibrated – answers for which you said “50% probability” were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. In ten experiments you said “50%” on twenty occasions – you said “50%” to Yes-1, No-1; Yes-2, No-2; … . On ten of those occasions the answer was correct, the occasions: Yes-1; No-2; No-3; … . And on ten of those occasions the answer was incorrect: No-1; Yes-2; Yes-3; …\n\n\nNow I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I’m more poorly calibrated than you. I said “90%” on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate “90%” into 80%, knowing that when I’m 90% sure I’m right about 80% of the time. But the probability you assigned to the final outcome is 1/2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)\\*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn’t, but my better *discrimination* between right and wrong answers more than makes up for it. My final score is higher – I assigned a greater joint probability to the final outcome of the entire experiment. If I’d been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 \\* 0.2^2, 0.006.\n\n\nIs it possible to do even better? Sure. You could have guessed every single answer correctly, and assigned a probability of 99% to each of your answers. Then the probability you assigned to the entire experimental outcome would be 0.99^10 ~ 90%.\n\n\nYour *score* would be log(90%), -0.45 decibels or -0.15 bits. 
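\n\n\nHere are those numbers side by side, as a quick Python sketch (the labels are invented for the sketch): the joint probability each answerer assigned to the whole ten-question outcome, with the corresponding scores in decibels and bits.\n\n\n\n```\nimport math\n\ndef log_score(p, base):\n    # Log score of the probability assigned to what actually happened.\n    return math.log(p, base)\n\njoint_probabilities = {\n    "coin-flipper, 50% on every answer": 0.5 ** 10,\n    "overconfident, 90% and wrong twice": 0.9 ** 8 * 0.1 ** 2,\n    "calibrated, 80% and wrong twice": 0.8 ** 8 * 0.2 ** 2,\n    "every answer right at 99% each": 0.99 ** 10,\n}\nfor label, p in joint_probabilities.items():\n    decibels = 10 * log_score(p, 10)\n    bits = log_score(p, 2)\n    print(f"{label}: p = {p:.4f}, {decibels:.1f} decibels, {bits:.2f} bits")\n```\n\n\n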
We need to take the logarithm so that if I try to maximize my *expected score* , Sum[p\\*log(p)] , I have no motive to cheat. Without the logarithm rule, I would maximize my expected score by assigning all my probability mass to the most probable outcome. Also, without the logarithm rule, my total score would be different depending on whether we counted several rounds as several experiments or as one experiment.\n\n\nA simple transform can fix poor calibration by decreasing discrimination. If you are in the habit of saying “million-to-one” on 90 correct and 10 incorrect answers for each hundred questions, we can perfect your calibration by replacing “million-to-one” with “nine-to-one”. In contrast, there’s no easy way to increase (successful) discrimination. If you habitually say “nine-to-one” on 90 correct answers for each hundred questions, I can easily increase your *claimed* discrimination by replacing “nine-to-one” with “million-to-one”. But no simple transform can increase your *actual* discrimination such that your reply distinguishes 95 correct answers and 5 incorrect answers. Yates et al. (2002): “Whereas good calibration often can be achieved by simple mathematical transformations (e.g., adding a constant to every probability judgment), good discrimination demands access to solid, predictive evidence and skill at exploiting that evidence, which are difficult to find in any real-life, practical situation.” If you lack the ability to distinguish truth from falsehood, you can achieve perfect calibration by confessing your ignorance; but confessing ignorance will not, of itself, distinguish truth from falsehood.\n\n\nWe thus dispose of another false stereotype of rationality, that rationality consists of being humble and modest and confessing helplessness in the face of the unknown. That’s just the cheater’s way out, assigning a 50% probability to all yes-or-no questions. Our scoring rule encourages you to do better if you can. If you are ignorant, confess your ignorance; if you are confident, confess your confidence. We penalize you for being confident and wrong, but we also reward you for being confident and right. That is the virtue of a proper scoring rule.\n\n\n\n\n---\n\n\nSuppose I flip a coin twenty times. If I believe the coin is fair, the best prediction I can make is to predict an even chance of heads or tails on each flip. If I believe the coin is fair, I assign the same probability to every possible sequence of twenty coinflips. There are roughly a million (1,048,576) possible sequences of twenty coinflips, and I have only 1.0 of probability mass to play with. So I assign to each *individual* possible sequence a probability of (1/2)^20 – odds of about a million to one; -20 bits or -60 decibels.\n\n\nI made an experimental prediction and got a score of -60 decibels! Doesn’t this falsify the hypothesis? Intuitively, no. We do not flip a coin twenty times and see a random-looking result, then reel back and say, why, the odds of that are a million to one. But the odds *are* a million to one against seeing that exact sequence, as I would discover if I naively predicted the exact same outcome for the *next* sequence of twenty coinflips. It’s okay to have theories that assign tiny probabilities to outcomes, so long as no other theory does better. But if someone used an alternate hypothesis to write down the exact sequence in a sealed envelope in advance, and she assigned a probability of 99%, I would suspect the fairness of the coin.
Provided that she only sealed *one* envelope, and not a million.\n\n\nThat tells us *what* we ought common-sensically to answer, but it doesn’t say *how* the common-sense answer arises from the math. To say *why* the common sense is correct, we need to integrate all that has been said so far into the framework of Bayesian revision of belief. When we’re done, we’ll have a technical understanding of the difference between a verbal understanding and a technical understanding.\n\n\n\n\n---\n\n\nImagine an experiment which produces an integer result between 0 and 99. For example, the experiment might be a particle counter that tells us how many particles have passed through in a minute. Or the experiment might be to visit the supermarket on Wednesday, check the price of a 10-ounce bag of crushed walnuts, and write down the last two digits of the price.\n\n\nWe are testing several different hypotheses that try to predict the experimental result. Each hypothesis produces a probability distribution over all possible results; in this case, the integers between zero and ninety-nine. The possibilities are mutually exclusive, so the probability mass in the distribution must sum to 1.0 (or less); we cannot predict a 90% probability of seeing 42 and also a 90% probability of seeing 43.\n\n\nSuppose there is a precise hypothesis, which predicts a 90% chance of seeing the result 51. (I.e., the hypothesis is that the supermarket usually prices walnuts with a price of “X dollars and 51 cents”.) The precise theory has staked 90% of its probability mass on the outcome 51. This leaves 10% probability mass remaining to spread over 99 other possible outcomes – all the numbers between 0 and 99 *except* 51. The theory makes no further specification, so we spread the remaining 10% probability mass evenly over 99 possibilities, assigning a probability of 1/990 to each non-51 result. For ease of writing, we’ll approximate 1/990 as 0.1%.\n\n\nThis probability distribution is analogous to the *likelihood* or *conditional probability* of the result given the hypothesis. Let us call it the *likelihood distribution* for the hypothesis, our chance of seeing each specified outcome *if* the hypothesis is true. The likelihood distribution for a hypothesis H is a function composed of all the conditional probabilities for p(0|H)=0.001, p(1|H)=0.001, …, p(51|H)=0.9, …, p(99|H)=0.001 . The probability mass contained in the likelihood distribution must sum to 1. It is a general rule that there is no way we can have a 90% chance of seeing 51 and also a 90% chance of seeing 52. Therefore, if we first assume the hypothesis H is true, there is still no way we can have a 90% chance of seeing 51 and also a 90% chance of seeing 52.\n\n\nThe precise theory predicts a 90% probability of seeing 51. Let there be also a vague theory, which predicts “a 90% probability of seeing a number in the 50s”.\n\n\nSeeing the result 51, we do not say the outcome confirms both theories equally. Both theories made predictions, and both assigned probabilities of 90%, and the result 51 confirms both predictions. But the precise theory has an advantage because it concentrates its probability mass into a sharper point. If the vague theory makes no further specification, we count “a 90% probability of seeing a number in the 50s” as a 9% probability of seeing each number between 50 and 59.\n\n\nSuppose we started with even odds in favor of the precise theory and the vague theory – odds of 1:1, or 50% probability for either hypothesis being true. 
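\n\n\nIt may help to see the two likelihood distributions written out concretely. A minimal Python sketch (the function names are invented here, and it assumes the vague theory also spreads its leftover 10% evenly over the ninety values outside the fifties):\n\n\n\n```\ndef precise_theory():\n    # 90% on exactly 51; the leftover 10% spread over the other 99 values, about 0.1% each.\n    dist = {n: 0.10 / 99 for n in range(100)}\n    dist[51] = 0.90\n    return dist\n\ndef vague_theory():\n    # 90% on "somewhere in the fifties", i.e. 9% on each number from 50 to 59;\n    # the leftover 10% spread over the other 90 values.\n    dist = {n: 0.10 / 90 for n in range(100)}\n    for n in range(50, 60):\n        dist[n] = 0.90 / 10\n    return dist\n\nfor label, dist in (("precise", precise_theory()), ("vague", vague_theory())):\n    # Each distribution sums to 1 -- no way to promise 90% in two places at once.\n    print(label, round(sum(dist.values()), 10), dist[51])\n# prints: precise 1.0 0.9, then vague 1.0 0.09\n```\n\n\n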
After seeing the result 51, what are the posterior odds of the precise theory being true? (If you don’t remember how to work this problem, return to [An Intuitive Explanation of Bayesian Reasoning](https://eyudkowsky.wpengine.com/rational/bayes) .) The predictions of the two theories are analogous to their likelihood assignments – the conditional probability of seeing the result, given that the theory is true. What is the likelihood ratio between the two theories? The first theory allocated 90% probability mass to the *exact* outcome. The vague theory allocated 9% probability mass to the exact outcome. The likelihood ratio is 10:1. So if we started with even 1:1 odds, the posterior odds are 10:1 in favor of the precise theory. The differential pressure of the two conditional probabilities pushed our prior confidence of 50% to a posterior confidence of about 91% that the precise theory is correct. *Assuming* that these are the only hypotheses being tested, that this is the only evidence under consideration, and so on.\n\n\nWhy did the vague theory lose when both theories fit the evidence? The vague theory is timid; it makes a broad prediction, hedges its bets, allows many possibilities that would falsify the precise theory. This is not the virtue of a scientific theory. Philosophers of science tell us that theories should be bold, and subject themselves willingly to falsification if their prediction fails (Popper 1959). Now we see why. The precise theory concentrates its probability mass into a sharper point and thereby leaves itself vulnerable to falsification if the real outcome hits elsewhere; but if the predicted outcome is correct, precision has a tremendous likelihood advantage over vagueness.\n\n\nThe laws of probability theory provide no way to cheat, to make a vague hypothesis such that any result between 50 and 59 counts for as much favorable confirmation as the precise theory receives, for that would require probability mass summing to 900%. There is no way to cheat, providing you record your prediction *in advance* , so you cannot claim afterward that your theory assigns a probability of 90% to whichever result arrived. Humans are very fond of making their predictions afterward, so the social process of science requires an advance prediction before we say that a result confirms a theory. But how humans may move in harmony with the way of Bayes, and so wield the power, is a separate issue from whether the math works. When we’re doing the math, we just take for granted that likelihood density functions are fixed properties of a hypothesis and the probability mass sums to 1 and you’d never dream of doing it any other way.\n\n\nYou may want to take a moment to visualize that, *if* we define probability in terms of calibration, Bayes’ Theorem relates the calibrations. Suppose I guess that Theory 1 is 50% likely to be true, and I guess that Theory 2 is 50% likely to be true. Suppose I am well-calibrated; when I utter the words “fifty percent”, the event happens about half the time. And then I see a result R which would happen around nine-tenths of the time given Theory 1, and around nine-hundredths of the time given Theory 2, and I know this is so, and I apply Bayesian reasoning. If I was perfectly calibrated initially (despite the poor discrimination of saying 50/50), I will still be perfectly calibrated (and better discriminated) after I say that my confidence in Theory 1 is now 91%. 
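\n\n\nSpelled out as arithmetic, under the stated assumption that these are the only two hypotheses in play (a minimal Python sketch; the variable names are invented here):\n\n\n\n```\nprior_1, prior_2 = 0.5, 0.5              # the well-calibrated 50/50 starting point\nlikelihood_1, likelihood_2 = 0.90, 0.09  # p(R | Theory 1), p(R | Theory 2)\n\nevidence = prior_1 * likelihood_1 + prior_2 * likelihood_2\nposterior_1 = prior_1 * likelihood_1 / evidence\nprint(posterior_1)  # 0.9090... = 10/11, the 91% above\n\n# In odds form: prior odds 1:1 times likelihood ratio 10:1 gives posterior odds 10:1.\n```\n\n\n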
If I repeated this kind of situation many times, I would be right around ten-elevenths of the time when I said “91%”. If I reason using Bayesian rules, and I start from well-calibrated priors, then my conclusions will also be well-calibrated. This only holds true if we define probability in terms of calibration! If “90% sure” is instead interpreted as, say, the strength of the emotion of surety, there is no reason to expect the posterior emotion to stand in an exact Bayesian relation to the prior emotion.\n\n\nLet the prior odds be ten to one in favor of the vague theory. Why? Suppose our way of describing hypotheses allows us to either specify a precise number, or to just specify a first-digit; we can say “51”, “63”, “72”, or “in the fifties/sixties/seventies”. Suppose we think that the real answer is about equally liable to be an answer of the first kind or the second. However, given the problem, there are a hundred possible hypotheses of the first kind, and only ten hypotheses of the second kind. So if we think that either *class* of hypotheses has about an equal prior chance of being correct, we have to spread out the prior probability mass over ten times as many precise theories as vague theories. The precise theory that predicts exactly 51 would thus have one-tenth as much prior probability mass as the vague theory that predicts a number in the fifties. After seeing 51, the odds would go from 1:10 in favor of the vague theory to 1:1, even odds for the precise theory and the vague theory.\n\n\nIf you look at this carefully, it’s exactly what common sense would expect. You start out uncertain of whether a phenomenon is the kind of phenomenon that produces exactly the same result every time, or if it’s the kind of phenomenon that produces a result in the Xties every time. (Maybe the phenomenon is a price range at the supermarket, if you need some reason to suppose that 50..59 is an acceptable range but 49..58 isn’t.) You take a *single* measurement and the answer is 51. Well, that could be because the phenomenon is exactly 51, or because it’s in the fifties. So the remaining precise theory has the same odds as the remaining vague theory, which requires that the vague theory must have started out ten times as probable as that precise theory, since the precise theory has a sharper fit to the evidence.\n\n\nIf we just see one number, like 51, it doesn’t change the prior probability that the phenomenon itself was “precise” or “vague”. But, in effect, it concentrates all the probability mass of those two *classes* of hypothesis into a single surviving hypothesis of each class.\n\n\nOf course, it is a severe error to say that a *phenomenon* is precise or vague, a case of what Jaynes calls the Mind Projection Fallacy (Jaynes 1996). Precision or vagueness is a property of maps, not territories. Rather we should ask if the price in the supermarket stays constant or shifts about. A hypothesis of the “vague” sort is a good description of a price that shifts about. A precise map will suit a constant territory.\n\n\nAnother example: You flip a coin ten times and see the sequence HHTTH:TTTTH. Maybe you started out thinking there was a 1% chance this coin was fixed. Doesn’t the hypothesis “This coin is fixed to produce HHTTH:TTTTH” assign a thousand times the likelihood mass to the observed outcome, compared to the fair coin hypothesis? Yes. Don’t the posterior odds that the coin is fixed go to 10:1? No. 
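\n\n\nHere is a minimal bookkeeping sketch of why the answer is no, assuming – as the next paragraph spells out – that the 1% “fixed” prior is spread evenly over all 1,024 possible ten-flip sequences (a Python sketch; the variable names are invented here):\n\n\n\n```\np_fair = 0.99\np_each_fixed = 0.01 / 2 ** 10      # prior on any one specific fixed sequence\np_other_fixed = 0.01 - p_each_fixed\n\n# Likelihoods of the observed ten-flip sequence HHTTH:TTTTH:\nlike_fair = 0.5 ** 10              # 1/1024\nlike_match = 1.0                   # the one fixed coin that produces exactly this sequence\nlike_other_fixed = 0.0             # every other fixed coin is ruled out\n\nevidence = (p_fair * like_fair + p_each_fixed * like_match\n            + p_other_fixed * like_other_fixed)\nposterior_fair = p_fair * like_fair / evidence\nposterior_this_fixed = p_each_fixed * like_match / evidence\nprint(posterior_fair, posterior_this_fixed)  # about 0.99 and 0.01\n\n# P(fixed to *some* sequence) is still about 1% -- unchanged by the flips --\n# but all of that 1% now sits on the one sequence we actually saw.\n```\n\n\n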
The 1% prior probability that “the coin is fixed” has to cover every possible kind of fixed coin – a coin fixed to produce HHTTH:TTTTH, a coin fixed to produce TTHHT:HHHHT, etc. The prior probability the coin is fixed to produce HHTTH:TTTTH is not 1%, but a thousandth of one percent. Afterward, the posterior probability the coin is fixed to produce HHTTH:TTTTH is one percent. Which is to say: You thought the coin was probably fair but had a one percent chance of being fixed to some random sequence; you flipped the coin; the coin produced a random-looking sequence; and that doesn’t tell you anything about whether the coin is fair or fixed. It does tell you, if the coin is fixed, *which* sequence it is fixed to.\n\n\nThis parable helps illustrate why Bayesians *must* think about prior probabilities. There is a branch of statistics, sometimes called “orthodox” or “classical” statistics, which insists on paying attention only to likelihoods. But if you only pay attention to likelihoods, then eventually some fixed-coin hypothesis will always defeat the fair coin hypothesis, a phenomenon known as “overfitting” the theory to the data. After thirty flips, the *likelihood* is a billion times as great for the fixed-coin hypothesis with that sequence, as for the fair coin hypothesis. Only if the fixed-coin hypothesis (or rather, that *specific* fixed-coin hypothesis) is a billion times less probable *a priori* , can the fixed-coin hypothesis possibly lose to the fair coin hypothesis.\n\n\nIf you shake the coin to reset it, and start flipping the coin *again* , and the coin produces HHTTH:TTTTH *again* , that is a different matter. That does raise the posterior odds of the fixed-coin hypothesis to 10:1, even if the starting probability was only 1%.\n\n\nSimilarly, if we perform two successive measurements of the particle counter (or the supermarket price on Wednesdays), and *both* measurements return 51, the precise theory wins by odds of 10 to 1.\n\n\nSo the precise theory wins, but the vague theory would still score better than no theory at all. Consider a third theory, the hypothesis of zero knowledge or *maximum-entropy distribution* , which makes equally probable any result between 0 and 99. Suppose we see the result 51. The vague theory produced a better prediction than the maximum-entropy distribution – assigned a greater likelihood to the outcome we observed. The vague theory is, literally, better than nothing. Suppose we started with odds of 1:20 in favor of the hypothesis of complete ignorance. (Why odds of 1:20? There is only one hypothesis of complete ignorance, and moreover, it’s a particularly simple and intuitive kind of hypothesis. Occam’s Razor.) After seeing the result of 51, predicted at 9% by the vague theory versus 1% by complete ignorance, the posterior odds go to 9:20, or roughly 1:2. If we then see another result of 51, the posterior odds go to 81:20, or about 80% probability for the vague theory, assuming there is no more precise theory under consideration.\n\n\nYet the timidity of the vague theory – its unwillingness to produce an *exact* prediction and accept falsification on any other result – renders it vulnerable to the bold, precise theory. (Providing, of course, that the bold theory correctly guesses the outcome!) Suppose the prior odds were 1:10:200 for the precise, vague, and ignorant theories – prior probabilities of 0.5%, 4.7%, and 94.8% for the precise, vague and ignorant theories.
This figure reflects our prior probability distribution over *classes* of hypotheses, with the probability mass distributed over entire classes as follows: 50% that the phenomenon shifts across all digits, 25% that the phenomenon shifts around within some decimal bracket, and 25% that the phenomenon repeats the same number each time. 1 hypothesis of complete ignorance, 10 possible hypotheses for a decimal bracket, 100 possible hypotheses for a repeating number. Thus, prior odds of 1:10:200 for the precise hypothesis 51, the vague hypothesis “fifties”, and the hypothesis of complete ignorance.\n\n\nAfter seeing a result of 51, with assigned probability of 90%, 9%, and 1%, the posterior odds go to 90:90:200 = 9:9:20. After seeing an additional result of 51, the posterior odds go to 810:81:20, or 89%, 9%, and 2%. The precise theory is now favored over the vague theory, which in turn is favored over the ignorant theory.\n\n\nNow consider a stupid theory, which predicts a 90% probability of seeing a result between 0 and 9. The stupid theory assigns a probability of 0.1% to the actual outcome, 51. If the odds were initially 1:10:200:10 for the precise, vague, ignorant, and stupid theories, the posterior odds after seeing 51 once would be 90:90:200:1. The stupid theory has been falsified (posterior probability of 0.2%).\n\n\nIt is possible to have a model so bad that it is worse than nothing, if the model concentrates its probability mass away from the actual outcome, makes confident predictions of wrong answers. Such a hypothesis is so poor that it loses against the hypothesis of complete ignorance. Ignorance is better than anti-knowledge.\n\n\n*Side note 1:* In the field of Artificial Intelligence, there is a sometime fad that praises the glory of randomness. Occasionally an AI researcher discovers that if they add noise to one of their algorithms, the algorithm works better. This result is reported with great enthusiasm, followed by much fulsome praise of the creative powers of chaos, unpredictability, spontaneity, ignorance of what your own AI is doing, et cetera. (See [Imagination Engines Inc.](http://web.archive.org/web/20080430194232/http://www.imagination-engines.com/) for an example; according to their sales literature they sell wounded and dying neural nets.) But how sad is an algorithm if you can *increase* its performance by injecting entropy into intermediate processing stages? The algorithm must be so deranged that some of its work goes into concentrating probability mass *away* from good solutions. If injecting randomness results in a reliable improvement, then some aspect of the algorithm must do reliably worse than random. Only in AI would people devise algorithms *literally dumber than a bag of bricks* , boost the results slightly back toward ignorance, and then argue for the healing power of noise.\n\n\n*Side note 2:* Robert Pirsig once said: “The world’s stupidest man may say the Sun is shining, but that doesn’t make it dark out.” (Pirsig 1974.) It is a classical logical fallacy to say, “Hitler believed in the Pythagorean Theorem. You don’t want to agree with Hitler, do you?” Consider that for someone to be reliably wrong on yes-or-no questions – say, to be wrong 90% of the time – that person would need to do all the hard work of discriminating truth from falsehood, just to be wrong so reliably. If someone is wrong on yes-or-no questions 99% of the time, we can get 99% accuracy just by inverting the responses.
Anyone that stupid would be smarter than I am.\n\n\nSuppose that in our experiment we see the results 52, 51, 58. The precise theory gives this conjunctive event a probability of a thousand to one times 90% times a thousand to one, while the vaguer theory gives this conjunctive event a probability of 9% cubed, which works out to… oh… um… let’s see… a million to one given the precise theory, versus a thousand to one given the vague theory. Or thereabouts; we are counting rough powers of ten. Versus a million to one given the zero-knowledge distribution that assigns an equal probability to all outcomes. Versus a billion to one given a model worse than nothing, the stupid hypothesis, which claims a 90% probability of seeing a number less than 10. Using these approximate numbers, the vague theory racks up a score of -30 decibels (a probability of 1/1000 for the whole experimental outcome), versus scores of -60 for the precise theory, -60 for the ignorant theory, and -90 for the stupid theory. It is not always true that the highest score wins, because we need to take into account our prior odds of 1:10:200:10, confidences of -23, -13, 0, and -13 decibels. The vague theory still comes in with the highest total score at -43 decibels. (If we ignored our prior probabilities, each new experiment would override the accumulated results of all the previous experiments; we could not accumulate knowledge. Furthermore, the fixed-coin hypothesis would always win.)\n\n\nAs always, we should not be alarmed that even the best theory still has a low score – recall the parable of the fair coin. Theories are approximations. In principle we might be able to predict the exact sequence of coinflips. But it would take better measurement and more computing power than we’re willing to expend. Maybe we could achieve 60/40 prediction of coinflips, with a good enough model…? We go with the best approximation we have, and try to achieve good calibration even if the discrimination isn’t perfect.\n\n\n\n\n---\n\n\nWe’ve conducted our analysis so far under the rules of Bayesian probability theory, in which there’s no way to have more than 100% probability mass, and hence no way to cheat so that any outcome can count as “confirmation” of your theory. Under Bayesian law, play money may not be counterfeited; you only have so much clay. If you allocate more probability mass in one place, you have to take it from somewhere else; a coin cannot have a 90% chance of turning up heads and a 90% chance of turning up tails.\n\n\nUnfortunately, human beings are not Bayesians. Human beings bizarrely attempt to *defend* hypotheses, making a deliberate effort to prove them or prevent disproof. This behavior has no analogue in the laws of probability theory or decision theory. In formal probability theory the hypothesis *is* , and the evidence *is* , and either the hypothesis is confirmed or it is not. In formal decision theory, an agent may make an effort to investigate some issue of which the agent is currently uncertain, not knowing whether the evidence shall go one way or the other. In neither case does one ever deliberately try to prove an idea, or try to avoid disproving it. One may *test* ideas of which one is genuinely uncertain, but not have a “preferred” outcome of the investigation. One may not try to prove hypotheses, nor prevent their proof. 
I cannot properly convey just how ridiculous the notion would be, to a true Bayesian; there are not even words in Bayes-language to describe the mistake…\n\n\nOne classic method for preventing a theory from disproof is arguing *post facto* that any observation presented proves the theory. Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the *Cautio Criminalis* (‘prudence in criminal cases’) in which he bitingly described the decision tree for condemning accused witches. If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away. (Spee 1631.) Spee acted as confessor to many witches; he was thus in a position to observe *every* branch of the accusation tree, that no matter *what* the accused witch said or did, it was held a proof against her. In any individual case, you would only hear one branch of the dilemma.\n\n\nIt is for this reason that scientists write down their predictions in advance.\n\n\nIf you’ve read the *Intuitive Explanation* , you should recall the result I nicknamed the “Law of Conservation of Probability”, that for every expectation of evidence there is an equal and opposite expectation of counterevidence. If A is evidence in favor of B, not-A *must* be evidence in favor of not-B. The strengths of the evidences may not be equal; rare but strong evidence in one direction may be balanced by common but weak evidence in the other direction. But it is not possible for both A and not-A to be evidence in favor of B. That is, it’s not possible under the laws of probability theory. Humans often seem to want to have their cake and eat it too. Whichever result we witness is the one that proves our theory. As Spee put it, “The investigating committee would feel disgraced if it acquitted a woman; once arrested and in chains, she has to be guilty, by fair means or foul.”\n\n\nThe way human psychology seems to work is that first we see something happen, and then we try to argue that it matches whatever hypothesis we had in mind beforehand. Rather than conserved probability mass, to distribute over advance *predictions* , we have a feeling of *compatibility* – the degree to which the explanation and the event seem to ‘fit’. ‘Fit’ is not conserved. There is no equivalent of the rule that probability mass must sum to 1. A psychoanalyst may explain any possible behavior of a patient by constructing an appropriate structure of ‘rationalizations’ and ‘defenses’; it fits, therefore it must be true.\n\n\nNow consider the fable told at the start of this essay – the students seeing a radiator, and a metal plate next to the radiator. The students would never predict in advance that the side of the plate near the radiator would be cooler. Yet, seeing the fact, they managed to make their explanations ‘fit’. They lost their precious chance at bewilderment, to realize that their models did not predict the phenomenon they observed. They sacrificed their ability to be more confused by fiction than by truth. 
And they did not realize “heat induction, blah blah, therefore the near side is cooler” is a vague and verbal prediction, spread across an enormously wide range of possible values for specific measured temperatures. Applying equations of diffusion and equilibrium would give a *sharp* prediction for possible joint values. It might not specify the *first* values you measured, but when you knew a few values you could generate a sharp prediction for the rest. The score for the entire experimental outcome would be far better than any less precise alternative, especially a vague and verbal prediction.\n\n\n\n\n---\n\n\nYou now have a *technical* explanation of the difference between a verbal explanation and a technical explanation. It is a technical explanation because it enables you to calculate *exactly how technical* an explanation is. Vague hypotheses may be so vague that only a superhuman intelligence could calculate exactly how vague. Perhaps a sufficiently huge intelligence could extrapolate every possible experimental result, and extrapolate every possible verdict of the vague guesser for how well the vague hypothesis “fit”, and then renormalize the “fit” distribution into a likelihood distribution that summed to 1. But in principle one can still calculate exactly how vague is a vague hypothesis. The calculation is just not computationally tractable, the way that calculating airplane trajectories via quantum mechanics is not computationally tractable.\n\n\nI hold that everyone needs to learn at least one technical subject. Physics; computer science; evolutionary biology; or Bayesian probability theory, but *something* . Someone with *no* technical subjects under their belt has no referent for what it means to “explain” something. They may think “All is Fire” is an explanation. Therefore do I advocate that Bayesian probability theory should be taught in high school. Bayesian probability theory is the sole piece of math I know that is accessible at the high school level, and that permits a *technical* understanding of a subject matter – the dynamics of belief – that is an everyday real-world domain and has emotionally meaningful consequences. Studying Bayesian probability would give students a referent for what it means to “explain” something.\n\n\nToo many academics think that being “technical” means speaking in dry polysyllabisms. Here’s a “technical” explanation of technical explanation:\n\n\nThe equations of probability theory favor hypotheses that strongly predict the exact observed data. Strong models boldly concentrate their probability density into precise outcomes, making them falsifiable if the data hits elsewhere, and giving them tremendous likelihood advantages over models less bold, less precise. Verbal explanation runs on psychological evaluation of unconserved post facto compatibility instead of conserved ante facto probability density. And verbal explanation does not paint sharply detailed pictures, implying a smooth likelihood distribution in the vicinity of the data.\n\n\nIs this satisfactory? No. Hear the impressive and weighty sentences, resounding with the dull thud of expertise. See the hapless students, writing those sentences on a sheet of paper. Even after the listeners hear the ritual words, they can perform no calculations. *You* know the math, so the words are meaningful. You can perform the calculations after hearing the impressive words, just as you could have done before. But what of one who did not see any calculations performed?
What new skills have they gained from that “technical” lecture, save the ability to recite fascinating words?\n\n\n“Bayesian” sure is a fascinating word, isn’t it? Let’s get it out of our systems: Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes…\n\n\nThe sacred syllable is meaningless, except insofar as it tells someone to apply math. Therefore the one who hears must already know the math.\n\n\nConversely, if you know the math, you can be as silly as you like, and still technical.\n\n\nWe thus dispose of yet another stereotype of rationality, that rationality consists of sere formality and humorless solemnity. What has that to do with the problem of distinguishing truth from falsehood? What has that to do with attaining the map that reflects the territory? A scientist worthy of a lab coat should be able to make original discoveries while wearing a clown suit, or give a lecture in a high squeaky voice from inhaling helium. It is written nowhere in the math of probability theory that one may have no fun. The blade that cuts through to the correct answer has no dignity or silliness of itself, though it may fit the hand of a silly wielder.\n\n\n\n\n---\n\n\nOur physics uses the same *theory* to describe an airplane, and collisions in a particle accelerator – particles and airplanes both obey special relativity and general relativity and quantum electrodynamics and quantum chromodynamics. But we use entirely different *models* to understand the aerodynamics of a 747 and a collision between gold nuclei. A computer modeling the aerodynamics of the 747 may not contain a single token representing an atom, even though no one denies that the 747 is made of atoms.\n\n\nA *useful* model isn’t just something you know, as you know that the airplane is made of atoms. A useful model is knowledge you can compute in reasonable time to predict real-world events you know how to observe. Physicists use different models to predict airplanes and particle collisions, not because the two events take place in different universes with different laws of physics, but because it would be too expensive to compute the airplane particle by particle.\n\n\nAs the saying goes: “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.” Sometimes you need a smaller map, to fit in a more cramped glove compartment. It doesn’t change the territory. The precision or vagueness of the map isn’t a fact about the territory, it’s a fact about the map.\n\n\nMaybe someone will find that, using a model that violates conservation of momentum just a little, you can compute the aerodynamics of the 747 much more *cheaply* than if you insist that momentum is exactly conserved. So if you’ve got two computers competing to produce the best prediction, it might be that the best prediction comes from the model that violates conservation of momentum. This doesn’t mean that the 747 violates conservation of momentum in real life. Neither model uses individual atoms, but that doesn’t imply the 747 is not made of atoms. You would prove the 747 is made of atoms with experimental data that the aerodynamic models couldn’t handle; for example, you would train a scanning tunneling microscope on a section of wing and look at the atoms. Similarly, you could use a finer measuring instrument to discriminate between a 747 that *really* disobeyed conservation of momentum like the cheap approximation predicted, versus a 747 that obeyed conservation of momentum like underlying physics predicted. 
The winning theory is the one that best predicts all the experimental results together. Our Bayesian scoring rule gives us a way to combine the results of *all* our experiments, even experiments that use different methods.\n\n\nFurthermore, the atomic theory allows, embraces, and in some sense mandates the aerodynamic model. By thinking abstractly about the assumptions of atomic theory, we realize that the aerodynamic model ought to be a good (and much cheaper) approximation of the atomic theory, and so the atomic theory supports the aerodynamic model, rather than competing with it. A successful theory can embrace many models for different domains, so long as the models are acknowledged as approximations, and in each case the model is compatible with (or ideally mandated by) the underlying theory.\n\n\nOur *fundamental* physics – quantum mechanics, the standard family of particles, and relativity – is a theory that embraces an *enormous* family of models for macroscopic physical phenomena. There is the physics of liquids, and solids, and gases; yet this does not mean that there are *fundamental* things in the world that have the intrinsic property of liquidity.\n\n\n> “Apparently there is colour, apparently sweetness, apparently bitterness, actually there are only atoms and the void.”\n> – Democritus, 420 BC (from Robinson and Groves 1998).\n\n\n\n\n---\n\n\nIn arguing that a “technical” theory should be defined as a theory that sharply concentrates probability into specific advance predictions, I am setting an extremely high standard of strictness. We have seen that a vague theory *can* be better than nothing. A vague theory can win out over the hypothesis of ignorance, if there are no precise theories to compete against it.\n\n\nThere is an enormous family of models belonging to the central underlying theory of life and biology; the underlying theory that is sometimes called neo-Darwinism, natural selection, or evolution. Some models in evolutionary theory are quantitative. The way in which DNA encodes proteins is redundant; two different DNA sequences can code for exactly the same protein. There are 4 DNA bases {ATCG} and 64 possible combinations of three DNA bases. But those 64 possible codons describe only 20 amino acids plus a stop code. Genetic drift ought therefore to produce non-functional changes in species genomes, through mutations which by chance become fixed in the gene pool. The accumulation rate of non-functional differences between the genomes of two species with a common ancestor, depends on such parameters as the number of generations elapsed and the intensity of selection at that genetic locus. That’s an example of a member of the family of evolutionary models that produces quantitative predictions. There are also disequilibrium allele frequencies under selection, stable equilibria for game-theoretical strategies, sex ratios, et cetera.\n\n\nThis all comes under the heading of “fascinating words”. Unfortunately, there are certain religious factions that spread gross disinformation about evolutionary theory. So I emphasize that many models within evolutionary theory make quantitative predictions that are experimentally confirmed, and that such models are far more than sufficient to demonstrate that, e.g., humans and chimpanzees are related by a common ancestor.
If you’ve been victimized by creationist disinformation – that is, if you’ve heard any suggestion that evolutionary theory is controversial or untestable or “just a theory” or non-rigorous or non-technical or in any wise not confirmed by an unimaginably huge mound of experimental evidence – I recommend reading the [Talk.Origins FAQ](http://www.talkorigins.org/) and studying evolutionary biology with math.\n\n\nBut imagine going back in time to the nineteenth century, when the theory of natural selection had only just been discovered by Charles Darwin and Alfred Russel Wallace. Imagine evolutionism just after its birth, when the theory had nothing remotely like the modern-day body of quantitative models and great heaping mountains of experimental evidence. There was no way of knowing that humans and chimpanzees would be discovered to have 95% shared genetic material. No one knew that DNA existed. Yet even so, scientists flocked to the new theory of natural selection. And later it turned out that there *was* a precisely copied genetic material with the potential to mutate, that humans and chimps were provably related, etc.\n\n\nSo the very strict, very high standard that I proposed for a “technical” theory is too strict. Historically, it *has* been possible to successfully discriminate true theories from false theories, based on predictions of the sort I called “vague”. Vague predictions of, say, 80% confidence, can build up a huge advantage over alternate hypotheses, given enough experiments. Perhaps a theory of this kind, producing predictions that are not precisely detailed but are nonetheless correct, could be called “semitechnical”?\n\n\nBut surely technical theories are more reliable than semitechnical theories? Surely technical theories should take precedence, command greater respect? Surely physics, which produces exceedingly exact predictions, is in some sense better confirmed than evolutionary theory? Not implying that evolutionary theory is wrong, of course; but however vast the mountains of evidence favoring evolution, does not physics go one better through vast mountains of *precise* experimental confirmation? Observations of neutron stars confirm the predictions of General Relativity to within one part in a hundred trillion (10^14). What does evolutionary theory have to match that?\n\n\nSomeone – I think either Roger Penrose or Richard Dawkins – said once that measured by the simplicity of the theory and the amount of complexity it explained, Darwin had the single greatest idea in the history of time.\n\n\nOnce there was a conflict between 19th-century physics and 19th-century evolutionism. According to the best physical models then in use, the Sun could not have been burning very long. 3000 years on chemical energy, or 40 million years on gravitational energy. There was no energy source known to 19th-century physics that would permit longer burning. 19th-century physics was not *quite* as powerful as modern physics – it did not have predictions accurate to within one part in 10^14. But 19th-century physics still had the mathematical character of modern physics; a discipline whose models produced detailed, precise, quantitative predictions. 19th-century evolutionary theory was wholly semitechnical, without a scrap of quantitative modeling. Not even Mendel’s experiments with peas were then known. And yet it did seem likely that evolution would require longer than a paltry 40 million years in which to operate – hundreds of millions, even billions of years. 
The antiquity of the Earth was a vague and semitechnical prediction, of a vague and semitechnical theory. In contrast, the 19th-century physicists had a precise and quantitative model, which through formal calculation produced the precise and quantitative dictum that the Sun simply could not have burned that long.\n\n\n> “The limitations of geological periods, imposed by physical science, cannot, of course, disprove the hypothesis of transmutation of species; but it does seem sufficient to disprove the doctrine that transmutation has taken place through ‘descent with modification by natural selection.’”\n> – Lord Kelvin, distinguished 19th-century physicist (from Zapato 1998).\n\n\nHistory records who won.\n\n\nThe moral? If you can give 80% confident advance predictions on yes-or-no questions, it may be a “vague” theory, it may be wrong one time out of five, but you can still build up a heck of a huge scoring lead over the hypothesis of ignorance. Enough to confirm a theory, if there are no better competitors. Reality is consistent; every *correct* theory about the universe is compatible with every other correct theory. Imperfect maps can conflict, but there is only one territory. 19th-century evolutionism might have been a semitechnical discipline, but it was still correct (as we now know) and by far the best explanation (even in that day). Any conflict between evolutionism and another well-confirmed theory had to reflect some kind of anomaly, a mistake in the assertion that the two theories were incompatible. 19th-century physics couldn’t model the dynamics of the Sun – they didn’t know about nuclear reactions. They could not show that their understanding of the Sun was correct *in technical detail* , nor calculate from a *confirmed* model of the Sun to determine how long the Sun had existed. So in retrospect, we can say something like: “There was room for the possibility that 19th-century physics just didn’t understand the Sun.”\n\n\nBut that is hindsight. The real lesson is that, even though 19th-century physics was both precise and quantitative, it didn’t automatically dominate the semitechnical theory of 19th-century evolutionism. The theories were *both* well-supported. They were *both* correct in the domains over which they were generalized. The apparent conflict between them was an anomaly, and the anomaly turned out to stem from the incompleteness and incorrect application of 19th-century physics, not the incompleteness and incorrect application of 19th-century evolutionism. But it would be futile to compare the mountain of evidence supporting the one theory, versus the mountain of evidence supporting the other. Even in that day, both mountains were too large to suppose that either theory was simply mistaken. Mountains of evidence that large cannot be set to compete, as if one falsifies the other. You must be applying one theory incorrectly, or applying a model outside the domain it predicts well.\n\n\nSo you shouldn’t *necessarily* sneer at a theory just because it’s semitechnical. Semitechnical theories can build up high enough scores, compared to every available alternative, that you know the theory is at least approximately correct. Someday the semitechnical theory may be replaced or even falsified by a more precise competitor, but that’s true even of technical theories.
Think of how Einstein’s General Relativity devoured Newton’s theory of gravitation.\n\n\nBut the correctness of a semitechnical theory – a theory that currently has no precise, computationally tractable models testable by feasible experiments – can be a lot less cut-and-dried than the correctness of a technical theory. It takes skill, patience, and examination to distinguish good semitechnical theories from theories that are just plain confused. This is not something that humans do well by instinct, which is why we have Science.\n\n\nPeople eagerly jump the gun and seize on any available reason to reject a disliked theory. That is why I gave the example of 19th-century evolutionism, to show why one should not be too quick to reject a “non-technical” theory out of hand. By the moral customs of science, 19th-century evolutionism was guilty of more than one sin. 19th-century evolutionism made no quantitative predictions. It was not readily subject to falsification. It was largely an explanation of what had already been seen. It lacked an underlying mechanism, as no one then knew about DNA. It even contradicted the 19th-century laws of physics. Yet natural selection was such an *amazingly good* post-facto explanation that people flocked to it, and they turned out to be right. Science, as a human endeavor, requires advance prediction. Probability theory, as math, does not distinguish between post-facto and advance prediction, because probability theory assumes that probability distributions are fixed properties of a hypothesis.\n\n\nThe rule about advance prediction is a rule of the social process of science – a moral custom and not a theorem. The moral custom exists to prevent human beings from making human mistakes that are hard to even describe in the language of probability theory, like tinkering after the fact with what you claim your hypothesis predicts. People concluded that 19th-century evolutionism was an excellent explanation, even if it was post-facto. That reasoning *was correct as probability theory* , which is why it *worked* despite all scientific sins. Probability theory is math. The social process of science is a set of legal conventions to keep people from cheating on the math.\n\n\nYet it is also true that, compared to a *modern-day* evolutionary theorist, evolutionary theorists of the late 19th and early 20th century often went sadly astray. Darwin, who was bright enough to *invent* the theory, got an amazing amount right. But Darwin’s successors, who were only bright enough to *accept* the theory, misunderstood evolution frequently and seriously. The usual process of science was then required to correct their mistakes. It is incredible how few errors of reasoning Darwin made in *The Origin of Species* and *The Descent of Man* , compared to they who followed.\n\n\nThat is also a hazard of a semitechnical theory. Even after the flash of genius insight is confirmed, merely average scientists may fail to apply the insights properly in the absence of formal models. As late as the 1960s biologists spoke of evolution working “for the good of the species”, or suggested that individuals would restrain their reproduction to prevent species overpopulation of a habitat. The best evolutionary theorists knew better, but average theorists did not. (Williams 1966.)\n\n\nSo it is *far* better to have a technical theory than a semitechnical theory. 
Unfortunately, Nature is not always so kind as to render Herself describable by neat, formal, *computationally tractable* models, nor does She always provide Her students with measuring instruments that can directly probe Her phenomena. Sometimes it is only a matter of time. 19th-century evolutionism was semitechnical, but later came the math of population genetics, and eventually DNA sequencing. But Nature will not always give you a phenomenon that you can describe with technical models fifteen seconds after you have the basic insight.\n\n\nYet the cutting edge of science, the *controversy* , is most often about a semitechnical theory, or nonsense posing as a semitechnical theory. By the time a theory achieves technical status, it is usually no longer controversial (among scientists). So the question of how to distinguish good semitechnical theories from nonsense is very important to scientists, and it is not as easy as dismissing out of hand any theory that is not technical. To the end of distinguishing truth from falsehood exists the entire discipline of rationality. The art is not reducible to a checklist, or at least, no checklist that an average scientist can apply reliably after an hour of training. If it was that simple we wouldn’t need science.\n\n\n\n\n---\n\n\nWhy do you care about scientific controversies?\n\n\nNo, seriously, why do you care about scientific *controversies* ?\n\n\nThe media thinks that only the cutting edge of science, the very latest controversies, are worth reporting on. How often do you see headlines like “General Relativity still governing planetary orbits” or “Phlogiston theory remains false”? By the time anything is solid science, it is no longer a breaking headline. “Newsworthy” science is based on the thinnest of evidence and wrong half the time. If it were not on the uttermost fringes of the scientific frontier, it would not be news. Scientific *controversies* are problems so difficult that even people who’ve spent years mastering the field can still fool themselves. That’s what makes the problem controversial and attracts all the media attention. So the reporters show up, and hear the scientists speak fascinating words. The reporters are told that “particles” are “waves”, but there is no understanding of math for the words to invoke. What the physicist means by “wave” is not what the reporters hear, even if the physicist’s math applies also to the structure of water as it crashes on the shore.\n\n\nAnd then the reporters write stories, which are not worth the lives of the dead trees on which they are printed.\n\n\nBut what does it matter to you? Why should you pay attention to scientific *controversies* ? Why graze upon such sparse and rotten feed as the media offers, when there are so many solid meals to be found in textbooks? Nothing you’ll read as breaking news will ever hold a candle to the sheer beauty of settled science. Textbook science has carefully phrased explanations for new students, math derived step by step, plenty of experiments as illustration, and test problems.\n\n\nAnd textbook science is beautiful! Textbook science is *comprehensible* , unlike mere fascinating words that can never be truly beautiful. Elementary science textbooks describe *simple* theories, and simplicity is the core of scientific beauty. Fascinating words have no power, nor yet any meaning, without the math. 
The fascinating words are not knowledge but the illusion of knowledge, which is why it brings so little satisfaction to know that “gravity results from the curvature of spacetime”. Science is not in the fascinating words, though it’s all the media will ever give you.\n\n\nIs there ever justification for following a scientific controversy, while there remains any basic science you do not yet know? Yes. You could be an expert in that field, in which case that scientific controversy is your proper meat. Or the scientific controversy might be something you need to know *now* , because it affects your life. Maybe it’s the 19th century, and you’re gazing lustfully at a member of the appropriate sex wearing a 19th-century bathing suit, and you need to know whether your sexual desire comes from a psychology constructed by natural selection, or is a temptation placed in you by the Devil to lure you into hellfire.\n\n\nIt is not wholly impossible that we shall happen upon a scientific controversy that affects us, and find that we have a burning and urgent need for the correct answer. I shall therefore discuss some of the warning signs that historically distinguished vague hypotheses that later turned out to be unscientific gibberish, from vague hypotheses that later graduated to confirmed theories. Just remember the historical lesson of 19th-century evolutionism, and resist the temptation to fail every theory that misses a single item on your checklist. It is not my intention to give people another excuse to dismiss good science that discomforts them. If you apply stricter criteria to theories you dislike than theories you like (or vice versa!), then every additional nit you learn how to pick, every new logical flaw you learn how to detect, makes you that much stupider. Intelligence, to be useful, must be used for something other than defeating itself.\n\n\n\n\n---\n\n\nOne of the classic signs of a poor hypothesis is that it must expend great effort in avoiding falsification – elaborating reasons why the hypothesis is compatible with the phenomenon, even though the phenomenon didn’t behave as expected.\n\n\nSagan (1995) gives the example of someone who claims that a dragon lives in their garage. Fascinated by this controversial question, we ignore all the textbooks providing total solutions to ancient mysteries on which alchemists spent their lives in vain… but never mind. We show up at the garage, look inside, and see: Nothing.\n\n\nAh, says the claimant, that’s because it’s an *invisible* dragon.\n\n\nNow as Sagan says, this is an odd claim, but it doesn’t mean we can never know if the dragon is there. Maybe we hear heavy breathing, and discover that carbon dioxide and heat appear in the garage’s air. Clawed footprints stamp through the dust. Occasionally a great gout of fire bursts out from no visible source. If so, we conclude that the garage contains an invisible dragon, and the reporters depart, satisfied that the controversy is over. Once something is a fact, it’s no longer exciting; it’s no fun believing in things that any old fool can see are true. If the dragon were really there, it would be no more fun to believe in the dragon than to believe in zebras.\n\n\nBut now suppose instead that we bring up our measuring instruments to see if carbon dioxide is accumulating in the garage’s air, and the claimant at once says: “No, no, it’s an invisible non-breathing dragon!” Okay. 
We begin to examine the dirt, and the claimant says: “No, it’s a flying invisible non-breathing dragon, so it won’t leave footprints.” We start to unload audio equipment, and the claimant says it’s an inaudible dragon. We bring in a bag of flour, to throw into the air to outline the dragon’s form, and the claimant quickly says that this dragon is permeable to flour.\n\n\nCarl Sagan originally drew the lesson that poor hypotheses need to do fast footwork to avoid falsification – to maintain an appearance of “fit”.\n\n\nI would point out that the claimant obviously has a good model of the situation *somewhere* in his head, because he can predict, in advance, exactly which excuses he’s going to need. When we bring up our measuring instruments, he knows that he’ll have to excuse the lack of any carbon dioxide in the air. When we bring in a bag of flour, the claimant knows that he’ll need to excuse the lack of any dragon-shaped form in the floury air.\n\n\nTo a Bayesian, a hypothesis isn’t something you assert in a loud, emphatic voice. A hypothesis is something that controls your *anticipations* , the probabilities you assign to future experiences. That’s what a probability *is* , to a Bayesian – that’s what you score, that’s what you calibrate. So while our claimant may say loudly, emphatically, and honestly that he *believes* there’s an invisible dragon in the garage, he does not *anticipate* there’s an invisible dragon in the garage – he anticipates exactly the same experience as the skeptic.\n\n\nWhen I judge the predictions of a hypothesis, I ask which experiences I would anticipate, not which facts I would believe.\n\n\n*The flip side:*\n\n\nI recently argued with a friend of mine over a question of evolutionary theory. My friend alleged that the clustering of changes in the fossil record (apparently, there are periods of comparative stasis followed by comparatively sharp changes; itself a controversial observation known as “punctuated equilibrium”) showed that there was something wrong with our understanding of speciation. My friend thought that there was some unknown force at work, not supernatural, but some natural consideration that standard evolutionary theory didn’t take into account. Since my friend didn’t give a specific competing hypothesis that produced better predictions, his thesis had to be that the standard evolutionary model was *stupid* with respect to the data – that the standard model made a specific prediction that was wrong; that the model did worse than complete ignorance or some other default competitor.\n\n\nAt first I fell into the trap; I accepted the implicit assumption that the standard model predicted smoothness, and based my argument on my recollection that the fossil record changes weren’t as sharp as he claimed. He challenged me to produce an evolutionary intermediate between *Homo erectus* and *Homo sapiens* ; I googled and found *Homo heidelbergensis* . He congratulated me and acknowledged that I had scored a major point, but still insisted that the changes were too sharp, and not steady enough. I started to explain why I thought a pattern of uneven change *could* arise from the standard model: environmental selection pressures might not be constant… “Aha!” my friend said, “you’re making your excuses in advance.”\n\n\nBut suppose that the fossil record instead showed a smooth and gradual set of changes. Might my friend have argued that the standard model of evolution as a chaotic and noisy process could not account for such smoothness? 
If it is a scientific sin to claim post facto that our beloved hypothesis predicts the data, should it not be equally a sin to claim post facto that the competing hypothesis is stupid on the data?\n\n\nIf a hypothesis has a *purely* technical model, there is no trouble; we can compute the prediction of the model formally, without informal variables to provide a handle for post facto meddling. But what of semitechnical theories? Obviously a semitechnical theory must produce some good advance predictions about *something* , or else why bother? But *after* the theory is semi-confirmed, can the detractors claim that the data show a problem with the semitechnical theory, when the “problem” is constructed post facto? At the least the detractors must be very specific about what data a confirmed model predicts stupidly, and why the confirmed model must make (post facto) that stupid prediction. How sharp a change is “too sharp”, quantitatively, for the standard model of evolution to permit? Exactly how much steadiness do you think the standard model of evolution predicts? How do you know? Is it too late to say that, after you’ve seen the data?\n\n\nWhen my friend accused me of making excuses, I paused and asked myself which excuses I anticipated needing to make. I decided that my current grasp of evolutionary theory didn’t say anything about whether the rate of evolutionary change should be intermittent and jagged, or smooth and gradual. If I hadn’t seen the graph in advance, I could not have predicted it. (Unfortunately, I rendered even that verdict after seeing the data…) Maybe there are models in the evolutionary family that would make advance predictions of steadiness or variability, but if so, I don’t know about them. More to the point, my friend didn’t know either.\n\n\nIt is not always wise to ask the opponents of a theory what that theory predicts. Get the theory’s predictions from the theory’s best advocates. Just make sure to write down their predictions in advance. Yes, sometimes a theory’s advocates try to make the theory “fit” evidence that plainly doesn’t fit. But if you find yourself wondering what a theory predicts, ask first among the theory’s advocates, and afterward ask the detractors to cross-examine.\n\n\nFurthermore: Models may include noise. If we hypothesize that the data are trending slowly and steadily upward, but our measuring instrument has an error of 5%, then it does no good to point to a data point that dips below the previous data point, and shout triumphantly, “See! It went down! Down down down! And don’t tell me why your theory fits the dip; you’re just making excuses!” Formal, technical models often incorporate explicit error terms. The error term spreads out the likelihood density, decreases the model’s precision and reduces the theory’s score, but the Bayesian scoring rule still governs. A technical model can allow mistakes, and make mistakes, and still do better than ignorance. In our supermarket example, even the precise hypothesis of 51 still bets only 90% of its probability mass on 51; the precise hypothesis claims only that 51 happens nine times out of ten. Ignoring nine 51s, pointing at one case of 82, and crowing in triumph, does not a refutation make. That’s not an excuse, it’s an explicit advance prediction of a technical model.\n\n\nThe error term makes the “precise” theory vulnerable to a superprecise alternative that predicted the 82. 
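\n\n\nTo make the scoring concrete, here is a minimal sketch of the nine-51s-and-one-82 comparison. (I am assuming, purely for illustration, a hundred possible outcomes with the leftover 10% spread evenly among them – a detail the supermarket example does not pin down.)\n\n\n```python\nimport math\n\ndata = [51] * 9 + [82]   # nine 51s and one surprising 82\n\ndef precise_model(x):\n    # Bets 90% on 51, spreads the remaining 10% over the other 99 outcomes.\n    return 0.9 if x == 51 else 0.1 / 99\n\ndef ignorance(x):\n    # Maximum-entropy competitor: all 100 outcomes equally likely.\n    return 1 / 100\n\ndef log_score(model, data):\n    # Bayesian score: sum the log-probabilities assigned to what actually happened.\n    return sum(math.log2(model(x)) for x in data)\n\nprint(log_score(precise_model, data))   # about -11.3 bits\nprint(log_score(ignorance, data))       # about -66.4 bits\n```\n\n\nThe one 82 costs the noisy-but-precise model about ten bits, but it still finishes more than fifty bits ahead of the hypothesis of ignorance.\n\n\n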
The standard model would also be vulnerable to a precisely ignorant model that predicted a 60% chance of 51 on the round where we saw 82, spreading out the likelihood more entropically on that particular error. No matter how good the theory, science always has room for a higher-scoring competitor. But if you *don’t* present a better alternative, if you try only to show that an accepted theory is *stupid* with respect to the data, that scientific endeavor may be *more* demanding than just replacing the old theory with a new one.\n\n\nAstronomers recorded the unexplained perihelion advance of Mercury, unaccounted for under Newtonian physics – or rather, Newtonian physics predicted 5557 seconds of arc per century, where the observed amount was 5600. (From Brown 1999.) But should the scientists of that day have junked Newtonian gravitation based on such small, unexplained counterevidence? What would they have used instead? Eventually, Newton’s theory of gravitation *was* set aside, after Einstein’s General Relativity precisely explained the orbital discrepancy of Mercury and also made successful advance predictions. But there was no way to know *in advance* that this was how things would turn out.\n\n\nIn the nineteenth century there was a persistent anomaly in the orbit of Uranus. People said, “Maybe Newton’s law starts to fail at long distances.” Eventually some bright fellows looked at the anomaly and said, “Could this be an unknown outer planet?” Urbain Le Verrier and John Couch Adams independently did some scribbling and figuring, using Newton’s standard theory – and predicted Neptune’s location to within one degree of arc, dramatically *confirming* Newtonian gravitation. (Brown 1999.)\n\n\nOnly *after* General Relativity precisely produced the perihelion advance of Mercury, did we *know* Newtonian gravitation would never explain it.\n\n\n\n\n---\n\n\nIn the *Intuitive Explanation* we saw how Karl Popper’s insight that falsification is stronger than confirmation, translates into a Bayesian truth about likelihood ratios. Popper erred in thinking that falsification was *qualitatively different* from confirmation; both are governed by the same Bayesian rules. But Popper’s philosophy reflected an important truth about a quantitative difference between falsification and confirmation.\n\n\n“Popper was profoundly impressed by the differences between the allegedly ‘scientific’ theories of Freud and Adler and the revolution effected by Einstein’s theory of relativity in physics in the first two decades of this century. The main difference between them, as Popper saw it, was that while Einstein’s theory was highly ‘risky’, in the sense that it was possible to deduce consequences from it which were, in the light of the then dominant Newtonian physics, highly improbable (e.g. that light is deflected towards solid bodies – confirmed by Eddington’s experiments in 1919), and which would, if they turned out to be false, falsify the whole theory, nothing could, even in principle, falsify psychoanalytic theories. These latter, Popper came to feel, have more in common with primitive myths than with genuine science. That is to say, he saw that what is apparently the chief source of strength of psychoanalysis, and the principal basis on which its claim to scientific status is grounded, viz. its capability to accommodate, and explain, every possible form of human behaviour, is in fact a critical weakness, for it entails that it is not, and could not be, genuinely predictive. Psychoanalytic theories by their nature are insufficiently precise to have negative implications, and so are immunised from experiential falsification…\n\n\n“Popper, then, repudiates induction, and rejects the view that it is the characteristic method of scientific investigation and inference, and substitutes falsifiability in its place. It is easy, he argues, to obtain evidence in favour of virtually any theory, and he consequently holds that such ‘corroboration’, as he terms it, should count scientifically only if it is the positive result of a genuinely ‘risky’ prediction, which might conceivably have been false. For Popper, a theory is scientific only if it is refutable by a conceivable event. Every genuine test of a scientific theory, then, is logically an attempt to refute or to falsify it…\n\n\n“Every genuine scientific theory then, in Popper’s view, is prohibitive, in the sense that it forbids, by implication, particular events or occurrences.” (Thornton 2002)\n\n\nOn Popper’s philosophy, the strength of a scientific theory is not how much it explains, but how much it *doesn’t* explain. The virtue of a scientific theory lies not in the outcomes it *permits* , but in the outcomes it *prohibits* . Freud’s theories, which seemed to explain everything, *prohibited* nothing.\n\n\nTranslating this into Bayesian terms, we find that the more outcomes a model *prohibits* , the more probability density the model concentrates in the remaining, permitted outcomes. The more outcomes a theory prohibits, the greater the knowledge-content of the theory. The more daringly a theory exposes itself to falsification, the more definitely it tells you which experiences to anticipate.\n\n\nA theory that can explain *any* experience corresponds to a hypothesis of complete ignorance – a uniform distribution with probability density spread evenly over every possible outcome.\n\n\n\n\n---\n\n\nOne of the most famous lessons of science is the case of the *phlogiston theory of chemistry* .\n\n\nPhlogiston was the 18th century’s answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright “fire” stuff? Why does the wood transform into ash? To both questions, the 18th century chemists answered, “phlogiston”.\n\n\n…and that was it, you see, that was their answer: “Phlogiston.”\n\n\nPhlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the “true material”. Flames extinguished in closed containers because the air became saturated with phlogiston. Charcoal left little residue upon burning because it was nearly pure phlogiston. (Moore 1961.)\n\n\nThis was a more primitive age of science, and so people did not notice and take offense that phlogiston theory made no advance predictions. Instead phlogiston theory just added on more and more independent clauses to explain more and more chemical observations. You couldn’t use phlogiston theory to predict the outcome of a chemical transformation – first you looked at the result, then you used phlogiston to explain it. It was not that, having never tried burning a flame in a closed container, phlogiston theorists predicted that the flame would go out when the air became “saturated” with phlogiston. Rather they lit a flame in a container, watched it go out, then said, “The air must have become saturated with phlogiston.”\n\n\nYou couldn’t even use phlogiston theory to constrain chemical transformations, to say what you did *not* expect to see. 
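\n\n\nIn those Bayesian terms, “not prohibiting anything” has a definite cost. Here is a toy sketch with invented numbers – a theory that rules out ninety of a hundred possible outcomes, against a “theory” that is compatible with all of them:\n\n\n```python\nimport math\n\nN = 100  # toy outcome space\n\ndef prohibitive(x):\n    # Prohibits 90 outcomes, concentrating its mass on the 10 it permits.\n    return 0.1 if x < 10 else 0.0\n\ndef explains_everything(x):\n    # Compatible with anything: just the uniform distribution.\n    return 1 / N\n\nobserved = 3  # an outcome the daring theory permitted\n\nprint(math.log2(prohibitive(observed)))          # about -3.3 bits\nprint(math.log2(explains_everything(observed)))  # about -6.6 bits\n# Had the observation landed in the prohibited region, the daring theory\n# would have been destroyed: log2(0) is minus infinity.\n```\n\n\nThe theory that dares to prohibit outcomes gains bits every time a permitted outcome occurs; the theory compatible with everything never pulls ahead of ignorance.\n\n\n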
Phlogiston theory was infinitely flexible. In excusing everything, it explained nothing; a disguised hypothesis of zero knowledge.\n\n\nThe word *phlogiston* functioned not as an *anticipation-controller* but as a *curiosity-stopper* . You said “Why?” and the answer was “Phlogiston”.\n\n\n\n\n---\n\n\nImagine looking at your hand, and knowing nothing of cells, nothing of biological chemistry, nothing of DNA. You know some anatomy, you know your hand contains muscles, but you don’t know why muscles move instead of lying there like clay. Your hand is just… stuff… and for some reason it moves under your direction. Is this not magic?\n\n\n“The animal body does not act as a thermodynamic engine … consciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears therefore that animated creatures have the power of immediately applying to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce derived mechanical effects… The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms… Modern biologists were coming once more to the acceptance of something and that was a vital principle.” – Lord Kelvin (from Zapato 1998).\n\n\nThis was the theory of *vitalism* ; that the difference between living matter and non-living matter consisted of an *elan vital* or *vis vitalis* . Elan vital infused living matter and caused it to move as consciously directed. Elan vital participated in chemical transformations which no mere non-living particles could undergo. Wohler’s artificial synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that “mere chemistry” could duplicate a product of biology. (Moore 1961.)\n\n\nBuilding on the previous lesson of phlogiston, we note at once that *elan vital* functions not as an anticipation-controller but as a curiosity-stopper. Vitalism doesn’t explain how the hand moves, nor tell you what transformations to expect from organic chemistry, and vitalism certainly permits no quantitative calculations. “Why? Elan vital!” And that was all there was to vitalism.\n\n\nBut the greater lesson lies in the vitalists’ reverence for the elan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but instead bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking.\n\n\nI quote Lord Kelvin to show that in every generation, there are scientific puzzles so wonderfully *mys-TER-i-ous* that they become sacred, making a *solution* sacrilege. Science is only good for explaining non-mysterious phenomena, like the course of planets, or the transformations of materials, or the biology of life; science can never answer questions about *real* mysteries like consciousness. Surely, if it were possible for science to explain consciousness, it would already have done so? 
As if all these other matters had not been mysteries for thousands of years and millions of years, from the dawn of intelligent thought right up until science solved them.\n\n\nPeople have no sense of history. They learn about stars and chemistry and biology in school and it seems that these matters have always been the proper meat of science, that they have *never been* mysterious. Astrologers and alchemists and vitalists were merely fools, to make such big deals out of such simple questions. When science must deal with some new puzzling phenomenon, it is a great shock to the children of that generation, for they have never encountered something that *feels* mysterious before. Surely such a sacred mystery as consciousness is infinitely beyond the reach of dry scientific thinking; science is only suited to mundane questions such as biology.\n\n\nVitalism shared with phlogiston the error of *encapsulating the mystery as a substance.* Fire was mysterious, and the phlogiston theory encapsulated the mystery in a mysterious substance called “phlogiston”. Life was a sacred mystery, and vitalism encapsulated the sacred mystery in a mysterious substance called “elan vital”. Neither “explanation” helped concentrate the model’s probability density. The “explanation” just wrapped up the question as a small, hard, opaque black ball. In a play written by the author Moliere, a physician explains the power of a soporific by claiming that the soporific contains a “dormitive potency” – a fine parody of the art of fake explanation. (Cited in Kuhn 1962.)\n\n\nIt is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.\n\n\nBut the deeper failure is supposing that an *answer* can be mysterious. Mystery is a property of questions, not answers. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. They mixed up the map with the territory. All confusion and dismay exist in the mind, not in reality.\n\n\nI call theories such as vitalism *mysterious answers to mysterious questions* . These are the signs of mysterious answers: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts – the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to do this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena. Fourth, *even after the answer is given, the phenomenon is still a mystery* and possesses the same quality of sacred inexplicability that it had at the start.\n\n\n*The flip side:*\n\n\nBeware of checklist thinking: Having a *sacred* mystery, or a mysterious answer, is not the same as refusing to explain something. Some elements in our physics are taken as “fundamental”, not yet further reduced or explained. 
But these fundamental elements of our physics are governed by clearly defined, mathematically simple, formally computable causal rules.\n\n\nOccasionally some crackpot objects to modern physics on the grounds that it does not provide an “underlying mechanism” for a mathematical law currently treated as fundamental. (Claiming that a mathematical law lacks an “underlying mechanism” is one of the entries on John Baez’s Crackpot Index; Baez 1998.) The “underlying mechanism” the crackpot proposes in answer is vague, verbal, and yields no increase in predictive power – otherwise we would not classify the claimant as a crackpot.\n\n\nOur current physics makes the electromagnetic field fundamental, and refuses to explain it further. But the “electromagnetic field” is a fundamental governed by clear mathematical rules, with no properties outside the mathematical rules, subject to formal computation to describe its causal effect upon the world. Someday someone may suggest improved math that yields better predictions, but I would not indict the current model on grounds of mysteriousness. A theory that includes *fundamental elements* is not the same as a theory that contains *mysterious elements* .\n\n\nFundamentals should be simple. “Life” is not a good fundamental; “oxygen” is a good fundamental, and “electromagnetic field” is a better fundamental. Life might look simple to a vitalist – it’s the simple, magical ability of your muscles to move under your mental direction. Why shouldn’t life be explained by a simple, magical fundamental substance like *elan vital* ? But phenomena that seem *psychologically* very simple – little dots of light in the sky, orangey-bright hot flame, flesh moving under mental direction – often conceal vast depths of underlying complexity. The proposition that life is a complex phenomenon may seem incredible to the vitalist, staring at a blankly opaque mystery with no obvious handles; but yes, Virginia, there is underlying complexity. The criterion of simplicity that is relevant to Occam’s Razor is *mathematical* or *computational* simplicity. Once we render down our model into mathematically simple fundamental elements, not in themselves sharing the mysterious qualities of the mystery, interacting in clearly defined ways to produce the formerly mysterious phenomenon as a detailed prediction, that is as non-mysterious as humanity has ever figured out how to make anything.\n\n\n\n\n---\n\n\nThe failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb and name some current theory, not yet disproven, that I think is analogously flawed to vitalism and phlogiston? I shall dare, but don’t try this at home. I also warn my readers that they should not accept this opinion of mine with the same confidence that attaches to science’s dismissal of phlogiston.\n\n\nI name the fad of *emergence* or *emergent phenomena* – systems which exhibit high-level behaviors that arise or “emerge” from the interaction of many low-level elements. Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem.\n\n\nIn decrying the emergence fad, I decry the use of “emergence” *as an explanation in itself* . It’s okay to have a completed model to which an emergence enthusiast could attach “emergent” as an adjective. One might legitimately have some *specific* model of how the behavior of an ant colony *emerges from* the behavior of the ants. A hypothesis like that can be formal and/or technical. 
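\n\n\nTo show the shape of such a model – and this is my own invented toy, not a real model of ant behavior – suppose each ant deposits pheromone where it stands and usually steps toward the stronger neighboring trail. From those individual rules alone, colony-level quantities can be computed in advance and checked against observation:\n\n\n```python\nimport random\n\nrandom.seed(0)\nSITES = 30\npheromone = [0.0] * SITES\nants = [random.randrange(SITES) for _ in range(100)]\n\nfor _ in range(200):\n    for i, pos in enumerate(ants):\n        pheromone[pos] += 1.0          # rule 1: mark your current location\n        left, right = (pos - 1) % SITES, (pos + 1) % SITES\n        if random.random() < 0.1:      # rule 2: occasionally wander at random,\n            ants[i] = random.choice([left, right])\n        else:                          # otherwise follow the stronger trail\n            ants[i] = left if pheromone[left] > pheromone[right] else right\n    pheromone = [0.95 * p for p in pheromone]   # trails evaporate\n\n# A colony-level quantity computed from ant-level rules alone:\nprint('distinct sites occupied:', len(set(ants)), 'of', SITES)\n```\n\n\nThe particular output does not matter; what matters is that the model yields definite colony-level numbers to check, because it has parts that move.\n\n\n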
The model of the ant colony has internal moving parts and produces specific predictions; it’s just that the model happens to fit the verbal term “emergent” – the behavior which emerges from modeling many interacting elements is different from the behavior of those elements considered in isolation. I do not consider it stupid to say that Phenomenon X *emerges from* Y, where Y is some specific model. The phrase “emerges from” is okay, if the phrase precedes some specific model to be judged on its own merits.\n\n\nHowever, this is *not* the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right. I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts – there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane. And even after the answer of “Why? Emergence!” is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.\n\n\nTo say that intelligence is an “emergent phenomenon” fits every possible behavior that intelligence could show, and therefore explains nothing. The model has no moving parts and does not concentrate its probability mass into specific outcomes. It is a disguised hypothesis of zero knowledge.\n\n\nTo see why I object to the academic fad in “emergence”, even though I have admitted the legitimacy of the phrase “emerges from”, consider that “arises from” is also a legitimate phrase. Gravity arises from the curvature of spacetime (according to a certain specific mathematical model, Einstein’s General Relativity). Chemistry arises from interactions between atoms (according to the specific model of quantum electrodynamics). Now suppose I should say that gravity is explained by “arisence” or that chemistry is an “arising phenomenon”, and claim that as my explanation.\n\n\nA fun exercise is to eliminate the adjective “emergent” from any sentence in which it appears, and see if the sentence says anything different.\n\n\n*Before:* Human intelligence is an emergent product of neurons firing.\n\n\n*After:* Human intelligence is a product of neurons firing.\n\n\n*Before:* The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.\n\n\n*After:* The behavior of the ant colony is the outcome of the interactions of many individual ants.\n\n\n*Even better:* A colony is made of ants. We can successfully predict some aspects of colony behavior using models that include only individual ants, without any global colony variables, showing that we understand how those colony behaviors arise from ant behaviors.\n\n\nAnother good exercise is to replace the word “emergent” with the old word, the explanation that people had to use before emergence was invented.\n\n\n*Before:* Life is an emergent phenomenon.\n\n\n*After:* Life is a magical phenomenon.\n\n\n*Before:* Human intelligence is an emergent product of neurons firing.\n\n\n*After:* Human intelligence is a magical product of neurons firing.\n\n\nDoes not each statement convey exactly the same amount of knowledge about the phenomenon’s behavior? Does not each hypothesis fit exactly the same set of outcomes?\n\n\nMagic is unpopular nowadays, unfashionable, not something you could safely postulate in a peer-reviewed journal. Why? Once upon a time, a few exceptionally wise scientists noticed that explanations which invoked “magic” just didn’t work as a way of understanding the world. By dint of strenuous evangelism, these wise scientists managed to make magical explanations unfashionable within a small academic community. But humans are still humans, and they have the same emotional needs and intellectual vulnerabilities. So later academics invented a new word, “emergence”, that carried exactly the same information content as “magic”, but had not yet become unfashionable. “Emergence” became very popular, just as saying “magic” used to be very popular. “Emergence” has the same deep appeal to human psychology, for the same reason. “Emergence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is a popular fad *because* it is the junk food of curiosity. You can explain anything using emergence, and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they’ve taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed in different clothes but still the same species psychology.\n\n\n\n\n---\n\n\nMany people in this world believe that after dying they will face a stern-eyed fellow named St. Peter, who will examine their actions in life and accumulate a score for morality. Presumably St. Peter’s scoring rule is unique and invariant under trivial changes of perspective. Unfortunately, believers cannot obtain a quantitative, precisely computable specification of the scoring rule, which seems rather unfair.\n\n\nThe religion of *Bayesianity* holds that your eternal fate depends on the probability judgments you made in life. Unlike lesser faiths, Bayesianity can give a quantitative, precisely computable specification of how your eternal fate is determined.\n\n\nOur proper Bayesian scoring rule provides a way to accumulate scores across experiments, and the score is invariant regardless of how we slice up the “experiments” or in what order we accumulate the results. We add up the logarithms of the probabilities. This corresponds to multiplying together the probability assigned to the outcome in each experiment, to find the joint probability of all the experiments together. 
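\n\n\nIn code the bookkeeping is one line either way; the probabilities below are made up solely to show the arithmetic:\n\n\n```python\nimport math\n\n# Probabilities you assigned to what actually happened, one per experiment.\nassigned = [0.9, 0.6, 0.25, 0.999]\n\njoint = math.prod(assigned)                     # multiply them together...\nscore = sum(math.log2(p) for p in assigned)     # ...or add their logarithms\n\nprint(joint)        # about 0.1349\nprint(score)        # about -2.89 bits\nprint(2 ** score)   # about 0.1349 again - the two accountings agree,\n                    # however the experiments are sliced up or ordered\n```\n\n\n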
We take the logarithm to simplify our intuitive understanding of the accumulated score, to maintain our grip on the tiny fractions involved, and to ensure we maximize our *expected* score by stating our honest probabilities rather than placing all our play money on the most probable bet.\n\n\nBayesianity states that, when you die, Pierre-Simon Laplace examines every single event in your life, from finding your shoes next to your bed in the morning, to finding your workplace in its accustomed spot. Every losing lottery ticket means you cared enough to play. Laplace assesses the advance probability you assigned to each event. Where you did not assign a precise numerical probability in advance, Laplace examines your degree of anticipation or surprise, extrapolates other possible outcomes and your extrapolated reactions, and renormalizes your extrapolated emotions to a likelihood distribution over possible outcomes. (Hence the phrase “Laplacian superintelligence”.)\n\n\nThen Laplace takes every event in your life, and every probability you assigned to each event, and multiplies all the probabilities together. This is your Final Judgment – the probability you assigned to your life.\n\n\nThose who follow Bayesianity strive all their lives to maximize their Final Judgment. This is the sole virtue of Bayesianity. The rest is just math.\n\n\nMark you: the path of Bayesianity is strict. What probability shall you assign each morning, to the proposition, “The sun shall rise?” (We shall discount such quibbles as cloudy days, and that the Earth orbits the Sun.) Perhaps one who did not follow Bayesianity would be humble, and give a probability of 99.9%. But we who follow Bayesianity shall discard all considerations of modesty and arrogance, and scheme only to maximize our Final Judgment. Like an obsessive video-game player, we care only about this numerical score. We’re going to face this Sun-shall-rise issue 365 times per year, so we might be able to improve our Final Judgment considerably by tweaking our probability assignment.\n\n\nAs it stands, even if the Sun rises every morning, every year our Final Judgment will decrease by a factor of 0.7 (.999^365), roughly -0.52 bits. Every two years, our Final Judgment will decrease more than if we found ourselves ignorant of a coinflip’s outcome! Intolerable. If we increase our daily probability of sunrise to 99.99%, then each year our Final Judgment will decrease only by a factor of 0.964. Better. Still, in the unlikely event that we live exactly 70 years and then die, our Final Judgment will only be 7.75% of what it might have been. What if we assign a 99.999% probability to the sunrise? Then after 70 years, our Final Judgment will be multiplied by 77.4%.\n\n\nWhy not assign a probability of 1.0?\n\n\nOne who follows Bayesianity will *never* assign a probability of 1.0 to *anything* . Assigning a probability of 1.0 to some outcome uses up *all* your probability mass. If you assign a probability of 1.0 to some outcome, and reality delivers a different answer, you must have assigned the *actual* outcome a probability of *0* . This is Bayesianity’s sole mortal sin. Zero times anything is zero. When Laplace multiplies together all the probabilities of your life, the combined probability will be zero. Your Final Judgment will be doodly-squat, zilch, nada, nil. No matter how rational your guesses during the rest of your life, you’ll spend eternity next to some guy who believed in flying saucers and got all his information from the Weekly World News. 
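\n\n\nThe sunrise arithmetic above, and the fate of the perfectly certain, in a few lines (this just redoes the same calculations):\n\n\n```python\nfor p in (0.999, 0.9999, 0.99999):\n    yearly = p ** 365\n    lifetime = p ** (365 * 70)\n    print(p, round(yearly, 3), round(lifetime, 4))\n# 0.999    0.694  0.0     (about 0.7 per year; essentially nothing after 70 years)\n# 0.9999   0.964  0.0777  (7.75% of the maximum after 70 years)\n# 0.99999  0.996  0.7745  (77.4% after 70 years)\n\n# The mortal sin: one outcome assigned probability zero zeroes the whole product.\nprint(0.99999 ** 25549 * 0.0)   # 0.0 - no later brilliance can buy it back\n```\n\n\n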
Again we find it helpful to take the logarithm, revealing the innocent-sounding “zero” in its true form. Risking an outcome probability of zero is like accepting a bet with a payoff of negative infinity.\n\n\nWhat if humanity decides to take apart the Sun for mass (stellar engineering), or to switch off the Sun because it’s wasting entropy? Well, you say, you’ll see that coming, you’ll have a chance to alter your probability assignment before the actual event. What if an Artificial Intelligence in someone’s basement recursively self-improves to superintelligence, stealthily develops nanotechnology, and one morning *it* takes apart the Sun? If on the last night of the world you assign a probability of 99.999% to tomorrow’s sunrise, your Final Judgment will go down by a factor of 100,000. Minus 50 decibels! Awful, isn’t it?\n\n\nSo what is your best strategy? Well, suppose you 50% anticipate that a basement-spawned AI superintelligence will disassemble the Sun sometime in the next 10 years, and you figure there’s about an equal chance of this happening on any given day between now and then. On any given night, you would 99.98% anticipate the sun rising tomorrow. If this is really what you anticipate, then you have no motive to say anything except 99.98% as your probability. If you feel nervous that this anticipation is too low, or too high, it must not be what you anticipate after your nervousness is taken into account.\n\n\nBut the deeper truth of Bayesianity is this: you cannot game the system. You cannot give a humble answer, nor a confident one. You must figure out exactly how much you anticipate the Sun rising tomorrow, and say that number. You must shave away every hair of modesty or arrogance, and ask whether you expect to end up being scored on the Sun rising, or failing to rise. Look not to your excuses, but ask which excuses you expect to need. After you arrive at your exact degree of anticipation, the only way to further improve your Final Judgment is to improve the accuracy, calibration, and discrimination of your anticipation. You cannot do better except by guessing better and anticipating more precisely.\n\n\nEr, well, except that you could commit suicide when you turned five, thereby preventing your Final Judgment from decreasing any further. Or if we patch a new sin onto the utility function, enjoining against suicide, you could flee from mystery, avoiding all situations in which you thought you might not know everything. So much for that religion.\n\n\n\n\n---\n\n\nIdeally, we predict the outcome of the experiment in advance, using our model, and then we perform the experiment to see if the outcome accords with our model. Unfortunately, we can’t always control the information stream. Sometimes Nature throws experiences at us, and by the time we think of an explanation, we’ve already seen the data we’re supposed to explain. This was one of the scientific sins committed by 19th-century evolutionism; Darwin observed the similarity of many species, and their adaptation to particular local environments, before the hypothesis of natural selection occurred to him. 19th-century evolutionism began life as a post facto explanation, not an advance prediction.\n\n\nNor is this a trouble only of semitechnical theories. In 1846, the successful deduction of Neptune’s existence from gravitational perturbations in the orbit of Uranus was considered a grand triumph for Newton’s theory of gravitation. Why? 
Because Neptune’s existence was the first observation that confirmed an *advance* prediction of Newtonian gravitation. All the other phenomena that Newton explained, such as orbits and orbital perturbations and tides, had been observed in great detail before Newton explained them. No one seriously doubted that Newton’s theory was correct. Newton’s theory explained too much too precisely, and it replaced a collection of ad-hoc models with a single unified mathematical law. Even so, the advance prediction of Neptune’s existence, followed by the observation of Neptune at almost exactly the predicted location, was considered the first grand triumph of Newton’s theory at predicting what no previous model could predict. Considerable time elapsed between widespread acceptance of Newton’s theory and the first impressive *advance* prediction of Newtonian gravitation. By the time Newton came up with his theory, scientists had already observed, in great detail, most of the phenomena that Newtonian gravitation predicted.\n\n\nBut the rule of advance prediction is a morality of science, not a law of probability theory. If you have already seen the data you must explain, then Science may darn you to heck, but your predicament doesn’t collapse the laws of probability theory. What does happen is that it becomes much more difficult for a hapless human to *obey* the laws of probability theory. When you’re deciding how to rate a hypothesis according to the Bayesian scoring rule, you need to figure out how much probability mass that hypothesis assigns to the observed outcome. If we must make our predictions in advance, then it’s easier to notice when someone is trying to claim every possible outcome as an advance prediction, using too much probability mass, being deliberately vague to avoid falsification, and so on.\n\n\nNo numerologist can predict next week’s winning lottery numbers, but they will be happy to explain the mystical significance of last week’s winning lottery numbers. Say the winning Mega Ball was 7 in last week’s lottery, out of 52 possible outcomes. Obviously this happened because 7 is the lucky number. So will the Mega Ball in next week’s lottery also come up 7? We understand that it’s not certain, of course, but if it’s the lucky number, you ought to assign a probability of higher than 1/52… and then we’ll score your guesses over the course of a few years, and if your score is too low we’ll have you flogged… what’s that you say? You want to assign a probability of exactly 1/52? But that’s the same probability as every other number; what happened to 7 being lucky? No, sorry, you can’t assign a 90% probability to 7 and also a 90% probability to 11. We understand they’re both lucky numbers. Yes, we understand that they’re *very* lucky numbers. But that’s not how it works.\n\n\nEven if the listener does not know the way of Bayes and does not ask for formal probabilities, they will probably become suspicious if you try to cover too many bases. Suppose they ask you to predict next week’s winning Mega Ball, and you use numerology to explain why the 1 ball would fit your theory very well, and why the 2 ball would fit your theory very well, and why the 3 ball would fit your theory very well… even the most credulous listener might begin to ask questions by the time you got to 12. Maybe you could tell us which numbers are unlucky and definitely won’t win the lottery? 
Well, 13 is unlucky, but it’s not absolutely *impossible* (you hedge, *anticipating* in advance which excuse you might need).\n\n\nBut if we ask you to explain *last week’s* lottery numbers, why, the 7 was practically inevitable. That 7 should definitely count as a major success for the “lucky numbers” model of the lottery. And it couldn’t possibly have been 13; luck theory rules that straight out.\n\n\n\n\n---\n\n\nImagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands – you can use it to pick up glasses, drive a car, etc. How would you explain this hypothetical scenario? Take a moment to ponder this puzzle before continuing.\n\n\n\n\n---\n\n\nspoiler space\n\n\n\n\n---\n\n\nspoiler space\n\n\n\n\n---\n\n\nspoiler space\n\n\n\n\n---\n\n\nHow would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn’t. It isn’t going to happen.\n\n\nIt would be easy enough to produce a verbal explanation that “fit” the hypothetical. There are many explanations that can “fit” anything, including (as a special case of “anything”) my arm being replaced by a blue tentacle. Divine intervention is a good all-purpose explanation. Or aliens with arbitrary motives and capabilities. Or I could be mad, hallucinating, dreaming my life away in a hospital. Such explanations “fit” all outcomes equally well, and equally poorly, equating to hypotheses of complete ignorance.\n\n\nThe test of whether a model of reality “explains” my arm turning into a blue tentacle, is whether the model concentrates significant probability mass into that *particular* outcome. Why that dream, in the hospital? Why would aliens do that particular thing to me, as opposed to the other billion things they might do? Why would my arm turn into a tentacle on that morning, after remaining an arm through every other morning of my life? And in all cases I must look for an argument compelling enough to make that particular prediction in *advance* , not mere compatibility. Once I already knew the outcome, it would become far more difficult to sift through hypotheses to find good explanations. Whatever hypothesis I tried, I would be hard-pressed not to allocate more probability mass to yesterday’s blue-tentacle outcome than if I extrapolated blindly, seeking the model’s *most* likely prediction for tomorrow.\n\n\nA model does not always predict all the features of the data. Nature has no privileged tendency to present me with solvable challenges. Perhaps a deity toys with me, and the deity’s mind is computationally intractable. If I flip a fair coin there is no way to further explain the outcome, no model that makes a better prediction than the maximum-entropy hypothesis. But if I guess a model with no internal detail or a model that makes no further predictions, I not only have no reason to believe that guess, I have no reason to care. Last night my arm was replaced with a blue tentacle. Why? Aliens! So what will they do tomorrow? Similarly, if I attribute the blue tentacle to a hallucination as I dream my life away in a coma, I still don’t know any more about what I’ll hallucinate tomorrow. So why do I care whether it was aliens or hallucination?\n\n\nWhat might be a *good* explanation, then, if I woke up one morning and found my arm transformed into a blue tentacle? 
To claim a “good explanation” for this hypothetical experience would require an argument such that, contemplating the hypothetical argument *now* , *before* my arm has transformed into a blue tentacle, I would go to sleep worrying that my arm *really would* transform into a tentacle.\n\n\nPeople play games with plausibility, explaining events they expect to never actually encounter, yet this necessarily violates the laws of probability theory. How many people who thought they could ‘explain’ the hypothetical experience of waking up with their arm replaced by a tentacle, would go to sleep wondering if it might really happen to them? Had they the courage of their convictions, they would say: I do not expect to ever encounter this hypothetical experience, and therefore I cannot explain, nor have I a motive to try. Such things only happen in webcomics, and I need not prepare explanations, for in real life I shall never have a chance to use them. If I ever find myself in this impossible situation, let me miss no jot or tittle of my valuable bewilderment.\n\n\nTo a Bayesian, probabilities are anticipations, not mere beliefs to proclaim from the rooftops. If I have a model that assigns probability mass to waking up with a blue tentacle, then I am nervous about waking up with a blue tentacle. What if the model is a fanciful one, like a witch casting a spell that transports me into a randomly selected webcomic? Then the *prior probability* of webcomic witchery is so low that my *real-world* understanding doesn’t assign any significant weight to that hypothesis. The witchcraft hypothesis, if taken as a given, might assign non-insignificant likelihood to waking up with a blue tentacle. But my anticipation of that hypothesis is so low that I don’t anticipate any of the predictions of that hypothesis. That I can conceive of a witchcraft hypothesis should in no wise diminish my stark bewilderment if I actually wake up with a tentacle, because the real-world probability I assign to the witchcraft hypothesis is effectively zero. My zero-probability hypothesis wouldn’t help me *explain* waking up with a tentacle, because the argument isn’t good enough to make me *anticipate* waking up with a tentacle.\n\n\nIn the laws of probability theory, likelihood distributions are fixed properties of a hypothesis. In the art of rationality, to *explain* is to *anticipate* . To *anticipate* is to *explain* . Suppose I am a medical researcher, and in the ordinary course of pursuing my research, I notice that my clever new theory of anatomy seems to permit a small and vague possibility that my arm will transform into a blue tentacle. “Ha ha!”, I say, “how remarkable and silly!”, and feel ever so slightly nervous. *That* would be a good explanation for waking up with a tentacle, if it ever happened.\n\n\nIf a chain of reasoning doesn’t make me nervous, in advance, about waking up with a tentacle, then that reasoning would be a poor explanation if the event *did* happen, because the combination of prior probability and likelihood was too low to make me allocate any significant real-world probability mass to that outcome.\n\n\nIf you start from well-calibrated priors, and you apply Bayesian reasoning, you’ll end up with well-calibrated conclusions. Imagine that two million entities, scattered across different planets in the universe, have the opportunity to encounter something so strange as waking up with a tentacle (or – gasp! – ten fingers). 
One million of these entities say “one in a thousand” for the prior probability of some hypothesis X, and each hypothesis X says “one in a hundred” for the likelihood of waking up with a tentacle. And one million of these entities say “one in a hundred” for the prior probability of some hypothesis Y, and each hypothesis Y says “one in ten” for the likelihood of waking up with a tentacle. If we suppose that all entities are well-calibrated, then we shall look across the universe and find ten entities who wound up with a tentacle because of hypotheses of plausibility class X, and a thousand entities who wound up with tentacles because of hypotheses of plausibility class Y. So if you find yourself with a tentacle, and *if* your probabilities are well-calibrated, then the tentacle is more likely to stem from a hypothesis you would class as probable than a hypothesis you would class as improbable. (What if your probabilities are poorly calibrated, so that when you say “million-to-one” it happens one time out of twenty? Then you’re grossly overconfident, and we adjust your probabilities in the direction of less discrimination / greater entropy.)\n\n\nThe hypothesis of being transported into a webcomic, even if it “explains” the scenario of waking up with a blue tentacle, is a poor explanation because of its low prior probability. The webcomic hypothesis doesn’t contribute to explaining the tentacle, because it doesn’t make you anticipate waking up with a tentacle.\n\n\nIf we start with a quadrillion sentient minds scattered across the universe, quite a lot of entities will encounter events that are very likely, only a mere million entities will experience events with lifetime likelihoods of a billion-to-one (as we would anticipate, surveying with infinite eyes and perfect calibration), and not a single entity will experience the impossible.\n\n\nIf, somehow, you really did wake up with a tentacle, it would likely be because of something much more probable than “being transported into a webcomic”, some perfectly normal reason to wake up with a tentacle which you just didn’t see coming. A reason like what? I don’t know. Nothing. I don’t anticipate waking up with a tentacle, so I can’t give any good explanation for it. Why should I bother crafting excuses that I don’t expect to use? If I was worried I might someday need a clever excuse for waking up with a tentacle, the *reason I was nervous about the possibility* would *be* my explanation.\n\n\nReality dishes out experiences using probability, not plausibility. If you find out that your laptop doesn’t obey conservation of momentum, then reality must think that a perfectly normal thing to do to you. How could violating conservation of momentum possibly be perfectly normal? I anticipate that question has no answer and will never need answering. Similarly, people do *not* wake up with tentacles, so apparently it is *not* perfectly normal.\n\n\n\n\n---\n\n\nThere is a shattering truth, so surprising and terrifying that people resist the implications with all their strength. Yet there are a lonely few with the courage to accept this satori. Here is wisdom, if you would be wise:\n\n\n*Since the beginning*\n*Not one unusual thing*\n*Has ever happened.*\n\n\nAlas for those who turn their eyes from zebras and dream of dragons! 
If we cannot learn to take joy in the merely real, our lives shall be empty indeed.\n\n\n\n\n---\n\n\nThis document is ©2005 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .
", "date_published": "2020-09-04T02:43:02Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []}
+{"id": "72bd0d99e1f180395d9913f3253fe642", "title": "An Intuitive Explanation of Bayes’ Theorem", "url": "https://www.yudkowsky.net/rational/bayes", "source": "yudkowsky_blog", "source_type": "blog", "text": "*Bayes’ Theorem \nfor the curious and bewildered; \nan excruciatingly gentle introduction.*\n\n\n\n\n---\n\n\nThis page has now been obsoleted by a vastly improved guide to Bayes’s Theorem, the [Arbital Guide to Bayes’s Rule](http://arbital.com/p/bayes_rule_guide) . Please read that instead. Seriously. I mean it. The current version is also plagued by a number of technical problems, with various applets no longer working. A mostly functional archived version of this essay can be found [here](https://web.archive.org/web/20150717035217/http://www.eyudkowsky.wpengine.com/rational/bayes).\n\n\n\n\n---\n\n\nYour friends and colleagues are talking about something called “Bayes’ Theorem” or “Bayes’ Rule”, or something called Bayesian reasoning.  They sound really enthusiastic about it, too, so you google and find a webpage about Bayes’ Theorem and…\n\n\nIt’s this equation.  That’s all.  Just one equation.  The page you found gives a definition of it, but it doesn’t say what it is, or why it’s useful, or why your friends would be interested in it.  It looks like this random statistics thing.\n\n\nSo you came here.  Maybe you don’t understand what the equation says.  Maybe you understand it in theory, but every time you try to apply it in practice you get mixed up trying to remember the difference between p(a|x) and p(x|a) , and whether p(a)\*p(x|a) belongs in the numerator or the denominator.  Maybe you see the theorem, and you understand the theorem, and you can use the theorem, but you can’t understand why your friends and/or research colleagues seem to think it’s the secret of the universe.  Maybe your friends are all wearing Bayes’ Theorem T-shirts, and you’re feeling left out.  Maybe you’re a girl looking for a boyfriend, but the boy you’re interested in refuses to date anyone who “isn’t Bayesian”.  What matters is that Bayes is cool, and if you don’t know Bayes, you aren’t cool.\n\n\nWhy does a mathematical concept generate this strange enthusiasm in its students?  What is the so-called Bayesian Revolution now sweeping through the sciences, which claims to subsume even the experimental method itself as a special case?  What is the secret that the adherents of Bayes know?  What is the light that they have seen?\n\n\nSoon you will know.  Soon you will be one of us.\n\n\nWhile there are a few existing online explanations of Bayes’ Theorem, my experience with trying to introduce people to Bayesian reasoning is that the existing online explanations are too abstract.  
Bayesian reasoning is very *counterintuitive.*   People do not employ Bayesian reasoning intuitively, find it very difficult to learn Bayesian reasoning when tutored, and rapidly forget Bayesian methods once the tutoring is over.  This holds equally true for novice students and highly trained professionals in a field.  Bayesian reasoning is apparently one of those things which, like quantum mechanics or the Wason Selection Test, is inherently difficult for humans to grasp with our built-in mental faculties.\n\n\nOr so they claim.  Here you will find an attempt to offer an *intuitive* explanation of Bayesian reasoning – an excruciatingly gentle introduction that invokes all the human ways of grasping numbers, from natural frequencies to spatial visualization.  The intent is to convey, not abstract rules for manipulating numbers, but what the numbers mean, and why the rules are what they are (and cannot possibly be anything else).  When you are finished reading this page, you will see Bayesian problems in your dreams.\n\n\nAnd let’s begin.\n\n\n\n\n---\n\n\nHere’s a story problem about a situation that doctors often encounter:\n\n\n1% of women at age forty who participate in routine screening have breast cancer.  80% of women with breast cancer will get positive mammographies.  9.6% of women without breast cancer will also get positive mammographies.  A woman in this age group had a positive mammography in a routine screening.  What is the probability that she actually has breast cancer?\n\n\nWhat do you think the answer is?  If you haven’t encountered this kind of problem before, please take a moment to come up with your own answer before continuing.\n\n\n\n\n---\n\n\nNext, suppose I told you that most doctors get the same wrong answer on this problem – usually, only around 15% of doctors get it right.  (“Really?  15%?  Is that a real number, or an urban legend based on an Internet poll?”  It’s a real number.  See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies.  It’s a surprising result which is easy to replicate, so it’s been extensively replicated.)\n\n\nDo you want to think about your answer again?  Here’s a Javascript calculator if you need one.  This calculator has the usual precedence rules; multiplication before addition and so on.  If you’re not sure, I suggest using parentheses.\n\n\nCalculator:  Result:  \n\n\n\n\n---\n\n\nOn the story problem above, most doctors estimate the probability to be between 70% and 80%, which is wildly incorrect.\n\n\nHere’s an alternate version of the problem on which doctors fare somewhat better:\n\n\n10 out of 1000 women at age forty who participate in routine screening have breast cancer.  800 out of 1000 women with breast cancer will get positive mammographies.  96 out of 1000 women without breast cancer will also get positive mammographies.  If 1000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?\n\n\nCalculator:  Result:  \n\n\n\n\n---\n\n\nAnd finally, here’s the problem on which doctors fare best of all, with 46% – nearly half – arriving at the correct answer:\n\n\n100 out of 10,000 women at age forty who participate in routine screening have breast cancer.  80 of every 100 women with breast cancer will get a positive mammography.  950 out of  9,900 women without breast cancer will also get a positive mammography.  
If 10,000 women in this age group undergo a routine screening, about what fraction of women with positive mammographies will actually have breast cancer?\n\n\nCalculator:  Result:  \n\n\n\n\n---\n\n\nThe correct answer is 7.8%, obtained as follows:  Out of 10,000 women, 100 have breast cancer; 80 of those 100 have positive mammographies.  From the same 10,000 women, 9,900 will not have breast cancer and of those 9,900 women, 950 will also get positive mammographies.  This makes the total number of women with positive mammographies 950+80 or 1,030.  Of those 1,030 women with positive mammographies, 80 will have cancer.  Expressed as a proportion, this is 80/1,030 or 0.07767 or 7.8%.\n\n\nTo put it another way, before the mammography screening, the 10,000 women can be divided into two groups:\n\n\n* Group 1:  100 women *with* breast cancer.\n* Group 2:  9,900 women *without* breast cancer.\n\n\nSumming these two groups gives a total of 10,000 patients, confirming that none have been lost in the math.  After the mammography, the women can be divided into four groups:\n\n\n* Group A:  80 women *with* breast cancer, and a *positive* mammography.\n* Group B:  20 women *with* breast cancer, and a *negative* mammography.\n* Group C:  950 women *without*   breast cancer, and a *positive* mammography.\n* Group D:  8,950 women *without* breast cancer, and a *negative* mammography.\n\n\nCalculator:  Result:  As you can check, the sum of all four groups is still 10,000.  The sum of groups A and B, the groups with breast cancer, corresponds to group 1; and the sum of groups C and D, the groups without breast cancer, corresponds to group 2; so administering a mammography does not actually *change* the number of women with breast cancer.  The proportion of the cancer patients (A + B) within the complete set of patients (A + B + C + D) is the same as the 1% prior chance that a woman has cancer: (80 + 20) / (80 + 20 + 950 + 8950) = 100 / 10000 = 1%.\n\n\nThe proportion of cancer patients with positive results, within the group of *all* patients with positive results, is the proportion of (A) within (A + C):   80 / (80 + 950) = 80 / 1030 = 7.8%.  If you administer a mammography to 10,000 patients, then out of the 1030 with positive mammographies, 80 of those positive-mammography patients will have cancer.  This is the correct answer, the answer a doctor should give a positive-mammography patient if she asks about the chance she has breast cancer; if thirteen patients ask this question, roughly 1 out of those 13 will have cancer.\n\n\n\n\n---\n\n\nThe most common mistake is to ignore the original fraction of women with breast cancer, and the fraction of women without breast cancer who receive false positives, and focus only on the fraction of women with breast cancer who get positive results.  
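(If you would like to double-check the 7.8% before we look at how people usually go wrong, the group arithmetic above fits in a few lines of Python. This is an illustrative sketch of my own, not part of the original problem; the variable names are mine.)

```python
# The same bookkeeping as above: count the four groups out of 10,000 women,
# then take the share of true positives among all positive mammographies.
women = 10_000
with_cancer = women * 0.01              # Group 1: 100
without_cancer = women - with_cancer    # Group 2: 9,900

true_positives = with_cancer * 0.80        # Group A: 80
false_negatives = with_cancer * 0.20       # Group B: 20
false_positives = without_cancer * 0.096   # Group C: 950.4 (the text rounds to 950)
true_negatives = without_cancer * 0.904    # Group D: ~8,950

all_positives = true_positives + false_positives
print(true_positives / all_positives)      # ~0.078, i.e. roughly 7.8%
```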
For example, the vast majority of doctors in these studies seem to have thought that if around 80% of women with breast cancer have positive mammographies, then the probability of a women with a positive mammography having breast cancer must be around 80%.\n\n\nFiguring out the final answer always requires *all three* pieces of information – the percentage of women with breast cancer, the percentage of women without breast cancer who receive false positives, and the percentage of women with breast cancer who receive (correct) positives.\n\n\nTo see that the final answer always depends on the original fraction of women with breast cancer, consider an alternate universe in which only one woman out of a million has breast cancer.  Even if mammography in this world  detects breast cancer in 8 out of 10 cases, while returning a false positive on a woman without breast cancer in only 1 out of 10 cases, there will still be a hundred thousand false positives for every real case of cancer detected.  The original probability that a woman has cancer is so extremely low that, although a positive result on the mammography does *increase* the estimated probability, the probability isn’t increased to certainty or even “a noticeable chance”; the probability goes from 1:1,000,000 to 1:100,000.\n\n\nSimilarly, in an alternate universe where only one out of a million women does *not* have breast cancer, a positive result on the patient’s mammography obviously doesn’t mean that she has an 80% chance of having breast cancer!  If this were the case her estimated probability of having cancer would have been revised drastically *downward* after she got a *positive* result on her mammography – an 80% chance of having cancer is a lot less than 99.9999%!  If you administer mammographies to ten million women in this world, around eight million women with breast cancer will get correct positive results, while one woman without breast cancer will get false positive results.  Thus, if you got a positive mammography in this alternate universe, your chance of having cancer would go from 99.9999% up to 99.999987%.  That is, your chance of being healthy would go from 1:1,000,000 down to 1:8,000,000.\n\n\nThese two extreme examples help demonstrate that the mammography result doesn’t *replace* your old information about the patient’s chance of having cancer; the mammography *slides* the estimated probability in the direction of the result.  A positive result slides the original probability upward; a negative result slides the probability downward.  For example, in the original problem where 1% of the women have cancer, 80% of women with cancer get positive mammographies, and 9.6% of women without cancer get positive mammographies, a positive result on the mammography *slides* the 1% chance upward to 7.8%.\n\n\nMost people encountering problems of this type for the first time carry out the mental operation of *replacing* the original 1% probability with the 80% probability that a woman with cancer gets a positive mammography.  It may seem like a good idea, but it just doesn’t work.  “The probability that a woman with a positive mammography has breast cancer” is not at all the same thing as “the probability that a woman with breast cancer has a positive mammography”; they are as unlike as apples and cheese.  
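(A small Python sketch, my own illustration with invented names, makes the asymmetry concrete: p(positive|cancer) stays fixed at 80% across all of these imagined worlds, while p(cancer|positive) swings from tiny to near-certain depending on the prior.)

```python
def p_cancer_given_positive(prior, p_pos_given_cancer=0.80, p_pos_given_healthy=0.096):
    """Bayes' Theorem: revised probability of cancer after a positive mammography."""
    positive_and_cancer = prior * p_pos_given_cancer
    positive_and_healthy = (1 - prior) * p_pos_given_healthy
    return positive_and_cancer / (positive_and_cancer + positive_and_healthy)

# p(positive|cancer) is 80% in every one of these imagined worlds...
for prior in (0.01, 1e-6, 1 - 1e-6):
    print(prior, p_cancer_given_positive(prior))
# ...but p(cancer|positive) comes out near 7.8%, then about 1 in 120,000,
# then better than 99.9999%: the answer depends on the prior as well as the test.
```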
Finding the final answer, “the probability that a woman with a positive mammography has breast cancer”, uses all three pieces of problem information – “the prior probability that a woman has breast cancer”, “the probability that a woman with breast cancer gets a positive mammography”, and “the probability that a woman without breast cancer gets a positive mammography”.\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.  What is the Bayesian Conspiracy?****A.** The Bayesian Conspiracy is a multinational, interdisciplinary, and shadowy group of scientists that controls publication, grants, tenure, and the illicit traffic in grad students.  The best way to be accepted into the Bayesian Conspiracy is to join the Campus Crusade for Bayes in high school or college, and gradually work your way up to the inner circles.  It is rumored that at the upper levels of the Bayesian Conspiracy exist nine silent figures known only as the Bayes Council. |\n\n\n\n\n---\n\n\nTo see that the final answer always depends on the chance that a woman *without* breast cancer gets a positive mammography, consider an alternate test, mammography+.  Like the original test, mammography+ returns positive for 80% of women with breast cancer.  However, mammography+ returns a positive result for only one out of a million women without breast cancer – mammography+ has the same rate of false negatives, but a vastly lower rate of false positives.  Suppose a patient receives a positive mammography+.  What is the chance that this patient has breast cancer?  Under the new test, it is a virtual certainty – 99.988%, i.e., a 1 in 8082 chance of being healthy.\n\n\nCalculator:  Result:   \nRemember, at this point, that neither mammography nor mammography+ actually *change* the number of women who have breast cancer.  It may seem like “There is a virtual certainty you have breast cancer” is a terrible thing to say, causing much distress and despair; that the more hopeful verdict of the previous mammography test – a 7.8% chance of having breast cancer – was much to be preferred.  This comes under the heading of “Don’t shoot the messenger”.  The number of women who really do have cancer stays exactly the same between the two cases.  Only the accuracy with which we *detect* cancer changes.  Under the previous mammography test, 80 women with cancer (who *already* had cancer, before the mammography) are first told that they have a 7.8% chance of having cancer, creating X amount of uncertainty and fear, after which more detailed tests will inform them that they definitely do have breast cancer.  The old mammography test also involves informing 950 women *without* breast cancer that they have a 7.8% chance of having cancer, thus creating twelve times as much additional fear and uncertainty.  The new test, mammography+, does *not* give 950 women false positives, and the 80 women with cancer are told the same facts they would have learned eventually, only earlier and without an intervening period of uncertainty.  Mammography+ is thus a better test in terms of its total emotional impact on patients, as well as being more accurate.  Regardless of its emotional impact, it remains a fact that a patient with positive mammography+ has a 99.988% chance of having breast cancer.\n\n\nOf course, that mammography+ does *not* give 950 healthy women false positives means that all 80 of the patients with positive mammography+ will be patients with breast cancer.  
Thus, if you have a positive mammography+, your chance of having cancer is a virtual certainty.  It is *because* mammography+ does not generate as many false positives (and needless emotional stress), that the (much smaller) group of patients who *do* get positive results will be composed almost entirely of genuine cancer patients (who have bad news coming to them regardless of when it arrives).\n\n\n\n\n---\n\n\nSimilarly, let’s suppose that we have a *less* discriminating test, mammography\\*, that still has a 20% rate of false negatives, as in the original case.  However, mammography\\* has an 80% rate of false positives.  In other words, a patient *without* breast cancer has an 80% chance of getting a false positive result on her mammography\\* test.  If we suppose the same 1% prior probability that a patient presenting herself for screening has breast cancer, what is the chance that a patient with positive mammography\\* has cancer?\n\n\n* Group 1:  100 patients with breast cancer.\n* Group 2:  9,900 patients without breast cancer.\n\n\nAfter mammography\\* screening:\n\n\n* Group A:  80 patients with breast cancer and a “positive” mammography\\*.\n* Group B:  20 patients with breast cancer and a “negative” mammography\\*.\n* Group C:  7920 patients without breast cancer and a “positive” mammography\\*.\n* Group D:  1980 patients without breast cancer and a “negative” mammography\\*.\n\n\nCalculator:  Result:   \nThe result works out to 80 / 8,000, or 0.01.  This is exactly the same as the 1% prior probability that a patient has breast cancer!  A “positive” result on mammography\\* doesn’t change the probability that a woman has breast cancer at all.  You can similarly verify that a “negative” mammography\\* also counts for nothing.  And in fact it *must* be this way, because if mammography\\* has an 80% hit rate for patients with breast cancer, and also an 80% rate of false positives for patients without breast cancer, then mammography\\* is completely *uncorrelated* with breast cancer.  There’s no reason to call one result “positive” and one result “negative”; in fact, there’s no reason to call the test a “mammography”.  You can throw away your expensive mammography\\* equipment and replace it with a random number generator that outputs a red light 80% of the time and a green light 20% of the time; the results will be the same.  Furthermore, there’s no reason to call the red light a “positive” result or the green light a “negative” result.  
You could have a green light 80% of the time and a red light 20% of the time, or a blue light 80% of the time and a purple light 20% of the time, and it would all have the same bearing on whether the patient has breast cancer: i.e., no bearing whatsoever.\n\n\nWe can show algebraically that this *must* hold for any case where the chance of a true positive and the chance of a false positive are the same, i.e:\n\n\n* Group 1:  100 patients with breast cancer.\n* Group 2:  9,900 patients without breast cancer.\n\n\nNow consider a test where the probability of a true positive and the probability of a false positive are the same number M (in the example above, M=80% or M = 0.8):\n\n\n* Group A:  100\\*M patients with breast cancer and a “positive” result.\n* Group B:  100\\*(1 – M) patients with breast cancer and a “negative” result.\n* Group C:  9,900\\*M patients without breast cancer and a “positive” result.\n* Group D:  9,900\\*(1 – M) patients without breast cancer and a “negative” result.\n\n\nThe proportion of patients with breast cancer, within the group of patients with a “positive” result, then equals 100\\*M / (100\\*M + 9900\\*M) = 100 / (100 + 9900) = 1%.  This holds true regardless of whether M is 80%, 30%, 50%, or 100%.  If we have a mammography\\* test that returns “positive” results for 90% of patients with breast cancer and returns “positive” results for 90% of patients without breast cancer, the proportion of “positive”-testing patients who have breast cancer will still equal the original proportion of patients with breast cancer, i.e., 1%.\n\n\nYou can run through the same algebra, replacing the prior proportion of patients with breast cancer with an arbitrary percentage P:\n\n\n* Group 1:  Within some number of patients, a fraction P have breast cancer.\n* Group 2:  Within some number of patients, a fraction (1 – P) do not have breast cancer.\n\n\nAfter a “cancer test” that returns “positive” for a fraction M of patients with breast cancer, and also returns “positive” for the same fraction M of patients *without* cancer:\n\n\n* Group A:  P\\*M patients have breast cancer and a “positive” result.\n* Group B:  P\\*(1 – M) patients have breast cancer and a “negative” result.\n* Group C:  (1 – P)\\*M patients have no breast cancer and a “positive” result.\n* Group D:  (1 – P)\\*(1 – M) patients have no breast cancer and a “negative” result.\n\n\nThe chance that a patient with a “positive” result has breast cancer is then the proportion of group A within the combined group A + C, or P\\*M / [P\\*M + (1 – P)\\*M], which, cancelling the common factor M from the numerator and denominator, is P / [P + (1 – P)] or P / 1 or just P.  If the rate of false positives is the same as the rate of true positives, you always have the same probability after the test as when you started.\n\n\nWhich is common sense.  Take, for example, the “test” of flipping a coin; if the coin comes up heads, does it tell you anything about whether a patient has breast cancer?  No; the coin has a 50% chance of coming up heads if the patient has breast cancer, and also a 50% chance of coming up heads if the patient does not have breast cancer.  Therefore there is no reason to call either heads or tails a “positive” result.  It’s not the probability being “50/50” that makes the coin a bad test; it’s that the two probabilities, for “cancer patient turns up heads” and “healthy patient turns up heads”, are the same.  
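(The same point can be checked numerically. The sketch below is mine, not anything from the original essay: whenever the hit rate and the false-positive rate are the same number M, the revised probability comes out equal to the prior, whatever M and the prior happen to be.)

```python
def posterior(prior, hit_rate, false_positive_rate):
    """Revised probability of the condition after a 'positive' result."""
    hit = prior * hit_rate
    return hit / (hit + (1 - prior) * false_positive_rate)

# Whenever the two conditional probabilities are the same number M,
# the revised probability equals the prior, for any M and any prior.
for prior in (0.01, 0.25, 0.40, 0.999):
    for m in (0.5, 0.6, 0.8, 1.0):   # includes the fair coin and the 60% coin
        assert abs(posterior(prior, m, m) - prior) < 1e-12
print("hit rate == false-positive rate  =>  posterior == prior")
```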
If the coin was slightly biased, so that it had a 60% chance of coming up heads, it still wouldn’t be a cancer test – what makes a coin a poor test is not that it has a 50/50 chance of coming up heads if the patient has cancer, but that it also has a 50/50 chance of coming up heads if the patient does not have cancer.  You can even use a test that comes up “positive” for cancer patients 100% of the time, and still not learn anything.  An example of such a test is “Add 2 + 2 and see if the answer is 4.”  This test returns positive 100% of the time for patients with breast cancer.  It also returns positive 100% of the time for patients without breast cancer.  So you learn nothing.\n\n\nThe original proportion of patients with breast cancer is known as the *prior probability.*   The chance that a patient with breast cancer gets a positive mammography, and the chance that a patient without breast cancer gets a positive mammography, are known as the two *conditional probabilities.*  Collectively, this initial information is known as *the priors**.*   The final answer – the estimated probability that a patient has breast cancer, given that we know she has a positive result on her mammography – is known as the *revised probability* or the *posterior probability.*   What we’ve just shown is that *if the two conditional probabilities are equal, the posterior probability equals the prior probability.*\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.  How can I find the priors for a problem?****A.**   Many commonly used priors are listed in the *Handbook of Chemistry and Physics.***Q.  Where do priors** ***originally*** **come from?****A.**   Never ask that question.**Q.  Uh huh.  Then where do scientists get their priors?****A.**   Priors for scientific problems are established by annual vote of the AAAS.  In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations.  This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time.  No one is really sure.**Q.  I see.  And where does everyone else get their priors?****A.**   They download their priors from Kazaa.**Q.  What if the priors I want aren’t available on Kazaa?****A.**   There’s a small, cluttered antique shop in a back alley of San Francisco’s Chinatown.  *Don’t ask about the bronze rat.* |\n\n\nActually, priors are true or false just like the final answer – they reflect reality and can be judged by comparing them against reality.  For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong.  For our particular problem, the priors might have been established by three studies – a study on the case histories of women with breast cancer to see how many of them tested positive on a mammography, a study on women without breast cancer to see how many of them test positive on a mammography, and an epidemiological study on the prevalence of breast cancer in some specific demographic.\n\n\n\n\n---\n\n\nSuppose that a barrel contains many small plastic eggs.  Some eggs are painted red and some are painted blue.  40% of the eggs in the bin contain pearls, and 60% contain nothing.   30% of eggs containing pearls are painted blue, and 10% of eggs containing nothing are painted blue.  What is the probability that a blue egg contains a pearl?  
For this example the arithmetic is simple enough that you may be able to do it in your head, and I would suggest trying to do so.\n\n\nBut just in case…  Result:  A more compact way of specifying the problem:\n\n\n* p(pearl) = 40%\n* p(blue|pearl) = 30%\n* p(blue|~pearl) = 10%\n* p(pearl|blue) = ?\n\n\n“~” is shorthand for “not”, so ~pearl reads “not pearl”.\n\n\nblue|pearl is shorthand for “blue given pearl” or “the probability that an egg is painted blue, given that the egg contains a pearl”.  One thing that’s confusing about this notation is that the order of implication is read right-to-left, as in Hebrew or Arabic.  blue|pearl means “blue <- pearl”, the degree to which pearl-ness implies blue-ness, not the degree to which blue-ness implies pearl-ness.  This is confusing, but it’s unfortunately the standard notation in probability theory.\n\n\nReaders familiar with quantum mechanics will have already encountered this peculiarity; in quantum mechanics, for example, ⟨d|c⟩⟨c|b⟩⟨b|a⟩ reads as “the probability that a particle at A goes to B, then to C, ending up at D”.  To follow the particle, you move your eyes from right to left.  Reading from left to right, “|” means “given”; reading from right to left, “|” means “implies” or “leads to”.  Thus, moving your eyes from left to right, blue|pearl reads “blue given pearl” or “the probability that an egg is painted blue, given that the egg contains a pearl”.  Moving your eyes from right to left, blue|pearl reads “pearl implies blue” or “the probability that an egg containing a pearl is painted blue”.\n\n\nThe item on the right side is what you *already know* or the *premise,* and the item on the left side is the *implication* or *conclusion.*   If we have p(blue|pearl) = 30% , and we *already know* that some egg contains a pearl, then we can *conclude* there is a 30% chance that the egg is painted blue.  Thus, the final fact we’re looking for – “the chance that a blue egg contains a pearl” or “the probability that an egg contains a pearl, if we know the egg is painted blue” – reads p(pearl|blue) .\n\n\nLet’s return to the problem.  We have that 40% of the eggs contain pearls, and 60% of the eggs contain nothing.  30% of the eggs containing pearls are painted blue, so 12% of the eggs altogether contain pearls and are painted blue.  10% of the eggs containing nothing are painted blue, so altogether 6% of the eggs contain nothing and are painted blue.  A total of 18% of the eggs are painted blue, and a total of 12% of the eggs are painted blue and contain pearls, so the chance a blue egg contains a pearl is 12/18 or 2/3 or around 67%.\n\n\nThe applet below, courtesy of Christian Rovner, shows a graphic representation of this problem:\n\n\nLooking at this applet, it’s easier to see why the final answer depends on all three probabilities; it’s the *differential pressure* between the two conditional probabilities,  p(blue|pearl) and p(blue|~pearl) , that *slides* the prior probability p(pearl) to the posterior probability p(pearl|blue) .\n\n\nAs before, we can see the necessity of all three pieces of information by considering extreme cases (feel free to type them into the applet).  
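(If the applet will not load for you, a tiny Python stand-in will do for typing in the same extreme cases. This is my own sketch, not Christian Rovner's applet.)

```python
def slide(prior, p_blue_given_pearl, p_blue_given_empty):
    """Posterior probability that an egg contains a pearl, given that it is blue."""
    blue_pearl = prior * p_blue_given_pearl
    blue_empty = (1 - prior) * p_blue_given_empty
    return blue_pearl / (blue_pearl + blue_empty)

print(slide(0.40, 0.30, 0.10))    # 0.666...  the 40% prior slides up to about 67%
print(slide(0.001, 0.30, 0.10))   # ~0.003    a 0.1% prior only slides up to about 0.3%
print(slide(0.999, 0.30, 0.10))   # ~0.99966  a 99.9% prior slides up to about 99.966%
```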
In a (large) barrel in which only one egg out of a thousand contains a pearl, knowing that an egg is painted blue slides the probability from 0.1% to 0.3% (instead of sliding the probability from 40% to 67%).  Similarly, if 999 out of 1000 eggs contain pearls, knowing that an egg is blue slides the probability from 99.9% to 99.966%; the probability that the egg does *not* contain a pearl goes from 1/1000 to around 1/3000.  Even when the prior probability changes, the differential pressure of the two conditional probabilities always slides the probability in the same *direction.*   If you learn the egg is painted blue, the probability the egg contains a pearl always goes *up* – but it goes up *from* the prior probability, so you need to know the prior probability in order to calculate the final answer.  0.1% goes up to 0.3%, 10% goes up to 25%, 40% goes up to 67%, 80% goes up to 92%, and 99.9% goes up to 99.966%.  If you’re interested in knowing how any other probabilities slide, you can type your own prior probability into the Java applet.  You can also click and drag the dividing line between pearl and ~pearl in the upper bar, and watch the posterior probability change in the bottom bar.\n\n\nStudies of clinical reasoning show that most doctors carry out the mental operation of *replacing* the original 1% probability with the 80% probability that a woman with cancer would get a positive mammography.  Similarly, on the pearl-egg problem, most respondents unfamiliar with Bayesian reasoning would probably respond that the probability a blue egg contains a pearl is 30%, or perhaps 20% (the 30% chance of a true positive minus the 10% chance of a false positive).  Even if this mental operation seems like a good idea at the time, it makes no sense in terms of the question asked.  It’s like the experiment in which you ask a second-grader:  “If eighteen people get on a bus, and then seven more people get on the bus, how old is the bus driver?”  Many second-graders will respond:  “Twenty-five.”  They understand when they’re being prompted to carry out a particular mental procedure, but they haven’t quite connected the procedure to reality.  Similarly, to find the probability that a woman with a positive mammography has breast cancer, it makes no sense whatsoever to *replace* the original probability that the woman has cancer with the probability that a woman with breast cancer gets a positive mammography.  Neither can you subtract the probability of a false positive from the probability of the true positive.  These operations are as wildly irrelevant as adding the number of people on the bus to find the age of the bus driver.\n\n\n\n\n---\n\n\nI keep emphasizing the idea that evidence *slides* probability because of research that shows people tend to use spatial intutions to grasp numbers.  In particular, there’s interesting evidence that we have an innate sense of quantity that’s localized to left inferior parietal cortex – patients with damage to this area can selectively lose their sense of whether 5 is less than 8, while retaining their ability to read, write, and so on.  (Yes, really!)  The parietal cortex processes our sense of where things are in space (roughly speaking), so an innate “number line”, or rather “quantity line”, may be responsible for the human sense of numbers.  This is why I suggest visualizing Bayesian evidence as *sliding* the probability along the number line; my hope is that this will translate Bayesian reasoning into something that makes sense to innate human brainware.  
(That, really, is what an “intuitive explanation” *is.* )  For more information, see Stanislas Dehaene’s *The Number Sense.*\n\n\n\n\n---\n\n\nA study by Gigerenzer and Hoffrage in 1995 showed that some ways of phrasing story problems are much more evocative of correct Bayesian reasoning.  The *least* evocative phrasing used probabilities.  A slightly more evocative phrasing used frequencies instead of probabilities; the problem remained the same, but instead of saying that 1% of women had breast cancer, one would say that 1 out of 100 women had breast cancer, that 80 out of 100 women with breast cancer would get a positive mammography, and so on.  Why did a higher proportion of subjects display Bayesian reasoning on this problem?  Probably because saying “1 out of 100 women” encourages you to concretely visualize X women with cancer, leading you to visualize X women with cancer and a positive mammography, etc.\n\n\nThe most effective presentation found so far is what’s known as *natural frequencies* – saying that 40 out of 100 eggs contain pearls, 12 out of 40 eggs containing pearls are painted blue, and 6 out of 60 eggs containing nothing are painted blue.  A *natural frequencies* presentation is one in which the information about the prior probability is included in presenting the conditional probabilities.  If you were just learning about the eggs’ conditional probabilities through natural experimentation, you would – in the course of cracking open a hundred eggs – crack open around 40 eggs containing pearls, of which 12 eggs would be painted blue, while cracking open 60 eggs containing nothing, of which about 6 would be painted blue.  In the course of learning the conditional probabilities, you’d see examples of blue eggs containing pearls about twice as often as you saw examples of blue eggs containing nothing.\n\n\nIt may seem like presenting the problem in this way is “cheating”, and indeed if it were a story problem in a math book, it probably *would* be cheating.  However, if you’re talking about real doctors, you *want* to cheat; you *want* the doctors to draw the right conclusions as easily as possible.  The obvious next move would be to present all medical statistics in terms of natural frequencies.  Unfortunately, while natural frequencies are a step in the right direction, it probably won’t be enough.  When problems are presented in natural frequencies, the proportion of people using Bayesian reasoning rises to around half.  A big improvement, but not big enough when you’re talking about real doctors and real patients.\n\n\nA presentation of the problem in *natural frequencies* might be visualized like this:\n\n\nIn the frequency visualization, the *selective attrition* of the two conditional probabilities changes the *proportion* of eggs that contain pearls.  The bottom bar is shorter than the top bar, just as the number of eggs painted blue is less than the total number of eggs.  The probability graph shown earlier is really just the frequency graph with the bottom bar “renormalized”, stretched out to the same length as the top bar.  In the frequency applet you can change the conditional probabilities by clicking and dragging the left and right edges of the graph.  
(For example, to change the conditional probability blue|pearl , click and drag the line on the left that stretches from the left edge of the top bar to the left edge of the bottom bar.)\n\n\nIn the probability applet, you can see that when the conditional probabilities are equal, there’s no *differential* pressure – the arrows are the same size – so the prior probability doesn’t slide between the top bar and the bottom bar.  But the bottom bar in the probability applet is just a renormalized (stretched out) version of the bottom bar in the frequency applet, and the frequency applet shows *why* the probability doesn’t slide if the two conditional probabilities are equal.  Here’s a case where the prior proportion of pearls remains 40%, and the proportion of pearl eggs painted blue remains 30%, but the number of empty eggs painted blue is also 30%:\n\n\nIf you diminish two shapes by the same factor, their relative proportion will be the same as before.  If you diminish the left section of the top bar by the same factor as the right section, then the bottom bar will have the same proportions as the top bar – it’ll just be smaller.  If the two conditional probabilities are equal, learning that the egg is blue doesn’t change the probability that the egg contains a pearl – for the same reason that similar triangles have identical angles; geometric figures don’t change shape when you shrink them by a constant factor.\n\n\nIn this case, you might as well just say that *30% of eggs are painted blue,* since the probability of an egg being painted blue is independent of whether the egg contains a pearl.  Applying a “test” that is statistically independent of its condition just shrinks the sample size.  In this case, requiring that the egg be painted blue doesn’t shrink the group of eggs with pearls any more or less than it shrinks the group of eggs without pearls.  It just shrinks the total number of eggs in the sample.\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.  Why did the Bayesian reasoner cross the road?****A.** You need more information to answer this question. |\n\n\n\n\n---\n\n\nHere’s what the original medical problem looks like when graphed.  1% of women have breast cancer, 80% of those women test positive on a mammography, and 9.6% of women without breast cancer also receive positive mammographies.\n\n\nAs is now clearly visible, the mammography doesn’t increase the probability a positive-testing woman has breast cancer by increasing the number of women with breast cancer – of course not; if mammography increased the number of women with breast cancer, no one would ever take the test!  However, *requiring* a positive mammography is a membership test that *eliminates* many more women without breast cancer than women with cancer.  The number of women without breast cancer diminishes by a factor of more than ten, from 9,900 to 950, while the number of women with breast cancer is diminished only from 100 to 80.  Thus, the proportion of 80 within 1,030 is much larger than the proportion of 100 within 10,000.  In the graph, the left sector (representing women with breast cancer) is small, but the mammography test projects almost all of this sector into the bottom bar.  The right sector (representing women without breast cancer) is large, but the mammography test projects a much smaller fraction of this sector into the bottom bar.  
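(The graph itself is an image that has not survived in this version, so here is a rough Python stand-in of mine that prints the same top-bar / bottom-bar numbers as natural frequencies.)

```python
# The original mammography numbers, written out as counts of 10,000 women.
cancer, healthy = 100, 9_900                 # top bar: 1% with breast cancer
cancer_positive = round(cancer * 0.80)       # 80 projected into the bottom bar
healthy_positive = round(healthy * 0.096)    # 950 projected into the bottom bar
all_positive = cancer_positive + healthy_positive

print(f"top bar:    {cancer:>5} with cancer | {healthy:>5} without")
print(f"bottom bar: {cancer_positive:>5} with cancer | {healthy_positive:>5} without")
print(f"share of cancer within the bottom bar: {cancer_positive / all_positive:.1%}")
```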
There are, indeed, fewer women with breast cancer and positive mammographies than there are women with breast cancer – obeying the law of probabilities which requires that p(A) >= p(A&B) .  But even though the left sector in the bottom bar is actually slightly smaller, the proportion of the left sector *within* the bottom bar is greater – though still not very great.  If the bottom bar were renormalized to the same length as the top bar, it would look like the left sector had expanded.  This is why the proportion of “women with breast cancer” in the group “women with positive mammographies” is higher than the proportion of “women with breast cancer” in the general population – although the proportion is still not very high.  The evidence of the positive mammography slides the prior probability of 1% to the posterior probability of 7.8%.\n\n\n\n\n---\n\n\nSuppose there’s yet another variant of the mammography test, mammography@, which behaves as follows.  1% of women in a certain demographic have breast cancer.  Like ordinary mammography, mammography@ returns positive 9.6% of the time for women without breast cancer.  However, mammography@ returns positive 0% of the time (say, once in a billion) for women with breast cancer.  The graph for this scenario looks like this:\n\n\nWhat is it that this test actually does?  If a patient comes to you with a positive result on her mammography@, what do you say?\n\n\n\n\n---\n\n\n“Congratulations, you’re among the rare 9.5% of the population whose health is definitely established by this test.”\n\n\nMammography@ isn’t a cancer test; it’s a health test!  Few women without breast cancer get positive results on mammography@, but *only* women without breast cancer ever get positive results at all.  Not much of the right sector of the top bar projects into the bottom bar, but *none* of the left sector projects into the bottom bar.  So a positive result on mammography@ means you *definitely* don’t have breast cancer.\n\n\n\n\n---\n\n\nWhat makes ordinary mammography a *positive* indicator for breast cancer is not that someone *named* the result “positive”, but rather that the test result stands in a specific Bayesian relation to the condition of breast cancer.  You could call the same result “positive” or “negative” or “blue” or “red” or “James Rutherford”, or give it no name at all, and the test result would still slide the probability in exactly the same way.  To minimize confusion, a test result which slides the probability of breast cancer upward should be called “positive”.  A test result which slides the probability of breast cancer downward should be called “negative”.  If the test result is statistically unrelated to the presence or absence of breast cancer – if the two conditional probabilities are equal – then we shouldn’t call the procedure a “cancer test”!  The *meaning* of the test is determined by the two conditional probabilities; any names attached to the results are simply convenient labels.\n\n\n\n\n---\n\n\nThe bottom bar for the graph of mammography@ is small; mammography@ is a test that’s only rarely useful.  Or rather, the test only rarely gives *strong* evidence, and most of the time gives *weak* evidence.  A negative result on mammography@ does slide probability – it just doesn’t slide it very far.  Click the “Result” switch at the bottom left corner of the applet to see what a *negative* result on mammography@ would imply.  
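(With the applet gone, a few lines of Python will answer the same question. This is my own sketch, treating mammography@'s hit rate for cancer as exactly zero.)

```python
def posterior_given_result(prior, p_pos_given_cancer, p_pos_given_healthy, positive):
    """Revised probability of cancer after a positive or a negative test result."""
    like_cancer = p_pos_given_cancer if positive else 1 - p_pos_given_cancer
    like_healthy = p_pos_given_healthy if positive else 1 - p_pos_given_healthy
    joint_cancer = prior * like_cancer
    return joint_cancer / (joint_cancer + (1 - prior) * like_healthy)

# mammography@: essentially never positive for cancer, positive 9.6% of the time for health.
print(posterior_given_result(0.01, 0.0, 0.096, positive=True))    # 0.0    -> health established
print(posterior_given_result(0.01, 0.0, 0.096, positive=False))   # ~0.011 -> 1% slides up to ~1.1%
```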
You might intuit that since the test *could* have returned positive for health, but didn’t, then the failure of the test to return positive must mean that the woman has a higher chance of having breast cancer – that her probability of having breast cancer must be slid upward by the negative result on her health test.\n\n\nThis intuition is correct!  The sum of the groups with negative results and positive results must always equal the group of all women.  If the positive-testing group has “more than its fair share” of women *without* breast cancer, there must be an at least slightly higher proportion of women *with* cancer in the negative-testing group.  A positive result is rare but very strong evidence in one direction, while a negative result is common but very weak evidence in the opposite direction.  You might call this the Law of Conservation of Probability – not a standard term, but the conservation rule is exact.  If you take the revised probability of breast cancer after a positive result, times the *probability* of a positive result, and add that to the revised probability of breast cancer after a negative result, times the *probability* of a negative result, then you must always arrive at the prior probability.  If you don’t yet *know* what the test result is, the *expected revised probability*after the test result arrives – taking both possible results into account – should always equal the prior probability.\n\n\nOn ordinary mammography, the test is expected to return “positive” 10.3% of the time – 80 positive women with cancer plus 950 positive women without cancer equals 1030 women with positive results.  Conversely, the mammography should return negative 89.7% of the time:  100% – 10.3% = 89.7%.  A positive result slides the revised probability from 1% to 7.8%, while a negative result slides the revised probability from 1% to 0.22%.  So p(cancer|positive)\\*p(positive) + p(cancer|negative)\\*p(negative) = 7.8%\\*10.3% + 0.22%\\*89.7% = 1% = p(cancer) , as expected.\n\n\nCalculator:  Result:  \n\n\n\n\n---\n\n\nWhy “as expected”?  
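(Before taking that question apart, here is a short Python sketch, my own illustration rather than anything from the original essay, that derives every quantity in the table below from just the three numbers we started with, and checks the conservation rule numerically along the way.)

```python
p_cancer = 0.01
p_pos_given_cancer = 0.80
p_pos_given_healthy = 0.096

# The four joint probabilities (groups A, B, C, D as fractions of all women).
a = p_cancer * p_pos_given_cancer                # cancer & positive   ~ 0.008
b = p_cancer * (1 - p_pos_given_cancer)          # cancer & negative   ~ 0.002
c = (1 - p_cancer) * p_pos_given_healthy         # healthy & positive  ~ 0.095
d = (1 - p_cancer) * (1 - p_pos_given_healthy)   # healthy & negative  ~ 0.895

p_positive = a + c                               # ~0.103
p_cancer_given_positive = a / (a + c)            # ~0.078
p_cancer_given_negative = b / (b + d)            # ~0.0022

# Conservation: the expected revised probability equals the prior probability.
expected = (p_cancer_given_positive * p_positive
            + p_cancer_given_negative * (1 - p_positive))
assert abs(expected - p_cancer) < 1e-12
print(p_positive, p_cancer_given_positive, p_cancer_given_negative, expected)
```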
Let’s take a look at the quantities involved:\n\n\n\n\n| | | | |\n| --- | --- | --- | --- |\n| p(cancer): | 0.01 | | Group 1: 100 women with breast cancer |\n| p(~cancer): | 0.99 | | Group 2: 9900 women without breast cancer |\n| | | | |\n| p(positive|cancer): | 80.0% | | 80% of women with breast cancer have positive mammographies |\n| p(~positive|cancer): | 20.0% | | 20% of women with breast cancer have negative mammographies |\n| p(positive|~cancer): | 9.6% | | 9.6% of women without breast cancer have positive mammographies |\n| p(~positive|~cancer): | 90.4% | | 90.4% of women without breast cancer have negative mammographies |\n| | | | |\n| p(cancer&positive): | 0.008 | | Group A:  80 women with breast cancer and positive mammographies |\n| p(cancer&~positive): | 0.002 | | Group B: 20 women with breast cancer and negative mammographies |\n| p(~cancer&positive): | 0.095 | | Group C: 950 women without breast cancer and positive mammographies |\n| p(~cancer&~positive): | 0.895 | | Group D: 8950 women without breast cancer and negative mammographies |\n| | | | |\n| p(positive): | 0.103 | | 1030 women with positive results |\n| p(~positive): | 0.897 | | 8970 women with negative results |\n| | | | |\n| p(cancer|positive): | 7.80% | | Chance you have breast cancer if mammography is positive: 7.8% |\n| p(~cancer|positive): | 92.20% | | Chance you are healthy if mammography is positive: 92.2% |\n| p(cancer|~positive): | 0.22% | | Chance you have breast cancer if mammography is negative: 0.22% |\n| p(~cancer|~positive): | 99.78% | | Chance you are healthy if mammography is negative: 99.78% |\n\n\nOne of the common confusions in using Bayesian reasoning is to mix up some or all of these quantities – which, as you can see, are all numerically different and have different meanings.  p(A&B) is the same as p(B&A) , but p(A|B) is not the same thing as p(B|A) , and p(A&B) is completely different from p(A|B) .  (I don’t know who chose the symmetrical “|” symbol to mean “implies”, and then made the direction of implication right-to-left, but it was probably a bad idea.)\n\n\nTo get acquainted with all these quantities and the relationships between them, we’ll play “follow the degrees of freedom”.  For example, the two quantities p(cancer) and p(~cancer) have 1 degree of freedom between them, because of the general law p(A) + p(~A) = 1 .  If you know that p(~cancer) = .99 , you can obtain p(cancer) = 1 – p(~cancer) = .01 .  There’s no room to say that p(~cancer) = .99 and then also specify p(cancer) = .25 ; it would violate the rule p(A) + p(~A) = 1 .\n\n\np(positive|cancer) and p(~positive|cancer) also have only one degree of freedom between them; either a woman with breast cancer gets a positive mammography or she doesn’t.  On the other hand, p(positive|cancer) and p(positive|~cancer) have *two* degrees of freedom.  You can have a mammography test that returns positive for 80% of cancerous patients and 9.6% of healthy patients, or that returns positive for 70% of cancerous patients and 2% of healthy patients, or even a health test that returns “positive” for 30% of cancerous patients and 92% of healthy patients.  The two quantities, the output of the mammography test for cancerous patients and the output of the mammography test for healthy patients, are in mathematical terms independent; one cannot be obtained from the other in any way, and so they have two degrees of freedom between them.\n\n\nWhat about p(positive&cancer) , p(positive|cancer) , and p(cancer) ?  
Here we have three quantities; how many degrees of freedom are there?  In this case the equation that must hold is p(positive&cancer) = p(positive|cancer) \\* p(cancer) .  This equality reduces the degrees of freedom by one.  If we know the fraction of patients with cancer, and chance that a cancerous patient has a positive mammography, we can deduce the fraction of patients who have breast cancer *and* a positive mammography by multiplying.  You should recognize this operation from the graph; it’s the projection of the top bar into the bottom bar.  p(cancer) is the left sector of the top bar, and p(positive|cancer) determines how much of that sector projects into the bottom bar, and the left sector of the bottom bar is p(positive&cancer) .\n\n\nSimilarly, if we know the number of patients with breast cancer and positive mammographies, and also the number of patients with breast cancer, we can estimate the chance that a woman with breast cancer gets a positive mammography by dividing: p(positive|cancer) = p(positive&cancer) / p(cancer) .  In fact, this is exactly how such medical diagnostic tests are calibrated; you do a study on 8,520 women with breast cancer and see that there are 6,816 (or thereabouts) women with breast cancer *and* positive mammographies, then divide 6,816 by 8520 to find that 80% of women with breast cancer had positive mammographies.  (Incidentally, if you accidentally divide 8520 by 6,816 instead of the other way around, your calculations will start doing strange things, such as insisting that 125% of women with breast cancer and positive mammographies have breast cancer.  This is a common mistake in carrying out Bayesian arithmetic, in my experience.)  And finally, if you know p(positive&cancer) and p(positive|cancer) , you can deduce how many cancer patients there must have been originally.  There are two degrees of freedom shared out among the three quantities; if we know any two, we can deduce the third.\n\n\nHow about p(positive) , p(positive&cancer) , and p(positive&~cancer) ?  Again there are only two degrees of freedom among these three variables.  The equation occupying the extra degree of freedom is p(positive) = p(positive&cancer) + p(positive&~cancer) .  This is how p(positive) is computed to begin with; we figure out the number of women with breast cancer who have positive mammographies, and the number of women without breast cancer who have positive mammographies, then add them together to get the total number of women with positive mammographies.  It would be very strange to go out and conduct a study to determine the number of women with positive mammographies – just that one number and nothing else – but in theory you could do so.  And if you then conducted another study and found the number of those women who had positive mammographies *and* breast cancer, you would also know the number of women with positive mammographies and *no* breast cancer – either a woman with a positive mammography has breast cancer or she doesn’t.  In general, p(A&B) + p(A&~B) = p(A) .  Symmetrically, p(A&B) + p(~A&B) = p(B) . \n \nWhat about p(positive&cancer) , p(positive&~cancer) , p(~positive&cancer) , and p(~positive&~cancer) ?  You might at first be tempted to think that there are only two degrees of freedom for these four quantities – that you can, for example, get p(positive&~cancer) by multiplying p(positive) \\* p(~cancer) , and thus that all four quantities can be found given only the two quantities p(positive) and p(cancer) .  This is not the case!  
p(positive&~cancer) = p(positive) \\* p(~cancer) only if the two probabilities are *statistically independent* – if the chance that a woman has breast cancer has no bearing on whether she has a positive mammography.  As you’ll recall, this amounts to requiring that the two conditional probabilities be equal to each other – a requirement which would eliminate one degree of freedom.  If you remember that these four quantities are the groups A, B, C, and D, you can look over those four groups and realize that, in theory, you can put any number of people into the four groups.  If you start with a group of 80 women with breast cancer and positive mammographies, there’s no reason why you can’t add another group of 500 women with breast cancer and negative mammographies, followed by a group of 3 women without breast cancer and negative mammographies, and so on.  So now it seems like the four quantities have four degrees of freedom.  And they would, except that in expressing them as *probabilities,* we need to normalize them to *fractions* of the complete group, which adds the constraint that p(positive&cancer) + p(positive&~cancer) + p(~positive&cancer) + p(~positive&~cancer) = 1 .  This equation takes up one degree of freedom, leaving three degrees of freedom among the four quantities.  If you specify the *fractions* of women in groups A, B, and D, you can deduce the fraction of women in group C.\n\n\n\n\n---\n\n\nGiven the four groups A, B, C, and D, it is very straightforward to compute everything else:  p(cancer) = A + B , p(~positive|cancer) = B / (A + B) , and so on.  Since ABCD contains three degrees of freedom, it follows that the entire set of 16 probabilities contains only three degrees of freedom.  Remember that in our problems we always needed *three* pieces of information – the prior probability and the two conditional probabilities – which, indeed, have three degrees of freedom among them.  Actually, for Bayesian problems, *any* three quantities with three degrees of freedom between them should logically specify the entire problem.  For example, let’s take a barrel of eggs with p(blue) = 0.40 ,  p(blue|pearl) = 5/13 , and p(~blue&~pearl) = 0.20 .  Given this information, you *can* compute p(pearl|blue) .\n\n\nAs a story problem: \nSuppose you have a large barrel containing a number of plastic eggs.  Some eggs contain pearls, the rest contain nothing.  Some eggs are painted blue, the rest are painted red.  Suppose that 40% of the eggs are painted blue, 5/13 of the eggs containing pearls are painted blue, and 20% of the eggs are both empty and painted red.  What is the probability that an egg painted blue contains a pearl?\n\n\nTry it – I assure you it is possible.\n\n\nCalculator:  Result:  You probably shouldn’t try to solve this with just a Javascript calculator, though.  I used a Python console.  (In theory, pencil and paper should also work, but I don’t know anyone who owns a pencil so I couldn’t try it personally.)\n\n\nAs a check on your calculations, does the (meaningless) quantity p(~pearl|~blue)/p(pearl) roughly equal .51?  (In story problem terms:  The likelihood that a red egg is empty, divided by the likelihood that an egg contains a pearl, equals approximately .51.)  Of course, using this information in the problem would be cheating.\n\n\nIf you can solve *that* problem, then when we revisit Conservation of Probability, it seems perfectly straightforward.  Of course the mean revised probability, after administering the test, must be the same as the prior probability.  
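(As an aside, since a Python console was mentioned for the barrel puzzle above: a session of that sort might look like the sketch below. It is my own working, and it does give away the answer, so skip it if you are still trying the puzzle yourself.)

```python
from fractions import Fraction

# Known: p(blue) = 0.40, p(blue|pearl) = 5/13, p(~blue & ~pearl) = 0.20.
p_blue = Fraction(2, 5)
p_blue_given_pearl = Fraction(5, 13)
p_notblue_and_notpearl = Fraction(1, 5)

# p(~blue) = p(~blue & pearl) + p(~blue & ~pearl), and
# p(~blue & pearl) = p(pearl) * (1 - p(blue|pearl)), so p(pearl) falls out:
p_pearl = ((1 - p_blue) - p_notblue_and_notpearl) / (1 - p_blue_given_pearl)
p_pearl_given_blue = p_pearl * p_blue_given_pearl / p_blue

print(p_pearl)              # 13/20, i.e. 0.65
print(p_pearl_given_blue)   # 5/8,   i.e. 0.625

# The check mentioned in the text: p(~pearl|~blue) / p(pearl) is roughly .51.
p_notpearl_given_notblue = p_notblue_and_notpearl / (1 - p_blue)
print(float(p_notpearl_given_notblue / p_pearl))   # ~0.513
```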
Of course strong but rare evidence in one direction must be counterbalanced by common but weak evidence in the other direction.\n\n\nBecause: \np(cancer|positive)\\*p(positive) \n+ p(cancer|~positive)\\*p(~positive) \n= p(cancer)\n\n\nIn terms of the four groups:\n\n\np(cancer|positive)  = A / (A + C) \np(positive)         = A + C \np(cancer&positive)  = A \np(cancer|~positive) = B / (B + D) \np(~positive)        = B + D \np(cancer&~positive) = B \np(cancer)           = A + B\n\n\n\n\n---\n\n\nLet’s return to the original barrel of eggs – 40% of the eggs containing pearls, 30% of the pearl eggs painted blue, 10% of the empty eggs painted blue.  The graph for this problem is:\n\n\nWhat happens to the revised probability, p(pearl|blue) , if the proportion of eggs containing pearls is kept constant, but 60% of the eggs with pearls are painted blue (instead of 30%), and 20% of the empty eggs are painted blue (instead of 10%)?  You could type 60% and 20% into the inputs for the two conditional probabilities, and see how the graph changes – but can you figure out in advance what the change will look like?\n\n\n\n\n---\n\n\nIf you guessed that the revised probability *remains the same,* because the bottom bar grows by a factor of 2 but retains the same proportions, congratulations!  Take a moment to think about how far you’ve come.  Looking at a problem like\n\n\n1% of women have breast cancer.  80% of women with breast cancer get positive mammographies.  9.6% of women without breast cancer get positive mammographies.  If a woman has a positive mammography, what is the probability she has breast cancer?\n\n\nthe vast majority of respondents intuit that around 70-80% of women with positive mammographies have breast cancer.  Now, looking at a problem like\n\n\nSuppose there are two barrels containing many small plastic eggs.  In both barrels, some eggs are painted blue and the rest are painted red.  In both barrels, 40% of the eggs contain pearls and the rest are empty.  In the first barrel, 30% of the pearl eggs are painted blue, and 10% of the empty eggs are painted blue.  In the second barrel, 60% of the pearl eggs are painted blue, and 20% of the empty eggs are painted blue.  Would you rather have a blue egg from the first or second barrel?\n\n\nyou can see it’s *intuitively obvious* that the probability of a blue egg containing a pearl is the same for either barrel.  Imagine how hard it would be to see that using the old way of thinking!\n\n\n\n\n---\n\n\nIt’s intuitively obvious, but how to prove it?  Suppose that we call P the prior probability that an egg contains a pearl, that we call M the first conditional probability (that a pearl egg is painted blue), and N the second conditional probability (that an empty egg is painted blue).  Suppose that M and N are both increased or diminished by an arbitrary factor X – for example, in the problem above, they are both increased by a factor of 2.  
Does the revised probability that an egg contains a pearl, given that we know the egg is blue, stay the same?\n\n\n* p(pearl) = P\n* p(blue|pearl) = M\\*X\n* p(blue|~pearl) = N\\*X\n* p(pearl|blue) = ?\n\n\nFrom these quantities, we get the four groups:\n\n\n* Group A:  p(pearl&blue)   = P\\*M\\*X\n* Group B:  p(pearl&~blue)  = P\\*(1 – (M\\*X))\n* Group C:  p(~pearl&blue)  = (1 – P)\\*N\\*X\n* Group D:  p(~pearl&~blue) = (1 – P)\\*(1 – (N\\*X))\n\n\nThe proportion of eggs that contain pearls and are blue, within the group of all blue eggs, is then the proportion of group (A) within the group (A + C), equalling P\\*M\\*X / (P\\*M\\*X + (1 – P)\\*N\\*X) .  The factor X in the numerator and denominator cancels out, so increasing or diminishing both conditional probabilities by a constant factor doesn’t change the revised probability.\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.** **Suppose that there are two barrels, each containing a number of plastic eggs.  In both barrels, some eggs are painted blue and the rest are painted red.  In the first barrel, 90% of the eggs contain pearls and 20% of the pearl eggs are painted blue.  In the second barrel, 45% of the eggs contain pearls and 60% of the empty eggs are painted red.  Would you rather have a blue pearl egg from the first or second barrel?****A.**   Actually, it doesn’t matter which barrel you choose!  Can you see why? |\n\n\n\n\n---\n\n\n*The probability that a test gives a true positive*divided by *the probability that a test gives a false positive*is known as the *likelihood ratio* of that test.   Does the likelihood ratio of a medical test sum up everything there is to know about the usefulness of the test?\n\n\nNo, it does not!  The likelihood ratio sums up everything there is to know about the *meaning* of a *positive* result on the medical test, but the meaning of a *negative* result on the test is not specified, nor is the frequency with which the test is useful.  If we examine the algebra above, while p(pearl|blue) remains constant, p(pearl|~blue) may change – the X does *not* cancel out.  As a story problem, this strange fact would look something like this:\n\n\nSuppose that there are two barrels, each containing a number of plastic eggs.  In both barrels, 40% of the eggs contain pearls and the rest contain nothing.  In both barrels, some eggs are painted blue and the rest are painted red.  In the first barrel, 30% of the eggs with pearls are painted blue, and 10% of the empty eggs are painted blue.  In the second barrel, 90% of the eggs with pearls are painted blue, and 30% of the empty eggs are painted blue.  Would you rather have a blue egg from the first or second barrel?  Would you rather have a red egg from the first or second barrel?\n\n\nFor the first question, the answer is that we don’t care whether we get the blue egg from the first or second barrel.  For the second question, however, the probabilities *do* change – in the first barrel, 34% of the red eggs contain pearls, while in the second barrel 8.7% of the red eggs contain pearls!  Thus, we should prefer to get a red egg from the first barrel.  In the first barrel, 70% of the pearl eggs are painted red, and 90% of the empty eggs are painted red.  In the second barrel, 10% of the pearl eggs are painted red, and 70% of the empty eggs are painted red.\n\n\nCalculator:  Result:  What goes on here?  
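Before reading the explanation, you might like to check those red-egg figures yourself. A minimal sketch (the helper function below is mine, not the page's applet):

```python
def p_pearl_given_red(p_pearl, p_blue_given_pearl, p_blue_given_empty):
    """Proportion of pearl eggs among the RED eggs of a barrel."""
    red_pearl = p_pearl * (1 - p_blue_given_pearl)
    red_empty = (1 - p_pearl) * (1 - p_blue_given_empty)
    return red_pearl / (red_pearl + red_empty)

# First barrel: 40% pearls; 30% of pearl eggs and 10% of empty eggs are blue.
print(p_pearl_given_red(0.40, 0.30, 0.10))   # about 0.34
# Second barrel: same 40% pearls, but 90% and 30% painted blue.
print(p_pearl_given_red(0.40, 0.90, 0.30))   # about 0.087
```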
We start out by noting that, counter to intuition, p(pearl|blue) and p(pearl|~blue) have two degrees of freedom among them even when p(pearl) is fixed – so there’s no reason why one quantity shouldn’t change while the other remains constant.  But didn’t we just get through establishing a law for “Conservation of Probability”, which says that p(pearl|blue)\\*p(blue) + p(pearl|~blue)\\*p(~blue) = p(pearl) ?  Doesn’t this equation take up one degree of freedom?  No, because p(blue) isn’t fixed between the two problems.  In the second barrel, the proportion of blue eggs containing pearls is the same as in the first barrel, but a much larger fraction of eggs are painted blue!  This alters the set of *red* eggs in such a way that the proportions *do* change.  Here’s a graph for the red eggs in the second barrel:\n\n\n\n\n---\n\n\nLet’s return to the example of a medical test.  The likelihood ratio of a medical test – the number of true positives divided by the number of false positives – tells us everything there is to know about the *meaning* of a *positive* result.  But it doesn’t tell us the meaning of a negative result, and it doesn’t tell us how often the test is useful.  For example, a mammography with a hit rate of 80% for patients with breast cancer and a false positive rate of 9.6% for healthy patients has the same likelihood ratio as a test with an 8% hit rate and a false positive rate of 0.96%.  Although these two tests have the same likelihood ratio, the first test is more useful in every way – it detects disease more often, and a negative result is stronger evidence of health.\n\n\nThe likelihood ratio for a positive result summarizes the differential pressure of the two conditional probabilities for a positive result, and thus summarizes how much a positive result will slide the prior probability.  Take a probability graph, like this one:\n\n\nThe likelihood ratio of the mammography is what determines the slant of the line.  If the prior probability is 1%, then knowing only the likelihood ratio is enough to determine the posterior probability after a positive result.\n\n\nBut, as you can see from the frequency graph, the likelihood ratio doesn’t tell the whole story – in the frequency graph, the *proportions* of the bottom bar can stay fixed while the *size* of the bottom bar changes.  p(blue) increases but p(pearl|blue) doesn’t change, because p(pearl&blue) and p(~pearl&blue) increase by the same factor.  But when you flip the graph to look at p(~blue) , the proportions of p(pearl&~blue) and p(~pearl&~blue) do *not* remain constant.\n\n\nOf course the likelihood ratio *can’t* tell the whole story; the likelihood ratio and the prior probability together are only two numbers, while the problem has three degrees of freedom.\n\n\n\n\n---\n\n\nSuppose that you apply *two* tests for breast cancer in succession – say, a standard mammography and also some other test which is independent of mammography.  Since I don’t know of any such test which is *independent* of mammography, I’ll invent one for the purpose of this problem, and call it the Tams-Braylor Division Test, which checks to see if any cells are dividing more rapidly than other cells.  We’ll suppose that the Tams-Braylor gives a true positive for 90% of patients with breast cancer, and gives a false positive for 5% of patients without cancer.  Let’s say the prior prevalence of breast cancer is 1%.  
If a patient gets a positive result on her mammography *and* her Tams-Braylor, what is the revised probability she has breast cancer?\n\n\nOne way to solve this problem would be to take the revised probability for a positive mammography, which we already calculated as 7.8%, and plug that into the Tams-Braylor test as the new prior probability.  If we do this, we find that the result comes out to 60%.\n\n\nCalculator:  Result:  But this assumes that first we see the positive mammography result, and then the positive result on the Tams-Braylor.  What if first the woman gets a positive result on the Tams-Braylor, followed by a positive result on her mammography.  Intuitively, it seems like it shouldn’t matter.  Does the math check out?\n\n\nFirst we’ll administer the Tams-Braylor to a woman with a 1% prior probability of breast cancer. \n\n\nCalculator:  Result:  Then we administer a mammography, which gives 80% true positives and 9.6% false positives, and it also comes out positive.\n\n\nCalculator:  Result:  Lo and behold, the answer is again 60%.  (If it’s not exactly the same, it’s due to rounding error – you can get a more precise calculator, or work out the fractions by hand, and the numbers will be exactly equal.)\n\n\nAn algebraic proof that both strategies are equivalent is left to the reader.  To visualize, imagine that the lower bar of the frequency applet for mammography projects an even lower bar using the probabilities of the Tams-Braylor Test, and that the final lowest bar is the same regardless of the order in which the conditional probabilities are projected.\n\n\n\n\n---\n\n\nWe might also reason that since the two tests are independent, the probability a woman with breast cancer gets a positive mammography *and* a positive Tams-Braylor is 90% \\* 80% = 72%.  And the probability that a woman without breast cancer gets false positives on mammography and Tams-Braylor is 5% \\* 9.6% = 0.48%.  So if we wrap it all up as a single test with a likelihood ratio of 72%/0.48%, and apply it to a woman with a 1% prior probability of breast cancer:\n\n\nCalculator:  Result:  …we find once again that the answer is 60%.\n\n\nSuppose that the prior prevalence of breast cancer in a demographic is 1%.  Suppose that we, as doctors, have a repertoire of three independent tests for breast cancer.  Our first test, test A, a mammography, has a likelihood ratio of 80%/9.6% = 8.33.  The second test, test B, has a likelihood ratio of 18.0 (for example, from 90% versus 5%); and the third test, test C, has a likelihood ratio of 3.5 (which could be from 70% versus 20%, or from 35% versus 10%; it makes no difference).  Suppose a patient gets a positive result on all three tests.  What is the probability the patient has breast cancer?\n\n\nHere’s a fun trick for simplifying the bookkeeping.  If the prior prevalence of breast cancer in a demographic is 1%, then 1 out of 100 women have breast cancer, and 99 out of 100 women do not have breast cancer.  
So if we rewrite the *probability* of 1% as an *odds ratio,* the odds are:\n\n\n1:99\n\n\nAnd the likelihood ratios of the three tests A, B, and C are:\n\n\n8.33:1 = 25:3 \n18.0:1 = 18:1 \n 3.5:1 =  7:2\n\n\nThe *odds* for women with breast cancer who score positive on all three tests, versus women without breast cancer who score positive on all three tests, will equal:\n\n\n1\\*25\\*18\\*7:99\\*3\\*1\\*2 = \n3,150:594\n\n\nTo recover the probability from the odds, we just write: \n3,150 / (3,150 + 594) = 84%\n\n\nThis always works regardless of how the odds ratios are written; i.e., 8.33:1 is just the same as 25:3 or 75:9.  It doesn’t matter in what order the tests are administered, or in what order the results are computed.  The proof is left as an exercise for the reader.\n\n\n\n\n---\n\n\nE. T. Jaynes, in “Probability Theory With Applications in Science and Engineering”, suggests that credibility and evidence should be measured in decibels.\n\n\nDecibels?\n\n\nDecibels are used for measuring exponential differences of intensity.  For example, if the sound from an automobile horn carries 10,000 times as much energy (per square meter per second) as the sound from an alarm clock, the automobile horn would be 40 decibels louder.  The sound of a bird singing might carry 1,000 times less energy than an alarm clock, and hence would be 30 decibels softer.  To get the number of decibels, you take the logarithm base 10 and multiply by 10.\n\n\ndecibels = 10 log10(intensity) \n*or* \nintensity = 10^(decibels/10)\n\n\nSuppose we start with a prior probability of 1% that a woman has breast cancer, corresponding to an odds ratio of 1:99.  And then we administer three tests of likelihood ratios 25:3, 18:1, and 7:2.  You *could* multiply those numbers… or you could just add their logarithms:\n\n\n10 log10(1/99) = -20 \n10 log10(25/3) = 9 \n10 log10(18/1) = 13 \n10 log10(7/2)  = 5\n\n\nIt starts out as fairly unlikely that a woman has breast cancer – our credibility level is at -20 decibels.  Then three test results come in, corresponding to 9, 13, and 5 decibels of evidence.  This raises the credibility level by a total of 27 decibels, meaning that the prior credibility of -20 decibels goes to a posterior credibility of 7 decibels.  So the odds go from 1:99 to 5:1, and the probability goes from 1% to around 83%.\n\n\n\n\n---\n\n\nIn front of you is a bookbag containing 1,000 poker chips.  I started out with two such bookbags, one containing 700 red and 300 blue chips, the other containing 300 red and 700 blue.  I flipped a fair coin to determine which bookbag to use, so your prior probability that the bookbag in front of you is the red bookbag is 50%.  Now, you sample randomly, with replacement after each chip.  In 12 samples, you get 8 reds and 4 blues.  What is the probability that this is the predominantly red bag?\n\n\nJust for fun, try and work this one out in your head.  You don’t need to be exact – a rough estimate is good enough.  When you’re ready, continue onward.\n\n\n\n\n---\n\n\nAccording to a study performed by Lawrence Phillips and Ward Edwards in 1966, most people, faced with this problem, give an answer in the range 70% to 80%.  Did you give a substantially higher probability than that?  If you did, congratulations – Ward Edwards wrote that very seldom does a person answer this question properly, even if the person is relatively familiar with Bayesian reasoning.  
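If you want to check your estimate exactly before reading the answer, the odds-ratio bookkeeping from the previous section does it in a few lines. This is a rough sketch; `posterior_odds` and `odds_to_probability` are helper names of my own, not anything from the text:

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each test's likelihood ratio (order doesn't matter)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# The three medical tests above: prior odds 1:99, likelihood ratios 25:3, 18:1, 7:2.
odds = posterior_odds(1/99, [25/3, 18/1, 7/2])
print(odds_to_probability(odds))          # about 0.84

# The bookbag: prior odds 1:1; each red chip is 7:3 evidence, each blue chip 3:7.
chips = ['red'] * 8 + ['blue'] * 4
odds = posterior_odds(1.0, [7/3 if c == 'red' else 3/7 for c in chips])
print(odds_to_probability(odds))          # about 0.967

# The same evidence in decibels: 10*log10 of each likelihood ratio.
print(10 * math.log10(7/3))               # about 3.68 decibels per extra red chip
```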
The correct answer is 97%.\n\n\nThe likelihood ratio for the test result “red chip” is 7/3, while the likelihood ratio for the test result “blue chip” is 3/7.  Therefore a blue chip is exactly the same amount of evidence as a red chip, just in the other direction – a red chip is 3.6 decibels of evidence for the red bag, and a blue chip is -3.6 decibels of evidence.  If you draw one blue chip and one red chip, they cancel out.  So the *ratio* of red chips to blue chips does not matter; only the *excess* of red chips over blue chips matters.  There were eight red chips and four blue chips in twelve samples; therefore, four *more* red chips than blue chips.  Thus the posterior odds will be:\n\n\n7^4:3^4 = 2401:81 \nwhich is around 30:1, i.e., around 97%.\n\n\nThe prior credibility starts at 0 decibels and there’s a total of around 14 decibels of evidence, and indeed this corresponds to odds of around 25:1 or around 96%.  Again, there’s some rounding error, but if you performed the operations using exact arithmetic, the results would be identical.\n\n\nWe can now see *intuitively* that the bookbag problem would have exactly the same answer, obtained in just the same way, if sixteen chips were sampled and we found ten red chips and six blue chips.\n\n\n\n\n---\n\n\nYou are a mechanic for gizmos.  When a gizmo stops working, it is due to a blocked hose 30% of the time.  If a gizmo’s hose is blocked, there is a 45% probability that prodding the gizmo will produce sparks.  If a gizmo’s hose is unblocked, there is only a 5% chance that prodding the gizmo will produce sparks.  A customer brings you a malfunctioning gizmo.  You prod the gizmo and find that it produces sparks.  What is the probability that a spark-producing gizmo has a blocked hose?\n\n\nWhat is the sequence of arithmetical operations that you performed to solve this problem?\n\n\n(45%\\*30%) / (45%\\*30% + 5%\\*70%)\n\n\nSimilarly, to find the chance that a woman with positive mammography has breast cancer, we computed:\n\n\np(positive|cancer)\\*p(cancer) \n\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_ \np(positive|cancer)\\*p(cancer) + p(positive|~cancer)\\*p(~cancer)\n\n\n*which is* \np(positive&cancer) / [p(positive&cancer) + p(positive&~cancer)] \n*which is* \np(positive&cancer) / p(positive) \n*which is* \np(cancer|positive)\n\n\nThe fully general form of this calculation is known as *Bayes’ Theorem* or *Bayes’ Rule:*\n\n\n\n\n| |\n| --- |\n| Bayes' Theorem: |\n| p(A|X) = |        p(X|A)\\* p(A)          p(X|A)\\* p(A) + p(X|~A)\\* p(~A) |\n\n\nGiven some phenomenon A that we want to investigate, and an observation X that is evidence about A – for example, in the previous example, A is breast cancer and X is a positive mammography – Bayes’ Theorem tells us how we should *update* our probability of A, given the *new evidence* X.\n\n\nBy this point, Bayes’ Theorem may seem blatantly obvious or even tautological, rather than exciting and new.  If so, this introduction has *entirely succeeded* in its purpose.\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.  Who originally discovered Bayes’ Theorem?** **A.** The Reverend Thomas Bayes, by far the most enigmatic figure in mathematical history.  Almost nothing is known of Bayes’s life, and very few of his manuscripts survived.  Thomas Bayes was born in 1701 or 1702 to Joshua Bayes and Ann Carpenter, and his date of death is listed as 1761.  
The exact date of Thomas Bayes’s birth is not known for certain because Joshua Bayes, though a surprisingly wealthy man, was a member of an unusual, esoteric, and even heretical religious sect, the “Nonconformists”.  The Nonconformists kept their birth registers secret, supposedly from fear of religious discrimination; whatever the reason, no true record exists of Thomas Bayes’s birth.  Thomas Bayes was raised a Nonconformist and was soon promoted into the higher ranks of the Nonconformist theosophers, whence comes the “Reverend” in his name.In 1742 Bayes was elected a Fellow of the Royal Society of London, the most prestigious scientific body of its day, despite Bayes having published no scientific or mathematical works at that time.  Bayes’s nomination certificate was signed by sponsors including the President and the Secretary of the Society, making his election almost certain.  Even today, however, it remains a mystery *why* such weighty names sponsored an unknown into the Royal Society.Bayes’s sole publication during his known lifetime was allegedly a mystical book entitled *Divine Benevolence,*laying forth the original causation and ultimate purpose of the universe.  The book is commonly attributed to Bayes, though it is said that no author appeared on the title page, and the entire work is sometimes considered to be of dubious provenance.Most mysterious of all, Bayes’ Theorem itself appears in a Bayes manuscript presented to the Royal Society of London in 1764, *three years after Bayes’s supposed death in 1761!*Despite the shocking circumstances of its presentation, Bayes’ Theorem was soon forgotten, and was popularized within the scientific community only by the later efforts of the great mathematician Pierre-Simon Laplace.  Laplace himself is almost as enigmatic as Bayes; we don’t even know whether it was “Pierre” or “Simon” that was his actual first name.  Laplace’s papers are said to have contained a design for an AI capable of predicting all future events, the so-called “Laplacian superintelligence”.  While it is generally believed that Laplace never tried to implement his design, there remains the fact that Laplace presciently fled the guillotine that claimed many of his colleagues during the Reign of Terror.  Even today, physicists sometimes attribute unusual effects to a “Laplacian Operator” intervening in their experiments.In summary, we do not know the real circumstances of Bayes’s birth, the ultimate origins of Bayes’ Theorem, Bayes’s actual year of death, or even whether Bayes ever really died.  Nonetheless “Reverend Thomas Bayes”, whatever his true identity, has the greatest fondness and gratitude of Earth’s scientific community. |\n\n\n\n\n---\n\n\nSo why is it that some people are so *excited* about Bayes’ Theorem?\n\n\n“Do you believe that a nuclear war will occur in the next 20 years?  If no, why not?”  Since I wanted to use some common answers to this question to make a point about rationality, I went ahead and asked the above question in an IRC channel, #philosophy on EFNet.\n\n\nOne EFNetter who answered replied “No” to the above question, but added that he believed biological warfare would wipe out “99.4%” of humanity within the next ten years.  I then asked whether he believed 100% was a possibility.  “No,” he said.  “Why not?”, I asked.  “Because I’m an optimist,” he said.  (Roanoke of #philosophy on EFNet wishes to be credited with this statement, even having been warned that it will not be cast in a complimentary light.  Good for him!)  
Another person who answered the above question said that he didn’t expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.”  “But why extend that out for 100 years?”, I asked.  “Pure hope,” was his reply.\n\n\nWhat is it *exactly* that makes these thoughts “irrational” – a poor way of arriving at truth?  There are a number of intuitive replies that can be given to this; for example:  “It is not rational to believe things only because they are comforting.”  Of course it is equally irrational to believe things only because they are *discomforting;* the second error is less common, but equally irrational.  Other intuitive arguments include the idea that “Whether or not you happen to be an optimist has nothing to do with whether biological warfare wipes out the human species”, or “Pure hope is not evidence about nuclear war because it is not an observation about nuclear war.”\n\n\nThere is also a mathematical reply that is precise, exact, and contains all the intuitions as special cases.  This mathematical reply is known as Bayes’ Theorem.\n\n\nFor example, the reply “Whether or not you happen to be an optimist has nothing to do with whether biological warfare wipes out the human species” can be translated into the statement:\n\n\np(you are currently an optimist | biological war occurs within ten years and wipes out humanity) = \np(you are currently an optimist | biological war occurs within ten years and does not wipe out humanity)\n\n\nSince the two probabilities for p(X|A) and p(X|~A) are equal, Bayes’ Theorem says that p(A|X) = p(A) ; as we have earlier seen, when the two conditional probabilities are equal, the revised probability equals the prior probability.  If X and A are unconnected – statistically independent – then finding that X is true cannot be evidence that A is true; observing X does not update our probability for A; saying “X” is not an argument for A.\n\n\nBut suppose you are arguing with someone who is verbally clever and who says something like, “Ah, but since I’m an optimist, I’ll have renewed hope for tomorrow, work a little harder at my dead-end job, pump up the global economy a little, eventually, through the trickle-down effect, sending a few dollars into the pocket of the researcher who ultimately finds a way to stop biological warfare – so you see, the two events are related after all, and I can use one as valid evidence about the other.”  In one sense, this is correct – *any* correlation, no matter how weak, is fair prey for Bayes’ Theorem; *but* Bayes’ Theorem distinguishes between weak and strong evidence.  That is, Bayes’ Theorem not only tells us what is and isn’t evidence, it also describes the *strength* of evidence.  Bayes’ Theorem not only tells us *when* to revise our probabilities, but *how much* to revise our probabilities.  A correlation between hope and biological warfare may exist, but it’s a lot weaker than the speaker wants it to be; he is revising his probabilities much too far.\n\n\nLet’s say you’re a woman who’s just undergone a mammography.  Previously, you figured that you had a very small chance of having breast cancer; we’ll suppose that you read the statistics somewhere and so you know the chance is 1%.  When the positive mammography comes in, your estimated chance should now shift to 7.8%.  There is no room to say something like, “Oh, well, a positive mammography isn’t definite evidence, some healthy women get positive mammographies too.  
I don’t want to despair too early, and I’m not going to revise my probability until more evidence comes in.  Why?  Because I’m a optimist.”  And there is similarly no room for saying, “Well, a positive mammography may not be definite evidence, but I’m going to assume the worst until I find otherwise.  Why?  Because I’m a pessimist.”  Your revised probability should go to 7.8%, no more, no less.\n\n\nBayes’ Theorem describes what makes something “evidence” and how much evidence it is.  Statistical models are judged by comparison to the *Bayesian method* because, in statistics, the Bayesian method is as good as it gets – the Bayesian method defines the maximum amount of mileage you can get out of a given piece of evidence, in the same way that thermodynamics defines the maximum amount of work you can get out of a temperature differential.  This is why you hear cognitive scientists talking about *Bayesian reasoners* .  In cognitive science, *Bayesian reasoner* is the technically precise codeword that we use to mean *rational mind.*\n\n\nThere are also a number of general heuristics about human reasoning that you can learn from looking at Bayes’ Theorem.\n\n\nFor example, in many discussions of Bayes’ Theorem, you may hear cognitive psychologists saying that people *do not take prior frequencies sufficiently into account,*meaning that when people approach a problem where there’s some evidence X indicating that condition A might hold true, they tend to judge A’s likelihood solely by how well the evidence X seems to match A, without taking into account the prior frequency of A.  If you think, for example, that under the mammography example, the woman’s chance of having breast cancer is in the range of 70%-80%, then this kind of reasoning is insensitive to the prior frequency given in the problem; it doesn’t notice whether 1% of women or 10% of women start out having breast cancer.  “Pay more attention to the prior frequency!” is one of the many things that humans need to bear in mind to partially compensate for our built-in inadequacies.\n\n\nA related error is to pay too much attention to p(X|A) and not enough to p(X|~A) when determining how much evidence X is for A.  The degree to which a result X is *evidence for A* depends, not only on the strength of the statement *we’d expect to see result X if A were true,*but also on the strength of the statement *we **wouldn’t** expect to see result X if A weren’t true.*  For example, if it is raining, this very strongly implies the grass is wet – p(wetgrass|rain) ~ 1 – but seeing that the grass is wet doesn’t necessarily mean that it has just rained; perhaps the sprinkler was turned on, or you’re looking at the early morning dew.  Since p(wetgrass|~rain) is substantially greater than zero, p(rain|wetgrass) is substantially less than one.  On the other hand, if the grass was *never* wet when it wasn’t raining, then knowing that the grass was wet would *always* show that it was raining, p(rain|wetgrass) ~ 1 , even if p(wetgrass|rain) = 50% ; that is, even if the grass only got wet 50% of the times it rained.  Evidence is always the result of the *differential* between the two conditional probabilities.  *Strong* evidence is not the product of a very high probability that A leads to X, but the product of a very *low* probability that *not-A* could have led to X. 
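To make the “differential” point concrete, here is a small sketch; the 20% prior for rain and the 30% sprinkler-and-dew figure are invented purely for illustration, not taken from anywhere:

```python
def posterior(prior, p_x_given_a, p_x_given_not_a):
    """Bayes' Theorem: revised probability of A after observing X."""
    numerator = p_x_given_a * prior
    return numerator / (numerator + p_x_given_not_a * (1 - prior))

prior_rain = 0.20   # assumed prior, purely for illustration

# Grass gets wet on only half the mornings it rains, but never otherwise:
print(posterior(prior_rain, 0.5, 0.0))    # 1.0 -- wet grass proves rain

# Grass is always wet when it rains, but sprinklers and dew also wet it
# on 30% of the rain-free mornings (another made-up number):
print(posterior(prior_rain, 1.0, 0.3))    # about 0.45

# And when the two conditional probabilities are equal, the "evidence"
# is no evidence at all: the posterior equals the prior.
print(posterior(prior_rain, 0.5, 0.5))    # 0.2
```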
\n \nThe *Bayesian revolution in the sciences*is fueled, not only by more and more cognitive scientists suddenly noticing that mental phenomena have Bayesian structure in them; not only by scientists in every field learning to judge their statistical methods by comparison with the Bayesian method; but also by the idea that *science itself is a special case of Bayes’ Theorem; experimental evidence is Bayesian evidence.*  The Bayesian revolutionaries hold that when you perform an experiment and get evidence that “confirms” or “disconfirms” your theory, this confirmation and disconfirmation is governed by the Bayesian rules.  For example, you have to take into account, not only whether your theory predicts the phenomenon, but whether other possible explanations also predict the phenomenon.  Previously, the most popular philosophy of science was probably Karl Popper’s *falsificationism* – this is the old philosophy that the Bayesian revolution is currently dethroning.  Karl Popper’s idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules; if p(X|A) ~ 1 – if the theory makes a definite prediction – then observing ~X very strongly falsifies A.  On the other hand, if p(X|A) ~ 1 ,  and we observe X, this doesn’t definitely confirm the theory; there might be some other condition B such that p(X|B) ~ 1 , in which case observing X doesn’t favor A over B.  For observing X to definitely confirm A, we would have to know, not that p(X|A) ~ 1 , but that p(X|~A) ~ 0 , which is something that we can’t know because we can’t range over all possible alternative explanations.  For example, when Einstein’s theory of General Relativity toppled Newton’s incredibly well-confirmed theory of gravity, it turned out that all of Newton’s predictions were just a special case of Einstein’s predictions.\n\n\nYou can even formalize Popper’s philosophy mathematically.  The likelihood ratio for X, p(X|A)/p(X|~A) , determines how much observing X slides the probability for A; the likelihood ratio is what says *how strong*X is as evidence.  Well, in your theory A, you can predict X with probability 1, if you like; but you can’t control the denominator of the likelihood ratio, p(X|~A) – there will always be some alternative theories that also predict X, and while we go with the simplest theory that fits the current evidence, you may someday encounter some evidence that an alternative theory predicts but your theory does not.  That’s the hidden gotcha that toppled Newton’s theory of gravity.  So there’s a limit on how much mileage you can get from successful predictions; there’s a limit on how high the likelihood ratio goes for *confirmatory* evidence.\n\n\nOn the other hand, if you encounter some piece of evidence Y that is definitely *not* predicted by your theory, this is *enormously* strong evidence *against* your theory.  If p(Y|A) is infinitesimal, then the likelihood ratio will also be infinitesimal.  For example, if p(Y|A) is 0.0001%, and p(Y|~A) is 1%, then the likelihood ratio p(Y|A)/p(Y|~A) will be 1:10000.  -40 decibels of evidence!  Or flipping the likelihood ratio, if p(Y|A) is *very small,* then p(Y|~A)/p(Y|A) will be *very large,* meaning that observing Y greatly favors ~A over A.  Falsification is much stronger than confirmation.  This is a consequence of the earlier point that *very strong* evidence is not the product of a very high probability that A leads to X, but the product of a very *low* probability that *not-A* could have led to X. 
 This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.\n\n\nSimilarly, Popper’s dictum that an idea must be falsifiable can be interpreted as a manifestation of the Bayesian conservation-of-probability rule; if a result X is positive evidence for the theory, then the result ~X would have disconfirmed the theory to some extent.  If you try to interpret both X and ~X as “confirming” the theory, the Bayesian rules say this is impossible!  To increase the probability of a theory you *must* expose it to tests that can potentially decrease its probability; this is not just a rule for detecting would-be cheaters in the social process of science, but a consequence of Bayesian probability theory.  On the other hand, Popper’s idea that there is *only* falsification and *no such thing* as confirmation turns out to be incorrect.  Bayes’ Theorem shows that falsification is *very strong* evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued.\n\n\nSo we find that many phenomena in the cognitive sciences, plus the statistical methods used by scientists, plus the scientific method itself, are all turning out to be special cases of Bayes’ Theorem.  Hence the Bayesian revolution.\n\n\n\n\n---\n\n\n\n\n| | |\n| --- | --- |\n| **FunFact!** | **Q.  Are there any limits to the power of Bayes’ Theorem?****A.** According to legend, one who fully grasped Bayes’ Theorem would gain the ability to create and physically enter an alternate universe using only off-the-shelf equipment and a short computer program.  One who fully grasps Bayes’ Theorem, yet remains in our universe to aid others, is known as a Bayesattva. |\n\n\n\n\n---\n\n\n\n\n| |\n| --- |\n| Bayes' Theorem: |\n| p(A|X) = |        p(X|A)\\* p(A)          p(X|A)\\* p(A) + p(X|~A)\\* p(~A) |\n\n\nWhy wait so long to introduce Bayes’ Theorem, instead of just showing it at the beginning?  Well… because I’ve tried that before; and what happens, in my experience, is that people get all tangled up in trying to apply Bayes’ Theorem as a set of *poorly grounded mental rules;* instead of the Theorem helping, it becomes *one more thing to juggle mentally,* so that in addition to trying to remember how many women with breast cancer have positive mammographies, the reader is also trying to remember whether it’s p(X|A) in the numerator or p(A|X) , and whether a positive mammography result corresponds to A or X, and which side of p(X|A) is the implication, and what the terms are in the denominator, and so on.  In this excruciatingly gentle introduction, I tried to show all the workings of Bayesian reasoning *without* ever introducing the explicit Theorem as something extra to memorize, hopefully reducing the number of factors the reader needed to mentally juggle.\n\n\nEven if you happen to be one of the fortunate people who can easily grasp and apply abstract theorems, the mental-juggling problem is still something to bear in mind if you ever need to explain Bayesian reasoning to someone else.\n\n\nIf you do find yourself losing track, my advice is to forget Bayes’ Theorem as an *equation* and think about the *graph.*   p(A) and p(~A) are at the top.  p(X|A) and p(X|~A) are the projection factors.  p(X&A) and p(X&~A) are at the bottom.  And p(A|X) equals the proportion of p(X&A) within p(X&A)+p(X&~A).  
The graph isn’t shown here – but can you see it in your mind?\n\n\nAnd if thinking about the graph doesn’t work, I suggest forgetting about Bayes’ Theorem entirely – just try to work out the specific problem in gizmos, hoses, and sparks, or whatever it is.\n\n\n\n\n---\n\n\nHaving introduced Bayes’ Theorem explicitly, we can explicitly discuss its components.\n\n\n\n\n| | |\n| --- | --- |\n| p(A|X) = |        p(X|A)\\* p(A)          p(X|A)\\* p(A) + p(X|~A)\\* p(~A) |\n\n\nWe’ll start with p(A|X).  If you ever find yourself getting confused about what’s A and what’s X in Bayes’ Theorem, start with p(A|X) on the left side of the equation; that’s the simplest part to interpret.  A is the thing we want to know about.  X is how we’re observing it; X is the evidence we’re using to make inferences about A.  Remember that for every expression p(Q|P), we want to know about the probability for Q given P, the degree to which P implies Q – a more sensible notation, which it is now too late to adopt, would be p(Q<-P) .\n\n\np(Q|P) is closely related to p(Q&P), but they are not identical.  Expressed as a probability or a fraction, p(Q&P) is the proportion of things that have property Q and property P within *all things;* i.e., the proportion of “women with breast cancer and a positive mammography” within the group of *all women.*   If the total number of women is 10,000, and 80 women have breast cancer and a positive mammography, then p(Q&P) is 80/10,000 = 0.8%.  You might say that the absolute quantity, 80, is being normalized to a probability relative to the *group of all women.*  Or to make it clearer, suppose that there’s a group of 641 women with breast cancer and a positive mammography within a total sample group of 89,031 women.  641 is the absolute quantity.  If you pick out a random woman from the *entire sample,* then the *probability* you’ll pick a woman with breast cancer and a positive mammography is p(Q&P), or 0.72% (in this example).\n\n\nOn the other hand, p(Q|P) is the proportion of things that have property Q and property P within *all things that have P;*i.e., the proportion of women with breast cancer and a positive mammography within the group of *all women with positive mammographies.*   If there are 641 women with breast cancer and positive mammographies, 7915 women with positive mammographies, and 89,031 women, then p(Q&P) is the probability of getting one of those 641 women if you’re picking at random from the entire group of 89,031, while p(Q|P) is the probability of getting one of those 641 women if you’re picking at random from the smaller group of 7915.\n\n\nIn a sense, p(Q|P) really means p(Q&P|P) , but specifying the extra P all the time would be redundant.  You already *know* it has property P, so the property you’re *investigating* is Q – even though you’re looking at the size of group Q&P within group P, not the size of group Q within group P (which would be nonsense).  This is what it means to take the property on the right-hand side as *given;* it means you know you’re working only within the group of things that have property P.  When you constrict your focus of attention to see only this smaller group, many other probabilities change.  If you’re taking P as *given,* then p(Q&P) equals just p(Q) – at least, *relative to the group P.*   The *old* p(Q), the frequency of “things that have property Q within the entire sample”, is revised to the new frequency of “things that have property Q within the subsample of things that have property P”.  
If P is *given,* if P is our entire world, then looking for Q&P is the same as looking for just Q.\n\n\nIf you constrict your focus of attention to only the population of eggs that are painted blue, then suddenly “the probability that an egg contains a pearl” becomes a different number; this proportion is different for the population of blue eggs than the population of all eggs.  The *given,* the property that constricts our focus of attention, is always on the *right* side of p(Q|P); the P becomes our world, the entire thing we see, and on the other side of the “given” P always has probability 1 – that is what it means to take P as given.  So p(Q|P) means “If P has probability 1, what is the probability of Q?” or “If we constrict our attention to only things or events where P is true, what is the probability of Q?”  Q, on the other side of the given, is *not* certain – its probability may be 10% or 90% or any other number.  So when you use Bayes’ Theorem, and you write the part on the left side as p(A|X) – how to *update* the probability of A after seeing X, the new probability of A *given* that we know X, the degree to which X *implies* A – you can tell that X is always the *observation* or the *evidence,* and A is the property being investigated, the thing you want to know about.\n\n\n\n\n---\n\n\nThe right side of Bayes’ Theorem is derived from the left side through these steps:\n\n\n\n\n| | |\n| --- | --- |\n| p(A|X) = | p(A|X) |\n| p(A|X) = | p(X&A) / p(X) |\n| p(A|X) = | p(X&A) / [p(X&A) + p(X&~A)] |\n| p(A|X) = | p(X|A)\\*p(A) / [p(X|A)\\*p(A) + p(X|~A)\\*p(~A)] |\n\n\nThe first step, p(A|X) to p(X&A)/p(X) , may look like a tautology.  The actual math performed is different, though.  p(A|X) is a single number, the normalized probability or frequency of A within the subgroup X.  p(X&A)/p(X) are usually the percentage frequencies of X&A and X within the entire sample, but the calculation also works if X&A and X are absolute numbers of people, events, or things.  p(cancer|positive) is a single percentage/frequency/probability, always between 0 and 1.  (positive&cancer)/(positive) can be measured either in probabilities, such as 0.008/0.103, or it might be expressed in groups of women, for example 194/2494.  As long as both the numerator and denominator are measured in the same units, it should make no difference.\n\n\nGoing from p(X) in the denominator to p(X&A)+p(X&~A) is a very straightforward step whose main purpose is as a stepping stone to the last equation.  However, one common arithmetical mistake in Bayesian calculations is to divide p(X&A) by p(X&~A) , instead of dividing p(X&A) by [p(X&A) + p(X&~A)] .  For example, someone doing the breast cancer calculation tries to get the posterior probability by performing the math operation 80 / 950, instead of 80 / (80 + 950).  I like to think of this as a rose-flowers error.  Sometimes if you show young children a picture with eight roses and two tulips, they’ll say that the picture contains more roses than flowers.  (Technically, this would be called a class inclusion error.)  You have to *add* the roses and the tulips to get the number of *flowers* , which you need to find the proportion of roses *within* the flowers.  You can’t find the proportion of roses in the tulips, or the proportion of tulips in the roses.  When you look at the graph, the bottom bar consists of *all* the patients with positive results.  That’s what the doctor sees – a patient with a positive result.  
The question then becomes whether this is a healthy patient with a positive result, or a cancerous patient with a positive result.  To figure the odds of that, you have to look at the proportion of cancerous patients with positive results within all patients who have positive results, because again, “a patient with a positive result” is what you actually see.  You can’t divide 80 by 950 because that would mean you were trying to find the proportion of cancerous patients with positive results within the group of healthy patients with positive results; it’s like asking how many of the tulips are roses, instead of asking how many of the flowers are roses.  Imagine using the same method to find the proportion of *healthy* patients.  You would divide 950 by 80 and find that 1,187% of the patients were healthy.  Or to be exact, you would find that 1,187% of cancerous patients with positive results were healthy patients with positive results.\n\n\nThe last step in deriving Bayes’ Theorem is going from p(X&A) to p(X|A)\\*p(A) , in both the numerator and the denominator, and from p(X&~A) to p(X|~A)\\*p(~A) , in the denominator.\n\n\nWhy?  Well, one answer is because p(X|A), p(X|~A), and p(A) correspond to the initial information given in all the story problems.  But why were the story problems written that way?\n\n\nBecause in many cases, p(X|A), p(X|~A), and p(A) are what we actually *know;* and this in turn happens because p(X|A) and p(X|~A) are often the quantities that directly describe *causal relations,* with the other quantities derived from them and p(A) as *statistical relations.*   For example, p(X|A), the implication from A to X, where A is what we want to know and X is our way of observing it, corresponds to the implication from a woman having breast cancer to a positive mammography.  This is not just a *statistical implication* but a *direct**causal relation;* a woman gets a positive mammography *because* she has breast cancer.  The mammography is *designed* to detect breast cancer, and it is a fact about the physical process of the mammography exam that it has an 80% probability of detecting breast cancer.  As long as the design of the mammography machine stays constant, p(X|A) will stay at 80%, even if p(A) changes – for example, if we screen a group of woman with other risk factors, so that the prior frequency of women with breast cancer is 10% instead of 1%.  In this case, p(X&A) will change along with p(A), and so will p(X), p(A|X), and so on; but p(X|A) stays at 80%, because that’s a fact about the mammography exam itself.  (Though you do need to test this statement before relying on it; it’s possible that the mammography exam might work better on some forms of breast cancer than others.)  p(X|A) is one of the *simple* facts from which complex facts like p(X&A) are constructed; p(X|A) is an *elementary* causal relation within a complex system, and it has a direct physical interpretation.  This is why Bayes’ Theorem has the form it does; it’s not for solving math brainteasers, but for reasoning about the physical universe.\n\n\nOnce the derivation is finished, all the implications on the right side of the equation are of the form p(X|A) or p(X|~A) , while the implication on the left side is p(A|X) .  
As long as you remember this and you get the rest of the equation right, it shouldn’t matter whether you happened to start out with p(A|X) or p(X|A) on the left side of the equation, as long as the rules are applied *consistently* – if you started out with the direction of implication p(X|A) on the left side of the equation, you would need to end up with the direction p(A|X) on the right side of the equation.  This, of course, is just changing the variable labels; the point is to remember the symmetry, in order to remember the structure of Bayes’ Theorem.\n\n\nThe symmetry arises because the elementary *causal relations* are generally implications from facts to observations, i.e., from breast cancer to positive mammography.  The elementary *steps in reasoning* are generally implications from observations to facts, i.e., from a positive mammography to breast cancer.  The left side of Bayes’ Theorem is an elementary *inferential* step from the observation of positive mammography to the conclusion of an increased probability of breast cancer.  Implication is written right-to-left, so we write p(cancer|positive) on the left side of the equation.  The right side of Bayes’ Theorem describes the elementary *causal* steps – for example, from breast cancer to a positive mammography – and so the implications on the right side of Bayes’ Theorem take the form p(positive|cancer) or p(positive|~cancer) .\n\n\nAnd that’s Bayes’ Theorem.  Rational inference on the left end, physical causality on the right end; an equation with mind on one side and reality on the other.  Remember how the scientific method turned out to be a special case of Bayes’ Theorem?  If you wanted to put it poetically, you could say that Bayes’ Theorem binds reasoning into the physical universe.\n\n\nOkay, we’re done.\n\n\n\n\n---\n\n\n\n\n| |\n| --- |\n| **Reverend Bayes says:** |\n| |\n| **You are now an initiateof the Bayesian Conspiracy.** |\n\n\n* [Digg](http://digg.com/submit?phase=2&topic=educational&url=http://eyudkowsky.wpengine.com/rational/bayes/&title=Bayes%27%20Theorem&bodytext=Bayes%27%20Theorem%20for%20the%20curious%20and%20bewildered;%20an%20excruciatingly%20gentle%20introduction.)\n* [Del.icio.us](http://del.icio.us/post?&url=http://eyudkowsky.wpengine.com/rational/bayes/&title=Bayes%27%20Theorem)\n* [Stumble](http://www.stumbleupon.com/submit?url=http://eyudkowsky.wpengine.com/rational/bayes/&title=Bayes%27%20Theorem)\n* [Reddit](http://reddit.com/submit?url=http://eyudkowsky.wpengine.com/rational/bayes/&title=Bayes%27%20Theorem)\n\n\n### Further Reading:\n\n\nIf you liked *An Intuitive Explanation of Bayesian Reasoning* , you may also wish to read [A Technical Explanation of Technical Explanation](https://eyudkowsky.wpengine.com/rational/technical) by the same author, which goes into greater detail on the application of Bayescraft to human rationality and the philosophy of science. You may also enjoy the [Twelve Virtues of Rationality](https://eyudkowsky.wpengine.com/rational/virtues/) and [The Simple Truth](https://eyudkowsky.wpengine.com/rational/the-simple-truth) .\n\n\nOther authors:\n\n\nE. T. Jaynes:  [Probability Theory With Applications in Science and Engineering](http://bayes.wustl.edu/etj/science.pdf.html) (full text online).  Theory and applications for Bayes’ Theorem and Bayesian reasoning. See also Jaynes’s magnum opus, [Probability Theory: The Logic of Science](http://bayes.wustl.edu/etj/prob/book.pdf) .\n\n\nD. Kahneman, P. Slovic and A. 
Tversky, eds, [Judgment under uncertainty:  Heuristics and biases](http://www.amazon.com/exec/obidos/tg/detail/-/0521284147/singinst)*.*   If it seems to you like human thinking often isn’t Bayesian… you’re not wrong.  This terrifying volume catalogues some of the *blatant searing hideous gaping errors* that pop up in human cognition. See also [this forthcoming book chapter](https://eyudkowsky.wpengine.com/singularity/cognitive-biases) for a summary of some better-known biases. \n \nBellhouse, D.R.:  [The Reverend Thomas Bayes FRS: a Biography to Celebrate the Tercentenary of his Birth](http://www.york.ac.uk/depts/maths/histstat/bayesbiog.pdf).  A more “traditional” account of Bayes’s life.\n\n\n[Google Directory for Bayesian analysis](http://directory.google.com/Top/Science/Math/Statistics/Bayesian_Analysis/)(courtesy of the Open Directory Project).\n\n\n\n\n---\n\n\n### About This Document:\n\n\n*An Intuitive Explanation of Bayesian Reasoning*is ©2003 by [Eliezer S. Yudkowsky](mailto:yudkowsky@gmail.com). \n*BayesApplet* is ©2003 by Christian Rovner.  (Email address:  Append “tutopia.com” to “cro1@”).\n\n\nLast updated: 2006.06.04\n\n\nYudkowsky’s “Intuitive Explanation of Bayesian Reasoning” and Rovner’s “BayesApplet” may both be freely used by any nonprofit organization or educational institution.  No royalties or per-page charges are necessary to reproduce this document as course materials, either in printed form or online.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/rational/bayes/](https://eyudkowsky.wpengine.com/rational/bayes/) .\n\n\nThanks to Eric Mitchell, Chris Rovner, Vlad Tarko, Gordon Worley, and Gregg Young for catching errors in the text.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) . If you’ve found Yudkowsky’s pages on rationality useful, please consider [donating](https://intelligence.org/donate/) to the Machine Intelligence Research Institute.\n\n\n\n\n---\n\n\n### Bibliography:\n\n\nBayes, Thomas (1763):  “An essay towards solving a problem in the doctrine of chances.”  *Philosophical Transactions of the Royal Society.***53** : 370-418.\n\n\nCasscells, W., Schoenberger, A., and Grayboys, T. (1978):  “Interpretation by physicians of clinical laboratory results.” *N Engl J Med.***299** :999-1001.\n\n\nDehaene, Stanislas (1997):  *The Number Sense : How the Mind Creates Mathematics.*  Oxford University Press.\n\n\nEddy, David M. (1982):  “Probabilistic reasoning in clinical medicine:  Problems and opportunities.”  In D. Kahneman, P. Slovic, and A. Tversky, eds, *Judgement under uncertainty: Heuristics and biases*. Cambridge University Press, Cambridge, UK.\n\n\nEdwards, Ward (1982):  “Conservatism in human information processing.”  In D. Kahneman, P. Slovic, and A. Tversky, eds, *Judgement under uncertainty: Heuristics and biases*. Cambridge University Press, Cambridge, UK.\n\n\nGigerenzer, Gerd and Hoffrage, Ulrich (1995):  “How to improve Bayesian reasoning without instruction: Frequency formats.”  *Psychological Review.***102** : 684-704.\n\n\nJaynes, E. T. (1996):  *Probability Theory With Applications in Science and Engineering.*  Posthumous manuscript, placed online.  http://bayes.wustl.edu/etj/science.pdf.html\n\n\n\n\n---", "date_published": "2020-09-04T01:30:06Z", "authors": ["Eliezer S. 
Yudkowsky"], "summaries": []} +{"id": "7f24ba1849672f447e60626a76e930cc", "title": "The Simple Truth", "url": "https://www.yudkowsky.net/rational/the-simple-truth", "source": "yudkowsky_blog", "source_type": "blog", "text": "> \n> “I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.”\n> \n> — Danielle Egan (journalist)\n\n\n*Author’s Foreword:*\n\n\nThis essay is meant to restore a naive view of truth.\n\n\nSomeone says to you: “My miracle snake oil can rid you of lung cancer in just three weeks.” You reply: “Didn’t a clinical study show this claim to be untrue?” The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?”\n\n\nMany people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of ‘truth’. There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall.\n\n\nOften I have seen – especially on Internet mailing lists – that amidst other conversation, someone says “X is true”, and then an argument breaks out over the use of the word ‘true’. This essay is *not* meant as an encyclopedic reference for that argument. Rather, I hope the arguers will read this essay, and then go back to whatever they were discussing before someone questioned the nature of truth.\n\n\nIn this essay I pose questions. If you see what seems like a really obvious answer, it’s probably the answer I intend. The obvious choice isn’t *always* the best choice, but sometimes, by golly, it *is* . I don’t stop looking as soon I find an obvious answer, but if I go on looking, and the obvious-seeming answer *still* seems obvious, I don’t feel guilty about keeping it. Oh, sure, everyone *thinks* two plus two is four, everyone *says* two plus two is four, and in the mere mundane drudgery of everyday life everyone *behaves* as if two plus two is four, but what does two plus two *really, ultimately* equal? As near as I can figure, four. It’s still four even if I intone the question in a solemn, portentous tone of voice. Too simple, you say? Maybe, on this occasion, life doesn’t *need* to be complicated. Wouldn’t that be refreshing?\n\n\nIf you are one of those fortunate folk to whom the question seems trivial at the outset, I hope it still seems trivial at the finish. If you find yourself stumped by deep and meaningful questions, remember that if you know exactly how a system works, and could build one yourself out of buckets and pebbles, it should not be a mystery to you.\n\n\nIf confusion threatens when you interpret a metaphor as a metaphor, try taking everything *completely literally.*\n\n\n\n\n---\n\n\nImagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep. My sheep sleep in an enclosure, a fold; and the enclosure is high enough to guard my sheep from wolves that roam by night. Each day I must release my sheep from the fold to pasture and graze; each night I must find my sheep and return them to the fold. If a sheep is left outside, I will find its body the next morning, killed and half-eaten by wolves. But it is so discouraging, to scour the fields for hours, looking for one last sheep, when I know that probably all the sheep are in the fold. 
Sometimes I give up early, and usually I get away with it; but around a tenth of the time there is a dead sheep the next morning.\n\n\nIf only there were some way to divine whether sheep are still grazing, without the inconvenience of looking! I try several methods: I toss the divination sticks of my tribe; I train my psychic powers to locate sheep through clairvoyance; I search carefully for reasons to believe all the sheep are in the fold. It makes no difference. Around a tenth of the times I turn in early, I find a dead sheep the next morning. Perhaps I realize that my methods aren’t working, and perhaps I carefully excuse each failure; but my dilemma is still the same. I can spend an hour searching every possible nook and cranny, when most of the time there are no remaining sheep; or I can go to sleep early and lose, on the average, one-tenth of a sheep.\n\n\nLate one afternoon I feel especially tired. I toss the divination sticks and the divination sticks say that all the sheep have returned. I visualize each nook and cranny, and I don’t imagine scrying any sheep. I’m still not confident enough, so I look inside the fold and it seems like there are a lot of sheep, and I review my earlier efforts and decide that I was especially diligent. This dissipates my anxiety, and I go to sleep. The next morning I discover *two* dead sheep. Something inside me snaps, and I begin thinking creatively.\n\n\nThat day, loud hammering noises come from the gate of the sheepfold’s enclosure.\n\n\nThe next morning, I open the gate of the enclosure only a little way, and as each sheep passes out of the enclosure, I drop a pebble into a bucket nailed up next to the door. In the afternoon, as each returning sheep passes by, I take one pebble out of the bucket. When there are no pebbles left in the bucket, I can stop searching and turn in for the night. It is a *brilliant* notion. It will revolutionize shepherding.\n\n\nThat was the theory. In practice, it took considerable refinement before the method worked reliably. Several times I searched for hours and didn’t find any sheep, and the next morning there were no stragglers. On each of these occasions it required deep thought to figure out where my bucket system had failed. On returning from one fruitless search, I thought back and realized that the bucket already contained pebbles when I started; this, it turned out, was a bad idea. Another time I randomly tossed pebbles into the bucket, to amuse myself, between the morning and the afternoon; this too was a bad idea, as I realized after searching for a few hours. But I practiced my pebblecraft, and became a reasonably proficient pebblecrafter.\n\n\nOne afternoon, a man richly attired in white robes, leafy laurels, sandals, and business suit trudges in along the sandy trail that leads to my pastures.\n\n\n“Can I help you?” I inquire.\n\n\nThe man takes a badge from his coat and flips it open, proving beyond the shadow of a doubt that he is Markos Sophisticus Maximus, a delegate from the Senate of Rum. (One might wonder whether another could steal the badge; but so great is the power of these badges that if any other were to use them, they would in that instant be *transformed* into Markos.)\n\n\n“Call me Mark,” he says. “I’m here to confiscate the magic pebbles, in the name of the Senate; artifacts of such great power must not fall into ignorant hands.”\n\n\n“That bleedin’ apprentice,” I grouse under my breath, “he’s been yakkin’ to the villagers again.” Then I look at Mark’s stern face, and sigh. 
“They aren’t magic pebbles,” I say aloud. “Just ordinary stones I picked up from the ground.”\n\n\nA flicker of confusion crosses Mark’s face, then he brightens again. “I’m here for the magic bucket!” he declares.\n\n\n“It’s not a magic bucket,” I say wearily. “I used to keep dirty socks in it.”\n\n\nMark’s face is puzzled. “Then where is the magic?” he demands.\n\n\nAn interesting question. “It’s hard to explain,” I say.\n\n\nMy current apprentice, Autrey, attracted by the commotion, wanders over and volunteers his explanation: “It’s the level of pebbles in the bucket,” Autrey says. “There’s a magic level of pebbles, and you have to get the level just right, or it doesn’t work. If you throw in more pebbles, or take some out, the bucket won’t be at the magic level anymore. Right now, the magic level is,” Autrey peers into the bucket, “about one-third full.”\n\n\n“I see!” Mark says excitedly. From his back pocket Mark takes out his own bucket, and a heap of pebbles. Then he grabs a few handfuls of pebbles, and stuffs them into the bucket. Then Mark looks into the bucket, noting how many pebbles are there. “There we go,” Mark says, “the magic level of this bucket is half full. Like that?”\n\n\n“No!” Autrey says sharply. “Half full is not the magic level. The magic level is about one-third. Half full is definitely unmagic. Furthermore, you’re using the wrong bucket.”\n\n\nMark turns to me, puzzled. “I thought you said the bucket wasn’t magic?”\n\n\n“It’s not,” I say. A sheep passes out through the gate, and I toss another pebble into the bucket. “Besides, I’m watching the sheep. Talk to Autrey.”\n\n\nMark dubiously eyes the pebble I tossed in, but decides to temporarily shelve the question. Mark turns to Autrey and draws himself up haughtily. “It’s a free country,” Mark says, “under the benevolent dictatorship of the Senate, of course. I can drop whichever pebbles I like into whatever bucket I like.”\n\n\nAutrey considers this. “No you can’t,” he says finally, “there won’t be any magic.”\n\n\n“Look,” says Mark patiently, “I watched you carefully. You looked in your bucket, checked the level of pebbles, and called that the magic level. I did exactly the same thing.”\n\n\n“That’s not how it works,” says Autrey.\n\n\n“Oh, I see,” says Mark, “It’s not the level of pebbles in *my* bucket that’s magic, it’s the level of pebbles in *your* bucket. Is that what you claim? What makes your bucket so much better than mine, huh?”\n\n\n“Well,” says Autrey, “if we were to empty your bucket, and then pour all the pebbles from my bucket into your bucket, then your bucket would have the magic level. There’s also a procedure we can use to check if your bucket has the magic level, if we know that my bucket has the magic level; we call that a bucket compare operation.”\n\n\nAnother sheep passes, and I toss in another pebble.\n\n\n“He just tossed in another pebble!” Mark says. “And I suppose you claim the new level is also magic? I could toss pebbles into your bucket until the level was the same as mine, and then our buckets would agree. You’re just comparing my bucket to your bucket to determine whether *you* think the level is ‘magic’ or not. Well, I think *your* bucket isn’t magic, because it doesn’t have the same level of pebbles as mine. So there!”\n\n\n“Wait,” says Autrey, “you don’t understand -”\n\n\n“By ‘magic level’, you mean simply the level of pebbles in your own bucket. And when I say ‘magic level’, I mean the level of pebbles in my bucket. 
Thus you look at my bucket and say it ’isn’t magic’, but the word ‘magic’ means different things to different people. You need to specify *whose* magic it is. You should say that my bucket doesn’t have ’Autrey’s magic level’, and I say that your bucket doesn’t have ’Mark’s magic level’. That way, the apparent contradiction goes away.”\n\n\n“But -” says Autrey helplessly.\n\n\n“Different people can have different buckets with different levels of pebbles, which proves this business about ‘magic’ is completely arbitrary and subjective.”\n\n\n“Mark,” I say, “did anyone tell you what these pebbles *do?* ”\n\n\n“ *Do?* ” says Mark. “I thought they were just magic.”\n\n\n“If the pebbles didn’t do anything,” says Autrey, “our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work.”\n\n\n“What’s your auditor’s name?”\n\n\n“Darwin,” says Autrey.\n\n\n“Hm,” says Mark. “Charles does have a reputation as a strict auditor. So do the pebbles bless the flocks, and cause the increase of sheep?”\n\n\n“No,” I say. “The virtue of the pebbles is this; if we look into the bucket and see the bucket is empty of pebbles, we know the pastures are likewise empty of sheep. If we do not use the bucket, we must search and search until dark, lest one last sheep remain. Or if we stop our work early, then sometimes the next morning we find a dead sheep, for the wolves savage any sheep left outside. If we look in the bucket, we know when all the sheep are home, and we can retire without fear.”\n\n\nMark considers this. “That sounds rather implausible,” he says eventually. “Did you consider using divination sticks? Divination sticks are infallible, or at least, anyone who says they are fallible is burned at the stake. This is an extremely painful way to die; it follows that divination sticks are infallible.”\n\n\n“You’re welcome to use divination sticks if you like,” I say.\n\n\n“Oh, good heavens, of course not,” says Mark. “They work infallibly, with absolute perfection on every occasion, as befits such blessed instruments; but what if there were a dead sheep the next morning? I only use the divination sticks when there is no possibility of their being proven wrong. Otherwise I might be burned alive. So how does your magic bucket work?”\n\n\nHow does the bucket work…? I’d better start with the simplest possible case. “Well,” I say, “suppose the pastures are empty, and the bucket isn’t empty. Then we’ll waste hours looking for a sheep that isn’t there. And if there are sheep in the pastures, but the bucket is empty, then Autrey and I will turn in too early, and we’ll find dead sheep the next morning. So an empty bucket is magical if and only if the pastures are empty -”\n\n\n“Hold on,” says Autrey. “That sounds like a vacuous tautology to me. Aren’t an empty bucket and empty pastures obviously the same thing?”\n\n\n“It’s not vacuous,” I say. “Here’s an analogy: The logician Alfred Tarski once said that the assertion ‘Snow is white’ is true if and only if snow is white. If you can understand that, you should be able to see why an empty bucket is magical if and only if the pastures are empty of sheep.”\n\n\n“Hold on,” says Mark. “These are *buckets* . They don’t have anything to do with *sheep* . Buckets and sheep are obviously completely different. There’s no way the sheep can ever interact with the bucket.”\n\n\n“Then where do *you* think the magic comes from?” inquires Autrey.\n\n\nMark considers. 
“You said you could compare two buckets to check if they had the same level… I can see how buckets can interact with buckets. Maybe when you get a large collection of buckets, and they all have the same level, *that’s* what generates the magic. I’ll call that the coherentist theory of magic buckets.”\n\n\n“Interesting,” says Autrey. “I know that my master is working on a system with multiple buckets – he says it might work better because of ‘redundancy’ and ‘error correction’. That sounds like coherentism to me.”\n\n\n“They’re not quite the same -” I start to say.\n\n\n“Let’s test the coherentism theory of magic,” says Autrey. “I can see you’ve got five more buckets in your back pocket. I’ll hand you the bucket we’re using, and then you can fill up your other buckets to the same level -”\n\n\nMark recoils in horror. “Stop! These buckets have been passed down in my family for generations, and they’ve always had the same level! If I accept your bucket, my bucket collection will become less coherent, and the magic will go away!”\n\n\n“But your *current* buckets don’t have anything to do with the sheep!” protests Autrey.\n\n\nMark looks exasperated. “Look, I’ve explained before, there’s obviously no way that sheep can interact with buckets. Buckets can only interact with other buckets.”\n\n\n“I toss in a pebble whenever a sheep passes,” I point out.\n\n\n“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”\n\n\n“It’s an interaction between the sheep and the pebbles,” I reply.\n\n\n“No, it’s an interaction between the pebbles and *you* ,” Mark says. “The magic doesn’t come from the sheep, it comes from *you* . Mere sheep are obviously nonmagical. The magic has to come from *somewhere* , on the way to the bucket.”\n\n\nI point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”\n\n\nMark furrows his brow. “I don’t quite follow you… is the *cloth* magical?”\n\n\nI shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket. *Afterward* you can compare the bucket to other buckets, and so on.”\n\n\n“I still don’t get it,” Mark says. “You can’t fit a sheep into a bucket. Only pebbles go in buckets, and it’s obvious that pebbles only interact with other pebbles.”\n\n\n“The sheep interact with things that interact with pebbles…” I search for an analogy. “Suppose you look down at your shoelaces. A photon leaves the Sun; then travels down through Earth’s atmosphere; then bounces off your shoelaces; then passes through the pupil of your eye; then strikes the retina; then is absorbed by a rod or a cone. The photon’s energy makes the attached neuron fire, which causes other neurons to fire. A neural activation pattern in your visual cortex can interact with your beliefs about your shoelaces, since beliefs about shoelaces also exist in neural substrate. 
If you can understand that, you should be able to see how a passing sheep causes a pebble to enter the bucket.”\n\n\n“At exactly *which* point in the process does the pebble become magic?” says Mark.\n\n\n“It… um…” Now *I’m* starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the *point* of the system is to keep track of sheep.”\n\n\nMark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”\n\n\n“Ha!” Autrey says, scorn rich in his voice. “Mere wishful thinking! Not all pebbles are created equal. The pebbles in *your* bucket are *not* magical. They’re only lumps of stone!”\n\n\nMark’s face turns stern. “Now,” he cries, “now you see the danger of the road you walk! Once you say that some people’s pebbles are magical and some are not, your pride will consume you! You will think yourself superior to all others, and so fall! Many throughout history have tortured and murdered because they thought their own pebbles supreme!” A tinge of condescension enters Mark’s voice. “Worshipping a level of pebbles as ‘magical’ implies that there’s an absolute pebble level in a Supreme Bucket. Nobody believes in a Supreme Bucket these days.”\n\n\n“One,” I say. “Sheep are not absolute pebbles. Two, I don’t think my bucket actually contains the sheep. Three, I don’t worship my bucket level as perfect – I adjust it sometimes – and I do that *because* I care about the sheep.”\n\n\n“Besides,” says Autrey, “someone who believes that possessing absolute pebbles *would* license torture and murder, is making a mistake that has nothing to do with buckets. You’re solving the wrong problem.”\n\n\nMark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.”\n\n\n“Um… yes?” says Autrey.\n\n\n“It doesn’t bother you that *Joseph Stalin* believed that snow is white?”\n\n\n“Um… no?” says Autrey.\n\n\nMark gazes incredulously at Autrey, and finally shrugs. “Let’s suppose, purely for the sake of argument, that your pebbles are magical and mine aren’t. Can you tell me what the difference is?”\n\n\n“My pebbles *represent* the sheep!” Autrey says triumphantly. “ *Your* pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”\n\n\n“Ah!” Mark says. “Special causal powers, instead of magic.”\n\n\n“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.”\n\n\n“What kind of special powers does the bucket have?” asks Mark.\n\n\n“Hm,” says Autrey. “Maybe this bucket is imbued with an *about-ness* relation to the pastures. 
That would explain why it worked – when the bucket is empty, it *means* the pastures are empty.”\n\n\n“Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?”\n\n\n“It’s an *ordinary bucket* ,” I say. “I used to climb trees with it… I don’t think this question *needs* to be difficult.”\n\n\n“I’m talking to Autrey,” says Mark.\n\n\n“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains.\n\n\nAutrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.\n\n\n“You have to throw in a pebble *every* time a sheep leaves through the gate?” says Mark. “Take out a pebble *every* time a sheep returns?”\n\n\nAutrey nods. “Yeah.”\n\n\n“That must be really hard,” Mark says sympathetically.\n\n\nAutrey brightens, soaking up Mark’s sympathy like rain. “Exactly!” says Autrey. “It’s *extremely* hard on your emotions. When the bucket has held its level for a while, you… tend to get attached to that level.”\n\n\nA sheep passes then, leaving through the gate. Autrey sees; he stoops, picks up a pebble, holds it aloft in the air. “Behold!” Autrey proclaims. “A sheep has passed! I must now toss a pebble into this bucket, my dear bucket, and destroy that fond level which has held for so long – ” Another sheep passes. Autrey, caught up in his drama, misses it; so I plunk a pebble into the bucket. Autrey is still speaking: ” – for that is the supreme test of the shepherd, to throw in the pebble, be it ever so agonizing, be the old level ever so precious. Indeed, only the best of shepherds can meet a requirement so stern -“\n\n\n“Autrey,” I say, “if you want to be a great shepherd someday, learn to shut up and throw in the pebble. No fuss. No drama. Just do it.”\n\n\n“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”\n\n\nAutrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.”\n\n\n“Can I look at a pebble?” says Mark.\n\n\n“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.\n\n\nAutrey looks at me, puzzled. “Didn’t you just mess it up?”\n\n\nI shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”\n\n\n“But -” Autrey says.\n\n\n“I taught you everything *you* know, but I haven’t taught you everything *I* know,” I say.\n\n\nMark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”\n\n\n“A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”\n\n\n“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.\n\n\nAutrey laughs. 
“Now you’re just being gratuitously evil.”\n\n\nI nod, for this is indeed the case.\n\n\n“Is that really going to work, though?” says Autrey.\n\n\nI nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the *élan vital* that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.\n\n\nMark is looking at his hand, a bit unnerved. “So… the pebble has intentionality again, now?”\n\n\n“Yep,” I say. “Don’t add any more pebbles to your hand, or throw away the one you have, or you’ll break the ritual.”\n\n\nMark nods solemnly. Then he resumes inspecting the pebble. “I understand now how your flocks grew so great,” Mark says. “With the power of this bucket, you could keep on tossing pebbles, and the sheep would keep returning from the fields. You could start with just a few sheep, let them leave, then fill the bucket to the brim before they returned. And if tending so many sheep grew tedious, you could let them all leave, then empty almost all the pebbles from the bucket, so that only a few returned… increasing the flocks again when it came time for shearing… dear heavens, man! Do you realize the sheer *power* of this ritual you’ve discovered? I can only imagine the implications; humankind might leap ahead a decade – no, a century!”\n\n\n“It doesn’t work that way,” I say. “If you add a pebble when a sheep hasn’t left, or remove a pebble when a sheep hasn’t come in, that breaks the ritual. The power does not linger in the pebbles, but vanishes all at once, like a soap bubble popping.”\n\n\nMark’s face is terribly disappointed. “Are you sure?”\n\n\nI nod. “I tried that and it didn’t work.”\n\n\nMark sighs heavily. “And this… *math* … seemed so powerful and useful until then… Oh, well. So much for human progress.”\n\n\n“Mark, it was a *brilliant* idea,” Autrey says encouragingly. “The notion didn’t occur to me, and yet it’s so obvious… it would save an *enormous* amount of effort… there *must* be a way to salvage your plan! We could try different buckets, looking for one that would keep the magical pow- the intentionality in the pebbles, even without the ritual. Or try other pebbles. Maybe our pebbles just have the wrong properties to have *inherent* intentionality. What if we tried it using stones carved to resemble tiny sheep? Or just write ‘sheep’ on the pebbles; that might be enough.”\n\n\n“Not going to work,” I predict dryly.\n\n\nAutrey continues. “Maybe we need organic pebbles, instead of silicon pebbles… or maybe we need to use expensive gemstones. The price of gemstones doubles every eighteen months, so you could buy a handful of cheap gemstones now, and wait, and in twenty years they’d be really expensive.”\n\n\n“You tried adding pebbles to create more sheep, and it didn’t work?” Mark asks me. “What exactly did you do?”\n\n\n“I took a handful of dollar bills. Then I hid the dollar bills under a fold of my blanket, one by one; each time I hid another bill, I took another paperclip from a box, making a small heap. I was careful not to keep track in my head, so that all I knew was that there were ‘many’ dollar bills, and ‘many’ paperclips. Then when all the bills were hidden under my blanket, I added a single additional paperclip to the heap, the equivalent of tossing an extra pebble into the bucket. Then I started taking dollar bills from under the fold, and putting the paperclips back into the box. 
When I finished, a single paperclip was left over.”\n\n\n“What does that result mean?” asks Autrey.\n\n\n“It means the trick didn’t work. Once I broke ritual by that single misstep, the power did not linger, but vanished instantly; the heap of paperclips and the pile of dollar bills no longer went empty at the same time.”\n\n\n“You *actually* tried this?” asks Mark.\n\n\n“Yes,” I say, “I actually performed the experiment, to verify that the outcome matched my theoretical prediction. I have a sentimental fondness for the scientific method, even when it seems absurd. Besides, what if I’d been wrong?”\n\n\n“If it *had* worked,” says Mark, “you would have been guilty of counterfeiting! Imagine if everyone did that; the economy would collapse! Everyone would have billions of dollars of currency, yet there would be nothing for money to buy!”\n\n\n“Not at all,” I reply. “By that same logic whereby adding another paperclip to the heap creates another dollar bill, creating another dollar bill would create an additional dollar’s worth of goods and services.”\n\n\nMark shakes his head. “Counterfeiting is still a crime… You should not have tried.”\n\n\n“I was *reasonably* confident I would fail.”\n\n\n“Aha!” says Mark. “You *expected* to fail! You didn’t *believe* you could do it!”\n\n\n“Indeed,” I admit. “You have guessed my expectations with stunning accuracy.”\n\n\n“Well, that’s the problem,” Mark says briskly. “Magic is fueled by belief and willpower. If you don’t believe you can do it, you can’t. You need to change your belief about the experimental result; that will change the result itself.”\n\n\n“Funny,” I say nostalgically, “that’s what Autrey said when I told him about the pebble-and-bucket method. That it was too ridiculous for him to believe, so it wouldn’t work for him.”\n\n\n“How did you persuade him?” inquires Mark.\n\n\n“I told him to shut up and follow instructions,” I say, “and when the method worked, Autrey started believing in it.”\n\n\nMark frowns, puzzled. “That makes no sense. It doesn’t resolve the essential chicken-and-egg dilemma.”\n\n\n“Sure it does. The bucket method works whether or not you believe in it.”\n\n\n“That’s *absurd!* ” sputters Mark. “I don’t believe in magic that works whether or not you believe in it!”\n\n\n“I said that too,” chimes in Autrey. “Apparently I was wrong.”\n\n\nMark screws up his face in concentration. “But… if you didn’t believe in magic that works whether or not you believe in it, then why did the bucket method work when you didn’t believe in it? Did you believe in magic that works whether or not you believe in it whether or not you believe in magic that works whether or not you believe in it?”\n\n\n“I don’t… *think* so…” says Autrey doubtfully.\n\n\n“Then if you didn’t believe in magic that works whether or not you… hold on a second, I need to work this out on paper and pencil -” Mark scribbles frantically, looks skeptically at the result, turns the piece of paper upside down, then gives up. “Never mind,” says Mark. “Magic is difficult enough for me to comprehend; metamagic is out of my depth.”\n\n\n“Mark, I don’t think you understand the art of bucketcraft,” I say. “It’s not about using pebbles to control sheep. It’s about making sheep control pebbles. In this art, it is not necessary to begin by believing the art will work. Rather, first the art works, then one comes to believe that it works.”\n\n\n“Or so you believe,” says Mark.\n\n\n“So I believe,” I reply, “ *because* it happens to be a fact. 
The correspondence between reality and my beliefs comes from reality controlling my beliefs, not the other way around.”\n\n\nAnother sheep passes, causing me to toss in another pebble.\n\n\n“Ah! Now we come to the root of the problem,” says Mark. “What’s this so-called ‘reality’ business? I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.”\n\n\nI pause. “Well…” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”\n\n\nMark snorts. “I don’t even know why I bother listening to this obvious nonsense. Whatever you say about this so-called ‘reality’, it is merely another belief. Even your belief that reality precedes your beliefs is a belief. It follows, as a logical inevitability, that reality does not exist; only beliefs exist.”\n\n\n“Hold on,” says Autrey, “could you repeat that last part? You lost me with that sharp swerve there in the middle.”\n\n\n“No matter what you say about reality, it’s just another belief,” explains Mark. “It follows with crushing necessity that there is no reality, only beliefs.”\n\n\n“I see,” I say. “The same way that no matter what you eat, you need to eat it with your mouth. It follows that there is no food, only mouths.”\n\n\n“Precisely,” says Mark. “Everything that you eat has to be in your mouth. How can there be food that exists outside your mouth? The thought is nonsense, proving that ‘food’ is an incoherent notion. That’s why we’re all starving to death; there’s no food.”\n\n\nAutrey looks down at his stomach. “But I’m *not* starving to death.”\n\n\n“ *Aha!* ” shouts Mark triumphantly. “And how did you utter that very objection? With your *mouth* , my friend! With your *mouth* ! What better demonstration could you ask that there is no food?”\n\n\n“ *What’s this about starvation?* ” demands a harsh, rasping voice from directly behind us. Autrey and I stay calm, having gone through this before. Mark leaps a foot in the air, startled almost out of his wits.\n\n\nInspector Darwin smiles tightly, pleased at achieving surprise, and makes a small tick on his clipboard.\n\n\n“Just a metaphor!” Mark says quickly. “You don’t need to take away my mouth, or anything like that -”\n\n\n“ *Why* do you need a *mouth* if there is no *food* ?” demands Darwin angrily. “ *Never mind.* I have no *time* for this *foolishness* . I am here to inspect the *sheep.* ”\n\n\n“Flocks thriving, sir,” I say. “No dead sheep since January.”\n\n\n“ *Excellent.* I award you 0.12 units of *fitness* . Now what is this *person* doing here? 
Is he a necessary part of the *operations?* ”\n\n\n“As far as I can see, he would be of more use to the human species if hung off a hot-air balloon as ballast,” I say.\n\n\n“Ouch,” says Autrey mildly.\n\n\n“I do not *care* about the *human species* . Let him speak for *himself* .”\n\n\nMark draws himself up haughtily. “This mere *shepherd* ,” he says, gesturing at me, “has claimed that there is such a thing as reality. This offends me, for I know with deep and abiding certainty that there is no truth. The concept of ‘truth’ is merely a stratagem for people to impose their own beliefs on others. Every culture has a different ‘truth’, and no culture’s ‘truth’ is superior to any other. This that I have said holds at all times in all places, and I insist that you agree.”\n\n\n“Hold on a second,” says Autrey. “If nothing is true, why should I believe you when you say that nothing is true?”\n\n\n“I didn’t say that nothing is true -” says Mark.\n\n\n“Yes, you did,” interjects Autrey, “I heard you.”\n\n\n“- I said that ‘truth’ is an excuse used by some cultures to enforce their beliefs on others. So when you say something is ‘true’, you mean only that it would be advantageous to your own social group to have it believed.”\n\n\n“And this that you have said,” I say, “is it true?”\n\n\n“Absolutely, positively true!” says Mark emphatically. “People create their own realities.”\n\n\n“Hold on,” says Autrey, sounding puzzled again, “saying that people create their own realities is, logically, a completely separate issue from saying that there is no truth, a state of affairs I cannot even imagine coherently, perhaps because you still have not explained how exactly it is supposed to work -”\n\n\n“There you go again,” says Mark exasperatedly, “trying to apply your Western concepts of logic, rationality, reason, coherence, and self-consistency.”\n\n\n“Great,” mutters Autrey, “now I need to add a *third* subject heading, to keep track of this entirely separate and distinct claim -”\n\n\n“It’s not separate,” says Mark. “Look, you’re taking the wrong attitude by treating my statements as hypotheses, and carefully deriving their consequences. You need to think of them as fully general excuses, which I apply when anyone says something I don’t like. It’s not so much a model of how the universe works, as a “Get Out of Jail Free” card. The *key* is to apply the excuse *selectively* . When I say that there is no such thing as truth, that applies only to *your* claim that the magic bucket works whether or not I believe in it. It does *not* apply to *my* claim that there is no such thing as truth.”\n\n\n“Um… why not?” inquires Autrey.\n\n\nMark heaves a patient sigh. “Autrey, do you think you’re the first person to think of that question? To ask us how our own beliefs can be meaningful if all beliefs are meaningless? That’s the same thing many students say when they encounter this philosophy, which, I’ll have you know, has many adherents and an extensive literature.”\n\n\n“So what’s the answer?” says Autrey.\n\n\n“We named it the ‘reflexivity problem’,” explains Mark.\n\n\n“But what’s the *answer* ?” persists Autrey.\n\n\nMark smiles condescendingly. “Believe me, Autrey, you’re not the first person to think of such a simple question. 
There’s no point in presenting it to us as a triumphant refutation.”\n\n\n“But what’s the *actual answer?* ”\n\n\n“Now, I’d like to move on to the issue of how logic kills cute baby seals -”\n\n\n“ *You* are wasting *time* ,” snaps Inspector Darwin.\n\n\n“Not to mention, losing track of sheep,” I say, tossing in another pebble.\n\n\nInspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. *You* say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And *you* fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs *can’t* alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.”\n\n\nWe all pause, considering this.\n\n\n“It *sounds* reasonable…” Mark says finally.\n\n\n“There’s a cliff right there,” observes Inspector Darwin.\n\n\nAutrey is wearing a look of intense concentration. Finally he shouts: “Wait! If that were true, we would all have long since departed into our own private universes, in which case the other people here are only figments of your imagination – there’s no point in trying to prove anything to us -”\n\n\nA long dwindling scream comes from the nearby cliff, followed by a dull and lonely splat. Inspector Darwin flips his clipboard to the page that shows the current gene pool and pencils in a slightly lower frequency for Mark’s alleles.\n\n\nAutrey looks slightly sick. “Was that really necessary?”\n\n\n“ *Necessary?* ” says Inspector Darwin, sounding puzzled. “It just *happened* … I don’t quite understand your question.”\n\n\nAutrey and I turn back to our bucket. It’s time to bring in the sheep. You wouldn’t want to forget about that part. Otherwise what would be the point?\n\n\n\n\n---\n\n\nThis document is ©2008 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/rational/the-simple-truth/](https://eyudkowsky.wpengine.com/rational/the-simple-truth/) .\n\n\nIf you enjoyed this writing, let your journey continue with [An Intuitive Explanation of Bayesian Reasoning](https://eyudkowsky.wpengine.com/rational/bayes) . You may also enjoy [The Twelve Virtues of Rationality](https://eyudkowsky.wpengine.com/rational/virtues) and [A Technical Explanation of Technical Explanation](https://eyudkowsky.wpengine.com/rational/technical)", "date_published": "2020-09-04T01:20:07Z", "authors": ["Eliezer S. 
Yudkowsky"], "summaries": []} +{"id": "8dd0edee5ef310955623fdb3f506618c", "title": "Cognitive Biases Potentially Affecting Judgment of Global Risks", "url": "https://www.yudkowsky.net/rational/cognitive-biases", "source": "yudkowsky_blog", "source_type": "blog", "text": "Draft for [Global Catastrophic Risks, Oxford University Press, 2008](http://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1224111364&sr=8-1) . [Download as PDF](https://intelligence.org/files/CognitiveBiases.pdf) .\n\n\n[CognitiveBiases-1](https://eystaging.wpengine.com/wp-content/uploads/2020/09/CognitiveBiases-1.pdf)\n\n\n\n---\n\n\nThis document is ©2007 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/rational/cognitive-biases/](https://eyudkowsky.wpengine.com/rational/cognitive-biases/) .", "date_published": "2020-09-04T01:16:03Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []} +{"id": "5dc85400affd13f24ba884d119a8daec", "title": "Overcoming Bias", "url": "https://www.yudkowsky.net/rational/overcoming-bias", "source": "yudkowsky_blog", "source_type": "blog", "text": "From August 2007 through May 2009, I blogged daily on the topic of human rationality at the econblog [Overcoming Bias](http://www.overcomingbias.com/) by Robin Hanson, getting around a quarter-million monthly pageviews. This then forked off the community blog [Less Wrong](http://lesswrong.com/) , and I moved my old posts there as well for seed content (with URL forwarding, so don’t worry if links are to overcomingbias.com).\n\n\nI suspected I could write faster by requiring myself to publish daily. The experiment was a smashing success.\n\n\nCurrently the *majority* of all my writing is on Less Wrong. 
To be notified when and if this material is compacted into e-books (or even physical books), [subscribe to this announcement list](http://eyudkowsky.wpengine.com/subscribe/) .\n\n\nThe material is heavily interdependent, and reading in chronological order may prove helpful:\n\n\n* [Andrew Hay’s autogenerated index of all Yudkowsky posts in chronological order.](http://www.cs.auckland.ac.nz/~andwhay/postlist.html)\n\n\nTo see how interdependent it is, try looking over this graph of the dependency structure:\n\n\n* [Andrew Hay’s graphical visualization of major dependencies between Yudkowsky posts.](http://www.cs.auckland.ac.nz/~andwhay/graphsfiles/dependencygraphs.html)\n\n\nTo read organized collections of posts, use the [Sequences](http://wiki.lesswrong.com/wiki/Sequences) on the Less Wrong wiki.\n\n\n\n\n---\n\n\nThis document is ©2008 by [Eliezer Yudkowsky](http://eyudkowsky.wpengine.com/) and free under the [Creative Commons Attribution-No Derivative Works 3.0 License](http://creativecommons.org/licenses/by-nd/3.0/) for copying and distribution, so long as the work is attributed and the text is unaltered.\n\n\nEliezer Yudkowsky’s work is supported by the [Machine Intelligence Research Institute](https://intelligence.org/) .\n\n\nIf you think the world could use some more rationality, consider blogging this page.\n\n\nPraise, condemnation, and feedback are [always welcome](https://eyudkowsky.wpengine.com/contact) . The web address of this page is [http://eyudkowsky.wpengine.com/rational/overcoming-bias/](https://eyudkowsky.wpengine.com/rational/overcoming-bias/) .", "date_published": "2020-09-04T01:09:16Z", "authors": ["Eliezer S. Yudkowsky"], "summaries": []}